Feb 23 15:37:19 localhost kernel: Linux version 4.18.0-372.43.1.el8_6.x86_64 (mockbuild@x86-vm-09.build.eng.bos.redhat.com) (gcc version 8.5.0 20210514 (Red Hat 8.5.0-10) (GCC)) #1 SMP Fri Jan 27 00:24:08 EST 2023
Feb 23 15:37:19 localhost kernel: Command line: BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-db1ffa3c10bccbc8d8864d9c4464ed0bc35694450a312436ffb67ccb09c801c0/vmlinuz-4.18.0-372.43.1.el8_6.x86_64 ignition.firstboot ostree=/ostree/boot.1/rhcos/db1ffa3c10bccbc8d8864d9c4464ed0bc35694450a312436ffb67ccb09c801c0/0 ignition.platform.id=aws console=tty0 console=ttyS0,115200n8
Feb 23 15:37:19 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 23 15:37:19 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 23 15:37:19 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 23 15:37:19 localhost kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Feb 23 15:37:19 localhost kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Feb 23 15:37:19 localhost kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Feb 23 15:37:19 localhost kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Feb 23 15:37:19 localhost kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 23 15:37:19 localhost kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Feb 23 15:37:19 localhost kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Feb 23 15:37:19 localhost kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Feb 23 15:37:19 localhost kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8
Feb 23 15:37:19 localhost kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format.
Feb 23 15:37:19 localhost kernel: signal: max sigframe size: 3632
Feb 23 15:37:19 localhost kernel: BIOS-provided physical RAM map:
Feb 23 15:37:19 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 23 15:37:19 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 23 15:37:19 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 23 15:37:19 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffe8fff] usable
Feb 23 15:37:19 localhost kernel: BIOS-e820: [mem 0x00000000bffe9000-0x00000000bfffffff] reserved
Feb 23 15:37:19 localhost kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Feb 23 15:37:19 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 23 15:37:19 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000042effffff] usable
Feb 23 15:37:19 localhost kernel: BIOS-e820: [mem 0x000000042f000000-0x000000043fffffff] reserved
Feb 23 15:37:19 localhost kernel: NX (Execute Disable) protection: active
Feb 23 15:37:19 localhost kernel: SMBIOS 2.7 present.
Feb 23 15:37:19 localhost kernel: DMI: Amazon EC2 m6i.xlarge/, BIOS 1.0 10/16/2017
Feb 23 15:37:19 localhost kernel: Hypervisor detected: KVM
Feb 23 15:37:19 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 23 15:37:19 localhost kernel: kvm-clock: cpu 0, msr 98801001, primary cpu clock
Feb 23 15:37:19 localhost kernel: kvm-clock: using sched offset of 8443630366 cycles
Feb 23 15:37:19 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 23 15:37:19 localhost kernel: tsc: Detected 2899.998 MHz processor
Feb 23 15:37:19 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 23 15:37:19 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 23 15:37:19 localhost kernel: last_pfn = 0x42f000 max_arch_pfn = 0x400000000
Feb 23 15:37:19 localhost kernel: MTRR default type: write-back
Feb 23 15:37:19 localhost kernel: MTRR fixed ranges enabled:
Feb 23 15:37:19 localhost kernel: 00000-9FFFF write-back
Feb 23 15:37:19 localhost kernel: A0000-BFFFF uncachable
Feb 23 15:37:19 localhost kernel: C0000-FFFFF write-protect
Feb 23 15:37:19 localhost kernel: MTRR variable ranges enabled:
Feb 23 15:37:19 localhost kernel: 0 base 0000C0000000 mask 3FFFC0000000 uncachable
Feb 23 15:37:19 localhost kernel: 1 disabled
Feb 23 15:37:19 localhost kernel: 2 disabled
Feb 23 15:37:19 localhost kernel: 3 disabled
Feb 23 15:37:19 localhost kernel: 4 disabled
Feb 23 15:37:19 localhost kernel: 5 disabled
Feb 23 15:37:19 localhost kernel: 6 disabled
Feb 23 15:37:19 localhost kernel: 7 disabled
Feb 23 15:37:19 localhost kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 23 15:37:19 localhost kernel: last_pfn = 0xbffe9 max_arch_pfn = 0x400000000
Feb 23 15:37:19 localhost kernel: Using GB pages for direct mapping
Feb 23 15:37:19 localhost kernel: BRK [0x98a01000, 0x98a01fff] PGTABLE
Feb 23 15:37:19 localhost kernel: BRK [0x98a02000, 0x98a02fff] PGTABLE
Feb 23 15:37:19 localhost kernel: BRK [0x98a03000, 0x98a03fff] PGTABLE
Feb 23 15:37:19 localhost kernel: BRK [0x98a04000, 0x98a04fff] PGTABLE
Feb 23 15:37:19 localhost kernel: BRK [0x98a05000, 0x98a05fff] PGTABLE
Feb 23 15:37:19 localhost kernel: RAMDISK: [mem 0x2d068000-0x3282bfff]
Feb 23 15:37:19 localhost kernel: ACPI: Early table checksum verification disabled
Feb 23 15:37:19 localhost kernel: ACPI: RSDP 0x00000000000F8F00 000014 (v00 AMAZON)
Feb 23 15:37:19 localhost kernel: ACPI: RSDT 0x00000000BFFEE180 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Feb 23 15:37:19 localhost kernel: ACPI: WAET 0x00000000BFFEFFC0 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Feb 23 15:37:19 localhost kernel: ACPI: SLIT 0x00000000BFFEFF40 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 23 15:37:19 localhost kernel: ACPI: APIC 0x00000000BFFEFE80 000086 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 23 15:37:19 localhost kernel: ACPI: SRAT 0x00000000BFFEFDC0 0000C0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Feb 23 15:37:19 localhost kernel: ACPI: FACP 0x00000000BFFEFC80 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 23 15:37:19 localhost kernel: ACPI: DSDT 0x00000000BFFEEAC0 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Feb 23 15:37:19 localhost kernel: ACPI: FACS 0x00000000000F8EC0 000040
Feb 23 15:37:19 localhost kernel: ACPI: HPET 0x00000000BFFEFC40 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Feb 23 15:37:19 localhost kernel: ACPI: SSDT 0x00000000BFFEE280 00081F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Feb 23 15:37:19 localhost kernel: ACPI: SSDT 0x00000000BFFEE200 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Feb 23 15:37:19 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffeffc0-0xbffeffe7]
Feb 23 15:37:19 localhost kernel: ACPI: Reserving SLIT table memory at [mem 0xbffeff40-0xbffeffab]
Feb 23 15:37:19 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffefe80-0xbffeff05]
Feb 23 15:37:19 localhost kernel: ACPI: Reserving SRAT table memory at [mem 0xbffefdc0-0xbffefe7f]
Feb 23 15:37:19 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffefc80-0xbffefd93]
Feb 23 15:37:19 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffeeac0-0xbffefc19]
Feb 23 15:37:19 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xf8ec0-0xf8eff]
Feb 23 15:37:19 localhost kernel: ACPI: Reserving HPET table memory at [mem 0xbffefc40-0xbffefc77]
Feb 23 15:37:19 localhost kernel: ACPI: Reserving SSDT table memory at [mem 0xbffee280-0xbffeea9e]
Feb 23 15:37:19 localhost kernel: ACPI: Reserving SSDT table memory at [mem 0xbffee200-0xbffee27e]
Feb 23 15:37:19 localhost kernel: ACPI: Local APIC address 0xfee00000
Feb 23 15:37:19 localhost kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 23 15:37:19 localhost kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 23 15:37:19 localhost kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Feb 23 15:37:19 localhost kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Feb 23 15:37:19 localhost kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0xbfffffff]
Feb 23 15:37:19 localhost kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x43fffffff]
Feb 23 15:37:19 localhost kernel: NUMA: Initialized distance table, cnt=1
Feb 23 15:37:19 localhost kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x42effffff] -> [mem 0x00000000-0x42effffff]
Feb 23 15:37:19 localhost kernel: NODE_DATA(0) allocated [mem 0x42efd2000-0x42effcfff]
Feb 23 15:37:19 localhost kernel: Zone ranges:
Feb 23 15:37:19 localhost kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 23 15:37:19 localhost kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 23 15:37:19 localhost kernel: Normal [mem 0x0000000100000000-0x000000042effffff]
Feb 23 15:37:19 localhost kernel: Device empty
Feb 23 15:37:19 localhost kernel: Movable zone start for each node
Feb 23 15:37:19 localhost kernel: Early memory node ranges
Feb 23 15:37:19 localhost kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 23 15:37:19 localhost kernel: node 0: [mem 0x0000000000100000-0x00000000bffe8fff]
Feb 23 15:37:19 localhost kernel: node 0: [mem 0x0000000100000000-0x000000042effffff]
Feb 23 15:37:19 localhost kernel: Zeroed struct page in unavailable ranges: 4217 pages
Feb 23 15:37:19 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000042effffff]
Feb 23 15:37:19 localhost kernel: On node 0 totalpages: 4124551
Feb 23 15:37:19 localhost kernel: DMA zone: 64 pages used for memmap
Feb 23 15:37:19 localhost kernel: DMA zone: 158 pages reserved
Feb 23 15:37:19 localhost kernel: DMA zone: 3998 pages, LIFO batch:0
Feb 23 15:37:19 localhost kernel: DMA32 zone: 12224 pages used for memmap
Feb 23 15:37:19 localhost kernel: DMA32 zone: 782313 pages, LIFO batch:63
Feb 23 15:37:19 localhost kernel: Normal zone: 52160 pages used for memmap
Feb 23 15:37:19 localhost kernel: Normal zone: 3338240 pages, LIFO batch:63
Feb 23 15:37:19 localhost kernel: ACPI: PM-Timer IO Port: 0xb008
Feb 23 15:37:19 localhost kernel: ACPI: Local APIC address 0xfee00000
Feb 23 15:37:19 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 23 15:37:19 localhost kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Feb 23 15:37:19 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 23 15:37:19 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 23 15:37:19 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 23 15:37:19 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 23 15:37:19 localhost kernel: ACPI: IRQ5 used by override.
Feb 23 15:37:19 localhost kernel: ACPI: IRQ9 used by override.
Feb 23 15:37:19 localhost kernel: ACPI: IRQ10 used by override.
Feb 23 15:37:19 localhost kernel: ACPI: IRQ11 used by override.
Feb 23 15:37:19 localhost kernel: Using ACPI (MADT) for SMP configuration information
Feb 23 15:37:19 localhost kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 23 15:37:19 localhost kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Feb 23 15:37:19 localhost kernel: PM: Registered nosave memory: [mem 0x00000000-0x00000fff]
Feb 23 15:37:19 localhost kernel: PM: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Feb 23 15:37:19 localhost kernel: PM: Registered nosave memory: [mem 0x000a0000-0x000effff]
Feb 23 15:37:19 localhost kernel: PM: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Feb 23 15:37:19 localhost kernel: PM: Registered nosave memory: [mem 0xbffe9000-0xbfffffff]
Feb 23 15:37:19 localhost kernel: PM: Registered nosave memory: [mem 0xc0000000-0xdfffffff]
Feb 23 15:37:19 localhost kernel: PM: Registered nosave memory: [mem 0xe0000000-0xe03fffff]
Feb 23 15:37:19 localhost kernel: PM: Registered nosave memory: [mem 0xe0400000-0xfffbffff]
Feb 23 15:37:19 localhost kernel: PM: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Feb 23 15:37:19 localhost kernel: [mem 0xc0000000-0xdfffffff] available for PCI devices
Feb 23 15:37:19 localhost kernel: Booting paravirtualized kernel on KVM
Feb 23 15:37:19 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 23 15:37:19 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Feb 23 15:37:19 localhost kernel: percpu: Embedded 55 pages/cpu s188416 r8192 d28672 u524288
Feb 23 15:37:19 localhost kernel: pcpu-alloc: s188416 r8192 d28672 u524288 alloc=1*2097152
Feb 23 15:37:19 localhost kernel: pcpu-alloc: [0] 0 1 2 3
Feb 23 15:37:19 localhost kernel: kvm-guest: stealtime: cpu 0, msr 41f02d080
Feb 23 15:37:19 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Feb 23 15:37:19 localhost kernel: Built 1 zonelists, mobility grouping on. Total pages: 4059945
Feb 23 15:37:19 localhost kernel: Policy zone: Normal
Feb 23 15:37:19 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-db1ffa3c10bccbc8d8864d9c4464ed0bc35694450a312436ffb67ccb09c801c0/vmlinuz-4.18.0-372.43.1.el8_6.x86_64 ignition.firstboot ostree=/ostree/boot.1/rhcos/db1ffa3c10bccbc8d8864d9c4464ed0bc35694450a312436ffb67ccb09c801c0/0 ignition.platform.id=aws console=tty0 console=ttyS0,115200n8
Feb 23 15:37:19 localhost kernel: Specific versions of hardware are certified with Red Hat Enterprise Linux 8. Please see the list of hardware certified with Red Hat Enterprise Linux 8 at https://catalog.redhat.com.
Feb 23 15:37:19 localhost kernel: Memory: 3069024K/16498204K available (12293K kernel code, 5866K rwdata, 8296K rodata, 2540K init, 14320K bss, 467248K reserved, 0K cma-reserved)
Feb 23 15:37:19 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 23 15:37:19 localhost kernel: ftrace: allocating 40026 entries in 157 pages
Feb 23 15:37:19 localhost kernel: ftrace: allocated 157 pages with 5 groups
Feb 23 15:37:19 localhost kernel: rcu: Hierarchical RCU implementation.
Feb 23 15:37:19 localhost kernel: rcu: RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=4.
Feb 23 15:37:19 localhost kernel: Rude variant of Tasks RCU enabled.
Feb 23 15:37:19 localhost kernel: Tracing variant of Tasks RCU enabled.
Feb 23 15:37:19 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 23 15:37:19 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 23 15:37:19 localhost kernel: NR_IRQS: 524544, nr_irqs: 456, preallocated irqs: 16
Feb 23 15:37:19 localhost kernel: random: crng done (trusting CPU's manufacturer)
Feb 23 15:37:19 localhost kernel: Console: colour VGA+ 80x25
Feb 23 15:37:19 localhost kernel: printk: console [tty0] enabled
Feb 23 15:37:19 localhost kernel: printk: console [ttyS0] enabled
Feb 23 15:37:19 localhost kernel: ACPI: Core revision 20210604
Feb 23 15:37:19 localhost kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Feb 23 15:37:19 localhost kernel: APIC: Switch to symmetric I/O mode setup
Feb 23 15:37:19 localhost kernel: x2apic enabled
Feb 23 15:37:19 localhost kernel: Switched APIC routing to physical x2apic.
Feb 23 15:37:19 localhost kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x29cd4133323, max_idle_ns: 440795296220 ns
Feb 23 15:37:19 localhost kernel: Calibrating delay loop (skipped) preset value.. 5799.99 BogoMIPS (lpj=2899998)
Feb 23 15:37:19 localhost kernel: pid_max: default: 32768 minimum: 301
Feb 23 15:37:19 localhost kernel: LSM: Security Framework initializing
Feb 23 15:37:19 localhost kernel: Yama: becoming mindful.
Feb 23 15:37:19 localhost kernel: SELinux: Initializing.
Feb 23 15:37:19 localhost kernel: LSM support for eBPF active
Feb 23 15:37:19 localhost kernel: Dentry cache hash table entries: 2097152 (order: 12, 16777216 bytes, vmalloc)
Feb 23 15:37:19 localhost kernel: Inode-cache hash table entries: 1048576 (order: 11, 8388608 bytes, vmalloc)
Feb 23 15:37:19 localhost kernel: Mount-cache hash table entries: 32768 (order: 6, 262144 bytes, vmalloc)
Feb 23 15:37:19 localhost kernel: Mountpoint-cache hash table entries: 32768 (order: 6, 262144 bytes, vmalloc)
Feb 23 15:37:19 localhost kernel: x86/tme: enabled by BIOS
Feb 23 15:37:19 localhost kernel: x86/mktme: No known encryption algorithm is supported: 0x0
Feb 23 15:37:19 localhost kernel: x86/mktme: disabled by BIOS
Feb 23 15:37:19 localhost kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Feb 23 15:37:19 localhost kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Feb 23 15:37:19 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 23 15:37:19 localhost kernel: Spectre V2 : Mitigation: Enhanced IBRS
Feb 23 15:37:19 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 23 15:37:19 localhost kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Feb 23 15:37:19 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 23 15:37:19 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 23 15:37:19 localhost kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 23 15:37:19 localhost kernel: Freeing SMP alternatives memory: 36K
Feb 23 15:37:19 localhost kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1235
Feb 23 15:37:19 localhost kernel: TSC deadline timer enabled
Feb 23 15:37:19 localhost kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8375C CPU @ 2.90GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Feb 23 15:37:19 localhost kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Feb 23 15:37:19 localhost kernel: rcu: Hierarchical SRCU implementation.
Feb 23 15:37:19 localhost kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 23 15:37:19 localhost kernel: smp: Bringing up secondary CPUs ...
Feb 23 15:37:19 localhost kernel: x86: Booting SMP configuration:
Feb 23 15:37:19 localhost kernel: .... node #0, CPUs: #1
Feb 23 15:37:19 localhost kernel: kvm-clock: cpu 1, msr 98801041, secondary cpu clock
Feb 23 15:37:19 localhost kernel: kvm-guest: stealtime: cpu 1, msr 41f0ad080
Feb 23 15:37:19 localhost kernel: #2
Feb 23 15:37:19 localhost kernel: kvm-clock: cpu 2, msr 98801081, secondary cpu clock
Feb 23 15:37:19 localhost kernel: kvm-guest: stealtime: cpu 2, msr 41f12d080
Feb 23 15:37:19 localhost kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 23 15:37:19 localhost kernel: #3
Feb 23 15:37:19 localhost kernel: kvm-clock: cpu 3, msr 988010c1, secondary cpu clock
Feb 23 15:37:19 localhost kernel: kvm-guest: stealtime: cpu 3, msr 41f1ad080
Feb 23 15:37:19 localhost kernel: smp: Brought up 1 node, 4 CPUs
Feb 23 15:37:19 localhost kernel: smpboot: Max logical packages: 1
Feb 23 15:37:19 localhost kernel: smpboot: Total of 4 processors activated (23199.98 BogoMIPS)
Feb 23 15:37:19 localhost kernel: node 0 deferred pages initialised in 21ms
Feb 23 15:37:19 localhost kernel: devtmpfs: initialized
Feb 23 15:37:19 localhost kernel: x86/mm: Memory block size: 128MB
Feb 23 15:37:19 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 23 15:37:19 localhost kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, vmalloc)
Feb 23 15:37:19 localhost kernel: pinctrl core: initialized pinctrl subsystem
Feb 23 15:37:19 localhost kernel: NET: Registered protocol family 16
Feb 23 15:37:19 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL pool for atomic allocations
Feb 23 15:37:19 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 23 15:37:19 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 23 15:37:19 localhost kernel: audit: initializing netlink subsys (disabled)
Feb 23 15:37:19 localhost kernel: audit: type=2000 audit(1677166636.810:1): state=initialized audit_enabled=0 res=1
Feb 23 15:37:19 localhost kernel: cpuidle: using governor menu
Feb 23 15:37:19 localhost kernel: ACPI: bus type PCI registered
Feb 23 15:37:19 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 23 15:37:19 localhost kernel: PCI: Using configuration type 1 for base access
Feb 23 15:37:19 localhost kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 23 15:37:19 localhost kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 23 15:37:19 localhost kernel: cryptd: max_cpu_qlen set to 1000
Feb 23 15:37:19 localhost kernel: ACPI: Added _OSI(Module Device)
Feb 23 15:37:19 localhost kernel: ACPI: Added _OSI(Processor Device)
Feb 23 15:37:19 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 23 15:37:19 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 23 15:37:19 localhost kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 23 15:37:19 localhost kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 23 15:37:19 localhost kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 23 15:37:19 localhost kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Feb 23 15:37:19 localhost kernel: ACPI: Interpreter enabled
Feb 23 15:37:19 localhost kernel: ACPI: PM: (supports S0 S4 S5)
Feb 23 15:37:19 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Feb 23 15:37:19 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 23 15:37:19 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 23 15:37:19 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 23 15:37:19 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI EDR HPX-Type3]
Feb 23 15:37:19 localhost kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Feb 23 15:37:19 localhost kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Feb 23 15:37:19 localhost kernel: acpiphp: Slot [3] registered
Feb 23 15:37:19 localhost kernel: acpiphp: Slot [4] registered
Feb 23 15:37:19 localhost kernel: acpiphp: Slot [5] registered
Feb 23 15:37:19 localhost kernel: acpiphp: Slot [6] registered
Feb 23 15:37:19 localhost kernel: acpiphp: Slot [7] registered
Feb 23 15:37:19 localhost kernel: acpiphp: Slot [8] registered
Feb 23 15:37:19 localhost kernel: acpiphp: Slot [9] registered
Feb 23 15:37:19 localhost kernel: acpiphp: Slot [10] registered
Feb 23 15:37:19 localhost kernel: acpiphp: Slot [11] registered
Feb 23 15:37:19 localhost kernel: acpiphp: Slot [12] registered
Feb 23 15:37:19 localhost kernel: acpiphp: Slot [13] registered
Feb 23 15:37:19 localhost kernel: acpiphp: Slot [14] registered
Feb 23 15:37:19 localhost kernel: acpiphp: Slot [15] registered
Feb 23 15:37:19 localhost kernel: acpiphp: Slot [16] registered
Feb 23 15:37:19 localhost kernel: acpiphp: Slot [17] registered
Feb 23 15:37:19 localhost kernel: acpiphp: Slot [18] registered
Feb 23 15:37:19 localhost kernel: acpiphp: Slot [19] registered
Feb 23 15:37:19 localhost kernel: acpiphp: Slot [20] registered
Feb 23 15:37:19 localhost kernel: acpiphp: Slot [21] registered
Feb 23 15:37:19 localhost kernel: acpiphp: Slot [22] registered
Feb 23 15:37:19 localhost kernel: acpiphp: Slot [23] registered
Feb 23 15:37:19 localhost kernel: acpiphp: Slot [24] registered
Feb 23 15:37:19 localhost kernel: acpiphp: Slot [25] registered
Feb 23 15:37:19 localhost kernel: acpiphp: Slot [26] registered
Feb 23 15:37:19 localhost kernel: acpiphp: Slot [27] registered
Feb 23 15:37:19 localhost kernel: acpiphp: Slot [28] registered
Feb 23 15:37:19 localhost kernel: acpiphp: Slot [29] registered
Feb 23 15:37:19 localhost kernel: acpiphp: Slot [30] registered
Feb 23 15:37:19 localhost kernel: acpiphp: Slot [31] registered
Feb 23 15:37:19 localhost kernel: PCI host bridge to bus 0000:00
Feb 23 15:37:19 localhost kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 23 15:37:19 localhost kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 23 15:37:19 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 23 15:37:19 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Feb 23 15:37:19 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x440000000-0x20043fffffff window]
Feb 23 15:37:19 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 23 15:37:19 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 23 15:37:19 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 23 15:37:19 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Feb 23 15:37:19 localhost kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Feb 23 15:37:19 localhost kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Feb 23 15:37:19 localhost kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Feb 23 15:37:19 localhost kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Feb 23 15:37:19 localhost kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Feb 23 15:37:19 localhost kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Feb 23 15:37:19 localhost kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Feb 23 15:37:19 localhost kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Feb 23 15:37:19 localhost kernel: pci 0000:00:01.3: quirk_piix4_acpi+0x0/0x170 took 34179 usecs
Feb 23 15:37:19 localhost kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Feb 23 15:37:19 localhost kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Feb 23 15:37:19 localhost kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Feb 23 15:37:19 localhost kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 23 15:37:19 localhost kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Feb 23 15:37:19 localhost kernel: pci 0000:00:04.0: enabling Extended Tags
Feb 23 15:37:19 localhost kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 23 15:37:19 localhost kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf5fff]
Feb 23 15:37:19 localhost kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf6000-0xfebf7fff]
Feb 23 15:37:19 localhost kernel: pci 0000:00:05.0: reg 0x18: [mem 0xfe800000-0xfe87ffff pref]
Feb 23 15:37:19 localhost kernel: pci 0000:00:05.0: enabling Extended Tags
Feb 23 15:37:19 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 23 15:37:19 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 23 15:37:19 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 23 15:37:19 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 23 15:37:19 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 23 15:37:19 localhost kernel: iommu: Default domain type: Passthrough
Feb 23 15:37:19 localhost kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Feb 23 15:37:19 localhost kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 23 15:37:19 localhost kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Feb 23 15:37:19 localhost kernel: vgaarb: loaded
Feb 23 15:37:19 localhost kernel: SCSI subsystem initialized
Feb 23 15:37:19 localhost kernel: ACPI: bus type USB registered
Feb 23 15:37:19 localhost kernel: usbcore: registered new interface driver usbfs
Feb 23 15:37:19 localhost kernel: usbcore: registered new interface driver hub
Feb 23 15:37:19 localhost kernel: usbcore: registered new device driver usb
Feb 23 15:37:19 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 23 15:37:19 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 23 15:37:19 localhost kernel: PTP clock support registered
Feb 23 15:37:19 localhost kernel: EDAC MC: Ver: 3.0.0
Feb 23 15:37:19 localhost kernel: PCI: Using ACPI for IRQ routing
Feb 23 15:37:19 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 23 15:37:19 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 23 15:37:19 localhost kernel: e820: reserve RAM buffer [mem 0xbffe9000-0xbfffffff]
Feb 23 15:37:19 localhost kernel: e820: reserve RAM buffer [mem 0x42f000000-0x42fffffff]
Feb 23 15:37:19 localhost kernel: NetLabel: Initializing
Feb 23 15:37:19 localhost kernel: NetLabel: domain hash size = 128
Feb 23 15:37:19 localhost kernel: NetLabel: protocols = UNLABELED CIPSOv4 CALIPSO
Feb 23 15:37:19 localhost kernel: NetLabel: unlabeled traffic allowed by default
Feb 23 15:37:19 localhost kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Feb 23 15:37:19 localhost kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Feb 23 15:37:19 localhost kernel: clocksource: Switched to clocksource kvm-clock
Feb 23 15:37:19 localhost kernel: VFS: Disk quotas dquot_6.6.0
Feb 23 15:37:19 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 23 15:37:19 localhost kernel: pnp: PnP ACPI init
Feb 23 15:37:19 localhost kernel: pnp 00:00: Plug and Play ACPI device, IDs PNP0b00 (active)
Feb 23 15:37:19 localhost kernel: pnp 00:01: Plug and Play ACPI device, IDs PNP0303 (active)
Feb 23 15:37:19 localhost kernel: pnp 00:02: Plug and Play ACPI device, IDs PNP0f13 (active)
Feb 23 15:37:19 localhost kernel: pnp 00:03: Plug and Play ACPI device, IDs PNP0400 (active)
Feb 23 15:37:19 localhost kernel: pnp 00:04: Plug and Play ACPI device, IDs PNP0501 (active)
Feb 23 15:37:19 localhost kernel: pnp: PnP ACPI: found 5 devices
Feb 23 15:37:19 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 23 15:37:19 localhost kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 23 15:37:19 localhost kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 23 15:37:19 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 23 15:37:19 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Feb 23 15:37:19 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x440000000-0x20043fffffff window]
Feb 23 15:37:19 localhost kernel: NET: Registered protocol family 2
Feb 23 15:37:19 localhost kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, vmalloc)
Feb 23 15:37:19 localhost kernel: tcp_listen_portaddr_hash hash table entries: 8192 (order: 5, 131072 bytes, vmalloc)
Feb 23 15:37:19 localhost kernel: TCP established hash table entries: 131072 (order: 8, 1048576 bytes, vmalloc)
Feb 23 15:37:19 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, vmalloc)
Feb 23 15:37:19 localhost kernel: TCP: Hash tables configured (established 131072 bind 65536)
Feb 23 15:37:19 localhost kernel: MPTCP token hash table entries: 16384 (order: 6, 393216 bytes, vmalloc)
Feb 23 15:37:19 localhost kernel: UDP hash table entries: 8192 (order: 6, 262144 bytes, vmalloc)
Feb 23 15:37:19 localhost kernel: UDP-Lite hash table entries: 8192 (order: 6, 262144 bytes, vmalloc)
Feb 23 15:37:19 localhost kernel: NET: Registered protocol family 1
Feb 23 15:37:19 localhost kernel: NET: Registered protocol family 44
Feb 23 15:37:19 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 23 15:37:19 localhost kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Feb 23 15:37:19 localhost kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 23 15:37:19 localhost kernel: PCI: CLS 0 bytes, default 64
Feb 23 15:37:19 localhost kernel: Unpacking initramfs...
Feb 23 15:37:19 localhost kernel: Freeing initrd memory: 89872K
Feb 23 15:37:19 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 23 15:37:19 localhost kernel: software IO TLB: mapped [mem 0x00000000bbfe9000-0x00000000bffe9000] (64MB)
Feb 23 15:37:19 localhost kernel: ACPI: bus type thunderbolt registered
Feb 23 15:37:19 localhost kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x29cd4133323, max_idle_ns: 440795296220 ns
Feb 23 15:37:19 localhost kernel: clocksource: Switched to clocksource tsc
Feb 23 15:37:19 localhost kernel: Initialise system trusted keyrings
Feb 23 15:37:19 localhost kernel: Key type blacklist registered
Feb 23 15:37:19 localhost kernel: workingset: timestamp_bits=36 max_order=22 bucket_order=0
Feb 23 15:37:19 localhost kernel: zbud: loaded
Feb 23 15:37:19 localhost kernel: pstore: using deflate compression
Feb 23 15:37:19 localhost kernel: Platform Keyring initialized
Feb 23 15:37:19 localhost kernel: NET: Registered protocol family 38
Feb 23 15:37:19 localhost kernel: Key type asymmetric registered
Feb 23 15:37:19 localhost kernel: Asymmetric key parser 'x509' registered
Feb 23 15:37:19 localhost kernel: Running certificate verification selftests
Feb 23 15:37:19 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Feb 23 15:37:19 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 247)
Feb 23 15:37:19 localhost kernel: io scheduler mq-deadline registered
Feb 23 15:37:19 localhost kernel: io scheduler kyber registered
Feb 23 15:37:19 localhost kernel: io scheduler bfq registered
Feb 23 15:37:19 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Feb 23 15:37:19 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Feb 23 15:37:19 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Feb 23 15:37:19 localhost kernel: ACPI: Power Button [PWRF]
Feb 23 15:37:19 localhost kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input1
Feb 23 15:37:19 localhost kernel: ACPI: Sleep Button [SLPF]
Feb 23 15:37:19 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 23 15:37:19 localhost kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 23 15:37:19 localhost kernel: Non-volatile memory driver v1.3
Feb 23 15:37:19 localhost kernel: rdac: device handler registered
Feb 23 15:37:19 localhost kernel: hp_sw: device handler registered
Feb 23 15:37:19 localhost kernel: emc: device handler registered
Feb 23 15:37:19 localhost kernel: alua: device handler registered
Feb 23 15:37:19 localhost kernel: libphy: Fixed MDIO Bus: probed
Feb 23 15:37:19 localhost kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
Feb 23 15:37:19 localhost kernel: ehci-pci: EHCI PCI platform driver
Feb 23 15:37:19 localhost kernel: ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
Feb 23 15:37:19 localhost kernel: ohci-pci: OHCI PCI platform driver
Feb 23 15:37:19 localhost kernel: uhci_hcd: USB Universal Host Controller Interface driver
Feb 23 15:37:19 localhost kernel: usbcore: registered new interface driver usbserial_generic
Feb 23 15:37:19 localhost kernel: usbserial: USB Serial support registered for generic
Feb 23 15:37:19 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 23 15:37:19 localhost kernel: i8042: Warning: Keylock active
Feb 23 15:37:19 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 23 15:37:19 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 23 15:37:19 localhost kernel: mousedev: PS/2 mouse device common for all mice
Feb 23 15:37:19 localhost kernel: rtc_cmos 00:00: RTC can wake from S4
Feb 23 15:37:19 localhost kernel: rtc_cmos 00:00: registered as rtc0
Feb 23 15:37:19 localhost kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Feb 23 15:37:19 localhost kernel: intel_pstate: Intel P-state driver initializing
Feb 23 15:37:19 localhost kernel: unchecked MSR access error: WRMSR to 0x199 (tried to write 0x0000000000000800) at rIP: 0xffffffffaa671164 (native_write_msr+0x4/0x20)
Feb 23 15:37:19 localhost kernel: Call Trace:
Feb 23 15:37:19 localhost kernel: __wrmsr_on_cpu+0x33/0x40
Feb 23 15:37:19 localhost kernel: generic_exec_single+0x91/0xd0
Feb 23 15:37:19 localhost kernel: smp_call_function_single+0xc7/0xf0
Feb 23 15:37:19 localhost kernel: ? core_get_max_pstate+0x29/0x140
Feb 23 15:37:19 localhost kernel: wrmsrl_on_cpu+0x58/0x80
Feb 23 15:37:19 localhost kernel: intel_pstate_init_cpu+0xe7/0x3e0
Feb 23 15:37:19 localhost kernel: intel_cpufreq_cpu_init+0x42/0x1e0
Feb 23 15:37:19 localhost kernel: cpufreq_online+0x315/0x940
Feb 23 15:37:19 localhost kernel: cpufreq_add_dev+0x6f/0x80
Feb 23 15:37:19 localhost kernel: subsys_interface_register+0xf1/0x150
Feb 23 15:37:19 localhost kernel: ? do_early_param+0x95/0x95
Feb 23 15:37:19 localhost kernel: ? intel_pstate_setup+0x117/0x117
Feb 23 15:37:19 localhost kernel: cpufreq_register_driver+0x14c/0x290
Feb 23 15:37:19 localhost kernel: ? intel_pstate_setup+0x117/0x117
Feb 23 15:37:19 localhost kernel: intel_pstate_register_driver+0x40/0xb0
Feb 23 15:37:19 localhost kernel: intel_pstate_init+0x54f/0x697
Feb 23 15:37:19 localhost kernel: ? driver_register+0x98/0xc0
Feb 23 15:37:19 localhost kernel: ? intel_pstate_setup+0x117/0x117
Feb 23 15:37:19 localhost kernel: do_one_initcall+0x46/0x1d0
Feb 23 15:37:19 localhost kernel: ? do_early_param+0x95/0x95
Feb 23 15:37:19 localhost kernel: kernel_init_freeable+0x1b4/0x22d
Feb 23 15:37:19 localhost kernel: ? rest_init+0xaa/0xaa
Feb 23 15:37:19 localhost kernel: kernel_init+0xa/0xfd
Feb 23 15:37:19 localhost kernel: ret_from_fork+0x1f/0x40
Feb 23 15:37:19 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 23 15:37:19 localhost kernel: usbcore: registered new interface driver usbhid
Feb 23 15:37:19 localhost kernel: usbhid: USB HID core driver
Feb 23 15:37:19 localhost kernel: drop_monitor: Initializing network drop monitor service
Feb 23 15:37:19 localhost kernel: Initializing XFRM netlink socket
Feb 23 15:37:19 localhost kernel: NET: Registered protocol family 10
Feb 23 15:37:19 localhost kernel: Segment Routing with IPv6
Feb 23 15:37:19 localhost kernel: NET: Registered protocol family 17
Feb 23 15:37:19 localhost kernel: mpls_gso: MPLS GSO support
Feb 23 15:37:19 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Feb 23 15:37:19 localhost kernel: AES CTR mode by8 optimization enabled
Feb 23 15:37:19 localhost kernel: sched_clock: Marking stable (2138493405, 0)->(3609594388, -1471100983)
Feb 23 15:37:19 localhost kernel: registered taskstats version 1
Feb 23 15:37:19 localhost kernel: Loading compiled-in X.509 certificates
Feb 23 15:37:19 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kernel signing key: 89f84f8328e240c751c884441f2f1c1c17813dd9'
Feb 23 15:37:19 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Feb 23 15:37:19 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Feb 23 15:37:19 localhost kernel: zswap: loaded using pool lzo/zbud
Feb 23 15:37:19 localhost kernel: page_owner is disabled
Feb 23 15:37:19 localhost kernel: Key type big_key registered
Feb 23 15:37:19 localhost kernel: Key type encrypted registered
Feb 23 15:37:19 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 23 15:37:19 localhost kernel: ima: Allocated hash algorithm: sha256
Feb 23 15:37:19 localhost kernel: ima: No architecture policies found
Feb 23 15:37:19 localhost kernel: evm: Initialising EVM extended attributes:
Feb 23 15:37:19 localhost kernel: evm: security.selinux
Feb 23 15:37:19 localhost kernel: evm: security.ima
Feb 23 15:37:19 localhost kernel: evm: security.capability
Feb 23 15:37:19 localhost kernel: evm: HMAC attrs: 0x1
Feb 23 15:37:19 localhost kernel: rtc_cmos 00:00: setting system clock to 2023-02-23 15:37:19 UTC (1677166639)
Feb 23 15:37:19 localhost kernel: Freeing unused decrypted memory: 2036K
Feb 23 15:37:19 localhost kernel: Freeing unused kernel image (initmem) memory: 2540K
Feb 23 15:37:19 localhost kernel: Write protecting the kernel read-only data: 24576k
Feb 23 15:37:19 localhost kernel: Freeing unused kernel image (text/rodata gap) memory: 2012K
Feb 23 15:37:19 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 1944K
Feb 23 15:37:19 localhost systemd-journald[298]: Missed 4 kernel messages
Feb 23 15:37:19 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input2
Feb 23 15:37:19 localhost systemd-journald[298]: Missed 6 kernel messages
Feb 23 15:37:19 localhost kernel: Loading iSCSI transport class v2.0-870.
Feb 23 15:37:19 localhost kernel: fuse: init (API version 7.33)
Feb 23 15:37:19 localhost kernel: IPMI message handler: version 39.2
Feb 23 15:37:19 localhost kernel: ipmi device interface
Feb 23 15:37:19 localhost systemd-journald[298]: Journal started
Feb 23 15:37:19 localhost systemd-journald[298]: Runtime journal (/run/log/journal/ec2d456b0a3e28d0eb2f198315e90643) is 8.0M, max 787.5M, 779.5M free.
Feb 23 15:37:19 localhost systemd-modules-load[295]: Inserted module 'fuse'
Feb 23 15:37:19 localhost systemd-modules-load[295]: Module 'msr' is builtin
Feb 23 15:37:19 localhost systemd-modules-load[295]: Inserted module 'ipmi_devintf'
Feb 23 15:37:20 localhost systemd[1]: systemd-vconsole-setup.service: Succeeded.
Feb 23 15:37:20 localhost systemd[1]: Started Setup Virtual Console.
Feb 23 15:37:20 localhost systemd[1]: Starting dracut ask for additional cmdline parameters...
Feb 23 15:37:20 localhost systemd[1]: Starting Apply Kernel Variables...
Feb 23 15:37:20 localhost systemd[1]: Started dracut ask for additional cmdline parameters.
Feb 23 15:37:20 localhost systemd[1]: Starting dracut cmdline hook...
Feb 23 15:37:20 localhost dracut-cmdline[325]: dracut-412.86.202301311551-0 dracut-049-203.git20220511.el8_6
Feb 23 15:37:20 localhost dracut-cmdline[325]: Using kernel command line parameters: ip=auto BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-db1ffa3c10bccbc8d8864d9c4464ed0bc35694450a312436ffb67ccb09c801c0/vmlinuz-4.18.0-372.43.1.el8_6.x86_64 ignition.firstboot ostree=/ostree/boot.1/rhcos/db1ffa3c10bccbc8d8864d9c4464ed0bc35694450a312436ffb67ccb09c801c0/0 ignition.platform.id=aws console=tty0 console=ttyS0,115200n8
Feb 23 15:37:20 localhost systemd[1]: Started Apply Kernel Variables.
Feb 23 15:37:20 localhost systemd-journald[298]: Missed 11 kernel messages
Feb 23 15:37:20 localhost kernel: iscsi: registered transport (tcp)
Feb 23 15:37:20 localhost kernel: iscsi: registered transport (qla4xxx)
Feb 23 15:37:20 localhost kernel: QLogic iSCSI HBA Driver
Feb 23 15:37:20 localhost kernel: libcxgbi:libcxgbi_init_module: Chelsio iSCSI driver library libcxgbi v0.9.1-ko (Apr. 2015)
Feb 23 15:37:20 localhost kernel: Chelsio T4-T6 iSCSI Driver cxgb4i v0.9.5-ko (Apr. 2015)
Feb 23 15:37:20 localhost kernel: iscsi: registered transport (cxgb4i)
Feb 23 15:37:20 localhost kernel: cnic: QLogic cnicDriver v2.5.22 (July 20, 2015)
Feb 23 15:37:20 localhost kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4
Feb 23 15:37:20 localhost kernel: QLogic NetXtreme II iSCSI Driver bnx2i v2.7.10.1 (Jul 16, 2014)
Feb 23 15:37:20 localhost kernel: iscsi: registered transport (bnx2i)
Feb 23 15:37:20 localhost kernel: iscsi: registered transport (be2iscsi)
Feb 23 15:37:20 localhost kernel: In beiscsi_module_init, tt=0000000029d8d354
Feb 23 15:37:20 localhost systemd[1]: Started dracut cmdline hook.
Feb 23 15:37:20 localhost systemd[1]: Starting dracut pre-udev hook...
Feb 23 15:37:20 localhost systemd-journald[298]: Missed 2 kernel messages
Feb 23 15:37:20 localhost kernel: device-mapper: uevent: version 1.0.3
Feb 23 15:37:20 localhost kernel: device-mapper: ioctl: 4.43.0-ioctl (2020-10-01) initialised: dm-devel@redhat.com
Feb 23 15:37:20 localhost systemd[1]: Started dracut pre-udev hook.
Feb 23 15:37:20 localhost systemd[1]: Starting udev Kernel Device Manager...
Feb 23 15:37:20 localhost systemd[1]: Started udev Kernel Device Manager.
Feb 23 15:37:20 localhost systemd[1]: Starting dracut pre-trigger hook...
Feb 23 15:37:20 localhost dracut-pre-trigger[480]: rd.md=0: removing MD RAID activation
Feb 23 15:37:20 localhost systemd[1]: Started dracut pre-trigger hook.
Feb 23 15:37:20 localhost systemd[1]: Starting udev Coldplug all Devices...
Feb 23 15:37:20 localhost systemd[1]: Mounting Kernel Configuration File System...
Feb 23 15:37:20 localhost systemd[1]: Mounted Kernel Configuration File System.
Feb 23 15:37:20 localhost systemd[1]: Started udev Coldplug all Devices.
Feb 23 15:37:20 localhost systemd[1]: Starting udev Wait for Complete Device Initialization...
Feb 23 15:37:20 localhost systemd-journald[298]: Missed 11 kernel messages
Feb 23 15:37:20 localhost kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 23 15:37:20 localhost kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 23 15:37:20 localhost systemd-udevd[546]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 23 15:37:20 localhost systemd-journald[298]: Missed 1 kernel messages
Feb 23 15:37:20 localhost kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 02:ea:92:f9:d3:f3
Feb 23 15:37:20 localhost kernel: nvme nvme0: pci function 0000:00:04.0
Feb 23 15:37:20 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 23 15:37:20 localhost systemd-udevd[547]: Using default interface naming scheme 'rhel-8.0'.
Feb 23 15:37:20 localhost systemd-journald[298]: Missed 1 kernel messages
Feb 23 15:37:20 localhost kernel: ena 0000:00:05.0 ens5: renamed from eth0
Feb 23 15:37:20 localhost kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 23 15:37:20 localhost kernel: nvme0n1: detected capacity change from 0 to 128849018880
Feb 23 15:37:20 localhost systemd-udevd[547]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 23 15:37:21 localhost systemd-journald[298]: Missed 1 kernel messages
Feb 23 15:37:21 localhost kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 23 15:37:21 localhost kernel: GPT:33554431 != 251658239
Feb 23 15:37:21 localhost kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 23 15:37:21 localhost kernel: GPT:33554431 != 251658239
Feb 23 15:37:21 localhost kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 23 15:37:21 localhost kernel: nvme0n1: p1 p2 p3 p4
Feb 23 15:37:21 localhost systemd[1]: Found device Amazon Elastic Block Store boot.
Feb 23 15:37:21 localhost systemd[1]: Started udev Wait for Complete Device Initialization.
Feb 23 15:37:21 localhost systemd[1]: Starting Device-Mapper Multipath Device Controller...
Feb 23 15:37:21 localhost systemd[1]: Starting Ensure filesystem labeled `boot` is unique...
Feb 23 15:37:21 localhost systemd[1]: Started Device-Mapper Multipath Device Controller.
Feb 23 15:37:21 localhost systemd[1]: Starting Open-iSCSI...
Feb 23 15:37:21 localhost multipathd[601]: --------start up--------
Feb 23 15:37:21 localhost multipathd[601]: read /etc/multipath.conf
Feb 23 15:37:21 localhost multipathd[601]: /etc/multipath.conf does not exist, blacklisting all devices.
Feb 23 15:37:21 localhost multipathd[601]: You can run "/sbin/mpathconf --enable" to create
Feb 23 15:37:21 localhost multipathd[601]: /etc/multipath.conf. See man mpathconf(8) for more details
Feb 23 15:37:21 localhost multipathd[601]: path checkers start up
Feb 23 15:37:21 localhost multipathd[601]: /etc/multipath.conf does not exist, blacklisting all devices.
Feb 23 15:37:21 localhost multipathd[601]: You can run "/sbin/mpathconf --enable" to create
Feb 23 15:37:21 localhost multipathd[601]: /etc/multipath.conf. See man mpathconf(8) for more details
Feb 23 15:37:21 localhost iscsid[604]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 23 15:37:21 localhost iscsid[604]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Feb 23 15:37:21 localhost iscsid[604]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 23 15:37:21 localhost iscsid[604]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 23 15:37:21 localhost iscsid[604]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 23 15:37:21 localhost systemd[1]: Started Open-iSCSI.
Feb 23 15:37:21 localhost systemd[1]: Started Ensure filesystem labeled `boot` is unique.
Feb 23 15:37:21 localhost coreos-gpt-setup[612]: Randomizing disk GUID
Feb 23 15:37:21 localhost systemd[1]: Starting Generate New UUID For Boot Disk GPT...
Feb 23 15:37:21 localhost systemd-journald[298]: Missed 24 kernel messages
Feb 23 15:37:21 localhost kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 23 15:37:21 localhost kernel: GPT:33554431 != 251658239
Feb 23 15:37:21 localhost kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 23 15:37:21 localhost kernel: GPT:33554431 != 251658239
Feb 23 15:37:21 localhost kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 23 15:37:21 localhost kernel: nvme0n1: p1 p2 p3 p4
Feb 23 15:37:21 localhost kernel: nvme0n1: p1 p2 p3 p4
Feb 23 15:37:22 localhost coreos-gpt-setup[620]: The operation has completed successfully.
Feb 23 15:37:22 localhost kernel: nvme0n1: p1 p2 p3 p4
Feb 23 15:37:22 localhost systemd[1]: Started Generate New UUID For Boot Disk GPT.
Feb 23 15:37:22 localhost systemd[1]: Starting Ignition OSTree: Regenerate Filesystem UUID (boot)...
Feb 23 15:37:22 localhost systemd[1]: Reached target Local File Systems (Pre).
Feb 23 15:37:22 localhost systemd[1]: Reached target Local File Systems.
Feb 23 15:37:22 localhost systemd[1]: Starting Create Volatile Files and Directories...
Feb 23 15:37:22 localhost ignition-ostree-firstboot-uuid[717]: e2fsck 1.45.6 (20-Mar-2020)
Feb 23 15:37:22 localhost ignition-ostree-firstboot-uuid[717]: Pass 1: Checking inodes, blocks, and sizes
Feb 23 15:37:22 localhost ignition-ostree-firstboot-uuid[717]: Pass 2: Checking directory structure
Feb 23 15:37:22 localhost ignition-ostree-firstboot-uuid[717]: Pass 3: Checking directory connectivity
Feb 23 15:37:22 localhost ignition-ostree-firstboot-uuid[717]: Pass 4: Checking reference counts
Feb 23 15:37:22 localhost ignition-ostree-firstboot-uuid[717]: Pass 5: Checking group summary information
Feb 23 15:37:22 localhost ignition-ostree-firstboot-uuid[717]: boot: 323/98304 files (0.6% non-contiguous), 140556/393216 blocks
Feb 23 15:37:22 localhost systemd[1]: Started Create Volatile Files and Directories.
Feb 23 15:37:22 localhost ignition-ostree-firstboot-uuid[730]: tune2fs 1.45.6 (20-Mar-2020)
Feb 23 15:37:22 localhost systemd[1]: Reached target System Initialization.
Feb 23 15:37:22 localhost systemd[1]: Reached target Basic System.
Feb 23 15:37:22 localhost ignition-ostree-firstboot-uuid[712]: Regenerated UUID for /dev/disk/by-label/boot
Feb 23 15:37:22 localhost systemd[1]: Started Ignition OSTree: Regenerate Filesystem UUID (boot).
Feb 23 15:37:22 localhost systemd[1]: Starting CoreOS Ignition User Config Setup...
Feb 23 15:37:22 localhost coreos-ignition-setup-user[735]: File /mnt/boot_partition/ignition/config.ign does not exist.. Skipping copy
Feb 23 15:37:22 localhost systemd-journald[298]: Missed 20 kernel messages
Feb 23 15:37:22 localhost kernel: EXT4-fs (nvme0n1p3): mounted filesystem with ordered data mode. Opts: (null)
Feb 23 15:37:22 localhost systemd[1]: Started CoreOS Ignition User Config Setup.
Feb 23 15:37:22 localhost systemd[1]: Starting Ignition (fetch-offline)...
Feb 23 15:37:22 localhost ignition[748]: Ignition 2.14.0
Feb 23 15:37:22 localhost ignition[748]: Stage: fetch-offline
Feb 23 15:37:22 localhost systemd[1]: Started Ignition (fetch-offline).
Feb 23 15:37:22 localhost ignition[748]: reading system config file "/usr/lib/ignition/base.d/00-core.ign"
Feb 23 15:37:22 localhost systemd[1]: Starting CoreOS Enable Network...
Feb 23 15:37:22 localhost ignition[748]: parsing config with SHA512: ff6a5153be363997e4d5d3ea8cc4048373a457c48c4a5b134a08a30aacd167c1e0f099f0bdf1e24c99ad180628cd02b767b863b5fe3a8fce3fe1886847eb8e2e
Feb 23 15:37:22 localhost ignition[748]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 23 15:37:22 localhost ignition[748]: Ignition finished successfully
Feb 23 15:37:23 localhost systemd[1]: Started CoreOS Enable Network.
Feb 23 15:37:23 localhost systemd[1]: Starting Ignition (fetch)...
Feb 23 15:37:23 localhost systemd[1]: Starting Copy CoreOS Firstboot Networking Config...
Feb 23 15:37:23 localhost ignition[771]: Ignition 2.14.0
Feb 23 15:37:23 localhost systemd-journald[298]: Missed 13 kernel messages
Feb 23 15:37:23 localhost kernel: EXT4-fs (nvme0n1p3): mounted filesystem with ordered data mode. Opts: (null)
Feb 23 15:37:23 localhost ignition[771]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 23 15:37:23 localhost ignition[771]: INFO : PUT error: Put "http://169.254.169.254/latest/api/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Feb 23 15:37:23 localhost ignition[771]: Stage: fetch
Feb 23 15:37:23 localhost coreos-copy-firstboot-network[772]: info: no files to copy from /mnt/boot_partition/coreos-firstboot-network; skipping
Feb 23 15:37:23 localhost ignition[771]: reading system config file "/usr/lib/ignition/base.d/00-core.ign"
Feb 23 15:37:23 localhost ignition[771]: parsing config with SHA512: ff6a5153be363997e4d5d3ea8cc4048373a457c48c4a5b134a08a30aacd167c1e0f099f0bdf1e24c99ad180628cd02b767b863b5fe3a8fce3fe1886847eb8e2e
Feb 23 15:37:23 localhost ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 23 15:37:23 localhost systemd[1]: Started Copy CoreOS Firstboot Networking Config.
Feb 23 15:37:23 localhost systemd[1]: Starting dracut initqueue hook...
Feb 23 15:37:23 localhost ignition[771]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #2
Feb 23 15:37:23 localhost ignition[771]: INFO : PUT error: Put "http://169.254.169.254/latest/api/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Feb 23 15:37:23 localhost NetworkManager[800]: [1677166643.3396] NetworkManager (version 1.36.0-12.el8_6) is starting... (for the first time)
Feb 23 15:37:23 localhost.localdomain NetworkManager[800]: [1677166643.3397] Read config: /etc/NetworkManager/NetworkManager.conf
Feb 23 15:37:23 localhost.localdomain systemd-journald[298]: Missed 13 kernel messages
Feb 23 15:37:23 localhost.localdomain kernel: IPv6: ADDRCONF(NETDEV_UP): ens5: link is not ready
Feb 23 15:37:23 localhost.localdomain NetworkManager[800]: [1677166643.3407] auth[0x55e53bd31aa0]: create auth-manager: D-Bus connection not available. Polkit is disabled and only root will be authorized.
Feb 23 15:37:23 localhost.localdomain NetworkManager[800]: [1677166643.3413] manager[0x55e53bd66020]: monitoring kernel firmware directory '/lib/firmware'.
Feb 23 15:37:23 localhost.localdomain NetworkManager[800]: [1677166643.3414] hostname: hostname: hostnamed not used as proxy creation failed with: Could not connect: No such file or directory
Feb 23 15:37:23 ip-10-0-136-68 NetworkManager[800]: [1677166643.3415] dns-mgr[0x55e53bd5e120]: init: dns=default,systemd-resolved rc-manager=symlink
Feb 23 15:37:23 ip-10-0-136-68 NetworkManager[800]: [1677166643.3415] policy: set-hostname: set hostname to 'localhost.localdomain' (no hostname found)
Feb 23 15:37:23 ip-10-0-136-68 NetworkManager[800]: [1677166643.3492] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.36.0-12.el8_6/libnm-device-plugin-team.so)
Feb 23 15:37:23 ip-10-0-136-68 NetworkManager[800]: [1677166643.3492] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Feb 23 15:37:23 ip-10-0-136-68 NetworkManager[800]: [1677166643.3493] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Feb 23 15:37:23 ip-10-0-136-68 NetworkManager[800]: [1677166643.3493] manager: Networking is enabled by state file
Feb 23 15:37:23 ip-10-0-136-68 NetworkManager[800]: [1677166643.3495] ifcfg-rh: dbus: don't use D-Bus for com.redhat.ifcfgrh1 service
Feb 23 15:37:23 ip-10-0-136-68 NetworkManager[800]: [1677166643.3495] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.36.0-12.el8_6/libnm-settings-plugin-ifcfg-rh.so")
Feb 23 15:37:23 ip-10-0-136-68 NetworkManager[800]: [1677166643.3495] settings: Loaded settings plugin: keyfile (internal)
Feb 23 15:37:23 ip-10-0-136-68 NetworkManager[800]: [1677166643.3504] dhcp-init: Using DHCP client 'internal'
Feb 23 15:37:23 ip-10-0-136-68 NetworkManager[800]: [1677166643.3504] device (lo): carrier: link connected
Feb 23 15:37:23 ip-10-0-136-68 NetworkManager[800]: [1677166643.3504] manager: (lo): new Generic device (/org/freedesktop/NetworkManager/Devices/1)
Feb 23 15:37:23 ip-10-0-136-68 NetworkManager[800]: [1677166643.3506] manager: (ens5): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Feb 23 15:37:23 ip-10-0-136-68 NetworkManager[800]: [1677166643.3506] device (ens5): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
Feb 23 15:37:23 ip-10-0-136-68 NetworkManager[800]: [1677166643.3633] device (ens5): carrier: link connected
Feb 23 15:37:23 ip-10-0-136-68 NetworkManager[800]: [1677166643.3637] sleep-monitor-sd: failed to acquire D-Bus proxy: Could not connect: No such file or directory
Feb 23 15:37:23 ip-10-0-136-68 NetworkManager[800]: [1677166643.3637] device (ens5): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'managed')
Feb 23 15:37:23 ip-10-0-136-68 NetworkManager[800]: [1677166643.3639] policy: auto-activating connection 'Wired Connection' (05306209-e209-4a4b-9826-35f723f6421f)
Feb 23 15:37:23 ip-10-0-136-68 NetworkManager[800]: [1677166643.3641] device (ens5): Activation: starting connection 'Wired Connection' (05306209-e209-4a4b-9826-35f723f6421f)
Feb 23 15:37:23 ip-10-0-136-68 NetworkManager[800]: [1677166643.3641] device (ens5): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Feb 23 15:37:23 ip-10-0-136-68 NetworkManager[800]: [1677166643.3641] manager: NetworkManager state is now CONNECTING
Feb 23 15:37:23 ip-10-0-136-68 ignition[771]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #3
Feb 23 15:37:23 ip-10-0-136-68 NetworkManager[800]: [1677166643.3642] device (ens5): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Feb 23 15:37:23 ip-10-0-136-68 NetworkManager[800]: [1677166643.3644] device (ens5): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Feb 23 15:37:23 ip-10-0-136-68 ignition[771]: INFO : PUT result: OK
Feb 23 15:37:23 ip-10-0-136-68 NetworkManager[800]: [1677166643.3645] dhcp4 (ens5): activation: beginning transaction (timeout in 90 seconds)
Feb 23 15:37:23 ip-10-0-136-68 NetworkManager[800]: [1677166643.3807] dhcp4 (ens5): state changed new lease, address=10.0.136.68
Feb 23 15:37:23 ip-10-0-136-68 NetworkManager[800]: [1677166643.3808] policy: set 'Wired Connection' (ens5) as default for IPv4 routing and DNS
Feb 23 15:37:23 ip-10-0-136-68 ignition[771]: DEBUG : parsed url from cmdline: ""
Feb 23 15:37:23 ip-10-0-136-68 ignition[771]: INFO : no config URL provided
Feb 23 15:37:23 ip-10-0-136-68 ignition[771]: INFO : reading system config file "/usr/lib/ignition/user.ign"
Feb 23 15:37:23 ip-10-0-136-68 ignition[771]: INFO : no config at "/usr/lib/ignition/user.ign"
Feb 23 15:37:23 ip-10-0-136-68 ignition[771]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 23 15:37:23 ip-10-0-136-68 NetworkManager[800]: [1677166643.3808] policy: set-hostname: set hostname to 'ip-10-0-136-68' (from DHCPv4)
Feb 23 15:37:23 ip-10-0-136-68 ignition[771]: INFO : PUT result: OK
Feb 23 15:37:23 ip-10-0-136-68 ignition[771]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 23 15:37:23 ip-10-0-136-68 ignition[771]: INFO : GET result: OK
Feb 23 15:37:23 ip-10-0-136-68 ignition[771]: DEBUG : parsing config with SHA512: 22a7ae68cca0b9cf575e352076b8517ee7aa7425f1bf47648de7745e4f9a36b59e6abb34861a7dd1be4fa83a474314ca8f0a6f631c73bb8571ed2635e0a463a1
Feb 23 15:37:23 ip-10-0-136-68 ignition[771]: INFO : Adding "root-ca" to list of CAs
Feb 23 15:37:23 ip-10-0-136-68 ignition[771]: INFO : GET https://api-int.mnguyen-rt.devcluster.openshift.com:22623/config/worker: attempt #1
Feb 23 15:37:23 ip-10-0-136-68 ignition[771]: INFO : GET result: Internal Server Error
Feb 23 15:37:23 ip-10-0-136-68 NetworkManager[800]: [1677166643.3919] device (ens5): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed')
Feb 23 15:37:23 ip-10-0-136-68 NetworkManager[800]: [1677166643.3919] device (ens5): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed')
Feb 23 15:37:23 ip-10-0-136-68 NetworkManager[800]: [1677166643.3919] device (ens5): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed')
Feb 23 15:37:23 ip-10-0-136-68 NetworkManager[800]: [1677166643.3920] manager: NetworkManager state is now CONNECTED_SITE
Feb 23 15:37:23 ip-10-0-136-68 NetworkManager[800]: [1677166643.3920] device (ens5): Activation: successful, device activated.
Feb 23 15:37:23 ip-10-0-136-68 NetworkManager[800]: [1677166643.3920] manager: NetworkManager state is now CONNECTED_GLOBAL
Feb 23 15:37:23 ip-10-0-136-68 NetworkManager[800]: [1677166643.3921] manager: startup complete
Feb 23 15:37:23 ip-10-0-136-68 NetworkManager[800]: [1677166643.3921] quitting now that startup is complete
Feb 23 15:37:23 ip-10-0-136-68 NetworkManager[800]: [1677166643.6483] dhcp4 (ens5): canceled DHCP transaction
Feb 23 15:37:23 ip-10-0-136-68 NetworkManager[800]: [1677166643.6483] dhcp4 (ens5): activation: beginning transaction (timeout in 90 seconds)
Feb 23 15:37:23 ip-10-0-136-68 NetworkManager[800]: [1677166643.6483] dhcp4 (ens5): state changed no lease
Feb 23 15:37:23 ip-10-0-136-68 NetworkManager[800]: [1677166643.6484] manager: NetworkManager state is now CONNECTED_SITE
Feb 23 15:37:23 ip-10-0-136-68 NetworkManager[800]: [1677166643.6484] exiting (success)
Feb 23 15:37:23 ip-10-0-136-68 systemd[1]: Started dracut initqueue hook.
Feb 23 15:37:23 ip-10-0-136-68 systemd[1]: Starting dracut pre-mount hook...
Feb 23 15:37:23 ip-10-0-136-68 systemd[1]: Reached target Remote File Systems (Pre).
Feb 23 15:37:23 ip-10-0-136-68 systemd[1]: Reached target Remote File Systems.
Feb 23 15:37:23 ip-10-0-136-68 systemd[1]: Started dracut pre-mount hook.
Feb 23 15:37:23 ip-10-0-136-68 ignition[771]: INFO : GET https://api-int.mnguyen-rt.devcluster.openshift.com:22623/config/worker: attempt #2
Feb 23 15:37:23 ip-10-0-136-68 ignition[771]: INFO : GET result: Internal Server Error
Feb 23 15:37:24 ip-10-0-136-68 ignition[771]: INFO : GET https://api-int.mnguyen-rt.devcluster.openshift.com:22623/config/worker: attempt #3
Feb 23 15:37:24 ip-10-0-136-68 ignition[771]: INFO : GET result: Internal Server Error
Feb 23 15:37:25 ip-10-0-136-68 ignition[771]: INFO : GET https://api-int.mnguyen-rt.devcluster.openshift.com:22623/config/worker: attempt #4
Feb 23 15:37:25 ip-10-0-136-68 ignition[771]: INFO : GET result: Internal Server Error
Feb 23 15:37:26 ip-10-0-136-68 ignition[771]: INFO : GET https://api-int.mnguyen-rt.devcluster.openshift.com:22623/config/worker: attempt #5
Feb 23 15:37:26 ip-10-0-136-68 ignition[771]: INFO : GET result: Internal Server Error
Feb 23 15:37:29 ip-10-0-136-68 ignition[771]: INFO : GET https://api-int.mnguyen-rt.devcluster.openshift.com:22623/config/worker: attempt #6
Feb 23 15:37:29 ip-10-0-136-68 ignition[771]: INFO : GET result: Internal Server Error
Feb 23 15:37:34 ip-10-0-136-68 ignition[771]: INFO : GET https://api-int.mnguyen-rt.devcluster.openshift.com:22623/config/worker: attempt #7
Feb 23 15:37:34 ip-10-0-136-68 ignition[771]: INFO : GET result: Internal Server Error
Feb 23 15:37:39 ip-10-0-136-68 ignition[771]: INFO : GET https://api-int.mnguyen-rt.devcluster.openshift.com:22623/config/worker: attempt #8
Feb 23 15:37:39 ip-10-0-136-68 ignition[771]: INFO : GET result: Internal Server Error
Feb 23 15:37:44 ip-10-0-136-68 ignition[771]: INFO : GET https://api-int.mnguyen-rt.devcluster.openshift.com:22623/config/worker: attempt #9
Feb 23 15:37:44 ip-10-0-136-68 ignition[771]: INFO : GET result: Internal Server Error
Feb 23 15:37:49 ip-10-0-136-68 ignition[771]: INFO : GET https://api-int.mnguyen-rt.devcluster.openshift.com:22623/config/worker: attempt #10
Feb 23 15:37:49 ip-10-0-136-68 ignition[771]: INFO : GET result: Internal Server Error
Feb 23 15:37:54 ip-10-0-136-68 ignition[771]: INFO : GET https://api-int.mnguyen-rt.devcluster.openshift.com:22623/config/worker: attempt #11
Feb 23 15:37:54 ip-10-0-136-68 ignition[771]: INFO : GET result: Internal Server Error
Feb 23 15:37:59 ip-10-0-136-68 ignition[771]: INFO : GET https://api-int.mnguyen-rt.devcluster.openshift.com:22623/config/worker: attempt #12
Feb 23 15:37:59 ip-10-0-136-68 ignition[771]: INFO : GET result: Internal Server Error
Feb 23 15:38:04 ip-10-0-136-68 ignition[771]: INFO : GET https://api-int.mnguyen-rt.devcluster.openshift.com:22623/config/worker: attempt #13
Feb 23 15:38:04 ip-10-0-136-68 ignition[771]: INFO : GET result: Internal Server Error
Feb 23 15:38:09 ip-10-0-136-68 ignition[771]: INFO : GET https://api-int.mnguyen-rt.devcluster.openshift.com:22623/config/worker: attempt #14
Feb 23 15:38:09 ip-10-0-136-68 ignition[771]: INFO : GET result: Internal Server Error
Feb 23 15:38:14 ip-10-0-136-68 ignition[771]: INFO : GET https://api-int.mnguyen-rt.devcluster.openshift.com:22623/config/worker: attempt #15
Feb 23 15:38:14 ip-10-0-136-68 ignition[771]: INFO : GET result: Internal Server Error
Feb 23 15:38:19 ip-10-0-136-68 ignition[771]: INFO : GET https://api-int.mnguyen-rt.devcluster.openshift.com:22623/config/worker: attempt #16
Feb 23 15:38:19 ip-10-0-136-68 ignition[771]: INFO : GET result: Internal Server Error
Feb 23 15:38:24 ip-10-0-136-68 ignition[771]: INFO : GET https://api-int.mnguyen-rt.devcluster.openshift.com:22623/config/worker: attempt #17
Feb 23 15:38:24 ip-10-0-136-68 ignition[771]: INFO : GET result: Internal Server Error
Feb 23 15:38:29 ip-10-0-136-68 ignition[771]: INFO : GET https://api-int.mnguyen-rt.devcluster.openshift.com:22623/config/worker: attempt #18
Feb 23 15:38:29 ip-10-0-136-68 ignition[771]: INFO : GET result: Internal Server Error
Feb 23 15:38:34 ip-10-0-136-68 ignition[771]: INFO : GET https://api-int.mnguyen-rt.devcluster.openshift.com:22623/config/worker: attempt #19
Feb 23 15:38:34 ip-10-0-136-68 ignition[771]: INFO : GET result: Internal Server Error
Feb 23 15:38:39 ip-10-0-136-68 ignition[771]: INFO : GET https://api-int.mnguyen-rt.devcluster.openshift.com:22623/config/worker: attempt #20
Feb 23 15:38:40 ip-10-0-136-68 ignition[771]: INFO : GET result: Internal Server Error
Feb 23 15:38:44 ip-10-0-136-68 ignition[771]: INFO : GET https://api-int.mnguyen-rt.devcluster.openshift.com:22623/config/worker: attempt #21
Feb 23 15:38:45 ip-10-0-136-68 ignition[771]: INFO : GET result: Internal Server Error
Feb 23 15:38:50 ip-10-0-136-68 ignition[771]: INFO : GET https://api-int.mnguyen-rt.devcluster.openshift.com:22623/config/worker: attempt #22
Feb 23 15:38:50 ip-10-0-136-68 ignition[771]: INFO : GET result: Internal Server Error
Feb 23 15:38:55 ip-10-0-136-68 ignition[771]: INFO : GET https://api-int.mnguyen-rt.devcluster.openshift.com:22623/config/worker: attempt #23
Feb 23 15:38:55 ip-10-0-136-68 ignition[771]: INFO : GET result: Internal Server Error
Feb 23 15:39:00 ip-10-0-136-68 ignition[771]: INFO : GET https://api-int.mnguyen-rt.devcluster.openshift.com:22623/config/worker: attempt #24
Feb 23 15:39:00 ip-10-0-136-68 ignition[771]: INFO : GET result: Internal Server Error
Feb 23 15:39:05 ip-10-0-136-68 ignition[771]: INFO : GET https://api-int.mnguyen-rt.devcluster.openshift.com:22623/config/worker: attempt #25
Feb 23 15:39:05 ip-10-0-136-68 ignition[771]: INFO : GET result: Internal Server Error
Feb 23 15:39:10 ip-10-0-136-68 ignition[771]: INFO : GET https://api-int.mnguyen-rt.devcluster.openshift.com:22623/config/worker: attempt #26
Feb 23 15:39:10 ip-10-0-136-68 ignition[771]: INFO : GET result: Internal Server Error
Feb 23 15:39:15 ip-10-0-136-68 ignition[771]: INFO : GET https://api-int.mnguyen-rt.devcluster.openshift.com:22623/config/worker: attempt #27
Feb 23 15:39:15 ip-10-0-136-68 ignition[771]: INFO : GET result: Internal Server Error
Feb 23 15:39:20 ip-10-0-136-68 ignition[771]: INFO : GET https://api-int.mnguyen-rt.devcluster.openshift.com:22623/config/worker: attempt #28
Feb 23 15:39:20 ip-10-0-136-68 ignition[771]: INFO : GET result: Internal Server Error
Feb 23 15:39:25 ip-10-0-136-68 ignition[771]: INFO : GET https://api-int.mnguyen-rt.devcluster.openshift.com:22623/config/worker: attempt #29
Feb 23 15:39:25 ip-10-0-136-68 ignition[771]: INFO : GET result: Internal Server Error
Feb 23 15:39:30 ip-10-0-136-68 ignition[771]: INFO : GET https://api-int.mnguyen-rt.devcluster.openshift.com:22623/config/worker: attempt #30
Feb 23 15:39:30 ip-10-0-136-68 ignition[771]: INFO : GET result: Internal Server Error
Feb 23 15:39:35 ip-10-0-136-68 ignition[771]: INFO : GET https://api-int.mnguyen-rt.devcluster.openshift.com:22623/config/worker: attempt #31
Feb 23 15:39:35 ip-10-0-136-68 ignition[771]: INFO : GET result: Internal Server Error
Feb 23 15:39:40 ip-10-0-136-68 ignition[771]: INFO : GET https://api-int.mnguyen-rt.devcluster.openshift.com:22623/config/worker: attempt #32
Feb 23 15:39:40 ip-10-0-136-68 ignition[771]: INFO : GET result: Internal Server Error
Feb 23 15:39:45 ip-10-0-136-68 ignition[771]: INFO : GET https://api-int.mnguyen-rt.devcluster.openshift.com:22623/config/worker: attempt #33
Feb 23 15:39:45 ip-10-0-136-68 ignition[771]: INFO : GET result: Internal Server Error
Feb 23 15:39:50 ip-10-0-136-68 ignition[771]: INFO : GET https://api-int.mnguyen-rt.devcluster.openshift.com:22623/config/worker: attempt #34
Feb 23 15:39:50 ip-10-0-136-68 ignition[771]: INFO : GET result: Internal Server Error
Feb 23 15:39:55 ip-10-0-136-68 ignition[771]: INFO : GET https://api-int.mnguyen-rt.devcluster.openshift.com:22623/config/worker: attempt #35
Feb 23 15:39:55 ip-10-0-136-68 ignition[771]: INFO : GET result: Internal Server Error
Feb 23 15:40:00 ip-10-0-136-68 ignition[771]: INFO : GET https://api-int.mnguyen-rt.devcluster.openshift.com:22623/config/worker: attempt #36
Feb 23 15:40:00 ip-10-0-136-68 ignition[771]: INFO : GET result: Internal Server Error
Feb 23 15:40:05 ip-10-0-136-68 ignition[771]: INFO : GET https://api-int.mnguyen-rt.devcluster.openshift.com:22623/config/worker: attempt #37
Feb 23 15:40:05 ip-10-0-136-68 ignition[771]: INFO : GET result: Internal Server Error
Feb 23 15:40:10 ip-10-0-136-68 ignition[771]: INFO : GET https://api-int.mnguyen-rt.devcluster.openshift.com:22623/config/worker: attempt #38
Feb 23 15:40:10 ip-10-0-136-68 ignition[771]: INFO : GET result: Internal Server Error
Feb 23 15:40:15 ip-10-0-136-68 ignition[771]: INFO : GET https://api-int.mnguyen-rt.devcluster.openshift.com:22623/config/worker: attempt #39
Feb 23 15:40:15 ip-10-0-136-68 ignition[771]: INFO : GET result: Internal Server Error
Feb 23 15:40:20 ip-10-0-136-68 ignition[771]: INFO : GET https://api-int.mnguyen-rt.devcluster.openshift.com:22623/config/worker: attempt #40
Feb 23 15:40:20 ip-10-0-136-68 ignition[771]: INFO : GET result: Internal Server Error
Feb 23 15:40:25 ip-10-0-136-68 ignition[771]: INFO : GET https://api-int.mnguyen-rt.devcluster.openshift.com:22623/config/worker: attempt #41
Feb 23 15:40:25 ip-10-0-136-68 ignition[771]: INFO : GET result: Internal Server Error
Feb 23 15:40:30 ip-10-0-136-68 ignition[771]: INFO : GET https://api-int.mnguyen-rt.devcluster.openshift.com:22623/config/worker: attempt #42
Feb 23 15:40:30 ip-10-0-136-68 ignition[771]: INFO : GET result: Internal Server Error
Feb 23 15:40:35 ip-10-0-136-68 ignition[771]: INFO : GET https://api-int.mnguyen-rt.devcluster.openshift.com:22623/config/worker: attempt #43
Feb 23 15:40:35 ip-10-0-136-68 ignition[771]: INFO : GET result: Internal Server Error
Feb 23 15:40:40 ip-10-0-136-68 ignition[771]: INFO : GET https://api-int.mnguyen-rt.devcluster.openshift.com:22623/config/worker: attempt #44
Feb 23 15:40:40 ip-10-0-136-68 ignition[771]: INFO : GET result: Internal Server Error
Feb 23 15:40:45 ip-10-0-136-68 ignition[771]: INFO : GET https://api-int.mnguyen-rt.devcluster.openshift.com:22623/config/worker: attempt #45
Feb 23 15:40:45 ip-10-0-136-68 ignition[771]: INFO : GET result: OK
Feb 23 15:40:45 ip-10-0-136-68 ignition[771]: fetched referenced config at https://api-int.mnguyen-rt.devcluster.openshift.com:22623/config/worker with SHA512: e6511f58e8269b89cc1d8adc8562d92c9f7bcbfd7239e839b5fcdac3e83f5187b880a83da2631f3c4f57bba07ee642738020a874ac4ec447c254214ca092a748
Feb 23 15:40:45 ip-10-0-136-68 ignition[771]: INFO : Adding "root-ca" to list of CAs
Feb 23 15:40:45 ip-10-0-136-68 ignition[771]: INFO : Adding "root-ca" to list of CAs
Feb 23 15:40:45 ip-10-0-136-68 ignition[771]: fetched base config from "system"
Feb 23 15:40:45 ip-10-0-136-68 ignition[771]: fetch: fetch complete
Feb 23 15:40:45 ip-10-0-136-68 ignition[771]: fetched base config from "system"
Feb 23 15:40:45 ip-10-0-136-68 ignition[771]: fetch: fetch passed
Feb 23 15:40:45 ip-10-0-136-68 ignition[771]: fetched user config from "aws"
Feb 23 15:40:45 ip-10-0-136-68 ignition[771]: Ignition finished successfully
Feb 23 15:40:45 ip-10-0-136-68 ignition[771]: fetched referenced user config from "/config/worker"
Feb 23 15:40:45 ip-10-0-136-68 systemd[1]: Started Ignition (fetch).
Feb 23 15:40:45 ip-10-0-136-68 systemd[1]: Starting Ignition OSTree: Detect Partition Transposition...
Feb 23 15:40:45 ip-10-0-136-68 systemd[1]: Starting RHCOS Check For Legacy LUKS Configuration...
Feb 23 15:40:45 ip-10-0-136-68 ignition[828]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 23 15:40:45 ip-10-0-136-68 ignition[828]: INFO : PUT result: OK
Feb 23 15:40:45 ip-10-0-136-68 ignition[828]: Ignition 2.14.0
Feb 23 15:40:45 ip-10-0-136-68 systemd[1]: Starting Ignition (kargs)...
Feb 23 15:40:45 ip-10-0-136-68 ignition[828]: Stage: kargs
Feb 23 15:40:45 ip-10-0-136-68 systemd[1]: Started RHCOS Check For Legacy LUKS Configuration.
Feb 23 15:40:45 ip-10-0-136-68 ignition[828]: reading system config file "/usr/lib/ignition/base.d/00-core.ign"
Feb 23 15:40:45 ip-10-0-136-68 ignition[828]: INFO : Adding "root-ca" to list of CAs
Feb 23 15:40:45 ip-10-0-136-68 ignition[828]: parsing config with SHA512: ff6a5153be363997e4d5d3ea8cc4048373a457c48c4a5b134a08a30aacd167c1e0f099f0bdf1e24c99ad180628cd02b767b863b5fe3a8fce3fe1886847eb8e2e
Feb 23 15:40:45 ip-10-0-136-68 ignition[828]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 23 15:40:45 ip-10-0-136-68 systemd[1]: Started Ignition (kargs).
Feb 23 15:40:45 ip-10-0-136-68 ignition[828]: kargs: kargs passed
Feb 23 15:40:45 ip-10-0-136-68 systemd[1]: Starting Check for FIPS mode...
Feb 23 15:40:45 ip-10-0-136-68 ignition[828]: Ignition finished successfully
Feb 23 15:40:45 ip-10-0-136-68 systemd[1]: Started Ignition OSTree: Detect Partition Transposition.
Feb 23 15:40:45 ip-10-0-136-68 rhcos-fips[842]: Found /etc/ignition-machine-config-encapsulated.json in Ignition config
Feb 23 15:40:45 ip-10-0-136-68 rhcos-fips[842]: FIPS mode not requested
Feb 23 15:40:45 ip-10-0-136-68 systemd[1]: Started Check for FIPS mode.
Feb 23 15:40:45 ip-10-0-136-68 systemd[1]: Starting Ignition (disks)...
Feb 23 15:40:45 ip-10-0-136-68 ignition[870]: Ignition 2.14.0
Feb 23 15:40:45 ip-10-0-136-68 ignition[870]: Stage: disks
Feb 23 15:40:45 ip-10-0-136-68 ignition[870]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 23 15:40:45 ip-10-0-136-68 ignition[870]: INFO : PUT result: OK
Feb 23 15:40:45 ip-10-0-136-68 ignition[870]: reading system config file "/usr/lib/ignition/base.d/00-core.ign"
Feb 23 15:40:45 ip-10-0-136-68 ignition[870]: INFO : Adding "root-ca" to list of CAs
Feb 23 15:40:45 ip-10-0-136-68 systemd[1]: Started Ignition (disks).
Feb 23 15:40:45 ip-10-0-136-68 ignition[870]: parsing config with SHA512: ff6a5153be363997e4d5d3ea8cc4048373a457c48c4a5b134a08a30aacd167c1e0f099f0bdf1e24c99ad180628cd02b767b863b5fe3a8fce3fe1886847eb8e2e
Feb 23 15:40:45 ip-10-0-136-68 systemd[1]: Starting CoreOS Ensure Unique Boot Filesystem...
Feb 23 15:40:45 ip-10-0-136-68 ignition[870]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 23 15:40:45 ip-10-0-136-68 systemd[1]: Reached target Initrd Root Device.
Feb 23 15:40:45 ip-10-0-136-68 ignition[870]: disks: disks passed
Feb 23 15:40:45 ip-10-0-136-68 ignition[870]: Ignition finished successfully
Feb 23 15:40:45 ip-10-0-136-68 systemd-journald[298]: Missed 192 kernel messages
Feb 23 15:40:45 ip-10-0-136-68 kernel: nvme0n1: p1 p2 p3 p4
Feb 23 15:40:45 ip-10-0-136-68 systemd[1]: Started CoreOS Ensure Unique Boot Filesystem.
Feb 23 15:40:45 ip-10-0-136-68 systemd[1]: Starting Ignition OSTree: Regenerate Filesystem UUID (root)...
Feb 23 15:40:45 ip-10-0-136-68 ignition-ostree-firstboot-uuid[919]: Clearing log and setting UUID
Feb 23 15:40:45 ip-10-0-136-68 ignition-ostree-firstboot-uuid[919]: writing all SBs
Feb 23 15:40:45 ip-10-0-136-68 ignition-ostree-firstboot-uuid[919]: new UUID = c83680a9-dcc4-4413-a0a5-4681b35c650a
Feb 23 15:40:45 ip-10-0-136-68 ignition-ostree-firstboot-uuid[916]: Regenerated UUID for /dev/disk/by-label/root
Feb 23 15:40:45 ip-10-0-136-68 systemd[1]: Started Ignition OSTree: Regenerate Filesystem UUID (root).
Feb 23 15:40:45 ip-10-0-136-68 systemd[1]: Starting Ignition OSTree: Mount (firstboot) /sysroot...
Feb 23 15:40:46 ip-10-0-136-68 ignition-ostree-mount-sysroot[923]: Mounting /dev/disk/by-label/root (/dev/nvme0n1p4) to /sysroot
Feb 23 15:40:46 ip-10-0-136-68 systemd-journald[298]: Missed 9 kernel messages
Feb 23 15:40:46 ip-10-0-136-68 kernel: SGI XFS with ACLs, security attributes, quota, no debug enabled
Feb 23 15:40:46 ip-10-0-136-68 kernel: XFS (nvme0n1p4): Mounting V5 Filesystem
Feb 23 15:40:46 ip-10-0-136-68 kernel: XFS (nvme0n1p4): Ending clean mount
Feb 23 15:40:46 ip-10-0-136-68 kernel: XFS (nvme0n1p4): Quotacheck needed: Please wait.
Feb 23 15:40:46 ip-10-0-136-68 kernel: XFS (nvme0n1p4): Quotacheck: Done.
Feb 23 15:40:46 ip-10-0-136-68 systemd[1]: Started Ignition OSTree: Mount (firstboot) /sysroot.
Feb 23 15:40:46 ip-10-0-136-68 systemd[1]: Starting Ignition OSTree: Grow root filesystem...
Feb 23 15:40:46 ip-10-0-136-68 ignition-ostree-growfs[961]: CHANGED: partition=4 start=1050624 old: size=7761920 end=8812544 new: size=250607583 end=251658207
Feb 23 15:40:46 ip-10-0-136-68 ignition-ostree-growfs[1009]: meta-data=/dev/nvme0n1p4 isize=512 agcount=4, agsize=242560 blks
Feb 23 15:40:46 ip-10-0-136-68 ignition-ostree-growfs[1009]: = sectsz=512 attr=2, projid32bit=1
Feb 23 15:40:46 ip-10-0-136-68 ignition-ostree-growfs[1009]: = crc=1 finobt=1, sparse=1, rmapbt=0
Feb 23 15:40:46 ip-10-0-136-68 ignition-ostree-growfs[1009]: = reflink=1 bigtime=0 inobtcount=0
Feb 23 15:40:46 ip-10-0-136-68 ignition-ostree-growfs[1009]: data = bsize=4096 blocks=970240, imaxpct=25
Feb 23 15:40:46 ip-10-0-136-68 ignition-ostree-growfs[1009]: = sunit=0 swidth=0 blks
Feb 23 15:40:46 ip-10-0-136-68 ignition-ostree-growfs[1009]: naming =version 2 bsize=4096 ascii-ci=0, ftype=1
Feb 23 15:40:46 ip-10-0-136-68 ignition-ostree-growfs[1009]: log =internal log bsize=4096 blocks=2560, version=2
Feb 23 15:40:46 ip-10-0-136-68 ignition-ostree-growfs[1009]: = sectsz=512 sunit=0 blks, lazy-count=1
Feb 23 15:40:46 ip-10-0-136-68 ignition-ostree-growfs[1009]: realtime =none extsz=4096 blocks=0, rtextents=0
Feb 23 15:40:46 ip-10-0-136-68 ignition-ostree-growfs[1009]: data blocks changed from 970240 to 31325947
Feb 23 15:40:46 ip-10-0-136-68 systemd[1]: Started Ignition OSTree: Grow root filesystem.
Feb 23 15:40:46 ip-10-0-136-68 systemd[1]: Starting OSTree Prepare OS/...
Feb 23 15:40:46 ip-10-0-136-68 ostree-prepare-root[1023]: preparing sysroot at /sysroot
Feb 23 15:40:46 ip-10-0-136-68 ostree-prepare-root[1023]: Resolved OSTree target to: /sysroot/ostree/deploy/rhcos/deploy/db83d20cf09a263777fcca78594b16da00af8acc245d29cc2a1344abc3f0dac2.0
Feb 23 15:40:47 ip-10-0-136-68 ignition-ostree-mount-var[1033]: Mounting /sysroot/sysroot/ostree/deploy/rhcos/var
Feb 23 15:40:46 ip-10-0-136-68 ostree-prepare-root[1023]: filesystem at /sysroot currently writable: 1
Feb 23 15:40:47 ip-10-0-136-68 ignition[1041]: INFO : Ignition 2.14.0
Feb 23 15:40:47 ip-10-0-136-68 ignition[1041]: INFO : Stage: mount
Feb 23 15:40:47 ip-10-0-136-68 ignition[1041]: INFO : reading system config file "/usr/lib/ignition/base.d/00-core.ign"
Feb 23 15:40:47 ip-10-0-136-68 ignition[1041]: DEBUG : parsing config with SHA512: ff6a5153be363997e4d5d3ea8cc4048373a457c48c4a5b134a08a30aacd167c1e0f099f0bdf1e24c99ad180628cd02b767b863b5fe3a8fce3fe1886847eb8e2e
Feb 23 15:40:47 ip-10-0-136-68 ignition[1041]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 23 15:40:47 ip-10-0-136-68 ignition[1041]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 23 15:40:46 ip-10-0-136-68 ostree-prepare-root[1023]: sysroot.readonly configuration value: 1
Feb 23 15:40:47 ip-10-0-136-68 ignition[1041]: INFO : PUT result: OK
Feb 23 15:40:47 ip-10-0-136-68 ignition[1041]: INFO : Adding "root-ca" to list of CAs
Feb 23 15:40:47 ip-10-0-136-68 ignition[1041]: INFO : mount: mount passed
Feb 23 15:40:47 ip-10-0-136-68 ignition[1041]: INFO : Ignition finished successfully
Feb 23 15:40:46 ip-10-0-136-68 systemd[1]: sysroot-ostree-deploy-rhcos-deploy-db83d20cf09a263777fcca78594b16da00af8acc245d29cc2a1344abc3f0dac2.0.mount: Succeeded.
Feb 23 15:40:46 ip-10-0-136-68 systemd[1]: Started OSTree Prepare OS/.
Feb 23 15:40:46 ip-10-0-136-68 systemd[1]: Starting Ignition OSTree: Check Root Filesystem Size...
Feb 23 15:40:46 ip-10-0-136-68 systemd[1]: Reached target Initrd Root File System.
Feb 23 15:40:46 ip-10-0-136-68 systemd[1]: Starting Mount OSTree /var...
Feb 23 15:40:47 ip-10-0-136-68 systemd[1]: Started Ignition OSTree: Check Root Filesystem Size.
Feb 23 15:40:47 ip-10-0-136-68 systemd[1]: Started Mount OSTree /var.
Feb 23 15:40:47 ip-10-0-136-68 systemd[1]: Starting Ignition (mount)...
Feb 23 15:40:47 ip-10-0-136-68 systemd[1]: Started Ignition (mount).
Feb 23 15:40:47 ip-10-0-136-68 systemd[1]: Starting Populate OSTree /var...
Feb 23 15:40:47 ip-10-0-136-68 ignition-ostree-populate-var[1052]: Relabeled /sysroot//var/home from (null) to system_u:object_r:home_root_t:s0
Feb 23 15:40:47 ip-10-0-136-68 ignition-ostree-populate-var[1056]: Relabeled /sysroot//var/roothome from (null) to system_u:object_r:admin_home_t:s0
Feb 23 15:40:47 ip-10-0-136-68 ignition-ostree-populate-var[1056]: Relabeled /sysroot//var/roothome/.bash_logout from (null) to system_u:object_r:admin_home_t:s0
Feb 23 15:40:47 ip-10-0-136-68 ignition-ostree-populate-var[1056]: Relabeled /sysroot//var/roothome/.bash_profile from (null) to system_u:object_r:admin_home_t:s0
Feb 23 15:40:47 ip-10-0-136-68 ignition-ostree-populate-var[1056]: Relabeled /sysroot//var/roothome/.bashrc from (null) to system_u:object_r:admin_home_t:s0
Feb 23 15:40:47 ip-10-0-136-68 ignition-ostree-populate-var[1059]: Relabeled /sysroot//var/opt from (null) to system_u:object_r:var_t:s0
Feb 23 15:40:47 ip-10-0-136-68 ignition-ostree-populate-var[1062]: Relabeled /sysroot//var/srv from (null) to system_u:object_r:var_t:s0
Feb 23 15:40:47 ip-10-0-136-68 ignition-ostree-populate-var[1065]: Relabeled /sysroot//var/usrlocal from (null) to system_u:object_r:usr_t:s0
Feb 23 15:40:47 ip-10-0-136-68 ignition-ostree-populate-var[1065]: Relabeled /sysroot//var/usrlocal/bin from (null) to system_u:object_r:bin_t:s0
Feb 23 15:40:47 ip-10-0-136-68 ignition-ostree-populate-var[1065]: Relabeled /sysroot//var/usrlocal/etc from (null) to system_u:object_r:usr_t:s0
Feb 23 15:40:47 ip-10-0-136-68 ignition-ostree-populate-var[1065]: Relabeled /sysroot//var/usrlocal/games from (null) to system_u:object_r:usr_t:s0
Feb 23 15:40:47 ip-10-0-136-68 ignition-ostree-populate-var[1065]: Relabeled /sysroot//var/usrlocal/include from (null) to system_u:object_r:usr_t:s0
Feb 23 15:40:47 ip-10-0-136-68 ignition-ostree-populate-var[1065]: Relabeled /sysroot//var/usrlocal/lib from (null) to system_u:object_r:lib_t:s0
Feb 23 15:40:47 ip-10-0-136-68 ignition-ostree-populate-var[1065]: Relabeled /sysroot//var/usrlocal/man from (null) to system_u:object_r:usr_t:s0
Feb 23 15:40:47 ip-10-0-136-68 ignition-ostree-populate-var[1065]: Relabeled /sysroot//var/usrlocal/sbin from (null) to system_u:object_r:bin_t:s0
Feb 23 15:40:47 ip-10-0-136-68 ignition-ostree-populate-var[1065]: Relabeled /sysroot//var/usrlocal/share from (null) to system_u:object_r:usr_t:s0
Feb 23 15:40:47 ip-10-0-136-68 ignition-ostree-populate-var[1065]: Relabeled /sysroot//var/usrlocal/src from (null) to system_u:object_r:usr_t:s0
Feb 23 15:40:48 ip-10-0-136-68 ignition-ostree-populate-var[1068]: Relabeled /sysroot//var/mnt from (null) to system_u:object_r:var_t:s0
Feb 23 15:40:48 ip-10-0-136-68 systemd[1]: Started Populate OSTree /var.
Feb 23 15:40:48 ip-10-0-136-68 systemd[1]: Starting Ignition (files)...
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : Ignition 2.14.0
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : Stage: files
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : reading system config file "/usr/lib/ignition/base.d/00-core.ign"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: DEBUG : parsing config with SHA512: ff6a5153be363997e4d5d3ea8cc4048373a457c48c4a5b134a08a30aacd167c1e0f099f0bdf1e24c99ad180628cd02b767b863b5fe3a8fce3fe1886847eb8e2e
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : PUT result: OK
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : Adding "root-ca" to list of CAs
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: DEBUG : files: ensureUsers: op(1): executing: "useradd" "--root" "/sysroot" "--create-home" "--password" "*" "--comment" "CoreOS Admin" "--groups" "adm,sudo,systemd-journal,wheel" "core"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/node-sizing-enabled.env"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/node-sizing-enabled.env"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/ignition-machine-config-encapsulated.json"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/ignition-machine-config-encapsulated.json"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/mcs-machine-config-content.json"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/mcs-machine-config-content.json"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/mco/proxy.env"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: wrote ssh authorized keys file for user: core
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/mco/proxy.env"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/etc/modules-load.d/iptables.conf"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/etc/modules-load.d/iptables.conf"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/machine-config-daemon/node-annotations.json"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/machine-config-daemon/node-annotations.json"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/kubernetes/kubelet.conf"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/kubernetes/kubelet.conf"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/containers/storage.conf"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/containers/storage.conf"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/kubernetes/ca.crt"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/kubernetes/ca.crt"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/kubernetes/cloud.conf"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/kubernetes/cloud.conf"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/etc/tmpfiles.d/cleanup-cni.conf"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/etc/tmpfiles.d/cleanup-cni.conf"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/etc/kubernetes/kubeconfig"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/etc/kubernetes/kubeconfig"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/containers/policy.json"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/containers/policy.json"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/kubernetes/kubelet-ca.crt"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/kubernetes/kubelet-ca.crt"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/etc/containers/registries.conf"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/etc/containers/registries.conf"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(12): [started] writing file "/sysroot/etc/sysctl.d/inotify.conf"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(12): [finished] writing file "/sysroot/etc/sysctl.d/inotify.conf"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/etc/sysctl.d/forward.conf"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/etc/sysctl.d/forward.conf"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(14): [started] writing file "/sysroot/etc/systemd/system.conf.d/10-default-env-godebug.conf"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(14): [finished] writing file "/sysroot/etc/systemd/system.conf.d/10-default-env-godebug.conf"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(15): [started] writing file "/sysroot/etc/NetworkManager/dispatcher.d/99-vsphere-disable-tx-udp-tnl"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(15): [finished] writing file "/sysroot/etc/NetworkManager/dispatcher.d/99-vsphere-disable-tx-udp-tnl"
Feb 23 15:40:48 ip-10-0-136-68 systemd-journald[298]: Missed 106 kernel messages
Feb 23 15:40:48 ip-10-0-136-68 kernel: EXT4-fs (nvme0n1p3): mounted filesystem with ordered data mode. Opts: (null)
Feb 23 15:40:48 ip-10-0-136-68 systemd[1]: Started Ignition (files).
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(16): [started] writing file "/sysroot/etc/NetworkManager/conf.d/20-keyfiles.conf"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(16): [finished] writing file "/sysroot/etc/NetworkManager/conf.d/20-keyfiles.conf"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(17): [started] writing file "/sysroot/etc/NetworkManager/conf.d/sdn.conf"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(17): [finished] writing file "/sysroot/etc/NetworkManager/conf.d/sdn.conf"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(18): [started] writing file "/sysroot/etc/audit/rules.d/mco-audit-quiet-containers.rules"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(18): [finished] writing file "/sysroot/etc/audit/rules.d/mco-audit-quiet-containers.rules"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(19): [started] writing file "/sysroot/var/usrlocal/bin/aws-kubelet-nodename"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(19): [finished] writing file "/sysroot/var/usrlocal/bin/aws-kubelet-nodename"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(1a): [started] writing file "/sysroot/var/usrlocal/bin/aws-kubelet-providerid"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(1a): [finished] writing file "/sysroot/var/usrlocal/bin/aws-kubelet-providerid"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(1b): [started] writing file "/sysroot/var/usrlocal/bin/mco-hostname"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(1b): [finished] writing file "/sysroot/var/usrlocal/bin/mco-hostname"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(1c): [started] writing file "/sysroot/var/usrlocal/bin/kubenswrapper"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(1c): [finished] writing file "/sysroot/var/usrlocal/bin/kubenswrapper"
Feb 23 15:40:48 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(1d): [started] writing file "/sysroot/var/lib/kubelet/config.json"
Feb 23 15:40:48 ip-10-0-136-68 systemd[1]: Starting CoreOS Post Ignition Checks...
Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(1d): [finished] writing file "/sysroot/var/lib/kubelet/config.json"
Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(1e): [started] writing file "/sysroot/etc/systemd/system.conf.d/kubelet-cgroups.conf"
Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(1e): [finished] writing file "/sysroot/etc/systemd/system.conf.d/kubelet-cgroups.conf"
Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(1f): [started] writing file "/sysroot/etc/crio/crio.conf.d/00-default"
Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(1f): [finished] writing file "/sysroot/etc/crio/crio.conf.d/00-default"
Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(20): [started] writing file "/sysroot/var/usrlocal/sbin/dynamic-system-reserved-calc.sh"
Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(20): [finished] writing file "/sysroot/var/usrlocal/sbin/dynamic-system-reserved-calc.sh"
Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(21): [started] writing file "/sysroot/var/usrlocal/bin/nm-clean-initrd-state.sh"
Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(21): [finished] writing file "/sysroot/var/usrlocal/bin/nm-clean-initrd-state.sh"
Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(22): [started] writing file "/sysroot/var/usrlocal/bin/configure-ovs.sh"
Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(22): [finished] writing file "/sysroot/var/usrlocal/bin/configure-ovs.sh"
Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(23): [started] writing file "/sysroot/etc/systemd/system/kubelet.service.d/20-logging.conf"
Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(23): [finished] writing file "/sysroot/etc/systemd/system/kubelet.service.d/20-logging.conf"
Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(24): [started] writing file "/sysroot/etc/NetworkManager/dispatcher.d/pre-up.d/10-ofport-request.sh"
Feb 23 15:40:48 ip-10-0-136-68 systemd[1]: Starting CoreOS Boot Edit...
Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(24): [finished] writing file "/sysroot/etc/NetworkManager/dispatcher.d/pre-up.d/10-ofport-request.sh"
Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(25): [started] writing file "/sysroot/etc/kubernetes/kubelet-plugins/volume/exec/.dummy"
Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(25): [finished] writing file "/sysroot/etc/kubernetes/kubelet-plugins/volume/exec/.dummy"
Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(26): [started] writing file "/sysroot/etc/kubernetes/static-pod-resources/configmaps/cloud-config/ca-bundle.pem"
Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(26): [finished] writing file "/sysroot/etc/kubernetes/static-pod-resources/configmaps/cloud-config/ca-bundle.pem"
Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(27): [started] writing file "/sysroot/etc/pki/ca-trust/source/anchors/openshift-config-user-ca-bundle.crt"
Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(27): [finished] writing file "/sysroot/etc/pki/ca-trust/source/anchors/openshift-config-user-ca-bundle.crt"
Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: op(28): [started] processing unit "NetworkManager-clean-initrd-state.service"
Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: op(28): op(29): [started] writing unit "NetworkManager-clean-initrd-state.service" at "/sysroot/etc/systemd/system/NetworkManager-clean-initrd-state.service"
Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: op(28): op(29): [finished] writing unit "NetworkManager-clean-initrd-state.service" at "/sysroot/etc/systemd/system/NetworkManager-clean-initrd-state.service"
Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: op(28): [finished] processing unit "NetworkManager-clean-initrd-state.service"
Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: op(2a): [started] processing unit "aws-kubelet-nodename.service"
Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: op(2a): op(2b): [started] writing unit "aws-kubelet-nodename.service" at "/sysroot/etc/systemd/system/aws-kubelet-nodename.service"
Feb 23 15:40:49 ip-10-0-136-68 coreos-boot-edit[1109]: Injected kernel arguments into BLS: root=UUID=c83680a9-dcc4-4413-a0a5-4681b35c650a rw rootflags=prjquota
Feb 23 15:40:48 ip-10-0-136-68 systemd[1]: Started CoreOS Post Ignition Checks.
Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: op(2a): op(2b): [finished] writing unit "aws-kubelet-nodename.service" at "/sysroot/etc/systemd/system/aws-kubelet-nodename.service" Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: op(2a): [finished] processing unit "aws-kubelet-nodename.service" Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: op(2c): [started] processing unit "aws-kubelet-providerid.service" Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: op(2c): op(2d): [started] writing unit "aws-kubelet-providerid.service" at "/sysroot/etc/systemd/system/aws-kubelet-providerid.service" Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: op(2c): op(2d): [finished] writing unit "aws-kubelet-providerid.service" at "/sysroot/etc/systemd/system/aws-kubelet-providerid.service" Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: op(2c): [finished] processing unit "aws-kubelet-providerid.service" Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: op(2e): [started] processing unit "crio.service" Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: op(2e): op(2f): [started] writing systemd drop-in "01-kubens.conf" at "/sysroot/etc/systemd/system/crio.service.d/01-kubens.conf" Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: op(2e): op(2f): [finished] writing systemd drop-in "01-kubens.conf" at "/sysroot/etc/systemd/system/crio.service.d/01-kubens.conf" Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: op(2e): op(30): [started] writing systemd drop-in "10-mco-default-env.conf" at "/sysroot/etc/systemd/system/crio.service.d/10-mco-default-env.conf" Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: op(2e): op(30): [finished] writing systemd drop-in "10-mco-default-env.conf" at "/sysroot/etc/systemd/system/crio.service.d/10-mco-default-env.conf" Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: op(2e): op(31): [started] writing systemd drop-in 
"10-mco-profile-unix-socket.conf" at "/sysroot/etc/systemd/system/crio.service.d/10-mco-profile-unix-socket.conf" Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: op(2e): op(31): [finished] writing systemd drop-in "10-mco-profile-unix-socket.conf" at "/sysroot/etc/systemd/system/crio.service.d/10-mco-profile-unix-socket.conf" Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: op(2e): op(32): [started] writing systemd drop-in "10-mco-default-madv.conf" at "/sysroot/etc/systemd/system/crio.service.d/10-mco-default-madv.conf" Feb 23 15:40:49 ip-10-0-136-68 systemd[1]: Started CoreOS Boot Edit. Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: op(2e): op(32): [finished] writing systemd drop-in "10-mco-default-madv.conf" at "/sysroot/etc/systemd/system/crio.service.d/10-mco-default-madv.conf" Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: op(2e): [finished] processing unit "crio.service" Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: op(33): [started] processing unit "docker.socket" Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: op(33): op(34): [started] writing systemd drop-in "mco-disabled.conf" at "/sysroot/etc/systemd/system/docker.socket.d/mco-disabled.conf" Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: op(33): op(34): [finished] writing systemd drop-in "mco-disabled.conf" at "/sysroot/etc/systemd/system/docker.socket.d/mco-disabled.conf" Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: op(33): [finished] processing unit "docker.socket" Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: op(35): [started] processing unit "kubelet-auto-node-size.service" Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: op(35): op(36): [started] writing unit "kubelet-auto-node-size.service" at "/sysroot/etc/systemd/system/kubelet-auto-node-size.service" Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: op(35): op(36): [finished] writing unit 
"kubelet-auto-node-size.service" at "/sysroot/etc/systemd/system/kubelet-auto-node-size.service" Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: op(35): [finished] processing unit "kubelet-auto-node-size.service" Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: op(37): [started] processing unit "kubelet.service" Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: op(37): op(38): [started] writing systemd drop-in "01-kubens.conf" at "/sysroot/etc/systemd/system/kubelet.service.d/01-kubens.conf" Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: op(37): op(38): [finished] writing systemd drop-in "01-kubens.conf" at "/sysroot/etc/systemd/system/kubelet.service.d/01-kubens.conf" Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: op(37): op(39): [started] writing systemd drop-in "10-mco-default-env.conf" at "/sysroot/etc/systemd/system/kubelet.service.d/10-mco-default-env.conf" Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: op(37): op(39): [finished] writing systemd drop-in "10-mco-default-env.conf" at "/sysroot/etc/systemd/system/kubelet.service.d/10-mco-default-env.conf" Feb 23 15:40:49 ip-10-0-136-68 systemd[1]: Reached target Ignition Boot Disk Setup. 
Feb 23 15:40:49 ip-10-0-136-68 multipathd[601]: exit (signal) Feb 23 15:40:49 ip-10-0-136-68 multipathd[601]: --------shut down------- Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: op(37): op(3a): [started] writing systemd drop-in "10-mco-default-madv.conf" at "/sysroot/etc/systemd/system/kubelet.service.d/10-mco-default-madv.conf" Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: op(37): op(3a): [finished] writing systemd drop-in "10-mco-default-madv.conf" at "/sysroot/etc/systemd/system/kubelet.service.d/10-mco-default-madv.conf" Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: op(37): op(3b): [started] writing unit "kubelet.service" at "/sysroot/etc/systemd/system/kubelet.service" Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: op(37): op(3b): [finished] writing unit "kubelet.service" at "/sysroot/etc/systemd/system/kubelet.service" Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: op(37): [finished] processing unit "kubelet.service" Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: op(3c): [started] processing unit "kubens.service" Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: op(3c): op(3d): [started] writing unit "kubens.service" at "/sysroot/etc/systemd/system/kubens.service" Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: op(3c): op(3d): [finished] writing unit "kubens.service" at "/sysroot/etc/systemd/system/kubens.service" Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: op(3c): [finished] processing unit "kubens.service" Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: op(3e): [started] processing unit "machine-config-daemon-firstboot.service" Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: op(3e): op(3f): [started] writing unit "machine-config-daemon-firstboot.service" at "/sysroot/etc/systemd/system/machine-config-daemon-firstboot.service" Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: op(3e): op(3f): 
[finished] writing unit "machine-config-daemon-firstboot.service" at "/sysroot/etc/systemd/system/machine-config-daemon-firstboot.service" Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: op(3e): [finished] processing unit "machine-config-daemon-firstboot.service" Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: op(40): [started] processing unit "machine-config-daemon-pull.service" Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: op(40): op(41): [started] writing unit "machine-config-daemon-pull.service" at "/sysroot/etc/systemd/system/machine-config-daemon-pull.service" Feb 23 15:40:49 ip-10-0-136-68 ignition[1073]: INFO : files: op(40): op(41): [finished] writing unit "machine-config-daemon-pull.service" at "/sysroot/etc/systemd/system/machine-config-daemon-pull.service" Feb 23 15:40:49 ip-10-0-136-68 systemd[1]: Reached target Ignition Complete. Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(40): [finished] processing unit "machine-config-daemon-pull.service" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(42): [started] processing unit "node-valid-hostname.service" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(42): op(43): [started] writing unit "node-valid-hostname.service" at "/sysroot/etc/systemd/system/node-valid-hostname.service" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(42): op(43): [finished] writing unit "node-valid-hostname.service" at "/sysroot/etc/systemd/system/node-valid-hostname.service" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(42): [finished] processing unit "node-valid-hostname.service" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(44): [started] processing unit "nodeip-configuration.service" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(44): op(45): [started] writing unit "nodeip-configuration.service" at "/sysroot/etc/systemd/system/nodeip-configuration.service" Feb 23 15:40:50 
ip-10-0-136-68 ignition[1073]: INFO : files: op(44): op(45): [finished] writing unit "nodeip-configuration.service" at "/sysroot/etc/systemd/system/nodeip-configuration.service" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(44): [finished] processing unit "nodeip-configuration.service" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(46): [started] processing unit "openvswitch.service" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(46): [finished] processing unit "openvswitch.service" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(47): [started] processing unit "ovs-configuration.service" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(47): op(48): [started] writing unit "ovs-configuration.service" at "/sysroot/etc/systemd/system/ovs-configuration.service" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(47): op(48): [finished] writing unit "ovs-configuration.service" at "/sysroot/etc/systemd/system/ovs-configuration.service" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(47): [finished] processing unit "ovs-configuration.service" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(49): [started] processing unit "ovs-vswitchd.service" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(49): op(4a): [started] writing systemd drop-in "10-ovs-vswitchd-restart.conf" at "/sysroot/etc/systemd/system/ovs-vswitchd.service.d/10-ovs-vswitchd-restart.conf" Feb 23 15:40:49 ip-10-0-136-68 systemd[1]: Starting Reload Configuration from the Real Root... 
Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(49): op(4a): [finished] writing systemd drop-in "10-ovs-vswitchd-restart.conf" at "/sysroot/etc/systemd/system/ovs-vswitchd.service.d/10-ovs-vswitchd-restart.conf" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(49): [finished] processing unit "ovs-vswitchd.service" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(4b): [started] processing unit "ovsdb-server.service" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(4b): op(4c): [started] writing systemd drop-in "10-ovsdb-restart.conf" at "/sysroot/etc/systemd/system/ovsdb-server.service.d/10-ovsdb-restart.conf" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(4b): op(4c): [finished] writing systemd drop-in "10-ovsdb-restart.conf" at "/sysroot/etc/systemd/system/ovsdb-server.service.d/10-ovsdb-restart.conf" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(4b): [finished] processing unit "ovsdb-server.service" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(4d): [started] processing unit "pivot.service" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(4d): op(4e): [started] writing systemd drop-in "10-mco-default-env.conf" at "/sysroot/etc/systemd/system/pivot.service.d/10-mco-default-env.conf" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(4d): op(4e): [finished] writing systemd drop-in "10-mco-default-env.conf" at "/sysroot/etc/systemd/system/pivot.service.d/10-mco-default-env.conf" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(4d): [finished] processing unit "pivot.service" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(4f): [started] processing unit "rpm-ostreed.service" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(4f): op(50): [started] writing systemd drop-in "10-mco-default-env.conf" at "/sysroot/etc/systemd/system/rpm-ostreed.service.d/10-mco-default-env.conf" Feb 23 
15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(4f): op(50): [finished] writing systemd drop-in "10-mco-default-env.conf" at "/sysroot/etc/systemd/system/rpm-ostreed.service.d/10-mco-default-env.conf" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(4f): [finished] processing unit "rpm-ostreed.service" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(51): [started] processing unit "zincati.service" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(51): op(52): [started] writing systemd drop-in "mco-disabled.conf" at "/sysroot/etc/systemd/system/zincati.service.d/mco-disabled.conf" Feb 23 15:40:49 ip-10-0-136-68 systemd[1]: Reloading. Feb 23 15:40:50 ip-10-0-136-68 iscsid[604]: iscsid shutting down. Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(51): op(52): [finished] writing systemd drop-in "mco-disabled.conf" at "/sysroot/etc/systemd/system/zincati.service.d/mco-disabled.conf" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(51): [finished] processing unit "zincati.service" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(53): [started] setting preset to enabled for "machine-config-daemon-firstboot.service" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(53): [finished] setting preset to enabled for "machine-config-daemon-firstboot.service" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(54): [started] setting preset to disabled for "nodeip-configuration.service" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(54): op(55): [started] removing enablement symlink(s) for "nodeip-configuration.service" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(54): op(55): [finished] removing enablement symlink(s) for "nodeip-configuration.service" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(54): [finished] setting preset to disabled for "nodeip-configuration.service" Feb 23 15:40:50 ip-10-0-136-68 
ignition[1073]: INFO : files: op(56): [started] setting preset to enabled for "openvswitch.service" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(56): [finished] setting preset to enabled for "openvswitch.service" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(57): [started] setting preset to enabled for "NetworkManager-clean-initrd-state.service" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(57): [finished] setting preset to enabled for "NetworkManager-clean-initrd-state.service" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(58): [started] setting preset to enabled for "aws-kubelet-nodename.service" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(58): [finished] setting preset to enabled for "aws-kubelet-nodename.service" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(59): [started] setting preset to enabled for "aws-kubelet-providerid.service" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(59): [finished] setting preset to enabled for "aws-kubelet-providerid.service" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(5a): [started] setting preset to enabled for "kubelet-auto-node-size.service" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(5a): [finished] setting preset to enabled for "kubelet-auto-node-size.service" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(5b): [started] setting preset to disabled for "kubens.service" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(5b): op(5c): [started] removing enablement symlink(s) for "kubens.service" Feb 23 15:40:49 ip-10-0-136-68 systemd[1]: Stopping Device-Mapper Multipath Device Controller... 
Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(5b): op(5c): [finished] removing enablement symlink(s) for "kubens.service" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(5b): [finished] setting preset to disabled for "kubens.service" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(5d): [started] setting preset to enabled for "ovs-configuration.service" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(5d): [finished] setting preset to enabled for "ovs-configuration.service" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(5e): [started] setting preset to enabled for "ovsdb-server.service" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(5e): [finished] setting preset to enabled for "ovsdb-server.service" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(5f): [started] setting preset to enabled for "kubelet.service" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(5f): [finished] setting preset to enabled for "kubelet.service" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(60): [started] setting preset to enabled for "machine-config-daemon-pull.service" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(60): [finished] setting preset to enabled for "machine-config-daemon-pull.service" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(61): [started] setting preset to enabled for "node-valid-hostname.service" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(61): [finished] setting preset to enabled for "node-valid-hostname.service" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: createResultFile: createFiles: op(62): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: createResultFile: createFiles: op(62): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 23 15:40:50 ip-10-0-136-68 
ignition[1073]: INFO : files: op(63): [started] relabeling 71 patterns Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: DEBUG : files: op(63): executing: "setfiles" "-vF0" "-r" "/sysroot" "/sysroot/etc/selinux/targeted/contexts/files/file_contexts" "-f" "-" Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: op(63): [finished] relabeling 71 patterns Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : files: files passed Feb 23 15:40:50 ip-10-0-136-68 ignition[1073]: INFO : Ignition finished successfully Feb 23 15:40:49 ip-10-0-136-68 systemd[1]: multipathd.service: Succeeded. Feb 23 15:40:49 ip-10-0-136-68 systemd[1]: Stopped Device-Mapper Multipath Device Controller. Feb 23 15:40:50 ip-10-0-136-68 dracut-pre-pivot[1225]: Feb 23 15:40:50 | /etc/multipath.conf does not exist, blacklisting all devices. Feb 23 15:40:50 ip-10-0-136-68 dracut-pre-pivot[1225]: Feb 23 15:40:50 | You can run "/sbin/mpathconf --enable" to create Feb 23 15:40:50 ip-10-0-136-68 dracut-pre-pivot[1225]: Feb 23 15:40:50 | /etc/multipath.conf. See man mpathconf(8) for more details Feb 23 15:40:49 ip-10-0-136-68 systemd[1]: initrd-parse-etc.service: Succeeded. 
Feb 23 15:40:50 ip-10-0-136-68 ignition[1232]: INFO : Ignition 2.14.0 Feb 23 15:40:50 ip-10-0-136-68 ignition[1232]: INFO : Stage: umount Feb 23 15:40:50 ip-10-0-136-68 ignition[1232]: INFO : reading system config file "/usr/lib/ignition/base.d/00-core.ign" Feb 23 15:40:50 ip-10-0-136-68 ignition[1232]: DEBUG : parsing config with SHA512: ff6a5153be363997e4d5d3ea8cc4048373a457c48c4a5b134a08a30aacd167c1e0f099f0bdf1e24c99ad180628cd02b767b863b5fe3a8fce3fe1886847eb8e2e Feb 23 15:40:50 ip-10-0-136-68 ignition[1232]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 23 15:40:50 ip-10-0-136-68 ignition[1232]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 23 15:40:50 ip-10-0-136-68 ignition[1232]: INFO : PUT result: OK Feb 23 15:40:50 ip-10-0-136-68 ignition[1232]: INFO : Adding "root-ca" to list of CAs Feb 23 15:40:50 ip-10-0-136-68 ignition[1232]: INFO : umount: umount passed Feb 23 15:40:50 ip-10-0-136-68 ignition[1232]: INFO : Ignition finished successfully Feb 23 15:40:49 ip-10-0-136-68 systemd[1]: Started Reload Configuration from the Real Root. Feb 23 15:40:49 ip-10-0-136-68 systemd[1]: Starting dracut mount hook... Feb 23 15:40:50 ip-10-0-136-68 coreos-teardown-initramfs[1244]: info: taking down network device: ens5 Feb 23 15:40:49 ip-10-0-136-68 systemd[1]: Reached target Initrd File Systems. Feb 23 15:40:50 ip-10-0-136-68 coreos-teardown-initramfs[1254]: RTNETLINK answers: Operation not supported Feb 23 15:40:50 ip-10-0-136-68 ignition-ostree-mount-var[1256]: Unmounting /sysroot/var Feb 23 15:40:49 ip-10-0-136-68 systemd[1]: Reached target Initrd Default Target. 
Feb 23 15:40:50 ip-10-0-136-68 coreos-teardown-initramfs[1244]: info: flushing all routing Feb 23 15:40:50 ip-10-0-136-68 coreos-teardown-initramfs[1244]: info: no initramfs hostname information to propagate Feb 23 15:40:50 ip-10-0-136-68 coreos-teardown-initramfs[1244]: info: no networking config is defined in the real root Feb 23 15:40:49 ip-10-0-136-68 systemd[1]: Started dracut mount hook. Feb 23 15:40:50 ip-10-0-136-68 coreos-teardown-initramfs[1244]: info: skipping propagation of default networking configs Feb 23 15:40:49 ip-10-0-136-68 systemd[1]: Starting dracut pre-pivot and cleanup hook... Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Started dracut pre-pivot and cleanup hook. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Starting Cleaning Up and Shutting Down Daemons... Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: clevis-luks-askpass.path: Succeeded. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped Forward Password Requests to Clevis Directory Watch. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped target Timers. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: dracut-pre-pivot.service: Succeeded. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped dracut pre-pivot and cleanup hook. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped target Initrd Default Target. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: coreos-touch-run-agetty.service: Succeeded. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped CoreOS: Touch /run/agetty.reload. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped target Initrd Root Device. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped target Ignition Complete. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: coreos-post-ignition-checks.service: Succeeded. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped CoreOS Post Ignition Checks. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped target Ignition Boot Disk Setup. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: coreos-boot-edit.service: Succeeded. 
Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped CoreOS Boot Edit. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped target Remote File Systems. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped target Remote File Systems (Pre). Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: dracut-mount.service: Succeeded. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped dracut mount hook. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: dracut-pre-mount.service: Succeeded. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped dracut pre-mount hook. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: dracut-initqueue.service: Succeeded. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped dracut initqueue hook. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopping Open-iSCSI... Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: ignition-files.service: Succeeded. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped Ignition (files). Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: ignition-ostree-check-rootfs-size.service: Succeeded. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped Ignition OSTree: Check Root Filesystem Size. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: ignition-ostree-growfs.service: Succeeded. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped Ignition OSTree: Grow root filesystem. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: ignition-ostree-mount-firstboot-sysroot.service: Succeeded. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped Ignition OSTree: Mount (firstboot) /sysroot. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: ignition-ostree-uuid-root.service: Succeeded. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped Ignition OSTree: Regenerate Filesystem UUID (root). Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: ignition-ostree-populate-var.service: Succeeded. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped Populate OSTree /var. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopping Ignition (mount)... Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: iscsid.service: Succeeded. 
Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped Open-iSCSI. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopping iSCSI UserSpace I/O driver... Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: iscsiuio.service: Succeeded. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped iSCSI UserSpace I/O driver. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: initrd-cleanup.service: Succeeded. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Started Cleaning Up and Shutting Down Daemons. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: ignition-mount.service: Succeeded. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped Ignition (mount). Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: coreos-ignition-unique-boot.service: Succeeded. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped CoreOS Ensure Unique Boot Filesystem. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: ignition-disks.service: Succeeded. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped Ignition (disks). Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopping Ignition OSTree: Detect Partition Transposition... Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: rhcos-fail-boot-for-legacy-luks-config.service: Succeeded. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped RHCOS Check For Legacy LUKS Configuration. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: rhcos-fips.service: Succeeded. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped Check for FIPS mode. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: ignition-kargs.service: Succeeded. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped Ignition (kargs). Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: coreos-copy-firstboot-network.service: Succeeded. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped Copy CoreOS Firstboot Networking Config. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopping CoreOS Tear Down Initramfs... Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopping Mount OSTree /var... Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: sysroot-var.mount: Succeeded. 
Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: ignition-ostree-transposefs-detect.service: Succeeded. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped Ignition OSTree: Detect Partition Transposition. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: ignition-fetch.service: Succeeded. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped Ignition (fetch). Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: coreos-enable-network.service: Succeeded. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped CoreOS Enable Network. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: ignition-fetch-offline.service: Succeeded. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped Ignition (fetch-offline). Feb 23 15:40:51 ip-10-0-136-68 systemd-journald[298]: Missed 281 kernel messages Feb 23 15:40:51 ip-10-0-136-68 kernel: printk: systemd: 29 output lines suppressed due to ratelimiting Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: coreos-ignition-setup-user.service: Succeeded. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped CoreOS Ignition User Config Setup. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: ignition-ostree-uuid-boot.service: Succeeded. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped Ignition OSTree: Regenerate Filesystem UUID (boot). Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped target Basic System. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped target Slices. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped target Paths. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped target Sockets. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: iscsiuio.socket: Succeeded. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Closed Open-iSCSI iscsiuio Socket. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: iscsid.socket: Succeeded. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Closed Open-iSCSI iscsid Socket. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped target System Initialization. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped target Local Encrypted Volumes. 
Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: systemd-ask-password-console.path: Succeeded. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: systemd-udev-settle.service: Succeeded. Feb 23 15:40:51 ip-10-0-136-68 systemd-journald[298]: Missed 16 kernel messages Feb 23 15:40:51 ip-10-0-136-68 kernel: audit: type=1404 audit(1677166851.379:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1 Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped udev Wait for Complete Device Initialization. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: systemd-sysctl.service: Succeeded. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped Apply Kernel Variables. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: systemd-tmpfiles-setup.service: Succeeded. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped Create Volatile Files and Directories. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped target Swap. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped target Local File Systems. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped target Local File Systems (Pre). Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: coreos-gpt-setup.service: Succeeded. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped Generate New UUID For Boot Disk GPT. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: coreos-unique-boot.service: Succeeded. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped Ensure filesystem labeled `boot` is unique. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: systemd-modules-load.service: Succeeded. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped Load Kernel Modules. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: systemd-udev-trigger.service: Succeeded. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped udev Coldplug all Devices. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: dracut-pre-trigger.service: Succeeded. 
Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped dracut pre-trigger hook. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopping udev Kernel Device Manager... Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: ignition-ostree-mount-var.service: Succeeded. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped Mount OSTree /var. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: coreos-teardown-initramfs.service: Succeeded. Feb 23 15:40:50 ip-10-0-136-68 systemd[1]: Stopped CoreOS Tear Down Initramfs. Feb 23 15:40:51 ip-10-0-136-68 systemd[1]: systemd-udevd.service: Succeeded. Feb 23 15:40:51 ip-10-0-136-68 systemd[1]: Stopped udev Kernel Device Manager. Feb 23 15:40:51 ip-10-0-136-68 systemd[1]: systemd-tmpfiles-setup-dev.service: Succeeded. Feb 23 15:40:51 ip-10-0-136-68 systemd[1]: Stopped Create Static Device Nodes in /dev. Feb 23 15:40:51 ip-10-0-136-68 systemd[1]: kmod-static-nodes.service: Succeeded. Feb 23 15:40:51 ip-10-0-136-68 systemd[1]: Stopped Create list of required static device nodes for the current kernel. Feb 23 15:40:51 ip-10-0-136-68 systemd[1]: dracut-pre-udev.service: Succeeded. Feb 23 15:40:51 ip-10-0-136-68 systemd[1]: Stopped dracut pre-udev hook. Feb 23 15:40:51 ip-10-0-136-68 systemd[1]: dracut-cmdline.service: Succeeded. Feb 23 15:40:51 ip-10-0-136-68 systemd[1]: Stopped dracut cmdline hook. Feb 23 15:40:51 ip-10-0-136-68 systemd[1]: afterburn-network-kargs.service: Succeeded. Feb 23 15:40:51 ip-10-0-136-68 systemd[1]: Stopped Afterburn Initrd Setup Network Kernel Arguments. Feb 23 15:40:51 ip-10-0-136-68 systemd[1]: dracut-cmdline-ask.service: Succeeded. Feb 23 15:40:51 ip-10-0-136-68 systemd[1]: Stopped dracut ask for additional cmdline parameters. Feb 23 15:40:51 ip-10-0-136-68 systemd[1]: systemd-udevd-kernel.socket: Succeeded. Feb 23 15:40:51 ip-10-0-136-68 systemd[1]: Closed udev Kernel Socket. Feb 23 15:40:51 ip-10-0-136-68 systemd[1]: systemd-udevd-control.socket: Succeeded. Feb 23 15:40:51 ip-10-0-136-68 systemd[1]: Closed udev Control Socket. 
Feb 23 15:40:51 ip-10-0-136-68 systemd[1]: Starting Cleanup udevd DB... Feb 23 15:40:51 ip-10-0-136-68 systemd[1]: initrd-udevadm-cleanup-db.service: Succeeded. Feb 23 15:40:51 ip-10-0-136-68 systemd[1]: Started Cleanup udevd DB. Feb 23 15:40:51 ip-10-0-136-68 systemd[1]: Reached target Switch Root. Feb 23 15:40:51 ip-10-0-136-68 systemd[1]: Starting Switch Root... Feb 23 15:40:51 ip-10-0-136-68 systemd[1]: Switching root. Feb 23 15:40:51 ip-10-0-136-68 systemd-journald[298]: Journal stopped Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: systemd-udev-settle.service: Succeeded. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Stopped udev Wait for Complete Device Initialization. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: systemd-sysctl.service: Succeeded. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Stopped Apply Kernel Variables. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: systemd-tmpfiles-setup.service: Succeeded. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Stopped Create Volatile Files and Directories. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Stopped target Swap. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Stopped target Local File Systems. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Stopped target Local File Systems (Pre). Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: coreos-gpt-setup.service: Succeeded. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Stopped Generate New UUID For Boot Disk GPT. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: coreos-unique-boot.service: Succeeded. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Stopped Ensure filesystem labeled `boot` is unique. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: systemd-modules-load.service: Succeeded. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Stopped Load Kernel Modules. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: systemd-udev-trigger.service: Succeeded. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Stopped udev Coldplug all Devices. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: dracut-pre-trigger.service: Succeeded. 
Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Stopped dracut pre-trigger hook. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Stopping udev Kernel Device Manager... Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: ignition-ostree-mount-var.service: Succeeded. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Stopped Mount OSTree /var. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: coreos-teardown-initramfs.service: Succeeded. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Stopped CoreOS Tear Down Initramfs. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: systemd-udevd.service: Succeeded. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Stopped udev Kernel Device Manager. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: systemd-tmpfiles-setup-dev.service: Succeeded. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Stopped Create Static Device Nodes in /dev. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: kmod-static-nodes.service: Succeeded. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Stopped Create list of required static device nodes for the current kernel. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: dracut-pre-udev.service: Succeeded. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Stopped dracut pre-udev hook. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: dracut-cmdline.service: Succeeded. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Stopped dracut cmdline hook. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: afterburn-network-kargs.service: Succeeded. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Stopped Afterburn Initrd Setup Network Kernel Arguments. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: dracut-cmdline-ask.service: Succeeded. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Stopped dracut ask for additional cmdline parameters. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: systemd-udevd-kernel.socket: Succeeded. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Closed udev Kernel Socket. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: systemd-udevd-control.socket: Succeeded. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Closed udev Control Socket. 
Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Starting Cleanup udevd DB... Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: initrd-udevadm-cleanup-db.service: Succeeded. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Started Cleanup udevd DB. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Reached target Switch Root. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Starting Switch Root... Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Switching root. Feb 23 15:40:52 ip-10-0-136-68 kernel: printk: systemd-journal: 1 output lines suppressed due to ratelimiting Feb 23 15:40:52 ip-10-0-136-68 kernel: SELinux: policy capability network_peer_controls=1 Feb 23 15:40:52 ip-10-0-136-68 kernel: SELinux: policy capability open_perms=1 Feb 23 15:40:52 ip-10-0-136-68 kernel: SELinux: policy capability extended_socket_class=1 Feb 23 15:40:52 ip-10-0-136-68 kernel: SELinux: policy capability always_check_network=0 Feb 23 15:40:52 ip-10-0-136-68 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 23 15:40:52 ip-10-0-136-68 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 23 15:40:52 ip-10-0-136-68 kernel: audit: type=1403 audit(1677166851.813:3): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Successfully loaded SELinux policy in 433.237ms. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Relabelled /dev, /run and /sys/fs/cgroup in 14.040ms. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: systemd 239 (239-58.el8_6.9) running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=legacy) Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Detected virtualization kvm. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Detected architecture x86-64. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Running with unpopulated /etc. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Initializing machine ID from KVM UUID. 
Feb 23 15:40:52 ip-10-0-136-68 coreos-platform-chrony: Updated chrony to use aws configuration /run/coreos-platform-chrony.conf Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Populated /etc with preset unit settings. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: /usr/lib/systemd/system/bootupd.service:22: Unknown lvalue 'ProtectHostname' in section 'Service' Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: systemd-journald.service: Succeeded. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: systemd-journald.service: Consumed 0 CPU time Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: initrd-switch-root.service: Succeeded. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Stopped Switch Root. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: initrd-switch-root.service: Consumed 0 CPU time Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: systemd-journald.service: Service has no hold-off time (RestartSec=0), scheduling restart. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Stopped Journal Service. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: systemd-journald.service: Consumed 0 CPU time Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Starting Journal Service... Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Listening on initctl Compatibility Named Pipe. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Starting Create list of required static device nodes for the current kernel... Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Starting Load Kernel Modules... Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Stopped target Switch Root. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Mounting POSIX Message Queue File System... Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Created slice User and Session Slice. Feb 23 15:40:52 ip-10-0-136-68 systemd-journald[1347]: Journal started Feb 23 15:40:52 ip-10-0-136-68 systemd-journald[1347]: Runtime journal (/run/log/journal/ec2d456b0a3e28d0eb2f198315e90643) is 8.0M, max 787.5M, 779.5M free. 
Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point. Feb 23 15:40:52 ip-10-0-136-68 systemd-modules-load[1350]: Module 'msr' is builtin Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Stopped target Initrd Root File System. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: ostree-prepare-root.service: Succeeded. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Stopped OSTree Prepare OS/. Feb 23 15:40:52 ip-10-0-136-68 systemd-modules-load[1350]: Inserted module 'ip_tables' Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: ostree-prepare-root.service: Consumed 0 CPU time Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Reached target Swap. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Mounting Temporary Directory (/tmp)... Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Reached target Remote File Systems. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Starting Rebuild Hardware Database... Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Started Dispatch Password Requests to Console Directory Watch. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Listening on RPCbind Server Activation Socket. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Listening on Device-mapper event daemon FIFOs. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling... Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Listening on udev Control Socket. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Created slice system-getty.slice. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Mounting Huge Pages File System... Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Reached target Host and Network Name Lookups. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Listening on LVM2 poll daemon socket. 
Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Starting CoreOS: Set printk To Level 4 (warn)... Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Reached target Slices. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Starting Create System Users... Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Stopped target Initrd File Systems. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Listening on udev Kernel Socket. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Starting udev Coldplug all Devices... Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Created slice system-sshd\x2dkeygen.slice. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Reached target Synchronize afterburn-sshkeys@.service template instances. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Started Forward Password Requests to Wall Directory Watch. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Mounting Kernel Debug File System... Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Reached target RPC Port Mapper. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Listening on Process Core Dump Socket. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Started Forward Password Requests to Clevis Directory Watch. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Reached target Local Encrypted Volumes (Pre). Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Reached target Local Encrypted Volumes. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Reached target Remote Encrypted Volumes. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: sysroot-usr.mount: Succeeded. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: sysroot-usr.mount: Consumed 0 CPU time Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: sysroot-sysroot.mount: Succeeded. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: sysroot-sysroot.mount: Consumed 0 CPU time Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: sysroot-etc.mount: Succeeded. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: sysroot-etc.mount: Consumed 0 CPU time Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: sysroot-sysroot-ostree-deploy-rhcos-var.mount: Succeeded. 
Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: sysroot-sysroot-ostree-deploy-rhcos-var.mount: Consumed 0 CPU time Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Started Journal Service. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Started Create list of required static device nodes for the current kernel. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Started Load Kernel Modules. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Mounted POSIX Message Queue File System. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Mounted Temporary Directory (/tmp). Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Started Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Mounted Huge Pages File System. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Started CoreOS: Set printk To Level 4 (warn). Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Started Create System Users. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Mounted Kernel Debug File System. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Starting Apply Kernel Variables... Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Mounting FUSE Control File System... Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Starting Create Static Device Nodes in /dev... Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Mounted FUSE Control File System. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Started Apply Kernel Variables. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Started Create Static Device Nodes in /dev. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Started udev Coldplug all Devices. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Starting udev Wait for Complete Device Initialization... Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Started Rebuild Hardware Database. Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Starting udev Kernel Device Manager... Feb 23 15:40:52 ip-10-0-136-68 systemd[1]: Started udev Kernel Device Manager. Feb 23 15:40:52 ip-10-0-136-68 systemd-udevd[1386]: Using default interface naming scheme 'rhel-8.0'. 
Feb 23 15:40:52 ip-10-0-136-68 systemd-udevd[1386]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. Feb 23 15:40:52 ip-10-0-136-68 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input5 Feb 23 15:40:52 ip-10-0-136-68 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Feb 23 15:40:52 ip-10-0-136-68 kernel: parport_pc 00:03: reported by Plug and Play ACPI Feb 23 15:40:52 ip-10-0-136-68 systemd-udevd[1386]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. Feb 23 15:40:52 ip-10-0-136-68 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Feb 23 15:40:52 ip-10-0-136-68 kernel: ppdev: user-space parallel port driver Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Started udev Wait for Complete Device Initialization. Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Reached target Local File Systems (Pre). Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: var.mount: Directory /var to mount over is not empty, mounting anyway. Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Mounting /var... Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Starting File System Check on /dev/disk/by-uuid/54e5ab65-ff73-4a26-8c44-2a9765abf45f... Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Mounted /var. Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Starting OSTree Remount OS/ Bind Mounts... Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Started OSTree Remount OS/ Bind Mounts. Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Starting Load/Save Random Seed... Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Starting Flush Journal to Persistent Storage... Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Started Load/Save Random Seed. Feb 23 15:40:53 ip-10-0-136-68 systemd-journald[1347]: Time spent on flushing to /var is 7.185ms for 1483 entries. Feb 23 15:40:53 ip-10-0-136-68 systemd-journald[1347]: System journal (/var/log/journal/ec2d456b0a3e28d0eb2f198315e90643) is 8.0M, max 4.0G, 3.9G free. 
Feb 23 15:40:53 ip-10-0-136-68 systemd-fsck[1424]: boot: clean, 325/98304 files, 140558/393216 blocks Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Started Flush Journal to Persistent Storage. Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Started File System Check on /dev/disk/by-uuid/54e5ab65-ff73-4a26-8c44-2a9765abf45f. Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Mounting CoreOS Dynamic Mount for /boot... Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Mounted CoreOS Dynamic Mount for /boot. Feb 23 15:40:53 ip-10-0-136-68 kernel: EXT4-fs (nvme0n1p3): mounted filesystem with ordered data mode. Opts: (null) Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Reached target Local File Systems. Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Starting Rebuild Journal Catalog... Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Starting Create Volatile Files and Directories... Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Starting Rebuild Dynamic Linker Cache... Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Starting Restore /run/initramfs on shutdown... Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Starting Run update-ca-trust... Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Started Restore /run/initramfs on shutdown. Feb 23 15:40:53 ip-10-0-136-68 systemd-tmpfiles[1435]: [/usr/lib/tmpfiles.d/pkg-dbus-daemon.conf:1] Duplicate line for path "/var/lib/dbus", ignoring. Feb 23 15:40:53 ip-10-0-136-68 systemd-tmpfiles[1435]: [/usr/lib/tmpfiles.d/tmp.conf:12] Duplicate line for path "/var/tmp", ignoring. Feb 23 15:40:53 ip-10-0-136-68 systemd-tmpfiles[1435]: [/usr/lib/tmpfiles.d/var.conf:14] Duplicate line for path "/var/log", ignoring. Feb 23 15:40:53 ip-10-0-136-68 systemd-tmpfiles[1435]: [/usr/lib/tmpfiles.d/var.conf:19] Duplicate line for path "/var/cache", ignoring. Feb 23 15:40:53 ip-10-0-136-68 systemd-tmpfiles[1435]: [/usr/lib/tmpfiles.d/var.conf:21] Duplicate line for path "/var/lib", ignoring. 
Feb 23 15:40:53 ip-10-0-136-68 systemd-tmpfiles[1435]: [/usr/lib/tmpfiles.d/var.conf:23] Duplicate line for path "/var/spool", ignoring. Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Started Rebuild Journal Catalog. Feb 23 15:40:53 ip-10-0-136-68 systemd-tmpfiles[1435]: "/home" already exists and is not a directory. Feb 23 15:40:53 ip-10-0-136-68 systemd-tmpfiles[1435]: "/srv" already exists and is not a directory. Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Started Create Volatile Files and Directories. Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Starting Security Auditing Service... Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Starting RHEL CoreOS Rebuild SELinux Policy If Necessary... Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Starting RHCOS Fix SELinux Labeling For /usr/local/sbin... Feb 23 15:40:53 ip-10-0-136-68 rhcos-rebuild-selinux-policy[1445]: RHEL_VERSION=8.6Checking for policy recompilation Feb 23 15:40:53 ip-10-0-136-68 chcon[1446]: changing security context of '/usr/local/sbin' Feb 23 15:40:53 ip-10-0-136-68 rhcos-rebuild-selinux-policy[1448]: -rw-r--r--. 1 root root 8914149 Jan 31 15:58 /etc/selinux/targeted/policy/policy.31 Feb 23 15:40:53 ip-10-0-136-68 rhcos-rebuild-selinux-policy[1448]: -rw-r--r--. 2 root root 8914149 Jan 1 1970 /usr/etc/selinux/targeted/policy/policy.31 Feb 23 15:40:53 ip-10-0-136-68 auditd[1453]: No plugins found, not dispatching events Feb 23 15:40:53 ip-10-0-136-68 auditd[1453]: Init complete, auditd 3.0.7 listening for events (startup state enable) Feb 23 15:40:53 ip-10-0-136-68 sh[1458]: changing security context of '/var/usrlocal/sbin' Feb 23 15:40:53 ip-10-0-136-68 sh[1459]: changing security context of '/var/usrlocal/sbin/dynamic-system-reserved-calc.sh' Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Started RHCOS Fix SELinux Labeling For /usr/local/sbin. 
Feb 23 15:40:53 ip-10-0-136-68 augenrules[1475]: No rules Feb 23 15:40:53 ip-10-0-136-68 augenrules[1475]: enabled 1 Feb 23 15:40:53 ip-10-0-136-68 augenrules[1475]: failure 1 Feb 23 15:40:53 ip-10-0-136-68 augenrules[1475]: pid 1453 Feb 23 15:40:53 ip-10-0-136-68 augenrules[1475]: rate_limit 0 Feb 23 15:40:53 ip-10-0-136-68 augenrules[1475]: backlog_limit 8192 Feb 23 15:40:53 ip-10-0-136-68 augenrules[1475]: lost 0 Feb 23 15:40:53 ip-10-0-136-68 augenrules[1475]: backlog 3 Feb 23 15:40:53 ip-10-0-136-68 augenrules[1475]: backlog_wait_time 60000 Feb 23 15:40:53 ip-10-0-136-68 augenrules[1475]: backlog_wait_time_actual 0 Feb 23 15:40:53 ip-10-0-136-68 augenrules[1475]: enabled 1 Feb 23 15:40:53 ip-10-0-136-68 augenrules[1475]: failure 1 Feb 23 15:40:53 ip-10-0-136-68 augenrules[1475]: pid 1453 Feb 23 15:40:53 ip-10-0-136-68 augenrules[1475]: rate_limit 0 Feb 23 15:40:53 ip-10-0-136-68 augenrules[1475]: backlog_limit 8192 Feb 23 15:40:53 ip-10-0-136-68 augenrules[1475]: lost 0 Feb 23 15:40:53 ip-10-0-136-68 augenrules[1475]: backlog 0 Feb 23 15:40:53 ip-10-0-136-68 augenrules[1475]: backlog_wait_time 60000 Feb 23 15:40:53 ip-10-0-136-68 augenrules[1475]: backlog_wait_time_actual 0 Feb 23 15:40:53 ip-10-0-136-68 augenrules[1475]: enabled 1 Feb 23 15:40:53 ip-10-0-136-68 augenrules[1475]: failure 1 Feb 23 15:40:53 ip-10-0-136-68 augenrules[1475]: pid 1453 Feb 23 15:40:53 ip-10-0-136-68 augenrules[1475]: rate_limit 0 Feb 23 15:40:53 ip-10-0-136-68 augenrules[1475]: backlog_limit 8192 Feb 23 15:40:53 ip-10-0-136-68 augenrules[1475]: lost 0 Feb 23 15:40:53 ip-10-0-136-68 augenrules[1475]: backlog 3 Feb 23 15:40:53 ip-10-0-136-68 augenrules[1475]: backlog_wait_time 60000 Feb 23 15:40:53 ip-10-0-136-68 augenrules[1475]: backlog_wait_time_actual 0 Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Started RHEL CoreOS Rebuild SELinux Policy If Necessary. Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Started Security Auditing Service. 
Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Starting Update UTMP about System Boot/Shutdown... Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Started Update UTMP about System Boot/Shutdown. Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Started Rebuild Dynamic Linker Cache. Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Starting Update is Completed... Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Started Update is Completed. Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Started Run update-ca-trust. Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Reached target System Initialization. Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Started Monitor console-login-helper-messages runtime issue snippets directory for changes. Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Started daily update of the root trust anchor for DNSSEC. Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Listening on bootupd.socket. Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Started OSTree Monitor Staged Deployment. Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Reached target Paths. Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Listening on D-Bus System Message Bus Socket. Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Reached target Sockets. Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Started Daily Cleanup of Temporary Directories. Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Reached target Basic System. Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Starting CoreOS Generate iSCSI Initiator Name... Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Starting Generate console-login-helper-messages issue snippet... Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Starting CRI-O Auto Update Script... Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Starting OpenSSH ecdsa Server Key Generation... Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Reached target Network (Pre). Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Starting Open vSwitch Database Unit... Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Starting OpenSSH rsa Server Key Generation... 
Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Starting Create Ignition Status Issue Files... Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Starting Afterburn (Metadata)... Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Starting NTP client/server... Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Starting OpenSSH ed25519 Server Key Generation... Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Started D-Bus System Message Bus. Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Starting System Security Services Daemon... Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Starting Generation of shadow ID ranges for CRI-O... Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Started irqbalance daemon. Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Starting CoreOS Mark Ignition Boot Complete... Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Started Daily rotation of log files. Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Reached target Timers. Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Started CoreOS Generate iSCSI Initiator Name. Feb 23 15:40:53 ip-10-0-136-68 chown[1526]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory Feb 23 15:40:53 ip-10-0-136-68 kernel: EXT4-fs (nvme0n1p3): re-mounted. Opts: Feb 23 15:40:53 ip-10-0-136-68 chronyd[1534]: chronyd version 4.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG) Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: sshd-keygen@ecdsa.service: Succeeded. Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Started OpenSSH ecdsa Server Key Generation. Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: sshd-keygen@ecdsa.service: Consumed 13ms CPU time Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: sshd-keygen@ed25519.service: Succeeded. Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Started OpenSSH ed25519 Server Key Generation. Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: sshd-keygen@ed25519.service: Consumed 14ms CPU time Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Started CoreOS Mark Ignition Boot Complete. 
Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Started NTP client/server. Feb 23 15:40:53 ip-10-0-136-68 afterburn[1507]: Feb 23 15:40:53.809 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 23 15:40:53 ip-10-0-136-68 groupadd[1552]: group added to /etc/group: name=containers, GID=995 Feb 23 15:40:53 ip-10-0-136-68 groupadd[1552]: group added to /etc/gshadow: name=containers Feb 23 15:40:53 ip-10-0-136-68 sssd[1521]: Starting up Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: sssd.service: Main process exited, code=exited, status=3/NOTIMPLEMENTED Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: sssd.service: Failed with result 'exit-code'. Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Failed to start System Security Services Daemon. Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: sssd.service: Consumed 16ms CPU time Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Reached target User and Group Name Lookups. Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Starting Login Service... Feb 23 15:40:53 ip-10-0-136-68 groupadd[1552]: new group: name=containers, GID=995 Feb 23 15:40:53 ip-10-0-136-68 systemd-logind[1603]: Watching system buttons on /dev/input/event0 (Power Button) Feb 23 15:40:53 ip-10-0-136-68 systemd-logind[1603]: Watching system buttons on /dev/input/event1 (Sleep Button) Feb 23 15:40:53 ip-10-0-136-68 systemd-logind[1603]: Watching system buttons on /dev/input/event2 (AT Translated Set 2 keyboard) Feb 23 15:40:53 ip-10-0-136-68 systemd-logind[1603]: New seat seat0. Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Started Login Service. Feb 23 15:40:53 ip-10-0-136-68 useradd[1622]: new user: name=containers, UID=993, GID=995, home=/var/home/containers, shell=/sbin/nologin Feb 23 15:40:53 ip-10-0-136-68 ovs-ctl[1559]: /etc/openvswitch/conf.db does not exist ... (warning). Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Started Create Ignition Status Issue Files. Feb 23 15:40:53 ip-10-0-136-68 ovs-ctl[1559]: Creating empty database /etc/openvswitch/conf.db. 
Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: sssd.service: Service RestartSec=100ms expired, scheduling restart. Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: sssd.service: Scheduled restart job, restart counter is at 1. Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: crio-subid.service: Succeeded. Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Started Generation of shadow ID ranges for CRI-O. Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: crio-subid.service: Consumed 70ms CPU time Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Stopped System Security Services Daemon. Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: sssd.service: Consumed 0 CPU time Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Starting System Security Services Daemon... Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: sshd-keygen@rsa.service: Succeeded. Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Started OpenSSH rsa Server Key Generation. Feb 23 15:40:53 ip-10-0-136-68 ovs-ctl[1646]: 2023-02-23T15:40:53Z|00001|dns_resolve|WARN|Failed to read /etc/resolv.conf: No such file or directory Feb 23 15:40:53 ip-10-0-136-68 ovsdb-server[1646]: ovs|00001|dns_resolve|WARN|Failed to read /etc/resolv.conf: No such file or directory Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: sshd-keygen@rsa.service: Consumed 186ms CPU time Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Reached target sshd-keygen.target. Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Starting Generate SSH keys snippet for display via console-login-helper-messages... Feb 23 15:40:53 ip-10-0-136-68 ovs-ctl[1559]: Starting ovsdb-server. Feb 23 15:40:53 ip-10-0-136-68 sssd[1654]: Starting up Feb 23 15:40:53 ip-10-0-136-68 systemd[1]: Started Generate SSH keys snippet for display via console-login-helper-messages. Feb 23 15:40:53 ip-10-0-136-68 sssd_be[1676]: Starting up Feb 23 15:40:54 ip-10-0-136-68 ovs-vsctl[1665]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.3.0 Feb 23 15:40:54 ip-10-0-136-68 sssd_nss[1681]: Starting up Feb 23 15:40:54 ip-10-0-136-68 systemd[1]: Started System Security Services Daemon. Feb 23 15:40:54 ip-10-0-136-68 ovs-ctl[1679]: 2023-02-23T15:40:54Z|00001|dns_resolve|WARN|Failed to read /etc/resolv.conf: No such file or directory Feb 23 15:40:54 ip-10-0-136-68 ovs-vswitchd[1679]: ovs|00001|dns_resolve|WARN|Failed to read /etc/resolv.conf: No such file or directory Feb 23 15:40:54 ip-10-0-136-68 ovs-vsctl[1689]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=2.17.4 "external-ids:system-id=\"4004906b-6ca5-4a32-b3c0-bdcf1c128aba\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"rhcos\"" "system-version=\"4.12\"" Feb 23 15:40:54 ip-10-0-136-68 ovs-ctl[1559]: Configuring Open vSwitch system IDs. Feb 23 15:40:54 ip-10-0-136-68 ovs-vsctl[1695]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=ip-10-0-136-68 Feb 23 15:40:54 ip-10-0-136-68 ovs-ctl[1559]: Enabling remote OVSDB managers. Feb 23 15:40:54 ip-10-0-136-68 systemd[1]: Started Open vSwitch Database Unit. Feb 23 15:40:54 ip-10-0-136-68 systemd[1]: Starting Open vSwitch Delete Transient Ports... Feb 23 15:40:54 ip-10-0-136-68 systemd[1]: Started Open vSwitch Delete Transient Ports. Feb 23 15:40:54 ip-10-0-136-68 systemd[1]: Starting Open vSwitch Forwarding Unit... Feb 23 15:40:54 ip-10-0-136-68 kernel: openvswitch: Open vSwitch switching datapath Feb 23 15:40:54 ip-10-0-136-68 ovs-ctl[1743]: Inserting openvswitch module. Feb 23 15:40:54 ip-10-0-136-68 ovs-ctl[1755]: 2023-02-23T15:40:54Z|00001|dns_resolve|WARN|Failed to read /etc/resolv.conf: No such file or directory Feb 23 15:40:54 ip-10-0-136-68 ovs-vswitchd[1755]: ovs|00001|dns_resolve|WARN|Failed to read /etc/resolv.conf: No such file or directory Feb 23 15:40:54 ip-10-0-136-68 ovs-ctl[1715]: Starting ovs-vswitchd.
Feb 23 15:40:54 ip-10-0-136-68 crio[1498]: time="2023-02-23 15:40:54.329130208Z" level=info msg="Starting CRI-O, version: 1.25.2-4.rhaos4.12.git66af2f6.el8, git: unknown(clean)"
Feb 23 15:40:54 ip-10-0-136-68 ovs-vsctl[1764]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=ip-10-0-136-68
Feb 23 15:40:54 ip-10-0-136-68 ovs-ctl[1715]: Enabling remote OVSDB managers.
Feb 23 15:40:54 ip-10-0-136-68 systemd[1]: Started Open vSwitch Forwarding Unit.
Feb 23 15:40:54 ip-10-0-136-68 systemd[1]: Starting Open vSwitch...
Feb 23 15:40:54 ip-10-0-136-68 systemd[1]: Started Open vSwitch.
Feb 23 15:40:54 ip-10-0-136-68 systemd[1]: Starting Network Manager...
Feb 23 15:40:54 ip-10-0-136-68 NetworkManager[1770]: [1677166854.3913] NetworkManager (version 1.36.0-12.el8_6) is starting... (for the first time)
Feb 23 15:40:54 ip-10-0-136-68 NetworkManager[1770]: [1677166854.3915] Read config: /etc/NetworkManager/NetworkManager.conf (lib: 10-disable-default-plugins.conf, 20-client-id-from-mac.conf) (etc: 20-keyfiles.conf, sdn.conf)
Feb 23 15:40:54 ip-10-0-136-68 systemd[1]: Started Network Manager.
Feb 23 15:40:54 ip-10-0-136-68 NetworkManager[1770]: [1677166854.3976] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Feb 23 15:40:54 ip-10-0-136-68 systemd[1]: Starting Network Manager Wait Online...
Feb 23 15:40:54 ip-10-0-136-68 systemd[1]: Reached target Network.
Feb 23 15:40:54 ip-10-0-136-68 systemd[1]: Starting OpenSSH server daemon...
Feb 23 15:40:54 ip-10-0-136-68 NetworkManager[1770]: [1677166854.4097] manager[0x5568fc030000]: monitoring kernel firmware directory '/lib/firmware'.
Feb 23 15:40:54 ip-10-0-136-68 dbus-daemon[1517]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.11' (uid=0 pid=1770 comm="/usr/sbin/NetworkManager --no-daemon " label="system_u:system_r:NetworkManager_t:s0")
Feb 23 15:40:54 ip-10-0-136-68 systemd[1]: Starting Hostname Service...
Feb 23 15:40:54 ip-10-0-136-68 sshd[1774]: Server listening on 0.0.0.0 port 22.
Feb 23 15:40:54 ip-10-0-136-68 sshd[1774]: Server listening on :: port 22.
Feb 23 15:40:54 ip-10-0-136-68 systemd[1]: Started OpenSSH server daemon.
Feb 23 15:40:54 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck1771696940-merged.mount: Succeeded.
Feb 23 15:40:54 ip-10-0-136-68 crio[1498]: time="2023-02-23 15:40:54.469909695Z" level=info msg="Checking whether cri-o should wipe containers: open /var/run/crio/version: no such file or directory"
Feb 23 15:40:54 ip-10-0-136-68 crio[1498]: time="2023-02-23 15:40:54.469953685Z" level=info msg="open /var/lib/crio/version: no such file or directory: triggering wipe of images"
Feb 23 15:40:54 ip-10-0-136-68 systemd[1]: crio-wipe.service: Succeeded.
Feb 23 15:40:54 ip-10-0-136-68 systemd[1]: Started CRI-O Auto Update Script.
Feb 23 15:40:54 ip-10-0-136-68 systemd[1]: crio-wipe.service: Consumed 91ms CPU time
Feb 23 15:40:54 ip-10-0-136-68 dbus-daemon[1517]: [system] Successfully activated service 'org.freedesktop.hostname1'
Feb 23 15:40:54 ip-10-0-136-68 systemd[1]: Started Hostname Service.
Feb 23 15:40:54 ip-10-0-136-68 NetworkManager[1770]: [1677166854.4965] hostname: hostname: using hostnamed
Feb 23 15:40:54 ip-10-0-136-68 NetworkManager[1770]: [1677166854.4968] dns-mgr[0x5568fc00d250]: init: dns=default,systemd-resolved rc-manager=symlink
Feb 23 15:40:54 ip-10-0-136-68 NetworkManager[1770]: [1677166854.5045] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.36.0-12.el8_6/libnm-device-plugin-ovs.so)
Feb 23 15:40:54 ip-10-0-136-68 NetworkManager[1770]: [1677166854.5070] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.36.0-12.el8_6/libnm-device-plugin-team.so)
Feb 23 15:40:54 ip-10-0-136-68 NetworkManager[1770]: [1677166854.5070] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Feb 23 15:40:54 ip-10-0-136-68 NetworkManager[1770]: [1677166854.5071] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Feb 23 15:40:54 ip-10-0-136-68 NetworkManager[1770]: [1677166854.5071] manager: Networking is enabled by state file
Feb 23 15:40:54 ip-10-0-136-68 NetworkManager[1770]: [1677166854.5072] settings: Loaded settings plugin: keyfile (internal)
Feb 23 15:40:54 ip-10-0-136-68 dbus-daemon[1517]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service' requested by ':1.11' (uid=0 pid=1770 comm="/usr/sbin/NetworkManager --no-daemon " label="system_u:system_r:NetworkManager_t:s0")
Feb 23 15:40:54 ip-10-0-136-68 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 23 15:40:54 ip-10-0-136-68 NetworkManager[1770]: [1677166854.5113] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.36.0-12.el8_6/libnm-settings-plugin-ifcfg-rh.so")
Feb 23 15:40:54 ip-10-0-136-68 NetworkManager[1770]: [1677166854.5131] dhcp-init: Using DHCP client 'internal'
Feb 23 15:40:54 ip-10-0-136-68 NetworkManager[1770]: [1677166854.5132] device (lo): carrier: link connected
Feb 23 15:40:54 ip-10-0-136-68 NetworkManager[1770]: [1677166854.5134] manager: (lo): new Generic device (/org/freedesktop/NetworkManager/Devices/1)
Feb 23 15:40:54 ip-10-0-136-68 NetworkManager[1770]: [1677166854.5140] manager: (ens5): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Feb 23 15:40:54 ip-10-0-136-68 dbus-daemon[1517]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher'
Feb 23 15:40:54 ip-10-0-136-68 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 23 15:40:54 ip-10-0-136-68 NetworkManager[1770]: [1677166854.5204] settings: (ens5): created default wired connection 'Wired connection 1'
Feb 23 15:40:54 ip-10-0-136-68 NetworkManager[1770]: [1677166854.5204] device (ens5): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
Feb 23 15:40:54 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): ens5: link is not ready
Feb 23 15:40:54 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): ens5: link is not ready
Feb 23 15:40:54 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): ens5: link becomes ready
Feb 23 15:40:54 ip-10-0-136-68 NetworkManager[1770]: [1677166854.5233] device (ens5): carrier: link connected
Feb 23 15:40:54 ip-10-0-136-68 NetworkManager[1770]: [1677166854.5255] device (ens5): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'managed')
Feb 23 15:40:54 ip-10-0-136-68 NetworkManager[1770]: [1677166854.5260] policy: auto-activating connection 'Wired connection 1' (eb99b8bd-8e1f-3f41-845b-962703e428f7)
Feb 23 15:40:54 ip-10-0-136-68 NetworkManager[1770]: [1677166854.5263] device (ens5): Activation: starting connection 'Wired connection 1' (eb99b8bd-8e1f-3f41-845b-962703e428f7)
Feb 23 15:40:54 ip-10-0-136-68 NetworkManager[1770]: [1677166854.5263] device (ens5): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Feb 23 15:40:54 ip-10-0-136-68 NetworkManager[1770]: [1677166854.5264] manager: NetworkManager state is now CONNECTING
Feb 23 15:40:54 ip-10-0-136-68 NetworkManager[1770]: [1677166854.5265] device (ens5): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Feb 23 15:40:54 ip-10-0-136-68 NetworkManager[1770]: [1677166854.5269] device (ens5): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Feb 23 15:40:54 ip-10-0-136-68 NetworkManager[1770]: [1677166854.5286] dhcp4 (ens5): activation: beginning transaction (timeout in 45 seconds)
Feb 23 15:40:54 ip-10-0-136-68 NetworkManager[1770]: [1677166854.5311] dhcp4 (ens5): state changed new lease, address=10.0.136.68
Feb 23 15:40:54 ip-10-0-136-68 NetworkManager[1770]: [1677166854.5313] policy: set 'Wired connection 1' (ens5) as default for IPv4 routing and DNS
Feb 23 15:40:54 ip-10-0-136-68 dbus-daemon[1517]: [system] Activating via systemd: service name='org.freedesktop.resolve1' unit='dbus-org.freedesktop.resolve1.service' requested by ':1.11' (uid=0 pid=1770 comm="/usr/sbin/NetworkManager --no-daemon " label="system_u:system_r:NetworkManager_t:s0")
Feb 23 15:40:54 ip-10-0-136-68 dbus-daemon[1517]: [system] Activation via systemd failed for unit 'dbus-org.freedesktop.resolve1.service': Unit dbus-org.freedesktop.resolve1.service not found.
Feb 23 15:40:54 ip-10-0-136-68 NetworkManager[1770]: [1677166854.5328] device (ens5): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed')
Feb 23 15:40:54 ip-10-0-136-68 nm-dispatcher[1796]: Error: Device '' not found.
Feb 23 15:40:54 ip-10-0-136-68 nm-dispatcher[1807]: Error: Device '' not found.
Feb 23 15:40:54 ip-10-0-136-68 nm-dispatcher[1824]: + [[ OVNKubernetes != \O\V\N\K\u\b\e\r\n\e\t\e\s ]]
Feb 23 15:40:54 ip-10-0-136-68 nm-dispatcher[1824]: + INTERFACE_NAME=ens5
Feb 23 15:40:54 ip-10-0-136-68 nm-dispatcher[1824]: + OPERATION=pre-up
Feb 23 15:40:54 ip-10-0-136-68 nm-dispatcher[1824]: + '[' pre-up '!=' pre-up ']'
Feb 23 15:40:54 ip-10-0-136-68 nm-dispatcher[1826]: ++ nmcli -t -f device,type,uuid conn
Feb 23 15:40:54 ip-10-0-136-68 nm-dispatcher[1827]: ++ awk -F : '{if($1=="ens5" && $2!~/^ovs*/) print $NF}'
Feb 23 15:40:54 ip-10-0-136-68 nm-dispatcher[1824]: + INTERFACE_CONNECTION_UUID=eb99b8bd-8e1f-3f41-845b-962703e428f7
Feb 23 15:40:54 ip-10-0-136-68 nm-dispatcher[1824]: + '[' eb99b8bd-8e1f-3f41-845b-962703e428f7 == '' ']'
Feb 23 15:40:54 ip-10-0-136-68 nm-dispatcher[1832]: ++ nmcli -t -f connection.slave-type conn show eb99b8bd-8e1f-3f41-845b-962703e428f7
Feb 23 15:40:54 ip-10-0-136-68 nm-dispatcher[1833]: ++ awk -F : '{print $NF}'
Feb 23 15:40:54 ip-10-0-136-68 nm-dispatcher[1824]: + INTERFACE_OVS_SLAVE_TYPE=
Feb 23 15:40:54 ip-10-0-136-68 nm-dispatcher[1824]: + '[' '' '!=' ovs-port ']'
Feb 23 15:40:54 ip-10-0-136-68 nm-dispatcher[1824]: + exit 0
Feb 23 15:40:54 ip-10-0-136-68 NetworkManager[1770]: [1677166854.6069] device (ens5): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed')
Feb 23 15:40:54 ip-10-0-136-68 NetworkManager[1770]: [1677166854.6070] device (ens5): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed')
Feb 23 15:40:54 ip-10-0-136-68 NetworkManager[1770]: [1677166854.6072] manager: NetworkManager state is now CONNECTED_SITE
Feb 23 15:40:54 ip-10-0-136-68 NetworkManager[1770]: [1677166854.6073] device (ens5): Activation: successful, device activated.
Feb 23 15:40:54 ip-10-0-136-68 NetworkManager[1770]: [1677166854.6075] manager: NetworkManager state is now CONNECTED_GLOBAL
Feb 23 15:40:54 ip-10-0-136-68 NetworkManager[1770]: [1677166854.6079] manager: startup complete
Feb 23 15:40:54 ip-10-0-136-68 systemd[1]: Started Network Manager Wait Online.
Feb 23 15:40:54 ip-10-0-136-68 systemd[1]: Starting Wait for a non-localhost hostname...
Feb 23 15:40:54 ip-10-0-136-68 mco-hostname[1841]: waiting for non-localhost hostname to be assigned
Feb 23 15:40:54 ip-10-0-136-68 mco-hostname[1841]: node identified as ip-10-0-136-68
Feb 23 15:40:54 ip-10-0-136-68 systemd[1]: Started Wait for a non-localhost hostname.
Feb 23 15:40:54 ip-10-0-136-68 nm-dispatcher[1874]: Error: Device '' not found.
Feb 23 15:40:54 ip-10-0-136-68 systemd[1]: console-login-helper-messages-issuegen.service: Succeeded.
Feb 23 15:40:54 ip-10-0-136-68 systemd[1]: Started Generate console-login-helper-messages issue snippet.
Feb 23 15:40:54 ip-10-0-136-68 systemd[1]: console-login-helper-messages-issuegen.service: Consumed 10ms CPU time
Feb 23 15:40:54 ip-10-0-136-68 systemd[1]: Starting Permit User Sessions...
Feb 23 15:40:54 ip-10-0-136-68 systemd[1]: Started Permit User Sessions.
Feb 23 15:40:54 ip-10-0-136-68 systemd[1]: Started Serial Getty on ttyS0.
Feb 23 15:40:54 ip-10-0-136-68 systemd[1]: Started Getty on tty1.
Feb 23 15:40:54 ip-10-0-136-68 systemd[1]: Reached target Login Prompts.
Feb 23 15:40:54 ip-10-0-136-68 afterburn[1507]: Feb 23 15:40:54.820 INFO Putting http://169.254.169.254/latest/api/token: Attempt #2
Feb 23 15:40:54 ip-10-0-136-68 afterburn[1507]: Feb 23 15:40:54.821 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Feb 23 15:40:54 ip-10-0-136-68 afterburn[1507]: Feb 23 15:40:54.821 INFO Fetch successful
Feb 23 15:40:54 ip-10-0-136-68 afterburn[1507]: Feb 23 15:40:54.821 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Feb 23 15:40:54 ip-10-0-136-68 afterburn[1507]: Feb 23 15:40:54.822 INFO Fetch successful
Feb 23 15:40:54 ip-10-0-136-68 afterburn[1507]: Feb 23 15:40:54.822 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Feb 23 15:40:54 ip-10-0-136-68 afterburn[1507]: Feb 23 15:40:54.822 INFO Fetch successful
Feb 23 15:40:54 ip-10-0-136-68 afterburn[1507]: Feb 23 15:40:54.822 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Feb 23 15:40:54 ip-10-0-136-68 afterburn[1507]: Feb 23 15:40:54.823 INFO Fetch failed with 404: resource not found
Feb 23 15:40:54 ip-10-0-136-68 afterburn[1507]: Feb 23 15:40:54.823 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Feb 23 15:40:54 ip-10-0-136-68 afterburn[1507]: Feb 23 15:40:54.823 INFO Fetch failed with 404: resource not found
Feb 23 15:40:54 ip-10-0-136-68 afterburn[1507]: Feb 23 15:40:54.823 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Feb 23 15:40:54 ip-10-0-136-68 afterburn[1507]: Feb 23 15:40:54.824 INFO Fetch successful
Feb 23 15:40:54 ip-10-0-136-68 afterburn[1507]: Feb 23 15:40:54.824 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Feb 23 15:40:54 ip-10-0-136-68 afterburn[1507]: Feb 23 15:40:54.825 INFO Fetch successful
Feb 23 15:40:54 ip-10-0-136-68 afterburn[1507]: Feb 23 15:40:54.825 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Feb 23 15:40:54 ip-10-0-136-68 afterburn[1507]: Feb 23 15:40:54.825 INFO Fetch failed with 404: resource not found
Feb 23 15:40:54 ip-10-0-136-68 afterburn[1507]: Feb 23 15:40:54.825 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Feb 23 15:40:54 ip-10-0-136-68 afterburn[1507]: Feb 23 15:40:54.826 INFO Fetch successful
Feb 23 15:40:54 ip-10-0-136-68 systemd[1]: Started Afterburn (Metadata).
Feb 23 15:40:54 ip-10-0-136-68 systemd[1]: Starting Fetch kubelet node name from AWS Metadata...
Feb 23 15:40:54 ip-10-0-136-68 systemd[1]: Starting Fetch kubelet provider id from AWS Metadata...
Feb 23 15:40:54 ip-10-0-136-68 systemd[1]: aws-kubelet-nodename.service: Succeeded.
Feb 23 15:40:54 ip-10-0-136-68 systemd[1]: Started Fetch kubelet node name from AWS Metadata.
Feb 23 15:40:54 ip-10-0-136-68 systemd[1]: aws-kubelet-nodename.service: Consumed 2ms CPU time
Feb 23 15:40:54 ip-10-0-136-68 systemd[1]: aws-kubelet-providerid.service: Succeeded.
Feb 23 15:40:54 ip-10-0-136-68 systemd[1]: Started Fetch kubelet provider id from AWS Metadata.
Feb 23 15:40:54 ip-10-0-136-68 systemd[1]: aws-kubelet-providerid.service: Consumed 2ms CPU time
Feb 23 15:40:54 ip-10-0-136-68 systemd[1]: Reached target Network is Online.
Feb 23 15:40:54 ip-10-0-136-68 systemd[1]: Starting Dynamically sets the system reserved for the kubelet...
Feb 23 15:40:54 ip-10-0-136-68 systemd[1]: Starting Machine Config Daemon Pull...
Feb 23 15:40:54 ip-10-0-136-68 systemd[1]: Starting NFS status monitor for NFSv2/3 locking....
Feb 23 15:40:54 ip-10-0-136-68 systemd[1]: iscsi.service: Unit cannot be reloaded because it is inactive.
Feb 23 15:40:54 ip-10-0-136-68 systemd[1]: Started Dynamically sets the system reserved for the kubelet.
Feb 23 15:40:54 ip-10-0-136-68 systemd[1]: Starting RPC Bind...
Feb 23 15:40:54 ip-10-0-136-68 systemd[1]: Started RPC Bind.
Feb 23 15:40:54 ip-10-0-136-68 rpc.statd[1910]: Version 2.3.3 starting
Feb 23 15:40:54 ip-10-0-136-68 rpc.statd[1910]: Flags: TI-RPC
Feb 23 15:40:54 ip-10-0-136-68 rpc.statd[1910]: Initializing NSM state
Feb 23 15:40:54 ip-10-0-136-68 systemd[1]: Started NFS status monitor for NFSv2/3 locking..
Feb 23 15:41:00 ip-10-0-136-68 chronyd[1534]: Selected source 169.254.169.123
Feb 23 15:41:02 ip-10-0-136-68 sh[1908]: b6b4f5d89be886f7fe1b314e271801bcae46a3912b44c41a3565ca13b6db4e66
Feb 23 15:41:02 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.
Feb 23 15:41:02 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay.mount: Consumed 0 CPU time
Feb 23 15:41:03 ip-10-0-136-68 systemd[1]: Created slice machine.slice.
Feb 23 15:41:03 ip-10-0-136-68 systemd[1]: Started libcontainer container 760eb6ea741a04fb070c375d76006b6bd055ff9269c710da076e32e8b04a6e63.
Feb 23 15:41:03 ip-10-0-136-68 kernel: cgroup: cgroup: disabling cgroup2 socket matching due to net_prio or net_cls activation
Feb 23 15:41:03 ip-10-0-136-68 systemd[1]: libpod-760eb6ea741a04fb070c375d76006b6bd055ff9269c710da076e32e8b04a6e63.scope: Succeeded.
Feb 23 15:41:03 ip-10-0-136-68 systemd[1]: libpod-760eb6ea741a04fb070c375d76006b6bd055ff9269c710da076e32e8b04a6e63.scope: Consumed 42ms CPU time
Feb 23 15:41:03 ip-10-0-136-68 systemd[1]: Started Machine Config Daemon Pull.
Feb 23 15:41:03 ip-10-0-136-68 systemd[1]: Starting Machine Config Daemon Firstboot...
Feb 23 15:41:03 ip-10-0-136-68 sh[2023]: sed: can't read /etc/yum.repos.d/*.repo: No such file or directory
Feb 23 15:41:03 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:03.526507 2025 update.go:2103] Running: systemctl daemon-reload
Feb 23 15:41:03 ip-10-0-136-68 systemd[1]: Reloading.
Feb 23 15:41:03 ip-10-0-136-68 coreos-platform-chrony: /run/coreos-platform-chrony.conf already exists; skipping
Feb 23 15:41:03 ip-10-0-136-68 systemd[1]: /usr/lib/systemd/system/bootupd.service:22: Unknown lvalue 'ProtectHostname' in section 'Service'
Feb 23 15:41:03 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:03.628888 2025 rpm-ostree.go:85] Enabled workaround for bug 2111817
Feb 23 15:41:03 ip-10-0-136-68 systemd[1]: Starting rpm-ostree System Management Daemon...
Feb 23 15:41:03 ip-10-0-136-68 rpm-ostree[2079]: Reading config file '/etc/rpm-ostreed.conf'
Feb 23 15:41:03 ip-10-0-136-68 dbus-daemon[1517]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.32' (uid=0 pid=2079 comm="/usr/bin/rpm-ostree start-daemon " label="system_u:system_r:install_t:s0")
Feb 23 15:41:03 ip-10-0-136-68 systemd[1]: Starting Authorization Manager...
Feb 23 15:41:03 ip-10-0-136-68 polkitd[2083]: Started polkitd version 0.115
Feb 23 15:41:03 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-647504fcbac9f2525a9b17e6a28e563b9c67cce0f92ed75972e94b8125080437-merged.mount: Succeeded.
Feb 23 15:41:03 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-647504fcbac9f2525a9b17e6a28e563b9c67cce0f92ed75972e94b8125080437-merged.mount: Consumed 0 CPU time
Feb 23 15:41:03 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-760eb6ea741a04fb070c375d76006b6bd055ff9269c710da076e32e8b04a6e63-userdata-shm.mount: Succeeded.
Feb 23 15:41:03 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-760eb6ea741a04fb070c375d76006b6bd055ff9269c710da076e32e8b04a6e63-userdata-shm.mount: Consumed 0 CPU time
Feb 23 15:41:03 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.
Feb 23 15:41:03 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay.mount: Consumed 0 CPU time
Feb 23 15:41:03 ip-10-0-136-68 polkitd[2083]: Loading rules from directory /etc/polkit-1/rules.d
Feb 23 15:41:03 ip-10-0-136-68 polkitd[2083]: Loading rules from directory /usr/share/polkit-1/rules.d
Feb 23 15:41:03 ip-10-0-136-68 polkitd[2083]: Finished loading, compiling and executing 3 rules
Feb 23 15:41:03 ip-10-0-136-68 dbus-daemon[1517]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Feb 23 15:41:03 ip-10-0-136-68 systemd[1]: Started Authorization Manager.
Feb 23 15:41:03 ip-10-0-136-68 polkitd[2083]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Feb 23 15:41:03 ip-10-0-136-68 rpm-ostree[2079]: In idle state; will auto-exit in 63 seconds
Feb 23 15:41:03 ip-10-0-136-68 systemd[1]: Started rpm-ostree System Management Daemon.
Feb 23 15:41:03 ip-10-0-136-68 rpm-ostree[2079]: client(id:cli dbus:1.35 unit:machine-config-daemon-firstboot.service uid:0) added; new total=1
Feb 23 15:41:03 ip-10-0-136-68 rpm-ostree[2079]: client(id:cli dbus:1.35 unit:machine-config-daemon-firstboot.service uid:0) vanished; remaining=0
Feb 23 15:41:03 ip-10-0-136-68 rpm-ostree[2079]: In idle state; will auto-exit in 60 seconds
Feb 23 15:41:03 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:03.941953 2025 daemon.go:245] Booted osImageURL: (412.86.202301311551-0) db83d20cf09a263777fcca78594b16da00af8acc245d29cc2a1344abc3f0dac2
Feb 23 15:41:03 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:03.942372 2025 rpm-ostree.go:411] Running captured: rpm-ostree --version
Feb 23 15:41:03 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:03.957644 2025 daemon.go:921] rpm-ostree has container feature
Feb 23 15:41:03 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:03.958421 2025 update.go:2140] Adding SIGTERM protection
Feb 23 15:41:03 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:03.958578 2025 update.go:542] Checking Reconcilable for config mco-empty-mc to rendered-worker-bf4ec4b50e39e8c4372076d80a2ff138
Feb 23 15:41:03 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:03.958949 2025 update.go:2118] Starting update from mco-empty-mc to rendered-worker-bf4ec4b50e39e8c4372076d80a2ff138: &{osUpdate:true kargs:false fips:false passwd:false files:false units:false kernelType:false extensions:false}
Feb 23 15:41:03 ip-10-0-136-68 root[2098]: machine-config-daemon[2025]: Starting update from mco-empty-mc to rendered-worker-bf4ec4b50e39e8c4372076d80a2ff138: &{osUpdate:true kargs:false fips:false passwd:false files:false units:false kernelType:false extensions:false}
Feb 23 15:41:03 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:03.963959 2025 update.go:1244] Updating files
Feb 23 15:41:03 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:03.963974 2025 update.go:1310] Deleting stale data
Feb 23 15:41:04 ip-10-0-136-68 systemd[1]: NetworkManager-dispatcher.service: Succeeded.
Feb 23 15:41:04 ip-10-0-136-68 systemd[1]: NetworkManager-dispatcher.service: Consumed 132ms CPU time
Feb 23 15:41:05 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:05.339918 2025 run.go:19] Running: nice -- ionice -c 3 oc image extract -v 10 --path /:/run/mco-extensions/os-extensions-content-2921041498 --registry-config /var/lib/kubelet/config.json quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94960abf9d2a7d4f8335baa5d2ca47c5bdb1e91a3142d9342c17f05164a12d63
Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.156029 2110 client_mirrored.go:174] Attempting to connect to quay.io/openshift-release-dev/ocp-v4.0-art-dev
Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.158956 2110 round_trippers.go:466] curl -v -XGET -H "User-Agent: oc/4.12.0 (linux/amd64) kubernetes/b05f7d4" 'https://quay.io/v2/'
Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.162448 2110 round_trippers.go:495] HTTP Trace: DNS Lookup for quay.io resolved to [{52.203.129.140 } {52.54.221.77 } {50.19.136.128 } {44.206.77.171 } {3.224.202.81 } {34.200.46.16 } {54.163.152.191 } {34.198.72.3 } {2600:1f18:483:cf00:fb5d:7114:9628:b212 } {2600:1f18:483:cf00:a591:f596:f909:37df } {2600:1f18:483:cf02:d358:8ea3:472d:35d5 } {2600:1f18:483:cf01:5245:3892:89b4:b4bd } {2600:1f18:483:cf02:d0d3:7b3b:6c76:3787 } {2600:1f18:483:cf02:d93d:e5ff:3c4c:1972 } {2600:1f18:483:cf01:2ee4:a0c9:470f:2f20 } {2600:1f18:483:cf01:f2ab:7616:76d4:6fb4 }]
Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.226330 2110 round_trippers.go:510] HTTP Trace: Dial to tcp:52.203.129.140:443 succeed
Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.452380 2110 round_trippers.go:553] GET https://quay.io/v2/ 401 Unauthorized in 293 milliseconds
Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.452399 2110 round_trippers.go:570] HTTP Statistics: DNSLookup 1 ms Dial 63 ms TLSHandshake 147 ms ServerProcessing 78 ms Duration 293 ms
Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.452406 2110 round_trippers.go:577] Response Headers:
Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.452411 2110 round_trippers.go:580] Server: nginx/1.20.1
Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.452417 2110 round_trippers.go:580] Www-Authenticate: Bearer realm="https://quay.io/v2/auth",service="quay.io"
Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.452424 2110 round_trippers.go:580] Docker-Distribution-Api-Version: registry/2.0
Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.452429 2110 round_trippers.go:580] Date: Thu, 23 Feb 2023 15:41:06 GMT
Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.452436 2110 round_trippers.go:580] Content-Type: text/html; charset=utf-8
Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.452441 2110 round_trippers.go:580] Content-Length: 4
Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.453184 2110 round_trippers.go:466] curl -v -XGET -H "Authorization: Basic " -H "User-Agent: oc/4.12.0 (linux/amd64) kubernetes/b05f7d4" 'https://quay.io/v2/auth?account=openshift-release-dev%2Bmnguyenredhatcom1dzxwow96qtmaj3siqhrhenemor&scope=repository%3Aopenshift-release-dev%2Focp-v4.0-art-dev%3Apull&service=quay.io'
Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.579212 2110 round_trippers.go:553] GET https://quay.io/v2/auth?account=openshift-release-dev%2Bmnguyenredhatcom1dzxwow96qtmaj3siqhrhenemor&scope=repository%3Aopenshift-release-dev%2Focp-v4.0-art-dev%3Apull&service=quay.io 200 OK in 126 milliseconds
Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.579256 2110 round_trippers.go:570] HTTP Statistics: GetConnection 0 ms ServerProcessing 125 ms Duration 126 ms
Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.579267 2110 round_trippers.go:577] Response Headers:
Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.579274 2110 round_trippers.go:580] Server: nginx/1.20.1
Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.579284 2110 round_trippers.go:580] Cache-Control: no-cache, no-store, must-revalidate
Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.579289 2110 round_trippers.go:580] X-Frame-Options: DENY
Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.579293 2110 round_trippers.go:580] Strict-Transport-Security: max-age=63072000; preload
Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.579300 2110 round_trippers.go:580] Date: Thu, 23 Feb 2023 15:41:06 GMT
Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.579304 2110 round_trippers.go:580] Content-Type: application/json
Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.579308 2110 round_trippers.go:580] Content-Length: 1246
Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.579383 2110 round_trippers.go:466] curl -v -XGET -H "User-Agent: oc/4.12.0 (linux/amd64) kubernetes/b05f7d4" -H "Accept: application/vnd.docker.distribution.manifest.list.v2+json" -H "Accept: application/vnd.docker.distribution.manifest.v2+json" -H "Accept: application/vnd.oci.image.manifest.v1+json" -H "Authorization: Bearer " 'https://quay.io/v2/openshift-release-dev/ocp-v4.0-art-dev/manifests/sha256:94960abf9d2a7d4f8335baa5d2ca47c5bdb1e91a3142d9342c17f05164a12d63'
Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.677100 2110 round_trippers.go:553] GET https://quay.io/v2/openshift-release-dev/ocp-v4.0-art-dev/manifests/sha256:94960abf9d2a7d4f8335baa5d2ca47c5bdb1e91a3142d9342c17f05164a12d63 200 OK in 97 milliseconds
Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.677128 2110 round_trippers.go:570] HTTP Statistics: GetConnection 0 ms ServerProcessing 97 ms Duration 97 ms
Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.677133 2110 round_trippers.go:577] Response Headers:
Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.677143 2110 round_trippers.go:580] X-Frame-Options: DENY
Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.677148 2110 round_trippers.go:580] Strict-Transport-Security: max-age=63072000; preload
Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.677153 2110 round_trippers.go:580] Date: Thu, 23 Feb 2023 15:41:06 GMT
Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.677157 2110 round_trippers.go:580] Content-Type: application/vnd.docker.distribution.manifest.v2+json
Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.677162 2110 round_trippers.go:580] Content-Length: 759
Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.677166 2110 round_trippers.go:580] Server: nginx/1.20.1
Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.677171 2110 round_trippers.go:580] Docker-Content-Digest: sha256:94960abf9d2a7d4f8335baa5d2ca47c5bdb1e91a3142d9342c17f05164a12d63
Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.677322 2110 client_mirrored.go:412] get manifest for sha256:94960abf9d2a7d4f8335baa5d2ca47c5bdb1e91a3142d9342c17f05164a12d63 served from registryclient.retryManifest{ManifestService:registryclient.manifestServiceVerifier{ManifestService:(*client.manifests)(0xc000d8dec0)}, repo:(*registryclient.retryRepository)(0xc00073af80)}:
Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.677359 2110 client_mirrored.go:174] Attempting to connect to quay.io/openshift-release-dev/ocp-v4.0-art-dev
Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.677448 2110 round_trippers.go:466] curl -v -XGET -H "Accept-Encoding: identity" -H "Authorization: Bearer " -H "User-Agent: oc/4.12.0 (linux/amd64) kubernetes/b05f7d4" 'https://quay.io/v2/openshift-release-dev/ocp-v4.0-art-dev/blobs/sha256:5b1074a0dda6351b90379dca3d3921dc98b700cf30e3a54adae4533c64a8214e'
Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.760073 2110 round_trippers.go:553] GET https://quay.io/v2/openshift-release-dev/ocp-v4.0-art-dev/blobs/sha256:5b1074a0dda6351b90379dca3d3921dc98b700cf30e3a54adae4533c64a8214e 302 Found in 82 milliseconds
Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.760087 2110 round_trippers.go:570] HTTP Statistics: GetConnection 0 ms ServerProcessing 82 ms Duration 82 ms
Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.760092 2110 round_trippers.go:577] Response Headers:
Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.760101 2110 round_trippers.go:580] Date: Thu, 23 Feb 2023 15:41:06 GMT
Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.760107 2110 round_trippers.go:580] Content-Length: 1463
Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.760114 2110 round_trippers.go:580] Location: https://cdn02.quay.io/sha256/5b/5b1074a0dda6351b90379dca3d3921dc98b700cf30e3a54adae4533c64a8214e?username=openshift-release-dev%2Bmnguyenredhatcom1dzxwow96qtmaj3siqhrhenemor&namespace=openshift-release-dev&Expires=1677167466&Signature=Z125RPWLtxzwagDzT6T0-diLNBFZ8VdFVMKsWVXYQ-oMLc~Pa2eoyFr0bNOfg1FJsh0ryT2BKhnO8gRwzJnQb7Bl7v7WvJlTgJ8QpnnINLYKqxUdNWBma~EI3nK-z~OuX5mtOSAsxIUsq2b1MZxB3kFdCc7iwEJDAGUDqB3nd2mBjCNoiAPqb~aIxghipro45Q3cV1eBTfNX9S5PEdmNg66NaK~7Ghh7oMLnMiaKkw7IUkkUsfv1m7NfUz~0sY~Hm1fFRgjdBVOdAx-UDES~eEUza0uhuPu039PS0HvKE00mBb07476H-axu1OWfUCq5weK-mUrYY4BMjWat2Va5xg__&Key-Pair-Id=APKAJ67PQLWGCSP66DGA
Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.760129 2110 round_trippers.go:580] Server: nginx/1.20.1
Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.760134 2110 round_trippers.go:580] X-Frame-Options: DENY
Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.760142 2110 round_trippers.go:580] Content-Type: text/html; charset=utf-8
Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.760158 2110 round_trippers.go:580] Docker-Content-Digest: sha256:5b1074a0dda6351b90379dca3d3921dc98b700cf30e3a54adae4533c64a8214e
Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.760164 2110 round_trippers.go:580] Accept-Ranges: bytes
Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.760168 2110 round_trippers.go:580] Cache-Control: max-age=31536000
Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.760172 2110 round_trippers.go:580] Strict-Transport-Security: max-age=63072000; preload
Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.760212 2110 round_trippers.go:466] curl -v -XGET -H "Accept-Encoding: identity" -H "Referer: https://quay.io/v2/openshift-release-dev/ocp-v4.0-art-dev/blobs/sha256:5b1074a0dda6351b90379dca3d3921dc98b700cf30e3a54adae4533c64a8214e" -H "User-Agent: oc/4.12.0 (linux/amd64) kubernetes/b05f7d4" 'https://cdn02.quay.io/sha256/5b/5b1074a0dda6351b90379dca3d3921dc98b700cf30e3a54adae4533c64a8214e?username=openshift-release-dev%2Bmnguyenredhatcom1dzxwow96qtmaj3siqhrhenemor&namespace=openshift-release-dev&Expires=1677167466&Signature=Z125RPWLtxzwagDzT6T0-diLNBFZ8VdFVMKsWVXYQ-oMLc~Pa2eoyFr0bNOfg1FJsh0ryT2BKhnO8gRwzJnQb7Bl7v7WvJlTgJ8QpnnINLYKqxUdNWBma~EI3nK-z~OuX5mtOSAsxIUsq2b1MZxB3kFdCc7iwEJDAGUDqB3nd2mBjCNoiAPqb~aIxghipro45Q3cV1eBTfNX9S5PEdmNg66NaK~7Ghh7oMLnMiaKkw7IUkkUsfv1m7NfUz~0sY~Hm1fFRgjdBVOdAx-UDES~eEUza0uhuPu039PS0HvKE00mBb07476H-axu1OWfUCq5weK-mUrYY4BMjWat2Va5xg__&Key-Pair-Id=APKAJ67PQLWGCSP66DGA'
Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.760776 2110 round_trippers.go:495] HTTP Trace: DNS Lookup for cdn02.quay.io resolved to [{99.84.66.73 } {99.84.66.104 } {99.84.66.57 } {99.84.66.31 } {2600:9000:2163:6c00:19:165c:1440:93a1 } {2600:9000:2163:8a00:19:165c:1440:93a1 } {2600:9000:2163:7000:19:165c:1440:93a1 } {2600:9000:2163:2800:19:165c:1440:93a1 } {2600:9000:2163:1800:19:165c:1440:93a1 } {2600:9000:2163:6200:19:165c:1440:93a1 } {2600:9000:2163:8200:19:165c:1440:93a1 } {2600:9000:2163:e200:19:165c:1440:93a1 }]
Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.766843 2110 round_trippers.go:510] HTTP Trace: Dial to tcp:99.84.66.73:443 succeed
Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.785943 2110 round_trippers.go:553] GET
https://cdn02.quay.io/sha256/5b/5b1074a0dda6351b90379dca3d3921dc98b700cf30e3a54adae4533c64a8214e?username=openshift-release-dev%2Bmnguyenredhatcom1dzxwow96qtmaj3siqhrhenemor&namespace=openshift-release-dev&Expires=1677167466&Signature=Z125RPWLtxzwagDzT6T0-diLNBFZ8VdFVMKsWVXYQ-oMLc~Pa2eoyFr0bNOfg1FJsh0ryT2BKhnO8gRwzJnQb7Bl7v7WvJlTgJ8QpnnINLYKqxUdNWBma~EI3nK-z~OuX5mtOSAsxIUsq2b1MZxB3kFdCc7iwEJDAGUDqB3nd2mBjCNoiAPqb~aIxghipro45Q3cV1eBTfNX9S5PEdmNg66NaK~7Ghh7oMLnMiaKkw7IUkkUsfv1m7NfUz~0sY~Hm1fFRgjdBVOdAx-UDES~eEUza0uhuPu039PS0HvKE00mBb07476H-axu1OWfUCq5weK-mUrYY4BMjWat2Va5xg__&Key-Pair-Id=APKAJ67PQLWGCSP66DGA 200 OK in 25 milliseconds Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.785958 2110 round_trippers.go:570] HTTP Statistics: DNSLookup 0 ms Dial 6 ms TLSHandshake 9 ms ServerProcessing 9 ms Duration 25 ms Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.785965 2110 round_trippers.go:577] Response Headers: Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.785972 2110 round_trippers.go:580] Etag: "e94acdb6d325f1500de7000d66c76216-1" Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.785980 2110 round_trippers.go:580] X-Cache: Hit from cloudfront Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.785990 2110 round_trippers.go:580] X-Amz-Cf-Id: 0AAiv6p8sbfJ4xnaiyY4wzdELKz3y055k7h1QHfJpbqoiFElhNYgyg== Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.785997 2110 round_trippers.go:580] X-Amz-Version-Id: aneB9.Uuf3TujHGaaHNAgr1bTC70aFuv Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.786005 2110 round_trippers.go:580] X-Amz-Cf-Pop: HIO50-C1 Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.786016 2110 round_trippers.go:580] Via: 1.1 4f87745990545c1ac0195c157e1668f8.cloudfront.net (CloudFront) Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 
15:41:06.786025 2110 round_trippers.go:580] Content-Type: binary/octet-stream Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.786039 2110 round_trippers.go:580] Date: Fri, 17 Feb 2023 06:56:13 GMT Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.786051 2110 round_trippers.go:580] X-Amz-Replication-Status: COMPLETED Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.786058 2110 round_trippers.go:580] Last-Modified: Fri, 17 Feb 2023 05:58:07 GMT Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.786065 2110 round_trippers.go:580] Accept-Ranges: bytes Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.786076 2110 round_trippers.go:580] Content-Length: 7554 Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.786086 2110 round_trippers.go:580] X-Amz-Server-Side-Encryption: AES256 Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.786098 2110 round_trippers.go:580] Server: AmazonS3 Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.786108 2110 round_trippers.go:580] Age: 549893 Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.786175 2110 client_mirrored.go:445] get for sha256:5b1074a0dda6351b90379dca3d3921dc98b700cf30e3a54adae4533c64a8214e served from openshift-release-dev/ocp-v4.0-art-dev: Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.787681 2110 manifest.go:319] Raw image config json: Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: 
{"created":"2023-02-17T03:21:12.741931194Z","architecture":"amd64","os":"linux","config":{"ExposedPorts":{"9091/tcp":{}},"Env":["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin","container=oci"],"Cmd":["./usr/bin/webserver"],"Labels":{"architecture":"x86_64","build-date":"2023-02-07T16:24:49","com.redhat.component":"ubi9-container","com.redhat.license_terms":"https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI","description":"The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.","distribution-scope":"public","io.buildah.version":"1.28.0","io.k8s.description":"The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly.","io.k8s.display-name":"Red Hat Universal Base Image 9","io.openshift.expose-services":"","io.openshift.tags":"base rhel9","maintainer":"Red Hat, Inc.","name":"ubi9","release":"1750.1675784955","summary":"Provides the latest release of Red Hat Universal Base Image 9.","url":"https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.1.0-1750.1675784955","vcs-ref":"cf87ad00feaef3d9d7a442dad55ab6a14f6a3f81","vcs-type":"git","vendor":"Red Hat, Inc.","version":"412.86.202302170236-0"}},"rootfs":{"type":"layers","diff_ids":["sha256:17e4bf9b37cb91b309570030506a32773e48e3f055d915eb92adad9dabaeb38e","sha256:d6c7857e32f77b97c3634011d06f1bdde27c63962258ceb0c4d253820990cb07","sha256:8c175f4ba3fe3ed1a33090a7b91586267145b9fe10caa77436c5f9ce40112f34"]},"history":[{"created":"2023-02-07T16:57:16.001856542Z","created_by":"/bin/sh -c #(nop) ADD file:da78e7c5cd6719890e467b4bd291df2c606a754b0aa9a2f1d79e3fc0b50e2b75 in / ","empty_layer":true},{"created":"2023-02-07T16:57:16.690811836Z","created_by":"/bin/sh -c mv -f /etc/yum.repos.d/ubi.repo /tmp || :","empty_layer":true},{"created":"2023-02-07T16:57:16.972770212Z","created_by":"/bin/sh -c #(nop) ADD file:214c1de395c24e4a86ef9a706069ef30a9e804c63f851c37c35655e16fea3ced in /tmp/tls-ca-bundle.pem ","empty_layer":true},{"created":"2023-02-07T16:57:17.309810122Z","created_by":"/bin/sh -c #(nop) ADD multi:6893bb0509c7aae7bc271b3e27ee01082fe34bd3f5e8d8e4ad49d547e73ac56f in /etc/yum.repos.d/ ","empty_layer":true},{"created":"2023-02-07T16:57:17.309862529Z","created_by":"/bin/sh -c #(nop) LABEL maintainer=\"Red Hat, Inc.\"","empty_layer":true},{"created":"2023-02-07T16:57:17.309970093Z","created_by":"/bin/sh -c #(nop) LABEL com.redhat.component=\"ubi9-container\" name=\"ubi9\" version=\"9.1.0\"","empty_layer":true},{"created":"2023-02-07T16:57:17.310012659Z","created_by":"/bin/sh -c #(nop) LABEL 
com.redhat.license_terms=\"https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI\"","empty_layer":true},{"created":"2023-02-07T16:57:17.310048941Z","created_by":"/bin/sh -c #(nop) LABEL summary=\"Provides the latest release of Red Hat Universal Base Image 9.\"","empty_layer":true},{"created":"2023-02-07T16:57:17.310147568Z","created_by":"/bin/sh -c #(nop) LABEL description=\"The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.\"","empty_layer":true},{"created":"2023-02-07T16:57:17.310191154Z","created_by":"/bin/sh -c #(nop) LABEL io.k8s.display-name=\"Red Hat Universal Base Image 9\"","empty_layer":true},{"created":"2023-02-07T16:57:17.310222491Z","created_by":"/bin/sh -c #(nop) LABEL io.openshift.expose-services=\"\"","empty_layer":true},{"created":"2023-02-07T16:57:17.310244918Z","created_by":"/bin/sh -c #(nop) LABEL io.openshift.tags=\"base rhel9\"","empty_layer":true},{"created":"2023-02-07T16:57:17.310268915Z","created_by":"/bin/sh -c #(nop) ENV container oci","empty_layer":true},{"created":"2023-02-07T16:57:17.310340001Z","created_by":"/bin/sh -c #(nop) ENV PATH /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin","empty_layer":true},{"created":"2023-02-07T16:57:17.310351744Z","created_by":"/bin/sh -c #(nop) CMD [\"/bin/bash\"]","empty_layer":true},{"created":"2023-02-07T16:57:18.004068801Z","created_by":"/bin/sh -c rm -rf /var/log/*","empty_layer":true},{"created":"2023-02-07T16:57:18.707878621Z","created_by":"/bin/sh -c mkdir -p /var/log/rhsm","empty_layer":true},{"created":"2023-02-07T16:57:19.000970028Z","created_by":"/bin/sh -c #(nop) ADD file:88a9fd2336e4b0772f4146ae42db4636bb188b5e838b3cba3d227f5aa78da85a in 
/root/buildinfo/content_manifests/ubi9-container-9.1.0-1750.1675784955.json ","empty_layer":true},{"created":"2023-02-07T16:57:19.290964736Z","created_by":"/bin/sh -c #(nop) ADD file:cb5c5d9f71a5c35705bb66fbe6ecaf519d9d6fa926a84326e55447ecad24fba1 in /root/buildinfo/Dockerfile-ubi9-9.1.0-1750.1675784955 ","empty_layer":true},{"created":"2023-02-07T16:57:19.291218936Z","created_by":"/bin/sh -c #(nop) LABEL \"release\"=\"1750.1675784955\" \"distribution-scope\"=\"public\" \"vendor\"=\"Red Hat, Inc.\" \"build-date\"=\"2023-02-07T16:24:49\" \"architecture\"=\"x86_64\" \"vcs-type\"=\"git\" \"vcs-ref\"=\"cf87ad00feaef3d9d7a442dad55ab6a14f6a3f81\" \"io.k8s.description\"=\"The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.\" \"url\"=\"https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.1.0-1750.1675784955\"","empty_layer":true},{"created":"2023-02-07T16:57:19.986849255Z","created_by":"/bin/sh -c rm -f '/etc/yum.repos.d/odcs-1774985-d0e8e.repo' '/etc/yum.repos.d/gitweb-1077d.repo'","empty_layer":true},{"created":"2023-02-07T16:57:20.688573778Z","created_by":"/bin/sh -c rm -f /tmp/tls-ca-bundle.pem","empty_layer":true},{"created":"2023-02-07T16:57:22.933099761Z","created_by":"/bin/sh -c mv -fZ /tmp/ubi.repo /etc/yum.repos.d/ubi.repo || :"},{"created":"2023-02-17T03:21:03.32457121Z","created_by":"/bin/sh -c #(nop) COPY file:742cc6702ca7b15b82936a9cef75cd84a0fcc02f92b93dd5a058c745c50e64df in /usr/bin/webserver ","comment":"FROM registry.access.redhat.com/ubi9/ubi:latest"},{"created":"2023-02-17T03:21:09.83790263Z","created_by":"/bin/sh -c #(nop) COPY dir:d7a6ea63ace54590a47eafc51af50a532852c847c230bc2cdfb89853b124aaef in /usr/share/rpm-ostree/extensions/ 
","comment":"FROM a0b99cdedab5"},{"created":"2023-02-17T03:21:12.698095091Z","created_by":"/bin/sh -c #(nop) CMD [\"./usr/bin/webserver\"]","comment":"FROM 7c6c1d93b0ee","empty_layer":true},{"created":"2023-02-17T03:21:12.722592723Z","created_by":"/bin/sh -c #(nop) EXPOSE 9091/tcp","comment":"FROM 7e55995514c8","empty_layer":true},{"created":"2023-02-17T03:21:12.742279227Z","created_by":"/bin/sh -c #(nop) LABEL \"version\"=\"412.86.202302170236-0\"","comment":"FROM 0573ede07ab8","empty_layer":true}]} Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.787927 2110 extract.go:487] Extracting from layer: distribution.Descriptor{MediaType:"application/vnd.docker.image.rootfs.diff.tar.gzip", Size:78990287, Digest:"sha256:bb18ae335fc282652ff9a9504413db8ff2f4864cc5d8c224b2359f446aad67c3", URLs:[]string(nil), Annotations:map[string]string(nil), Platform:(*v1.Platform)(nil)} Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.787945 2110 client_mirrored.go:174] Attempting to connect to quay.io/openshift-release-dev/ocp-v4.0-art-dev Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.788000 2110 round_trippers.go:466] curl -v -XGET -H "Accept-Encoding: identity" -H "Authorization: Bearer " -H "User-Agent: oc/4.12.0 (linux/amd64) kubernetes/b05f7d4" 'https://quay.io/v2/openshift-release-dev/ocp-v4.0-art-dev/blobs/sha256:bb18ae335fc282652ff9a9504413db8ff2f4864cc5d8c224b2359f446aad67c3' Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.876651 2110 round_trippers.go:553] GET https://quay.io/v2/openshift-release-dev/ocp-v4.0-art-dev/blobs/sha256:bb18ae335fc282652ff9a9504413db8ff2f4864cc5d8c224b2359f446aad67c3 302 Found in 88 milliseconds Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.876668 2110 round_trippers.go:570] HTTP Statistics: GetConnection 0 ms ServerProcessing 88 ms Duration 88 ms Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 
15:41:06.876676 2110 round_trippers.go:577] Response Headers: Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.876691 2110 round_trippers.go:580] Date: Thu, 23 Feb 2023 15:41:06 GMT Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.876703 2110 round_trippers.go:580] Content-Type: text/html; charset=utf-8 Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.876710 2110 round_trippers.go:580] Content-Length: 1463 Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.876722 2110 round_trippers.go:580] Cache-Control: max-age=31536000 Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.876730 2110 round_trippers.go:580] Strict-Transport-Security: max-age=63072000; preload Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.876741 2110 round_trippers.go:580] Location: https://cdn02.quay.io/sha256/bb/bb18ae335fc282652ff9a9504413db8ff2f4864cc5d8c224b2359f446aad67c3?username=openshift-release-dev%2Bmnguyenredhatcom1dzxwow96qtmaj3siqhrhenemor&namespace=openshift-release-dev&Expires=1677167466&Signature=NS0PQsUmvUHd79h6JJwDfzwnqRyWkhrWYN82w5BIRncYU-BFlnQ61sVPKEPspvvnlXF1IkM~kc0qgkkF~FVJQhq1CS9rxcy536zPUCDlkBD3v7EDc1zFvyZ09q4cMjRLDHCSkvxTpr~BhWy2yZnaB91Quf~3QgEvbzjnGyph9eVgm6SWC0MCvOtf5awuNckMWO5k4pQ1ko0fQT-tICNxox3ASfmk98BFDak~9n8sP1~HEJNqZj7zqx8eBLtT193CbHDCyFsQiz093733atcYxwCubS1d4OLK4ZHfF6G~Wz2uvf6BMiUPMSvm-gKnrzLxhc~dIYZbWrTDIG2P2-D2~Q__&Key-Pair-Id=APKAJ67PQLWGCSP66DGA Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.876762 2110 round_trippers.go:580] Server: nginx/1.20.1 Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.876773 2110 round_trippers.go:580] Docker-Content-Digest: sha256:bb18ae335fc282652ff9a9504413db8ff2f4864cc5d8c224b2359f446aad67c3 Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.876782 2110 round_trippers.go:580] Accept-Ranges: bytes Feb 23 
15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.876791 2110 round_trippers.go:580] X-Frame-Options: DENY Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.876832 2110 round_trippers.go:466] curl -v -XGET -H "Accept-Encoding: identity" -H "Referer: https://quay.io/v2/openshift-release-dev/ocp-v4.0-art-dev/blobs/sha256:bb18ae335fc282652ff9a9504413db8ff2f4864cc5d8c224b2359f446aad67c3" -H "User-Agent: oc/4.12.0 (linux/amd64) kubernetes/b05f7d4" 'https://cdn02.quay.io/sha256/bb/bb18ae335fc282652ff9a9504413db8ff2f4864cc5d8c224b2359f446aad67c3?username=openshift-release-dev%2Bmnguyenredhatcom1dzxwow96qtmaj3siqhrhenemor&namespace=openshift-release-dev&Expires=1677167466&Signature=NS0PQsUmvUHd79h6JJwDfzwnqRyWkhrWYN82w5BIRncYU-BFlnQ61sVPKEPspvvnlXF1IkM~kc0qgkkF~FVJQhq1CS9rxcy536zPUCDlkBD3v7EDc1zFvyZ09q4cMjRLDHCSkvxTpr~BhWy2yZnaB91Quf~3QgEvbzjnGyph9eVgm6SWC0MCvOtf5awuNckMWO5k4pQ1ko0fQT-tICNxox3ASfmk98BFDak~9n8sP1~HEJNqZj7zqx8eBLtT193CbHDCyFsQiz093733atcYxwCubS1d4OLK4ZHfF6G~Wz2uvf6BMiUPMSvm-gKnrzLxhc~dIYZbWrTDIG2P2-D2~Q__&Key-Pair-Id=APKAJ67PQLWGCSP66DGA' Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.884079 2110 round_trippers.go:553] GET https://cdn02.quay.io/sha256/bb/bb18ae335fc282652ff9a9504413db8ff2f4864cc5d8c224b2359f446aad67c3?username=openshift-release-dev%2Bmnguyenredhatcom1dzxwow96qtmaj3siqhrhenemor&namespace=openshift-release-dev&Expires=1677167466&Signature=NS0PQsUmvUHd79h6JJwDfzwnqRyWkhrWYN82w5BIRncYU-BFlnQ61sVPKEPspvvnlXF1IkM~kc0qgkkF~FVJQhq1CS9rxcy536zPUCDlkBD3v7EDc1zFvyZ09q4cMjRLDHCSkvxTpr~BhWy2yZnaB91Quf~3QgEvbzjnGyph9eVgm6SWC0MCvOtf5awuNckMWO5k4pQ1ko0fQT-tICNxox3ASfmk98BFDak~9n8sP1~HEJNqZj7zqx8eBLtT193CbHDCyFsQiz093733atcYxwCubS1d4OLK4ZHfF6G~Wz2uvf6BMiUPMSvm-gKnrzLxhc~dIYZbWrTDIG2P2-D2~Q__&Key-Pair-Id=APKAJ67PQLWGCSP66DGA 200 OK in 7 milliseconds Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.884090 2110 round_trippers.go:570] HTTP Statistics: GetConnection 0 
ms ServerProcessing 7 ms Duration 7 ms Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.884095 2110 round_trippers.go:577] Response Headers: Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.884100 2110 round_trippers.go:580] Etag: "712213b6a7ede4720d432d8298a09d64-1" Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.884104 2110 round_trippers.go:580] Via: 1.1 4f87745990545c1ac0195c157e1668f8.cloudfront.net (CloudFront) Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.884112 2110 round_trippers.go:580] Date: Wed, 15 Feb 2023 13:32:57 GMT Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.884124 2110 round_trippers.go:580] X-Amz-Cf-Pop: HIO50-C1 Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.884135 2110 round_trippers.go:580] X-Amz-Replication-Status: COMPLETED Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.884147 2110 round_trippers.go:580] X-Amz-Storage-Class: INTELLIGENT_TIERING Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.884159 2110 round_trippers.go:580] X-Amz-Version-Id: XjlxLywSUyE72cUZCpsHlYqPXR94Fo2a Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.884172 2110 round_trippers.go:580] X-Cache: Hit from cloudfront Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.884184 2110 round_trippers.go:580] Age: 698890 Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.884192 2110 round_trippers.go:580] Content-Length: 78990287 Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.884204 2110 round_trippers.go:580] Last-Modified: Thu, 09 Feb 2023 12:55:38 GMT Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.884229 2110 round_trippers.go:580] X-Amz-Server-Side-Encryption: AES256 Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: 
I0223 15:41:06.884242 2110 round_trippers.go:580] Accept-Ranges: bytes Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.884250 2110 round_trippers.go:580] Server: AmazonS3 Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.884261 2110 round_trippers.go:580] X-Amz-Cf-Id: h0F0lcFdrG3tjBt9YEJyEh9cHp_ZhUIWzERQc2mVrDL_4z_G75K14A== Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.884273 2110 round_trippers.go:580] Content-Type: binary/octet-stream Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.884285 2110 client_mirrored.go:485] open (read) sha256:bb18ae335fc282652ff9a9504413db8ff2f4864cc5d8c224b2359f446aad67c3 from openshift-release-dev/ocp-v4.0-art-dev: Feb 23 15:41:06 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:06.884297 2110 extract.go:509] Extracting layer sha256:bb18ae335fc282652ff9a9504413db8ff2f4864cc5d8c224b2359f446aad67c3 with options &archive.TarOptions{IncludeFiles:[]string(nil), ExcludePatterns:[]string(nil), Compression:0, NoLchown:false, ChownOpts:(*idtools.Identity)(nil), IncludeSourceDir:false, WhiteoutFormat:0, NoOverwriteDirNonDir:false, RebaseNames:map[string]string(nil), InUserNS:false, Chown:false, AlterHeaders:extract.alterations{extract.removePermissions{}}} Feb 23 15:41:08 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:08.767049 2110 extract.go:487] Extracting from layer: distribution.Descriptor{MediaType:"application/vnd.docker.image.rootfs.diff.tar.gzip", Size:3735217, Digest:"sha256:c14f91a846315818359b63ea9ab5bd4edf41af0afe08bf4a53e652a18867f580", URLs:[]string(nil), Annotations:map[string]string(nil), Platform:(*v1.Platform)(nil)} Feb 23 15:41:08 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:08.767088 2110 client_mirrored.go:174] Attempting to connect to quay.io/openshift-release-dev/ocp-v4.0-art-dev Feb 23 15:41:08 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:08.767179 2110 round_trippers.go:466] 
curl -v -XGET -H "Accept-Encoding: identity" -H "Authorization: Bearer " -H "User-Agent: oc/4.12.0 (linux/amd64) kubernetes/b05f7d4" 'https://quay.io/v2/openshift-release-dev/ocp-v4.0-art-dev/blobs/sha256:c14f91a846315818359b63ea9ab5bd4edf41af0afe08bf4a53e652a18867f580' Feb 23 15:41:08 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:08.852832 2110 round_trippers.go:553] GET https://quay.io/v2/openshift-release-dev/ocp-v4.0-art-dev/blobs/sha256:c14f91a846315818359b63ea9ab5bd4edf41af0afe08bf4a53e652a18867f580 302 Found in 85 milliseconds Feb 23 15:41:08 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:08.852849 2110 round_trippers.go:570] HTTP Statistics: GetConnection 0 ms ServerProcessing 85 ms Duration 85 ms Feb 23 15:41:08 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:08.852855 2110 round_trippers.go:577] Response Headers: Feb 23 15:41:08 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:08.852860 2110 round_trippers.go:580] Date: Thu, 23 Feb 2023 15:41:08 GMT Feb 23 15:41:08 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:08.852865 2110 round_trippers.go:580] Location: https://cdn02.quay.io/sha256/c1/c14f91a846315818359b63ea9ab5bd4edf41af0afe08bf4a53e652a18867f580?username=openshift-release-dev%2Bmnguyenredhatcom1dzxwow96qtmaj3siqhrhenemor&namespace=openshift-release-dev&Expires=1677167468&Signature=Sc9RaBNbo7ncINqr8Q~ROAqFyvhoJRS2HwXUx1mYUbuk2Pq4DP33hSdoBxjXojoU1yOkRR8V~o6j0PHQs-Wmqtn6Ust1TUJwwFYF2xHHyQ8yEEqSn-odm5Wj2vtxk7oJC7~PzppsBhhVhWPGMw4xfCf20eClSKO37kyLghuFmUjEO0vTNcj2gBa7HWudPulKpULFEfcZ~kNaFuF0fDTvIFFDyiNtTSZFuTvvdudmn11DwYjUGfYdnr4J5hSq8CcgzmiqHakOXZpRVkMiMHc6RUscQHR7AY-a~ZnIdH1y2LQwhdD0Fq7Bo-q3cfrzK1HrBLG0Y6vMW1qV3VwozAkYHg__&Key-Pair-Id=APKAJ67PQLWGCSP66DGA Feb 23 15:41:08 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:08.852870 2110 round_trippers.go:580] Server: nginx/1.20.1 Feb 23 15:41:08 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:08.852875 2110 round_trippers.go:580] 
Docker-Content-Digest: sha256:c14f91a846315818359b63ea9ab5bd4edf41af0afe08bf4a53e652a18867f580 Feb 23 15:41:08 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:08.852882 2110 round_trippers.go:580] Cache-Control: max-age=31536000 Feb 23 15:41:08 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:08.852889 2110 round_trippers.go:580] Content-Type: text/html; charset=utf-8 Feb 23 15:41:08 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:08.852896 2110 round_trippers.go:580] Content-Length: 1463 Feb 23 15:41:08 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:08.852903 2110 round_trippers.go:580] Accept-Ranges: bytes Feb 23 15:41:08 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:08.852910 2110 round_trippers.go:580] X-Frame-Options: DENY Feb 23 15:41:08 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:08.852916 2110 round_trippers.go:580] Strict-Transport-Security: max-age=63072000; preload Feb 23 15:41:08 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:08.852957 2110 round_trippers.go:466] curl -v -XGET -H "Accept-Encoding: identity" -H "Referer: https://quay.io/v2/openshift-release-dev/ocp-v4.0-art-dev/blobs/sha256:c14f91a846315818359b63ea9ab5bd4edf41af0afe08bf4a53e652a18867f580" -H "User-Agent: oc/4.12.0 (linux/amd64) kubernetes/b05f7d4" 'https://cdn02.quay.io/sha256/c1/c14f91a846315818359b63ea9ab5bd4edf41af0afe08bf4a53e652a18867f580?username=openshift-release-dev%2Bmnguyenredhatcom1dzxwow96qtmaj3siqhrhenemor&namespace=openshift-release-dev&Expires=1677167468&Signature=Sc9RaBNbo7ncINqr8Q~ROAqFyvhoJRS2HwXUx1mYUbuk2Pq4DP33hSdoBxjXojoU1yOkRR8V~o6j0PHQs-Wmqtn6Ust1TUJwwFYF2xHHyQ8yEEqSn-odm5Wj2vtxk7oJC7~PzppsBhhVhWPGMw4xfCf20eClSKO37kyLghuFmUjEO0vTNcj2gBa7HWudPulKpULFEfcZ~kNaFuF0fDTvIFFDyiNtTSZFuTvvdudmn11DwYjUGfYdnr4J5hSq8CcgzmiqHakOXZpRVkMiMHc6RUscQHR7AY-a~ZnIdH1y2LQwhdD0Fq7Bo-q3cfrzK1HrBLG0Y6vMW1qV3VwozAkYHg__&Key-Pair-Id=APKAJ67PQLWGCSP66DGA' Feb 23 15:41:08 ip-10-0-136-68 machine-config-daemon[2025]: I0223 
15:41:08.873445 2110 round_trippers.go:553] GET https://cdn02.quay.io/sha256/c1/c14f91a846315818359b63ea9ab5bd4edf41af0afe08bf4a53e652a18867f580?username=openshift-release-dev%2Bmnguyenredhatcom1dzxwow96qtmaj3siqhrhenemor&namespace=openshift-release-dev&Expires=1677167468&Signature=Sc9RaBNbo7ncINqr8Q~ROAqFyvhoJRS2HwXUx1mYUbuk2Pq4DP33hSdoBxjXojoU1yOkRR8V~o6j0PHQs-Wmqtn6Ust1TUJwwFYF2xHHyQ8yEEqSn-odm5Wj2vtxk7oJC7~PzppsBhhVhWPGMw4xfCf20eClSKO37kyLghuFmUjEO0vTNcj2gBa7HWudPulKpULFEfcZ~kNaFuF0fDTvIFFDyiNtTSZFuTvvdudmn11DwYjUGfYdnr4J5hSq8CcgzmiqHakOXZpRVkMiMHc6RUscQHR7AY-a~ZnIdH1y2LQwhdD0Fq7Bo-q3cfrzK1HrBLG0Y6vMW1qV3VwozAkYHg__&Key-Pair-Id=APKAJ67PQLWGCSP66DGA 200 OK in 20 milliseconds Feb 23 15:41:08 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:08.873468 2110 round_trippers.go:570] HTTP Statistics: GetConnection 0 ms ServerProcessing 20 ms Duration 20 ms Feb 23 15:41:08 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:08.873476 2110 round_trippers.go:577] Response Headers: Feb 23 15:41:08 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:08.873485 2110 round_trippers.go:580] Content-Type: binary/octet-stream Feb 23 15:41:08 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:08.873503 2110 round_trippers.go:580] X-Amz-Server-Side-Encryption: AES256 Feb 23 15:41:08 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:08.873510 2110 round_trippers.go:580] X-Cache: Hit from cloudfront Feb 23 15:41:08 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:08.873516 2110 round_trippers.go:580] Via: 1.1 4f87745990545c1ac0195c157e1668f8.cloudfront.net (CloudFront) Feb 23 15:41:08 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:08.873523 2110 round_trippers.go:580] X-Amz-Cf-Pop: HIO50-C1 Feb 23 15:41:08 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:08.873530 2110 round_trippers.go:580] X-Amz-Cf-Id: S6qUHSI9NpLvVMi7ve3owfftrmpwfAwXIhP-oicanrb77Vkf0_O23g== Feb 23 15:41:08 ip-10-0-136-68 machine-config-daemon[2025]: I0223 
15:41:08.873537 2110 round_trippers.go:580] Content-Length: 3735217 Feb 23 15:41:08 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:08.873543 2110 round_trippers.go:580] X-Amz-Replication-Status: COMPLETED Feb 23 15:41:08 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:08.873550 2110 round_trippers.go:580] Accept-Ranges: bytes Feb 23 15:41:08 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:08.873557 2110 round_trippers.go:580] Server: AmazonS3 Feb 23 15:41:08 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:08.873564 2110 round_trippers.go:580] Last-Modified: Fri, 17 Feb 2023 05:57:50 GMT Feb 23 15:41:08 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:08.873571 2110 round_trippers.go:580] Etag: "b04f9565216ecc2f3c2e2e96b7711299-1" Feb 23 15:41:08 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:08.873578 2110 round_trippers.go:580] X-Amz-Version-Id: AdT5HLRlpgXVsCEgHGnc2EXt701UqtR5 Feb 23 15:41:08 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:08.873585 2110 round_trippers.go:580] Age: 549361 Feb 23 15:41:08 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:08.873591 2110 round_trippers.go:580] Date: Fri, 17 Feb 2023 07:05:08 GMT Feb 23 15:41:08 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:08.873602 2110 client_mirrored.go:485] open (read) sha256:c14f91a846315818359b63ea9ab5bd4edf41af0afe08bf4a53e652a18867f580 from openshift-release-dev/ocp-v4.0-art-dev: Feb 23 15:41:08 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:08.873616 2110 extract.go:509] Extracting layer sha256:c14f91a846315818359b63ea9ab5bd4edf41af0afe08bf4a53e652a18867f580 with options &archive.TarOptions{IncludeFiles:[]string(nil), ExcludePatterns:[]string(nil), Compression:0, NoLchown:false, ChownOpts:(*idtools.Identity)(nil), IncludeSourceDir:false, WhiteoutFormat:0, NoOverwriteDirNonDir:false, RebaseNames:map[string]string(nil), InUserNS:false, Chown:false, AlterHeaders:extract.alterations{extract.removePermissions{}}} 
Feb 23 15:41:09 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:09.053808 2110 extract.go:487] Extracting from layer: distribution.Descriptor{MediaType:"application/vnd.docker.image.rootfs.diff.tar.gzip", Size:228640823, Digest:"sha256:7deae50062a94ea127b2a095eecdb3cd1983afe3a072414f9aa480f772cef374", URLs:[]string(nil), Annotations:map[string]string(nil), Platform:(*v1.Platform)(nil)}
Feb 23 15:41:09 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:09.053840 2110 client_mirrored.go:174] Attempting to connect to quay.io/openshift-release-dev/ocp-v4.0-art-dev
Feb 23 15:41:09 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:09.053932 2110 round_trippers.go:466] curl -v -XGET -H "Accept-Encoding: identity" -H "Authorization: Bearer " -H "User-Agent: oc/4.12.0 (linux/amd64) kubernetes/b05f7d4" 'https://quay.io/v2/openshift-release-dev/ocp-v4.0-art-dev/blobs/sha256:7deae50062a94ea127b2a095eecdb3cd1983afe3a072414f9aa480f772cef374'
Feb 23 15:41:09 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:09.136257 2110 round_trippers.go:553] GET https://quay.io/v2/openshift-release-dev/ocp-v4.0-art-dev/blobs/sha256:7deae50062a94ea127b2a095eecdb3cd1983afe3a072414f9aa480f772cef374 302 Found in 82 milliseconds
Feb 23 15:41:09 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:09.136277 2110 round_trippers.go:570] HTTP Statistics: GetConnection 0 ms ServerProcessing 82 ms Duration 82 ms
Feb 23 15:41:09 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:09.136282 2110 round_trippers.go:577] Response Headers:
Feb 23 15:41:09 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:09.136290 2110 round_trippers.go:580] Content-Type: text/html; charset=utf-8
Feb 23 15:41:09 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:09.136297 2110 round_trippers.go:580] Server: nginx/1.20.1
Feb 23 15:41:09 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:09.136303 2110 round_trippers.go:580] Docker-Content-Digest: sha256:7deae50062a94ea127b2a095eecdb3cd1983afe3a072414f9aa480f772cef374
Feb 23 15:41:09 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:09.136310 2110 round_trippers.go:580] Cache-Control: max-age=31536000
Feb 23 15:41:09 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:09.136316 2110 round_trippers.go:580] Strict-Transport-Security: max-age=63072000; preload
Feb 23 15:41:09 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:09.136320 2110 round_trippers.go:580] Date: Thu, 23 Feb 2023 15:41:09 GMT
Feb 23 15:41:09 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:09.136324 2110 round_trippers.go:580] Content-Length: 1463
Feb 23 15:41:09 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:09.136328 2110 round_trippers.go:580] Location: https://cdn02.quay.io/sha256/7d/7deae50062a94ea127b2a095eecdb3cd1983afe3a072414f9aa480f772cef374?username=openshift-release-dev%2Bmnguyenredhatcom1dzxwow96qtmaj3siqhrhenemor&namespace=openshift-release-dev&Expires=1677167469&Signature=MuXN1Dzyeqzn~FjY1kXV1~AphqnudbKB~cL0cqXI33hOUhdP1fFPuOFNr0HQLD9xw~i8e9ZpSP5SSDuZbsN6lrY2Ln-OipSGxEKGsNMx0dgWhMQ9kEqyEK5AINFd5vqq~UsNXbTQO4mdWWeIqrcyvX14MCqCRQCnL6LiIfmg4V3Ry0DhuuyDn7Qs7h4LQbU0ayQg5AAIMK00Cnrv6jNE7j5NwE2bcp9DfuTTsuqEcowX3MRlbggzHzWbeGDxgzDJIvFAOhQNcm~~ZLwIA7F2SVMgpoPqMJ28vS2k8YNfyVUWZF5xMs8bjbqObx1g1WOgaKLbzjKRHYhYC6gOfhaZhQ__&Key-Pair-Id=APKAJ67PQLWGCSP66DGA
Feb 23 15:41:09 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:09.136337 2110 round_trippers.go:580] Accept-Ranges: bytes
Feb 23 15:41:09 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:09.136341 2110 round_trippers.go:580] X-Frame-Options: DENY
Feb 23 15:41:09 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:09.136376 2110 round_trippers.go:466] curl -v -XGET -H "Accept-Encoding: identity" -H "Referer: https://quay.io/v2/openshift-release-dev/ocp-v4.0-art-dev/blobs/sha256:7deae50062a94ea127b2a095eecdb3cd1983afe3a072414f9aa480f772cef374" -H "User-Agent: oc/4.12.0 (linux/amd64) kubernetes/b05f7d4" 'https://cdn02.quay.io/sha256/7d/7deae50062a94ea127b2a095eecdb3cd1983afe3a072414f9aa480f772cef374?username=openshift-release-dev%2Bmnguyenredhatcom1dzxwow96qtmaj3siqhrhenemor&namespace=openshift-release-dev&Expires=1677167469&Signature=MuXN1Dzyeqzn~FjY1kXV1~AphqnudbKB~cL0cqXI33hOUhdP1fFPuOFNr0HQLD9xw~i8e9ZpSP5SSDuZbsN6lrY2Ln-OipSGxEKGsNMx0dgWhMQ9kEqyEK5AINFd5vqq~UsNXbTQO4mdWWeIqrcyvX14MCqCRQCnL6LiIfmg4V3Ry0DhuuyDn7Qs7h4LQbU0ayQg5AAIMK00Cnrv6jNE7j5NwE2bcp9DfuTTsuqEcowX3MRlbggzHzWbeGDxgzDJIvFAOhQNcm~~ZLwIA7F2SVMgpoPqMJ28vS2k8YNfyVUWZF5xMs8bjbqObx1g1WOgaKLbzjKRHYhYC6gOfhaZhQ__&Key-Pair-Id=APKAJ67PQLWGCSP66DGA'
Feb 23 15:41:09 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:09.144175 2110 round_trippers.go:553] GET https://cdn02.quay.io/sha256/7d/7deae50062a94ea127b2a095eecdb3cd1983afe3a072414f9aa480f772cef374?username=openshift-release-dev%2Bmnguyenredhatcom1dzxwow96qtmaj3siqhrhenemor&namespace=openshift-release-dev&Expires=1677167469&Signature=MuXN1Dzyeqzn~FjY1kXV1~AphqnudbKB~cL0cqXI33hOUhdP1fFPuOFNr0HQLD9xw~i8e9ZpSP5SSDuZbsN6lrY2Ln-OipSGxEKGsNMx0dgWhMQ9kEqyEK5AINFd5vqq~UsNXbTQO4mdWWeIqrcyvX14MCqCRQCnL6LiIfmg4V3Ry0DhuuyDn7Qs7h4LQbU0ayQg5AAIMK00Cnrv6jNE7j5NwE2bcp9DfuTTsuqEcowX3MRlbggzHzWbeGDxgzDJIvFAOhQNcm~~ZLwIA7F2SVMgpoPqMJ28vS2k8YNfyVUWZF5xMs8bjbqObx1g1WOgaKLbzjKRHYhYC6gOfhaZhQ__&Key-Pair-Id=APKAJ67PQLWGCSP66DGA 200 OK in 7 milliseconds
Feb 23 15:41:09 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:09.144190 2110 round_trippers.go:570] HTTP Statistics: GetConnection 0 ms ServerProcessing 7 ms Duration 7 ms
Feb 23 15:41:09 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:09.144197 2110 round_trippers.go:577] Response Headers:
Feb 23 15:41:09 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:09.144204 2110 round_trippers.go:580] Etag: "dae8fb966b2f65cd964f26e2f79b0bc7-1"
Feb 23 15:41:09 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:09.144211 2110 round_trippers.go:580] Accept-Ranges: bytes
Feb 23 15:41:09 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:09.144232 2110 round_trippers.go:580] Via: 1.1 4f87745990545c1ac0195c157e1668f8.cloudfront.net (CloudFront)
Feb 23 15:41:09 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:09.144238 2110 round_trippers.go:580] X-Amz-Replication-Status: COMPLETED
Feb 23 15:41:09 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:09.144245 2110 round_trippers.go:580] Content-Length: 228640823
Feb 23 15:41:09 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:09.144252 2110 round_trippers.go:580] Date: Tue, 21 Feb 2023 08:12:36 GMT
Feb 23 15:41:09 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:09.144259 2110 round_trippers.go:580] Last-Modified: Fri, 17 Feb 2023 05:58:00 GMT
Feb 23 15:41:09 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:09.144266 2110 round_trippers.go:580] X-Amz-Storage-Class: INTELLIGENT_TIERING
Feb 23 15:41:09 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:09.144273 2110 round_trippers.go:580] X-Amz-Version-Id: TueGah6J6dBaI3pEFpGqDAKK3TMX9hBp
Feb 23 15:41:09 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:09.144280 2110 round_trippers.go:580] X-Cache: Hit from cloudfront
Feb 23 15:41:09 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:09.144286 2110 round_trippers.go:580] Content-Type: binary/octet-stream
Feb 23 15:41:09 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:09.144294 2110 round_trippers.go:580] Server: AmazonS3
Feb 23 15:41:09 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:09.144301 2110 round_trippers.go:580] X-Amz-Cf-Id: ZuZNMMuuomSMz5dN820tJDbA4jNsqzqxXT6s22QbxS1QtUB05SpRsg==
Feb 23 15:41:09 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:09.144308 2110 round_trippers.go:580] X-Amz-Server-Side-Encryption: AES256
Feb 23 15:41:09 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:09.144322 2110 round_trippers.go:580] Age: 199714
Feb 23 15:41:09 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:09.144333 2110 round_trippers.go:580] X-Amz-Cf-Pop: HIO50-C1
Feb 23 15:41:09 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:09.144345 2110 client_mirrored.go:485] open (read) sha256:7deae50062a94ea127b2a095eecdb3cd1983afe3a072414f9aa480f772cef374 from openshift-release-dev/ocp-v4.0-art-dev:
Feb 23 15:41:09 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:09.144356 2110 extract.go:509] Extracting layer sha256:7deae50062a94ea127b2a095eecdb3cd1983afe3a072414f9aa480f772cef374 with options &archive.TarOptions{IncludeFiles:[]string(nil), ExcludePatterns:[]string(nil), Compression:0, NoLchown:false, ChownOpts:(*idtools.Identity)(nil), IncludeSourceDir:false, WhiteoutFormat:0, NoOverwriteDirNonDir:false, RebaseNames:map[string]string(nil), InUserNS:false, Chown:false, AlterHeaders:extract.alterations{extract.removePermissions{}}}
Feb 23 15:41:12 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:12.743538 2110 workqueue.go:143] about to send work queue error:
Feb 23 15:41:12 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:12.747594 2025 update.go:1967] Updating OS to layered image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5c5d9236d98a4fd98c75289fe406b52f948c5e604c65205a6bccaf2633c8bc9
Feb 23 15:41:12 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:12.747617 2025 rpm-ostree.go:411] Running captured: rpm-ostree --version
Feb 23 15:41:12 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:12.761783 2025 rpm-ostree.go:354] Executing rebase to quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5c5d9236d98a4fd98c75289fe406b52f948c5e604c65205a6bccaf2633c8bc9
Feb 23 15:41:12 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:41:12.761801 2025 update.go:2103] Running: rpm-ostree rebase --experimental ostree-unverified-registry:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5c5d9236d98a4fd98c75289fe406b52f948c5e604c65205a6bccaf2633c8bc9
Feb 23 15:41:12 ip-10-0-136-68 rpm-ostree[2079]: client(id:cli dbus:1.36 unit:machine-config-daemon-firstboot.service uid:0) added; new total=1
Feb 23 15:41:16 ip-10-0-136-68 rpm-ostree[2079]: Locked sysroot
Feb 23 15:41:16 ip-10-0-136-68 rpm-ostree[2079]: Initiated txn Rebase for client(id:cli dbus:1.36 unit:machine-config-daemon-firstboot.service uid:0): /org/projectatomic/rpmostree1/rhcos
Feb 23 15:41:16 ip-10-0-136-68 kernel: EXT4-fs (nvme0n1p3): re-mounted. Opts:
Feb 23 15:41:16 ip-10-0-136-68 rpm-ostree[2079]: Process [pid: 2120 uid: 0 unit: machine-config-daemon-firstboot.service] connected to transaction progress
Feb 23 15:41:16 ip-10-0-136-68 machine-config-daemon[2120]: Pulling manifest: ostree-unverified-image:docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5c5d9236d98a4fd98c75289fe406b52f948c5e604c65205a6bccaf2633c8bc9
Feb 23 15:41:17 ip-10-0-136-68 machine-config-daemon[2120]: Importing: ostree-unverified-image:docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5c5d9236d98a4fd98c75289fe406b52f948c5e604c65205a6bccaf2633c8bc9 (digest: sha256:a5c5d9236d98a4fd98c75289fe406b52f948c5e604c65205a6bccaf2633c8bc9)
Feb 23 15:41:17 ip-10-0-136-68 machine-config-daemon[2120]: ostree chunk layers stored: 0 needed: 51 (1.1 GB)
Feb 23 15:41:17 ip-10-0-136-68 machine-config-daemon[2120]: Fetching ostree chunk sha256:e89a8408ae78 (276.9 MB)
Feb 23 15:41:24 ip-10-0-136-68 systemd[1]: systemd-hostnamed.service: Succeeded.
Feb 23 15:41:24 ip-10-0-136-68 systemd[1]: systemd-hostnamed.service: Consumed 39ms CPU time
Feb 23 15:41:30 ip-10-0-136-68 machine-config-daemon[2120]: Fetched ostree chunk sha256:e89a8408ae78
Feb 23 15:41:30 ip-10-0-136-68 machine-config-daemon[2120]: Fetching ostree chunk sha256:47ab99b25400 (110.4 MB)
Feb 23 15:41:33 ip-10-0-136-68 machine-config-daemon[2120]: Fetched ostree chunk sha256:47ab99b25400
Feb 23 15:41:33 ip-10-0-136-68 machine-config-daemon[2120]: Fetching ostree chunk sha256:27f262e70d98 (50.4 MB)
Feb 23 15:41:35 ip-10-0-136-68 machine-config-daemon[2120]: Fetched ostree chunk sha256:27f262e70d98
Feb 23 15:41:35 ip-10-0-136-68 machine-config-daemon[2120]: Fetching ostree chunk sha256:3c345d059bf4 (45.5 MB)
Feb 23 15:41:37 ip-10-0-136-68 machine-config-daemon[2120]: Fetched ostree chunk sha256:3c345d059bf4
Feb 23 15:41:37 ip-10-0-136-68 machine-config-daemon[2120]: Fetching ostree chunk sha256:d4e7d39a1e27 (52.5 MB)
Feb 23 15:41:38 ip-10-0-136-68 machine-config-daemon[2120]: Fetched ostree chunk sha256:d4e7d39a1e27
Feb 23 15:41:38 ip-10-0-136-68 machine-config-daemon[2120]: Fetching ostree chunk sha256:eb24d6b2070c (38.9 MB)
Feb 23 15:41:39 ip-10-0-136-68 machine-config-daemon[2120]: Fetched ostree chunk sha256:eb24d6b2070c
Feb 23 15:41:39 ip-10-0-136-68 machine-config-daemon[2120]: Fetching ostree chunk sha256:30f4754407c9 (91.3 MB)
Feb 23 15:41:41 ip-10-0-136-68 machine-config-daemon[2120]: Fetched ostree chunk sha256:30f4754407c9
Feb 23 15:41:41 ip-10-0-136-68 machine-config-daemon[2120]: Fetching ostree chunk sha256:994531854357 (20.5 MB)
Feb 23 15:41:42 ip-10-0-136-68 machine-config-daemon[2120]: Fetched ostree chunk sha256:994531854357
Feb 23 15:41:42 ip-10-0-136-68 machine-config-daemon[2120]: Fetching ostree chunk sha256:e7f53f6db74f (26.4 MB)
Feb 23 15:41:43 ip-10-0-136-68 machine-config-daemon[2120]: Fetched ostree chunk sha256:e7f53f6db74f
Feb 23 15:41:43 ip-10-0-136-68 machine-config-daemon[2120]: Fetching ostree chunk sha256:c01d2116a7f7 (22.6 MB)
Feb 23 15:41:44 ip-10-0-136-68 machine-config-daemon[2120]: Fetched ostree chunk sha256:c01d2116a7f7
Feb 23 15:41:44 ip-10-0-136-68 machine-config-daemon[2120]: Fetching ostree chunk sha256:574ab7bf378e (15.1 MB)
Feb 23 15:41:45 ip-10-0-136-68 machine-config-daemon[2120]: Fetched ostree chunk sha256:574ab7bf378e
Feb 23 15:41:45 ip-10-0-136-68 machine-config-daemon[2120]: Fetching ostree chunk sha256:7962ef5c32d4 (12.0 MB)
Feb 23 15:41:46 ip-10-0-136-68 machine-config-daemon[2120]: Fetched ostree chunk sha256:7962ef5c32d4
Feb 23 15:41:46 ip-10-0-136-68 machine-config-daemon[2120]: Fetching ostree chunk sha256:dfc933091809 (32.1 MB)
Feb 23 15:41:47 ip-10-0-136-68 machine-config-daemon[2120]: Fetched ostree chunk sha256:dfc933091809
Feb 23 15:41:47 ip-10-0-136-68 machine-config-daemon[2120]: Fetching ostree chunk sha256:c199a413579a (12.8 MB)
Feb 23 15:41:48 ip-10-0-136-68 machine-config-daemon[2120]: Fetched ostree chunk sha256:c199a413579a
Feb 23 15:41:48 ip-10-0-136-68 machine-config-daemon[2120]: Fetching ostree chunk sha256:088e0c8df4b0 (12.8 MB)
Feb 23 15:41:48 ip-10-0-136-68 machine-config-daemon[2120]: Fetched ostree chunk sha256:088e0c8df4b0
Feb 23 15:41:48 ip-10-0-136-68 machine-config-daemon[2120]: Fetching ostree chunk sha256:3bfb29ca13ee (11.3 MB)
Feb 23 15:41:49 ip-10-0-136-68 machine-config-daemon[2120]: Fetched ostree chunk sha256:3bfb29ca13ee
Feb 23 15:41:49 ip-10-0-136-68 machine-config-daemon[2120]: Fetching ostree chunk sha256:957489c59539 (25.1 MB)
Feb 23 15:41:50 ip-10-0-136-68 machine-config-daemon[2120]: Fetched ostree chunk sha256:957489c59539
Feb 23 15:41:50 ip-10-0-136-68 machine-config-daemon[2120]: Fetching ostree chunk sha256:92cf06eeb6b7 (8.6 MB)
Feb 23 15:41:50 ip-10-0-136-68 machine-config-daemon[2120]: Fetched ostree chunk sha256:92cf06eeb6b7
Feb 23 15:41:50 ip-10-0-136-68 machine-config-daemon[2120]: Fetching ostree chunk sha256:038ad3f88c95 (8.8 MB)
Feb 23 15:41:51 ip-10-0-136-68 machine-config-daemon[2120]: Fetched ostree chunk sha256:038ad3f88c95
Feb 23 15:41:51 ip-10-0-136-68 machine-config-daemon[2120]: Fetching ostree chunk sha256:0ece45f20033 (7.3 MB)
Feb 23 15:41:51 ip-10-0-136-68 machine-config-daemon[2120]: Fetched ostree chunk sha256:0ece45f20033
Feb 23 15:41:51 ip-10-0-136-68 machine-config-daemon[2120]: Fetching ostree chunk sha256:c54074e72a16 (6.7 MB)
Feb 23 15:41:52 ip-10-0-136-68 machine-config-daemon[2120]: Fetched ostree chunk sha256:c54074e72a16
Feb 23 15:41:52 ip-10-0-136-68 machine-config-daemon[2120]: Fetching ostree chunk sha256:78eaa076429a (4.5 MB)
Feb 23 15:41:52 ip-10-0-136-68 machine-config-daemon[2120]: Fetched ostree chunk sha256:78eaa076429a
Feb 23 15:41:52 ip-10-0-136-68 machine-config-daemon[2120]: Fetching ostree chunk sha256:8f755a030059 (4.0 MB)
Feb 23 15:41:53 ip-10-0-136-68 machine-config-daemon[2120]: Fetched ostree chunk sha256:8f755a030059
Feb 23 15:41:53 ip-10-0-136-68 machine-config-daemon[2120]: Fetching ostree chunk sha256:061f1b16200e (4.3 MB)
Feb 23 15:41:53 ip-10-0-136-68 machine-config-daemon[2120]: Fetched ostree chunk sha256:061f1b16200e
Feb 23 15:41:53 ip-10-0-136-68 machine-config-daemon[2120]: Fetching ostree chunk sha256:43d4ce81ad93 (3.6 MB)
Feb 23 15:41:53 ip-10-0-136-68 machine-config-daemon[2120]: Fetched ostree chunk sha256:43d4ce81ad93
Feb 23 15:41:53 ip-10-0-136-68 machine-config-daemon[2120]: Fetching ostree chunk sha256:eca37fd95074 (4.1 MB)
Feb 23 15:41:54 ip-10-0-136-68 machine-config-daemon[2120]: Fetched ostree chunk sha256:eca37fd95074
Feb 23 15:41:54 ip-10-0-136-68 machine-config-daemon[2120]: Fetching ostree chunk sha256:cdc9620800da (4.3 MB)
Feb 23 15:41:54 ip-10-0-136-68 machine-config-daemon[2120]: Fetched ostree chunk sha256:cdc9620800da
Feb 23 15:41:54 ip-10-0-136-68 machine-config-daemon[2120]: Fetching ostree chunk sha256:876d4d59495e (3.8 MB)
Feb 23 15:41:55 ip-10-0-136-68 machine-config-daemon[2120]: Fetched ostree chunk sha256:876d4d59495e
Feb 23 15:41:55 ip-10-0-136-68 machine-config-daemon[2120]: Fetching ostree chunk sha256:f05e72de3bce (3.8 MB)
Feb 23 15:41:55 ip-10-0-136-68 machine-config-daemon[2120]: Fetched ostree chunk sha256:f05e72de3bce
Feb 23 15:41:55 ip-10-0-136-68 machine-config-daemon[2120]: Fetching ostree chunk sha256:acbb33ff64fe (2.6 MB)
Feb 23 15:41:55 ip-10-0-136-68 machine-config-daemon[2120]: Fetched ostree chunk sha256:acbb33ff64fe
Feb 23 15:41:55 ip-10-0-136-68 machine-config-daemon[2120]: Fetching ostree chunk sha256:782905a10a37 (4.5 MB)
Feb 23 15:41:56 ip-10-0-136-68 machine-config-daemon[2120]: Fetched ostree chunk sha256:782905a10a37
Feb 23 15:41:56 ip-10-0-136-68 machine-config-daemon[2120]: Fetching ostree chunk sha256:3a10002ba2c7 (3.1 MB)
Feb 23 15:41:56 ip-10-0-136-68 machine-config-daemon[2120]: Fetched ostree chunk sha256:3a10002ba2c7
Feb 23 15:41:56 ip-10-0-136-68 machine-config-daemon[2120]: Fetching ostree chunk sha256:7034c2ed5b3e (2.9 MB)
Feb 23 15:41:57 ip-10-0-136-68 machine-config-daemon[2120]: Fetched ostree chunk sha256:7034c2ed5b3e
Feb 23 15:41:57 ip-10-0-136-68 machine-config-daemon[2120]: Fetching ostree chunk sha256:41ff5c8c9fe8 (3.9 MB)
Feb 23 15:41:57 ip-10-0-136-68 machine-config-daemon[2120]: Fetched ostree chunk sha256:41ff5c8c9fe8
Feb 23 15:41:57 ip-10-0-136-68 machine-config-daemon[2120]: Fetching ostree chunk sha256:1ace1d0ed067 (1.9 MB)
Feb 23 15:41:57 ip-10-0-136-68 machine-config-daemon[2120]: Fetched ostree chunk sha256:1ace1d0ed067
Feb 23 15:41:57 ip-10-0-136-68 machine-config-daemon[2120]: Fetching ostree chunk sha256:f27d666bc37b (3.0 MB)
Feb 23 15:41:58 ip-10-0-136-68 machine-config-daemon[2120]: Fetched ostree chunk sha256:f27d666bc37b
Feb 23 15:41:58 ip-10-0-136-68 machine-config-daemon[2120]: Fetching ostree chunk sha256:f396700e3145 (3.4 MB)
Feb 23 15:41:58 ip-10-0-136-68 machine-config-daemon[2120]: Fetched ostree chunk sha256:f396700e3145
Feb 23 15:41:58 ip-10-0-136-68 machine-config-daemon[2120]: Fetching ostree chunk sha256:22f0bd0de2ec (2.9 MB)
Feb 23 15:41:59 ip-10-0-136-68 machine-config-daemon[2120]: Fetched ostree chunk sha256:22f0bd0de2ec
Feb 23 15:41:59 ip-10-0-136-68 machine-config-daemon[2120]: Fetching ostree chunk sha256:5792f41dd4f2 (2.2 MB)
Feb 23 15:41:59 ip-10-0-136-68 machine-config-daemon[2120]: Fetched ostree chunk sha256:5792f41dd4f2
Feb 23 15:41:59 ip-10-0-136-68 machine-config-daemon[2120]: Fetching ostree chunk sha256:73e3f54fa358 (4.4 MB)
Feb 23 15:41:59 ip-10-0-136-68 machine-config-daemon[2120]: Fetched ostree chunk sha256:73e3f54fa358
Feb 23 15:41:59 ip-10-0-136-68 machine-config-daemon[2120]: Fetching ostree chunk sha256:c12dba96c112 (3.0 MB)
Feb 23 15:42:00 ip-10-0-136-68 machine-config-daemon[2120]: Fetched ostree chunk sha256:c12dba96c112
Feb 23 15:42:00 ip-10-0-136-68 machine-config-daemon[2120]: Fetching ostree chunk sha256:4ecaebaec5ea (2.9 MB)
Feb 23 15:42:00 ip-10-0-136-68 machine-config-daemon[2120]: Fetched ostree chunk sha256:4ecaebaec5ea
Feb 23 15:42:00 ip-10-0-136-68 machine-config-daemon[2120]: Fetching ostree chunk sha256:fcb3f2cd175b (7.4 MB)
Feb 23 15:42:01 ip-10-0-136-68 machine-config-daemon[2120]: Fetched ostree chunk sha256:fcb3f2cd175b
Feb 23 15:42:01 ip-10-0-136-68 machine-config-daemon[2120]: Fetching ostree chunk sha256:cc992b824d43 (2.5 MB)
Feb 23 15:42:01 ip-10-0-136-68 machine-config-daemon[2120]: Fetched ostree chunk sha256:cc992b824d43
Feb 23 15:42:01 ip-10-0-136-68 machine-config-daemon[2120]: Fetching ostree chunk sha256:e50f69cf65fe (2.4 MB)
Feb 23 15:42:02 ip-10-0-136-68 machine-config-daemon[2120]: Fetched ostree chunk sha256:e50f69cf65fe
Feb 23 15:42:02 ip-10-0-136-68 machine-config-daemon[2120]: Fetching ostree chunk sha256:4acf65a2bb44 (3.2 MB)
Feb 23 15:42:02 ip-10-0-136-68 machine-config-daemon[2120]: Fetched ostree chunk sha256:4acf65a2bb44
Feb 23 15:42:02 ip-10-0-136-68 machine-config-daemon[2120]: Fetching ostree chunk sha256:316c60956c22 (2.3 MB)
Feb 23 15:42:02 ip-10-0-136-68 machine-config-daemon[2120]: Fetched ostree chunk sha256:316c60956c22
Feb 23 15:42:02 ip-10-0-136-68 machine-config-daemon[2120]: Fetching ostree chunk sha256:9f27e6c8f531 (2.0 MB)
Feb 23 15:42:03 ip-10-0-136-68 machine-config-daemon[2120]: Fetched ostree chunk sha256:9f27e6c8f531
Feb 23 15:42:03 ip-10-0-136-68 machine-config-daemon[2120]: Fetching ostree chunk sha256:40460eb66ef3 (2.7 MB)
Feb 23 15:42:03 ip-10-0-136-68 machine-config-daemon[2120]: Fetched ostree chunk sha256:40460eb66ef3
Feb 23 15:42:03 ip-10-0-136-68 machine-config-daemon[2120]: Fetching ostree chunk sha256:279f8abf5e9d (112.8 MB)
Feb 23 15:42:07 ip-10-0-136-68 machine-config-daemon[2120]: Fetched ostree chunk sha256:279f8abf5e9d
Feb 23 15:42:07 ip-10-0-136-68 machine-config-daemon[2120]: Fetching ostree chunk sha256:4f92d094360f (1.7 MB)
Feb 23 15:42:07 ip-10-0-136-68 machine-config-daemon[2120]: Fetched ostree chunk sha256:4f92d094360f
Feb 23 15:42:12 ip-10-0-136-68 systemd[1]: Started OSTree Finalize Staged Deployment.
Feb 23 15:42:12 ip-10-0-136-68 machine-config-daemon[2120]: Staging deployment...done
Feb 23 15:42:12 ip-10-0-136-68 rpm-ostree[2079]: Created new deployment /ostree/deploy/rhcos/deploy/d136fe99192ef059c08d720e32a39214137706f504d5ddcd2e9df310b9f3791e.0
Feb 23 15:42:12 ip-10-0-136-68 rpm-ostree[2079]: sanitycheck(/usr/bin/true) successful
Feb 23 15:42:12 ip-10-0-136-68 rpm-ostree[2079]: Txn Rebase on /org/projectatomic/rpmostree1/rhcos successful
Feb 23 15:42:12 ip-10-0-136-68 rpm-ostree[2079]: Unlocked sysroot
Feb 23 15:42:12 ip-10-0-136-68 rpm-ostree[2079]: Process [pid: 2120 uid: 0 unit: machine-config-daemon-firstboot.service] disconnected from transaction progress
Feb 23 15:42:12 ip-10-0-136-68 machine-config-daemon[2120]: Upgraded:
Feb 23 15:42:12 ip-10-0-136-68 machine-config-daemon[2120]: conmon 2:2.1.2-2.rhaos4.11.el8 -> 2:2.1.2-3.rhaos4.12.el8
Feb 23 15:42:12 ip-10-0-136-68 machine-config-daemon[2120]: container-selinux 2:2.188.0-1.rhaos4.12.el8 -> 2:2.188.0-2.rhaos4.12.el8
Feb 23 15:42:12 ip-10-0-136-68 machine-config-daemon[2120]: containernetworking-plugins 1.0.1-5.rhaos4.11.el8 -> 1.0.1-6.rhaos4.12.el8
Feb 23 15:42:12 ip-10-0-136-68 machine-config-daemon[2120]: containers-common 2:1-32.rhaos4.12.el8 -> 2:1-33.rhaos4.12.el8
Feb 23 15:42:12 ip-10-0-136-68 machine-config-daemon[2120]: cri-o 1.25.2-4.rhaos4.12.git66af2f6.el8 -> 1.25.2-6.rhaos4.12.git3c4e50c.el8
Feb 23 15:42:12 ip-10-0-136-68 machine-config-daemon[2120]: crun 1.4.2-1.rhaos4.11.el8 -> 1.4.2-2.rhaos4.12.el8
Feb 23 15:42:12 ip-10-0-136-68 machine-config-daemon[2120]: fuse-overlayfs 1.9-1.rhaos4.11.el8 -> 1.9-2.rhaos4.12.el8
Feb 23 15:42:12 ip-10-0-136-68 machine-config-daemon[2120]: git-core 2.31.1-2.el8 -> 2.31.1-3.el8_6
Feb 23 15:42:12 ip-10-0-136-68 machine-config-daemon[2120]: grub2-common 1:2.02-123.el8_6.12 -> 1:2.02-123.el8_6.14
Feb 23 15:42:12 ip-10-0-136-68 machine-config-daemon[2120]: grub2-efi-x64 1:2.02-123.el8_6.12 -> 1:2.02-123.el8_6.14
Feb 23 15:42:12 ip-10-0-136-68 machine-config-daemon[2120]: grub2-pc 1:2.02-123.el8_6.12 -> 1:2.02-123.el8_6.14
Feb 23 15:42:12 ip-10-0-136-68 machine-config-daemon[2120]: grub2-pc-modules 1:2.02-123.el8_6.12 -> 1:2.02-123.el8_6.14
Feb 23 15:42:12 ip-10-0-136-68 machine-config-daemon[2120]: grub2-tools 1:2.02-123.el8_6.12 -> 1:2.02-123.el8_6.14
Feb 23 15:42:12 ip-10-0-136-68 machine-config-daemon[2120]: grub2-tools-extra 1:2.02-123.el8_6.12 -> 1:2.02-123.el8_6.14
Feb 23 15:42:12 ip-10-0-136-68 machine-config-daemon[2120]: grub2-tools-minimal 1:2.02-123.el8_6.12 -> 1:2.02-123.el8_6.14
Feb 23 15:42:12 ip-10-0-136-68 machine-config-daemon[2120]: libksba 1.3.5-8.el8_6 -> 1.3.5-9.el8_6
Feb 23 15:42:12 ip-10-0-136-68 machine-config-daemon[2120]: openshift-clients 4.12.0-202301311516.p0.gb05f7d4.assembly.stream.el8 -> 4.12.0-202301312133.p0.gb05f7d4.assembly.stream.el8
Feb 23 15:42:12 ip-10-0-136-68 machine-config-daemon[2120]: openvswitch2.17 2.17.0-67.el8fdp -> 2.17.0-71.el8fdp
Feb 23 15:42:12 ip-10-0-136-68 machine-config-daemon[2120]: rpm-ostree 2022.10.99.g0049dbdd-3.el8 -> 2022.10.112.g3d0ac35b-3.el8
Feb 23 15:42:12 ip-10-0-136-68 machine-config-daemon[2120]: rpm-ostree-libs 2022.10.99.g0049dbdd-3.el8 -> 2022.10.112.g3d0ac35b-3.el8
Feb 23 15:42:12 ip-10-0-136-68 machine-config-daemon[2120]: runc 3:1.1.4-1.rhaos4.12.el8 -> 3:1.1.4-2.rhaos4.12.el8
Feb 23 15:42:12 ip-10-0-136-68 machine-config-daemon[2120]: skopeo 2:1.9.4-1.rhaos4.12.el8 -> 2:1.9.4-2.rhaos4.12.el8
Feb 23 15:42:12 ip-10-0-136-68 machine-config-daemon[2120]: slirp4netns 1.1.8-1.rhaos4.11.el8 -> 1.1.8-2.rhaos4.12.el8
Feb 23 15:42:12 ip-10-0-136-68 machine-config-daemon[2120]: toolbox 0.1.0-1.rhaos4.12.el8 -> 0.1.1-3.rhaos4.12.el8
Feb 23 15:42:12 ip-10-0-136-68 machine-config-daemon[2120]: Changes queued for next boot. Run "systemctl reboot" to start a reboot
Feb 23 15:42:12 ip-10-0-136-68 rpm-ostree[2079]: client(id:cli dbus:1.36 unit:machine-config-daemon-firstboot.service uid:0) vanished; remaining=0
Feb 23 15:42:12 ip-10-0-136-68 rpm-ostree[2079]: In idle state; will auto-exit in 64 seconds
Feb 23 15:42:13 ip-10-0-136-68 logger[2155]: rendered-worker-bf4ec4b50e39e8c4372076d80a2ff138
Feb 23 15:42:13 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:42:13.018154 2025 update.go:2118] Rebooting node
Feb 23 15:42:13 ip-10-0-136-68 root[2156]: machine-config-daemon[2025]: Rebooting node
Feb 23 15:42:13 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:42:13.019837 2025 update.go:2148] Removing SIGTERM protection
Feb 23 15:42:13 ip-10-0-136-68 machine-config-daemon[2025]: I0223 15:42:13.019889 2025 update.go:2118] initiating reboot: Completing firstboot provisioning to rendered-worker-bf4ec4b50e39e8c4372076d80a2ff138
Feb 23 15:42:13 ip-10-0-136-68 root[2157]: machine-config-daemon[2025]: initiating reboot: Completing firstboot provisioning to rendered-worker-bf4ec4b50e39e8c4372076d80a2ff138
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Started machine-config-daemon: Completing firstboot provisioning to rendered-worker-bf4ec4b50e39e8c4372076d80a2ff138.
Feb 23 15:42:13 ip-10-0-136-68 systemd-logind[1603]: System is rebooting.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopping machine-config-daemon: Completing firstboot provisioning to rendered-worker-bf4ec4b50e39e8c4372076d80a2ff138...
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: coreos-update-ca-trust.service: Succeeded.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopped Run update-ca-trust.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: coreos-update-ca-trust.service: Consumed 0 CPU time
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: coreos-ignition-firstboot-complete.service: Succeeded.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopped CoreOS Mark Ignition Boot Complete.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: coreos-ignition-firstboot-complete.service: Consumed 0 CPU time
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopping NFS status monitor for NFSv2/3 locking....
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopping Authorization Manager...
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopped target RPC Port Mapper.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Removed slice machine.slice.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: machine.slice: Consumed 42ms CPU time
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Removed slice system-sshd\x2dkeygen.slice.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: system-sshd\x2dkeygen.slice: Consumed 214ms CPU time
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopped target Timers.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: afterburn.service: Succeeded.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopped Afterburn (Metadata).
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: afterburn.service: Consumed 0 CPU time
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: unbound-anchor.timer: Succeeded.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopped daily update of the root trust anchor for DNSSEC.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: lvm2-lvmpolld.socket: Succeeded.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Closed LVM2 poll daemon socket.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: lvm2-lvmpolld.socket: Consumed 0 CPU time
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopping rpm-ostree System Management Daemon...
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: systemd-tmpfiles-clean.timer: Succeeded.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopped Daily Cleanup of Temporary Directories.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: logrotate.timer: Succeeded.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopped Daily rotation of log files.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopped target Remote Encrypted Volumes.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopping Login Service...
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopping irqbalance daemon...
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: kubelet-auto-node-size.service: Succeeded.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopped Dynamically sets the system reserved for the kubelet.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: kubelet-auto-node-size.service: Consumed 0 CPU time
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: coreos-generate-iscsi-initiatorname.service: Succeeded.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopped CoreOS Generate iSCSI Initiator Name.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: coreos-generate-iscsi-initiatorname.service: Consumed 0 CPU time
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopping NTP client/server...
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopped target Login Prompts.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopping Serial Getty on ttyS0...
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopped target Synchronize afterburn-sshkeys@.service template instances.
Feb 23 15:42:13 ip-10-0-136-68 chronyd[1534]: chronyd exiting
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: console-login-helper-messages-gensnippet-ssh-keys.service: Succeeded.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopped Generate SSH keys snippet for display via console-login-helper-messages.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: console-login-helper-messages-gensnippet-ssh-keys.service: Consumed 0 CPU time
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopping OpenSSH server daemon...
Feb 23 15:42:13 ip-10-0-136-68 sshd[1774]: Received signal 15; terminating.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopping Restore /run/initramfs on shutdown...
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopping Getty on tty1...
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: irqbalance.service: Succeeded.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopped irqbalance daemon.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: irqbalance.service: Consumed 6ms CPU time
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: sshd.service: Succeeded.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopped OpenSSH server daemon.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: sshd.service: Consumed 6ms CPU time
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: serial-getty@ttyS0.service: Succeeded.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopped Serial Getty on ttyS0.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: serial-getty@ttyS0.service: Consumed 8ms CPU time
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: getty@tty1.service: Succeeded.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopped Getty on tty1.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: getty@tty1.service: Consumed 84ms CPU time
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: rpc-statd.service: Succeeded.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopped NFS status monitor for NFSv2/3 locking..
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: rpc-statd.service: Consumed 17ms CPU time
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: machine-config-daemon-firstboot.service: Main process exited, code=killed, status=15/TERM
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: machine-config-daemon-firstboot.service: Failed with result 'signal'.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopped Machine Config Daemon Firstboot.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: machine-config-daemon-firstboot.service: Consumed 5.063s CPU time
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: rpm-ostreed.service: Succeeded.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopped rpm-ostree System Management Daemon.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: rpm-ostreed.service: Consumed 23.995s CPU time
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: polkit.service: Succeeded.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopped Authorization Manager.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: polkit.service: Consumed 31ms CPU time
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: machine-config-daemon-reboot.service: Succeeded.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopped machine-config-daemon: Completing firstboot provisioning to rendered-worker-bf4ec4b50e39e8c4372076d80a2ff138.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: machine-config-daemon-reboot.service: Consumed 6ms CPU time
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: dracut-shutdown.service: Succeeded.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopped Restore /run/initramfs on shutdown.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: dracut-shutdown.service: Consumed 2ms CPU time
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: machine-config-daemon-pull.service: Succeeded.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopped Machine Config Daemon Pull.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: machine-config-daemon-pull.service: Consumed 0 CPU time
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopped target Network is Online.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: node-valid-hostname.service: Succeeded.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopped Wait for a non-localhost hostname.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: node-valid-hostname.service: Consumed 0 CPU time
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: NetworkManager-wait-online.service: Succeeded.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopped Network Manager Wait Online.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: NetworkManager-wait-online.service: Consumed 0 CPU time
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopped target Host and Network Name Lookups.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Removed slice system-getty.slice.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: system-getty.slice: Consumed 84ms CPU time
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopping Permit User Sessions...
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Removed slice system-serial\x2dgetty.slice.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: system-serial\x2dgetty.slice: Consumed 8ms CPU time
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopped target sshd-keygen.target.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: systemd-logind.service: Succeeded.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopped Login Service.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: systemd-logind.service: Consumed 39ms CPU time
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: chronyd.service: Succeeded.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopped NTP client/server.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: chronyd.service: Consumed 37ms CPU time
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: systemd-user-sessions.service: Succeeded.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopped Permit User Sessions.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: systemd-user-sessions.service: Consumed 7ms CPU time
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopped target Network.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopping Network Manager...
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: coreos-ignition-write-issues.service: Succeeded.
Feb 23 15:42:13 ip-10-0-136-68 NetworkManager[1770]: [1677166933.1986] caught SIGTERM, shutting down normally.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopped Create Ignition Status Issue Files.
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: coreos-ignition-write-issues.service: Consumed 0 CPU time Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopped target User and Group Name Lookups. Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopped target Remote File Systems. Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopping System Security Services Daemon... Feb 23 15:42:13 ip-10-0-136-68 sssd_nss[1681]: Shutting down (status = 0) Feb 23 15:42:13 ip-10-0-136-68 NetworkManager[1770]: [1677166933.2052] dhcp4 (ens5): canceled DHCP transaction Feb 23 15:42:13 ip-10-0-136-68 NetworkManager[1770]: [1677166933.2053] dhcp4 (ens5): activation: beginning transaction (timeout in 45 seconds) Feb 23 15:42:13 ip-10-0-136-68 NetworkManager[1770]: [1677166933.2053] dhcp4 (ens5): state changed no lease Feb 23 15:42:13 ip-10-0-136-68 NetworkManager[1770]: [1677166933.2054] manager: NetworkManager state is now CONNECTED_SITE Feb 23 15:42:13 ip-10-0-136-68 dbus-daemon[1517]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service' requested by ':1.11' (uid=0 pid=1770 comm="/usr/sbin/NetworkManager --no-daemon " label="system_u:system_r:NetworkManager_t:s0") Feb 23 15:42:13 ip-10-0-136-68 dbus-daemon[1517]: [system] Activation via systemd failed for unit 'dbus-org.freedesktop.nm-dispatcher.service': Refusing activation, D-Bus is shutting down. Feb 23 15:42:13 ip-10-0-136-68 NetworkManager[1770]: [1677166933.2070] exiting (success) Feb 23 15:42:13 ip-10-0-136-68 sssd_be[1676]: Shutting down (status = 0) Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: sssd.service: Succeeded. Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopped System Security Services Daemon. Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: sssd.service: Consumed 75ms CPU time Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: NetworkManager.service: Succeeded. Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopped Network Manager. 
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: NetworkManager.service: Consumed 48ms CPU time Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopping Open vSwitch... Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopping D-Bus System Message Bus... Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: openvswitch.service: Succeeded. Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopped Open vSwitch. Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: openvswitch.service: Consumed 1ms CPU time Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopping Open vSwitch Forwarding Unit... Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: dbus.service: Succeeded. Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopped D-Bus System Message Bus. Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: dbus.service: Consumed 90ms CPU time Feb 23 15:42:13 ip-10-0-136-68 ovs-vswitchd[1756]: ovs|00009|memory|INFO|169992 kB peak resident set size after 79.0 seconds Feb 23 15:42:13 ip-10-0-136-68 ovs-vswitchd[1756]: ovs|00010|memory|INFO|idl-cells:17 Feb 23 15:42:13 ip-10-0-136-68 ovs-ctl[2187]: Exiting ovs-vswitchd (1756). Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: ovs-vswitchd.service: Succeeded. Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopped Open vSwitch Forwarding Unit. Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: ovs-vswitchd.service: Consumed 151ms CPU time Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: ovs-delete-transient-ports.service: Succeeded. Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopped Open vSwitch Delete Transient Ports. Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: ovs-delete-transient-ports.service: Consumed 0 CPU time Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopping Open vSwitch Database Unit... Feb 23 15:42:13 ip-10-0-136-68 ovs-ctl[2209]: Exiting ovsdb-server (1662). Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: ovsdb-server.service: Succeeded. Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopped Open vSwitch Database Unit. 
Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: ovsdb-server.service: Consumed 163ms CPU time Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopped target Basic System. Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopped target Slices. Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Removed slice User and Session Slice. Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: user.slice: Consumed 0 CPU time Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopped target Sockets. Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: dbus.socket: Succeeded. Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Closed D-Bus System Message Bus Socket. Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: dbus.socket: Consumed 0 CPU time Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: bootupd.socket: Succeeded. Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Closed bootupd.socket. Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: bootupd.socket: Consumed 0 CPU time Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopping OSTree Finalize Staged Deployment... Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopped target Paths. Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: console-login-helper-messages-issuegen.path: Succeeded. Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopped Monitor console-login-helper-messages runtime issue snippets directory for changes. Feb 23 15:42:13 ip-10-0-136-68 systemd[1]: Stopped target Network (Pre). Feb 23 15:42:13 ip-10-0-136-68 kernel: EXT4-fs (nvme0n1p3): re-mounted. 
Opts: Feb 23 15:42:13 ip-10-0-136-68 ostree[2229]: Finalizing staged deployment Feb 23 15:42:15 ip-10-0-136-68 ostree[2229]: Copying /etc changes: 13 modified, 0 removed, 118 added Feb 23 15:42:15 ip-10-0-136-68 ostree[2229]: Copying /etc changes: 13 modified, 0 removed, 118 added Feb 23 15:42:18 ip-10-0-136-68 ostree[2229]: Bootloader updated; bootconfig swap: yes; bootversion: boot.0.1, deployment count change: 1 Feb 23 15:42:18 ip-10-0-136-68 ostree[2229]: Bootloader updated; bootconfig swap: yes; bootversion: boot.0.1, deployment count change: 1 Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: ostree-finalize-staged.service: Succeeded. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: Stopped OSTree Finalize Staged Deployment. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: ostree-finalize-staged.service: Consumed 1.655s CPU time Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: ostree-finalize-staged.path: Succeeded. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: Stopped OSTree Monitor Staged Deployment. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: Stopped target System Initialization. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: Stopping Load/Save Random Seed... Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: Stopping Update UTMP about System Boot/Shutdown... Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: systemd-update-done.service: Succeeded. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: Stopped Update is Completed. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: systemd-update-done.service: Consumed 0 CPU time Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: systemd-journal-catalog-update.service: Succeeded. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: Stopped Rebuild Journal Catalog. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: systemd-journal-catalog-update.service: Consumed 0 CPU time Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: systemd-sysctl.service: Succeeded. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: Stopped Apply Kernel Variables. 
Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: systemd-sysctl.service: Consumed 0 CPU time Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: coreos-printk-quiet.service: Succeeded. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: Stopped CoreOS: Set printk To Level 4 (warn). Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: coreos-printk-quiet.service: Consumed 0 CPU time Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: systemd-modules-load.service: Succeeded. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: Stopped Load Kernel Modules. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: systemd-modules-load.service: Consumed 0 CPU time Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: Stopped target Local Encrypted Volumes. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: systemd-ask-password-wall.path: Succeeded. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: Stopped Forward Password Requests to Wall Directory Watch. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: Stopped target Local Encrypted Volumes (Pre). Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: systemd-ask-password-console.path: Succeeded. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: systemd-hwdb-update.service: Succeeded. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: Stopped Rebuild Hardware Database. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: systemd-hwdb-update.service: Consumed 0 CPU time Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: ldconfig.service: Succeeded. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: Stopped Rebuild Dynamic Linker Cache. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: ldconfig.service: Consumed 0 CPU time Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: systemd-random-seed.service: Succeeded. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: Stopped Load/Save Random Seed. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: systemd-random-seed.service: Consumed 3ms CPU time Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: systemd-update-utmp.service: Succeeded. 
Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: Stopped Update UTMP about System Boot/Shutdown. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: systemd-update-utmp.service: Consumed 4ms CPU time Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: Stopping Security Auditing Service... Feb 23 15:42:18 ip-10-0-136-68 auditd[1453]: The audit daemon is exiting. Feb 23 15:42:18 ip-10-0-136-68 kernel: audit: type=1305 audit(1677166938.668:154): op=set audit_pid=0 old=1453 auid=4294967295 ses=4294967295 subj=system_u:system_r:auditd_t:s0 res=1 Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: auditd.service: Succeeded. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: Stopped Security Auditing Service. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: auditd.service: Consumed 34ms CPU time Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: systemd-tmpfiles-setup.service: Succeeded. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: Stopped Create Volatile Files and Directories. Feb 23 15:42:18 ip-10-0-136-68 kernel: audit: type=1130 audit(1677166938.681:155): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=auditd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 23 15:42:18 ip-10-0-136-68 kernel: audit: type=1131 audit(1677166938.681:156): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=auditd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 23 15:42:18 ip-10-0-136-68 kernel: audit: type=1130 audit(1677166938.682:157): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 23 15:42:18 ip-10-0-136-68 kernel: audit: type=1131 audit(1677166938.682:158): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: systemd-tmpfiles-setup.service: Consumed 0 CPU time Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: systemd-journal-flush.service: Succeeded. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: Stopped Flush Journal to Persistent Storage. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: systemd-journal-flush.service: Consumed 0 CPU time Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: Stopped target Local File Systems. Feb 23 15:42:18 ip-10-0-136-68 kernel: audit: type=1130 audit(1677166938.684:159): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 23 15:42:18 ip-10-0-136-68 kernel: audit: type=1131 audit(1677166938.684:160): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: Unmounting /etc... Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay... Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: ostree-remount.service: Succeeded. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: Stopped OSTree Remount OS/ Bind Mounts. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: ostree-remount.service: Consumed 0 CPU time Feb 23 15:42:18 ip-10-0-136-68 kernel: audit: type=1130 audit(1677166938.688:161): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=ostree-remount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 23 15:42:18 ip-10-0-136-68 kernel: audit: type=1131 audit(1677166938.688:162): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=ostree-remount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: Unmounting CoreOS Dynamic Mount for /boot... Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: Unmounting Temporary Directory (/tmp)... Feb 23 15:42:18 ip-10-0-136-68 umount[2246]: umount: /etc: target is busy. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: etc.mount: Mount process exited, code=exited status=32 Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: Failed unmounting /etc. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay.mount: Consumed 3ms CPU time Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: tmp.mount: Succeeded. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: Unmounted Temporary Directory (/tmp). Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: tmp.mount: Consumed 3ms CPU time Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: Stopped target Swap. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: Unmounting /var... Feb 23 15:42:18 ip-10-0-136-68 umount[2253]: umount: /var: target is busy. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: var.mount: Mount process exited, code=exited status=32 Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: Failed unmounting /var. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: Unmounting sysroot-ostree-deploy-rhcos-var.mount... Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: Unmounting sysroot.mount... Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: boot.mount: Succeeded. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: Unmounted CoreOS Dynamic Mount for /boot. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: boot.mount: Consumed 25ms CPU time Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: systemd-fsck@dev-disk-by\x2duuid-54e5ab65\x2dff73\x2d4a26\x2d8c44\x2d2a9765abf45f.service: Succeeded. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: Stopped File System Check on /dev/disk/by-uuid/54e5ab65-ff73-4a26-8c44-2a9765abf45f. 
Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: systemd-fsck@dev-disk-by\x2duuid-54e5ab65\x2dff73\x2d4a26\x2d8c44\x2d2a9765abf45f.service: Consumed 0 CPU time Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: Removed slice system-systemd\x2dfsck.slice. Feb 23 15:42:18 ip-10-0-136-68 kernel: audit: type=1130 audit(1677166938.727:163): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2duuid-54e5ab65\x2dff73\x2d4a26\x2d8c44\x2d2a9765abf45f comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: system-systemd\x2dfsck.slice: Consumed 9ms CPU time Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: sysroot-ostree-deploy-rhcos-var.mount: Succeeded. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: Unmounted sysroot-ostree-deploy-rhcos-var.mount. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: sysroot-ostree-deploy-rhcos-var.mount: Consumed 1ms CPU time Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: sysroot.mount: Succeeded. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: Unmounted sysroot.mount. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: sysroot.mount: Consumed 1ms CPU time Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: Reached target Unmount All Filesystems. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: Stopped target Local File Systems (Pre). Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: Stopping Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling... Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: systemd-tmpfiles-setup-dev.service: Succeeded. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: Stopped Create Static Device Nodes in /dev. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: systemd-tmpfiles-setup-dev.service: Consumed 0 CPU time Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: systemd-sysusers.service: Succeeded. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: Stopped Create System Users. 
Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: systemd-sysusers.service: Consumed 0 CPU time Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: lvm2-monitor.service: Succeeded. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: Stopped Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: lvm2-monitor.service: Consumed 12ms CPU time Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: Reached target Shutdown. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: Reached target Final Step. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: systemd-reboot.service: Succeeded. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: Started Reboot. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: Reached target Reboot. Feb 23 15:42:18 ip-10-0-136-68 systemd[1]: Shutting down. Feb 23 15:42:18 ip-10-0-136-68 systemd-shutdown[1]: Syncing filesystems and block devices. Feb 23 15:42:18 ip-10-0-136-68 systemd-shutdown[1]: Sending SIGTERM to remaining processes... Feb 23 15:42:18 ip-10-0-136-68 systemd-journald[1347]: Journal stopped -- Boot 231c4e3408e74aab9038a4e74720cf09 -- Feb 23 15:42:31 localhost kernel: Linux version 4.18.0-372.43.1.el8_6.x86_64 (mockbuild@x86-vm-09.build.eng.bos.redhat.com) (gcc version 8.5.0 20210514 (Red Hat 8.5.0-10) (GCC)) #1 SMP Fri Jan 27 00:24:08 EST 2023 Feb 23 15:42:31 localhost kernel: Command line: BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-17178804c159fc6199cab178bdafa30fd5fde653d57f312469b0b5e206a2a4f4/vmlinuz-4.18.0-372.43.1.el8_6.x86_64 ostree=/ostree/boot.0/rhcos/17178804c159fc6199cab178bdafa30fd5fde653d57f312469b0b5e206a2a4f4/0 ignition.platform.id=aws console=tty0 console=ttyS0,115200n8 root=UUID=c83680a9-dcc4-4413-a0a5-4681b35c650a rw rootflags=prjquota boot=UUID=54e5ab65-ff73-4a26-8c44-2a9765abf45f Feb 23 15:42:31 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 23 15:42:31 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 23 15:42:31 localhost kernel: x86/fpu: 
Supporting XSAVE feature 0x004: 'AVX registers' Feb 23 15:42:31 localhost kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Feb 23 15:42:31 localhost kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Feb 23 15:42:31 localhost kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Feb 23 15:42:31 localhost kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Feb 23 15:42:31 localhost kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 23 15:42:31 localhost kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Feb 23 15:42:31 localhost kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Feb 23 15:42:31 localhost kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Feb 23 15:42:31 localhost kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8 Feb 23 15:42:31 localhost kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format. Feb 23 15:42:31 localhost kernel: signal: max sigframe size: 3632 Feb 23 15:42:31 localhost kernel: BIOS-provided physical RAM map: Feb 23 15:42:31 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Feb 23 15:42:31 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Feb 23 15:42:31 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Feb 23 15:42:31 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffe8fff] usable Feb 23 15:42:31 localhost kernel: BIOS-e820: [mem 0x00000000bffe9000-0x00000000bfffffff] reserved Feb 23 15:42:31 localhost kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved Feb 23 15:42:31 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Feb 23 15:42:31 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000042effffff] usable Feb 23 15:42:31 localhost kernel: BIOS-e820: [mem 0x000000042f000000-0x000000043fffffff] 
reserved Feb 23 15:42:31 localhost kernel: NX (Execute Disable) protection: active Feb 23 15:42:31 localhost kernel: SMBIOS 2.7 present. Feb 23 15:42:31 localhost kernel: DMI: Amazon EC2 m6i.xlarge/, BIOS 1.0 10/16/2017 Feb 23 15:42:31 localhost kernel: Hypervisor detected: KVM Feb 23 15:42:31 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 23 15:42:31 localhost kernel: kvm-clock: cpu 0, msr 340a01001, primary cpu clock Feb 23 15:42:31 localhost kernel: kvm-clock: using sched offset of 7545482535 cycles Feb 23 15:42:31 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 23 15:42:31 localhost kernel: tsc: Detected 2899.998 MHz processor Feb 23 15:42:31 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 23 15:42:31 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 23 15:42:31 localhost kernel: last_pfn = 0x42f000 max_arch_pfn = 0x400000000 Feb 23 15:42:31 localhost kernel: MTRR default type: write-back Feb 23 15:42:31 localhost kernel: MTRR fixed ranges enabled: Feb 23 15:42:31 localhost kernel: 00000-9FFFF write-back Feb 23 15:42:31 localhost kernel: A0000-BFFFF uncachable Feb 23 15:42:31 localhost kernel: C0000-FFFFF write-protect Feb 23 15:42:31 localhost kernel: MTRR variable ranges enabled: Feb 23 15:42:31 localhost kernel: 0 base 0000C0000000 mask 3FFFC0000000 uncachable Feb 23 15:42:31 localhost kernel: 1 disabled Feb 23 15:42:31 localhost kernel: 2 disabled Feb 23 15:42:31 localhost kernel: 3 disabled Feb 23 15:42:31 localhost kernel: 4 disabled Feb 23 15:42:31 localhost kernel: 5 disabled Feb 23 15:42:31 localhost kernel: 6 disabled Feb 23 15:42:31 localhost kernel: 7 disabled Feb 23 15:42:31 localhost kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 23 15:42:31 localhost kernel: last_pfn = 0xbffe9 max_arch_pfn = 0x400000000 Feb 23 15:42:31 localhost kernel: Using GB pages for direct mapping Feb 
23 15:42:31 localhost kernel: BRK [0x340c01000, 0x340c01fff] PGTABLE Feb 23 15:42:31 localhost kernel: BRK [0x340c02000, 0x340c02fff] PGTABLE Feb 23 15:42:31 localhost kernel: BRK [0x340c03000, 0x340c03fff] PGTABLE Feb 23 15:42:31 localhost kernel: BRK [0x340c04000, 0x340c04fff] PGTABLE Feb 23 15:42:31 localhost kernel: BRK [0x340c05000, 0x340c05fff] PGTABLE Feb 23 15:42:31 localhost kernel: BRK [0x340c06000, 0x340c06fff] PGTABLE Feb 23 15:42:31 localhost kernel: BRK [0x340c07000, 0x340c07fff] PGTABLE Feb 23 15:42:31 localhost kernel: RAMDISK: [mem 0x2d068000-0x3282bfff] Feb 23 15:42:31 localhost kernel: ACPI: Early table checksum verification disabled Feb 23 15:42:31 localhost kernel: ACPI: RSDP 0x00000000000F8F00 000014 (v00 AMAZON) Feb 23 15:42:31 localhost kernel: ACPI: RSDT 0x00000000BFFEE180 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001) Feb 23 15:42:31 localhost kernel: ACPI: WAET 0x00000000BFFEFFC0 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Feb 23 15:42:31 localhost kernel: ACPI: SLIT 0x00000000BFFEFF40 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Feb 23 15:42:31 localhost kernel: ACPI: APIC 0x00000000BFFEFE80 000086 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Feb 23 15:42:31 localhost kernel: ACPI: SRAT 0x00000000BFFEFDC0 0000C0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Feb 23 15:42:31 localhost kernel: ACPI: FACP 0x00000000BFFEFC80 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Feb 23 15:42:31 localhost kernel: ACPI: DSDT 0x00000000BFFEEAC0 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Feb 23 15:42:31 localhost kernel: ACPI: FACS 0x00000000000F8EC0 000040 Feb 23 15:42:31 localhost kernel: ACPI: HPET 0x00000000BFFEFC40 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Feb 23 15:42:31 localhost kernel: ACPI: SSDT 0x00000000BFFEE280 00081F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Feb 23 15:42:31 localhost kernel: ACPI: SSDT 0x00000000BFFEE200 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Feb 23 15:42:31 
localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffeffc0-0xbffeffe7]
Feb 23 15:42:31 localhost kernel: ACPI: Reserving SLIT table memory at [mem 0xbffeff40-0xbffeffab]
Feb 23 15:42:31 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffefe80-0xbffeff05]
Feb 23 15:42:31 localhost kernel: ACPI: Reserving SRAT table memory at [mem 0xbffefdc0-0xbffefe7f]
Feb 23 15:42:31 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffefc80-0xbffefd93]
Feb 23 15:42:31 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffeeac0-0xbffefc19]
Feb 23 15:42:31 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xf8ec0-0xf8eff]
Feb 23 15:42:31 localhost kernel: ACPI: Reserving HPET table memory at [mem 0xbffefc40-0xbffefc77]
Feb 23 15:42:31 localhost kernel: ACPI: Reserving SSDT table memory at [mem 0xbffee280-0xbffeea9e]
Feb 23 15:42:31 localhost kernel: ACPI: Reserving SSDT table memory at [mem 0xbffee200-0xbffee27e]
Feb 23 15:42:31 localhost kernel: ACPI: Local APIC address 0xfee00000
Feb 23 15:42:31 localhost kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 23 15:42:31 localhost kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 23 15:42:31 localhost kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Feb 23 15:42:31 localhost kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Feb 23 15:42:31 localhost kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0xbfffffff]
Feb 23 15:42:31 localhost kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x43fffffff]
Feb 23 15:42:31 localhost kernel: NUMA: Initialized distance table, cnt=1
Feb 23 15:42:31 localhost kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x42effffff] -> [mem 0x00000000-0x42effffff]
Feb 23 15:42:31 localhost kernel: NODE_DATA(0) allocated [mem 0x42efd4000-0x42effefff]
Feb 23 15:42:31 localhost kernel: Zone ranges:
Feb 23 15:42:31 localhost kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 23 15:42:31 localhost kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 23 15:42:31 localhost kernel: Normal [mem 0x0000000100000000-0x000000042effffff]
Feb 23 15:42:31 localhost kernel: Device empty
Feb 23 15:42:31 localhost kernel: Movable zone start for each node
Feb 23 15:42:31 localhost kernel: Early memory node ranges
Feb 23 15:42:31 localhost kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 23 15:42:31 localhost kernel: node 0: [mem 0x0000000000100000-0x00000000bffe8fff]
Feb 23 15:42:31 localhost kernel: node 0: [mem 0x0000000100000000-0x000000042effffff]
Feb 23 15:42:31 localhost kernel: Zeroed struct page in unavailable ranges: 4217 pages
Feb 23 15:42:31 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000042effffff]
Feb 23 15:42:31 localhost kernel: On node 0 totalpages: 4124551
Feb 23 15:42:31 localhost kernel: DMA zone: 64 pages used for memmap
Feb 23 15:42:31 localhost kernel: DMA zone: 158 pages reserved
Feb 23 15:42:31 localhost kernel: DMA zone: 3998 pages, LIFO batch:0
Feb 23 15:42:31 localhost kernel: DMA32 zone: 12224 pages used for memmap
Feb 23 15:42:31 localhost kernel: DMA32 zone: 782313 pages, LIFO batch:63
Feb 23 15:42:31 localhost kernel: Normal zone: 52160 pages used for memmap
Feb 23 15:42:31 localhost kernel: Normal zone: 3338240 pages, LIFO batch:63
Feb 23 15:42:31 localhost kernel: ACPI: PM-Timer IO Port: 0xb008
Feb 23 15:42:31 localhost kernel: ACPI: Local APIC address 0xfee00000
Feb 23 15:42:31 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 23 15:42:31 localhost kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Feb 23 15:42:31 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 23 15:42:31 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 23 15:42:31 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 23 15:42:31 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 23 15:42:31 localhost kernel: ACPI: IRQ5 used by override.
Feb 23 15:42:31 localhost kernel: ACPI: IRQ9 used by override.
Feb 23 15:42:31 localhost kernel: ACPI: IRQ10 used by override.
Feb 23 15:42:31 localhost kernel: ACPI: IRQ11 used by override.
Feb 23 15:42:31 localhost kernel: Using ACPI (MADT) for SMP configuration information
Feb 23 15:42:31 localhost kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 23 15:42:31 localhost kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Feb 23 15:42:31 localhost kernel: PM: Registered nosave memory: [mem 0x00000000-0x00000fff]
Feb 23 15:42:31 localhost kernel: PM: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Feb 23 15:42:31 localhost kernel: PM: Registered nosave memory: [mem 0x000a0000-0x000effff]
Feb 23 15:42:31 localhost kernel: PM: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Feb 23 15:42:31 localhost kernel: PM: Registered nosave memory: [mem 0xbffe9000-0xbfffffff]
Feb 23 15:42:31 localhost kernel: PM: Registered nosave memory: [mem 0xc0000000-0xdfffffff]
Feb 23 15:42:31 localhost kernel: PM: Registered nosave memory: [mem 0xe0000000-0xe03fffff]
Feb 23 15:42:31 localhost kernel: PM: Registered nosave memory: [mem 0xe0400000-0xfffbffff]
Feb 23 15:42:31 localhost kernel: PM: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Feb 23 15:42:31 localhost kernel: [mem 0xc0000000-0xdfffffff] available for PCI devices
Feb 23 15:42:31 localhost kernel: Booting paravirtualized kernel on KVM
Feb 23 15:42:31 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 23 15:42:31 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Feb 23 15:42:31 localhost kernel: percpu: Embedded 55 pages/cpu s188416 r8192 d28672 u524288
Feb 23 15:42:31 localhost kernel: pcpu-alloc: s188416 r8192 d28672 u524288 alloc=1*2097152
Feb 23 15:42:31 localhost kernel: pcpu-alloc: [0] 0 1 2 3
Feb 23 15:42:31 localhost kernel: kvm-guest: stealtime: cpu 0, msr 41f02d080
Feb 23 15:42:31 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Feb 23 15:42:31 localhost kernel: Built 1 zonelists, mobility grouping on. Total pages: 4059945
Feb 23 15:42:31 localhost kernel: Policy zone: Normal
Feb 23 15:42:31 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-17178804c159fc6199cab178bdafa30fd5fde653d57f312469b0b5e206a2a4f4/vmlinuz-4.18.0-372.43.1.el8_6.x86_64 ostree=/ostree/boot.0/rhcos/17178804c159fc6199cab178bdafa30fd5fde653d57f312469b0b5e206a2a4f4/0 ignition.platform.id=aws console=tty0 console=ttyS0,115200n8 root=UUID=c83680a9-dcc4-4413-a0a5-4681b35c650a rw rootflags=prjquota boot=UUID=54e5ab65-ff73-4a26-8c44-2a9765abf45f
Feb 23 15:42:31 localhost kernel: Specific versions of hardware are certified with Red Hat Enterprise Linux 8. Please see the list of hardware certified with Red Hat Enterprise Linux 8 at https://catalog.redhat.com.
Feb 23 15:42:31 localhost kernel: Memory: 3120276K/16498204K available (12293K kernel code, 5866K rwdata, 8296K rodata, 2540K init, 14320K bss, 467240K reserved, 0K cma-reserved)
Feb 23 15:42:31 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 23 15:42:31 localhost kernel: ftrace: allocating 40026 entries in 157 pages
Feb 23 15:42:31 localhost kernel: ftrace: allocated 157 pages with 5 groups
Feb 23 15:42:31 localhost kernel: rcu: Hierarchical RCU implementation.
Feb 23 15:42:31 localhost kernel: rcu: RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=4.
Feb 23 15:42:31 localhost kernel: Rude variant of Tasks RCU enabled.
Feb 23 15:42:31 localhost kernel: Tracing variant of Tasks RCU enabled.
Feb 23 15:42:31 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 23 15:42:31 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 23 15:42:31 localhost kernel: NR_IRQS: 524544, nr_irqs: 456, preallocated irqs: 16
Feb 23 15:42:31 localhost kernel: random: crng done (trusting CPU's manufacturer)
Feb 23 15:42:31 localhost kernel: Console: colour VGA+ 80x25
Feb 23 15:42:31 localhost kernel: printk: console [tty0] enabled
Feb 23 15:42:31 localhost kernel: printk: console [ttyS0] enabled
Feb 23 15:42:31 localhost kernel: ACPI: Core revision 20210604
Feb 23 15:42:31 localhost kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Feb 23 15:42:31 localhost kernel: APIC: Switch to symmetric I/O mode setup
Feb 23 15:42:31 localhost kernel: x2apic enabled
Feb 23 15:42:31 localhost kernel: Switched APIC routing to physical x2apic.
Feb 23 15:42:31 localhost kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x29cd4133323, max_idle_ns: 440795296220 ns
Feb 23 15:42:31 localhost kernel: Calibrating delay loop (skipped) preset value.. 5799.99 BogoMIPS (lpj=2899998)
Feb 23 15:42:31 localhost kernel: pid_max: default: 32768 minimum: 301
Feb 23 15:42:31 localhost kernel: LSM: Security Framework initializing
Feb 23 15:42:31 localhost kernel: Yama: becoming mindful.
Feb 23 15:42:31 localhost kernel: SELinux: Initializing.
Feb 23 15:42:31 localhost kernel: LSM support for eBPF active
Feb 23 15:42:31 localhost kernel: Dentry cache hash table entries: 2097152 (order: 12, 16777216 bytes, vmalloc)
Feb 23 15:42:31 localhost kernel: Inode-cache hash table entries: 1048576 (order: 11, 8388608 bytes, vmalloc)
Feb 23 15:42:31 localhost kernel: Mount-cache hash table entries: 32768 (order: 6, 262144 bytes, vmalloc)
Feb 23 15:42:31 localhost kernel: Mountpoint-cache hash table entries: 32768 (order: 6, 262144 bytes, vmalloc)
Feb 23 15:42:31 localhost kernel: x86/tme: enabled by BIOS
Feb 23 15:42:31 localhost kernel: x86/mktme: No known encryption algorithm is supported: 0x0
Feb 23 15:42:31 localhost kernel: x86/mktme: disabled by BIOS
Feb 23 15:42:31 localhost kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Feb 23 15:42:31 localhost kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Feb 23 15:42:31 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 23 15:42:31 localhost kernel: Spectre V2 : Mitigation: Enhanced IBRS
Feb 23 15:42:31 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 23 15:42:31 localhost kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Feb 23 15:42:31 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 23 15:42:31 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 23 15:42:31 localhost kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 23 15:42:31 localhost kernel: Freeing SMP alternatives memory: 36K
Feb 23 15:42:31 localhost kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1235
Feb 23 15:42:31 localhost kernel: TSC deadline timer enabled
Feb 23 15:42:31 localhost kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8375C CPU @ 2.90GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Feb 23 15:42:31 localhost kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Feb 23 15:42:31 localhost kernel: rcu: Hierarchical SRCU implementation.
Feb 23 15:42:31 localhost kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 23 15:42:31 localhost kernel: smp: Bringing up secondary CPUs ...
Feb 23 15:42:31 localhost kernel: x86: Booting SMP configuration:
Feb 23 15:42:31 localhost kernel: .... node #0, CPUs: #1
Feb 23 15:42:31 localhost kernel: kvm-clock: cpu 1, msr 340a01041, secondary cpu clock
Feb 23 15:42:31 localhost kernel: kvm-guest: stealtime: cpu 1, msr 41f0ad080
Feb 23 15:42:31 localhost kernel: #2
Feb 23 15:42:31 localhost kernel: kvm-clock: cpu 2, msr 340a01081, secondary cpu clock
Feb 23 15:42:31 localhost kernel: kvm-guest: stealtime: cpu 2, msr 41f12d080
Feb 23 15:42:31 localhost kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 23 15:42:31 localhost kernel: #3
Feb 23 15:42:31 localhost kernel: kvm-clock: cpu 3, msr 340a010c1, secondary cpu clock
Feb 23 15:42:31 localhost kernel: kvm-guest: stealtime: cpu 3, msr 41f1ad080
Feb 23 15:42:31 localhost kernel: smp: Brought up 1 node, 4 CPUs
Feb 23 15:42:31 localhost kernel: smpboot: Max logical packages: 1
Feb 23 15:42:31 localhost kernel: smpboot: Total of 4 processors activated (23199.98 BogoMIPS)
Feb 23 15:42:31 localhost kernel: node 0 deferred pages initialised in 22ms
Feb 23 15:42:31 localhost kernel: devtmpfs: initialized
Feb 23 15:42:31 localhost kernel: x86/mm: Memory block size: 128MB
Feb 23 15:42:31 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 23 15:42:31 localhost kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, vmalloc)
Feb 23 15:42:31 localhost kernel: pinctrl core: initialized pinctrl subsystem
Feb 23 15:42:31 localhost kernel: NET: Registered protocol family 16
Feb 23 15:42:31 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL pool for atomic allocations
Feb 23 15:42:31 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 23 15:42:31 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 23 15:42:31 localhost kernel: audit: initializing netlink subsys (disabled)
Feb 23 15:42:31 localhost kernel: audit: type=2000 audit(1677166947.813:1): state=initialized audit_enabled=0 res=1
Feb 23 15:42:31 localhost kernel: cpuidle: using governor menu
Feb 23 15:42:31 localhost kernel: ACPI: bus type PCI registered
Feb 23 15:42:31 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 23 15:42:31 localhost kernel: PCI: Using configuration type 1 for base access
Feb 23 15:42:31 localhost kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 23 15:42:31 localhost kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 23 15:42:31 localhost kernel: cryptd: max_cpu_qlen set to 1000
Feb 23 15:42:31 localhost kernel: ACPI: Added _OSI(Module Device)
Feb 23 15:42:31 localhost kernel: ACPI: Added _OSI(Processor Device)
Feb 23 15:42:31 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 23 15:42:31 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 23 15:42:31 localhost kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 23 15:42:31 localhost kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 23 15:42:31 localhost kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 23 15:42:31 localhost kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Feb 23 15:42:31 localhost kernel: ACPI: Interpreter enabled
Feb 23 15:42:31 localhost kernel: ACPI: PM: (supports S0 S4 S5)
Feb 23 15:42:31 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Feb 23 15:42:31 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 23 15:42:31 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 23 15:42:31 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 23 15:42:31 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI EDR HPX-Type3]
Feb 23 15:42:31 localhost kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Feb 23 15:42:31 localhost kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Feb 23 15:42:31 localhost kernel: acpiphp: Slot [3] registered
Feb 23 15:42:31 localhost kernel: acpiphp: Slot [4] registered
Feb 23 15:42:31 localhost kernel: acpiphp: Slot [5] registered
Feb 23 15:42:31 localhost kernel: acpiphp: Slot [6] registered
Feb 23 15:42:31 localhost kernel: acpiphp: Slot [7] registered
Feb 23 15:42:31 localhost kernel: acpiphp: Slot [8] registered
Feb 23 15:42:31 localhost kernel: acpiphp: Slot [9] registered
Feb 23 15:42:31 localhost kernel: acpiphp: Slot [10] registered
Feb 23 15:42:31 localhost kernel: acpiphp: Slot [11] registered
Feb 23 15:42:31 localhost kernel: acpiphp: Slot [12] registered
Feb 23 15:42:31 localhost kernel: acpiphp: Slot [13] registered
Feb 23 15:42:31 localhost kernel: acpiphp: Slot [14] registered
Feb 23 15:42:31 localhost kernel: acpiphp: Slot [15] registered
Feb 23 15:42:31 localhost kernel: acpiphp: Slot [16] registered
Feb 23 15:42:31 localhost kernel: acpiphp: Slot [17] registered
Feb 23 15:42:31 localhost kernel: acpiphp: Slot [18] registered
Feb 23 15:42:31 localhost kernel: acpiphp: Slot [19] registered
Feb 23 15:42:31 localhost kernel: acpiphp: Slot [20] registered
Feb 23 15:42:31 localhost kernel: acpiphp: Slot [21] registered
Feb 23 15:42:31 localhost kernel: acpiphp: Slot [22] registered
Feb 23 15:42:31 localhost kernel: acpiphp: Slot [23] registered
Feb 23 15:42:31 localhost kernel: acpiphp: Slot [24] registered
Feb 23 15:42:31 localhost kernel: acpiphp: Slot [25] registered
Feb 23 15:42:31 localhost kernel: acpiphp: Slot [26] registered
Feb 23 15:42:31 localhost kernel: acpiphp: Slot [27] registered
Feb 23 15:42:31 localhost kernel: acpiphp: Slot [28] registered
Feb 23 15:42:31 localhost kernel: acpiphp: Slot [29] registered
Feb 23 15:42:31 localhost kernel: acpiphp: Slot [30] registered
Feb 23 15:42:31 localhost kernel: acpiphp: Slot [31] registered
Feb 23 15:42:31 localhost kernel: PCI host bridge to bus 0000:00
Feb 23 15:42:31 localhost kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 23 15:42:31 localhost kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 23 15:42:31 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 23 15:42:31 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Feb 23 15:42:31 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x440000000-0x20043fffffff window]
Feb 23 15:42:31 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 23 15:42:31 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 23 15:42:31 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 23 15:42:31 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Feb 23 15:42:31 localhost kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Feb 23 15:42:31 localhost kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Feb 23 15:42:31 localhost kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Feb 23 15:42:31 localhost kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Feb 23 15:42:31 localhost kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Feb 23 15:42:31 localhost kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Feb 23 15:42:31 localhost kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Feb 23 15:42:31 localhost kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Feb 23 15:42:31 localhost kernel: pci 0000:00:01.3: quirk_piix4_acpi+0x0/0x170 took 35156 usecs
Feb 23 15:42:31 localhost kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Feb 23 15:42:31 localhost kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Feb 23 15:42:31 localhost kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Feb 23 15:42:31 localhost kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 23 15:42:31 localhost kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Feb 23 15:42:31 localhost kernel: pci 0000:00:04.0: enabling Extended Tags
Feb 23 15:42:31 localhost kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 23 15:42:31 localhost kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf5fff]
Feb 23 15:42:31 localhost kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf6000-0xfebf7fff]
Feb 23 15:42:31 localhost kernel: pci 0000:00:05.0: reg 0x18: [mem 0xfe800000-0xfe87ffff pref]
Feb 23 15:42:31 localhost kernel: pci 0000:00:05.0: enabling Extended Tags
Feb 23 15:42:31 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 23 15:42:31 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 23 15:42:31 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 23 15:42:31 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 23 15:42:31 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 23 15:42:31 localhost kernel: iommu: Default domain type: Passthrough
Feb 23 15:42:31 localhost kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Feb 23 15:42:31 localhost kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 23 15:42:31 localhost kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Feb 23 15:42:31 localhost kernel: vgaarb: loaded
Feb 23 15:42:31 localhost kernel: SCSI subsystem initialized
Feb 23 15:42:31 localhost kernel: ACPI: bus type USB registered
Feb 23 15:42:31 localhost kernel: usbcore: registered new interface driver usbfs
Feb 23 15:42:31 localhost kernel: usbcore: registered new interface driver hub
Feb 23 15:42:31 localhost kernel: usbcore: registered new device driver usb
Feb 23 15:42:31 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 23 15:42:31 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 23 15:42:31 localhost kernel: PTP clock support registered
Feb 23 15:42:31 localhost kernel: EDAC MC: Ver: 3.0.0
Feb 23 15:42:31 localhost kernel: PCI: Using ACPI for IRQ routing
Feb 23 15:42:31 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 23 15:42:31 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 23 15:42:31 localhost kernel: e820: reserve RAM buffer [mem 0xbffe9000-0xbfffffff]
Feb 23 15:42:31 localhost kernel: e820: reserve RAM buffer [mem 0x42f000000-0x42fffffff]
Feb 23 15:42:31 localhost kernel: NetLabel: Initializing
Feb 23 15:42:31 localhost kernel: NetLabel: domain hash size = 128
Feb 23 15:42:31 localhost kernel: NetLabel: protocols = UNLABELED CIPSOv4 CALIPSO
Feb 23 15:42:31 localhost kernel: NetLabel: unlabeled traffic allowed by default
Feb 23 15:42:31 localhost kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Feb 23 15:42:31 localhost kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Feb 23 15:42:31 localhost kernel: clocksource: Switched to clocksource kvm-clock
Feb 23 15:42:31 localhost kernel: VFS: Disk quotas dquot_6.6.0
Feb 23 15:42:31 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 23 15:42:31 localhost kernel: pnp: PnP ACPI init
Feb 23 15:42:31 localhost kernel: pnp 00:00: Plug and Play ACPI device, IDs PNP0b00 (active)
Feb 23 15:42:31 localhost kernel: pnp 00:01: Plug and Play ACPI device, IDs PNP0303 (active)
Feb 23 15:42:31 localhost kernel: pnp 00:02: Plug and Play ACPI device, IDs PNP0f13 (active)
Feb 23 15:42:31 localhost kernel: pnp 00:03: Plug and Play ACPI device, IDs PNP0400 (active)
Feb 23 15:42:31 localhost kernel: pnp 00:04: Plug and Play ACPI device, IDs PNP0501 (active)
Feb 23 15:42:31 localhost kernel: pnp: PnP ACPI: found 5 devices
Feb 23 15:42:31 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 23 15:42:31 localhost kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 23 15:42:31 localhost kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 23 15:42:31 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 23 15:42:31 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Feb 23 15:42:31 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x440000000-0x20043fffffff window]
Feb 23 15:42:31 localhost kernel: NET: Registered protocol family 2
Feb 23 15:42:31 localhost kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, vmalloc)
Feb 23 15:42:31 localhost kernel: tcp_listen_portaddr_hash hash table entries: 8192 (order: 5, 131072 bytes, vmalloc)
Feb 23 15:42:31 localhost kernel: TCP established hash table entries: 131072 (order: 8, 1048576 bytes, vmalloc)
Feb 23 15:42:31 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, vmalloc)
Feb 23 15:42:31 localhost kernel: TCP: Hash tables configured (established 131072 bind 65536)
Feb 23 15:42:31 localhost kernel: MPTCP token hash table entries: 16384 (order: 6, 393216 bytes, vmalloc)
Feb 23 15:42:31 localhost kernel: UDP hash table entries: 8192 (order: 6, 262144 bytes, vmalloc)
Feb 23 15:42:31 localhost kernel: UDP-Lite hash table entries: 8192 (order: 6, 262144 bytes, vmalloc)
Feb 23 15:42:31 localhost kernel: NET: Registered protocol family 1
Feb 23 15:42:31 localhost kernel: NET: Registered protocol family 44
Feb 23 15:42:31 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 23 15:42:31 localhost kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Feb 23 15:42:31 localhost kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 23 15:42:31 localhost kernel: PCI: CLS 0 bytes, default 64
Feb 23 15:42:31 localhost kernel: Unpacking initramfs...
Feb 23 15:42:31 localhost kernel: Freeing initrd memory: 89872K
Feb 23 15:42:31 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 23 15:42:31 localhost kernel: software IO TLB: mapped [mem 0x00000000bbfe9000-0x00000000bffe9000] (64MB)
Feb 23 15:42:31 localhost kernel: ACPI: bus type thunderbolt registered
Feb 23 15:42:31 localhost kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x29cd4133323, max_idle_ns: 440795296220 ns
Feb 23 15:42:31 localhost kernel: clocksource: Switched to clocksource tsc
Feb 23 15:42:31 localhost kernel: Initialise system trusted keyrings
Feb 23 15:42:31 localhost kernel: Key type blacklist registered
Feb 23 15:42:31 localhost kernel: workingset: timestamp_bits=36 max_order=22 bucket_order=0
Feb 23 15:42:31 localhost kernel: zbud: loaded
Feb 23 15:42:31 localhost kernel: pstore: using deflate compression
Feb 23 15:42:31 localhost kernel: Platform Keyring initialized
Feb 23 15:42:31 localhost kernel: NET: Registered protocol family 38
Feb 23 15:42:31 localhost kernel: Key type asymmetric registered
Feb 23 15:42:31 localhost kernel: Asymmetric key parser 'x509' registered
Feb 23 15:42:31 localhost kernel: Running certificate verification selftests
Feb 23 15:42:31 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Feb 23 15:42:31 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 247)
Feb 23 15:42:31 localhost kernel: io scheduler mq-deadline registered
Feb 23 15:42:31 localhost kernel: io scheduler kyber registered
Feb 23 15:42:31 localhost kernel: io scheduler bfq registered
Feb 23 15:42:31 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Feb 23 15:42:31 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Feb 23 15:42:31 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Feb 23 15:42:31 localhost kernel: ACPI: Power Button [PWRF]
Feb 23 15:42:31 localhost kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input1
Feb 23 15:42:31 localhost kernel: ACPI: Sleep Button [SLPF]
Feb 23 15:42:31 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 23 15:42:31 localhost kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 23 15:42:31 localhost kernel: Non-volatile memory driver v1.3
Feb 23 15:42:31 localhost kernel: rdac: device handler registered
Feb 23 15:42:31 localhost kernel: hp_sw: device handler registered
Feb 23 15:42:31 localhost kernel: emc: device handler registered
Feb 23 15:42:31 localhost kernel: alua: device handler registered
Feb 23 15:42:31 localhost kernel: libphy: Fixed MDIO Bus: probed
Feb 23 15:42:31 localhost kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
Feb 23 15:42:31 localhost kernel: ehci-pci: EHCI PCI platform driver
Feb 23 15:42:31 localhost kernel: ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
Feb 23 15:42:31 localhost kernel: ohci-pci: OHCI PCI platform driver
Feb 23 15:42:31 localhost kernel: uhci_hcd: USB Universal Host Controller Interface driver
Feb 23 15:42:31 localhost kernel: usbcore: registered new interface driver usbserial_generic
Feb 23 15:42:31 localhost kernel: usbserial: USB Serial support registered for generic
Feb 23 15:42:31 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 23 15:42:31 localhost kernel: i8042: Warning: Keylock active
Feb 23 15:42:31 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 23 15:42:31 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 23 15:42:31 localhost kernel: mousedev: PS/2 mouse device common for all mice
Feb 23 15:42:31 localhost kernel: rtc_cmos 00:00: RTC can wake from S4
Feb 23 15:42:31 localhost kernel: rtc_cmos 00:00: registered as rtc0
Feb 23 15:42:31 localhost kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Feb 23 15:42:31 localhost kernel: intel_pstate: Intel P-state driver initializing
Feb 23 15:42:31 localhost kernel: unchecked MSR access error: WRMSR to 0x199 (tried to write 0x0000000000000800) at rIP: 0xffffffffb0471164 (native_write_msr+0x4/0x20)
Feb 23 15:42:31 localhost kernel: Call Trace:
Feb 23 15:42:31 localhost kernel:
Feb 23 15:42:31 localhost kernel: __wrmsr_on_cpu+0x33/0x40
Feb 23 15:42:31 localhost kernel: flush_smp_call_function_queue+0x35/0xe0
Feb 23 15:42:31 localhost kernel: smp_call_function_single_interrupt+0x3a/0xd0
Feb 23 15:42:31 localhost kernel: call_function_single_interrupt+0xf/0x20
Feb 23 15:42:31 localhost kernel:
Feb 23 15:42:31 localhost kernel: RIP: 0010:native_safe_halt+0xe/0x20
Feb 23 15:42:31 localhost kernel: Code: 00 f0 80 48 02 20 48 8b 00 a8 08 75 c0 e9 79 ff ff ff 90 90 90 90 90 90 90 90 90 90 0f 1f 44 00 00 0f 00 2d 06 03 43 00 fb f4 bd 75 22 00 66 66 2e 0f 1f 84 00 00 00 00 00 66 90 0f 1f 44 00
Feb 23 15:42:31 localhost kernel: RSP: 0018:ffffffffb1c03e10 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff04
Feb 23 15:42:31 localhost kernel: RAX: 0000000080004000 RBX: 0000000000000001 RCX: ffff8ab25f02bc40
Feb 23 15:42:31 localhost kernel: RDX: 0000000000000001 RSI: ffffffffb20c3840 RDI: ffff8ab180cc6464
Feb 23 15:42:31 localhost kernel: RBP: ffff8ab180cc6464 R08: 0000000000000001 R09: ffff8ab180cc6400
Feb 23 15:42:31 localhost kernel: R10: 000000000000e00a R11: ffff8ab25f029b84 R12: 0000000000000001
Feb 23 15:42:31 localhost kernel: R13: ffffffffb20c3840 R14: 0000000000000001 R15: 0000000000000001
Feb 23 15:42:31 localhost kernel: acpi_idle_do_entry+0x4a/0x60
Feb 23 15:42:31 localhost kernel: acpi_idle_enter+0x5a/0xd0
Feb 23 15:42:31 localhost kernel: cpuidle_enter_state+0x86/0x3d0
Feb 23 15:42:31 localhost kernel: cpuidle_enter+0x2c/0x40
Feb 23 15:42:31 localhost kernel: do_idle+0x268/0x2d0
Feb 23 15:42:31 localhost kernel: cpu_startup_entry+0x6f/0x80
Feb 23 15:42:31 localhost kernel: start_kernel+0x522/0x546
Feb 23 15:42:31 localhost kernel: secondary_startup_64_no_verify+0xc2/0xcb
Feb 23 15:42:31 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 23 15:42:31 localhost kernel: usbcore: registered new interface driver usbhid
Feb 23 15:42:31 localhost kernel: usbhid: USB HID core driver
Feb 23 15:42:31 localhost kernel: drop_monitor: Initializing network drop monitor service
Feb 23 15:42:31 localhost kernel: Initializing XFRM netlink socket
Feb 23 15:42:31 localhost kernel: NET: Registered protocol family 10
Feb 23 15:42:31 localhost kernel: Segment Routing with IPv6
Feb 23 15:42:31 localhost kernel: NET: Registered protocol family 17
Feb 23 15:42:31 localhost kernel: mpls_gso: MPLS GSO support
Feb 23 15:42:31 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Feb 23 15:42:31 localhost kernel: AES CTR mode by8 optimization enabled
Feb 23 15:42:31 localhost kernel: sched_clock: Marking stable (2182389759, 0)->(3620009706, -1437619947)
Feb 23 15:42:31 localhost kernel: registered taskstats version 1
Feb 23 15:42:31 localhost kernel: Loading compiled-in X.509 certificates
Feb 23 15:42:31 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kernel signing key: 89f84f8328e240c751c884441f2f1c1c17813dd9'
Feb 23 15:42:31 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Feb 23 15:42:31 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Feb 23 15:42:31 localhost kernel: zswap: loaded using pool lzo/zbud
Feb 23 15:42:31 localhost kernel: page_owner is disabled
Feb 23 15:42:31 localhost kernel: Key type big_key registered
Feb 23 15:42:31 localhost kernel: Key type encrypted registered
Feb 23 15:42:31 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 23 15:42:31 localhost kernel: ima: Allocated hash algorithm: sha256
Feb 23 15:42:31 localhost kernel: ima: No architecture policies found
Feb 23 15:42:31 localhost kernel: evm: Initialising EVM extended attributes:
Feb 23 15:42:31 localhost kernel: evm: security.selinux
Feb 23 15:42:31 localhost kernel: evm: security.ima
Feb 23 15:42:31 localhost kernel: evm: security.capability
Feb 23 15:42:31 localhost kernel: evm: HMAC attrs: 0x1
Feb 23 15:42:31 localhost kernel: rtc_cmos 00:00: setting system clock to 2023-02-23 15:42:30 UTC (1677166950)
Feb 23 15:42:31 localhost kernel: Freeing unused decrypted memory: 2036K
Feb 23 15:42:31 localhost kernel: Freeing unused kernel image (initmem) memory: 2540K
Feb 23 15:42:31 localhost kernel: Write protecting the kernel read-only data: 24576k
Feb 23 15:42:31 localhost kernel: Freeing unused kernel image (text/rodata gap) memory: 2012K
Feb 23 15:42:31 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 1944K
Feb 23 15:42:31 localhost systemd-journald[288]: Missed 1 kernel messages
Feb 23 15:42:31 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input2
Feb 23 15:42:31 localhost systemd-journald[288]: Missed 9 kernel messages
Feb 23 15:42:31 localhost kernel: fuse: init (API version 7.33)
Feb 23 15:42:31 localhost kernel: Loading iSCSI transport class v2.0-870.
Feb 23 15:42:31 localhost kernel: IPMI message handler: version 39.2
Feb 23 15:42:31 localhost kernel: ipmi device interface
Feb 23 15:42:31 localhost systemd-journald[288]: Journal started
Feb 23 15:42:31 localhost systemd-journald[288]: Runtime journal (/run/log/journal/ec2d456b0a3e28d0eb2f198315e90643) is 8.0M, max 787.5M, 779.5M free.
Feb 23 15:42:31 localhost systemd-modules-load[289]: Inserted module 'fuse'
Feb 23 15:42:31 localhost systemd-modules-load[289]: Module 'msr' is builtin
Feb 23 15:42:31 localhost systemd-modules-load[289]: Inserted module 'ipmi_devintf'
Feb 23 15:42:31 localhost systemd[1]: systemd-vconsole-setup.service: Succeeded.
Feb 23 15:42:31 localhost systemd[1]: Started Setup Virtual Console.
Feb 23 15:42:31 localhost systemd[1]: memstrack.service: Succeeded.
Feb 23 15:42:31 localhost systemd[1]: Started Load Kernel Modules.
Feb 23 15:42:31 localhost systemd[1]: Starting Apply Kernel Variables...
Feb 23 15:42:31 localhost systemd[1]: Starting dracut cmdline hook...
Feb 23 15:42:31 localhost dracut-cmdline[320]: dracut-412.86.202302170236-0 dracut-049-203.git20220511.el8_6
Feb 23 15:42:31 localhost dracut-cmdline[320]: Using kernel command line parameters: BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-17178804c159fc6199cab178bdafa30fd5fde653d57f312469b0b5e206a2a4f4/vmlinuz-4.18.0-372.43.1.el8_6.x86_64 ostree=/ostree/boot.0/rhcos/17178804c159fc6199cab178bdafa30fd5fde653d57f312469b0b5e206a2a4f4/0 ignition.platform.id=aws console=tty0 console=ttyS0,115200n8 root=UUID=c83680a9-dcc4-4413-a0a5-4681b35c650a rw rootflags=prjquota boot=UUID=54e5ab65-ff73-4a26-8c44-2a9765abf45f
Feb 23 15:42:31 localhost systemd[1]: Started Apply Kernel Variables.
Feb 23 15:42:31 localhost systemd-journald[288]: Missed 11 kernel messages
Feb 23 15:42:31 localhost kernel: iscsi: registered transport (tcp)
Feb 23 15:42:31 localhost kernel: iscsi: registered transport (qla4xxx)
Feb 23 15:42:31 localhost kernel: QLogic iSCSI HBA Driver
Feb 23 15:42:31 localhost kernel: libcxgbi:libcxgbi_init_module: Chelsio iSCSI driver library libcxgbi v0.9.1-ko (Apr. 2015)
Feb 23 15:42:31 localhost kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4
Feb 23 15:42:31 localhost kernel: Chelsio T4-T6 iSCSI Driver cxgb4i v0.9.5-ko (Apr. 2015)
Feb 23 15:42:31 localhost kernel: iscsi: registered transport (cxgb4i)
Feb 23 15:42:31 localhost kernel: cnic: QLogic cnicDriver v2.5.22 (July 20, 2015)
Feb 23 15:42:31 localhost kernel: QLogic NetXtreme II iSCSI Driver bnx2i v2.7.10.1 (Jul 16, 2014)
Feb 23 15:42:31 localhost kernel: iscsi: registered transport (bnx2i)
Feb 23 15:42:31 localhost kernel: iscsi: registered transport (be2iscsi)
Feb 23 15:42:31 localhost kernel: In beiscsi_module_init, tt=0000000037a9ab87
Feb 23 15:42:31 localhost systemd[1]: Started dracut cmdline hook.
Feb 23 15:42:31 localhost systemd[1]: Starting dracut pre-udev hook...
Feb 23 15:42:31 localhost systemd-journald[288]: Missed 2 kernel messages
Feb 23 15:42:31 localhost kernel: device-mapper: uevent: version 1.0.3
Feb 23 15:42:31 localhost kernel: device-mapper: ioctl: 4.43.0-ioctl (2020-10-01) initialised: dm-devel@redhat.com
Feb 23 15:42:31 localhost systemd[1]: Started dracut pre-udev hook.
Feb 23 15:42:31 localhost systemd[1]: Starting udev Kernel Device Manager...
Feb 23 15:42:31 localhost systemd[1]: Started udev Kernel Device Manager.
Feb 23 15:42:31 localhost systemd[1]: Starting dracut pre-trigger hook...
Feb 23 15:42:31 localhost dracut-pre-trigger[478]: rd.md=0: removing MD RAID activation
Feb 23 15:42:31 localhost systemd[1]: Started dracut pre-trigger hook.
Feb 23 15:42:31 localhost systemd[1]: Starting udev Coldplug all Devices...
Feb 23 15:42:31 localhost systemd[1]: Mounting Kernel Configuration File System...
Feb 23 15:42:31 localhost systemd[1]: Mounted Kernel Configuration File System.
Feb 23 15:42:31 localhost systemd[1]: Started udev Coldplug all Devices.
Feb 23 15:42:31 localhost systemd[1]: Starting udev Wait for Complete Device Initialization...
Feb 23 15:42:31 localhost systemd-journald[288]: Missed 11 kernel messages
Feb 23 15:42:31 localhost kernel: nvme nvme0: pci function 0000:00:04.0
Feb 23 15:42:31 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 23 15:42:31 localhost systemd-udevd[543]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 23 15:42:32 localhost kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 23 15:42:32 localhost systemd-journald[288]: Missed 1 kernel messages
Feb 23 15:42:32 localhost kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 23 15:42:32 localhost kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 23 15:42:32 localhost kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 02:ea:92:f9:d3:f3
Feb 23 15:42:32 localhost kernel: nvme0n1: detected capacity change from 0 to 128849018880
Feb 23 15:42:32 localhost systemd-udevd[531]: Using default interface naming scheme 'rhel-8.0'.
Feb 23 15:42:32 localhost systemd-journald[288]: Missed 1 kernel messages
Feb 23 15:42:32 localhost kernel: ena 0000:00:05.0 ens5: renamed from eth0
Feb 23 15:42:32 localhost systemd-udevd[531]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 23 15:42:32 localhost systemd-journald[288]: Missed 1 kernel messages
Feb 23 15:42:32 localhost kernel: nvme0n1: p1 p2 p3 p4
Feb 23 15:42:32 localhost systemd[1]: Found device Amazon Elastic Block Store root.
Feb 23 15:42:32 localhost systemd-udevd[538]: Using default interface naming scheme 'rhel-8.0'.
Feb 23 15:42:32 localhost systemd[1]: Found device Amazon Elastic Block Store root.
Feb 23 15:42:32 localhost systemd[1]: Started udev Wait for Complete Device Initialization.
Feb 23 15:42:32 localhost systemd[1]: Starting Device-Mapper Multipath Device Controller...
Feb 23 15:42:32 localhost systemd[1]: Reached target Initrd Root Device.
Feb 23 15:42:32 localhost systemd[1]: Started Device-Mapper Multipath Device Controller.
Feb 23 15:42:32 localhost systemd[1]: Starting Open-iSCSI...
Feb 23 15:42:32 localhost systemd[1]: Reached target Local File Systems (Pre).
Feb 23 15:42:32 localhost systemd[1]: Reached target Local File Systems.
Feb 23 15:42:32 localhost systemd[1]: Starting Create Volatile Files and Directories...
Feb 23 15:42:32 localhost multipathd[569]: --------start up--------
Feb 23 15:42:32 localhost multipathd[569]: read /etc/multipath.conf
Feb 23 15:42:32 localhost multipathd[569]: /etc/multipath.conf does not exist, blacklisting all devices.
Feb 23 15:42:32 localhost multipathd[569]: You can run "/sbin/mpathconf --enable" to create
Feb 23 15:42:32 localhost multipathd[569]: /etc/multipath.conf. See man mpathconf(8) for more details
Feb 23 15:42:32 localhost multipathd[569]: path checkers start up
Feb 23 15:42:32 localhost multipathd[569]: /etc/multipath.conf does not exist, blacklisting all devices.
Feb 23 15:42:32 localhost multipathd[569]: You can run "/sbin/mpathconf --enable" to create
Feb 23 15:42:32 localhost multipathd[569]: /etc/multipath.conf. See man mpathconf(8) for more details
Feb 23 15:42:32 localhost systemd[1]: Started Open-iSCSI.
Feb 23 15:42:32 localhost iscsid[570]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 23 15:42:32 localhost iscsid[570]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Feb 23 15:42:32 localhost iscsid[570]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 23 15:42:32 localhost iscsid[570]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 23 15:42:32 localhost iscsid[570]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 23 15:42:32 localhost systemd[1]: Started Create Volatile Files and Directories.
Feb 23 15:42:32 localhost systemd[1]: Reached target System Initialization.
Feb 23 15:42:32 localhost systemd[1]: Reached target Basic System.
Feb 23 15:42:32 localhost systemd[1]: Starting dracut initqueue hook...
Feb 23 15:42:32 localhost systemd[1]: Started dracut initqueue hook.
Feb 23 15:42:32 localhost systemd[1]: Starting dracut pre-mount hook...
Feb 23 15:42:32 localhost systemd[1]: Reached target Remote File Systems (Pre).
Feb 23 15:42:32 localhost systemd-fsck[596]: /usr/sbin/fsck.xfs: XFS file system.
Feb 23 15:42:32 localhost systemd[1]: Reached target Remote File Systems.
Feb 23 15:42:32 localhost systemd[1]: Started dracut pre-mount hook.
Feb 23 15:42:32 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/c83680a9-dcc4-4413-a0a5-4681b35c650a...
Feb 23 15:42:32 localhost systemd[1]: Started File System Check on /dev/disk/by-uuid/c83680a9-dcc4-4413-a0a5-4681b35c650a.
Feb 23 15:42:32 localhost systemd[1]: Mounting /sysroot...
Feb 23 15:42:32 localhost systemd-journald[288]: Missed 39 kernel messages
Feb 23 15:42:32 localhost kernel: SGI XFS with ACLs, security attributes, quota, no debug enabled
Feb 23 15:42:32 localhost kernel: XFS (nvme0n1p4): Mounting V5 Filesystem
Feb 23 15:42:32 localhost kernel: XFS (nvme0n1p4): Ending clean mount
Feb 23 15:42:33 localhost systemd[1]: Mounted /sysroot.
Feb 23 15:42:33 localhost systemd[1]: Starting OSTree Prepare OS/...
Feb 23 15:42:33 localhost ostree-prepare-root[613]: preparing sysroot at /sysroot
Feb 23 15:42:33 localhost ostree-prepare-root[613]: Resolved OSTree target to: /sysroot/ostree/deploy/rhcos/deploy/d136fe99192ef059c08d720e32a39214137706f504d5ddcd2e9df310b9f3791e.0
Feb 23 15:42:33 localhost ostree-prepare-root[613]: filesystem at /sysroot currently writable: 1
Feb 23 15:42:33 localhost ostree-prepare-root[613]: sysroot.readonly configuration value: 1
Feb 23 15:42:33 localhost systemd[1]: sysroot-ostree-deploy-rhcos-deploy-d136fe99192ef059c08d720e32a39214137706f504d5ddcd2e9df310b9f3791e.0-etc.mount: Succeeded.
Feb 23 15:42:33 localhost systemd[1]: sysroot-ostree-deploy-rhcos-deploy-d136fe99192ef059c08d720e32a39214137706f504d5ddcd2e9df310b9f3791e.0.mount: Succeeded.
Feb 23 15:42:33 localhost systemd[1]: sysroot-ostree-deploy-rhcos-var.mount: Succeeded.
Feb 23 15:42:33 localhost systemd[1]: sysroot.tmp-usr.mount: Succeeded.
Feb 23 15:42:33 localhost systemd[1]: sysroot.tmp-etc.mount: Succeeded.
Feb 23 15:42:33 localhost systemd[1]: sysroot.tmp.mount: Succeeded.
Feb 23 15:42:33 localhost systemd[1]: Started OSTree Prepare OS/.
Feb 23 15:42:33 localhost systemd[1]: Reached target Initrd Root File System.
Feb 23 15:42:33 localhost systemd[1]: Starting Reload Configuration from the Real Root...
Feb 23 15:42:33 localhost systemd[1]: Reloading.
Feb 23 15:42:33 localhost multipathd[569]: exit (signal)
Feb 23 15:42:33 localhost multipathd[569]: --------shut down-------
Feb 23 15:42:33 localhost systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Feb 23 15:42:33 localhost systemd[1]: multipathd.service: Succeeded.
Feb 23 15:42:33 localhost systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Feb 23 15:42:33 localhost systemd[1]: initrd-parse-etc.service: Succeeded.
Feb 23 15:42:33 localhost systemd[1]: Started Reload Configuration from the Real Root.
Feb 23 15:42:33 localhost systemd[1]: Starting dracut mount hook...
Feb 23 15:42:33 localhost systemd[1]: Reached target Initrd File Systems.
Feb 23 15:42:33 localhost systemd[1]: Reached target Initrd Default Target.
Feb 23 15:42:33 localhost systemd[1]: Started dracut mount hook.
Feb 23 15:42:33 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Feb 23 15:42:33 localhost dracut-pre-pivot[723]: Feb 23 15:42:33 | /etc/multipath.conf does not exist, blacklisting all devices.
Feb 23 15:42:33 localhost dracut-pre-pivot[723]: Feb 23 15:42:33 | You can run "/sbin/mpathconf --enable" to create
Feb 23 15:42:33 localhost dracut-pre-pivot[723]: Feb 23 15:42:33 | /etc/multipath.conf. See man mpathconf(8) for more details
Feb 23 15:42:33 localhost systemd[1]: Started dracut pre-pivot and cleanup hook.
Feb 23 15:42:33 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Feb 23 15:42:33 localhost systemd[1]: dracut-pre-pivot.service: Succeeded.
Feb 23 15:42:33 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Feb 23 15:42:33 localhost systemd[1]: Stopped target Initrd Default Target.
Feb 23 15:42:33 localhost systemd[1]: Stopped target Initrd Root Device.
Feb 23 15:42:33 localhost systemd[1]: Stopped target Subsequent (Not Ignition) boot complete.
Feb 23 15:42:33 localhost systemd[1]: coreos-touch-run-agetty.service: Succeeded.
Feb 23 15:42:33 localhost systemd[1]: Stopped CoreOS: Touch /run/agetty.reload.
Feb 23 15:42:33 localhost systemd[1]: Stopped target Remote File Systems.
Feb 23 15:42:33 localhost systemd[1]: Stopped target Remote File Systems (Pre).
Feb 23 15:42:33 localhost systemd[1]: clevis-luks-askpass.path: Succeeded.
Feb 23 15:42:33 localhost systemd[1]: Stopped Forward Password Requests to Clevis Directory Watch.
Feb 23 15:42:33 localhost systemd[1]: Stopped target Ignition Subsequent Boot Disk Setup.
Feb 23 15:42:33 localhost systemd[1]: Stopped target Timers.
Feb 23 15:42:33 localhost systemd[1]: Stopped target Basic System.
Feb 23 15:42:33 localhost systemd[1]: Stopped target Slices.
Feb 23 15:42:33 localhost systemd[1]: Stopped target System Initialization.
Feb 23 15:42:33 localhost systemd[1]: systemd-tmpfiles-setup.service: Succeeded.
Feb 23 15:42:33 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Feb 23 15:42:33 localhost systemd[1]: Stopped target Local File Systems.
Feb 23 15:42:33 localhost systemd[1]: Stopped target Local File Systems (Pre).
Feb 23 15:42:33 localhost systemd[1]: systemd-udev-settle.service: Succeeded.
Feb 23 15:42:33 localhost systemd[1]: Stopped udev Wait for Complete Device Initialization.
Feb 23 15:42:33 localhost systemd[1]: Stopped target Sockets.
Feb 23 15:42:33 localhost systemd[1]: Stopped target Paths.
Feb 23 15:42:33 localhost systemd[1]: systemd-sysctl.service: Succeeded.
Feb 23 15:42:33 localhost systemd[1]: Stopped Apply Kernel Variables.
Feb 23 15:42:33 localhost systemd[1]: Stopped target Swap.
Feb 23 15:42:33 localhost iscsid[570]: iscsid shutting down.
Feb 23 15:42:33 localhost systemd[1]: dracut-mount.service: Succeeded.
Feb 23 15:42:33 localhost systemd[1]: Stopped dracut mount hook.
Feb 23 15:42:33 localhost systemd[1]: dracut-pre-mount.service: Succeeded.
Feb 23 15:42:33 localhost systemd[1]: Stopped dracut pre-mount hook.
Feb 23 15:42:33 localhost systemd[1]: dracut-initqueue.service: Succeeded.
Feb 23 15:42:33 localhost systemd[1]: Stopped dracut initqueue hook.
Feb 23 15:42:33 localhost systemd[1]: Stopping Open-iSCSI...
Feb 23 15:42:33 localhost systemd[1]: systemd-udev-trigger.service: Succeeded.
Feb 23 15:42:33 localhost systemd[1]: Stopped udev Coldplug all Devices.
Feb 23 15:42:33 localhost systemd[1]: dracut-pre-trigger.service: Succeeded.
Feb 23 15:42:33 localhost systemd[1]: Stopped dracut pre-trigger hook.
Feb 23 15:42:33 localhost systemd[1]: Stopping udev Kernel Device Manager...
Feb 23 15:42:33 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Feb 23 15:42:33 localhost systemd[1]: systemd-ask-password-console.path: Succeeded.
Feb 23 15:42:33 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Feb 23 15:42:33 localhost systemd[1]: systemd-modules-load.service: Succeeded.
Feb 23 15:42:33 localhost systemd[1]: Stopped Load Kernel Modules.
Feb 23 15:42:33 localhost systemd[1]: iscsid.service: Succeeded.
Feb 23 15:42:33 localhost systemd[1]: Stopped Open-iSCSI.
Feb 23 15:42:33 localhost systemd[1]: Stopping iSCSI UserSpace I/O driver...
Feb 23 15:42:33 localhost systemd[1]: iscsid.socket: Succeeded.
Feb 23 15:42:33 localhost systemd[1]: Closed Open-iSCSI iscsid Socket.
Feb 23 15:42:33 localhost systemd[1]: iscsiuio.service: Succeeded.
Feb 23 15:42:33 localhost systemd[1]: Stopped iSCSI UserSpace I/O driver.
Feb 23 15:42:34 localhost systemd[1]: systemd-udevd.service: Succeeded.
Feb 23 15:42:34 localhost systemd[1]: Stopped udev Kernel Device Manager.
Feb 23 15:42:34 localhost systemd[1]: initrd-cleanup.service: Succeeded.
Feb 23 15:42:34 localhost systemd[1]: Started Cleaning Up and Shutting Down Daemons.
Feb 23 15:42:34 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Succeeded.
Feb 23 15:42:34 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Feb 23 15:42:34 localhost systemd[1]: kmod-static-nodes.service: Succeeded.
Feb 23 15:42:34 localhost systemd[1]: Stopped Create list of required static device nodes for the current kernel.
Feb 23 15:42:34 localhost systemd[1]: dracut-pre-udev.service: Succeeded.
Feb 23 15:42:34 localhost systemd[1]: Stopped dracut pre-udev hook.
Feb 23 15:42:34 localhost systemd[1]: dracut-cmdline.service: Succeeded.
Feb 23 15:42:34 localhost systemd[1]: Stopped dracut cmdline hook.
Feb 23 15:42:34 localhost systemd[1]: systemd-udevd-control.socket: Succeeded.
Feb 23 15:42:34 localhost systemd[1]: Closed udev Control Socket.
Feb 23 15:42:34 localhost systemd[1]: systemd-udevd-kernel.socket: Succeeded.
Feb 23 15:42:34 localhost systemd[1]: Closed udev Kernel Socket.
Feb 23 15:42:34 localhost systemd[1]: Starting Cleanup udevd DB...
Feb 23 15:42:34 localhost systemd[1]: iscsiuio.socket: Succeeded.
Feb 23 15:42:34 localhost systemd[1]: Closed Open-iSCSI iscsiuio Socket.
Feb 23 15:42:34 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Succeeded.
Feb 23 15:42:34 localhost systemd[1]: Started Cleanup udevd DB.
Feb 23 15:42:34 localhost systemd[1]: Reached target Switch Root.
Feb 23 15:42:34 localhost systemd[1]: Starting Switch Root...
Feb 23 15:42:34 localhost systemd[1]: Switching root.
Feb 23 15:42:34 localhost systemd-journald[288]: Journal stopped
Feb 23 15:42:35 localhost systemd[1]: Mounted /sysroot.
Feb 23 15:42:35 localhost systemd[1]: Starting OSTree Prepare OS/...
Feb 23 15:42:35 localhost ostree-prepare-root[613]: preparing sysroot at /sysroot
Feb 23 15:42:35 localhost ostree-prepare-root[613]: Resolved OSTree target to: /sysroot/ostree/deploy/rhcos/deploy/d136fe99192ef059c08d720e32a39214137706f504d5ddcd2e9df310b9f3791e.0
Feb 23 15:42:35 localhost ostree-prepare-root[613]: filesystem at /sysroot currently writable: 1
Feb 23 15:42:35 localhost ostree-prepare-root[613]: sysroot.readonly configuration value: 1
Feb 23 15:42:35 localhost systemd[1]: sysroot-ostree-deploy-rhcos-deploy-d136fe99192ef059c08d720e32a39214137706f504d5ddcd2e9df310b9f3791e.0-etc.mount: Succeeded.
Feb 23 15:42:35 localhost systemd[1]: sysroot-ostree-deploy-rhcos-deploy-d136fe99192ef059c08d720e32a39214137706f504d5ddcd2e9df310b9f3791e.0.mount: Succeeded.
Feb 23 15:42:35 localhost systemd[1]: sysroot-ostree-deploy-rhcos-var.mount: Succeeded.
Feb 23 15:42:35 localhost systemd[1]: sysroot.tmp-usr.mount: Succeeded.
Feb 23 15:42:35 localhost systemd[1]: sysroot.tmp-etc.mount: Succeeded.
Feb 23 15:42:35 localhost systemd[1]: sysroot.tmp.mount: Succeeded.
Feb 23 15:42:35 localhost systemd[1]: Started OSTree Prepare OS/.
Feb 23 15:42:35 localhost systemd[1]: Reached target Initrd Root File System.
Feb 23 15:42:35 localhost systemd[1]: Starting Reload Configuration from the Real Root...
Feb 23 15:42:35 localhost systemd[1]: Reloading.
Feb 23 15:42:35 localhost multipathd[569]: exit (signal)
Feb 23 15:42:35 localhost multipathd[569]: --------shut down-------
Feb 23 15:42:35 localhost systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Feb 23 15:42:35 localhost systemd[1]: multipathd.service: Succeeded.
Feb 23 15:42:35 localhost systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Feb 23 15:42:35 localhost systemd[1]: initrd-parse-etc.service: Succeeded.
Feb 23 15:42:35 localhost systemd[1]: Started Reload Configuration from the Real Root.
Feb 23 15:42:35 localhost systemd[1]: Starting dracut mount hook...
Feb 23 15:42:35 localhost systemd[1]: Reached target Initrd File Systems.
Feb 23 15:42:35 localhost systemd[1]: Reached target Initrd Default Target.
Feb 23 15:42:35 localhost systemd[1]: Started dracut mount hook.
Feb 23 15:42:35 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Feb 23 15:42:35 localhost dracut-pre-pivot[723]: Feb 23 15:42:33 | /etc/multipath.conf does not exist, blacklisting all devices.
Feb 23 15:42:35 localhost dracut-pre-pivot[723]: Feb 23 15:42:33 | You can run "/sbin/mpathconf --enable" to create
Feb 23 15:42:35 localhost dracut-pre-pivot[723]: Feb 23 15:42:33 | /etc/multipath.conf. See man mpathconf(8) for more details
Feb 23 15:42:35 localhost systemd[1]: Started dracut pre-pivot and cleanup hook.
Feb 23 15:42:35 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Feb 23 15:42:35 localhost systemd[1]: dracut-pre-pivot.service: Succeeded.
Feb 23 15:42:35 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Feb 23 15:42:35 localhost systemd[1]: Stopped target Initrd Default Target.
Feb 23 15:42:35 localhost systemd[1]: Stopped target Initrd Root Device.
Feb 23 15:42:35 localhost systemd[1]: Stopped target Subsequent (Not Ignition) boot complete.
Feb 23 15:42:35 localhost systemd[1]: coreos-touch-run-agetty.service: Succeeded.
Feb 23 15:42:35 localhost systemd[1]: Stopped CoreOS: Touch /run/agetty.reload.
Feb 23 15:42:35 localhost systemd[1]: Stopped target Remote File Systems.
Feb 23 15:42:35 localhost systemd[1]: Stopped target Remote File Systems (Pre).
Feb 23 15:42:35 localhost systemd[1]: clevis-luks-askpass.path: Succeeded.
Feb 23 15:42:35 localhost systemd[1]: Stopped Forward Password Requests to Clevis Directory Watch.
Feb 23 15:42:35 localhost systemd[1]: Stopped target Ignition Subsequent Boot Disk Setup.
Feb 23 15:42:35 localhost systemd[1]: Stopped target Timers.
Feb 23 15:42:35 localhost systemd[1]: Stopped target Basic System.
Feb 23 15:42:35 localhost systemd[1]: Stopped target Slices.
Feb 23 15:42:35 localhost systemd[1]: Stopped target System Initialization.
Feb 23 15:42:35 localhost systemd[1]: systemd-tmpfiles-setup.service: Succeeded.
Feb 23 15:42:35 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Feb 23 15:42:35 localhost systemd[1]: Stopped target Local File Systems.
Feb 23 15:42:35 localhost systemd[1]: Stopped target Local File Systems (Pre).
Feb 23 15:42:35 localhost systemd[1]: systemd-udev-settle.service: Succeeded.
Feb 23 15:42:35 localhost systemd[1]: Stopped udev Wait for Complete Device Initialization.
Feb 23 15:42:35 localhost systemd[1]: Stopped target Sockets.
Feb 23 15:42:35 localhost systemd[1]: Stopped target Paths.
Feb 23 15:42:35 localhost systemd[1]: systemd-sysctl.service: Succeeded.
Feb 23 15:42:35 localhost systemd[1]: Stopped Apply Kernel Variables.
Feb 23 15:42:35 localhost systemd[1]: Stopped target Swap.
Feb 23 15:42:35 localhost iscsid[570]: iscsid shutting down.
Feb 23 15:42:35 localhost systemd[1]: dracut-mount.service: Succeeded.
Feb 23 15:42:35 localhost systemd[1]: Stopped dracut mount hook.
Feb 23 15:42:35 localhost systemd[1]: dracut-pre-mount.service: Succeeded.
Feb 23 15:42:35 localhost systemd[1]: Stopped dracut pre-mount hook.
Feb 23 15:42:35 localhost systemd[1]: dracut-initqueue.service: Succeeded.
Feb 23 15:42:35 localhost systemd[1]: Stopped dracut initqueue hook.
Feb 23 15:42:35 localhost systemd[1]: Stopping Open-iSCSI...
Feb 23 15:42:35 localhost systemd[1]: systemd-udev-trigger.service: Succeeded.
Feb 23 15:42:35 localhost systemd[1]: Stopped udev Coldplug all Devices.
Feb 23 15:42:35 localhost systemd[1]: dracut-pre-trigger.service: Succeeded.
Feb 23 15:42:35 localhost systemd[1]: Stopped dracut pre-trigger hook.
Feb 23 15:42:35 localhost systemd[1]: Stopping udev Kernel Device Manager...
Feb 23 15:42:35 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Feb 23 15:42:35 localhost systemd[1]: systemd-ask-password-console.path: Succeeded.
Feb 23 15:42:35 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Feb 23 15:42:35 localhost systemd[1]: systemd-modules-load.service: Succeeded.
Feb 23 15:42:35 localhost systemd[1]: Stopped Load Kernel Modules.
Feb 23 15:42:35 localhost systemd[1]: iscsid.service: Succeeded.
Feb 23 15:42:35 localhost systemd[1]: Stopped Open-iSCSI.
Feb 23 15:42:35 localhost systemd[1]: Stopping iSCSI UserSpace I/O driver...
Feb 23 15:42:35 localhost systemd[1]: iscsid.socket: Succeeded.
Feb 23 15:42:35 localhost systemd[1]: Closed Open-iSCSI iscsid Socket.
Feb 23 15:42:35 localhost systemd[1]: iscsiuio.service: Succeeded.
Feb 23 15:42:35 localhost systemd[1]: Stopped iSCSI UserSpace I/O driver.
Feb 23 15:42:35 localhost systemd[1]: systemd-udevd.service: Succeeded.
Feb 23 15:42:35 localhost systemd[1]: Stopped udev Kernel Device Manager.
Feb 23 15:42:35 localhost systemd[1]: initrd-cleanup.service: Succeeded.
Feb 23 15:42:35 localhost systemd[1]: Started Cleaning Up and Shutting Down Daemons.
Feb 23 15:42:35 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Succeeded.
Feb 23 15:42:35 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Feb 23 15:42:35 localhost systemd[1]: kmod-static-nodes.service: Succeeded.
Feb 23 15:42:35 localhost systemd[1]: Stopped Create list of required static device nodes for the current kernel.
Feb 23 15:42:35 localhost systemd[1]: dracut-pre-udev.service: Succeeded.
Feb 23 15:42:35 localhost systemd[1]: Stopped dracut pre-udev hook.
Feb 23 15:42:35 localhost systemd[1]: dracut-cmdline.service: Succeeded.
Feb 23 15:42:35 localhost systemd[1]: Stopped dracut cmdline hook.
Feb 23 15:42:35 localhost systemd[1]: systemd-udevd-control.socket: Succeeded.
Feb 23 15:42:35 localhost systemd[1]: Closed udev Control Socket.
Feb 23 15:42:35 localhost systemd[1]: systemd-udevd-kernel.socket: Succeeded.
Feb 23 15:42:35 localhost systemd[1]: Closed udev Kernel Socket.
Feb 23 15:42:35 localhost systemd[1]: Starting Cleanup udevd DB...
Feb 23 15:42:35 localhost systemd[1]: iscsiuio.socket: Succeeded.
Feb 23 15:42:35 localhost systemd[1]: Closed Open-iSCSI iscsiuio Socket.
Feb 23 15:42:35 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Succeeded.
Feb 23 15:42:35 localhost systemd[1]: Started Cleanup udevd DB.
Feb 23 15:42:35 localhost systemd[1]: Reached target Switch Root.
Feb 23 15:42:35 localhost systemd[1]: Starting Switch Root...
Feb 23 15:42:35 localhost systemd[1]: Switching root.
Feb 23 15:42:35 localhost kernel: printk: systemd-journal: 1 output lines suppressed due to ratelimiting
Feb 23 15:42:35 localhost kernel: printk: systemd: 26 output lines suppressed due to ratelimiting
Feb 23 15:42:35 localhost kernel: audit: type=1404 audit(1677166954.442:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Feb 23 15:42:35 localhost kernel: SELinux: policy capability network_peer_controls=1
Feb 23 15:42:35 localhost kernel: SELinux: policy capability open_perms=1
Feb 23 15:42:35 localhost kernel: SELinux: policy capability extended_socket_class=1
Feb 23 15:42:35 localhost kernel: SELinux: policy capability always_check_network=0
Feb 23 15:42:35 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Feb 23 15:42:35 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 23 15:42:35 localhost kernel: audit: type=1403 audit(1677166954.602:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 23 15:42:35 localhost systemd[1]: Successfully loaded SELinux policy in 162.049ms.
Feb 23 15:42:35 localhost systemd[1]: Relabelled /dev, /run and /sys/fs/cgroup in 12.643ms.
Feb 23 15:42:35 localhost systemd[1]: systemd 239 (239-58.el8_6.9) running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=legacy)
Feb 23 15:42:35 localhost systemd[1]: Detected virtualization kvm.
Feb 23 15:42:35 localhost systemd[1]: Detected architecture x86-64.
Feb 23 15:42:35 localhost coreos-platform-chrony: Updated chrony to use aws configuration /run/coreos-platform-chrony.conf
Feb 23 15:42:35 localhost systemd[1]: /usr/lib/systemd/system/bootupd.service:22: Unknown lvalue 'ProtectHostname' in section 'Service'
Feb 23 15:42:35 localhost systemd[1]: systemd-journald.service: Succeeded.
Feb 23 15:42:35 localhost systemd[1]: systemd-journald.service: Consumed 0 CPU time
Feb 23 15:42:35 localhost systemd[1]: initrd-switch-root.service: Succeeded.
Feb 23 15:42:35 localhost systemd[1]: Stopped Switch Root.
Feb 23 15:42:35 localhost systemd[1]: initrd-switch-root.service: Consumed 0 CPU time
Feb 23 15:42:35 localhost systemd[1]: systemd-journald.service: Service has no hold-off time (RestartSec=0), scheduling restart.
Feb 23 15:42:35 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 23 15:42:35 localhost systemd[1]: Stopped Journal Service.
Feb 23 15:42:35 localhost systemd[1]: systemd-journald.service: Consumed 0 CPU time
Feb 23 15:42:35 localhost systemd[1]: Starting Journal Service...
Feb 23 15:42:35 localhost systemd[1]: Started Forward Password Requests to Clevis Directory Watch.
Feb 23 15:42:35 localhost systemd[1]: Listening on Process Core Dump Socket.
Feb 23 15:42:35 localhost systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 23 15:42:35 localhost systemd[1]: Reached target Local Encrypted Volumes (Pre).
Feb 23 15:42:35 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Feb 23 15:42:35 localhost systemd[1]: Mounting POSIX Message Queue File System...
Feb 23 15:42:35 localhost systemd-journald[793]: Journal started
Feb 23 15:42:35 localhost systemd-journald[793]: Runtime journal (/run/log/journal/ec2d456b0a3e28d0eb2f198315e90643) is 8.0M, max 787.5M, 779.5M free.
Feb 23 15:42:35 localhost systemd[1]: Reached target Synchronize afterburn-sshkeys@.service template instances.
Feb 23 15:42:35 localhost systemd[1]: Created slice User and Session Slice.
Feb 23 15:42:35 localhost systemd[1]: Starting Create list of required static device nodes for the current kernel...
Feb 23 15:42:35 localhost systemd[1]: Listening on udev Control Socket.
Feb 23 15:42:35 localhost systemd[1]: Listening on udev Kernel Socket.
Feb 23 15:42:35 localhost systemd[1]: Listening on Device-mapper event daemon FIFOs.
Feb 23 15:42:35 localhost systemd[1]: Mounting Huge Pages File System...
Feb 23 15:42:35 localhost systemd[1]: Reached target Slices.
Feb 23 15:42:35 localhost systemd[1]: Reached target Host and Network Name Lookups.
Feb 23 15:42:35 localhost systemd[1]: Reached target Remote File Systems.
Feb 23 15:42:35 localhost systemd[1]: Starting CoreOS: Set printk To Level 4 (warn)...
Feb 23 15:42:35 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Feb 23 15:42:35 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Feb 23 15:42:35 localhost systemd[1]: Stopped target Switch Root.
Feb 23 15:42:35 localhost systemd[1]: Stopped target Initrd Root File System.
Feb 23 15:42:35 localhost systemd[1]: Stopped target Initrd File Systems.
Feb 23 15:42:35 localhost systemd[1]: Mounting Kernel Debug File System...
Feb 23 15:42:35 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Feb 23 15:42:35 localhost systemd[1]: Reached target Local Encrypted Volumes.
Feb 23 15:42:35 localhost systemd[1]: Listening on LVM2 poll daemon socket.
Feb 23 15:42:35 localhost systemd[1]: Starting udev Coldplug all Devices...
Feb 23 15:42:35 localhost systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Feb 23 15:42:35 localhost systemd[1]: Created slice system-sshd\x2dkeygen.slice.
Feb 23 15:42:35 localhost systemd[1]: Created slice system-getty.slice.
Feb 23 15:42:35 localhost systemd[1]: Reached target Swap.
Feb 23 15:42:35 localhost systemd[1]: Mounting Temporary Directory (/tmp)...
Feb 23 15:42:35 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Feb 23 15:42:35 localhost systemd[1]: Reached target RPC Port Mapper.
Feb 23 15:42:35 localhost systemd[1]: ostree-prepare-root.service: Succeeded.
Feb 23 15:42:35 localhost systemd[1]: Stopped OSTree Prepare OS/.
Feb 23 15:42:35 localhost systemd[1]: ostree-prepare-root.service: Consumed 0 CPU time
Feb 23 15:42:35 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Feb 23 15:42:35 localhost systemd[1]: systemd-fsck-root.service: Succeeded.
Feb 23 15:42:35 localhost systemd[1]: Stopped File System Check on Root Device.
Feb 23 15:42:35 localhost systemd[1]: systemd-fsck-root.service: Consumed 0 CPU time
Feb 23 15:42:35 localhost systemd[1]: Starting Rebuild Hardware Database...
Feb 23 15:42:35 localhost systemd[1]: Starting Create System Users...
Feb 23 15:42:35 localhost systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 23 15:42:35 localhost systemd[1]: Starting Load Kernel Modules...
Feb 23 15:42:35 localhost systemd[1]: sysroot-usr.mount: Succeeded.
Feb 23 15:42:35 localhost systemd[1]: sysroot-usr.mount: Consumed 0 CPU time
Feb 23 15:42:35 localhost systemd[1]: sysroot-etc.mount: Succeeded.
Feb 23 15:42:35 localhost systemd[1]: sysroot-etc.mount: Consumed 0 CPU time
Feb 23 15:42:35 localhost systemd[1]: sysroot-sysroot.mount: Succeeded.
Feb 23 15:42:35 localhost systemd-modules-load[815]: Module 'msr' is builtin
Feb 23 15:42:35 localhost systemd[1]: sysroot-sysroot.mount: Consumed 0 CPU time
Feb 23 15:42:35 localhost systemd[1]: sysroot-sysroot-ostree-deploy-rhcos-var.mount: Succeeded.
Feb 23 15:42:35 localhost systemd[1]: sysroot-sysroot-ostree-deploy-rhcos-var.mount: Consumed 0 CPU time
Feb 23 15:42:35 localhost systemd[1]: Started Journal Service.
Feb 23 15:42:35 localhost systemd-modules-load[815]: Inserted module 'ip_tables'
Feb 23 15:42:35 localhost systemd[1]: Mounted POSIX Message Queue File System.
Feb 23 15:42:35 localhost systemd[1]: Started Create list of required static device nodes for the current kernel.
Feb 23 15:42:35 localhost systemd[1]: Mounted Huge Pages File System.
Feb 23 15:42:35 localhost systemd[1]: Started CoreOS: Set printk To Level 4 (warn).
Feb 23 15:42:35 localhost systemd[1]: Mounted Kernel Debug File System.
Feb 23 15:42:35 localhost systemd[1]: Started Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Feb 23 15:42:35 localhost systemd[1]: Mounted Temporary Directory (/tmp).
Feb 23 15:42:35 localhost systemd[1]: Started Create System Users.
Feb 23 15:42:35 localhost systemd[1]: Started Load Kernel Modules.
Feb 23 15:42:35 localhost systemd[1]: Mounting FUSE Control File System...
Feb 23 15:42:35 localhost systemd[1]: Starting Apply Kernel Variables...
Feb 23 15:42:35 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Feb 23 15:42:35 localhost systemd[1]: Mounted FUSE Control File System.
Feb 23 15:42:35 localhost systemd[1]: Started Apply Kernel Variables.
Feb 23 15:42:35 localhost systemd[1]: Started udev Coldplug all Devices.
Feb 23 15:42:35 localhost systemd[1]: Starting udev Wait for Complete Device Initialization...
Feb 23 15:42:35 localhost systemd[1]: Started Create Static Device Nodes in /dev.
Feb 23 15:42:35 localhost systemd[1]: Started Rebuild Hardware Database.
Feb 23 15:42:35 localhost systemd[1]: Starting udev Kernel Device Manager...
Feb 23 15:42:35 localhost systemd[1]: Started udev Kernel Device Manager.
Feb 23 15:42:35 localhost systemd-udevd[834]: Using default interface naming scheme 'rhel-8.0'.
Feb 23 15:42:35 localhost systemd-udevd[834]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 23 15:42:35 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input5
Feb 23 15:42:35 localhost kernel: parport_pc 00:03: reported by Plug and Play ACPI
Feb 23 15:42:35 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255
Feb 23 15:42:35 localhost systemd-udevd[834]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 23 15:42:36 localhost kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 23 15:42:36 localhost kernel: ppdev: user-space parallel port driver
Feb 23 15:42:36 localhost systemd[1]: Started udev Wait for Complete Device Initialization.
Feb 23 15:42:36 localhost systemd[1]: Reached target Local File Systems (Pre).
Feb 23 15:42:36 localhost systemd[1]: var.mount: Directory /var to mount over is not empty, mounting anyway.
Feb 23 15:42:36 localhost systemd[1]: Mounting /var...
Feb 23 15:42:36 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/54e5ab65-ff73-4a26-8c44-2a9765abf45f...
Feb 23 15:42:36 localhost systemd[1]: Mounted /var.
Feb 23 15:42:36 localhost systemd[1]: Starting OSTree Remount OS/ Bind Mounts...
Feb 23 15:42:36 localhost systemd[1]: Started OSTree Remount OS/ Bind Mounts.
Feb 23 15:42:36 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Feb 23 15:42:36 localhost systemd[1]: Starting Load/Save Random Seed...
Feb 23 15:42:36 localhost systemd-journald[793]: Time spent on flushing to /var is 71.216ms for 936 entries.
Feb 23 15:42:36 localhost systemd-journald[793]: System journal (/var/log/journal/ec2d456b0a3e28d0eb2f198315e90643) is 8.0M, max 4.0G, 3.9G free.
Feb 23 15:42:36 localhost kernel: EXT4-fs (nvme0n1p3): mounted filesystem with ordered data mode. Opts: (null)
Feb 23 15:42:36 localhost systemd-fsck[870]: boot: clean, 329/98304 files, 240836/393216 blocks
Feb 23 15:42:36 localhost systemd[1]: Started Load/Save Random Seed.
Feb 23 15:42:36 localhost systemd[1]: Started File System Check on /dev/disk/by-uuid/54e5ab65-ff73-4a26-8c44-2a9765abf45f.
Feb 23 15:42:36 localhost systemd[1]: Mounting CoreOS Dynamic Mount for /boot...
Feb 23 15:42:36 localhost systemd[1]: Started Flush Journal to Persistent Storage.
Feb 23 15:42:36 localhost systemd[1]: Mounted CoreOS Dynamic Mount for /boot.
Feb 23 15:42:36 localhost systemd[1]: Reached target Local File Systems.
Feb 23 15:42:36 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Feb 23 15:42:36 localhost systemd[1]: Starting Create Volatile Files and Directories...
Feb 23 15:42:36 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Feb 23 15:42:36 localhost systemd[1]: Starting Rebuild Journal Catalog...
Feb 23 15:42:36 localhost systemd[1]: Starting Run update-ca-trust...
Feb 23 15:42:36 localhost systemd[1]: Started Restore /run/initramfs on shutdown.
Feb 23 15:42:36 localhost systemd-tmpfiles[885]: [/usr/lib/tmpfiles.d/pkg-dbus-daemon.conf:1] Duplicate line for path "/var/lib/dbus", ignoring.
Feb 23 15:42:36 localhost systemd[1]: Started Rebuild Journal Catalog.
Feb 23 15:42:36 localhost systemd-tmpfiles[885]: [/usr/lib/tmpfiles.d/tmp.conf:12] Duplicate line for path "/var/tmp", ignoring.
Feb 23 15:42:36 localhost systemd-tmpfiles[885]: [/usr/lib/tmpfiles.d/var.conf:14] Duplicate line for path "/var/log", ignoring.
Feb 23 15:42:36 localhost systemd-tmpfiles[885]: [/usr/lib/tmpfiles.d/var.conf:19] Duplicate line for path "/var/cache", ignoring.
Feb 23 15:42:36 localhost systemd-tmpfiles[885]: [/usr/lib/tmpfiles.d/var.conf:21] Duplicate line for path "/var/lib", ignoring.
Feb 23 15:42:36 localhost systemd-tmpfiles[885]: [/usr/lib/tmpfiles.d/var.conf:23] Duplicate line for path "/var/spool", ignoring.
Feb 23 15:42:36 localhost systemd-tmpfiles[885]: "/home" already exists and is not a directory.
Feb 23 15:42:36 localhost systemd-tmpfiles[885]: "/srv" already exists and is not a directory.
Feb 23 15:42:36 localhost systemd[1]: Started Create Volatile Files and Directories.
Feb 23 15:42:36 localhost systemd[1]: Starting RHEL CoreOS Rebuild SELinux Policy If Necessary...
Feb 23 15:42:36 localhost systemd[1]: Starting RHCOS Fix SELinux Labeling For /usr/local/sbin...
Feb 23 15:42:36 localhost rhcos-rebuild-selinux-policy[894]: RHEL_VERSION=8.6Checking for policy recompilation
Feb 23 15:42:36 localhost chcon[895]: changing security context of '/usr/local/sbin'
Feb 23 15:42:36 localhost systemd[1]: Starting Security Auditing Service...
Feb 23 15:42:36 localhost rhcos-rebuild-selinux-policy[898]: -rw-r--r--. 1 root root 8914149 Feb 23 15:42 /etc/selinux/targeted/policy/policy.31
Feb 23 15:42:36 localhost rhcos-rebuild-selinux-policy[898]: -rw-r--r--. 3 root root 8914149 Jan 1 1970 /usr/etc/selinux/targeted/policy/policy.31
Feb 23 15:42:36 localhost systemd[1]: Started RHCOS Fix SELinux Labeling For /usr/local/sbin.
Feb 23 15:42:36 localhost auditd[903]: No plugins found, not dispatching events
Feb 23 15:42:36 localhost auditd[903]: Init complete, auditd 3.0.7 listening for events (startup state enable)
Feb 23 15:42:36 localhost augenrules[906]: /sbin/augenrules: No change
Feb 23 15:42:36 localhost augenrules[917]: No rules
Feb 23 15:42:36 localhost augenrules[917]: enabled 1
Feb 23 15:42:36 localhost augenrules[917]: failure 1
Feb 23 15:42:36 localhost augenrules[917]: pid 903
Feb 23 15:42:36 localhost augenrules[917]: rate_limit 0
Feb 23 15:42:36 localhost augenrules[917]: backlog_limit 8192
Feb 23 15:42:36 localhost augenrules[917]: lost 0
Feb 23 15:42:36 localhost augenrules[917]: backlog 0
Feb 23 15:42:36 localhost augenrules[917]: backlog_wait_time 60000
Feb 23 15:42:36 localhost augenrules[917]: backlog_wait_time_actual 0
Feb 23 15:42:36 localhost augenrules[917]: enabled 1
Feb 23 15:42:36 localhost augenrules[917]: failure 1
Feb 23 15:42:36 localhost augenrules[917]: pid 903
Feb 23 15:42:36 localhost augenrules[917]: rate_limit 0
Feb 23 15:42:36 localhost augenrules[917]: backlog_limit 8192
Feb 23 15:42:36 localhost augenrules[917]: lost 0
Feb 23 15:42:36 localhost augenrules[917]: backlog 0
Feb 23 15:42:36 localhost augenrules[917]: backlog_wait_time 60000
Feb 23 15:42:36 localhost augenrules[917]: backlog_wait_time_actual 0
Feb 23 15:42:36 localhost augenrules[917]: enabled 1
Feb 23 15:42:36 localhost augenrules[917]: failure 1
Feb 23 15:42:36 localhost augenrules[917]: pid 903
Feb 23 15:42:36 localhost augenrules[917]: rate_limit 0
Feb 23 15:42:36 localhost augenrules[917]: backlog_limit 8192
Feb 23 15:42:36 localhost augenrules[917]: lost 0
Feb 23 15:42:36 localhost augenrules[917]: backlog 0
Feb 23 15:42:36 localhost augenrules[917]: backlog_wait_time 60000
Feb 23 15:42:36 localhost augenrules[917]: backlog_wait_time_actual 0
Feb 23 15:42:36 localhost systemd[1]: Started RHEL CoreOS Rebuild SELinux Policy If Necessary.
Feb 23 15:42:36 localhost systemd[1]: Started Security Auditing Service.
Feb 23 15:42:36 localhost systemd[1]: Starting Update UTMP about System Boot/Shutdown...
Feb 23 15:42:36 localhost systemd[1]: Started Update UTMP about System Boot/Shutdown.
Feb 23 15:42:36 localhost systemd[1]: Started Run update-ca-trust.
Feb 23 15:42:36 localhost systemd[1]: Started Rebuild Dynamic Linker Cache.
Feb 23 15:42:36 localhost systemd[1]: Starting Update is Completed...
Feb 23 15:42:36 localhost systemd[1]: Started Update is Completed.
Feb 23 15:42:36 localhost systemd[1]: Reached target System Initialization.
Feb 23 15:42:36 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Feb 23 15:42:36 localhost systemd[1]: Listening on bootupd.socket.
Feb 23 15:42:36 localhost systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Feb 23 15:42:36 localhost systemd[1]: Started Monitor console-login-helper-messages runtime issue snippets directory for changes.
Feb 23 15:42:36 localhost systemd[1]: Started Daily rotation of log files.
Feb 23 15:42:36 localhost systemd[1]: Reached target Timers.
Feb 23 15:42:36 localhost systemd[1]: Started OSTree Monitor Staged Deployment.
Feb 23 15:42:36 localhost systemd[1]: Reached target Paths.
Feb 23 15:42:36 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Feb 23 15:42:36 localhost systemd[1]: Reached target Sockets.
Feb 23 15:42:36 localhost systemd[1]: Reached target Basic System.
Feb 23 15:42:36 localhost systemd[1]: Reached target Network (Pre).
Feb 23 15:42:36 localhost systemd[1]: Reached target sshd-keygen.target.
Feb 23 15:42:36 localhost systemd[1]: Starting NTP client/server...
Feb 23 15:42:36 localhost systemd[1]: Starting Generate SSH keys snippet for display via console-login-helper-messages...
Feb 23 15:42:36 localhost systemd[1]: Starting System Security Services Daemon...
Feb 23 15:42:36 localhost systemd[1]: Starting Generate console-login-helper-messages issue snippet...
Feb 23 15:42:36 localhost systemd[1]: Starting CRI-O Auto Update Script...
Feb 23 15:42:36 localhost systemd[1]: Started irqbalance daemon.
Feb 23 15:42:36 localhost systemd[1]: Starting Open vSwitch Database Unit...
Feb 23 15:42:36 localhost systemd[1]: Starting Afterburn (Metadata)...
Feb 23 15:42:36 localhost systemd[1]: Starting Create Ignition Status Issue Files...
Feb 23 15:42:36 localhost systemd[1]: Starting Generation of shadow ID ranges for CRI-O...
Feb 23 15:42:36 localhost systemd[1]: Started D-Bus System Message Bus.
Feb 23 15:42:36 localhost chronyd[960]: chronyd version 4.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Feb 23 15:42:36 localhost chronyd[960]: Frequency 0.103 +/- 9.856 ppm read from /var/lib/chrony/drift
Feb 23 15:42:36 localhost chown[965]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Feb 23 15:42:37 localhost systemd[1]: Started Generate SSH keys snippet for display via console-login-helper-messages.
Feb 23 15:42:37 localhost afterburn[949]: Feb 23 15:42:37.053 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Feb 23 15:42:37 localhost systemd[1]: crio-subid.service: Succeeded.
Feb 23 15:42:37 localhost systemd[1]: Started Generation of shadow ID ranges for CRI-O.
Feb 23 15:42:37 localhost systemd[1]: crio-subid.service: Consumed 17ms CPU time
Feb 23 15:42:37 localhost systemd[1]: Started NTP client/server.
Feb 23 15:42:37 localhost sssd[938]: Starting up
Feb 23 15:42:37 localhost sssd_be[1025]: Starting up
Feb 23 15:42:37 localhost systemd[1]: Started Create Ignition Status Issue Files.
Feb 23 15:42:37 localhost sssd_nss[1037]: Starting up
Feb 23 15:42:37 localhost systemd[1]: Started System Security Services Daemon.
Feb 23 15:42:37 localhost systemd[1]: Reached target User and Group Name Lookups.
Feb 23 15:42:37 localhost systemd[1]: Starting Login Service...
Feb 23 15:42:37 localhost systemd-logind[1051]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 23 15:42:37 localhost systemd-logind[1051]: Watching system buttons on /dev/input/event1 (Sleep Button)
Feb 23 15:42:37 localhost systemd-logind[1051]: Watching system buttons on /dev/input/event2 (AT Translated Set 2 keyboard)
Feb 23 15:42:37 localhost systemd-logind[1051]: New seat seat0.
Feb 23 15:42:37 localhost systemd[1]: Started Login Service.
Feb 23 15:42:37 localhost ovs-ctl[990]: Starting ovsdb-server.
Feb 23 15:42:37 localhost ovs-vsctl[1063]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.3.0
Feb 23 15:42:37 localhost ovs-vsctl[1068]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=2.17.6 "external-ids:system-id=\"4004906b-6ca5-4a32-b3c0-bdcf1c128aba\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"rhcos\"" "system-version=\"4.12\""
Feb 23 15:42:37 localhost ovs-ctl[990]: Configuring Open vSwitch system IDs.
Feb 23 15:42:37 localhost ovs-vsctl[1074]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=localhost
Feb 23 15:42:37 localhost ovs-ctl[990]: Enabling remote OVSDB managers.
Feb 23 15:42:37 localhost systemd[1]: Started Open vSwitch Database Unit.
Feb 23 15:42:37 localhost systemd[1]: Starting Open vSwitch Delete Transient Ports...
Feb 23 15:42:37 localhost systemd[1]: Started Open vSwitch Delete Transient Ports.
Feb 23 15:42:37 localhost systemd[1]: Starting Open vSwitch Forwarding Unit...
Feb 23 15:42:37 localhost kernel: openvswitch: Open vSwitch switching datapath
Feb 23 15:42:37 localhost ovs-ctl[1122]: Inserting openvswitch module.
Feb 23 15:42:37 localhost crio[940]: time="2023-02-23 15:42:37.549723688Z" level=info msg="Starting CRI-O, version: 1.25.2-6.rhaos4.12.git3c4e50c.el8, git: unknown(clean)"
Feb 23 15:42:37 localhost ovs-ctl[1094]: Starting ovs-vswitchd.
Feb 23 15:42:37 localhost ovs-vsctl[1144]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=localhost
Feb 23 15:42:37 localhost ovs-ctl[1094]: Enabling remote OVSDB managers.
Feb 23 15:42:37 localhost systemd[1]: Started Open vSwitch Forwarding Unit.
Feb 23 15:42:37 localhost systemd[1]: Starting Open vSwitch...
Feb 23 15:42:37 localhost systemd[1]: Started Open vSwitch.
Feb 23 15:42:37 localhost systemd[1]: Starting Network Manager...
Feb 23 15:42:37 localhost systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck1584901208-merged.mount: Succeeded.
Feb 23 15:42:37 localhost NetworkManager[1149]: [1677166957.7401] NetworkManager (version 1.36.0-12.el8_6) is starting... (for the first time)
Feb 23 15:42:37 localhost NetworkManager[1149]: [1677166957.7404] Read config: /etc/NetworkManager/NetworkManager.conf (lib: 10-disable-default-plugins.conf, 20-client-id-from-mac.conf) (etc: 20-keyfiles.conf, sdn.conf)
Feb 23 15:42:37 localhost systemd[1]: Started Network Manager.
Feb 23 15:42:37 localhost NetworkManager[1149]: [1677166957.7467] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Feb 23 15:42:37 localhost systemd[1]: Reached target Network.
Feb 23 15:42:37 localhost systemd[1]: Starting OpenSSH server daemon...
Feb 23 15:42:37 localhost systemd[1]: Starting Network Manager Wait Online...
Feb 23 15:42:37 localhost crio[940]: time="2023-02-23 15:42:37.758935418Z" level=info msg="Checking whether cri-o should wipe containers: open /var/run/crio/version: no such file or directory"
Feb 23 15:42:37 localhost crio[940]: time="2023-02-23 15:42:37.758973987Z" level=info msg="open /var/lib/crio/version: no such file or directory: triggering wipe of images"
Feb 23 15:42:37 localhost NetworkManager[1149]: [1677166957.7597] manager[0x5589a3c8c000]: monitoring kernel firmware directory '/lib/firmware'.
Feb 23 15:42:37 localhost systemd[1]: crio-wipe.service: Succeeded.
Feb 23 15:42:37 localhost systemd[1]: Started CRI-O Auto Update Script.
Feb 23 15:42:37 localhost systemd[1]: crio-wipe.service: Consumed 92ms CPU time
Feb 23 15:42:37 localhost dbus-daemon[958]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.6' (uid=0 pid=1149 comm="/usr/sbin/NetworkManager --no-daemon " label="system_u:system_r:NetworkManager_t:s0")
Feb 23 15:42:37 localhost systemd[1]: Starting Hostname Service...
Feb 23 15:42:37 localhost sshd[1152]: Server listening on 0.0.0.0 port 22.
Feb 23 15:42:37 localhost sshd[1152]: Server listening on :: port 22.
Feb 23 15:42:37 localhost systemd[1]: Started OpenSSH server daemon.
Feb 23 15:42:37 localhost dbus-daemon[958]: [system] Successfully activated service 'org.freedesktop.hostname1'
Feb 23 15:42:37 localhost systemd[1]: Started Hostname Service.
Feb 23 15:42:37 localhost NetworkManager[1149]: [1677166957.8520] hostname: hostname: using hostnamed
Feb 23 15:42:37 localhost NetworkManager[1149]: [1677166957.8525] dns-mgr[0x5589a3c69250]: init: dns=default,systemd-resolved rc-manager=symlink
Feb 23 15:42:37 localhost NetworkManager[1149]: [1677166957.8526] policy: set-hostname: set hostname to 'localhost.localdomain' (no hostname found)
Feb 23 15:42:37 localhost.localdomain systemd-hostnamed[1159]: Changed host name to 'localhost.localdomain'
Feb 23 15:42:37 localhost.localdomain NetworkManager[1149]: [1677166957.8619] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.36.0-12.el8_6/libnm-device-plugin-ovs.so)
Feb 23 15:42:37 localhost.localdomain NetworkManager[1149]: [1677166957.8642] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.36.0-12.el8_6/libnm-device-plugin-team.so)
Feb 23 15:42:37 localhost.localdomain NetworkManager[1149]: [1677166957.8643] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Feb 23 15:42:37 localhost.localdomain NetworkManager[1149]: [1677166957.8643] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Feb 23 15:42:37 localhost.localdomain NetworkManager[1149]: [1677166957.8643] manager: Networking is enabled by state file
Feb 23 15:42:37 localhost.localdomain dbus-daemon[958]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service' requested by ':1.6' (uid=0 pid=1149 comm="/usr/sbin/NetworkManager --no-daemon " label="system_u:system_r:NetworkManager_t:s0")
Feb 23 15:42:37 localhost.localdomain NetworkManager[1149]: [1677166957.8651] settings: Loaded settings plugin: keyfile (internal)
Feb 23 15:42:37 localhost.localdomain systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 23 15:42:37 localhost.localdomain NetworkManager[1149]: [1677166957.8693] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.36.0-12.el8_6/libnm-settings-plugin-ifcfg-rh.so")
Feb 23 15:42:37 localhost.localdomain NetworkManager[1149]: [1677166957.8731] dhcp-init: Using DHCP client 'internal'
Feb 23 15:42:37 localhost.localdomain NetworkManager[1149]: [1677166957.8732] device (lo): carrier: link connected
Feb 23 15:42:37 localhost.localdomain NetworkManager[1149]: [1677166957.8734] manager: (lo): new Generic device (/org/freedesktop/NetworkManager/Devices/1)
Feb 23 15:42:37 localhost.localdomain NetworkManager[1149]: [1677166957.8740] manager: (ens5): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Feb 23 15:42:37 localhost.localdomain dbus-daemon[958]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher'
Feb 23 15:42:37 localhost.localdomain systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 23 15:42:37 localhost.localdomain NetworkManager[1149]: [1677166957.8801] settings: (ens5): created default wired connection 'Wired connection 1'
Feb 23 15:42:37 localhost.localdomain NetworkManager[1149]: [1677166957.8801] device (ens5): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
Feb 23 15:42:37 localhost.localdomain kernel: IPv6: ADDRCONF(NETDEV_UP): ens5: link is not ready
Feb 23 15:42:37 localhost.localdomain NetworkManager[1149]: [1677166957.8830] device (ens5): carrier: link connected
Feb 23 15:42:37 localhost.localdomain NetworkManager[1149]: [1677166957.8859] device (ens5): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'managed')
Feb 23 15:42:37 localhost.localdomain NetworkManager[1149]: [1677166957.8862] policy: auto-activating connection 'Wired connection 1' (eb99b8bd-8e1f-3f41-845b-962703e428f7)
Feb 23 15:42:37 localhost.localdomain NetworkManager[1149]: [1677166957.8865] device (ens5): Activation: starting connection 'Wired connection 1' (eb99b8bd-8e1f-3f41-845b-962703e428f7)
Feb 23 15:42:37 localhost.localdomain NetworkManager[1149]: [1677166957.8866] device (ens5): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Feb 23 15:42:37 localhost.localdomain NetworkManager[1149]: [1677166957.8867] manager: NetworkManager state is now CONNECTING
Feb 23 15:42:37 localhost.localdomain NetworkManager[1149]: [1677166957.8867] device (ens5): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Feb 23 15:42:37 localhost.localdomain NetworkManager[1149]: [1677166957.8871] device (ens5): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Feb 23 15:42:37 localhost.localdomain NetworkManager[1149]: [1677166957.8881] dhcp4 (ens5): activation: beginning transaction (timeout in 45 seconds)
Feb 23 15:42:37 localhost.localdomain NetworkManager[1149]: [1677166957.8916] dhcp4 (ens5): state changed new lease, address=10.0.136.68
Feb 23 15:42:37 localhost.localdomain NetworkManager[1149]: [1677166957.8918] policy: set 'Wired connection 1' (ens5) as default for IPv4 routing and DNS
Feb 23 15:42:37 localhost.localdomain NetworkManager[1149]: [1677166957.8920] policy: set-hostname: set hostname to 'ip-10-0-136-68' (from DHCPv4)
Feb 23 15:42:37 ip-10-0-136-68 systemd-hostnamed[1159]: Changed host name to 'ip-10-0-136-68'
Feb 23 15:42:37 ip-10-0-136-68 dbus-daemon[958]: [system] Activating via systemd: service name='org.freedesktop.resolve1' unit='dbus-org.freedesktop.resolve1.service' requested by ':1.6' (uid=0 pid=1149 comm="/usr/sbin/NetworkManager --no-daemon " label="system_u:system_r:NetworkManager_t:s0")
Feb 23 15:42:37 ip-10-0-136-68 dbus-daemon[958]: [system] Activation via systemd failed for unit 'dbus-org.freedesktop.resolve1.service': Unit dbus-org.freedesktop.resolve1.service not found.
Feb 23 15:42:37 ip-10-0-136-68 NetworkManager[1149]: [1677166957.8985] device (ens5): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed')
Feb 23 15:42:37 ip-10-0-136-68 nm-dispatcher[1174]: Error: Device '' not found.
Feb 23 15:42:37 ip-10-0-136-68 nm-dispatcher[1188]: Error: Device '' not found.
Feb 23 15:42:37 ip-10-0-136-68 nm-dispatcher[1199]: Error: Device '' not found.
Feb 23 15:42:37 ip-10-0-136-68 nm-dispatcher[1216]: + [[ OVNKubernetes != \O\V\N\K\u\b\e\r\n\e\t\e\s ]]
Feb 23 15:42:37 ip-10-0-136-68 nm-dispatcher[1216]: + INTERFACE_NAME=ens5
Feb 23 15:42:37 ip-10-0-136-68 nm-dispatcher[1216]: + OPERATION=pre-up
Feb 23 15:42:37 ip-10-0-136-68 nm-dispatcher[1216]: + '[' pre-up '!=' pre-up ']'
Feb 23 15:42:37 ip-10-0-136-68 nm-dispatcher[1218]: ++ nmcli -t -f device,type,uuid conn
Feb 23 15:42:37 ip-10-0-136-68 nm-dispatcher[1219]: ++ awk -F : '{if($1=="ens5" && $2!~/^ovs*/) print $NF}'
Feb 23 15:42:37 ip-10-0-136-68 nm-dispatcher[1216]: + INTERFACE_CONNECTION_UUID=eb99b8bd-8e1f-3f41-845b-962703e428f7
Feb 23 15:42:37 ip-10-0-136-68 nm-dispatcher[1216]: + '[' eb99b8bd-8e1f-3f41-845b-962703e428f7 == '' ']'
Feb 23 15:42:37 ip-10-0-136-68 nm-dispatcher[1227]: ++ awk -F : '{print $NF}'
Feb 23 15:42:37 ip-10-0-136-68 nm-dispatcher[1226]: ++ nmcli -t -f connection.slave-type conn show eb99b8bd-8e1f-3f41-845b-962703e428f7
Feb 23 15:42:37 ip-10-0-136-68 nm-dispatcher[1216]: + INTERFACE_OVS_SLAVE_TYPE=
Feb 23 15:42:37 ip-10-0-136-68 nm-dispatcher[1216]: + '[' '' '!=' ovs-port ']'
Feb 23 15:42:37 ip-10-0-136-68 nm-dispatcher[1216]: + exit 0
Feb 23 15:42:37 ip-10-0-136-68 NetworkManager[1149]: [1677166957.9891] device (ens5): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed')
Feb 23 15:42:37 ip-10-0-136-68 NetworkManager[1149]: [1677166957.9893] device (ens5): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed')
Feb 23 15:42:37 ip-10-0-136-68 NetworkManager[1149]: [1677166957.9895] manager: NetworkManager state is now CONNECTED_SITE
Feb 23 15:42:37 ip-10-0-136-68 NetworkManager[1149]: [1677166957.9897] device (ens5): Activation: successful, device activated.
Feb 23 15:42:37 ip-10-0-136-68 NetworkManager[1149]: [1677166957.9900] manager: NetworkManager state is now CONNECTED_GLOBAL
Feb 23 15:42:37 ip-10-0-136-68 NetworkManager[1149]: [1677166957.9904] manager: startup complete
Feb 23 15:42:37 ip-10-0-136-68 systemd[1]: Started Network Manager Wait Online.
Feb 23 15:42:37 ip-10-0-136-68 systemd[1]: Starting Configures OVS with proper host networking configuration...
Feb 23 15:42:38 ip-10-0-136-68 systemd[1]: console-login-helper-messages-issuegen.service: Succeeded.
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + touch /var/run/ovs-config-executed
Feb 23 15:42:38 ip-10-0-136-68 systemd[1]: Started Generate console-login-helper-messages issue snippet.
Feb 23 15:42:38 ip-10-0-136-68 systemd[1]: console-login-helper-messages-issuegen.service: Consumed 14ms CPU time
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + NM_CONN_ETC_PATH=/etc/NetworkManager/system-connections
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + NM_CONN_RUN_PATH=/run/NetworkManager/system-connections
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + NM_CONN_CONF_PATH=/etc/NetworkManager/system-connections
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + NM_CONN_SET_PATH=/run/NetworkManager/system-connections
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + nm_config_changed=0
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + MANAGED_NM_CONN_SUFFIX=-slave-ovs-clone
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + BRIDGE_METRIC=48
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + BRIDGE1_METRIC=49
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + trap handle_exit EXIT
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' /run/NetworkManager/system-connections '!=' /etc/NetworkManager/system-connections ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' /run/NetworkManager/system-connections '!=' /run/NetworkManager/system-connections ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' '!' -f /etc/cno/mtu-migration/config ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + echo 'Cleaning up left over mtu migration configuration'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: Cleaning up left over mtu migration configuration
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + rm -rf /etc/cno/mtu-migration
Feb 23 15:42:38 ip-10-0-136-68 systemd[1]: Starting Permit User Sessions...
Feb 23 15:42:38 ip-10-0-136-68 nm-dispatcher[1256]: Error: Device '' not found.
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1252]: + grep -q openvswitch
Feb 23 15:42:38 ip-10-0-136-68 systemd[1]: Started Permit User Sessions.
Feb 23 15:42:38 ip-10-0-136-68 afterburn[949]: Feb 23 15:42:38.064 INFO Putting http://169.254.169.254/latest/api/token: Attempt #2
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1251]: + rpm -qa
Feb 23 15:42:38 ip-10-0-136-68 systemd[1]: Started Getty on tty1.
Feb 23 15:42:38 ip-10-0-136-68 systemd[1]: Started Serial Getty on ttyS0.
Feb 23 15:42:38 ip-10-0-136-68 systemd[1]: Reached target Login Prompts.
Feb 23 15:42:38 ip-10-0-136-68 afterburn[949]: Feb 23 15:42:38.071 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Feb 23 15:42:38 ip-10-0-136-68 systemd[1]: Starting Generate console-login-helper-messages issue snippet...
Feb 23 15:42:38 ip-10-0-136-68 afterburn[949]: Feb 23 15:42:38.072 INFO Fetch successful
Feb 23 15:42:38 ip-10-0-136-68 afterburn[949]: Feb 23 15:42:38.072 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Feb 23 15:42:38 ip-10-0-136-68 afterburn[949]: Feb 23 15:42:38.073 INFO Fetch successful
Feb 23 15:42:38 ip-10-0-136-68 afterburn[949]: Feb 23 15:42:38.073 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Feb 23 15:42:38 ip-10-0-136-68 afterburn[949]: Feb 23 15:42:38.073 INFO Fetch successful
Feb 23 15:42:38 ip-10-0-136-68 afterburn[949]: Feb 23 15:42:38.073 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Feb 23 15:42:38 ip-10-0-136-68 afterburn[949]: Feb 23 15:42:38.074 INFO Fetch failed with 404: resource not found
Feb 23 15:42:38 ip-10-0-136-68 afterburn[949]: Feb 23 15:42:38.074 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Feb 23 15:42:38 ip-10-0-136-68 afterburn[949]: Feb 23 15:42:38.075 INFO Fetch failed with 404: resource not found
Feb 23 15:42:38 ip-10-0-136-68 afterburn[949]: Feb 23 15:42:38.075 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Feb 23 15:42:38 ip-10-0-136-68 afterburn[949]: Feb 23 15:42:38.075 INFO Fetch successful
Feb 23 15:42:38 ip-10-0-136-68 afterburn[949]: Feb 23 15:42:38.075 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Feb 23 15:42:38 ip-10-0-136-68 afterburn[949]: Feb 23 15:42:38.076 INFO Fetch successful
Feb 23 15:42:38 ip-10-0-136-68 afterburn[949]: Feb 23 15:42:38.076 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Feb 23 15:42:38 ip-10-0-136-68 afterburn[949]: Feb 23 15:42:38.076 INFO Fetch failed with 404: resource not found
Feb 23 15:42:38 ip-10-0-136-68 afterburn[949]: Feb 23 15:42:38.076 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Feb 23 15:42:38 ip-10-0-136-68 afterburn[949]: Feb 23 15:42:38.077 INFO Fetch successful
Feb 23 15:42:38 ip-10-0-136-68 systemd[1]: Started Afterburn (Metadata).
Feb 23 15:42:38 ip-10-0-136-68 systemd[1]: Starting Fetch kubelet node name from AWS Metadata...
Feb 23 15:42:38 ip-10-0-136-68 systemd[1]: Starting Fetch kubelet provider id from AWS Metadata...
Feb 23 15:42:38 ip-10-0-136-68 aws-kubelet-providerid[1289]: Not replacing existing /etc/systemd/system/kubelet.service.d/20-aws-providerid.conf
Feb 23 15:42:38 ip-10-0-136-68 systemd[1]: aws-kubelet-providerid.service: Succeeded.
Feb 23 15:42:38 ip-10-0-136-68 aws-kubelet-nodename[1288]: Not replacing existing /etc/systemd/system/kubelet.service.d/20-aws-node-name.conf
Feb 23 15:42:38 ip-10-0-136-68 systemd[1]: Started Fetch kubelet provider id from AWS Metadata.
Feb 23 15:42:38 ip-10-0-136-68 systemd[1]: aws-kubelet-providerid.service: Consumed 1ms CPU time
Feb 23 15:42:38 ip-10-0-136-68 systemd[1]: aws-kubelet-nodename.service: Succeeded.
Feb 23 15:42:38 ip-10-0-136-68 systemd[1]: Started Fetch kubelet node name from AWS Metadata.
Feb 23 15:42:38 ip-10-0-136-68 systemd[1]: aws-kubelet-nodename.service: Consumed 1ms CPU time
Feb 23 15:42:38 ip-10-0-136-68 nm-dispatcher[1299]: Error: Device '' not found.
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + print_state
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + echo 'Current device, connection, interface and routing state:'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: Current device, connection, interface and routing state:
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1304]: + grep -v unmanaged
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1303]: + nmcli -g all device
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1304]: ens5:ethernet:connected:full:full:/org/freedesktop/NetworkManager/Devices/2:Wired connection 1:eb99b8bd-8e1f-3f41-845b-962703e428f7:/org/freedesktop/NetworkManager/ActiveConnection/1
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + nmcli -g all connection
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1308]: Wired connection 1:eb99b8bd-8e1f-3f41-845b-962703e428f7:802-3-ethernet:1677166957:Thu Feb 23 15\:42\:37 2023:yes:-999:no:/org/freedesktop/NetworkManager/Settings/1:yes:ens5:activated:/org/freedesktop/NetworkManager/ActiveConnection/1::/run/NetworkManager/system-connections/Wired connection 1.nmconnection
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + ip -d address show
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1312]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1312]: link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0 minmtu 0 maxmtu 0 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1312]: inet 127.0.0.1/8 scope host lo
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1312]: valid_lft forever preferred_lft forever
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1312]: inet6 ::1/128 scope host
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1312]: valid_lft forever preferred_lft forever
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1312]: 2: ens5: mtu 9001 qdisc mq state UP group default qlen 1000
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1312]: link/ether 02:ea:92:f9:d3:f3 brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 128 maxmtu 9216 numtxqueues 4 numrxqueues 4 gso_max_size 65536 gso_max_segs 65535
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1312]: inet 10.0.136.68/19 brd 10.0.159.255 scope global dynamic noprefixroute ens5
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1312]: valid_lft 3600sec preferred_lft 3600sec
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1312]: inet6 fe80::c8e8:d07:4fa0:2dbc/64 scope link tentative noprefixroute
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1312]: valid_lft forever preferred_lft forever
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + ip route show
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1313]: default via 10.0.128.1 dev ens5 proto dhcp src 10.0.136.68 metric 100
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1313]: 10.0.128.0/19 dev ens5 proto kernel scope link src 10.0.136.68 metric 100
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + ip -6 route show
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1314]: ::1 dev lo proto kernel metric 256 pref medium
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1314]: fe80::/64 dev ens5 proto kernel metric 1024 pref medium
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' OVNKubernetes == OVNKubernetes ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + ovnk_config_dir=/etc/ovnk
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + ovnk_var_dir=/var/lib/ovnk
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + extra_bridge_file=/etc/ovnk/extra_bridge
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + iface_default_hint_file=/var/lib/ovnk/iface_default_hint
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + ip_hint_file=/run/nodeip-configuration/primary-ip
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + mkdir -p /etc/ovnk
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + mkdir -p /var/lib/ovnk
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1317]: ++ get_iface_default_hint /var/lib/ovnk/iface_default_hint
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1317]: ++ local iface_default_hint_file=/var/lib/ovnk/iface_default_hint
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1317]: ++ '[' -f /var/lib/ovnk/iface_default_hint ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1317]: ++ echo ''
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + iface_default_hint=
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' '' == '' ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1318]: ++ get_bridge_physical_interface ovs-if-phys0
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1318]: ++ local bridge_interface=ovs-if-phys0
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1318]: ++ local physical_interface=
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1319]: +++ nmcli -g connection.interface-name conn show ovs-if-phys0
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1319]: +++ echo ''
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1318]: ++ physical_interface=
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1318]: ++ echo ''
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + current_interface=
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' '' '!=' '' ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /var/lib/ovnk/iface_default_hint ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' '!' -f /run/configure-ovs-boot-done ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + echo 'Running on boot, restoring previous configuration before proceeding...'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: Running on boot, restoring previous configuration before proceeding...
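The `get_iface_default_hint` trace above reads an interface name cached by a previous run (here the file does not exist yet, so it echoes an empty string); later in the boot the script writes the hint back with `write_iface_default_hint`. A sketch reconstructed from the traced statements, not the shipped script:

```shell
# Reconstruction of the hint helpers as suggested by the bash -x trace:
# read a cached default interface name if the hint file exists, else
# return empty; and write the chosen interface back for the next boot.
get_iface_default_hint() {
  local iface_default_hint_file="$1"
  if [ -f "$iface_default_hint_file" ]; then
    cat "$iface_default_hint_file"
  else
    echo ""
  fi
}

write_iface_default_hint() {
  local iface_default_hint_file="$1"
  local iface="$2"
  echo "$iface" > "$iface_default_hint_file"
}
```

The cache makes interface selection sticky across reboots: once `ens5` is chosen, subsequent boots prefer it even if route metrics change.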
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + rollback_nm
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1324]: ++ get_bridge_physical_interface ovs-if-phys0
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1324]: ++ local bridge_interface=ovs-if-phys0
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1324]: ++ local physical_interface=
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1325]: +++ nmcli -g connection.interface-name conn show ovs-if-phys0
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1325]: +++ echo ''
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1324]: ++ physical_interface=
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1324]: ++ echo ''
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + phys0=
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1330]: ++ get_bridge_physical_interface ovs-if-phys1
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1330]: ++ local bridge_interface=ovs-if-phys1
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1330]: ++ local physical_interface=
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1331]: +++ nmcli -g connection.interface-name conn show ovs-if-phys1
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1331]: +++ echo ''
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1330]: ++ physical_interface=
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1330]: ++ echo ''
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + phys1=
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + remove_all_ovn_bridges
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + echo 'Reverting any previous OVS configuration'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: Reverting any previous OVS configuration
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + remove_ovn_bridges br-ex phys0
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + bridge_name=br-ex
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + port_name=phys0
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + update_nm_conn_conf_files br-ex phys0
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + update_nm_conn_files_base /etc/NetworkManager/system-connections br-ex phys0
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + base_path=/etc/NetworkManager/system-connections
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + bridge_name=br-ex
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + port_name=phys0
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + ovs_port=ovs-port-br-ex
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + ovs_interface=ovs-if-br-ex
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + default_port_name=ovs-port-phys0
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + bridge_interface_name=ovs-if-phys0
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + MANAGED_NM_CONN_FILES=($(echo "${base_path}"/{"$bridge_name","$ovs_interface","$ovs_port","$bridge_interface_name","$default_port_name"}{,.nmconnection}))
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1336]: ++ echo /etc/NetworkManager/system-connections/br-ex /etc/NetworkManager/system-connections/br-ex.nmconnection /etc/NetworkManager/system-connections/ovs-if-br-ex /etc/NetworkManager/system-connections/ovs-if-br-ex.nmconnection /etc/NetworkManager/system-connections/ovs-port-br-ex /etc/NetworkManager/system-connections/ovs-port-br-ex.nmconnection /etc/NetworkManager/system-connections/ovs-if-phys0 /etc/NetworkManager/system-connections/ovs-if-phys0.nmconnection /etc/NetworkManager/system-connections/ovs-port-phys0 /etc/NetworkManager/system-connections/ovs-port-phys0.nmconnection
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + shopt -s nullglob
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + MANAGED_NM_CONN_FILES+=(${base_path}/*${MANAGED_NM_CONN_SUFFIX}.nmconnection ${base_path}/*${MANAGED_NM_CONN_SUFFIX})
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + shopt -u nullglob
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + rm_nm_conn_files
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/br-ex ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/br-ex.nmconnection ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-br-ex ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-br-ex.nmconnection ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-br-ex ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-br-ex.nmconnection ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-phys0 ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-phys0.nmconnection ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-phys0 ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-phys0.nmconnection ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + update_nm_conn_set_files br-ex phys0
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + update_nm_conn_files_base /run/NetworkManager/system-connections br-ex phys0
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + base_path=/run/NetworkManager/system-connections
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + bridge_name=br-ex
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + port_name=phys0
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + ovs_port=ovs-port-br-ex
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + ovs_interface=ovs-if-br-ex
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + default_port_name=ovs-port-phys0
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + bridge_interface_name=ovs-if-phys0
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + MANAGED_NM_CONN_FILES=($(echo "${base_path}"/{"$bridge_name","$ovs_interface","$ovs_port","$bridge_interface_name","$default_port_name"}{,.nmconnection}))
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1337]: ++ echo /run/NetworkManager/system-connections/br-ex /run/NetworkManager/system-connections/br-ex.nmconnection /run/NetworkManager/system-connections/ovs-if-br-ex /run/NetworkManager/system-connections/ovs-if-br-ex.nmconnection /run/NetworkManager/system-connections/ovs-port-br-ex /run/NetworkManager/system-connections/ovs-port-br-ex.nmconnection /run/NetworkManager/system-connections/ovs-if-phys0 /run/NetworkManager/system-connections/ovs-if-phys0.nmconnection /run/NetworkManager/system-connections/ovs-port-phys0 /run/NetworkManager/system-connections/ovs-port-phys0.nmconnection
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + shopt -s nullglob
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + MANAGED_NM_CONN_FILES+=(${base_path}/*${MANAGED_NM_CONN_SUFFIX}.nmconnection ${base_path}/*${MANAGED_NM_CONN_SUFFIX})
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + shopt -u nullglob
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + rm_nm_conn_files
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /run/NetworkManager/system-connections/br-ex ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /run/NetworkManager/system-connections/br-ex.nmconnection ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /run/NetworkManager/system-connections/ovs-if-br-ex ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /run/NetworkManager/system-connections/ovs-if-br-ex.nmconnection ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /run/NetworkManager/system-connections/ovs-port-br-ex ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /run/NetworkManager/system-connections/ovs-port-br-ex.nmconnection ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /run/NetworkManager/system-connections/ovs-if-phys0 ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /run/NetworkManager/system-connections/ovs-if-phys0.nmconnection ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /run/NetworkManager/system-connections/ovs-port-phys0 ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /run/NetworkManager/system-connections/ovs-port-phys0.nmconnection ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + ovs-vsctl --timeout=30 --if-exists del-br br-ex
Feb 23 15:42:38 ip-10-0-136-68 ovs-vsctl[1338]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 --if-exists del-br br-ex
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + remove_ovn_bridges br-ex1 phys1
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + bridge_name=br-ex1
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + port_name=phys1
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + update_nm_conn_conf_files br-ex1 phys1
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + update_nm_conn_files_base /etc/NetworkManager/system-connections br-ex1 phys1
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + base_path=/etc/NetworkManager/system-connections
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + bridge_name=br-ex1
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + port_name=phys1
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + ovs_port=ovs-port-br-ex1
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + ovs_interface=ovs-if-br-ex1
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + default_port_name=ovs-port-phys1
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + bridge_interface_name=ovs-if-phys1
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + MANAGED_NM_CONN_FILES=($(echo "${base_path}"/{"$bridge_name","$ovs_interface","$ovs_port","$bridge_interface_name","$default_port_name"}{,.nmconnection}))
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1339]: ++ echo /etc/NetworkManager/system-connections/br-ex1 /etc/NetworkManager/system-connections/br-ex1.nmconnection /etc/NetworkManager/system-connections/ovs-if-br-ex1 /etc/NetworkManager/system-connections/ovs-if-br-ex1.nmconnection /etc/NetworkManager/system-connections/ovs-port-br-ex1 /etc/NetworkManager/system-connections/ovs-port-br-ex1.nmconnection /etc/NetworkManager/system-connections/ovs-if-phys1 /etc/NetworkManager/system-connections/ovs-if-phys1.nmconnection /etc/NetworkManager/system-connections/ovs-port-phys1 /etc/NetworkManager/system-connections/ovs-port-phys1.nmconnection
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + shopt -s nullglob
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + MANAGED_NM_CONN_FILES+=(${base_path}/*${MANAGED_NM_CONN_SUFFIX}.nmconnection ${base_path}/*${MANAGED_NM_CONN_SUFFIX})
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + shopt -u nullglob
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + rm_nm_conn_files
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/br-ex1 ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/br-ex1.nmconnection ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-br-ex1 ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-br-ex1.nmconnection ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-br-ex1 ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-br-ex1.nmconnection ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-phys1 ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-phys1.nmconnection ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-phys1 ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-phys1.nmconnection ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + update_nm_conn_set_files br-ex1 phys1
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + update_nm_conn_files_base /run/NetworkManager/system-connections br-ex1 phys1
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + base_path=/run/NetworkManager/system-connections
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + bridge_name=br-ex1
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + port_name=phys1
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + ovs_port=ovs-port-br-ex1
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + ovs_interface=ovs-if-br-ex1
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + default_port_name=ovs-port-phys1
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + bridge_interface_name=ovs-if-phys1
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + MANAGED_NM_CONN_FILES=($(echo "${base_path}"/{"$bridge_name","$ovs_interface","$ovs_port","$bridge_interface_name","$default_port_name"}{,.nmconnection}))
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1340]: ++ echo /run/NetworkManager/system-connections/br-ex1 /run/NetworkManager/system-connections/br-ex1.nmconnection /run/NetworkManager/system-connections/ovs-if-br-ex1 /run/NetworkManager/system-connections/ovs-if-br-ex1.nmconnection /run/NetworkManager/system-connections/ovs-port-br-ex1 /run/NetworkManager/system-connections/ovs-port-br-ex1.nmconnection /run/NetworkManager/system-connections/ovs-if-phys1 /run/NetworkManager/system-connections/ovs-if-phys1.nmconnection /run/NetworkManager/system-connections/ovs-port-phys1 /run/NetworkManager/system-connections/ovs-port-phys1.nmconnection
Feb 23 15:42:38 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-opaque\x2dbug\x2dcheck3936593421-merged.mount: Succeeded.
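The `MANAGED_NM_CONN_FILES` assignments traced above use bash brace expansion to pair every managed profile name with and without the `.nmconnection` suffix, which is why each `echo` prints exactly ten paths. The expansion in isolation (bash, values taken from the trace):

```shell
# Bash brace expansion as used by the traced script: five profile names,
# each with an empty suffix and a .nmconnection suffix, giving 10 paths.
base_path=/run/NetworkManager/system-connections
bridge_name=br-ex
ovs_interface=ovs-if-br-ex
ovs_port=ovs-port-br-ex
bridge_interface_name=ovs-if-phys0
default_port_name=ovs-port-phys0
MANAGED_NM_CONN_FILES=($(echo "${base_path}"/{"$bridge_name","$ovs_interface","$ovs_port","$bridge_interface_name","$default_port_name"}{,.nmconnection}))
echo "${#MANAGED_NM_CONN_FILES[@]}"   # 5 names x 2 suffix variants = 10
```

Brace expansion happens before parameter expansion, so `{"$bridge_name",...}` first fans out into separate words and only then substitutes each variable; the `rm_nm_conn_files` loop then just tests `-f` on each candidate.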
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + shopt -s nullglob
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + MANAGED_NM_CONN_FILES+=(${base_path}/*${MANAGED_NM_CONN_SUFFIX}.nmconnection ${base_path}/*${MANAGED_NM_CONN_SUFFIX})
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + shopt -u nullglob
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + rm_nm_conn_files
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /run/NetworkManager/system-connections/br-ex1 ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /run/NetworkManager/system-connections/br-ex1.nmconnection ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /run/NetworkManager/system-connections/ovs-if-br-ex1 ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /run/NetworkManager/system-connections/ovs-if-br-ex1.nmconnection ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /run/NetworkManager/system-connections/ovs-port-br-ex1 ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /run/NetworkManager/system-connections/ovs-port-br-ex1.nmconnection ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /run/NetworkManager/system-connections/ovs-if-phys1 ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /run/NetworkManager/system-connections/ovs-if-phys1.nmconnection ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /run/NetworkManager/system-connections/ovs-port-phys1 ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /run/NetworkManager/system-connections/ovs-port-phys1.nmconnection ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + ovs-vsctl --timeout=30 --if-exists del-br br-ex1
Feb 23 15:42:38 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-opaque\x2dbug\x2dcheck3936593421-merged.mount: Consumed 0 CPU time
Feb 23 15:42:38 ip-10-0-136-68 ovs-vsctl[1341]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 --if-exists del-br br-ex1
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + echo 'OVS configuration successfully reverted'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: OVS configuration successfully reverted
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + reload_profiles_nm '' ''
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' 0 -eq 0 ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + return
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + print_state
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + echo 'Current device, connection, interface and routing state:'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: Current device, connection, interface and routing state:
Feb 23 15:42:38 ip-10-0-136-68 ovs-vsctl[1382]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 --if-exists del-br br-ex
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1342]: + nmcli -g all device
Feb 23 15:42:38 ip-10-0-136-68 ovs-vsctl[1385]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 --if-exists del-br br-ex1
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1343]: + grep -v unmanaged
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1343]: ens5:ethernet:connected:full:full:/org/freedesktop/NetworkManager/Devices/2:Wired connection 1:eb99b8bd-8e1f-3f41-845b-962703e428f7:/org/freedesktop/NetworkManager/ActiveConnection/1
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + nmcli -g all connection
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1347]: Wired connection 1:eb99b8bd-8e1f-3f41-845b-962703e428f7:802-3-ethernet:1677166957:Thu Feb 23 15\:42\:37 2023:yes:-999:no:/org/freedesktop/NetworkManager/Settings/1:yes:ens5:activated:/org/freedesktop/NetworkManager/ActiveConnection/1::/run/NetworkManager/system-connections/Wired connection 1.nmconnection
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + ip -d address show
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1351]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1351]: link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0 minmtu 0 maxmtu 0 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1351]: inet 127.0.0.1/8 scope host lo
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1351]: valid_lft forever preferred_lft forever
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1351]: inet6 ::1/128 scope host
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1351]: valid_lft forever preferred_lft forever
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1351]: 2: ens5: mtu 9001 qdisc mq state UP group default qlen 1000
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1351]: link/ether 02:ea:92:f9:d3:f3 brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 128 maxmtu 9216 numtxqueues 4 numrxqueues 4 gso_max_size 65536 gso_max_segs 65535
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1351]: inet 10.0.136.68/19 brd 10.0.159.255 scope global dynamic noprefixroute ens5
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1351]: valid_lft 3600sec preferred_lft 3600sec
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1351]: inet6 fe80::c8e8:d07:4fa0:2dbc/64 scope link tentative noprefixroute
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1351]: valid_lft forever preferred_lft forever
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + ip route show
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1352]: default via 10.0.128.1 dev ens5 proto dhcp src 10.0.136.68 metric 100
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1352]: 10.0.128.0/19 dev ens5 proto kernel scope link src 10.0.136.68 metric 100
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + ip -6 route show
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1353]: ::1 dev lo proto kernel metric 256 pref medium
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1353]: fe80::/64 dev ens5 proto kernel metric 1024 pref medium
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + touch /run/configure-ovs-boot-done
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1355]: ++ get_nodeip_interface /var/lib/ovnk/iface_default_hint /etc/ovnk/extra_bridge /run/nodeip-configuration/primary-ip
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1355]: ++ local iface=
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1355]: ++ local counter=0
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1355]: ++ local iface_default_hint_file=/var/lib/ovnk/iface_default_hint
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1355]: ++ local extra_bridge_file=/etc/ovnk/extra_bridge
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1355]: ++ local ip_hint_file=/run/nodeip-configuration/primary-ip
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1355]: ++ local extra_bridge=
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1355]: ++ '[' -f /etc/ovnk/extra_bridge ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1356]: +++ get_nodeip_hint_interface /run/nodeip-configuration/primary-ip ''
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1356]: +++ local ip_hint=
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1356]: +++ local ip_hint_file=/run/nodeip-configuration/primary-ip
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1356]: +++ local extra_bridge=
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1356]: +++ local iface=
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1357]: ++++ get_ip_from_ip_hint_file /run/nodeip-configuration/primary-ip
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1357]: ++++ local ip_hint_file=/run/nodeip-configuration/primary-ip
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1357]: ++++ [[ ! -f /run/nodeip-configuration/primary-ip ]]
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1357]: ++++ return
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1356]: +++ ip_hint=
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1356]: +++ [[ -z '' ]]
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1356]: +++ return
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1355]: ++ iface=
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1355]: ++ [[ -n '' ]]
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1355]: ++ '[' 0 -lt 12 ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1355]: ++ '[' '' '!=' '' ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1359]: +++ ip route show default
Feb 23 15:42:38 ip-10-0-136-68 ovs-vsctl[1412]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 --if-exists del-br br-ex
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1360]: +++ grep -v br-ex1
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1361]: +++ awk '{ if ($4 == "dev") { print $5; exit } }'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1355]: ++ iface=ens5
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1355]: ++ [[ -n ens5 ]]
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1355]: ++ break
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1355]: ++ '[' ens5 '!=' br-ex ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1355]: ++ '[' ens5 '!=' br-ex1 ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1362]: +++ get_iface_default_hint /var/lib/ovnk/iface_default_hint
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1362]: +++ local iface_default_hint_file=/var/lib/ovnk/iface_default_hint
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1362]: +++ '[' -f /var/lib/ovnk/iface_default_hint ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1362]: +++ echo ''
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1355]: ++ iface_default_hint=
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1355]: ++ '[' '' '!=' '' ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1355]: ++ '[' ens5 '!=' '' ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1355]: ++ write_iface_default_hint /var/lib/ovnk/iface_default_hint ens5
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1355]: ++ local iface_default_hint_file=/var/lib/ovnk/iface_default_hint
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1355]: ++ local iface=ens5
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1355]: ++ echo ens5
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1355]: ++ echo ens5
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + iface=ens5
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' ens5 '!=' br-ex ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/ovnk/extra_bridge ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1363]: ++ nmcli connection show --active br-ex
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -z '' ']'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + echo 'Bridge br-ex is not active, restoring previous configuration before proceeding...'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: Bridge br-ex is not active, restoring previous configuration before proceeding...
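The interface detection traced above pipes `ip route show default` through `grep -v br-ex1` and an awk filter that prints the word following `dev` on the first matching route (yielding `iface=ens5` here). The same parsing, isolated into a pipeline stage so it can be fed a sample route line; `extract_default_iface` is an illustrative name, not a function from the script:

```shell
# Extract the interface from `ip route show default` output the way the
# traced pipeline does: skip br-ex1 routes, then print field 5 whenever
# field 4 is "dev", stopping at the first match.
extract_default_iface() {
  grep -v br-ex1 | awk '{ if ($4 == "dev") { print $5; exit } }'
}

echo "default via 10.0.128.1 dev ens5 proto dhcp src 10.0.136.68 metric 100" \
  | extract_default_iface
# prints: ens5
```

The `exit` after the first match matters: with multiple default routes, only the best (first-listed) interface wins, and the `grep -v br-ex1` keeps the secondary OVN bridge from ever being selected as the node's primary interface.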
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + rollback_nm
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1368]: ++ get_bridge_physical_interface ovs-if-phys0
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1368]: ++ local bridge_interface=ovs-if-phys0
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1368]: ++ local physical_interface=
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1369]: +++ nmcli -g connection.interface-name conn show ovs-if-phys0
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1369]: +++ echo ''
Feb 23 15:42:38 ip-10-0-136-68 NetworkManager[1149]: [1677166958.9649] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/3)
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1368]: ++ physical_interface=
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1368]: ++ echo ''
Feb 23 15:42:38 ip-10-0-136-68 NetworkManager[1149]: [1677166958.9651] audit: op="connection-add" uuid="6d3cf1fd-1c82-447c-b0c3-a48401391b29" name="br-ex" pid=1413 uid=0 result="success"
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + phys0=
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1374]: ++ get_bridge_physical_interface ovs-if-phys1
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1374]: ++ local bridge_interface=ovs-if-phys1
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1374]: ++ local physical_interface=
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1375]: +++ nmcli -g connection.interface-name conn show ovs-if-phys1
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1375]: +++ echo ''
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1374]: ++ physical_interface=
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1374]: ++ echo ''
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + phys1=
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + remove_all_ovn_bridges
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + echo 'Reverting any previous OVS configuration'
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: Reverting any previous OVS configuration
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + remove_ovn_bridges br-ex phys0
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + bridge_name=br-ex
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + port_name=phys0
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + update_nm_conn_conf_files br-ex phys0
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + update_nm_conn_files_base /etc/NetworkManager/system-connections br-ex phys0
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + base_path=/etc/NetworkManager/system-connections
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + bridge_name=br-ex
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + port_name=phys0
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + ovs_port=ovs-port-br-ex
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + ovs_interface=ovs-if-br-ex
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + default_port_name=ovs-port-phys0
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + bridge_interface_name=ovs-if-phys0
Feb 23 15:42:38 ip-10-0-136-68 configure-ovs.sh[1244]: + MANAGED_NM_CONN_FILES=($(echo "${base_path}"/{"$bridge_name","$ovs_interface","$ovs_port","$bridge_interface_name","$default_port_name"}{,.nmconnection}))
Feb 23 15:42:39 ip-10-0-136-68 ovs-vsctl[1421]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 --if-exists del-port br-ex ens5
Feb 23 15:42:39 ip-10-0-136-68 NetworkManager[1149]: [1677166959.0305] manager: (ens5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/4)
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1380]: ++ echo /etc/NetworkManager/system-connections/br-ex /etc/NetworkManager/system-connections/br-ex.nmconnection /etc/NetworkManager/system-connections/ovs-if-br-ex /etc/NetworkManager/system-connections/ovs-if-br-ex.nmconnection /etc/NetworkManager/system-connections/ovs-port-br-ex /etc/NetworkManager/system-connections/ovs-port-br-ex.nmconnection /etc/NetworkManager/system-connections/ovs-if-phys0 /etc/NetworkManager/system-connections/ovs-if-phys0.nmconnection /etc/NetworkManager/system-connections/ovs-port-phys0 /etc/NetworkManager/system-connections/ovs-port-phys0.nmconnection
Feb 23 15:42:39 ip-10-0-136-68 NetworkManager[1149]: [1677166959.0306] audit: op="connection-add" uuid="4152ced3-7759-4837-844f-8e1195509a74" name="ovs-port-phys0" pid=1422 uid=0 result="success"
Feb 23 15:42:39 ip-10-0-136-68 ovs-vsctl[1430]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 --if-exists del-port br-ex br-ex
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + shopt -s nullglob
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + MANAGED_NM_CONN_FILES+=(${base_path}/*${MANAGED_NM_CONN_SUFFIX}.nmconnection ${base_path}/*${MANAGED_NM_CONN_SUFFIX})
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + shopt -u nullglob
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + rm_nm_conn_files
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/br-ex ']'
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/br-ex.nmconnection ']'
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-br-ex ']'
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-br-ex.nmconnection ']'
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-br-ex ']'
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-br-ex.nmconnection ']'
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-phys0 ']'
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-phys0.nmconnection ']'
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-phys0 ']'
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-phys0.nmconnection ']'
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + update_nm_conn_set_files br-ex phys0
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + update_nm_conn_files_base /run/NetworkManager/system-connections br-ex phys0
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + base_path=/run/NetworkManager/system-connections
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + bridge_name=br-ex
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + port_name=phys0
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + ovs_port=ovs-port-br-ex
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + ovs_interface=ovs-if-br-ex
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + default_port_name=ovs-port-phys0
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + bridge_interface_name=ovs-if-phys0
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + MANAGED_NM_CONN_FILES=($(echo "${base_path}"/{"$bridge_name","$ovs_interface","$ovs_port","$bridge_interface_name","$default_port_name"}{,.nmconnection}))
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1381]: ++ echo /run/NetworkManager/system-connections/br-ex /run/NetworkManager/system-connections/br-ex.nmconnection /run/NetworkManager/system-connections/ovs-if-br-ex /run/NetworkManager/system-connections/ovs-if-br-ex.nmconnection /run/NetworkManager/system-connections/ovs-port-br-ex /run/NetworkManager/system-connections/ovs-port-br-ex.nmconnection /run/NetworkManager/system-connections/ovs-if-phys0 /run/NetworkManager/system-connections/ovs-if-phys0.nmconnection /run/NetworkManager/system-connections/ovs-port-phys0 /run/NetworkManager/system-connections/ovs-port-phys0.nmconnection
Feb 23 15:42:39 ip-10-0-136-68 NetworkManager[1149]: [1677166959.0622] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + shopt -s nullglob
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + MANAGED_NM_CONN_FILES+=(${base_path}/*${MANAGED_NM_CONN_SUFFIX}.nmconnection ${base_path}/*${MANAGED_NM_CONN_SUFFIX})
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + shopt -u nullglob
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + rm_nm_conn_files
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /run/NetworkManager/system-connections/br-ex ']'
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /run/NetworkManager/system-connections/br-ex.nmconnection ']'
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /run/NetworkManager/system-connections/ovs-if-br-ex ']'
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /run/NetworkManager/system-connections/ovs-if-br-ex.nmconnection ']'
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /run/NetworkManager/system-connections/ovs-port-br-ex ']'
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /run/NetworkManager/system-connections/ovs-port-br-ex.nmconnection ']'
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /run/NetworkManager/system-connections/ovs-if-phys0 ']'
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /run/NetworkManager/system-connections/ovs-if-phys0.nmconnection ']'
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /run/NetworkManager/system-connections/ovs-port-phys0 ']'
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /run/NetworkManager/system-connections/ovs-port-phys0.nmconnection ']'
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + ovs-vsctl --timeout=30 --if-exists del-br br-ex
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + remove_ovn_bridges br-ex1 phys1
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + bridge_name=br-ex1
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + port_name=phys1
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + update_nm_conn_conf_files br-ex1 phys1
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + update_nm_conn_files_base /etc/NetworkManager/system-connections br-ex1 phys1
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + base_path=/etc/NetworkManager/system-connections
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + bridge_name=br-ex1
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + port_name=phys1
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + ovs_port=ovs-port-br-ex1
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + ovs_interface=ovs-if-br-ex1
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + default_port_name=ovs-port-phys1
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + bridge_interface_name=ovs-if-phys1
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + MANAGED_NM_CONN_FILES=($(echo "${base_path}"/{"$bridge_name","$ovs_interface","$ovs_port","$bridge_interface_name","$default_port_name"}{,.nmconnection}))
Feb 23 15:42:39 ip-10-0-136-68 NetworkManager[1149]: [1677166959.0624] audit: op="connection-add" uuid="f435d5b6-43a2-4866-9289-a2d3cf775ea9" name="ovs-port-br-ex" pid=1431 uid=0 result="success"
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1383]: ++ echo /etc/NetworkManager/system-connections/br-ex1 /etc/NetworkManager/system-connections/br-ex1.nmconnection /etc/NetworkManager/system-connections/ovs-if-br-ex1 /etc/NetworkManager/system-connections/ovs-if-br-ex1.nmconnection /etc/NetworkManager/system-connections/ovs-port-br-ex1 /etc/NetworkManager/system-connections/ovs-port-br-ex1.nmconnection /etc/NetworkManager/system-connections/ovs-if-phys1 /etc/NetworkManager/system-connections/ovs-if-phys1.nmconnection /etc/NetworkManager/system-connections/ovs-port-phys1 /etc/NetworkManager/system-connections/ovs-port-phys1.nmconnection
Feb 23 15:42:39 ip-10-0-136-68 systemd[1]: console-login-helper-messages-issuegen.service: Succeeded.
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + shopt -s nullglob
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + MANAGED_NM_CONN_FILES+=(${base_path}/*${MANAGED_NM_CONN_SUFFIX}.nmconnection ${base_path}/*${MANAGED_NM_CONN_SUFFIX})
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + shopt -u nullglob
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + rm_nm_conn_files
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/br-ex1 ']'
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/br-ex1.nmconnection ']'
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-br-ex1 ']'
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-br-ex1.nmconnection ']'
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-br-ex1 ']'
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-br-ex1.nmconnection ']'
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-phys1 ']'
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-phys1.nmconnection ']'
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-phys1 ']'
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-phys1.nmconnection ']'
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + update_nm_conn_set_files br-ex1 phys1
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + update_nm_conn_files_base /run/NetworkManager/system-connections br-ex1 phys1
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + base_path=/run/NetworkManager/system-connections
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + bridge_name=br-ex1
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + port_name=phys1
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + ovs_port=ovs-port-br-ex1
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + ovs_interface=ovs-if-br-ex1
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + default_port_name=ovs-port-phys1
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + bridge_interface_name=ovs-if-phys1
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + MANAGED_NM_CONN_FILES=($(echo "${base_path}"/{"$bridge_name","$ovs_interface","$ovs_port","$bridge_interface_name","$default_port_name"}{,.nmconnection}))
Feb 23 15:42:39 ip-10-0-136-68 systemd[1]: Started Generate console-login-helper-messages issue snippet.
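The repeated `MANAGED_NM_CONN_FILES=($(echo ...))` lines in the trace rely on bash brace expansion to enumerate every managed NetworkManager keyfile, with and without the `.nmconnection` suffix. A hedged standalone sketch of that expansion (not part of the log; variable values copied from the br-ex/phys0 case above):

```shell
# Enumerate the managed connection files exactly as the trace does:
# 5 connection names x {"", ".nmconnection"} = 10 candidate paths.
base_path=/etc/NetworkManager/system-connections
bridge_name=br-ex
ovs_interface=ovs-if-br-ex
ovs_port=ovs-port-br-ex
bridge_interface_name=ovs-if-phys0
default_port_name=ovs-port-phys0
MANAGED_NM_CONN_FILES=($(echo "${base_path}"/{"$bridge_name","$ovs_interface","$ovs_port","$bridge_interface_name","$default_port_name"}{,.nmconnection}))
printf '%s\n' "${MANAGED_NM_CONN_FILES[@]}"
```

The order of expansion (each name first bare, then with the suffix) matches the ten-path `echo` output captured by the trace, which is why `rm_nm_conn_files` then tests each path with `'[' -f ... ']'` in that same order.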
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1384]: ++ echo /run/NetworkManager/system-connections/br-ex1 /run/NetworkManager/system-connections/br-ex1.nmconnection /run/NetworkManager/system-connections/ovs-if-br-ex1 /run/NetworkManager/system-connections/ovs-if-br-ex1.nmconnection /run/NetworkManager/system-connections/ovs-port-br-ex1 /run/NetworkManager/system-connections/ovs-port-br-ex1.nmconnection /run/NetworkManager/system-connections/ovs-if-phys1 /run/NetworkManager/system-connections/ovs-if-phys1.nmconnection /run/NetworkManager/system-connections/ovs-port-phys1 /run/NetworkManager/system-connections/ovs-port-phys1.nmconnection
Feb 23 15:42:39 ip-10-0-136-68 systemd[1]: console-login-helper-messages-issuegen.service: Consumed 13ms CPU time
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + shopt -s nullglob
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + MANAGED_NM_CONN_FILES+=(${base_path}/*${MANAGED_NM_CONN_SUFFIX}.nmconnection ${base_path}/*${MANAGED_NM_CONN_SUFFIX})
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + shopt -u nullglob
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + rm_nm_conn_files
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /run/NetworkManager/system-connections/br-ex1 ']'
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /run/NetworkManager/system-connections/br-ex1.nmconnection ']'
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /run/NetworkManager/system-connections/ovs-if-br-ex1 ']'
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /run/NetworkManager/system-connections/ovs-if-br-ex1.nmconnection ']'
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /run/NetworkManager/system-connections/ovs-port-br-ex1 ']'
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /run/NetworkManager/system-connections/ovs-port-br-ex1.nmconnection ']'
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /run/NetworkManager/system-connections/ovs-if-phys1 ']'
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /run/NetworkManager/system-connections/ovs-if-phys1.nmconnection ']'
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /run/NetworkManager/system-connections/ovs-port-phys1 ']'
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /run/NetworkManager/system-connections/ovs-port-phys1.nmconnection ']'
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + ovs-vsctl --timeout=30 --if-exists del-br br-ex1
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + echo 'OVS configuration successfully reverted'
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: OVS configuration successfully reverted
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + reload_profiles_nm '' ''
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' 0 -eq 0 ']'
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + return
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + print_state
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + echo 'Current device, connection, interface and routing state:'
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: Current device, connection, interface and routing state:
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1386]: + nmcli -g all device
Feb 23 15:42:39 ip-10-0-136-68 NetworkManager[1149]: [1677166959.3724] audit: op="connection-add" uuid="1fe530aa-6f97-4a74-9f19-bba3f65a0596" name="ovs-if-phys0" pid=1462 uid=0 result="success"
Feb 23 15:42:39 ip-10-0-136-68 ovs-vsctl[1461]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 --if-exists destroy interface ens5
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1387]: + grep -v unmanaged
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1387]: ens5:ethernet:connected:full:full:/org/freedesktop/NetworkManager/Devices/2:Wired connection 1:eb99b8bd-8e1f-3f41-845b-962703e428f7:/org/freedesktop/NetworkManager/ActiveConnection/1
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + nmcli -g all connection
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1391]: Wired connection 1:eb99b8bd-8e1f-3f41-845b-962703e428f7:802-3-ethernet:1677166957:Thu Feb 23 15\:42\:37 2023:yes:-999:no:/org/freedesktop/NetworkManager/Settings/1:yes:ens5:activated:/org/freedesktop/NetworkManager/ActiveConnection/1::/run/NetworkManager/system-connections/Wired connection 1.nmconnection
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + ip -d address show
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1395]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1395]: link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0 minmtu 0 maxmtu 0 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1395]: inet 127.0.0.1/8 scope host lo
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1395]: valid_lft forever preferred_lft forever
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1395]: inet6 ::1/128 scope host
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1395]: valid_lft forever preferred_lft forever
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1395]: 2: ens5: mtu 9001 qdisc mq state UP group default qlen 1000
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1395]: link/ether 02:ea:92:f9:d3:f3 brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 128 maxmtu 9216 numtxqueues 4 numrxqueues 4 gso_max_size 65536 gso_max_segs 65535
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1395]: inet 10.0.136.68/19 brd 10.0.159.255 scope global dynamic noprefixroute ens5
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1395]: valid_lft 3600sec preferred_lft 3600sec
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1395]: inet6 fe80::c8e8:d07:4fa0:2dbc/64 scope link tentative noprefixroute
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1395]: valid_lft forever preferred_lft forever
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + ip route show
Feb 23 15:42:39 ip-10-0-136-68 ovs-vsctl[1526]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 --if-exists destroy interface br-ex
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1396]: default via 10.0.128.1 dev ens5 proto dhcp src 10.0.136.68 metric 100
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1396]: 10.0.128.0/19 dev ens5 proto kernel scope link src 10.0.136.68 metric 100
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + ip -6 route show
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1397]: ::1 dev lo proto kernel metric 256 pref medium
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1397]: fe80::/64 dev ens5 proto kernel metric 1024 pref medium
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + convert_to_bridge ens5 br-ex phys0 48
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + local iface=ens5
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + local bridge_name=br-ex
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + local port_name=phys0
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + local bridge_metric=48
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + local ovs_port=ovs-port-br-ex
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + local ovs_interface=ovs-if-br-ex
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + local default_port_name=ovs-port-phys0
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + local bridge_interface_name=ovs-if-phys0
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' ens5 = br-ex ']'
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + nm_config_changed=1
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -z ens5 ']'
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + iface_mac=02:ea:92:f9:d3:f3
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + echo 'MAC address found for iface: ens5: 02:ea:92:f9:d3:f3'
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: MAC address found for iface: ens5: 02:ea:92:f9:d3:f3
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1400]: ++ ip link show ens5
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1401]: ++ awk '{print $5; exit}'
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + iface_mtu=9001
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + [[ -z 9001 ]]
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + echo 'MTU found for iface: ens5: 9001'
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: MTU found for iface: ens5: 9001
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1403]: ++ nmcli --fields UUID,DEVICE conn show --active
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1404]: ++ awk '/\sens5\s*$/ {print $1}'
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + old_conn=eb99b8bd-8e1f-3f41-845b-962703e428f7
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + [[ -z eb99b8bd-8e1f-3f41-845b-962703e428f7 ]]
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + nmcli connection show br-ex
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + ovs-vsctl --timeout=30 --if-exists del-br br-ex
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + add_nm_conn type ovs-bridge con-name br-ex conn.interface br-ex 802-3-ethernet.mtu 9001 connection.autoconnect-slaves 1
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + nmcli c add type ovs-bridge con-name br-ex conn.interface br-ex 802-3-ethernet.mtu 9001 connection.autoconnect-slaves 1 connection.autoconnect no
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1413]: Connection 'br-ex' (6d3cf1fd-1c82-447c-b0c3-a48401391b29) successfully added.
Feb 23 15:42:39 ip-10-0-136-68 NetworkManager[1149]: [1677166959.6919] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/6)
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + nmcli connection show ovs-port-phys0
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + ovs-vsctl --timeout=30 --if-exists del-port br-ex ens5
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + add_nm_conn type ovs-port conn.interface ens5 master br-ex con-name ovs-port-phys0 connection.autoconnect-slaves 1
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + nmcli c add type ovs-port conn.interface ens5 master br-ex con-name ovs-port-phys0 connection.autoconnect-slaves 1 connection.autoconnect no
Feb 23 15:42:39 ip-10-0-136-68 NetworkManager[1149]: [1677166959.6920] audit: op="connection-add" uuid="1819ac43-68a9-4976-bc28-41abb9d46380" name="ovs-if-br-ex" pid=1550 uid=0 result="success"
Feb 23 15:42:39 ip-10-0-136-68 ovs-vsctl[1567]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 --if-exists del-br br0
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1422]: Connection 'ovs-port-phys0' (4152ced3-7759-4837-844f-8e1195509a74) successfully added.
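The `convert_to_bridge` phase above issues a fixed sequence of `nmcli c add` calls to build the OVS topology (bridge, two ports, and the enslaved physical NIC). A hedged sketch that collects those commands as strings rather than executing them, since they only make sense against a live NetworkManager; the MTU 9001 and MAC value are specific to this captured node:

```shell
# The four nmcli invocations observed in the trace, in order:
# ovs-bridge br-ex, ovs-port for the NIC, ovs-port for the internal
# interface, then the 802-3-ethernet profile enslaving ens5.
iface=ens5; iface_mac=02:ea:92:f9:d3:f3; iface_mtu=9001
nmcli_cmds=(
  "nmcli c add type ovs-bridge con-name br-ex conn.interface br-ex 802-3-ethernet.mtu ${iface_mtu} connection.autoconnect-slaves 1 connection.autoconnect no"
  "nmcli c add type ovs-port conn.interface ${iface} master br-ex con-name ovs-port-phys0 connection.autoconnect-slaves 1 connection.autoconnect no"
  "nmcli c add type ovs-port conn.interface br-ex master br-ex con-name ovs-port-br-ex connection.autoconnect no"
  "nmcli c add type 802-3-ethernet conn.interface ${iface} master ovs-port-phys0 con-name ovs-if-phys0 connection.autoconnect-priority 100 connection.autoconnect-slaves 1 802-3-ethernet.mtu ${iface_mtu} 802-3-ethernet.cloned-mac-address ${iface_mac} connection.autoconnect no"
)
printf '%s\n' "${nmcli_cmds[@]}"
```

Note every profile is added with `connection.autoconnect no`; the script activates them explicitly later, which is why the NetworkManager audit lines show `connection-add` well before the `connection-activate` for br-ex.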
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + nmcli connection show ovs-port-br-ex
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + ovs-vsctl --timeout=30 --if-exists del-port br-ex br-ex
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + add_nm_conn type ovs-port conn.interface br-ex master br-ex con-name ovs-port-br-ex
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + nmcli c add type ovs-port conn.interface br-ex master br-ex con-name ovs-port-br-ex connection.autoconnect no
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1431]: Connection 'ovs-port-br-ex' (f435d5b6-43a2-4866-9289-a2d3cf775ea9) successfully added.
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + extra_phys_args=()
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1435]: ++ nmcli --get-values connection.type conn show eb99b8bd-8e1f-3f41-845b-962703e428f7
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' 802-3-ethernet == vlan ']'
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1449]: ++ nmcli --get-values connection.type conn show eb99b8bd-8e1f-3f41-845b-962703e428f7
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' 802-3-ethernet == bond ']'
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1453]: ++ nmcli --get-values connection.type conn show eb99b8bd-8e1f-3f41-845b-962703e428f7
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' 802-3-ethernet == team ']'
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + iface_type=802-3-ethernet
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' '!' '' = 0 ']'
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + extra_phys_args+=(802-3-ethernet.cloned-mac-address "${iface_mac}")
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + nmcli connection show ovs-if-phys0
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + ovs-vsctl --timeout=30 --if-exists destroy interface ens5
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + add_nm_conn type 802-3-ethernet conn.interface ens5 master ovs-port-phys0 con-name ovs-if-phys0 connection.autoconnect-priority 100 connection.autoconnect-slaves 1 802-3-ethernet.mtu 9001 802-3-ethernet.cloned-mac-address 02:ea:92:f9:d3:f3
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + nmcli c add type 802-3-ethernet conn.interface ens5 master ovs-port-phys0 con-name ovs-if-phys0 connection.autoconnect-priority 100 connection.autoconnect-slaves 1 802-3-ethernet.mtu 9001 802-3-ethernet.cloned-mac-address 02:ea:92:f9:d3:f3 connection.autoconnect no
Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1462]: Connection 'ovs-if-phys0' (1fe530aa-6f97-4a74-9f19-bba3f65a0596) successfully added.
Feb 23 15:42:39 ip-10-0-136-68 NetworkManager[1149]: [1677166959.8225] agent-manager: agent[479eacc294d26e58,:1.72/nmcli-connect/0]: agent registered Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1466]: ++ nmcli -g connection.uuid conn show ovs-if-phys0 Feb 23 15:42:39 ip-10-0-136-68 NetworkManager[1149]: [1677166959.8232] device (br-ex): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external') Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + new_conn=1fe530aa-6f97-4a74-9f19-bba3f65a0596 Feb 23 15:42:39 ip-10-0-136-68 NetworkManager[1149]: [1677166959.8237] device (br-ex): state change: unavailable -> disconnected (reason 'user-requested', sys-iface-state: 'managed') Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1470]: ++ nmcli -g connection.uuid conn show ovs-port-br-ex Feb 23 15:42:39 ip-10-0-136-68 NetworkManager[1149]: [1677166959.8240] device (br-ex): Activation: starting connection 'br-ex' (6d3cf1fd-1c82-447c-b0c3-a48401391b29) Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + ovs_port_conn=f435d5b6-43a2-4866-9289-a2d3cf775ea9 Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + replace_connection_master eb99b8bd-8e1f-3f41-845b-962703e428f7 1fe530aa-6f97-4a74-9f19-bba3f65a0596 Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + local old=eb99b8bd-8e1f-3f41-845b-962703e428f7 Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + local new=1fe530aa-6f97-4a74-9f19-bba3f65a0596 Feb 23 15:42:39 ip-10-0-136-68 kernel: device ovs-system entered promiscuous mode Feb 23 15:42:39 ip-10-0-136-68 kernel: Timeout policy base is empty Feb 23 15:42:39 ip-10-0-136-68 kernel: Failed to associated timeout policy `ovs_test_tp' Feb 23 15:42:39 ip-10-0-136-68 NetworkManager[1149]: [1677166959.8240] audit: op="connection-activate" uuid="6d3cf1fd-1c82-447c-b0c3-a48401391b29" name="br-ex" pid=1593 uid=0 result="success" Feb 23 15:42:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00008|dpif_netlink|INFO|Datapath dispatch mode: 
per-cpu Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1474]: ++ nmcli -g UUID connection show Feb 23 15:42:39 ip-10-0-136-68 NetworkManager[1149]: [1677166959.8241] device (br-ex): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external') Feb 23 15:42:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00009|ofproto_dpif|INFO|system@ovs-system: Datapath supports recirculation Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + for conn_uuid in $(nmcli -g UUID connection show) Feb 23 15:42:39 ip-10-0-136-68 NetworkManager[1149]: [1677166959.8244] device (br-ex): state change: unavailable -> disconnected (reason 'user-requested', sys-iface-state: 'managed') Feb 23 15:42:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00010|ofproto_dpif|INFO|system@ovs-system: VLAN header stack length probed as 2 Feb 23 15:42:39 ip-10-0-136-68 nm-dispatcher[1606]: Error: Device '' not found. Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1478]: ++ nmcli -g connection.master connection show uuid eb99b8bd-8e1f-3f41-845b-962703e428f7 Feb 23 15:42:39 ip-10-0-136-68 NetworkManager[1149]: [1677166959.8248] device (br-ex): Activation: starting connection 'ovs-port-br-ex' (f435d5b6-43a2-4866-9289-a2d3cf775ea9) Feb 23 15:42:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00011|ofproto_dpif|INFO|system@ovs-system: MPLS label stack length probed as 3 Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' '' '!=' eb99b8bd-8e1f-3f41-845b-962703e428f7 ']' Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + continue Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + for conn_uuid in $(nmcli -g UUID connection show) Feb 23 15:42:39 ip-10-0-136-68 NetworkManager[1149]: [1677166959.8249] device (ens5): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external') Feb 23 15:42:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00012|ofproto_dpif|INFO|system@ovs-system: Datapath supports truncate action Feb 23 15:42:39 ip-10-0-136-68 nm-dispatcher[1622]: + 
[[ OVNKubernetes != \O\V\N\K\u\b\e\r\n\e\t\e\s ]] Feb 23 15:42:39 ip-10-0-136-68 nm-dispatcher[1622]: + INTERFACE_NAME=br-ex Feb 23 15:42:39 ip-10-0-136-68 nm-dispatcher[1622]: + OPERATION=pre-up Feb 23 15:42:39 ip-10-0-136-68 nm-dispatcher[1622]: + '[' pre-up '!=' pre-up ']' Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1482]: ++ nmcli -g connection.master connection show uuid 6d3cf1fd-1c82-447c-b0c3-a48401391b29 Feb 23 15:42:39 ip-10-0-136-68 kernel: device ens5 entered promiscuous mode Feb 23 15:42:39 ip-10-0-136-68 NetworkManager[1149]: [1677166959.8253] device (ens5): state change: unavailable -> disconnected (reason 'user-requested', sys-iface-state: 'managed') Feb 23 15:42:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00013|ofproto_dpif|INFO|system@ovs-system: Datapath supports unique flow ids Feb 23 15:42:39 ip-10-0-136-68 nm-dispatcher[1626]: ++ nmcli -t -f device,type,uuid conn Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' '' '!=' eb99b8bd-8e1f-3f41-845b-962703e428f7 ']' Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + continue Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + for conn_uuid in $(nmcli -g UUID connection show) Feb 23 15:42:39 ip-10-0-136-68 NetworkManager[1149]: [1677166959.8257] device (ens5): Activation: starting connection 'ovs-port-phys0' (4152ced3-7759-4837-844f-8e1195509a74) Feb 23 15:42:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00014|ofproto_dpif|INFO|system@ovs-system: Datapath supports clone action Feb 23 15:42:39 ip-10-0-136-68 nm-dispatcher[1627]: ++ awk -F : '{if($1=="br-ex" && $2!~/^ovs*/) print $NF}' Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1486]: ++ nmcli -g connection.master connection show uuid 1fe530aa-6f97-4a74-9f19-bba3f65a0596 Feb 23 15:42:39 ip-10-0-136-68 NetworkManager[1149]: [1677166959.8257] device (br-ex): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Feb 23 15:42:39 ip-10-0-136-68 ovs-vswitchd[1135]: 
ovs|00015|ofproto_dpif|INFO|system@ovs-system: Max sample nesting level probed as 10 Feb 23 15:42:39 ip-10-0-136-68 nm-dispatcher[1622]: + INTERFACE_CONNECTION_UUID= Feb 23 15:42:39 ip-10-0-136-68 nm-dispatcher[1622]: + '[' '' == '' ']' Feb 23 15:42:39 ip-10-0-136-68 nm-dispatcher[1622]: + exit 0 Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' 4152ced3-7759-4837-844f-8e1195509a74 '!=' eb99b8bd-8e1f-3f41-845b-962703e428f7 ']' Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + continue Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + for conn_uuid in $(nmcli -g UUID connection show) Feb 23 15:42:39 ip-10-0-136-68 NetworkManager[1149]: [1677166959.8259] device (br-ex): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Feb 23 15:42:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00016|ofproto_dpif|INFO|system@ovs-system: Datapath supports eventmask in conntrack action Feb 23 15:42:39 ip-10-0-136-68 nm-dispatcher[1634]: + [[ OVNKubernetes != \O\V\N\K\u\b\e\r\n\e\t\e\s ]] Feb 23 15:42:39 ip-10-0-136-68 nm-dispatcher[1634]: + INTERFACE_NAME=ens5 Feb 23 15:42:39 ip-10-0-136-68 nm-dispatcher[1634]: + OPERATION=pre-up Feb 23 15:42:39 ip-10-0-136-68 nm-dispatcher[1634]: + '[' pre-up '!=' pre-up ']' Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1490]: ++ nmcli -g connection.master connection show uuid f435d5b6-43a2-4866-9289-a2d3cf775ea9 Feb 23 15:42:39 ip-10-0-136-68 NetworkManager[1149]: [1677166959.8261] device (br-ex): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Feb 23 15:42:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00017|ofproto_dpif|INFO|system@ovs-system: Datapath supports ct_clear action Feb 23 15:42:39 ip-10-0-136-68 nm-dispatcher[1636]: ++ nmcli -t -f device,type,uuid conn Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' br-ex '!=' eb99b8bd-8e1f-3f41-845b-962703e428f7 ']' Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + continue Feb 23 15:42:39 ip-10-0-136-68 
configure-ovs.sh[1244]: + for conn_uuid in $(nmcli -g UUID connection show) Feb 23 15:42:39 ip-10-0-136-68 NetworkManager[1149]: [1677166959.8262] device (br-ex): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Feb 23 15:42:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00018|ofproto_dpif|INFO|system@ovs-system: Max dp_hash algorithm probed to be 0 Feb 23 15:42:39 ip-10-0-136-68 nm-dispatcher[1637]: ++ awk -F : '{if($1=="ens5" && $2!~/^ovs*/) print $NF}' Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1494]: ++ nmcli -g connection.master connection show uuid 4152ced3-7759-4837-844f-8e1195509a74 Feb 23 15:42:39 ip-10-0-136-68 NetworkManager[1149]: [1677166959.8265] device (br-ex): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Feb 23 15:42:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00019|ofproto_dpif|INFO|system@ovs-system: Datapath supports check_pkt_len action Feb 23 15:42:39 ip-10-0-136-68 nm-dispatcher[1634]: + INTERFACE_CONNECTION_UUID=1fe530aa-6f97-4a74-9f19-bba3f65a0596 Feb 23 15:42:39 ip-10-0-136-68 nm-dispatcher[1634]: + '[' 1fe530aa-6f97-4a74-9f19-bba3f65a0596 == '' ']' Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' br-ex '!=' eb99b8bd-8e1f-3f41-845b-962703e428f7 ']' Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + continue Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + replace_connection_master ens5 1fe530aa-6f97-4a74-9f19-bba3f65a0596 Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + local old=ens5 Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + local new=1fe530aa-6f97-4a74-9f19-bba3f65a0596 Feb 23 15:42:39 ip-10-0-136-68 NetworkManager[1149]: [1677166959.8267] device (br-ex): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Feb 23 15:42:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00020|ofproto_dpif|INFO|system@ovs-system: Datapath supports timeout policy in conntrack action Feb 23 15:42:39 ip-10-0-136-68 
nm-dispatcher[1645]: ++ nmcli -t -f connection.slave-type conn show 1fe530aa-6f97-4a74-9f19-bba3f65a0596 Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1498]: ++ nmcli -g UUID connection show Feb 23 15:42:39 ip-10-0-136-68 NetworkManager[1149]: [1677166959.8267] device (br-ex): Activation: connection 'ovs-port-br-ex' enslaved, continuing activation Feb 23 15:42:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00021|ofproto_dpif|INFO|system@ovs-system: Datapath supports ct_zero_snat Feb 23 15:42:39 ip-10-0-136-68 nm-dispatcher[1646]: ++ awk -F : '{print $NF}' Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + for conn_uuid in $(nmcli -g UUID connection show) Feb 23 15:42:39 ip-10-0-136-68 NetworkManager[1149]: [1677166959.8269] device (ens5): disconnecting for new activation request. Feb 23 15:42:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00022|ofproto_dpif|INFO|system@ovs-system: Datapath supports add_mpls action Feb 23 15:42:39 ip-10-0-136-68 nm-dispatcher[1634]: + INTERFACE_OVS_SLAVE_TYPE=ovs-port Feb 23 15:42:39 ip-10-0-136-68 nm-dispatcher[1634]: + '[' ovs-port '!=' ovs-port ']' Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1502]: ++ nmcli -g connection.master connection show uuid eb99b8bd-8e1f-3f41-845b-962703e428f7 Feb 23 15:42:39 ip-10-0-136-68 NetworkManager[1149]: [1677166959.8270] device (ens5): state change: activated -> deactivating (reason 'new-activation', sys-iface-state: 'managed') Feb 23 15:42:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00023|ofproto_dpif|INFO|system@ovs-system: Datapath supports ct_state Feb 23 15:42:39 ip-10-0-136-68 nm-dispatcher[1651]: ++ nmcli -t -f connection.master conn show 1fe530aa-6f97-4a74-9f19-bba3f65a0596 Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' '' '!=' ens5 ']' Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + continue Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + for conn_uuid in $(nmcli -g UUID connection show) Feb 23 15:42:39 ip-10-0-136-68 NetworkManager[1149]: 
[1677166959.8271] manager: NetworkManager state is now CONNECTING Feb 23 15:42:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00024|ofproto_dpif|INFO|system@ovs-system: Datapath supports ct_zone Feb 23 15:42:39 ip-10-0-136-68 nm-dispatcher[1652]: ++ awk -F : '{print $NF}' Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1506]: ++ nmcli -g connection.master connection show uuid 6d3cf1fd-1c82-447c-b0c3-a48401391b29 Feb 23 15:42:39 ip-10-0-136-68 NetworkManager[1149]: [1677166959.8276] device (ens5): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Feb 23 15:42:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00025|ofproto_dpif|INFO|system@ovs-system: Datapath supports ct_mark Feb 23 15:42:39 ip-10-0-136-68 nm-dispatcher[1634]: + PORT=4152ced3-7759-4837-844f-8e1195509a74 Feb 23 15:42:39 ip-10-0-136-68 nm-dispatcher[1634]: + '[' 4152ced3-7759-4837-844f-8e1195509a74 == '' ']' Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' '' '!=' ens5 ']' Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + continue Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + for conn_uuid in $(nmcli -g UUID connection show) Feb 23 15:42:39 ip-10-0-136-68 NetworkManager[1149]: [1677166959.8279] device (ens5): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Feb 23 15:42:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00026|ofproto_dpif|INFO|system@ovs-system: Datapath supports ct_label Feb 23 15:42:39 ip-10-0-136-68 nm-dispatcher[1657]: ++ nmcli -t -f device,type,uuid conn Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1510]: ++ nmcli -g connection.master connection show uuid 1fe530aa-6f97-4a74-9f19-bba3f65a0596 Feb 23 15:42:39 ip-10-0-136-68 NetworkManager[1149]: [1677166959.8283] device (ens5): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Feb 23 15:42:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00027|ofproto_dpif|INFO|system@ovs-system: Datapath supports ct_state_nat Feb 23 15:42:39 ip-10-0-136-68 
nm-dispatcher[1658]: ++ awk -F : '{if( ($1=="4152ced3-7759-4837-844f-8e1195509a74" || $3=="4152ced3-7759-4837-844f-8e1195509a74") && $2~/^ovs*/) print $NF}' Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' 4152ced3-7759-4837-844f-8e1195509a74 '!=' ens5 ']' Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + continue Feb 23 15:42:39 ip-10-0-136-68 configure-ovs.sh[1244]: + for conn_uuid in $(nmcli -g UUID connection show) Feb 23 15:42:39 ip-10-0-136-68 NetworkManager[1149]: [1677166959.8284] device (ens5): Activation: connection 'ovs-port-phys0' enslaved, continuing activation Feb 23 15:42:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00028|ofproto_dpif|INFO|system@ovs-system: Datapath supports ct_orig_tuple Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1634]: + PORT_CONNECTION_UUID=4152ced3-7759-4837-844f-8e1195509a74 Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1634]: + '[' 4152ced3-7759-4837-844f-8e1195509a74 == '' ']' Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1514]: ++ nmcli -g connection.master connection show uuid f435d5b6-43a2-4866-9289-a2d3cf775ea9 Feb 23 15:42:39 ip-10-0-136-68 NetworkManager[1149]: [1677166959.8285] device (br-ex): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed') Feb 23 15:42:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00029|ofproto_dpif|INFO|system@ovs-system: Datapath supports ct_orig_tuple6 Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1663]: ++ nmcli -t -f connection.slave-type conn show 4152ced3-7759-4837-844f-8e1195509a74 Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' br-ex '!=' ens5 ']' Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + continue Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + for conn_uuid in $(nmcli -g UUID connection show) Feb 23 15:42:39 ip-10-0-136-68 NetworkManager[1149]: [1677166959.8288] device (ens5): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed') Feb 23 15:42:39 ip-10-0-136-68 ovs-vswitchd[1135]: 
ovs|00030|ofproto_dpif|INFO|system@ovs-system: Datapath does not support IPv6 ND Extensions Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1664]: ++ awk -F : '{print $NF}' Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1518]: ++ nmcli -g connection.master connection show uuid 4152ced3-7759-4837-844f-8e1195509a74 Feb 23 15:42:39 ip-10-0-136-68 NetworkManager[1149]: [1677166959.8290] device (br-ex): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed') Feb 23 15:42:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00031|ofproto_dpif_upcall|INFO|Overriding n-handler-threads to 4, setting n-revalidator-threads to 2 Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1634]: + PORT_OVS_SLAVE_TYPE=ovs-bridge Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1634]: + '[' ovs-bridge '!=' ovs-bridge ']' Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' br-ex '!=' ens5 ']' Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + continue Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + nmcli connection show ovs-if-br-ex Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + ovs-vsctl --timeout=30 --if-exists destroy interface br-ex Feb 23 15:42:39 ip-10-0-136-68 NetworkManager[1149]: [1677166959.8332] device (ens5): state change: deactivating -> disconnected (reason 'new-activation', sys-iface-state: 'managed') Feb 23 15:42:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00032|ofproto_dpif_upcall|INFO|Starting 6 threads Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1669]: ++ nmcli -t -f connection.master conn show 4152ced3-7759-4837-844f-8e1195509a74 Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1527]: + nmcli --fields ipv4.method,ipv6.method conn show eb99b8bd-8e1f-3f41-845b-962703e428f7 Feb 23 15:42:39 ip-10-0-136-68 NetworkManager[1149]: [1677166959.8393] dhcp4 (ens5): canceled DHCP transaction Feb 23 15:42:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00033|bridge|INFO|bridge br-ex: added interface ens5 on port 1 Feb 23 15:42:40 ip-10-0-136-68 
nm-dispatcher[1670]: ++ awk -F : '{print $NF}' Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1528]: + grep manual Feb 23 15:42:39 ip-10-0-136-68 NetworkManager[1149]: [1677166959.8393] dhcp4 (ens5): activation: beginning transaction (timeout in 45 seconds) Feb 23 15:42:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00034|bridge|INFO|bridge br-ex: using datapath ID 0000925ecf0d0543 Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1634]: + BRIDGE_NAME=br-ex Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1634]: + '[' br-ex '!=' br-ex ']' Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1634]: + ovs-vsctl list interface ens5 Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1634]: + CONFIGURATION_FILE=/run/ofport_requests.br-ex Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1634]: + declare -A INTERFACES Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1634]: + '[' -f /run/ofport_requests.br-ex ']' Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1634]: + '[' '' ']' Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + extra_if_brex_args= Feb 23 15:42:39 ip-10-0-136-68 NetworkManager[1149]: [1677166959.8393] dhcp4 (ens5): state changed no lease Feb 23 15:42:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00035|connmgr|INFO|br-ex: added service controller "punix:/var/run/openvswitch/br-ex.mgmt" Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1675]: ++ get_interface_ofport_request Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1675]: ++ declare -A ofport_requests Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1533]: ++ ip -j a show dev ens5 Feb 23 15:42:39 ip-10-0-136-68 NetworkManager[1149]: [1677166959.8570] device (ens5): Activation: starting connection 'ovs-if-phys0' (1fe530aa-6f97-4a74-9f19-bba3f65a0596) Feb 23 15:42:40 ip-10-0-136-68 ovs-vsctl[1678]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set Interface ens5 ofport_request=1 Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1676]: +++ ovs-vsctl get Interface ens5 ofport Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1534]: ++ jq '.[0].addr_info | map(. 
| select(.family == "inet")) | length' Feb 23 15:42:39 ip-10-0-136-68 NetworkManager[1149]: [1677166959.8578] device (ens5): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1675]: ++ local current_ofport=1 Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1675]: ++ '[' '' ']' Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1675]: ++ echo 1 Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1675]: ++ return Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + num_ipv4_addrs=1 Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' 1 -gt 0 ']' Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + extra_if_brex_args+='ipv4.may-fail no ' Feb 23 15:42:39 ip-10-0-136-68 NetworkManager[1149]: [1677166959.8580] device (ens5): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1634]: + INTERFACES[$INTERFACE_NAME]=1 Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1634]: + ovs-vsctl set Interface ens5 ofport_request=1 Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1634]: + declare -p INTERFACES Feb 23 15:42:40 ip-10-0-136-68 chronyd[960]: Source 169.254.169.123 offline Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1536]: ++ ip -j a show dev ens5 Feb 23 15:42:39 ip-10-0-136-68 NetworkManager[1149]: [1677166959.8585] device (ens5): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1679]: + [[ OVNKubernetes != \O\V\N\K\u\b\e\r\n\e\t\e\s ]] Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1679]: + INTERFACE_NAME=br-ex Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1679]: + OPERATION=pre-up Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1679]: + '[' pre-up '!=' pre-up ']' Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1537]: ++ jq '.[0].addr_info | map(. 
| select(.family == "inet6" and .scope != "link")) | length' Feb 23 15:42:39 ip-10-0-136-68 NetworkManager[1149]: [1677166959.8590] device (ens5): Activation: connection 'ovs-if-phys0' enslaved, continuing activation Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1682]: ++ nmcli -t -f device,type,uuid conn Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + num_ip6_addrs=0 Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' 0 -gt 0 ']' Feb 23 15:42:39 ip-10-0-136-68 NetworkManager[1149]: [1677166959.8593] device (ens5): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed') Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1683]: ++ awk -F : '{if($1=="br-ex" && $2!~/^ovs*/) print $NF}' Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1538]: ++ nmcli --get-values ipv4.dhcp-client-id conn show eb99b8bd-8e1f-3f41-845b-962703e428f7 Feb 23 15:42:39 ip-10-0-136-68 systemd-udevd[1614]: Using default interface naming scheme 'rhel-8.0'. Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1679]: + INTERFACE_CONNECTION_UUID= Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1679]: + '[' '' == '' ']' Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1679]: + exit 0 Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + dhcp_client_id= Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -n '' ']' Feb 23 15:42:39 ip-10-0-136-68 systemd-udevd[1614]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. 
Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1714]: + [[ OVNKubernetes != \O\V\N\K\u\b\e\r\n\e\t\e\s ]] Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1714]: + INTERFACE_NAME=ens5 Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1714]: + OPERATION=pre-up Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1714]: + '[' pre-up '!=' pre-up ']' Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1542]: ++ nmcli --get-values ipv6.dhcp-duid conn show eb99b8bd-8e1f-3f41-845b-962703e428f7 Feb 23 15:42:39 ip-10-0-136-68 systemd-udevd[1614]: Could not generate persistent MAC address for ovs-system: No such file or directory Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1717]: ++ nmcli -t -f device,type,uuid conn Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + dhcp6_client_id= Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -n '' ']' Feb 23 15:42:39 ip-10-0-136-68 NetworkManager[1149]: [1677166959.8983] device (br-ex): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed') Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1718]: ++ awk -F : '{if($1=="ens5" && $2!~/^ovs*/) print $NF}' Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1546]: ++ nmcli --get-values ipv6.addr-gen-mode conn show eb99b8bd-8e1f-3f41-845b-962703e428f7 Feb 23 15:42:39 ip-10-0-136-68 NetworkManager[1149]: [1677166959.8986] device (br-ex): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed') Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1714]: + INTERFACE_CONNECTION_UUID=1fe530aa-6f97-4a74-9f19-bba3f65a0596 Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1714]: + '[' 1fe530aa-6f97-4a74-9f19-bba3f65a0596 == '' ']' Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + ipv6_addr_gen_mode=stable-privacy Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -n stable-privacy ']' Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + extra_if_brex_args+='ipv6.addr-gen-mode stable-privacy ' Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + 
add_nm_conn type ovs-interface slave-type ovs-port conn.interface br-ex master f435d5b6-43a2-4866-9289-a2d3cf775ea9 con-name ovs-if-br-ex 802-3-ethernet.mtu 9001 802-3-ethernet.cloned-mac-address 02:ea:92:f9:d3:f3 ipv4.route-metric 48 ipv6.route-metric 48 ipv4.may-fail no ipv6.addr-gen-mode stable-privacy Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + nmcli c add type ovs-interface slave-type ovs-port conn.interface br-ex master f435d5b6-43a2-4866-9289-a2d3cf775ea9 con-name ovs-if-br-ex 802-3-ethernet.mtu 9001 802-3-ethernet.cloned-mac-address 02:ea:92:f9:d3:f3 ipv4.route-metric 48 ipv6.route-metric 48 ipv4.may-fail no ipv6.addr-gen-mode stable-privacy connection.autoconnect no Feb 23 15:42:40 ip-10-0-136-68 kernel: device ens5 left promiscuous mode Feb 23 15:42:40 ip-10-0-136-68 kernel: device ovs-system left promiscuous mode Feb 23 15:42:40 ip-10-0-136-68 kernel: device ovs-system entered promiscuous mode Feb 23 15:42:40 ip-10-0-136-68 kernel: No such timeout policy "ovs_test_tp" Feb 23 15:42:40 ip-10-0-136-68 kernel: Failed to associated timeout policy `ovs_test_tp' Feb 23 15:42:40 ip-10-0-136-68 kernel: device ens5 entered promiscuous mode Feb 23 15:42:39 ip-10-0-136-68 NetworkManager[1149]: [1677166959.8989] device (br-ex): Activation: successful, device activated. Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1727]: ++ nmcli -t -f connection.slave-type conn show 1fe530aa-6f97-4a74-9f19-bba3f65a0596 Feb 23 15:42:40 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00036|bridge|INFO|bridge br-ex: deleted interface ens5 on port 1 Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1550]: Connection 'ovs-if-br-ex' (1819ac43-68a9-4976-bc28-41abb9d46380) successfully added. 
Feb 23 15:42:40 ip-10-0-136-68 NetworkManager[1149]: [1677166960.0278] device (ens5): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed') Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1728]: ++ awk -F : '{print $NF}' Feb 23 15:42:40 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00037|dpif_netlink|INFO|Datapath dispatch mode: per-cpu Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + configure_driver_options ens5 Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + intf=ens5 Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' '!' -f /sys/class/net/ens5/device/uevent ']' Feb 23 15:42:40 ip-10-0-136-68 NetworkManager[1149]: [1677166960.0281] device (ens5): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed') Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1714]: + INTERFACE_OVS_SLAVE_TYPE=ovs-port Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1714]: + '[' ovs-port '!=' ovs-port ']' Feb 23 15:42:40 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00038|ofproto_dpif|INFO|system@ovs-system: Datapath supports recirculation Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1555]: ++ cat /sys/class/net/ens5/device/uevent Feb 23 15:42:40 ip-10-0-136-68 NetworkManager[1149]: [1677166960.0284] device (ens5): Activation: successful, device activated. 
Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1755]: ++ nmcli -t -f connection.master conn show 1fe530aa-6f97-4a74-9f19-bba3f65a0596 Feb 23 15:42:40 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00039|ofproto_dpif|INFO|system@ovs-system: VLAN header stack length probed as 2 Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1556]: ++ grep DRIVER Feb 23 15:42:40 ip-10-0-136-68 NetworkManager[1149]: [1677166960.0472] device (br-ex): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed') Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1756]: ++ awk -F : '{print $NF}' Feb 23 15:42:40 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00040|ofproto_dpif|INFO|system@ovs-system: MPLS label stack length probed as 3 Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1557]: ++ awk -F = '{print $2}' Feb 23 15:42:40 ip-10-0-136-68 NetworkManager[1149]: [1677166960.0474] device (br-ex): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed') Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1714]: + PORT=4152ced3-7759-4837-844f-8e1195509a74 Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1714]: + '[' 4152ced3-7759-4837-844f-8e1195509a74 == '' ']' Feb 23 15:42:40 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00041|ofproto_dpif|INFO|system@ovs-system: Datapath supports truncate action Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + driver=ena Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + echo 'Driver name is' ena Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: Driver name is ena Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' ena = vmxnet3 ']' Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/ovnk/extra_bridge ']' Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' '!' -f /etc/ovnk/extra_bridge ']' Feb 23 15:42:40 ip-10-0-136-68 NetworkManager[1149]: [1677166960.0476] device (br-ex): Activation: successful, device activated. 
Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1762]: ++ nmcli -t -f device,type,uuid conn Feb 23 15:42:40 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00042|ofproto_dpif|INFO|system@ovs-system: Datapath supports unique flow ids Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1558]: + nmcli connection show br-ex1 Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1558]: + nmcli connection show ovs-if-phys1 Feb 23 15:42:40 ip-10-0-136-68 NetworkManager[1149]: [1677166960.0649] audit: op="connection-update" uuid="6d3cf1fd-1c82-447c-b0c3-a48401391b29" name="br-ex" args="connection.autoconnect,connection.timestamp" pid=1690 uid=0 result="success" Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1763]: ++ awk -F : '{if( ($1=="4152ced3-7759-4837-844f-8e1195509a74" || $3=="4152ced3-7759-4837-844f-8e1195509a74") && $2~/^ovs*/) print $NF}' Feb 23 15:42:40 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00043|ofproto_dpif|INFO|system@ovs-system: Datapath supports clone action Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + ovs-vsctl --timeout=30 --if-exists del-br br0 Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + connections=(br-ex ovs-if-phys0) Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/ovnk/extra_bridge ']' Feb 23 15:42:40 ip-10-0-136-68 systemd[1]: Starting Generate console-login-helper-messages issue snippet... 
Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1714]: + PORT_CONNECTION_UUID=4152ced3-7759-4837-844f-8e1195509a74 Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1714]: + '[' 4152ced3-7759-4837-844f-8e1195509a74 == '' ']' Feb 23 15:42:40 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00044|ofproto_dpif|INFO|system@ovs-system: Max sample nesting level probed as 10 Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1568]: ++ nmcli -g NAME c Feb 23 15:42:40 ip-10-0-136-68 NetworkManager[1149]: [1677166960.1117] agent-manager: agent[b5d43afbc6271656,:1.87/nmcli-connect/0]: agent registered Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1768]: ++ nmcli -t -f connection.slave-type conn show 4152ced3-7759-4837-844f-8e1195509a74 Feb 23 15:42:40 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00045|ofproto_dpif|INFO|system@ovs-system: Datapath supports eventmask in conntrack action Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + IFS= Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + read -r connection Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + [[ Wired connection 1 == *\-\s\l\a\v\e\-\o\v\s\-\c\l\o\n\e ]] Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + IFS= Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + read -r connection Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + [[ br-ex == *\-\s\l\a\v\e\-\o\v\s\-\c\l\o\n\e ]] Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + IFS= Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + read -r connection Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + [[ ovs-if-br-ex == *\-\s\l\a\v\e\-\o\v\s\-\c\l\o\n\e ]] Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + IFS= Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + read -r connection Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + [[ ovs-if-phys0 == *\-\s\l\a\v\e\-\o\v\s\-\c\l\o\n\e ]] Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + IFS= Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + read -r connection 
Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + [[ ovs-port-br-ex == *\-\s\l\a\v\e\-\o\v\s\-\c\l\o\n\e ]] Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + IFS= Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + read -r connection Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + [[ ovs-port-phys0 == *\-\s\l\a\v\e\-\o\v\s\-\c\l\o\n\e ]] Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + IFS= Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + read -r connection Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + connections+=(ovs-if-br-ex) Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/ovnk/extra_bridge ']' Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + activate_nm_connections br-ex ovs-if-phys0 ovs-if-br-ex Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + connections=("$@") Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + local connections Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + for conn in "${connections[@]}" Feb 23 15:42:40 ip-10-0-136-68 NetworkManager[1149]: [1677166960.1122] device (ens5): state change: ip-check -> deactivating (reason 'new-activation', sys-iface-state: 'managed') Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1769]: ++ awk -F : '{print $NF}' Feb 23 15:42:40 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00046|ofproto_dpif|INFO|system@ovs-system: Datapath supports ct_clear action Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1573]: ++ nmcli -g connection.slave-type connection show br-ex Feb 23 15:42:40 ip-10-0-136-68 NetworkManager[1149]: [1677166960.1124] manager: NetworkManager state is now CONNECTED_LOCAL Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1714]: + PORT_OVS_SLAVE_TYPE=ovs-bridge Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1714]: + '[' ovs-bridge '!=' ovs-bridge ']' Feb 23 15:42:40 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00047|ofproto_dpif|INFO|system@ovs-system: Max dp_hash algorithm probed to be 0 Feb 23 15:42:40 ip-10-0-136-68 
configure-ovs.sh[1244]: + local slave_type= Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' '' = team ']' Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' '' = bond ']' Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + for conn in "${connections[@]}" Feb 23 15:42:40 ip-10-0-136-68 NetworkManager[1149]: [1677166960.1125] device (ens5): releasing ovs interface ens5 Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1774]: ++ nmcli -t -f connection.master conn show 4152ced3-7759-4837-844f-8e1195509a74 Feb 23 15:42:40 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00048|ofproto_dpif|INFO|system@ovs-system: Datapath supports check_pkt_len action Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1577]: ++ nmcli -g connection.slave-type connection show ovs-if-phys0 Feb 23 15:42:40 ip-10-0-136-68 NetworkManager[1149]: [1677166960.1125] device (ens5): released from master device ens5 Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1775]: ++ awk -F : '{print $NF}' Feb 23 15:42:40 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00049|ofproto_dpif|INFO|system@ovs-system: Datapath supports timeout policy in conntrack action Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + local slave_type=ovs-port Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' ovs-port = team ']' Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' ovs-port = bond ']' Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + for conn in "${connections[@]}" Feb 23 15:42:40 ip-10-0-136-68 NetworkManager[1149]: [1677166960.1128] device (ens5): disconnecting for new activation request. 
Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1714]: + BRIDGE_NAME=br-ex Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1714]: + '[' br-ex '!=' br-ex ']' Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1714]: + ovs-vsctl list interface ens5 Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1714]: + CONFIGURATION_FILE=/run/ofport_requests.br-ex Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1714]: + declare -A INTERFACES Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1714]: + '[' -f /run/ofport_requests.br-ex ']' Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1714]: + echo 'Sourcing configuration file '\''/run/ofport_requests.br-ex'\'' with contents:' Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1714]: Sourcing configuration file '/run/ofport_requests.br-ex' with contents: Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1714]: + cat /run/ofport_requests.br-ex Feb 23 15:42:40 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00050|ofproto_dpif|INFO|system@ovs-system: Datapath supports ct_zero_snat Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1581]: ++ nmcli -g connection.slave-type connection show ovs-if-br-ex Feb 23 15:42:40 ip-10-0-136-68 NetworkManager[1149]: [1677166960.1128] audit: op="connection-activate" uuid="1fe530aa-6f97-4a74-9f19-bba3f65a0596" name="ovs-if-phys0" pid=1725 uid=0 result="success" Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1780]: declare -A INTERFACES=([ens5]="1" ) Feb 23 15:42:40 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00051|ofproto_dpif|INFO|system@ovs-system: Datapath supports add_mpls action Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + local slave_type=ovs-port Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' ovs-port = team ']' Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' ovs-port = bond ']' Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + declare -A master_interfaces Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + for conn in "${connections[@]}" Feb 23 15:42:40 ip-10-0-136-68 NetworkManager[1149]: 
[1677166960.1166] device (ens5): state change: deactivating -> disconnected (reason 'new-activation', sys-iface-state: 'managed') Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1714]: + source /run/ofport_requests.br-ex Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1714]: ++ INTERFACES=([ens5]="1") Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1714]: ++ declare -A INTERFACES Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1714]: + '[' a ']' Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1714]: + ovs-vsctl set Interface ens5 ofport_request=1 Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1714]: + declare -p INTERFACES Feb 23 15:42:40 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00052|ofproto_dpif|INFO|system@ovs-system: Datapath supports ct_state Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1585]: ++ nmcli -g connection.slave-type connection show br-ex Feb 23 15:42:40 ip-10-0-136-68 NetworkManager[1149]: [1677166960.1173] device (ens5): Activation: starting connection 'ovs-if-phys0' (1fe530aa-6f97-4a74-9f19-bba3f65a0596) Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1806]: Error: Device '' not found. 
Feb 23 15:42:40 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00053|ofproto_dpif|INFO|system@ovs-system: Datapath supports ct_zone Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + local slave_type= Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + local is_slave=false Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' '' = team ']' Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' '' = bond ']' Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + local master_interface Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + false Feb 23 15:42:40 ip-10-0-136-68 NetworkManager[1149]: [1677166960.1176] device (ens5): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Feb 23 15:42:40 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00054|ofproto_dpif|INFO|system@ovs-system: Datapath supports ct_mark Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1589]: ++ nmcli -g GENERAL.STATE conn show br-ex Feb 23 15:42:40 ip-10-0-136-68 NetworkManager[1149]: [1677166960.1178] manager: NetworkManager state is now CONNECTING Feb 23 15:42:40 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00055|ofproto_dpif|INFO|system@ovs-system: Datapath supports ct_label Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + local active_state= Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' '' == activated ']' Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + for i in {1..10} Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + echo 'Attempt 1 to bring up connection br-ex' Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: Attempt 1 to bring up connection br-ex Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + nmcli conn up br-ex Feb 23 15:42:40 ip-10-0-136-68 NetworkManager[1149]: [1677166960.1180] device (ens5): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Feb 23 15:42:40 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00056|ofproto_dpif|INFO|system@ovs-system: Datapath supports ct_state_nat 
Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1593]: Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/2) Feb 23 15:42:40 ip-10-0-136-68 NetworkManager[1149]: [1677166960.1185] device (ens5): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Feb 23 15:42:40 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00057|ofproto_dpif|INFO|system@ovs-system: Datapath supports ct_orig_tuple Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + s=0 Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + break Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' 0 -eq 0 ']' Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + echo 'Brought up connection br-ex successfully' Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: Brought up connection br-ex successfully Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + false Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + nmcli c mod br-ex connection.autoconnect yes Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + for conn in "${connections[@]}" Feb 23 15:42:40 ip-10-0-136-68 NetworkManager[1149]: [1677166960.1190] device (ens5): Activation: connection 'ovs-if-phys0' enslaved, continuing activation Feb 23 15:42:40 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00058|ofproto_dpif|INFO|system@ovs-system: Datapath supports ct_orig_tuple6 Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1707]: ++ nmcli -g connection.slave-type connection show ovs-if-phys0 Feb 23 15:42:40 ip-10-0-136-68 NetworkManager[1149]: [1677166960.1193] device (ens5): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed') Feb 23 15:42:40 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00059|ofproto_dpif|INFO|system@ovs-system: Datapath does not support IPv6 ND Extensions Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + local slave_type=ovs-port Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + local is_slave=false Feb 23 15:42:40 
ip-10-0-136-68 configure-ovs.sh[1244]: + '[' ovs-port = team ']' Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' ovs-port = bond ']' Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + local master_interface Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + false Feb 23 15:42:40 ip-10-0-136-68 systemd-udevd[1744]: Using default interface naming scheme 'rhel-8.0'. Feb 23 15:42:40 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00060|ofproto_dpif_upcall|INFO|Overriding n-handler-threads to 4, setting n-revalidator-threads to 2 Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1715]: ++ nmcli -g GENERAL.STATE conn show ovs-if-phys0 Feb 23 15:42:40 ip-10-0-136-68 systemd-udevd[1744]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. Feb 23 15:42:40 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00061|ofproto_dpif_upcall|INFO|Starting 6 threads Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + local active_state=activating Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' activating == activated ']' Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + for i in {1..10} Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + echo 'Attempt 1 to bring up connection ovs-if-phys0' Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: Attempt 1 to bring up connection ovs-if-phys0 Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + nmcli conn up ovs-if-phys0 Feb 23 15:42:40 ip-10-0-136-68 systemd-udevd[1744]: Could not generate persistent MAC address for ovs-system: No such file or directory Feb 23 15:42:40 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00062|bridge|INFO|bridge br-ex: added interface ens5 on port 1 Feb 23 15:42:40 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00063|bridge|INFO|bridge br-ex: using datapath ID 0000c2d0ca8ed340 Feb 23 15:42:40 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00064|connmgr|INFO|br-ex: added service controller "punix:/var/run/openvswitch/br-ex.mgmt" Feb 23 15:42:40 ip-10-0-136-68 
ovs-vsctl[1781]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set Interface ens5 ofport_request=1 Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1864]: + [[ OVNKubernetes != \O\V\N\K\u\b\e\r\n\e\t\e\s ]] Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1864]: + INTERFACE_NAME=ens5 Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1864]: + OPERATION=pre-up Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1864]: + '[' pre-up '!=' pre-up ']' Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1866]: ++ nmcli -t -f device,type,uuid conn Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1867]: ++ awk -F : '{if($1=="ens5" && $2!~/^ovs*/) print $NF}' Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1864]: + INTERFACE_CONNECTION_UUID=1fe530aa-6f97-4a74-9f19-bba3f65a0596 Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1864]: + '[' 1fe530aa-6f97-4a74-9f19-bba3f65a0596 == '' ']' Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1872]: ++ nmcli -t -f connection.slave-type conn show 1fe530aa-6f97-4a74-9f19-bba3f65a0596 Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1873]: ++ awk -F : '{print $NF}' Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1864]: + INTERFACE_OVS_SLAVE_TYPE=ovs-port Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1864]: + '[' ovs-port '!=' ovs-port ']' Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1878]: ++ nmcli -t -f connection.master conn show 1fe530aa-6f97-4a74-9f19-bba3f65a0596 Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1879]: ++ awk -F : '{print $NF}' Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1864]: + PORT=4152ced3-7759-4837-844f-8e1195509a74 Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1864]: + '[' 4152ced3-7759-4837-844f-8e1195509a74 == '' ']' Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1884]: ++ nmcli -t -f device,type,uuid conn Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1885]: ++ awk -F : '{if( ($1=="4152ced3-7759-4837-844f-8e1195509a74" || $3=="4152ced3-7759-4837-844f-8e1195509a74") && $2~/^ovs*/) print $NF}' Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1864]: + 
PORT_CONNECTION_UUID=4152ced3-7759-4837-844f-8e1195509a74 Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1864]: + '[' 4152ced3-7759-4837-844f-8e1195509a74 == '' ']' Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1890]: ++ nmcli -t -f connection.slave-type conn show 4152ced3-7759-4837-844f-8e1195509a74 Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1891]: ++ awk -F : '{print $NF}' Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1864]: + PORT_OVS_SLAVE_TYPE=ovs-bridge Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1864]: + '[' ovs-bridge '!=' ovs-bridge ']' Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1896]: ++ nmcli -t -f connection.master conn show 4152ced3-7759-4837-844f-8e1195509a74 Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1897]: ++ awk -F : '{print $NF}' Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1864]: + BRIDGE_NAME=br-ex Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1864]: + '[' br-ex '!=' br-ex ']' Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1864]: + ovs-vsctl list interface ens5 Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1864]: + CONFIGURATION_FILE=/run/ofport_requests.br-ex Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1864]: + declare -A INTERFACES Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1864]: + '[' -f /run/ofport_requests.br-ex ']' Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1864]: + echo 'Sourcing configuration file '\''/run/ofport_requests.br-ex'\'' with contents:' Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1864]: Sourcing configuration file '/run/ofport_requests.br-ex' with contents: Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1864]: + cat /run/ofport_requests.br-ex Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1902]: declare -A INTERFACES=([ens5]="1" ) Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1864]: + source /run/ofport_requests.br-ex Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1864]: ++ INTERFACES=([ens5]="1") Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1864]: ++ declare -A INTERFACES Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1864]: + 
'[' a ']' Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1864]: + ovs-vsctl set Interface ens5 ofport_request=1 Feb 23 15:42:40 ip-10-0-136-68 ovs-vsctl[1903]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set Interface ens5 ofport_request=1 Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1864]: + declare -p INTERFACES Feb 23 15:42:40 ip-10-0-136-68 NetworkManager[1149]: [1677166960.5419] device (ens5): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed') Feb 23 15:42:40 ip-10-0-136-68 NetworkManager[1149]: [1677166960.5421] device (ens5): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed') Feb 23 15:42:40 ip-10-0-136-68 NetworkManager[1149]: [1677166960.5423] manager: NetworkManager state is now CONNECTED_LOCAL Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1725]: Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/6) Feb 23 15:42:40 ip-10-0-136-68 NetworkManager[1149]: [1677166960.5426] device (ens5): Activation: successful, device activated. 
Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + s=0 Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + break Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' 0 -eq 0 ']' Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + echo 'Brought up connection ovs-if-phys0 successfully' Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: Brought up connection ovs-if-phys0 successfully Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + false Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + nmcli c mod ovs-if-phys0 connection.autoconnect yes Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + for conn in "${connections[@]}" Feb 23 15:42:40 ip-10-0-136-68 NetworkManager[1149]: [1677166960.5609] audit: op="connection-update" uuid="1fe530aa-6f97-4a74-9f19-bba3f65a0596" name="ovs-if-phys0" args="connection.autoconnect,connection.timestamp" pid=1906 uid=0 result="success" Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1916]: ++ nmcli -g connection.slave-type connection show ovs-if-br-ex Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + local slave_type=ovs-port Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + local is_slave=false Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' ovs-port = team ']' Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' ovs-port = bond ']' Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + local master_interface Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + false Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1922]: ++ nmcli -g GENERAL.STATE conn show ovs-if-br-ex Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + local active_state= Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' '' == activated ']' Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + for i in {1..10} Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + echo 'Attempt 1 to bring up connection ovs-if-br-ex' Feb 23 15:42:40 ip-10-0-136-68 
configure-ovs.sh[1244]: Attempt 1 to bring up connection ovs-if-br-ex Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + nmcli conn up ovs-if-br-ex Feb 23 15:42:40 ip-10-0-136-68 kernel: device br-ex entered promiscuous mode Feb 23 15:42:40 ip-10-0-136-68 NetworkManager[1149]: [1677166960.6118] agent-manager: agent[b2b6be4234dee96e,:1.108/nmcli-connect/0]: agent registered Feb 23 15:42:40 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00065|netdev|WARN|failed to set MTU for network device br-ex: No such device Feb 23 15:42:40 ip-10-0-136-68 NetworkManager[1149]: [1677166960.6124] device (br-ex): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external') Feb 23 15:42:40 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00066|bridge|INFO|bridge br-ex: added interface br-ex on port 65534 Feb 23 15:42:40 ip-10-0-136-68 NetworkManager[1149]: [1677166960.6127] device (br-ex): state change: unavailable -> disconnected (reason 'user-requested', sys-iface-state: 'managed') Feb 23 15:42:40 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00067|bridge|INFO|bridge br-ex: using datapath ID 000002ea92f9d3f3 Feb 23 15:42:40 ip-10-0-136-68 NetworkManager[1149]: [1677166960.6130] device (br-ex): Activation: starting connection 'ovs-if-br-ex' (1819ac43-68a9-4976-bc28-41abb9d46380) Feb 23 15:42:40 ip-10-0-136-68 NetworkManager[1149]: [1677166960.6130] audit: op="connection-activate" uuid="1819ac43-68a9-4976-bc28-41abb9d46380" name="ovs-if-br-ex" pid=1937 uid=0 result="success" Feb 23 15:42:40 ip-10-0-136-68 NetworkManager[1149]: [1677166960.6130] device (br-ex): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Feb 23 15:42:40 ip-10-0-136-68 NetworkManager[1149]: [1677166960.6132] manager: NetworkManager state is now CONNECTING Feb 23 15:42:40 ip-10-0-136-68 NetworkManager[1149]: [1677166960.6134] device (br-ex): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Feb 23 15:42:40 ip-10-0-136-68 NetworkManager[1149]: 
[1677166960.6135] device (br-ex): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Feb 23 15:42:40 ip-10-0-136-68 NetworkManager[1149]: [1677166960.6138] device (br-ex): Activation: connection 'ovs-if-br-ex' enslaved, continuing activation Feb 23 15:42:40 ip-10-0-136-68 systemd-udevd[1946]: Using default interface naming scheme 'rhel-8.0'. Feb 23 15:42:40 ip-10-0-136-68 systemd-udevd[1946]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. Feb 23 15:42:40 ip-10-0-136-68 NetworkManager[1149]: [1677166960.6196] device (br-ex): set-hw-addr: set-cloned MAC address to 02:EA:92:F9:D3:F3 (02:EA:92:F9:D3:F3) Feb 23 15:42:40 ip-10-0-136-68 NetworkManager[1149]: [1677166960.6197] device (br-ex): carrier: link connected Feb 23 15:42:40 ip-10-0-136-68 NetworkManager[1149]: [1677166960.6202] dhcp4 (br-ex): activation: beginning transaction (timeout in 45 seconds) Feb 23 15:42:40 ip-10-0-136-68 NetworkManager[1149]: [1677166960.6217] dhcp4 (br-ex): state changed new lease, address=10.0.136.68 Feb 23 15:42:40 ip-10-0-136-68 NetworkManager[1149]: [1677166960.6220] policy: set 'ovs-if-br-ex' (br-ex) as default for IPv4 routing and DNS Feb 23 15:42:40 ip-10-0-136-68 NetworkManager[1149]: [1677166960.6261] device (br-ex): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed') Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1964]: + [[ OVNKubernetes != \O\V\N\K\u\b\e\r\n\e\t\e\s ]] Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1964]: + INTERFACE_NAME=br-ex Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1964]: + OPERATION=pre-up Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1964]: + '[' pre-up '!=' pre-up ']' Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1966]: ++ nmcli -t -f device,type,uuid conn Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1967]: ++ awk -F : '{if($1=="br-ex" && $2!~/^ovs*/) print $NF}' Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1964]: + INTERFACE_CONNECTION_UUID= Feb 23 15:42:40 
ip-10-0-136-68 nm-dispatcher[1964]: + '[' '' == '' ']' Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[1964]: + exit 0 Feb 23 15:42:40 ip-10-0-136-68 NetworkManager[1149]: [1677166960.6644] device (br-ex): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed') Feb 23 15:42:40 ip-10-0-136-68 NetworkManager[1149]: [1677166960.6645] device (br-ex): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed') Feb 23 15:42:40 ip-10-0-136-68 NetworkManager[1149]: [1677166960.6648] manager: NetworkManager state is now CONNECTED_SITE Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1937]: Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/7) Feb 23 15:42:40 ip-10-0-136-68 NetworkManager[1149]: [1677166960.6650] device (br-ex): Activation: successful, device activated. Feb 23 15:42:40 ip-10-0-136-68 NetworkManager[1149]: [1677166960.6665] manager: NetworkManager state is now CONNECTED_GLOBAL Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + s=0 Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + break Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' 0 -eq 0 ']' Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + echo 'Brought up connection ovs-if-br-ex successfully' Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: Brought up connection ovs-if-br-ex successfully Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + false Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + nmcli c mod ovs-if-br-ex connection.autoconnect yes Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + try_to_bind_ipv6_address Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + retries=60 Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + [[ 60 -eq 0 ]] Feb 23 15:42:40 ip-10-0-136-68 NetworkManager[1149]: [1677166960.6843] audit: op="connection-update" uuid="1819ac43-68a9-4976-bc28-41abb9d46380" name="ovs-if-br-ex" 
args="connection.autoconnect,connection.timestamp" pid=1971 uid=0 result="success" Feb 23 15:42:40 ip-10-0-136-68 chronyd[960]: Source 169.254.169.123 online Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1986]: ++ ip -6 -j addr Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1987]: ++ jq -r 'first(.[] | select(.ifname=="br-ex") | .addr_info[] | select(.scope=="global") | .local)' Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + ip= Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + [[ '' == '' ]] Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + echo 'No ipv6 ip to bind was found' Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: No ipv6 ip to bind was found Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + break Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + [[ 60 -eq 0 ]] Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + set_nm_conn_files Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' /etc/NetworkManager/system-connections '!=' /run/NetworkManager/system-connections ']' Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + update_nm_conn_conf_files br-ex phys0 Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + update_nm_conn_files_base /etc/NetworkManager/system-connections br-ex phys0 Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + base_path=/etc/NetworkManager/system-connections Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + bridge_name=br-ex Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + port_name=phys0 Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + ovs_port=ovs-port-br-ex Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + ovs_interface=ovs-if-br-ex Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + default_port_name=ovs-port-phys0 Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + bridge_interface_name=ovs-if-phys0 Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + MANAGED_NM_CONN_FILES=($(echo 
"${base_path}"/{"$bridge_name","$ovs_interface","$ovs_port","$bridge_interface_name","$default_port_name"}{,.nmconnection})) Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[2005]: ++ echo /etc/NetworkManager/system-connections/br-ex /etc/NetworkManager/system-connections/br-ex.nmconnection /etc/NetworkManager/system-connections/ovs-if-br-ex /etc/NetworkManager/system-connections/ovs-if-br-ex.nmconnection /etc/NetworkManager/system-connections/ovs-port-br-ex /etc/NetworkManager/system-connections/ovs-port-br-ex.nmconnection /etc/NetworkManager/system-connections/ovs-if-phys0 /etc/NetworkManager/system-connections/ovs-if-phys0.nmconnection /etc/NetworkManager/system-connections/ovs-port-phys0 /etc/NetworkManager/system-connections/ovs-port-phys0.nmconnection Feb 23 15:42:40 ip-10-0-136-68 NetworkManager[1149]: [1677166960.7667] audit: op="connections-reload" pid=2061 uid=0 result="success" Feb 23 15:42:40 ip-10-0-136-68 nm-dispatcher[2006]: Error: Device '' not found. Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + shopt -s nullglob Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + MANAGED_NM_CONN_FILES+=(${base_path}/*${MANAGED_NM_CONN_SUFFIX}.nmconnection ${base_path}/*${MANAGED_NM_CONN_SUFFIX}) Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + shopt -u nullglob Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + copy_nm_conn_files /run/NetworkManager/system-connections Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + local dst_path=/run/NetworkManager/system-connections Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + for src in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[2007]: ++ dirname /etc/NetworkManager/system-connections/br-ex Feb 23 15:42:40 ip-10-0-136-68 systemd[1]: ovs-configuration.service: Succeeded. 
Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + src_path=/etc/NetworkManager/system-connections Feb 23 15:42:40 ip-10-0-136-68 mco-hostname[2078]: waiting for non-localhost hostname to be assigned Feb 23 15:42:40 ip-10-0-136-68 mco-hostname[2078]: node identified as ip-10-0-136-68 Feb 23 15:42:40 ip-10-0-136-68 systemd[1]: Started Configures OVS with proper host networking configuration. Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[2008]: ++ basename /etc/NetworkManager/system-connections/br-ex Feb 23 15:42:40 ip-10-0-136-68 systemd[1]: ovs-configuration.service: Consumed 914ms CPU time Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + file=br-ex Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/br-ex ']' Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + echo 'Skipping br-ex since it does not exist at source' Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: Skipping br-ex since it does not exist at source Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + for src in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 15:42:40 ip-10-0-136-68 systemd[1]: Starting Wait for a non-localhost hostname... Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[2011]: ++ dirname /etc/NetworkManager/system-connections/br-ex.nmconnection Feb 23 15:42:40 ip-10-0-136-68 systemd[1]: Started Wait for a non-localhost hostname. Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + src_path=/etc/NetworkManager/system-connections Feb 23 15:42:40 ip-10-0-136-68 systemd[1]: Reached target Network is Online. Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[2013]: ++ basename /etc/NetworkManager/system-connections/br-ex.nmconnection Feb 23 15:42:40 ip-10-0-136-68 systemd[1]: Starting Dynamically sets the system reserved for the kubelet... 
Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + file=br-ex.nmconnection Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/br-ex.nmconnection ']' Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' '!' -f /run/NetworkManager/system-connections/br-ex.nmconnection ']' Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + echo 'Copying configuration br-ex.nmconnection' Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: Copying configuration br-ex.nmconnection Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + cp /etc/NetworkManager/system-connections/br-ex.nmconnection /run/NetworkManager/system-connections/br-ex.nmconnection Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + for src in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 15:42:40 ip-10-0-136-68 systemd[1]: Starting NFS status monitor for NFSv2/3 locking.... Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[2015]: ++ dirname /etc/NetworkManager/system-connections/ovs-if-br-ex Feb 23 15:42:40 ip-10-0-136-68 systemd[1]: iscsi.service: Unit cannot be reloaded because it is inactive. Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + src_path=/etc/NetworkManager/system-connections Feb 23 15:42:40 ip-10-0-136-68 systemd[1]: Started Dynamically sets the system reserved for the kubelet. Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[2016]: ++ basename /etc/NetworkManager/system-connections/ovs-if-br-ex Feb 23 15:42:40 ip-10-0-136-68 systemd[1]: Starting Container Runtime Interface for OCI (CRI-O)... 
Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + file=ovs-if-br-ex
Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-br-ex ']'
Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + echo 'Skipping ovs-if-br-ex since it does not exist at source'
Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: Skipping ovs-if-br-ex since it does not exist at source
Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + for src in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[2017]: ++ dirname /etc/NetworkManager/system-connections/ovs-if-br-ex.nmconnection
Feb 23 15:42:40 ip-10-0-136-68 systemd[1]: Starting RPC Bind...
Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + src_path=/etc/NetworkManager/system-connections
Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[2018]: ++ basename /etc/NetworkManager/system-connections/ovs-if-br-ex.nmconnection
Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + file=ovs-if-br-ex.nmconnection
Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-br-ex.nmconnection ']'
Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' '!' -f /run/NetworkManager/system-connections/ovs-if-br-ex.nmconnection ']'
Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + echo 'Copying configuration ovs-if-br-ex.nmconnection'
Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: Copying configuration ovs-if-br-ex.nmconnection
Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + cp /etc/NetworkManager/system-connections/ovs-if-br-ex.nmconnection /run/NetworkManager/system-connections/ovs-if-br-ex.nmconnection
Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + for src in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[2020]: ++ dirname /etc/NetworkManager/system-connections/ovs-port-br-ex
Feb 23 15:42:40 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:42:40.911602282Z" level=info msg="Starting CRI-O, version: 1.25.2-6.rhaos4.12.git3c4e50c.el8, git: unknown(clean)"
Feb 23 15:42:40 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:42:40.911781185Z" level=info msg="Node configuration value for hugetlb cgroup is true"
Feb 23 15:42:40 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:42:40.915258061Z" level=info msg="Node configuration value for pid cgroup is true"
Feb 23 15:42:40 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:42:40.915344501Z" level=info msg="Node configuration value for memoryswap cgroup is true"
Feb 23 15:42:40 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:42:40.915353388Z" level=info msg="Node configuration value for cgroup v2 is false"
Feb 23 15:42:40 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:42:40.923243950Z" level=info msg="Node configuration value for systemd CollectMode is true"
Feb 23 15:42:40 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:42:40.929363082Z" level=info msg="Node configuration value for systemd AllowedCPUs is true"
Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + src_path=/etc/NetworkManager/system-connections
Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[2021]: ++ basename /etc/NetworkManager/system-connections/ovs-port-br-ex
Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + file=ovs-port-br-ex
Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-br-ex ']'
Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + echo 'Skipping ovs-port-br-ex since it does not exist at source'
Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: Skipping ovs-port-br-ex since it does not exist at source
Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + for src in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:40 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:42:40.935082519Z" level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[2022]: ++ dirname /etc/NetworkManager/system-connections/ovs-port-br-ex.nmconnection
Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + src_path=/etc/NetworkManager/system-connections
Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[2023]: ++ basename /etc/NetworkManager/system-connections/ovs-port-br-ex.nmconnection
Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + file=ovs-port-br-ex.nmconnection
Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-br-ex.nmconnection ']'
Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' '!' -f /run/NetworkManager/system-connections/ovs-port-br-ex.nmconnection ']'
Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + echo 'Copying configuration ovs-port-br-ex.nmconnection'
Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: Copying configuration ovs-port-br-ex.nmconnection
Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + cp /etc/NetworkManager/system-connections/ovs-port-br-ex.nmconnection /run/NetworkManager/system-connections/ovs-port-br-ex.nmconnection
Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + for src in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[2025]: ++ dirname /etc/NetworkManager/system-connections/ovs-if-phys0
Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + src_path=/etc/NetworkManager/system-connections
Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[2026]: ++ basename /etc/NetworkManager/system-connections/ovs-if-phys0
Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + file=ovs-if-phys0
Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-phys0 ']'
Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + echo 'Skipping ovs-if-phys0 since it does not exist at source'
Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: Skipping ovs-if-phys0 since it does not exist at source
Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + for src in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[2027]: ++ dirname /etc/NetworkManager/system-connections/ovs-if-phys0.nmconnection
Feb 23 15:42:40 ip-10-0-136-68 configure-ovs.sh[1244]: + src_path=/etc/NetworkManager/system-connections
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2028]: ++ basename /etc/NetworkManager/system-connections/ovs-if-phys0.nmconnection
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + file=ovs-if-phys0.nmconnection
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-phys0.nmconnection ']'
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' '!' -f /run/NetworkManager/system-connections/ovs-if-phys0.nmconnection ']'
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + echo 'Copying configuration ovs-if-phys0.nmconnection'
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: Copying configuration ovs-if-phys0.nmconnection
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + cp /etc/NetworkManager/system-connections/ovs-if-phys0.nmconnection /run/NetworkManager/system-connections/ovs-if-phys0.nmconnection
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + for src in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2030]: ++ dirname /etc/NetworkManager/system-connections/ovs-port-phys0
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + src_path=/etc/NetworkManager/system-connections
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2031]: ++ basename /etc/NetworkManager/system-connections/ovs-port-phys0
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + file=ovs-port-phys0
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-phys0 ']'
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + echo 'Skipping ovs-port-phys0 since it does not exist at source'
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: Skipping ovs-port-phys0 since it does not exist at source
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + for src in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2032]: ++ dirname /etc/NetworkManager/system-connections/ovs-port-phys0.nmconnection
Feb 23 15:42:41 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:42:41.028881999Z" level=info msg="Checkpoint/restore support disabled"
Feb 23 15:42:41 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:42:41.028909422Z" level=info msg="Using seccomp default profile when unspecified: true"
Feb 23 15:42:41 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:42:41.028916708Z" level=info msg="Using the internal default seccomp profile"
Feb 23 15:42:41 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:42:41.028924029Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
Feb 23 15:42:41 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:42:41.028930137Z" level=info msg="No blockio config file specified, blockio not configured"
Feb 23 15:42:41 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:42:41.028935968Z" level=info msg="RDT not available in the host system"
Feb 23 15:42:41 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:42:41.035501454Z" level=info msg="Conmon does support the --sync option"
Feb 23 15:42:41 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:42:41.035523566Z" level=info msg="Conmon does support the --log-global-size-max option"
Feb 23 15:42:41 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:42:41.038584472Z" level=info msg="Conmon does support the --sync option"
Feb 23 15:42:41 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:42:41.038600047Z" level=info msg="Conmon does support the --log-global-size-max option"
Feb 23 15:42:41 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:42:41.041348359Z" level=info msg="Updated default CNI network name to "
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + src_path=/etc/NetworkManager/system-connections
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2033]: ++ basename /etc/NetworkManager/system-connections/ovs-port-phys0.nmconnection
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + file=ovs-port-phys0.nmconnection
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-phys0.nmconnection ']'
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' '!' -f /run/NetworkManager/system-connections/ovs-port-phys0.nmconnection ']'
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + echo 'Copying configuration ovs-port-phys0.nmconnection'
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: Copying configuration ovs-port-phys0.nmconnection
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + cp /etc/NetworkManager/system-connections/ovs-port-phys0.nmconnection /run/NetworkManager/system-connections/ovs-port-phys0.nmconnection
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + rm_nm_conn_files
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/br-ex ']'
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/br-ex.nmconnection ']'
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + rm -f /etc/NetworkManager/system-connections/br-ex.nmconnection
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + echo 'Removed nmconnection file /etc/NetworkManager/system-connections/br-ex.nmconnection'
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: Removed nmconnection file /etc/NetworkManager/system-connections/br-ex.nmconnection
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + nm_config_changed=1
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-br-ex ']'
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-br-ex.nmconnection ']'
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + rm -f /etc/NetworkManager/system-connections/ovs-if-br-ex.nmconnection
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + echo 'Removed nmconnection file /etc/NetworkManager/system-connections/ovs-if-br-ex.nmconnection'
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: Removed nmconnection file /etc/NetworkManager/system-connections/ovs-if-br-ex.nmconnection
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + nm_config_changed=1
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-br-ex ']'
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-br-ex.nmconnection ']'
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + rm -f /etc/NetworkManager/system-connections/ovs-port-br-ex.nmconnection
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + echo 'Removed nmconnection file /etc/NetworkManager/system-connections/ovs-port-br-ex.nmconnection'
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: Removed nmconnection file /etc/NetworkManager/system-connections/ovs-port-br-ex.nmconnection
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + nm_config_changed=1
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:41 ip-10-0-136-68 systemd[1]: console-login-helper-messages-issuegen.service: Succeeded.
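The `+`/`++` lines above are `set -x` trace output from configure-ovs.sh as it stages managed NetworkManager connection files into `/run` and then removes them from `/etc`. A minimal sketch of the copy loop, reconstructed only from what the trace shows (the real script in the machine-config-operator may add more checks, e.g. comparing file contents before copying):

```shell
# Hedged reconstruction of the copy_nm_conn_files loop seen in the trace.
# MANAGED_NM_CONN_FILES, src_path, file and the echoed messages all appear
# in the log; everything else is an assumption.
set -euo pipefail

copy_nm_conn_files() {
  local dst_path=$1 src src_path file
  for src in "${MANAGED_NM_CONN_FILES[@]}"; do
    src_path=$(dirname "$src")
    file=$(basename "$src")
    if [ -f "$src_path/$file" ]; then
      # Only copy when the destination does not already have the file,
      # matching the "'[' '!' -f /run/... ']'" test in the trace.
      if [ ! -f "$dst_path/$file" ]; then
        echo "Copying configuration $file"
        cp "$src_path/$file" "$dst_path/$file"
      fi
    else
      echo "Skipping $file since it does not exist at source"
    fi
  done
}

# Demo with throwaway directories standing in for /etc and /run:
etc_dir=$(mktemp -d)
run_dir=$(mktemp -d)
touch "$etc_dir/br-ex.nmconnection"
MANAGED_NM_CONN_FILES=("$etc_dir/br-ex" "$etc_dir/br-ex.nmconnection")
copy_nm_conn_files "$run_dir"
```

Running the demo prints the same two message shapes seen in the log: a skip for the bare `br-ex` name (absent at the source) and a copy for `br-ex.nmconnection`.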
Feb 23 15:42:41 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:42:41.112222719Z" level=warning msg="Error encountered when checking whether cri-o should wipe images: open /var/lib/crio/version: no such file or directory"
Feb 23 15:42:41 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:42:41.116870939Z" level=info msg="Serving metrics on :9537 via HTTP"
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-phys0 ']'
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-phys0.nmconnection ']'
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + rm -f /etc/NetworkManager/system-connections/ovs-if-phys0.nmconnection
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + echo 'Removed nmconnection file /etc/NetworkManager/system-connections/ovs-if-phys0.nmconnection'
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: Removed nmconnection file /etc/NetworkManager/system-connections/ovs-if-phys0.nmconnection
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + nm_config_changed=1
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-phys0 ']'
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-phys0.nmconnection ']'
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + rm -f /etc/NetworkManager/system-connections/ovs-port-phys0.nmconnection
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + echo 'Removed nmconnection file /etc/NetworkManager/system-connections/ovs-port-phys0.nmconnection'
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: Removed nmconnection file /etc/NetworkManager/system-connections/ovs-port-phys0.nmconnection
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + nm_config_changed=1
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + update_nm_conn_conf_files br-ex1 phys1
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + update_nm_conn_files_base /etc/NetworkManager/system-connections br-ex1 phys1
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + base_path=/etc/NetworkManager/system-connections
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + bridge_name=br-ex1
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + port_name=phys1
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + ovs_port=ovs-port-br-ex1
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + ovs_interface=ovs-if-br-ex1
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + default_port_name=ovs-port-phys1
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + bridge_interface_name=ovs-if-phys1
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + MANAGED_NM_CONN_FILES=($(echo "${base_path}"/{"$bridge_name","$ovs_interface","$ovs_port","$bridge_interface_name","$default_port_name"}{,.nmconnection}))
Feb 23 15:42:41 ip-10-0-136-68 rpc.statd[2113]: Version 2.3.3 starting
Feb 23 15:42:41 ip-10-0-136-68 systemd[1]: Started Generate console-login-helper-messages issue snippet.
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2040]: ++ echo /etc/NetworkManager/system-connections/br-ex1 /etc/NetworkManager/system-connections/br-ex1.nmconnection /etc/NetworkManager/system-connections/ovs-if-br-ex1 /etc/NetworkManager/system-connections/ovs-if-br-ex1.nmconnection /etc/NetworkManager/system-connections/ovs-port-br-ex1 /etc/NetworkManager/system-connections/ovs-port-br-ex1.nmconnection /etc/NetworkManager/system-connections/ovs-if-phys1 /etc/NetworkManager/system-connections/ovs-if-phys1.nmconnection /etc/NetworkManager/system-connections/ovs-port-phys1 /etc/NetworkManager/system-connections/ovs-port-phys1.nmconnection
Feb 23 15:42:41 ip-10-0-136-68 rpc.statd[2113]: Flags: TI-RPC
Feb 23 15:42:41 ip-10-0-136-68 systemd[1]: console-login-helper-messages-issuegen.service: Consumed 15ms CPU time
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + shopt -s nullglob
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + MANAGED_NM_CONN_FILES+=(${base_path}/*${MANAGED_NM_CONN_SUFFIX}.nmconnection ${base_path}/*${MANAGED_NM_CONN_SUFFIX})
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + shopt -u nullglob
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + copy_nm_conn_files /run/NetworkManager/system-connections
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + local dst_path=/run/NetworkManager/system-connections
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + for src in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:41 ip-10-0-136-68 systemd[1]: Started RPC Bind.
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2041]: ++ dirname /etc/NetworkManager/system-connections/br-ex1
Feb 23 15:42:41 ip-10-0-136-68 systemd[1]: Started NFS status monitor for NFSv2/3 locking..
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + src_path=/etc/NetworkManager/system-connections
Feb 23 15:42:41 ip-10-0-136-68 systemd[1]: Started Container Runtime Interface for OCI (CRI-O).
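The `MANAGED_NM_CONN_FILES` assignment traced above builds the list of managed connection paths with bash brace expansion: `{name1,...,name5}{,.nmconnection}` pairs each connection name with its bare and `.nmconnection` variants, producing exactly the ten paths echoed by process 2040. A standalone sketch of the same expansion (variable names taken from the trace; this is an illustration, not the script itself):

```shell
# Reproduce the brace expansion from the trace for the br-ex1/phys1 bridge.
base_path=/etc/NetworkManager/system-connections
bridge_name=br-ex1
ovs_interface=ovs-if-br-ex1
ovs_port=ovs-port-br-ex1
bridge_interface_name=ovs-if-phys1
default_port_name=ovs-port-phys1

# {a,b,c,d,e}{,.nmconnection} expands to 5 x 2 = 10 paths, in the order
# a, a.nmconnection, b, b.nmconnection, ... as seen in the ++ echo line.
MANAGED_NM_CONN_FILES=($(echo "${base_path}"/{"$bridge_name","$ovs_interface","$ovs_port","$bridge_interface_name","$default_port_name"}{,.nmconnection}))

printf '%s\n' "${MANAGED_NM_CONN_FILES[@]}"
```

Note that quoting the variables inside the braces keeps each name a single word while still allowing the outer `{,.nmconnection}` expansion to apply.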
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2042]: ++ basename /etc/NetworkManager/system-connections/br-ex1
Feb 23 15:42:41 ip-10-0-136-68 systemd[1]: Starting Kubernetes Kubelet...
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + file=br-ex1
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/br-ex1 ']'
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + echo 'Skipping br-ex1 since it does not exist at source'
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: Skipping br-ex1 since it does not exist at source
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + for src in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2043]: ++ dirname /etc/NetworkManager/system-connections/br-ex1.nmconnection
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + src_path=/etc/NetworkManager/system-connections
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2044]: ++ basename /etc/NetworkManager/system-connections/br-ex1.nmconnection
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + file=br-ex1.nmconnection
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/br-ex1.nmconnection ']'
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + echo 'Skipping br-ex1.nmconnection since it does not exist at source'
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: Skipping br-ex1.nmconnection since it does not exist at source
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + for src in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2045]: ++ dirname /etc/NetworkManager/system-connections/ovs-if-br-ex1
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + src_path=/etc/NetworkManager/system-connections
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2046]: ++ basename /etc/NetworkManager/system-connections/ovs-if-br-ex1
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + file=ovs-if-br-ex1
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-br-ex1 ']'
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + echo 'Skipping ovs-if-br-ex1 since it does not exist at source'
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: Skipping ovs-if-br-ex1 since it does not exist at source
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + for src in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2047]: ++ dirname /etc/NetworkManager/system-connections/ovs-if-br-ex1.nmconnection
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + src_path=/etc/NetworkManager/system-connections
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2048]: ++ basename /etc/NetworkManager/system-connections/ovs-if-br-ex1.nmconnection
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + file=ovs-if-br-ex1.nmconnection
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-br-ex1.nmconnection ']'
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + echo 'Skipping ovs-if-br-ex1.nmconnection since it does not exist at source'
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: Skipping ovs-if-br-ex1.nmconnection since it does not exist at source
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + for src in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2049]: ++ dirname /etc/NetworkManager/system-connections/ovs-port-br-ex1
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + src_path=/etc/NetworkManager/system-connections
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2050]: ++ basename /etc/NetworkManager/system-connections/ovs-port-br-ex1
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + file=ovs-port-br-ex1
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-br-ex1 ']'
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + echo 'Skipping ovs-port-br-ex1 since it does not exist at source'
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: Skipping ovs-port-br-ex1 since it does not exist at source
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + for src in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2051]: ++ dirname /etc/NetworkManager/system-connections/ovs-port-br-ex1.nmconnection
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + src_path=/etc/NetworkManager/system-connections
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2052]: ++ basename /etc/NetworkManager/system-connections/ovs-port-br-ex1.nmconnection
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + file=ovs-port-br-ex1.nmconnection
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-br-ex1.nmconnection ']'
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + echo 'Skipping ovs-port-br-ex1.nmconnection since it does not exist at source'
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: Skipping ovs-port-br-ex1.nmconnection since it does not exist at source
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + for src in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2053]: ++ dirname /etc/NetworkManager/system-connections/ovs-if-phys1
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + src_path=/etc/NetworkManager/system-connections
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2054]: ++ basename /etc/NetworkManager/system-connections/ovs-if-phys1
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + file=ovs-if-phys1
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-phys1 ']'
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + echo 'Skipping ovs-if-phys1 since it does not exist at source'
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: Skipping ovs-if-phys1 since it does not exist at source
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + for src in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2055]: ++ dirname /etc/NetworkManager/system-connections/ovs-if-phys1.nmconnection
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + src_path=/etc/NetworkManager/system-connections
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2056]: ++ basename /etc/NetworkManager/system-connections/ovs-if-phys1.nmconnection
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + file=ovs-if-phys1.nmconnection
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-phys1.nmconnection ']'
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + echo 'Skipping ovs-if-phys1.nmconnection since it does not exist at source'
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: Skipping ovs-if-phys1.nmconnection since it does not exist at source
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + for src in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2057]: ++ dirname /etc/NetworkManager/system-connections/ovs-port-phys1
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + src_path=/etc/NetworkManager/system-connections
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2058]: ++ basename /etc/NetworkManager/system-connections/ovs-port-phys1
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + file=ovs-port-phys1
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-phys1 ']'
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + echo 'Skipping ovs-port-phys1 since it does not exist at source'
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: Skipping ovs-port-phys1 since it does not exist at source
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + for src in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2059]: ++ dirname /etc/NetworkManager/system-connections/ovs-port-phys1.nmconnection
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + src_path=/etc/NetworkManager/system-connections
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2060]: ++ basename /etc/NetworkManager/system-connections/ovs-port-phys1.nmconnection
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + file=ovs-port-phys1.nmconnection
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-phys1.nmconnection ']'
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + echo 'Skipping ovs-port-phys1.nmconnection since it does not exist at source'
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: Skipping ovs-port-phys1.nmconnection since it does not exist at source
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + rm_nm_conn_files
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/br-ex1 ']'
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/br-ex1.nmconnection ']'
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-br-ex1 ']'
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-br-ex1.nmconnection ']'
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-br-ex1 ']'
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-br-ex1.nmconnection ']'
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-phys1 ']'
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-phys1.nmconnection ']'
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-phys1 ']'
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-phys1.nmconnection ']'
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + nmcli connection reload
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + handle_exit
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + e=0
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + '[' 0 -eq 0 ']'
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + print_state
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + echo 'Current device, connection, interface and routing state:'
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: Current device, connection, interface and routing state:
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2065]: + nmcli -g all device
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2066]: + grep -v unmanaged
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2066]: br-ex:ovs-interface:connected:full:full:/org/freedesktop/NetworkManager/Devices/6:ovs-if-br-ex:1819ac43-68a9-4976-bc28-41abb9d46380:/org/freedesktop/NetworkManager/ActiveConnection/7
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2066]: ens5:ethernet:connected:limited:limited:/org/freedesktop/NetworkManager/Devices/2:ovs-if-phys0:1fe530aa-6f97-4a74-9f19-bba3f65a0596:/org/freedesktop/NetworkManager/ActiveConnection/6
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2066]: br-ex:ovs-bridge:connected:limited:limited:/org/freedesktop/NetworkManager/Devices/3:br-ex:6d3cf1fd-1c82-447c-b0c3-a48401391b29:/org/freedesktop/NetworkManager/ActiveConnection/2
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2066]: br-ex:ovs-port:connected:limited:limited:/org/freedesktop/NetworkManager/Devices/5:ovs-port-br-ex:f435d5b6-43a2-4866-9289-a2d3cf775ea9:/org/freedesktop/NetworkManager/ActiveConnection/3
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2066]: ens5:ovs-port:connected:limited:limited:/org/freedesktop/NetworkManager/Devices/4:ovs-port-phys0:4152ced3-7759-4837-844f-8e1195509a74:/org/freedesktop/NetworkManager/ActiveConnection/4
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + nmcli -g all connection
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2070]: ovs-if-br-ex:1819ac43-68a9-4976-bc28-41abb9d46380:ovs-interface:1677166960:Thu Feb 23 15\:42\:40 2023:yes:0:no:/org/freedesktop/NetworkManager/Settings/6:yes:br-ex:activated:/org/freedesktop/NetworkManager/ActiveConnection/7:ovs-port:/run/NetworkManager/system-connections/ovs-if-br-ex.nmconnection
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2070]: br-ex:6d3cf1fd-1c82-447c-b0c3-a48401391b29:ovs-bridge:1677166960:Thu Feb 23 15\:42\:40 2023:yes:0:no:/org/freedesktop/NetworkManager/Settings/2:yes:br-ex:activated:/org/freedesktop/NetworkManager/ActiveConnection/2::/run/NetworkManager/system-connections/br-ex.nmconnection
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2070]: ovs-if-phys0:1fe530aa-6f97-4a74-9f19-bba3f65a0596:802-3-ethernet:1677166960:Thu Feb 23 15\:42\:40 2023:yes:100:no:/org/freedesktop/NetworkManager/Settings/5:yes:ens5:activated:/org/freedesktop/NetworkManager/ActiveConnection/6:ovs-port:/run/NetworkManager/system-connections/ovs-if-phys0.nmconnection
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2070]: ovs-port-br-ex:f435d5b6-43a2-4866-9289-a2d3cf775ea9:ovs-port:1677166959:Thu Feb 23 15\:42\:39 2023:no:0:no:/org/freedesktop/NetworkManager/Settings/4:yes:br-ex:activated:/org/freedesktop/NetworkManager/ActiveConnection/3:ovs-bridge:/run/NetworkManager/system-connections/ovs-port-br-ex.nmconnection
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2070]: ovs-port-phys0:4152ced3-7759-4837-844f-8e1195509a74:ovs-port:1677166960:Thu Feb 23 15\:42\:40 2023:no:0:no:/org/freedesktop/NetworkManager/Settings/3:yes:ens5:activated:/org/freedesktop/NetworkManager/ActiveConnection/4:ovs-bridge:/run/NetworkManager/system-connections/ovs-port-phys0.nmconnection
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2070]: Wired connection 1:eb99b8bd-8e1f-3f41-845b-962703e428f7:802-3-ethernet:1677166959:Thu Feb 23 15\:42\:39 2023:yes:-999:no:/org/freedesktop/NetworkManager/Settings/1:no:::::/run/NetworkManager/system-connections/Wired connection 1.nmconnection
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + ip -d address show
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2074]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2074]: link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0 minmtu 0 maxmtu 0 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2074]: inet 127.0.0.1/8 scope host lo
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2074]: valid_lft forever preferred_lft forever
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2074]: inet6 ::1/128 scope host
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2074]: valid_lft forever preferred_lft forever
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2074]: 2: ens5: mtu 9001 qdisc mq master ovs-system state UP group default qlen 1000
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2074]: link/ether 02:ea:92:f9:d3:f3 brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 128 maxmtu 9216
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2074]: openvswitch_slave numtxqueues 4 numrxqueues 4 gso_max_size 65536 gso_max_segs 65535
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2074]: 4: ovs-system: mtu 1500 qdisc noop state DOWN group default qlen 1000
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2074]: link/ether da:97:47:35:d6:ea brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65535
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2074]: openvswitch numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2074]: 5: br-ex: mtu 9001 qdisc noqueue state UNKNOWN group default qlen 1000
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2074]: link/ether 02:ea:92:f9:d3:f3 brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65535
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2074]: openvswitch numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2074]: inet 10.0.136.68/19 brd 10.0.159.255 scope global dynamic noprefixroute br-ex
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2074]: valid_lft 3600sec preferred_lft 3600sec
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2074]: inet6 fe80::8bc8:5fcc:9f50:c6f7/64 scope link tentative noprefixroute
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2074]: valid_lft forever preferred_lft forever
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + ip route show
Feb 23 15:42:41 ip-10-0-136-68 kubenswrapper[2125]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote'
Feb 23 15:42:41
ip-10-0-136-68 kubenswrapper[2125]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 23 15:42:41 ip-10-0-136-68 kubenswrapper[2125]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 23 15:42:41 ip-10-0-136-68 kubenswrapper[2125]: Flag --cloud-provider has been deprecated, will be removed in 1.25 or later, in favor of removing cloud provider code from Kubelet. Feb 23 15:42:41 ip-10-0-136-68 kubenswrapper[2125]: Flag --provider-id has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 23 15:42:41 ip-10-0-136-68 kubenswrapper[2125]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 23 15:42:41 ip-10-0-136-68 kubenswrapper[2125]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 23 15:42:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.894187 2125 server.go:200] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 23 15:42:41 ip-10-0-136-68 kubenswrapper[2125]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote' Feb 23 15:42:41 ip-10-0-136-68 kubenswrapper[2125]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. 
Will be removed in a future version. Feb 23 15:42:41 ip-10-0-136-68 kubenswrapper[2125]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 23 15:42:41 ip-10-0-136-68 kubenswrapper[2125]: Flag --cloud-provider has been deprecated, will be removed in 1.25 or later, in favor of removing cloud provider code from Kubelet. Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2075]: default via 10.0.128.1 dev br-ex proto dhcp src 10.0.136.68 metric 48 Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2075]: 10.0.128.0/19 dev br-ex proto kernel scope link src 10.0.136.68 metric 48 Feb 23 15:42:41 ip-10-0-136-68 kubenswrapper[2125]: Flag --provider-id has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 23 15:42:41 ip-10-0-136-68 kubenswrapper[2125]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 23 15:42:41 ip-10-0-136-68 kubenswrapper[2125]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 23 15:42:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897653 2125 flags.go:64] FLAG: --add-dir-header="false"
Feb 23 15:42:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897662 2125 flags.go:64] FLAG: --address="0.0.0.0"
Feb 23 15:42:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897668 2125 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 23 15:42:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897673 2125 flags.go:64] FLAG: --alsologtostderr="false"
Feb 23 15:42:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897677 2125 flags.go:64] FLAG: --anonymous-auth="true"
Feb 23 15:42:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897682 2125 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 23 15:42:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897686 2125 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 23 15:42:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897690 2125 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 23 15:42:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897695 2125 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 23 15:42:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897699 2125 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 23 15:42:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897702 2125 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 23 15:42:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897704 2125 flags.go:64] FLAG: --azure-container-registry-config=""
Feb 23 15:42:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897707 2125 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 23 15:42:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897710 2125 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 23 15:42:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897714 2125 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 23 15:42:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897717 2125 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[1244]: + ip -6 route show
Feb 23 15:42:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897719 2125 flags.go:64] FLAG: --cgroup-root=""
Feb 23 15:42:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897722 2125 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 23 15:42:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897724 2125 flags.go:64] FLAG: --client-ca-file=""
Feb 23 15:42:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897726 2125 flags.go:64] FLAG: --cloud-config=""
Feb 23 15:42:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897729 2125 flags.go:64] FLAG: --cloud-provider="aws"
Feb 23 15:42:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897731 2125 flags.go:64] FLAG: --cluster-dns="[]"
Feb 23 15:42:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897735 2125 flags.go:64] FLAG: --cluster-domain=""
Feb 23 15:42:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897737 2125 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 23 15:42:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897740 2125 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 23 15:42:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897743 2125 flags.go:64] FLAG: --container-log-max-files="5"
Feb 23 15:42:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897746 2125 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 23 15:42:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897748 2125 flags.go:64] FLAG: --container-runtime="remote"
Feb 23 15:42:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897751 2125 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 23 15:42:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897753 2125 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 23 15:42:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897757 2125 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 23 15:42:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897760 2125 flags.go:64] FLAG: --contention-profiling="false"
Feb 23 15:42:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897762 2125 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 23 15:42:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897764 2125 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 23 15:42:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897767 2125 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 23 15:42:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897770 2125 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 23 15:42:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897773 2125 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 23 15:42:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897776 2125 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 23 15:42:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897779 2125 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 23 15:42:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897782 2125 flags.go:64] FLAG: --enable-load-reader="false"
Feb 23 15:42:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897785 2125 flags.go:64] FLAG: --enable-server="true"
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2076]: ::1 dev lo proto kernel metric 256 pref medium
Feb 23 15:42:41 ip-10-0-136-68 configure-ovs.sh[2076]: fe80::/64 dev br-ex proto kernel metric 1024 pref medium
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897787 2125 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897791 2125 flags.go:64] FLAG: --event-burst="10"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897794 2125 flags.go:64] FLAG: --event-qps="5"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897796 2125 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897799 2125 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897801 2125 flags.go:64] FLAG: --eviction-hard="imagefs.available<15%,memory.available<100Mi,nodefs.available<10%,nodefs.inodesFree<5%"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897810 2125 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897812 2125 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897815 2125 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897817 2125 flags.go:64] FLAG: --eviction-soft=""
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897820 2125 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897822 2125 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897825 2125 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897827 2125 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897829 2125 flags.go:64] FLAG: --fail-swap-on="true"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897832 2125 flags.go:64] FLAG: --feature-gates=""
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897836 2125 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897838 2125 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897841 2125 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897844 2125 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897847 2125 flags.go:64] FLAG: --healthz-port="10248"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897849 2125 flags.go:64] FLAG: --help="false"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897852 2125 flags.go:64] FLAG: --hostname-override="ip-10-0-136-68.us-west-2.compute.internal"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897854 2125 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 23 15:42:42 ip-10-0-136-68 configure-ovs.sh[1244]: + exit 0
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897857 2125 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897859 2125 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897861 2125 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897864 2125 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897866 2125 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897869 2125 flags.go:64] FLAG: --image-service-endpoint=""
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897871 2125 flags.go:64] FLAG: --iptables-drop-bit="15"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897874 2125 flags.go:64] FLAG: --iptables-masquerade-bit="14"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897876 2125 flags.go:64] FLAG: --keep-terminated-pod-volumes="false"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897878 2125 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897880 2125 flags.go:64] FLAG: --kube-api-burst="10"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897884 2125 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897887 2125 flags.go:64] FLAG: --kube-api-qps="5"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897890 2125 flags.go:64] FLAG: --kube-reserved=""
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897892 2125 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897894 2125 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897897 2125 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897899 2125 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897901 2125 flags.go:64] FLAG: --lock-file=""
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897904 2125 flags.go:64] FLAG: --log-backtrace-at=":0"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897906 2125 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897909 2125 flags.go:64] FLAG: --log-dir=""
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897913 2125 flags.go:64] FLAG: --log-file=""
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897916 2125 flags.go:64] FLAG: --log-file-max-size="1800"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897918 2125 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897921 2125 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897926 2125 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897929 2125 flags.go:64] FLAG: --logging-format="text"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897931 2125 flags.go:64] FLAG: --logtostderr="true"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897933 2125 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897936 2125 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897939 2125 flags.go:64] FLAG: --manifest-url=""
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897941 2125 flags.go:64] FLAG: --manifest-url-header=""
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897944 2125 flags.go:64] FLAG: --master-service-namespace="default"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897947 2125 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897949 2125 flags.go:64] FLAG: --max-open-files="1000000"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897954 2125 flags.go:64] FLAG: --max-pods="110"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897957 2125 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897961 2125 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897963 2125 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897965 2125 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897967 2125 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897970 2125 flags.go:64] FLAG: --node-ip=""
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897972 2125 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhcos"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897977 2125 flags.go:64] FLAG: --node-status-max-images="50"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897980 2125 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897982 2125 flags.go:64] FLAG: --one-output="false"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897984 2125 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897987 2125 flags.go:64] FLAG: --pod-cidr=""
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897989 2125 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be64c744b92d1b1df463d84d8277c38069f3ce4e8e95ce84d4a6ffac6dc53710"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897993 2125 flags.go:64] FLAG: --pod-manifest-path=""
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897996 2125 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.897998 2125 flags.go:64] FLAG: --pods-per-core="0"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.898001 2125 flags.go:64] FLAG: --port="10250"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.898003 2125 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.898007 2125 flags.go:64] FLAG: --provider-id="aws:///us-west-2a/i-09b04ed55ff55b4f7"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.898009 2125 flags.go:64] FLAG: --qos-reserved=""
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.898012 2125 flags.go:64] FLAG: --read-only-port="10255"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.898014 2125 flags.go:64] FLAG: --register-node="true"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.898018 2125 flags.go:64] FLAG: --register-schedulable="true"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.898021 2125 flags.go:64] FLAG: --register-with-taints=""
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.898024 2125 flags.go:64] FLAG: --registry-burst="10"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.898027 2125 flags.go:64] FLAG: --registry-qps="5"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.898029 2125 flags.go:64] FLAG: --reserved-cpus=""
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.898031 2125 flags.go:64] FLAG: --reserved-memory=""
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.898034 2125 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.898037 2125 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.898039 2125 flags.go:64] FLAG: --rotate-certificates="false"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.898042 2125 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.898044 2125 flags.go:64] FLAG: --runonce="false"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.898047 2125 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.898051 2125 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.898053 2125 flags.go:64] FLAG: --seccomp-default="false"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.898056 2125 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.898058 2125 flags.go:64] FLAG: --skip-headers="false"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.898062 2125 flags.go:64] FLAG: --skip-log-headers="false"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.898064 2125 flags.go:64] FLAG: --stderrthreshold="2"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.898066 2125 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.898069 2125 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.898071 2125 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.898074 2125 flags.go:64] FLAG: --storage-driver-password="root"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.898076 2125 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.898079 2125 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.898081 2125 flags.go:64] FLAG: --storage-driver-user="root"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.898083 2125 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.898086 2125 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.898088 2125 flags.go:64] FLAG: --system-cgroups=""
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.898092 2125 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.898096 2125 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.898100 2125 flags.go:64] FLAG: --tls-cert-file=""
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.898102 2125 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.898107 2125 flags.go:64] FLAG: --tls-min-version=""
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.898109 2125 flags.go:64] FLAG: --tls-private-key-file=""
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.898111 2125 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.898114 2125 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.898116 2125 flags.go:64] FLAG: --v="2"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.898120 2125 flags.go:64] FLAG: --version="false"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.898124 2125 flags.go:64] FLAG: --vmodule=""
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.898128 2125 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.898130 2125 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.898176 2125 feature_gate.go:246] feature gates: &{map[APIPriorityAndFairness:true CSIMigrationAzureFile:false CSIMigrationvSphere:false DownwardAPIHugePages:true RotateKubeletServerCertificate:true]}
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.904668 2125 server.go:413] "Kubelet version" kubeletVersion="v1.25.4+a34b9e9"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.904695 2125 server.go:415] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.904740 2125 feature_gate.go:246] feature gates: &{map[APIPriorityAndFairness:true CSIMigrationAzureFile:false CSIMigrationvSphere:false DownwardAPIHugePages:true RotateKubeletServerCertificate:true]}
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.904799 2125 feature_gate.go:246] feature gates: &{map[APIPriorityAndFairness:true CSIMigrationAzureFile:false CSIMigrationvSphere:false DownwardAPIHugePages:true RotateKubeletServerCertificate:true]}
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: W0223 15:42:41.904886 2125 plugins.go:132] WARNING: aws built-in cloud provider is now deprecated. The AWS provider is deprecated and will be removed in a future release. Please use https://github.com/kubernetes/cloud-provider-aws
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.910225 2125 aws.go:1279] Building AWS cloudprovider
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:41.911709 2125 aws.go:1239] Zone not specified in configuration file; querying AWS metadata service
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.110060 2125 tags.go:80] AWS cloud filtering on ClusterID: mnguyen-rt-wnslw
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.110089 2125 server.go:555] "Successfully initialized cloud provider" cloudProvider="aws" cloudConfigFile=""
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.110100 2125 server.go:993] "Cloud provider determined current node" nodeName="ip-10-0-136-68.us-west-2.compute.internal"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.110108 2125 server.go:825] "Client rotation is on, will bootstrap in background"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.112562 2125 bootstrap.go:100] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.112733 2125 server.go:882] "Starting client certificate rotation"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.112762 2125 certificate_manager.go:270] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.112887 2125 certificate_manager.go:270] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.129116 2125 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.133233 2125 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.134958 2125 manager.go:163] cAdvisor running in container: "/system.slice/kubelet.service"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.143915 2125 fs.go:133] Filesystem UUIDs: map[54e5ab65-ff73-4a26-8c44-2a9765abf45f:/dev/nvme0n1p3 A94B-67F7:/dev/nvme0n1p2 c83680a9-dcc4-4413-a0a5-4681b35c650a:/dev/nvme0n1p4]
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.144040 2125 fs.go:134] Filesystem partitions: map[/dev/nvme0n1p3:{mountpoint:/boot major:259 minor:3 fsType:ext4 blockSize:0} /dev/nvme0n1p4:{mountpoint:/var major:259 minor:4 fsType:xfs blockSize:0} /dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /sys/fs/cgroup:{mountpoint:/sys/fs/cgroup major:0 minor:25 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:43 fsType:tmpfs blockSize:0}]
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.144067 2125 nvidia.go:54] NVIDIA GPU metrics disabled
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.158550 2125 manager.go:212] Machine: {Timestamp:2023-02-23 15:42:42.158327395 +0000 UTC m=+0.860857073 CPUVendorID:GenuineIntel NumCores:4 NumPhysicalCores:2 NumSockets:1 CpuFrequency:3500000 MemoryCapacity:16516509696 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:ec2d456b0a3e28d0eb2f198315e90643 SystemUUID:ec2d456b-0a3e-28d0-eb2f-198315e90643 BootID:231c4e34-08e7-4aab-9038-a4e74720cf09 Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:8258252800 Type:vfs Inodes:2016175 HasInodes:true} {Device:/sys/fs/cgroup DeviceMajor:0 DeviceMinor:25 Capacity:8258252800 Type:vfs Inodes:2016175 HasInodes:true} {Device:/dev/nvme0n1p4 DeviceMajor:259 DeviceMinor:4 Capacity:128300593152 Type:vfs Inodes:62651840 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:43 Capacity:8258252800 Type:vfs Inodes:2016175 HasInodes:true} {Device:/dev/nvme0n1p3 DeviceMajor:259 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:8258252800 Type:vfs Inodes:2016175 HasInodes:true}] DiskMap:map[259:0:{Name:nvme0n1 Major:259 Minor:0 Size:128849018880 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:02:ea:92:f9:d3:f3 Speed:0 Mtu:9001} {Name:ens5 MacAddress:02:ea:92:f9:d3:f3 Speed:0 Mtu:9001} {Name:ovs-system MacAddress:da:97:47:35:d6:ea Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:16516509696 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0 2] Caches:[{Id:0 Size:49152 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:0} {Id:1 Threads:[1 3] Caches:[{Id:1 Size:49152 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:0}] Caches:[{Id:0 Size:56623104 Type:Unified Level:3}]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.158631 2125 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.158786 2125 manager.go:228] Version: {KernelVersion:4.18.0-372.43.1.el8_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 412.86.202302170236-0 (Ootpa) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.159159 2125 container_manager_linux.go:262] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.159222 2125 container_manager_linux.go:267] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName:/system.slice/crio.service SystemCgroupsName:/system.slice KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[cpu:{i:{value:500 scale:-3} d:{Dec:} s:500m Format:DecimalSI} ephemeral-storage:{i:{value:1073741824 scale:0} d:{Dec:} s:1Gi Format:BinarySI} memory:{i:{value:1073741824 scale:0} d:{Dec:} s:1Gi Format:BinarySI}] HardEvictionThresholds:[{Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:4096 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.159238 2125 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.159247 2125 container_manager_linux.go:302] "Creating device plugin manager" devicePluginEnabled=true
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.160341 2125 manager.go:127] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.160737 2125 server.go:64] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.162835 2125 state_mem.go:36] "Initialized new in-memory state store"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.162883 2125 util_unix.go:104] "Using this endpoint is deprecated, please consider using full URL format" endpoint="/var/run/crio/crio.sock" URL="unix:///var/run/crio/crio.sock"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.172675 2125 remote_runtime.go:139] "Using CRI v1 runtime API"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.172704 2125 util_unix.go:104] "Using this endpoint is deprecated, please consider using full URL format" endpoint="/var/run/crio/crio.sock" URL="unix:///var/run/crio/crio.sock"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.175299 2125 remote_image.go:95] "Using CRI v1 image API"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.175320 2125 server.go:993] "Cloud provider determined current node" nodeName="ip-10-0-136-68.us-west-2.compute.internal"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.175330 2125 server.go:1136] "Using root directory" path="/var/lib/kubelet"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.179589 2125 kubelet.go:393] "Attempting to sync node with API server"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.179612 2125 kubelet.go:282] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.179642 2125 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.179658 2125 kubelet.go:293] "Adding apiserver pod source"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.179679 2125 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.181364 2125 csr.go:261] certificate signing request csr-nsplc is approved, waiting to be issued
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.183119 2125 kuberuntime_manager.go:240] "Container runtime initialized" containerRuntime="cri-o" version="1.25.2-6.rhaos4.12.git3c4e50c.el8" apiVersion="v1"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.186914 2125 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/gce-pd"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.186930 2125 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/cinder"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.186939 2125 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/azure-disk"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.186949 2125 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/azure-file"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.186959 2125 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/vsphere-volume"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.186972 2125 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.186978 2125 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/rbd"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.186984 2125 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/aws-ebs"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.188558 2125 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.188576 2125 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.188586 2125 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.188596 2125 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.188605 2125 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.188621 2125 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.188630 2125 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/glusterfs"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.188640 2125 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/cephfs"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.188649 2125 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.188661 2125 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.188671 2125 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.188680 2125 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.188690 2125 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: W0223 15:42:42.188734 2125 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "ip-10-0-136-68.us-west-2.compute.internal" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: W0223 15:42:42.188718 2125 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:42.188762 2125 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "ip-10-0-136-68.us-west-2.compute.internal" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:42.188772 2125 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.189610 2125 csr.go:257] certificate signing request csr-nsplc is issued
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.189803 2125 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.189938 2125 server.go:1175] "Started kubelet"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:42.189982 2125 kubelet.go:1333] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.190312 2125 server.go:155] "Starting to listen" address="0.0.0.0" port=10250
Feb 23 15:42:42 ip-10-0-136-68 systemd[1]: Started Kubernetes Kubelet.
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.190901 2125 server.go:438] "Adding debug handlers to kubelet server"
Feb 23 15:42:42 ip-10-0-136-68 systemd[1]: Reached target Multi-User System.
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.192086 2125 csi_plugin.go:1032] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-68.us-west-2.compute.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 23 15:42:42 ip-10-0-136-68 systemd[1]: Reached target Graphical Interface.
Feb 23 15:42:42 ip-10-0-136-68 systemd[1]: Starting Update UTMP about System Runlevel Changes...
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:42.197922 2125 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-10-0-136-68.us-west-2.compute.internal.17467e775259e1f4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-10-0-136-68.us-west-2.compute.internal", UID:"ip-10-0-136-68.us-west-2.compute.internal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-10-0-136-68.us-west-2.compute.internal"}, FirstTimestamp:time.Date(2023, time.February, 23, 15, 42, 42, 189918708, time.Local), LastTimestamp:time.Date(2023, time.February, 23, 15, 42, 42, 189918708, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.197999 2125 certificate_manager.go:270] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.198022 2125 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.198470 2125 volume_manager.go:291] "The desired_state_of_world populator starts"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.198533 2125 volume_manager.go:293] "Starting Kubelet Volume Manager"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.198603 2125 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:42.199273 2125 kubelet.go:2396] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.199589 2125 factory.go:153] Registering CRI-O factory
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.199603 2125 factory.go:55] Registering systemd factory
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.199668 2125 factory.go:103] Registering Raw factory
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.199715 2125 manager.go:1201] Started watching for new ooms in manager
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.200096 2125 manager.go:302] Starting recovery of all containers
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: W0223 15:42:42.200180 2125 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:42.200199 2125 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 23 15:42:42 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:42:42.202214976Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be64c744b92d1b1df463d84d8277c38069f3ce4e8e95ce84d4a6ffac6dc53710" id=76a3fe24-7c1e-4e3d-8b05-9c9dbf799b21 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:42:42 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:42:42.202406381Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be64c744b92d1b1df463d84d8277c38069f3ce4e8e95ce84d4a6ffac6dc53710 not found" id=76a3fe24-7c1e-4e3d-8b05-9c9dbf799b21 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:42:42 ip-10-0-136-68 systemd[1]: systemd-update-utmp-runlevel.service: Succeeded.
Feb 23 15:42:42 ip-10-0-136-68 systemd[1]: Started Update UTMP about System Runlevel Changes.
Feb 23 15:42:42 ip-10-0-136-68 systemd[1]: Startup finished in 2.391s (kernel) + 3.860s (initrd) + 7.776s (userspace) = 14.029s.
Feb 23 15:42:42 ip-10-0-136-68 systemd[1]: systemd-update-utmp-runlevel.service: Consumed 4ms CPU time
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:42.215907 2125 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "ip-10-0-136-68.us-west-2.compute.internal" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.244397 2125 manager.go:307] Recovery completed
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.282983 2125 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.283023 2125 kubelet_node_status.go:424] "Adding label from cloud provider" labelKey="beta.kubernetes.io/instance-type" labelValue="m6i.xlarge"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.283033 2125 kubelet_node_status.go:426] "Adding node label from cloud provider" labelKey="node.kubernetes.io/instance-type" labelValue="m6i.xlarge"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.283044 2125 kubelet_node_status.go:437] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/zone" labelValue="us-west-2a"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.283054 2125 kubelet_node_status.go:439] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/zone" labelValue="us-west-2a"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.283063 2125 kubelet_node_status.go:443] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/region" labelValue="us-west-2"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.283073 2125 kubelet_node_status.go:445] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/region" labelValue="us-west-2"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.299160 2125 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.299203 2125 kubelet_node_status.go:424] "Adding label from cloud provider" labelKey="beta.kubernetes.io/instance-type" labelValue="m6i.xlarge"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.299215 2125 kubelet_node_status.go:426] "Adding node label from cloud provider" labelKey="node.kubernetes.io/instance-type" labelValue="m6i.xlarge"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.299227 2125 kubelet_node_status.go:437] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/zone" labelValue="us-west-2a"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.299237 2125 kubelet_node_status.go:439] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/zone" labelValue="us-west-2a"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.299249 2125 kubelet_node_status.go:443] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/region" labelValue="us-west-2"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.299259 2125 kubelet_node_status.go:445] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/region" labelValue="us-west-2"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:42.299169 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.366143 2125 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.393983 2125 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.394006 2125 status_manager.go:161] "Starting to sync pod status with apiserver"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.394023 2125 kubelet.go:2033] "Starting kubelet main sync loop"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:42.394066 2125 kubelet.go:2057] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: W0223 15:42:42.398153 2125 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:42.398182 2125 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:42.400177 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:42.417490 2125 controller.go:144] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "ip-10-0-136-68.us-west-2.compute.internal" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.438046 2125 kubelet_node_status.go:590] "Recording event message for node" node="ip-10-0-136-68.us-west-2.compute.internal" event="NodeHasSufficientMemory"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.438078 2125 kubelet_node_status.go:590] "Recording event message for node" node="ip-10-0-136-68.us-west-2.compute.internal" event="NodeHasNoDiskPressure"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.438088 2125 kubelet_node_status.go:590] "Recording event message for node" node="ip-10-0-136-68.us-west-2.compute.internal" event="NodeHasSufficientPID"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.438046 2125 kubelet_node_status.go:590] "Recording event message for node" node="ip-10-0-136-68.us-west-2.compute.internal" event="NodeHasSufficientMemory"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.438151 2125 kubelet_node_status.go:590] "Recording event message for node" node="ip-10-0-136-68.us-west-2.compute.internal" event="NodeHasNoDiskPressure"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.438162 2125 kubelet_node_status.go:590] "Recording event message for node" node="ip-10-0-136-68.us-west-2.compute.internal" event="NodeHasSufficientPID"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.438178 2125 kubelet_node_status.go:72] "Attempting to register node" node="ip-10-0-136-68.us-west-2.compute.internal"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.438622 2125 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.438637 2125 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.438649 2125 state_mem.go:36] "Initialized new in-memory state store"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:42.439321 2125 kubelet_node_status.go:94] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-136-68.us-west-2.compute.internal"
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:42.439332 2125 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-10-0-136-68.us-west-2.compute.internal.17467e7761246028", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-10-0-136-68.us-west-2.compute.internal", UID:"ip-10-0-136-68.us-west-2.compute.internal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ip-10-0-136-68.us-west-2.compute.internal status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ip-10-0-136-68.us-west-2.compute.internal"}, FirstTimestamp:time.Date(2023, time.February, 23, 15, 42, 42, 438070312, time.Local), LastTimestamp:time.Date(2023, time.February, 23, 15, 42, 42, 438070312, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:42.440412 2125 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-10-0-136-68.us-west-2.compute.internal.17467e7761248e19", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-10-0-136-68.us-west-2.compute.internal", UID:"ip-10-0-136-68.us-west-2.compute.internal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ip-10-0-136-68.us-west-2.compute.internal status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ip-10-0-136-68.us-west-2.compute.internal"}, FirstTimestamp:time.Date(2023, time.February, 23, 15, 42, 42, 438082073, time.Local), LastTimestamp:time.Date(2023, time.February, 23, 15, 42, 42, 438082073, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:42.441401 2125 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-10-0-136-68.us-west-2.compute.internal.17467e776124b041", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-10-0-136-68.us-west-2.compute.internal", UID:"ip-10-0-136-68.us-west-2.compute.internal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node ip-10-0-136-68.us-west-2.compute.internal status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"ip-10-0-136-68.us-west-2.compute.internal"}, FirstTimestamp:time.Date(2023, time.February, 23, 15, 42, 42, 438090817, time.Local), LastTimestamp:time.Date(2023, time.February, 23, 15, 42, 42, 438090817, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.441530 2125 policy_none.go:49] "None policy: Start" Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.441893 2125 memory_manager.go:168] "Starting memorymanager" policy="None" Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.441912 2125 state_mem.go:35] "Initializing new in-memory state store" Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:42.442749 2125 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-10-0-136-68.us-west-2.compute.internal.17467e7761246028", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-10-0-136-68.us-west-2.compute.internal", UID:"ip-10-0-136-68.us-west-2.compute.internal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ip-10-0-136-68.us-west-2.compute.internal status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ip-10-0-136-68.us-west-2.compute.internal"}, FirstTimestamp:time.Date(2023, time.February, 23, 15, 42, 42, 438070312, time.Local), LastTimestamp:time.Date(2023, time.February, 23, 15, 42, 42, 438143460, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ip-10-0-136-68.us-west-2.compute.internal.17467e7761246028" is forbidden: User "system:anonymous" cannot 
patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:42.443767 2125 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-10-0-136-68.us-west-2.compute.internal.17467e7761248e19", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-10-0-136-68.us-west-2.compute.internal", UID:"ip-10-0-136-68.us-west-2.compute.internal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ip-10-0-136-68.us-west-2.compute.internal status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ip-10-0-136-68.us-west-2.compute.internal"}, FirstTimestamp:time.Date(2023, time.February, 23, 15, 42, 42, 438082073, time.Local), LastTimestamp:time.Date(2023, time.February, 23, 15, 42, 42, 438155261, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ip-10-0-136-68.us-west-2.compute.internal.17467e7761248e19" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.443929 2125 container_manager_linux.go:427] "Updating kernel flag" flag="vm/overcommit_memory" expectedValue=1 actualValue=0 Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.444019 2125 container_manager_linux.go:427] "Updating kernel flag" flag="kernel/panic" expectedValue=10 actualValue=0 Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:42.444824 2125 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-10-0-136-68.us-west-2.compute.internal.17467e776124b041", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-10-0-136-68.us-west-2.compute.internal", UID:"ip-10-0-136-68.us-west-2.compute.internal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node ip-10-0-136-68.us-west-2.compute.internal status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"ip-10-0-136-68.us-west-2.compute.internal"}, FirstTimestamp:time.Date(2023, time.February, 23, 15, 42, 42, 438090817, time.Local), LastTimestamp:time.Date(2023, time.February, 23, 15, 42, 42, 438165335, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ip-10-0-136-68.us-west-2.compute.internal.17467e776124b041" is forbidden: User "system:anonymous" cannot patch resource "events" 
in API group "" in the namespace "default"' (will not retry!) Feb 23 15:42:42 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods.slice. Feb 23 15:42:42 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable.slice. Feb 23 15:42:42 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-besteffort.slice. Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.459233 2125 manager.go:273] "Starting Device Plugin manager" Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.459270 2125 manager.go:447] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.459279 2125 server.go:77] "Starting device plugin registration server" Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.459530 2125 plugin_watcher.go:52] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.459582 2125 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.459593 2125 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:42.460108 2125 eviction_manager.go:256] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:42.463143 2125 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-10-0-136-68.us-west-2.compute.internal.17467e77628b2b5f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-10-0-136-68.us-west-2.compute.internal", UID:"ip-10-0-136-68.us-west-2.compute.internal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"ip-10-0-136-68.us-west-2.compute.internal"}, FirstTimestamp:time.Date(2023, time.February, 23, 15, 42, 42, 461584223, time.Local), LastTimestamp:time.Date(2023, time.February, 23, 15, 42, 42, 461584223, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.495052 2125 kubelet.go:2119] "SyncLoop ADD" source="file" pods=[] Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:42.500261 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:42.600485 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.639778 2125 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.639811 2125 kubelet_node_status.go:424] "Adding label from cloud provider" labelKey="beta.kubernetes.io/instance-type" labelValue="m6i.xlarge" Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.639818 2125 kubelet_node_status.go:426] "Adding node label from cloud provider" labelKey="node.kubernetes.io/instance-type" labelValue="m6i.xlarge" Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.639827 2125 kubelet_node_status.go:437] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/zone" labelValue="us-west-2a" Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.639833 2125 kubelet_node_status.go:439] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/zone" labelValue="us-west-2a" Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.639840 2125 kubelet_node_status.go:443] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/region" labelValue="us-west-2" Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.639846 2125 kubelet_node_status.go:445] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/region" labelValue="us-west-2" Feb 23 15:42:42 ip-10-0-136-68 
kubenswrapper[2125]: I0223 15:42:42.640367 2125 kubelet_node_status.go:590] "Recording event message for node" node="ip-10-0-136-68.us-west-2.compute.internal" event="NodeHasSufficientMemory" Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.640391 2125 kubelet_node_status.go:590] "Recording event message for node" node="ip-10-0-136-68.us-west-2.compute.internal" event="NodeHasNoDiskPressure" Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.640403 2125 kubelet_node_status.go:590] "Recording event message for node" node="ip-10-0-136-68.us-west-2.compute.internal" event="NodeHasSufficientPID" Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:42.640422 2125 kubelet_node_status.go:72] "Attempting to register node" node="ip-10-0-136-68.us-west-2.compute.internal" Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:42.641758 2125 kubelet_node_status.go:94] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-136-68.us-west-2.compute.internal" Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:42.641735 2125 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-10-0-136-68.us-west-2.compute.internal.17467e7761246028", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-10-0-136-68.us-west-2.compute.internal", UID:"ip-10-0-136-68.us-west-2.compute.internal", APIVersion:"", 
ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ip-10-0-136-68.us-west-2.compute.internal status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ip-10-0-136-68.us-west-2.compute.internal"}, FirstTimestamp:time.Date(2023, time.February, 23, 15, 42, 42, 438070312, time.Local), LastTimestamp:time.Date(2023, time.February, 23, 15, 42, 42, 640379493, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ip-10-0-136-68.us-west-2.compute.internal.17467e7761246028" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:42.642822 2125 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-10-0-136-68.us-west-2.compute.internal.17467e7761248e19", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-10-0-136-68.us-west-2.compute.internal", UID:"ip-10-0-136-68.us-west-2.compute.internal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ip-10-0-136-68.us-west-2.compute.internal status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ip-10-0-136-68.us-west-2.compute.internal"}, FirstTimestamp:time.Date(2023, time.February, 23, 15, 
42, 42, 438082073, time.Local), LastTimestamp:time.Date(2023, time.February, 23, 15, 42, 42, 640396753, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ip-10-0-136-68.us-west-2.compute.internal.17467e7761248e19" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:42.643821 2125 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-10-0-136-68.us-west-2.compute.internal.17467e776124b041", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-10-0-136-68.us-west-2.compute.internal", UID:"ip-10-0-136-68.us-west-2.compute.internal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node ip-10-0-136-68.us-west-2.compute.internal status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"ip-10-0-136-68.us-west-2.compute.internal"}, FirstTimestamp:time.Date(2023, time.February, 23, 15, 42, 42, 438090817, time.Local), LastTimestamp:time.Date(2023, time.February, 23, 15, 42, 42, 640405735, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", 
ReportingInstance:""}': 'events "ip-10-0-136-68.us-west-2.compute.internal.17467e776124b041" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:42.701154 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:42.801480 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:42.818834 2125 controller.go:144] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "ip-10-0-136-68.us-west-2.compute.internal" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 23 15:42:42 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:42.902099 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:43 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:43.002467 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:43 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:43.042629 2125 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Feb 23 15:42:43 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:43.042658 2125 kubelet_node_status.go:424] "Adding label from cloud provider" labelKey="beta.kubernetes.io/instance-type" labelValue="m6i.xlarge" Feb 23 15:42:43 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:43.042666 2125 kubelet_node_status.go:426] "Adding node label from cloud provider" labelKey="node.kubernetes.io/instance-type" labelValue="m6i.xlarge" Feb 23 15:42:43 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:43.042686 
2125 kubelet_node_status.go:437] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/zone" labelValue="us-west-2a" Feb 23 15:42:43 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:43.042695 2125 kubelet_node_status.go:439] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/zone" labelValue="us-west-2a" Feb 23 15:42:43 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:43.042713 2125 kubelet_node_status.go:443] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/region" labelValue="us-west-2" Feb 23 15:42:43 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:43.042720 2125 kubelet_node_status.go:445] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/region" labelValue="us-west-2" Feb 23 15:42:43 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:43.043167 2125 kubelet_node_status.go:590] "Recording event message for node" node="ip-10-0-136-68.us-west-2.compute.internal" event="NodeHasSufficientMemory" Feb 23 15:42:43 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:43.043197 2125 kubelet_node_status.go:590] "Recording event message for node" node="ip-10-0-136-68.us-west-2.compute.internal" event="NodeHasNoDiskPressure" Feb 23 15:42:43 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:43.043205 2125 kubelet_node_status.go:590] "Recording event message for node" node="ip-10-0-136-68.us-west-2.compute.internal" event="NodeHasSufficientPID" Feb 23 15:42:43 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:43.043226 2125 kubelet_node_status.go:72] "Attempting to register node" node="ip-10-0-136-68.us-west-2.compute.internal" Feb 23 15:42:43 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:43.044846 2125 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-10-0-136-68.us-west-2.compute.internal.17467e7761246028", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, 
CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-10-0-136-68.us-west-2.compute.internal", UID:"ip-10-0-136-68.us-west-2.compute.internal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ip-10-0-136-68.us-west-2.compute.internal status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ip-10-0-136-68.us-west-2.compute.internal"}, FirstTimestamp:time.Date(2023, time.February, 23, 15, 42, 42, 438070312, time.Local), LastTimestamp:time.Date(2023, time.February, 23, 15, 42, 43, 43182354, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ip-10-0-136-68.us-west-2.compute.internal.17467e7761246028" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 23 15:42:43 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:43.044906 2125 kubelet_node_status.go:94] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-136-68.us-west-2.compute.internal" Feb 23 15:42:43 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:43.046114 2125 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-10-0-136-68.us-west-2.compute.internal.17467e7761248e19", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-10-0-136-68.us-west-2.compute.internal", UID:"ip-10-0-136-68.us-west-2.compute.internal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ip-10-0-136-68.us-west-2.compute.internal status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ip-10-0-136-68.us-west-2.compute.internal"}, FirstTimestamp:time.Date(2023, time.February, 23, 15, 42, 42, 438082073, time.Local), LastTimestamp:time.Date(2023, time.February, 23, 15, 42, 43, 43200622, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ip-10-0-136-68.us-west-2.compute.internal.17467e7761248e19" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not 
retry!) Feb 23 15:42:43 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:43.047264 2125 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-10-0-136-68.us-west-2.compute.internal.17467e776124b041", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-10-0-136-68.us-west-2.compute.internal", UID:"ip-10-0-136-68.us-west-2.compute.internal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node ip-10-0-136-68.us-west-2.compute.internal status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"ip-10-0-136-68.us-west-2.compute.internal"}, FirstTimestamp:time.Date(2023, time.February, 23, 15, 42, 42, 438090817, time.Local), LastTimestamp:time.Date(2023, time.February, 23, 15, 42, 43, 43208268, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ip-10-0-136-68.us-west-2.compute.internal.17467e776124b041" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 23 15:42:43 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:43.102529 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:43 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:43.190954 2125 certificate_manager.go:270] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2023-02-24 15:23:10 +0000 UTC, rotation deadline is 2023-02-24 10:00:15.444307166 +0000 UTC Feb 23 15:42:43 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:43.190998 2125 certificate_manager.go:270] kubernetes.io/kube-apiserver-client-kubelet: Waiting 18h17m32.253311607s for next certificate rotation Feb 23 15:42:43 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:43.194263 2125 csi_plugin.go:1032] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-68.us-west-2.compute.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 23 15:42:43 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:43.202661 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:43 ip-10-0-136-68 chronyd[960]: Selected source 169.254.169.123 Feb 23 15:42:43 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:43.303175 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:43 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:43.403402 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:43 ip-10-0-136-68 kubenswrapper[2125]: W0223 15:42:43.471458 2125 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 23 15:42:43 ip-10-0-136-68 kubenswrapper[2125]: E0223 
15:42:43.471492 2125 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 23 15:42:43 ip-10-0-136-68 kubenswrapper[2125]: W0223 15:42:43.479218 2125 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 23 15:42:43 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:43.479244 2125 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 23 15:42:43 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:43.504350 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:43 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:43.605332 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:43 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:43.621107 2125 controller.go:144] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "ip-10-0-136-68.us-west-2.compute.internal" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 23 15:42:43 ip-10-0-136-68 kubenswrapper[2125]: W0223 15:42:43.681089 2125 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the 
cluster scope
Feb 23 15:42:43 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:43.681112 2125 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 23 15:42:43 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:43.705506 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found"
Feb 23 15:42:43 ip-10-0-136-68 kubenswrapper[2125]: W0223 15:42:43.761352 2125 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "ip-10-0-136-68.us-west-2.compute.internal" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 23 15:42:43 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:43.761381 2125 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "ip-10-0-136-68.us-west-2.compute.internal" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 23 15:42:43 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:43.805983 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found"
Feb 23 15:42:43 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:43.846546 2125 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Feb 23 15:42:43 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:43.846582 2125 kubelet_node_status.go:424] "Adding label from cloud provider" labelKey="beta.kubernetes.io/instance-type" labelValue="m6i.xlarge"
Feb 23 15:42:43 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:43.846591 2125 kubelet_node_status.go:426] "Adding node label from cloud provider" labelKey="node.kubernetes.io/instance-type" labelValue="m6i.xlarge"
Feb 23 15:42:43 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:43.846600 2125 kubelet_node_status.go:437] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/zone" labelValue="us-west-2a"
Feb 23 15:42:43 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:43.846607 2125 kubelet_node_status.go:439] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/zone" labelValue="us-west-2a"
Feb 23 15:42:43 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:43.846615 2125 kubelet_node_status.go:443] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/region" labelValue="us-west-2"
Feb 23 15:42:43 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:43.846622 2125 kubelet_node_status.go:445] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/region" labelValue="us-west-2"
Feb 23 15:42:43 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:43.847197 2125 kubelet_node_status.go:590] "Recording event message for node" node="ip-10-0-136-68.us-west-2.compute.internal" event="NodeHasSufficientMemory"
Feb 23 15:42:43 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:43.847229 2125 kubelet_node_status.go:590] "Recording event message for node" node="ip-10-0-136-68.us-west-2.compute.internal" event="NodeHasNoDiskPressure"
Feb 23 15:42:43 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:43.847239 2125 kubelet_node_status.go:590] "Recording event message for node" node="ip-10-0-136-68.us-west-2.compute.internal" event="NodeHasSufficientPID"
Feb 23 15:42:43 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:43.847257 2125 kubelet_node_status.go:72] "Attempting to register node" node="ip-10-0-136-68.us-west-2.compute.internal"
Feb 23 15:42:43 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:43.848870 2125 kubelet_node_status.go:94] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-136-68.us-west-2.compute.internal"
Feb 23 15:42:43 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:43.849194 2125 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-10-0-136-68.us-west-2.compute.internal.17467e7761246028", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-10-0-136-68.us-west-2.compute.internal", UID:"ip-10-0-136-68.us-west-2.compute.internal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ip-10-0-136-68.us-west-2.compute.internal status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ip-10-0-136-68.us-west-2.compute.internal"}, FirstTimestamp:time.Date(2023, time.February, 23, 15, 42, 42, 438070312, time.Local), LastTimestamp:time.Date(2023, time.February, 23, 15, 42, 43, 847211160, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ip-10-0-136-68.us-west-2.compute.internal.17467e7761246028" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 23 15:42:43 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:43.850522 2125 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-10-0-136-68.us-west-2.compute.internal.17467e7761248e19", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-10-0-136-68.us-west-2.compute.internal", UID:"ip-10-0-136-68.us-west-2.compute.internal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ip-10-0-136-68.us-west-2.compute.internal status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ip-10-0-136-68.us-west-2.compute.internal"}, FirstTimestamp:time.Date(2023, time.February, 23, 15, 42, 42, 438082073, time.Local), LastTimestamp:time.Date(2023, time.February, 23, 15, 42, 43, 847233051, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ip-10-0-136-68.us-west-2.compute.internal.17467e7761248e19" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 23 15:42:43 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:43.851722 2125 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-10-0-136-68.us-west-2.compute.internal.17467e776124b041", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-10-0-136-68.us-west-2.compute.internal", UID:"ip-10-0-136-68.us-west-2.compute.internal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node ip-10-0-136-68.us-west-2.compute.internal status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"ip-10-0-136-68.us-west-2.compute.internal"}, FirstTimestamp:time.Date(2023, time.February, 23, 15, 42, 42, 438090817, time.Local), LastTimestamp:time.Date(2023, time.February, 23, 15, 42, 43, 847241837, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ip-10-0-136-68.us-west-2.compute.internal.17467e776124b041" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 23 15:42:43 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:43.907116 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found"
Feb 23 15:42:44 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:44.007620 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found"
Feb 23 15:42:44 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:44.108156 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found"
Feb 23 15:42:44 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:44.194228 2125 csi_plugin.go:1032] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-68.us-west-2.compute.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 23 15:42:44 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:44.209191 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found"
Feb 23 15:42:44 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:44.310139 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found"
Feb 23 15:42:44 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:44.410986 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found"
Feb 23 15:42:44 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:44.511862 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found"
Feb 23 15:42:44 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:44.611998 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found"
Feb 23 15:42:44 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:44.712534 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found"
Feb 23 15:42:44 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:44.813594 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found"
Feb 23 15:42:44 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:44.914575 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found"
Feb 23 15:42:45 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:45.015112 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found"
Feb 23 15:42:45 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:45.116068 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found"
Feb 23 15:42:45 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:45.194184 2125 csi_plugin.go:1032] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-68.us-west-2.compute.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 23 15:42:45 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:45.216336 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found"
Feb 23 15:42:45 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:45.222955 2125 controller.go:144] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "ip-10-0-136-68.us-west-2.compute.internal" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb 23 15:42:45 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:45.317400 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found"
Feb 23 15:42:45 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:45.417555 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found"
Feb 23 15:42:45 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:45.450393 2125 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Feb 23 15:42:45 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:45.450432 2125 kubelet_node_status.go:424] "Adding label from cloud provider" labelKey="beta.kubernetes.io/instance-type" labelValue="m6i.xlarge"
Feb 23 15:42:45 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:45.450440 2125 kubelet_node_status.go:426] "Adding node label from cloud provider" labelKey="node.kubernetes.io/instance-type" labelValue="m6i.xlarge"
Feb 23 15:42:45 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:45.450449 2125 kubelet_node_status.go:437] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/zone" labelValue="us-west-2a"
Feb 23 15:42:45 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:45.450456 2125 kubelet_node_status.go:439] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/zone" labelValue="us-west-2a"
Feb 23 15:42:45 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:45.450463 2125 kubelet_node_status.go:443] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/region" labelValue="us-west-2"
Feb 23 15:42:45 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:45.450469 2125 kubelet_node_status.go:445] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/region" labelValue="us-west-2"
Feb 23 15:42:45 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:45.450936 2125 kubelet_node_status.go:590] "Recording event message for node" node="ip-10-0-136-68.us-west-2.compute.internal" event="NodeHasSufficientMemory"
Feb 23 15:42:45 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:45.450963 2125 kubelet_node_status.go:590] "Recording event message for node" node="ip-10-0-136-68.us-west-2.compute.internal" event="NodeHasNoDiskPressure"
Feb 23 15:42:45 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:45.450971 2125 kubelet_node_status.go:590] "Recording event message for node" node="ip-10-0-136-68.us-west-2.compute.internal" event="NodeHasSufficientPID"
Feb 23 15:42:45 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:45.450989 2125 kubelet_node_status.go:72] "Attempting to register node" node="ip-10-0-136-68.us-west-2.compute.internal"
Feb 23 15:42:45 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:45.452251 2125 kubelet_node_status.go:94] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-136-68.us-west-2.compute.internal"
Feb 23 15:42:45 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:45.452263 2125 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-10-0-136-68.us-west-2.compute.internal.17467e7761246028", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-10-0-136-68.us-west-2.compute.internal", UID:"ip-10-0-136-68.us-west-2.compute.internal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ip-10-0-136-68.us-west-2.compute.internal status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ip-10-0-136-68.us-west-2.compute.internal"}, FirstTimestamp:time.Date(2023, time.February, 23, 15, 42, 42, 438070312, time.Local), LastTimestamp:time.Date(2023, time.February, 23, 15, 42, 45, 450952156, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ip-10-0-136-68.us-west-2.compute.internal.17467e7761246028" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 23 15:42:45 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:45.453504 2125 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-10-0-136-68.us-west-2.compute.internal.17467e7761248e19", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-10-0-136-68.us-west-2.compute.internal", UID:"ip-10-0-136-68.us-west-2.compute.internal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ip-10-0-136-68.us-west-2.compute.internal status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ip-10-0-136-68.us-west-2.compute.internal"}, FirstTimestamp:time.Date(2023, time.February, 23, 15, 42, 42, 438082073, time.Local), LastTimestamp:time.Date(2023, time.February, 23, 15, 42, 45, 450966600, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ip-10-0-136-68.us-west-2.compute.internal.17467e7761248e19" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 23 15:42:45 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:45.454638 2125 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-10-0-136-68.us-west-2.compute.internal.17467e776124b041", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-10-0-136-68.us-west-2.compute.internal", UID:"ip-10-0-136-68.us-west-2.compute.internal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node ip-10-0-136-68.us-west-2.compute.internal status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"ip-10-0-136-68.us-west-2.compute.internal"}, FirstTimestamp:time.Date(2023, time.February, 23, 15, 42, 42, 438090817, time.Local), LastTimestamp:time.Date(2023, time.February, 23, 15, 42, 45, 450974170, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ip-10-0-136-68.us-west-2.compute.internal.17467e776124b041" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 23 15:42:45 ip-10-0-136-68 kubenswrapper[2125]: W0223 15:42:45.478695 2125 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 23 15:42:45 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:45.478729 2125 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 23 15:42:45 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:45.518432 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found"
Feb 23 15:42:45 ip-10-0-136-68 kubenswrapper[2125]: W0223 15:42:45.563426 2125 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 23 15:42:45 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:45.563459 2125 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 23 15:42:45 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:45.619955 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found"
Feb 23 15:42:45 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:45.720530 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found"
Feb 23 15:42:45 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:45.821061 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found"
Feb 23 15:42:45 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:45.922092 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found"
Feb 23 15:42:46 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:46.023123 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found"
Feb 23 15:42:46 ip-10-0-136-68 kubenswrapper[2125]: W0223 15:42:46.065321 2125 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 23 15:42:46 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:46.065355 2125 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 23 15:42:46 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:46.125018 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found"
Feb 23 15:42:46 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:46.193840 2125 csi_plugin.go:1032] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-68.us-west-2.compute.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 23 15:42:46 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:46.226593 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found"
Feb 23 15:42:46 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:46.328004 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found"
Feb 23 15:42:46 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:46.429572 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found"
Feb 23 15:42:46 ip-10-0-136-68 kubenswrapper[2125]: W0223 15:42:46.472338 2125 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "ip-10-0-136-68.us-west-2.compute.internal" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 23 15:42:46 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:46.472366 2125 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "ip-10-0-136-68.us-west-2.compute.internal" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 23 15:42:46 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:46.532069 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found"
Feb 23 15:42:46 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:46.632595 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found"
Feb 23 15:42:46 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:46.734207 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found"
Feb 23 15:42:46 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:46.835819 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found"
Feb 23 15:42:46 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:46.938545 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found"
Feb 23 15:42:47 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:47.040949 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found"
Feb 23 15:42:47 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:47.141939 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found"
Feb 23 15:42:47 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:47.194706 2125 csi_plugin.go:1032] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-68.us-west-2.compute.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 23 15:42:47 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:47.242685 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found"
Feb 23 15:42:47 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:47.345829 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found"
Feb 23 15:42:47 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:47.448785 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found"
Feb 23 15:42:47 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:47.460700 2125 kubelet.go:2396] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 23 15:42:47 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:47.551856 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found"
Feb 23 15:42:47 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00068|memory|INFO|185820 kB peak resident set size after 10.1 seconds
Feb 23 15:42:47 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00069|memory|INFO|handlers:4 idl-cells:153 ports:2 revalidators:2 rules:5 udpif keys:9
Feb 23 15:42:47 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:47.652125 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found"
Feb 23 15:42:47 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:47.755835 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found"
Feb 23 15:42:47 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:47.859628 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found"
Feb 23 15:42:47 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:47.963854 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found"
Feb 23 15:42:48 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:48.064995 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found"
Feb 23 15:42:48 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:48.166736 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found"
Feb 23 15:42:48 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:48.194403 2125 csi_plugin.go:1032] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-68.us-west-2.compute.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 23 15:42:48 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:48.270343 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found"
Feb 23 15:42:48 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:48.376328 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found"
Feb 23 15:42:48 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:48.425442 2125 controller.go:144] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "ip-10-0-136-68.us-west-2.compute.internal" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb 23 15:42:48 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:48.477327 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found"
Feb 23 15:42:48 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:48.579403 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found"
Feb 23 15:42:48 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:48.653410 2125 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Feb 23 15:42:48 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:48.653439 2125 kubelet_node_status.go:424] "Adding label from cloud provider" labelKey="beta.kubernetes.io/instance-type" labelValue="m6i.xlarge"
Feb 23 15:42:48 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:48.653447 2125 kubelet_node_status.go:426] "Adding node label from cloud provider" labelKey="node.kubernetes.io/instance-type" labelValue="m6i.xlarge"
Feb 23 15:42:48 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:48.653457 2125 kubelet_node_status.go:437] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/zone" labelValue="us-west-2a"
Feb 23 15:42:48 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:48.653468 2125 kubelet_node_status.go:439] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/zone" labelValue="us-west-2a"
Feb 23 15:42:48 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:48.653484 2125 kubelet_node_status.go:443] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/region" labelValue="us-west-2"
Feb 23 15:42:48 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:48.653492 2125 kubelet_node_status.go:445] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/region" labelValue="us-west-2"
Feb 23 15:42:48 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:48.654003 2125 kubelet_node_status.go:590] "Recording event message for node" node="ip-10-0-136-68.us-west-2.compute.internal" event="NodeHasSufficientMemory"
Feb 23 15:42:48 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:48.654033 2125 kubelet_node_status.go:590] "Recording event message for node" node="ip-10-0-136-68.us-west-2.compute.internal" event="NodeHasNoDiskPressure"
Feb 23 15:42:48 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:48.654044 2125 kubelet_node_status.go:590] "Recording event message for node" node="ip-10-0-136-68.us-west-2.compute.internal" event="NodeHasSufficientPID"
Feb 23 15:42:48 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:48.654067 2125 kubelet_node_status.go:72] "Attempting to register node" node="ip-10-0-136-68.us-west-2.compute.internal"
Feb 23 15:42:48 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:48.655433 2125 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-10-0-136-68.us-west-2.compute.internal.17467e7761246028", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-10-0-136-68.us-west-2.compute.internal", UID:"ip-10-0-136-68.us-west-2.compute.internal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ip-10-0-136-68.us-west-2.compute.internal status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ip-10-0-136-68.us-west-2.compute.internal"}, FirstTimestamp:time.Date(2023, time.February, 23, 15, 42, 42, 438070312, time.Local), LastTimestamp:time.Date(2023, time.February, 23, 15, 42, 48, 654017752, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ip-10-0-136-68.us-west-2.compute.internal.17467e7761246028" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 23 15:42:48 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:48.655503 2125 kubelet_node_status.go:94] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="ip-10-0-136-68.us-west-2.compute.internal"
Feb 23 15:42:48 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:48.656627 2125 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-10-0-136-68.us-west-2.compute.internal.17467e7761248e19", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-10-0-136-68.us-west-2.compute.internal", UID:"ip-10-0-136-68.us-west-2.compute.internal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ip-10-0-136-68.us-west-2.compute.internal status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ip-10-0-136-68.us-west-2.compute.internal"}, FirstTimestamp:time.Date(2023, time.February, 23, 15, 42, 42, 438082073, time.Local), LastTimestamp:time.Date(2023, time.February, 23, 15, 42, 48, 654036910, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ip-10-0-136-68.us-west-2.compute.internal.17467e7761248e19" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 23 15:42:48 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:48.657742 2125 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-10-0-136-68.us-west-2.compute.internal.17467e776124b041", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-10-0-136-68.us-west-2.compute.internal", UID:"ip-10-0-136-68.us-west-2.compute.internal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node ip-10-0-136-68.us-west-2.compute.internal status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"ip-10-0-136-68.us-west-2.compute.internal"}, FirstTimestamp:time.Date(2023, time.February, 23, 15, 42, 42, 438090817, time.Local), LastTimestamp:time.Date(2023, time.February, 23, 15, 42, 48, 654047524, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ip-10-0-136-68.us-west-2.compute.internal.17467e776124b041" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 23 15:42:48 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:48.680423 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:48 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:48.780992 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:48 ip-10-0-136-68 kubenswrapper[2125]: W0223 15:42:48.782403 2125 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 23 15:42:48 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:48.782438 2125 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 23 15:42:48 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:48.885656 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:48 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:48.988382 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:49 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:49.091283 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:49 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:49.196120 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:49 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:49.197505 2125 csi_plugin.go:1032] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-68.us-west-2.compute.internal" is forbidden: User "system:anonymous" cannot get 
resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 23 15:42:49 ip-10-0-136-68 kubenswrapper[2125]: W0223 15:42:49.288470 2125 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 23 15:42:49 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:49.288495 2125 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 23 15:42:49 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:49.296188 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:49 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:49.400882 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:49 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:49.501407 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:49 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:49.605957 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:49 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:49.706385 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:49 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:49.807019 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:49 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:49.908065 2125 kubelet.go:2471] "Error getting node" err="node 
\"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:50 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:50.008780 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:50 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:50.109104 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:50 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:50.194563 2125 csi_plugin.go:1032] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-68.us-west-2.compute.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 23 15:42:50 ip-10-0-136-68 kubenswrapper[2125]: W0223 15:42:50.208181 2125 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "ip-10-0-136-68.us-west-2.compute.internal" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 23 15:42:50 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:50.208208 2125 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "ip-10-0-136-68.us-west-2.compute.internal" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 23 15:42:50 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:50.209234 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:50 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:50.309580 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:50 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:50.409719 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" 
not found" Feb 23 15:42:50 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:50.510177 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:50 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:50.610485 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:50 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:50.710936 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:50 ip-10-0-136-68 systemd[1]: NetworkManager-dispatcher.service: Succeeded. Feb 23 15:42:50 ip-10-0-136-68 systemd[1]: NetworkManager-dispatcher.service: Consumed 845ms CPU time Feb 23 15:42:50 ip-10-0-136-68 kubenswrapper[2125]: W0223 15:42:50.788358 2125 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 23 15:42:50 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:50.788390 2125 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 23 15:42:50 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:50.811621 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:50 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:50.911936 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:51 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:51.012325 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not 
found" Feb 23 15:42:51 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:51.112683 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:51 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:51.194383 2125 csi_plugin.go:1032] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-136-68.us-west-2.compute.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 23 15:42:51 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:51.212960 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:51 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:51.313311 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:51 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:51.413877 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:51 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:51.514336 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:51 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:51.614716 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:51 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:51.715164 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:51 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:51.815724 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:51 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:51.916138 2125 kubelet.go:2471] "Error getting node" err="node 
\"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:52 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:52.016474 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:52 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:52.113810 2125 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 23 15:42:52 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:52.116953 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:52 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:52.203130 2125 nodeinfomanager.go:401] Failed to publish CSINode: nodes "ip-10-0-136-68.us-west-2.compute.internal" not found Feb 23 15:42:52 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:52.217407 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:52 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:52.217813 2125 nodeinfomanager.go:401] Failed to publish CSINode: nodes "ip-10-0-136-68.us-west-2.compute.internal" not found Feb 23 15:42:52 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:52.275764 2125 nodeinfomanager.go:401] Failed to publish CSINode: nodes "ip-10-0-136-68.us-west-2.compute.internal" not found Feb 23 15:42:52 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:52.318029 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:52 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:52.418703 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:52 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:52.460279 2125 eviction_manager.go:256] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 
15:42:52 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:52.461568 2125 kubelet.go:2396] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 23 15:42:52 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:52.519453 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:52 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:52.546392 2125 nodeinfomanager.go:401] Failed to publish CSINode: nodes "ip-10-0-136-68.us-west-2.compute.internal" not found Feb 23 15:42:52 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:52.546411 2125 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-10-0-136-68.us-west-2.compute.internal" not found Feb 23 15:42:52 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:52.566396 2125 nodeinfomanager.go:401] Failed to publish CSINode: nodes "ip-10-0-136-68.us-west-2.compute.internal" not found Feb 23 15:42:52 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:52.581371 2125 nodeinfomanager.go:401] Failed to publish CSINode: nodes "ip-10-0-136-68.us-west-2.compute.internal" not found Feb 23 15:42:52 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:52.620215 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:52 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:52.641336 2125 nodeinfomanager.go:401] Failed to publish CSINode: nodes "ip-10-0-136-68.us-west-2.compute.internal" not found Feb 23 15:42:52 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:52.720531 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:52 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:52.821387 2125 kubelet.go:2471] 
"Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:52 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:52.909516 2125 nodeinfomanager.go:401] Failed to publish CSINode: nodes "ip-10-0-136-68.us-west-2.compute.internal" not found Feb 23 15:42:52 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:52.909540 2125 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-10-0-136-68.us-west-2.compute.internal" not found Feb 23 15:42:52 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:52.921627 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:53 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:53.015388 2125 nodeinfomanager.go:401] Failed to publish CSINode: nodes "ip-10-0-136-68.us-west-2.compute.internal" not found Feb 23 15:42:53 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:53.022531 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:53 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:53.032978 2125 nodeinfomanager.go:401] Failed to publish CSINode: nodes "ip-10-0-136-68.us-west-2.compute.internal" not found Feb 23 15:42:53 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:53.090919 2125 nodeinfomanager.go:401] Failed to publish CSINode: nodes "ip-10-0-136-68.us-west-2.compute.internal" not found Feb 23 15:42:53 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:53.123172 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:53 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:53.223872 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:53 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:53.324379 2125 kubelet.go:2471] "Error getting node" err="node 
\"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:53 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:53.368205 2125 nodeinfomanager.go:401] Failed to publish CSINode: nodes "ip-10-0-136-68.us-west-2.compute.internal" not found Feb 23 15:42:53 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:53.368220 2125 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-10-0-136-68.us-west-2.compute.internal" not found Feb 23 15:42:53 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:53.425366 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:53 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:53.526327 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:53 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:53.626646 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:53 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:53.726832 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:53 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:53.827140 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:53 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:53.927613 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:53 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:53.947577 2125 nodeinfomanager.go:401] Failed to publish CSINode: nodes "ip-10-0-136-68.us-west-2.compute.internal" not found Feb 23 15:42:53 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:53.984471 2125 nodeinfomanager.go:401] Failed to publish CSINode: nodes "ip-10-0-136-68.us-west-2.compute.internal" not 
found Feb 23 15:42:54 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:54.027918 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:54 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:54.043782 2125 nodeinfomanager.go:401] Failed to publish CSINode: nodes "ip-10-0-136-68.us-west-2.compute.internal" not found Feb 23 15:42:54 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:54.128707 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:54 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:54.229391 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:54 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:54.329842 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:54 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:54.373272 2125 nodeinfomanager.go:401] Failed to publish CSINode: nodes "ip-10-0-136-68.us-west-2.compute.internal" not found Feb 23 15:42:54 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:54.373311 2125 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-10-0-136-68.us-west-2.compute.internal" not found Feb 23 15:42:54 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:54.430512 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:54 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:54.530663 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:54 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:54.631573 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:54 ip-10-0-136-68 kubenswrapper[2125]: 
E0223 15:42:54.732097 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:54 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:54.833098 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:54 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:54.933481 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:55 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:55.034002 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:55 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:55.056130 2125 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Feb 23 15:42:55 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:55.056156 2125 kubelet_node_status.go:424] "Adding label from cloud provider" labelKey="beta.kubernetes.io/instance-type" labelValue="m6i.xlarge" Feb 23 15:42:55 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:55.056164 2125 kubelet_node_status.go:426] "Adding node label from cloud provider" labelKey="node.kubernetes.io/instance-type" labelValue="m6i.xlarge" Feb 23 15:42:55 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:55.056173 2125 kubelet_node_status.go:437] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/zone" labelValue="us-west-2a" Feb 23 15:42:55 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:55.056179 2125 kubelet_node_status.go:439] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/zone" labelValue="us-west-2a" Feb 23 15:42:55 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:55.056186 2125 kubelet_node_status.go:443] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/region" labelValue="us-west-2" Feb 23 15:42:55 ip-10-0-136-68 kubenswrapper[2125]: I0223 
15:42:55.056192 2125 kubelet_node_status.go:445] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/region" labelValue="us-west-2" Feb 23 15:42:55 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:55.056654 2125 kubelet_node_status.go:590] "Recording event message for node" node="ip-10-0-136-68.us-west-2.compute.internal" event="NodeHasSufficientMemory" Feb 23 15:42:55 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:55.056682 2125 kubelet_node_status.go:590] "Recording event message for node" node="ip-10-0-136-68.us-west-2.compute.internal" event="NodeHasNoDiskPressure" Feb 23 15:42:55 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:55.056694 2125 kubelet_node_status.go:590] "Recording event message for node" node="ip-10-0-136-68.us-west-2.compute.internal" event="NodeHasSufficientPID" Feb 23 15:42:55 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:55.056714 2125 kubelet_node_status.go:72] "Attempting to register node" node="ip-10-0-136-68.us-west-2.compute.internal" Feb 23 15:42:55 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:55.134487 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:55 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:55.182826 2125 kubelet_node_status.go:75] "Successfully registered node" node="ip-10-0-136-68.us-west-2.compute.internal" Feb 23 15:42:55 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:55.235272 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:55 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:55.336140 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:55 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:55.436541 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:55 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:55.537232 
2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:55 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:55.637689 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:55 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:55.738013 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:55 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:55.838397 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:55 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:55.939451 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:56 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:56.040423 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:56 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:56.141448 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:56 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:56.206840 2125 certificate_manager.go:270] kubernetes.io/kubelet-serving: Rotating certificates Feb 23 15:42:56 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:56.242475 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:56 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:56.343532 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:56 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:56.344201 2125 log.go:198] http: TLS handshake error from 10.0.216.117:42186: no serving certificate available for the kubelet Feb 23 15:42:56 ip-10-0-136-68 kubenswrapper[2125]: I0223 
15:42:56.357096 2125 csr.go:261] certificate signing request csr-n489d is approved, waiting to be issued Feb 23 15:42:56 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:56.367897 2125 csr.go:257] certificate signing request csr-n489d is issued Feb 23 15:42:56 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:56.444054 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:56 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:56.544580 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:56 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:56.645037 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:56 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:56.745467 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:56 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:56.845873 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:56 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:56.946337 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:57 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:57.047323 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:57 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:57.148048 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:57 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:57.248823 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:57 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:57.349280 2125 kubelet.go:2471] 
"Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:57 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:57.369418 2125 certificate_manager.go:270] kubernetes.io/kubelet-serving: Certificate expiration is 2023-02-24 15:23:10 +0000 UTC, rotation deadline is 2023-02-24 08:19:33.994388383 +0000 UTC Feb 23 15:42:57 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:57.369439 2125 certificate_manager.go:270] kubernetes.io/kubelet-serving: Waiting 16h36m36.624950514s for next certificate rotation Feb 23 15:42:57 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:57.449666 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:57 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:57.461961 2125 kubelet.go:2396] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 23 15:42:57 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:57.550506 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:57 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:57.651551 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:57 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:57.751814 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:57 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:57.852211 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:57 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:57.953232 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:58 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:58.053754 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:58 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:58.154263 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:58 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:58.254743 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:58 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:58.355386 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:58 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:58.370534 2125 certificate_manager.go:270] kubernetes.io/kubelet-serving: Certificate expiration is 2023-02-24 15:23:10 +0000 UTC, rotation deadline is 2023-02-24 09:58:50.011824492 +0000 UTC Feb 23 15:42:58 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:58.370552 2125 
certificate_manager.go:270] kubernetes.io/kubelet-serving: Waiting 18h15m51.641273959s for next certificate rotation Feb 23 15:42:58 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:58.455910 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:58 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:58.556964 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:58 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:58.657053 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:58 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:58.757418 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:58 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:58.858387 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:58 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:58.959381 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:59.059747 2125 kubelet.go:2471] "Error getting node" err="node \"ip-10-0-136-68.us-west-2.compute.internal\" not found" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.203570 2125 apiserver.go:52] "Watching apiserver" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.209179 2125 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-multus/multus-additional-cni-plugins-p9nj2 openshift-ovn-kubernetes/ovnkube-node-qc5bl openshift-dns/node-resolver-pgc9j openshift-multus/network-metrics-daemon-5hc5d openshift-cluster-node-tuning-operator/tuned-bjpgx openshift-network-diagnostics/network-check-target-b2mxx openshift-image-registry/node-ca-wdtzq 
openshift-multus/multus-gr76d openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4 openshift-machine-config-operator/machine-config-daemon-d5wlc] Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.209204 2125 topology_manager.go:205] "Topology Admit Handler" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.209275 2125 topology_manager.go:205] "Topology Admit Handler" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.209337 2125 topology_manager.go:205] "Topology Admit Handler" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.209374 2125 topology_manager.go:205] "Topology Admit Handler" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.209916 2125 topology_manager.go:205] "Topology Admit Handler" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.210001 2125 topology_manager.go:205] "Topology Admit Handler" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.210340 2125 topology_manager.go:205] "Topology Admit Handler" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.214379 2125 topology_manager.go:205] "Topology Admit Handler" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.215201 2125 topology_manager.go:205] "Topology Admit Handler" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.216383 2125 topology_manager.go:205] "Topology Admit Handler" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:59.215904 2125 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5hc5d" podUID=9cd26ba5-46e4-40b5-81e6-74079153d58d Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:59.216737 2125 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-b2mxx" podUID=5acce570-9f3b-4dab-9fed-169a4c110f8c Feb 23 15:42:59 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-pod507b846f_eb8a_4ca3_9d5f_e4d9f18eca32.slice. Feb 23 15:42:59 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-podffd2cee3_1bae_4941_8015_2b3ade383d85.slice. Feb 23 15:42:59 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-pod409b8d00_553f_43cb_8805_64a5931be933.slice. Feb 23 15:42:59 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-podecd261a9_4d88_4e3d_aa47_803a685b6569.slice. Feb 23 15:42:59 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-podb97e7fe5_fe52_4769_bb52_fc233e05c05e.slice. Feb 23 15:42:59 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-pod6d75c369_887c_42d2_94c1_40cd36f882c3.slice. Feb 23 15:42:59 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-pod2c47bc3e_0247_4d47_80e3_c168262e7976.slice. 
Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:59.259902 2125 watcher.go:152] Failed to watch directory "/sys/fs/cgroup/devices/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c47bc3e_0247_4d47_80e3_c168262e7976.slice": readdirent /sys/fs/cgroup/devices/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c47bc3e_0247_4d47_80e3_c168262e7976.slice: no such file or directory Feb 23 15:42:59 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-pod07267a40_e316_4a88_91a5_11bc06672f23.slice. Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.359632 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd-system\" (UniqueName: \"kubernetes.io/host-path/07267a40-e316-4a88-91a5-11bc06672f23-run-systemd-system\") pod \"tuned-bjpgx\" (UID: \"07267a40-e316-4a88-91a5-11bc06672f23\") " pod="openshift-cluster-node-tuning-operator/tuned-bjpgx" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.359659 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74mgq\" (UniqueName: \"kubernetes.io/projected/507b846f-eb8a-4ca3-9d5f-e4d9f18eca32-kube-api-access-74mgq\") pod \"node-resolver-pgc9j\" (UID: \"507b846f-eb8a-4ca3-9d5f-e4d9f18eca32\") " pod="openshift-dns/node-resolver-pgc9j" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.359680 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cookie-secret\" (UniqueName: \"kubernetes.io/secret/b97e7fe5-fe52-4769-bb52-fc233e05c05e-cookie-secret\") pod \"machine-config-daemon-d5wlc\" (UID: \"b97e7fe5-fe52-4769-bb52-fc233e05c05e\") " pod="openshift-machine-config-operator/machine-config-daemon-d5wlc" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.359705 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serviceca\" (UniqueName: \"kubernetes.io/configmap/ecd261a9-4d88-4e3d-aa47-803a685b6569-serviceca\") pod \"node-ca-wdtzq\" (UID: \"ecd261a9-4d88-4e3d-aa47-803a685b6569\") " pod="openshift-image-registry/node-ca-wdtzq" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.359729 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ffd2cee3-1bae-4941-8015-2b3ade383d85-multus-cni-dir\") pod \"multus-gr76d\" (UID: \"ffd2cee3-1bae-4941-8015-2b3ade383d85\") " pod="openshift-multus/multus-gr76d" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.359746 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr2sj\" (UniqueName: \"kubernetes.io/projected/2c47bc3e-0247-4d47-80e3-c168262e7976-kube-api-access-hr2sj\") pod \"multus-additional-cni-plugins-p9nj2\" (UID: \"2c47bc3e-0247-4d47-80e3-c168262e7976\") " pod="openshift-multus/multus-additional-cni-plugins-p9nj2" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.359760 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ffd2cee3-1bae-4941-8015-2b3ade383d85-system-cni-dir\") pod \"multus-gr76d\" (UID: \"ffd2cee3-1bae-4941-8015-2b3ade383d85\") " pod="openshift-multus/multus-gr76d" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.359776 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2c47bc3e-0247-4d47-80e3-c168262e7976-cni-binary-copy\") pod \"multus-additional-cni-plugins-p9nj2\" (UID: \"2c47bc3e-0247-4d47-80e3-c168262e7976\") " pod="openshift-multus/multus-additional-cni-plugins-p9nj2" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.359791 2125 reconciler.go:357] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2c47bc3e-0247-4d47-80e3-c168262e7976-tuning-conf-dir\") pod \"multus-additional-cni-plugins-p9nj2\" (UID: \"2c47bc3e-0247-4d47-80e3-c168262e7976\") " pod="openshift-multus/multus-additional-cni-plugins-p9nj2" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.359808 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9cd26ba5-46e4-40b5-81e6-74079153d58d-metrics-certs\") pod \"network-metrics-daemon-5hc5d\" (UID: \"9cd26ba5-46e4-40b5-81e6-74079153d58d\") " pod="openshift-multus/network-metrics-daemon-5hc5d" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.359823 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ecd261a9-4d88-4e3d-aa47-803a685b6569-host\") pod \"node-ca-wdtzq\" (UID: \"ecd261a9-4d88-4e3d-aa47-803a685b6569\") " pod="openshift-image-registry/node-ca-wdtzq" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.359839 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2c47bc3e-0247-4d47-80e3-c168262e7976-cnibin\") pod \"multus-additional-cni-plugins-p9nj2\" (UID: \"2c47bc3e-0247-4d47-80e3-c168262e7976\") " pod="openshift-multus/multus-additional-cni-plugins-p9nj2" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.359854 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ffd2cee3-1bae-4941-8015-2b3ade383d85-os-release\") pod \"multus-gr76d\" (UID: \"ffd2cee3-1bae-4941-8015-2b3ade383d85\") " pod="openshift-multus/multus-gr76d" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.359870 
2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/6d75c369-887c-42d2-94c1-40cd36f882c3-device-dir\") pod \"aws-ebs-csi-driver-node-5hqp4\" (UID: \"6d75c369-887c-42d2-94c1-40cd36f882c3\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.359886 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b97e7fe5-fe52-4769-bb52-fc233e05c05e-rootfs\") pod \"machine-config-daemon-d5wlc\" (UID: \"b97e7fe5-fe52-4769-bb52-fc233e05c05e\") " pod="openshift-machine-config-operator/machine-config-daemon-d5wlc" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.359901 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-run-openvswitch\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.359918 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-run-ovn\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.359935 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-tuned-profiles-data\" (UniqueName: \"kubernetes.io/configmap/07267a40-e316-4a88-91a5-11bc06672f23-var-lib-tuned-profiles-data\") pod \"tuned-bjpgx\" (UID: \"07267a40-e316-4a88-91a5-11bc06672f23\") " 
pod="openshift-cluster-node-tuning-operator/tuned-bjpgx" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.359950 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-host-cni-netd\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.359968 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-dbus\" (UniqueName: \"kubernetes.io/host-path/07267a40-e316-4a88-91a5-11bc06672f23-var-run-dbus\") pod \"tuned-bjpgx\" (UID: \"07267a40-e316-4a88-91a5-11bc06672f23\") " pod="openshift-cluster-node-tuning-operator/tuned-bjpgx" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.359983 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqpfc\" (UniqueName: \"kubernetes.io/projected/ecd261a9-4d88-4e3d-aa47-803a685b6569-kube-api-access-jqpfc\") pod \"node-ca-wdtzq\" (UID: \"ecd261a9-4d88-4e3d-aa47-803a685b6569\") " pod="openshift-image-registry/node-ca-wdtzq" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.359999 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jwlz\" (UniqueName: \"kubernetes.io/projected/9cd26ba5-46e4-40b5-81e6-74079153d58d-kube-api-access-2jwlz\") pod \"network-metrics-daemon-5hc5d\" (UID: \"9cd26ba5-46e4-40b5-81e6-74079153d58d\") " pod="openshift-multus/network-metrics-daemon-5hc5d" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.360014 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-node-log\") pod 
\"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.360027 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-ca\" (UniqueName: \"kubernetes.io/configmap/409b8d00-553f-43cb-8805-64a5931be933-ovn-ca\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.360045 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ffd2cee3-1bae-4941-8015-2b3ade383d85-cnibin\") pod \"multus-gr76d\" (UID: \"ffd2cee3-1bae-4941-8015-2b3ade383d85\") " pod="openshift-multus/multus-gr76d" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.360061 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-dir\" (UniqueName: \"kubernetes.io/host-path/6d75c369-887c-42d2-94c1-40cd36f882c3-plugin-dir\") pod \"aws-ebs-csi-driver-node-5hqp4\" (UID: \"6d75c369-887c-42d2-94c1-40cd36f882c3\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.360076 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/07267a40-e316-4a88-91a5-11bc06672f23-lib-modules\") pod \"tuned-bjpgx\" (UID: \"07267a40-e316-4a88-91a5-11bc06672f23\") " pod="openshift-cluster-node-tuning-operator/tuned-bjpgx" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.360091 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/409b8d00-553f-43cb-8805-64a5931be933-env-overrides\") pod 
\"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.360104 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/409b8d00-553f-43cb-8805-64a5931be933-ovn-node-metrics-cert\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.360118 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b97e7fe5-fe52-4769-bb52-fc233e05c05e-proxy-tls\") pod \"machine-config-daemon-d5wlc\" (UID: \"b97e7fe5-fe52-4769-bb52-fc233e05c05e\") " pod="openshift-machine-config-operator/machine-config-daemon-d5wlc" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.360133 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2c47bc3e-0247-4d47-80e3-c168262e7976-system-cni-dir\") pod \"multus-additional-cni-plugins-p9nj2\" (UID: \"2c47bc3e-0247-4d47-80e3-c168262e7976\") " pod="openshift-multus/multus-additional-cni-plugins-p9nj2" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.360160 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-systemd-units\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.360175 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" 
(UniqueName: \"kubernetes.io/host-path/6d75c369-887c-42d2-94c1-40cd36f882c3-registration-dir\") pod \"aws-ebs-csi-driver-node-5hqp4\" (UID: \"6d75c369-887c-42d2-94c1-40cd36f882c3\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.360190 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-var-lib-openvswitch\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.360207 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6d75c369-887c-42d2-94c1-40cd36f882c3-kubelet-dir\") pod \"aws-ebs-csi-driver-node-5hqp4\" (UID: \"6d75c369-887c-42d2-94c1-40cd36f882c3\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.360222 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/507b846f-eb8a-4ca3-9d5f-e4d9f18eca32-hosts-file\") pod \"node-resolver-pgc9j\" (UID: \"507b846f-eb8a-4ca3-9d5f-e4d9f18eca32\") " pod="openshift-dns/node-resolver-pgc9j" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.360237 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-host-run-netns\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.360252 2125 reconciler.go:357] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"non-standard-root-system-trust-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d75c369-887c-42d2-94c1-40cd36f882c3-non-standard-root-system-trust-ca-bundle\") pod \"aws-ebs-csi-driver-node-5hqp4\" (UID: \"6d75c369-887c-42d2-94c1-40cd36f882c3\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.360267 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4glw\" (UniqueName: \"kubernetes.io/projected/ffd2cee3-1bae-4941-8015-2b3ade383d85-kube-api-access-v4glw\") pod \"multus-gr76d\" (UID: \"ffd2cee3-1bae-4941-8015-2b3ade383d85\") " pod="openshift-multus/multus-gr76d" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.360280 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/07267a40-e316-4a88-91a5-11bc06672f23-host\") pod \"tuned-bjpgx\" (UID: \"07267a40-e316-4a88-91a5-11bc06672f23\") " pod="openshift-cluster-node-tuning-operator/tuned-bjpgx" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.360315 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-796v8\" (UniqueName: \"kubernetes.io/projected/07267a40-e316-4a88-91a5-11bc06672f23-kube-api-access-796v8\") pod \"tuned-bjpgx\" (UID: \"07267a40-e316-4a88-91a5-11bc06672f23\") " pod="openshift-cluster-node-tuning-operator/tuned-bjpgx" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.360330 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-host-run-ovn-kubernetes\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.360346 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ffd2cee3-1bae-4941-8015-2b3ade383d85-cni-binary-copy\") pod \"multus-gr76d\" (UID: \"ffd2cee3-1bae-4941-8015-2b3ade383d85\") " pod="openshift-multus/multus-gr76d" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.360361 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/409b8d00-553f-43cb-8805-64a5931be933-ovnkube-config\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.360376 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc\" (UniqueName: \"kubernetes.io/host-path/07267a40-e316-4a88-91a5-11bc06672f23-etc\") pod \"tuned-bjpgx\" (UID: \"07267a40-e316-4a88-91a5-11bc06672f23\") " pod="openshift-cluster-node-tuning-operator/tuned-bjpgx" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.360391 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nhww\" (UniqueName: \"kubernetes.io/projected/5acce570-9f3b-4dab-9fed-169a4c110f8c-kube-api-access-7nhww\") pod \"network-check-target-b2mxx\" (UID: \"5acce570-9f3b-4dab-9fed-169a4c110f8c\") " pod="openshift-network-diagnostics/network-check-target-b2mxx" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.360406 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-host-slash\") pod \"ovnkube-node-qc5bl\" (UID: 
\"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.360420 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-log-socket\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.360434 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9xlt\" (UniqueName: \"kubernetes.io/projected/409b8d00-553f-43cb-8805-64a5931be933-kube-api-access-k9xlt\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.360449 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2c47bc3e-0247-4d47-80e3-c168262e7976-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-p9nj2\" (UID: \"2c47bc3e-0247-4d47-80e3-c168262e7976\") " pod="openshift-multus/multus-additional-cni-plugins-p9nj2" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.360465 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-etc-openvswitch\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.360477 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-cert\" (UniqueName: 
\"kubernetes.io/secret/409b8d00-553f-43cb-8805-64a5931be933-ovn-cert\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.360493 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhxvk\" (UniqueName: \"kubernetes.io/projected/6d75c369-887c-42d2-94c1-40cd36f882c3-kube-api-access-xhxvk\") pod \"aws-ebs-csi-driver-node-5hqp4\" (UID: \"6d75c369-887c-42d2-94c1-40cd36f882c3\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.360508 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/07267a40-e316-4a88-91a5-11bc06672f23-sys\") pod \"tuned-bjpgx\" (UID: \"07267a40-e316-4a88-91a5-11bc06672f23\") " pod="openshift-cluster-node-tuning-operator/tuned-bjpgx" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.360523 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m29j2\" (UniqueName: \"kubernetes.io/projected/b97e7fe5-fe52-4769-bb52-fc233e05c05e-kube-api-access-m29j2\") pod \"machine-config-daemon-d5wlc\" (UID: \"b97e7fe5-fe52-4769-bb52-fc233e05c05e\") " pod="openshift-machine-config-operator/machine-config-daemon-d5wlc" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.360537 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2c47bc3e-0247-4d47-80e3-c168262e7976-os-release\") pod \"multus-additional-cni-plugins-p9nj2\" (UID: \"2c47bc3e-0247-4d47-80e3-c168262e7976\") " pod="openshift-multus/multus-additional-cni-plugins-p9nj2" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.360551 2125 
reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-host-cni-bin\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.360565 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.360570 2125 reconciler.go:169] "Reconciler: start to sync state" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.461634 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ffd2cee3-1bae-4941-8015-2b3ade383d85-system-cni-dir\") pod \"multus-gr76d\" (UID: \"ffd2cee3-1bae-4941-8015-2b3ade383d85\") " pod="openshift-multus/multus-gr76d" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.461673 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2c47bc3e-0247-4d47-80e3-c168262e7976-cni-binary-copy\") pod \"multus-additional-cni-plugins-p9nj2\" (UID: \"2c47bc3e-0247-4d47-80e3-c168262e7976\") " pod="openshift-multus/multus-additional-cni-plugins-p9nj2" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.461694 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-hr2sj\" (UniqueName: \"kubernetes.io/projected/2c47bc3e-0247-4d47-80e3-c168262e7976-kube-api-access-hr2sj\") pod 
\"multus-additional-cni-plugins-p9nj2\" (UID: \"2c47bc3e-0247-4d47-80e3-c168262e7976\") " pod="openshift-multus/multus-additional-cni-plugins-p9nj2" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.461719 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ecd261a9-4d88-4e3d-aa47-803a685b6569-host\") pod \"node-ca-wdtzq\" (UID: \"ecd261a9-4d88-4e3d-aa47-803a685b6569\") " pod="openshift-image-registry/node-ca-wdtzq" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.461745 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2c47bc3e-0247-4d47-80e3-c168262e7976-cnibin\") pod \"multus-additional-cni-plugins-p9nj2\" (UID: \"2c47bc3e-0247-4d47-80e3-c168262e7976\") " pod="openshift-multus/multus-additional-cni-plugins-p9nj2" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.461758 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ffd2cee3-1bae-4941-8015-2b3ade383d85-system-cni-dir\") pod \"multus-gr76d\" (UID: \"ffd2cee3-1bae-4941-8015-2b3ade383d85\") " pod="openshift-multus/multus-gr76d" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.461772 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2c47bc3e-0247-4d47-80e3-c168262e7976-tuning-conf-dir\") pod \"multus-additional-cni-plugins-p9nj2\" (UID: \"2c47bc3e-0247-4d47-80e3-c168262e7976\") " pod="openshift-multus/multus-additional-cni-plugins-p9nj2" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.461797 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9cd26ba5-46e4-40b5-81e6-74079153d58d-metrics-certs\") pod \"network-metrics-daemon-5hc5d\" (UID: 
\"9cd26ba5-46e4-40b5-81e6-74079153d58d\") " pod="openshift-multus/network-metrics-daemon-5hc5d" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.461820 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ecd261a9-4d88-4e3d-aa47-803a685b6569-host\") pod \"node-ca-wdtzq\" (UID: \"ecd261a9-4d88-4e3d-aa47-803a685b6569\") " pod="openshift-image-registry/node-ca-wdtzq" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.461824 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b97e7fe5-fe52-4769-bb52-fc233e05c05e-rootfs\") pod \"machine-config-daemon-d5wlc\" (UID: \"b97e7fe5-fe52-4769-bb52-fc233e05c05e\") " pod="openshift-machine-config-operator/machine-config-daemon-d5wlc" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.461854 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-run-openvswitch\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.461861 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b97e7fe5-fe52-4769-bb52-fc233e05c05e-rootfs\") pod \"machine-config-daemon-d5wlc\" (UID: \"b97e7fe5-fe52-4769-bb52-fc233e05c05e\") " pod="openshift-machine-config-operator/machine-config-daemon-d5wlc" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.461880 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-run-ovn\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.461900 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2c47bc3e-0247-4d47-80e3-c168262e7976-cnibin\") pod \"multus-additional-cni-plugins-p9nj2\" (UID: \"2c47bc3e-0247-4d47-80e3-c168262e7976\") " pod="openshift-multus/multus-additional-cni-plugins-p9nj2" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.461906 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ffd2cee3-1bae-4941-8015-2b3ade383d85-os-release\") pod \"multus-gr76d\" (UID: \"ffd2cee3-1bae-4941-8015-2b3ade383d85\") " pod="openshift-multus/multus-gr76d" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.461932 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/6d75c369-887c-42d2-94c1-40cd36f882c3-device-dir\") pod \"aws-ebs-csi-driver-node-5hqp4\" (UID: \"6d75c369-887c-42d2-94c1-40cd36f882c3\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.461956 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-host-cni-netd\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.461980 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"var-run-dbus\" (UniqueName: \"kubernetes.io/host-path/07267a40-e316-4a88-91a5-11bc06672f23-var-run-dbus\") pod \"tuned-bjpgx\" (UID: \"07267a40-e316-4a88-91a5-11bc06672f23\") " pod="openshift-cluster-node-tuning-operator/tuned-bjpgx" Feb 23 15:42:59 
ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.461999 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"var-lib-tuned-profiles-data\" (UniqueName: \"kubernetes.io/configmap/07267a40-e316-4a88-91a5-11bc06672f23-var-lib-tuned-profiles-data\") pod \"tuned-bjpgx\" (UID: \"07267a40-e316-4a88-91a5-11bc06672f23\") " pod="openshift-cluster-node-tuning-operator/tuned-bjpgx" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.462014 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-node-log\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.462028 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"ovn-ca\" (UniqueName: \"kubernetes.io/configmap/409b8d00-553f-43cb-8805-64a5931be933-ovn-ca\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.462045 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-jqpfc\" (UniqueName: \"kubernetes.io/projected/ecd261a9-4d88-4e3d-aa47-803a685b6569-kube-api-access-jqpfc\") pod \"node-ca-wdtzq\" (UID: \"ecd261a9-4d88-4e3d-aa47-803a685b6569\") " pod="openshift-image-registry/node-ca-wdtzq" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.462059 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-2jwlz\" (UniqueName: \"kubernetes.io/projected/9cd26ba5-46e4-40b5-81e6-74079153d58d-kube-api-access-2jwlz\") pod \"network-metrics-daemon-5hc5d\" (UID: \"9cd26ba5-46e4-40b5-81e6-74079153d58d\") " pod="openshift-multus/network-metrics-daemon-5hc5d" Feb 23 15:42:59 ip-10-0-136-68 
kubenswrapper[2125]: I0223 15:42:59.462073 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/07267a40-e316-4a88-91a5-11bc06672f23-lib-modules\") pod \"tuned-bjpgx\" (UID: \"07267a40-e316-4a88-91a5-11bc06672f23\") " pod="openshift-cluster-node-tuning-operator/tuned-bjpgx" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.462095 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/409b8d00-553f-43cb-8805-64a5931be933-env-overrides\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.462120 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/409b8d00-553f-43cb-8805-64a5931be933-ovn-node-metrics-cert\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.462142 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ffd2cee3-1bae-4941-8015-2b3ade383d85-cnibin\") pod \"multus-gr76d\" (UID: \"ffd2cee3-1bae-4941-8015-2b3ade383d85\") " pod="openshift-multus/multus-gr76d" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.462173 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"plugin-dir\" (UniqueName: \"kubernetes.io/host-path/6d75c369-887c-42d2-94c1-40cd36f882c3-plugin-dir\") pod \"aws-ebs-csi-driver-node-5hqp4\" (UID: \"6d75c369-887c-42d2-94c1-40cd36f882c3\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.462199 2125 reconciler.go:269] 
"operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b97e7fe5-fe52-4769-bb52-fc233e05c05e-proxy-tls\") pod \"machine-config-daemon-d5wlc\" (UID: \"b97e7fe5-fe52-4769-bb52-fc233e05c05e\") " pod="openshift-machine-config-operator/machine-config-daemon-d5wlc" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.462222 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2c47bc3e-0247-4d47-80e3-c168262e7976-system-cni-dir\") pod \"multus-additional-cni-plugins-p9nj2\" (UID: \"2c47bc3e-0247-4d47-80e3-c168262e7976\") " pod="openshift-multus/multus-additional-cni-plugins-p9nj2" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.462239 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-systemd-units\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.462257 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-var-lib-openvswitch\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.462272 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6d75c369-887c-42d2-94c1-40cd36f882c3-kubelet-dir\") pod \"aws-ebs-csi-driver-node-5hqp4\" (UID: \"6d75c369-887c-42d2-94c1-40cd36f882c3\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.462327 2125 
reconciler.go:269] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6d75c369-887c-42d2-94c1-40cd36f882c3-registration-dir\") pod \"aws-ebs-csi-driver-node-5hqp4\" (UID: \"6d75c369-887c-42d2-94c1-40cd36f882c3\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.462352 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-host-run-netns\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.462375 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"non-standard-root-system-trust-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d75c369-887c-42d2-94c1-40cd36f882c3-non-standard-root-system-trust-ca-bundle\") pod \"aws-ebs-csi-driver-node-5hqp4\" (UID: \"6d75c369-887c-42d2-94c1-40cd36f882c3\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.462391 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/507b846f-eb8a-4ca3-9d5f-e4d9f18eca32-hosts-file\") pod \"node-resolver-pgc9j\" (UID: \"507b846f-eb8a-4ca3-9d5f-e4d9f18eca32\") " pod="openshift-dns/node-resolver-pgc9j" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.462409 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ffd2cee3-1bae-4941-8015-2b3ade383d85-cnibin\") pod \"multus-gr76d\" (UID: \"ffd2cee3-1bae-4941-8015-2b3ade383d85\") " pod="openshift-multus/multus-gr76d" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.462413 2125 
reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-796v8\" (UniqueName: \"kubernetes.io/projected/07267a40-e316-4a88-91a5-11bc06672f23-kube-api-access-796v8\") pod \"tuned-bjpgx\" (UID: \"07267a40-e316-4a88-91a5-11bc06672f23\") " pod="openshift-cluster-node-tuning-operator/tuned-bjpgx" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.462465 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-host-run-ovn-kubernetes\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.462495 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ffd2cee3-1bae-4941-8015-2b3ade383d85-cni-binary-copy\") pod \"multus-gr76d\" (UID: \"ffd2cee3-1bae-4941-8015-2b3ade383d85\") " pod="openshift-multus/multus-gr76d" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.462527 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-v4glw\" (UniqueName: \"kubernetes.io/projected/ffd2cee3-1bae-4941-8015-2b3ade383d85-kube-api-access-v4glw\") pod \"multus-gr76d\" (UID: \"ffd2cee3-1bae-4941-8015-2b3ade383d85\") " pod="openshift-multus/multus-gr76d" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.462559 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/07267a40-e316-4a88-91a5-11bc06672f23-host\") pod \"tuned-bjpgx\" (UID: \"07267a40-e316-4a88-91a5-11bc06672f23\") " pod="openshift-cluster-node-tuning-operator/tuned-bjpgx" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.462591 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume 
\"kube-api-access-7nhww\" (UniqueName: \"kubernetes.io/projected/5acce570-9f3b-4dab-9fed-169a4c110f8c-kube-api-access-7nhww\") pod \"network-check-target-b2mxx\" (UID: \"5acce570-9f3b-4dab-9fed-169a4c110f8c\") " pod="openshift-network-diagnostics/network-check-target-b2mxx" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.462600 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-run-openvswitch\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.462618 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-host-slash\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.462635 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-log-socket\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.462677 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ffd2cee3-1bae-4941-8015-2b3ade383d85-os-release\") pod \"multus-gr76d\" (UID: \"ffd2cee3-1bae-4941-8015-2b3ade383d85\") " pod="openshift-multus/multus-gr76d" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.462716 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/409b8d00-553f-43cb-8805-64a5931be933-ovnkube-config\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.462744 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"etc\" (UniqueName: \"kubernetes.io/host-path/07267a40-e316-4a88-91a5-11bc06672f23-etc\") pod \"tuned-bjpgx\" (UID: \"07267a40-e316-4a88-91a5-11bc06672f23\") " pod="openshift-cluster-node-tuning-operator/tuned-bjpgx" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.462749 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-host-cni-netd\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.462771 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-k9xlt\" (UniqueName: \"kubernetes.io/projected/409b8d00-553f-43cb-8805-64a5931be933-kube-api-access-k9xlt\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.462793 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-etc-openvswitch\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.462801 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"var-run-dbus\" (UniqueName: 
\"kubernetes.io/host-path/07267a40-e316-4a88-91a5-11bc06672f23-var-run-dbus\") pod \"tuned-bjpgx\" (UID: \"07267a40-e316-4a88-91a5-11bc06672f23\") " pod="openshift-cluster-node-tuning-operator/tuned-bjpgx" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.462810 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"ovn-cert\" (UniqueName: \"kubernetes.io/secret/409b8d00-553f-43cb-8805-64a5931be933-ovn-cert\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.462831 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2c47bc3e-0247-4d47-80e3-c168262e7976-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-p9nj2\" (UID: \"2c47bc3e-0247-4d47-80e3-c168262e7976\") " pod="openshift-multus/multus-additional-cni-plugins-p9nj2" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.462848 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-m29j2\" (UniqueName: \"kubernetes.io/projected/b97e7fe5-fe52-4769-bb52-fc233e05c05e-kube-api-access-m29j2\") pod \"machine-config-daemon-d5wlc\" (UID: \"b97e7fe5-fe52-4769-bb52-fc233e05c05e\") " pod="openshift-machine-config-operator/machine-config-daemon-d5wlc" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.462864 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2c47bc3e-0247-4d47-80e3-c168262e7976-os-release\") pod \"multus-additional-cni-plugins-p9nj2\" (UID: \"2c47bc3e-0247-4d47-80e3-c168262e7976\") " pod="openshift-multus/multus-additional-cni-plugins-p9nj2" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.462881 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume 
\"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-host-cni-bin\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.462880 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2c47bc3e-0247-4d47-80e3-c168262e7976-tuning-conf-dir\") pod \"multus-additional-cni-plugins-p9nj2\" (UID: \"2c47bc3e-0247-4d47-80e3-c168262e7976\") " pod="openshift-multus/multus-additional-cni-plugins-p9nj2" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.462925 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2c47bc3e-0247-4d47-80e3-c168262e7976-system-cni-dir\") pod \"multus-additional-cni-plugins-p9nj2\" (UID: \"2c47bc3e-0247-4d47-80e3-c168262e7976\") " pod="openshift-multus/multus-additional-cni-plugins-p9nj2" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.462952 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-systemd-units\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.462978 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-var-lib-openvswitch\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.463005 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume 
\"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6d75c369-887c-42d2-94c1-40cd36f882c3-kubelet-dir\") pod \"aws-ebs-csi-driver-node-5hqp4\" (UID: \"6d75c369-887c-42d2-94c1-40cd36f882c3\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.463029 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6d75c369-887c-42d2-94c1-40cd36f882c3-registration-dir\") pod \"aws-ebs-csi-driver-node-5hqp4\" (UID: \"6d75c369-887c-42d2-94c1-40cd36f882c3\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.463051 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-host-run-netns\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.463180 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.463219 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-xhxvk\" (UniqueName: \"kubernetes.io/projected/6d75c369-887c-42d2-94c1-40cd36f882c3-kube-api-access-xhxvk\") pod \"aws-ebs-csi-driver-node-5hqp4\" (UID: \"6d75c369-887c-42d2-94c1-40cd36f882c3\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 
15:42:59.463238 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/07267a40-e316-4a88-91a5-11bc06672f23-sys\") pod \"tuned-bjpgx\" (UID: \"07267a40-e316-4a88-91a5-11bc06672f23\") " pod="openshift-cluster-node-tuning-operator/tuned-bjpgx" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.463258 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"cookie-secret\" (UniqueName: \"kubernetes.io/secret/b97e7fe5-fe52-4769-bb52-fc233e05c05e-cookie-secret\") pod \"machine-config-daemon-d5wlc\" (UID: \"b97e7fe5-fe52-4769-bb52-fc233e05c05e\") " pod="openshift-machine-config-operator/machine-config-daemon-d5wlc" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.463302 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ecd261a9-4d88-4e3d-aa47-803a685b6569-serviceca\") pod \"node-ca-wdtzq\" (UID: \"ecd261a9-4d88-4e3d-aa47-803a685b6569\") " pod="openshift-image-registry/node-ca-wdtzq" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:59.463326 2125 secret.go:192] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.463326 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ffd2cee3-1bae-4941-8015-2b3ade383d85-multus-cni-dir\") pod \"multus-gr76d\" (UID: \"ffd2cee3-1bae-4941-8015-2b3ade383d85\") " pod="openshift-multus/multus-gr76d" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.463369 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"run-systemd-system\" (UniqueName: \"kubernetes.io/host-path/07267a40-e316-4a88-91a5-11bc06672f23-run-systemd-system\") pod \"tuned-bjpgx\" (UID: 
\"07267a40-e316-4a88-91a5-11bc06672f23\") " pod="openshift-cluster-node-tuning-operator/tuned-bjpgx" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:59.463381 2125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9cd26ba5-46e4-40b5-81e6-74079153d58d-metrics-certs podName:9cd26ba5-46e4-40b5-81e6-74079153d58d nodeName:}" failed. No retries permitted until 2023-02-23 15:42:59.963363176 +0000 UTC m=+18.665892856 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9cd26ba5-46e4-40b5-81e6-74079153d58d-metrics-certs") pod "network-metrics-daemon-5hc5d" (UID: "9cd26ba5-46e4-40b5-81e6-74079153d58d") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.463409 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-74mgq\" (UniqueName: \"kubernetes.io/projected/507b846f-eb8a-4ca3-9d5f-e4d9f18eca32-kube-api-access-74mgq\") pod \"node-resolver-pgc9j\" (UID: \"507b846f-eb8a-4ca3-9d5f-e4d9f18eca32\") " pod="openshift-dns/node-resolver-pgc9j" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.463412 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"run-systemd-system\" (UniqueName: \"kubernetes.io/host-path/07267a40-e316-4a88-91a5-11bc06672f23-run-systemd-system\") pod \"tuned-bjpgx\" (UID: \"07267a40-e316-4a88-91a5-11bc06672f23\") " pod="openshift-cluster-node-tuning-operator/tuned-bjpgx" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.463426 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/507b846f-eb8a-4ca3-9d5f-e4d9f18eca32-hosts-file\") pod \"node-resolver-pgc9j\" (UID: \"507b846f-eb8a-4ca3-9d5f-e4d9f18eca32\") " pod="openshift-dns/node-resolver-pgc9j" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 
15:42:59.463449 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-node-log\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.462718 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/6d75c369-887c-42d2-94c1-40cd36f882c3-device-dir\") pod \"aws-ebs-csi-driver-node-5hqp4\" (UID: \"6d75c369-887c-42d2-94c1-40cd36f882c3\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.463618 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"plugin-dir\" (UniqueName: \"kubernetes.io/host-path/6d75c369-887c-42d2-94c1-40cd36f882c3-plugin-dir\") pod \"aws-ebs-csi-driver-node-5hqp4\" (UID: \"6d75c369-887c-42d2-94c1-40cd36f882c3\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.463666 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-host-slash\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.463703 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/07267a40-e316-4a88-91a5-11bc06672f23-host\") pod \"tuned-bjpgx\" (UID: \"07267a40-e316-4a88-91a5-11bc06672f23\") " pod="openshift-cluster-node-tuning-operator/tuned-bjpgx" Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.463762 2125 operation_generator.go:730] 
"MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-log-socket\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl"
Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.462639 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-run-ovn\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl"
Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.463858 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-host-cni-bin\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl"
Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.463859 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2c47bc3e-0247-4d47-80e3-c168262e7976-os-release\") pod \"multus-additional-cni-plugins-p9nj2\" (UID: \"2c47bc3e-0247-4d47-80e3-c168262e7976\") " pod="openshift-multus/multus-additional-cni-plugins-p9nj2"
Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.463889 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"etc\" (UniqueName: \"kubernetes.io/host-path/07267a40-e316-4a88-91a5-11bc06672f23-etc\") pod \"tuned-bjpgx\" (UID: \"07267a40-e316-4a88-91a5-11bc06672f23\") " pod="openshift-cluster-node-tuning-operator/tuned-bjpgx"
Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.464023 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/07267a40-e316-4a88-91a5-11bc06672f23-lib-modules\") pod \"tuned-bjpgx\" (UID: \"07267a40-e316-4a88-91a5-11bc06672f23\") " pod="openshift-cluster-node-tuning-operator/tuned-bjpgx"
Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.464035 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-etc-openvswitch\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl"
Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.464039 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl"
Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.464185 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/07267a40-e316-4a88-91a5-11bc06672f23-sys\") pod \"tuned-bjpgx\" (UID: \"07267a40-e316-4a88-91a5-11bc06672f23\") " pod="openshift-cluster-node-tuning-operator/tuned-bjpgx"
Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.464628 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-host-run-ovn-kubernetes\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl"
Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.464903 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ffd2cee3-1bae-4941-8015-2b3ade383d85-multus-cni-dir\") pod \"multus-gr76d\" (UID: \"ffd2cee3-1bae-4941-8015-2b3ade383d85\") " pod="openshift-multus/multus-gr76d"
Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.467713 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"var-lib-tuned-profiles-data\" (UniqueName: \"kubernetes.io/configmap/07267a40-e316-4a88-91a5-11bc06672f23-var-lib-tuned-profiles-data\") pod \"tuned-bjpgx\" (UID: \"07267a40-e316-4a88-91a5-11bc06672f23\") " pod="openshift-cluster-node-tuning-operator/tuned-bjpgx"
Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.468414 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"non-standard-root-system-trust-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d75c369-887c-42d2-94c1-40cd36f882c3-non-standard-root-system-trust-ca-bundle\") pod \"aws-ebs-csi-driver-node-5hqp4\" (UID: \"6d75c369-887c-42d2-94c1-40cd36f882c3\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4"
Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.468892 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2c47bc3e-0247-4d47-80e3-c168262e7976-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-p9nj2\" (UID: \"2c47bc3e-0247-4d47-80e3-c168262e7976\") " pod="openshift-multus/multus-additional-cni-plugins-p9nj2"
Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.471779 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2c47bc3e-0247-4d47-80e3-c168262e7976-cni-binary-copy\") pod \"multus-additional-cni-plugins-p9nj2\" (UID: \"2c47bc3e-0247-4d47-80e3-c168262e7976\") " pod="openshift-multus/multus-additional-cni-plugins-p9nj2"
Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.472402 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ffd2cee3-1bae-4941-8015-2b3ade383d85-cni-binary-copy\") pod \"multus-gr76d\" (UID: \"ffd2cee3-1bae-4941-8015-2b3ade383d85\") " pod="openshift-multus/multus-gr76d"
Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.473634 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ecd261a9-4d88-4e3d-aa47-803a685b6569-serviceca\") pod \"node-ca-wdtzq\" (UID: \"ecd261a9-4d88-4e3d-aa47-803a685b6569\") " pod="openshift-image-registry/node-ca-wdtzq"
Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.473961 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"cookie-secret\" (UniqueName: \"kubernetes.io/secret/b97e7fe5-fe52-4769-bb52-fc233e05c05e-cookie-secret\") pod \"machine-config-daemon-d5wlc\" (UID: \"b97e7fe5-fe52-4769-bb52-fc233e05c05e\") " pod="openshift-machine-config-operator/machine-config-daemon-d5wlc"
Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.474068 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"ovn-ca\" (UniqueName: \"kubernetes.io/configmap/409b8d00-553f-43cb-8805-64a5931be933-ovn-ca\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl"
Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.474268 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b97e7fe5-fe52-4769-bb52-fc233e05c05e-proxy-tls\") pod \"machine-config-daemon-d5wlc\" (UID: \"b97e7fe5-fe52-4769-bb52-fc233e05c05e\") " pod="openshift-machine-config-operator/machine-config-daemon-d5wlc"
Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.475155 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/409b8d00-553f-43cb-8805-64a5931be933-ovnkube-config\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl"
Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.476653 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"ovn-cert\" (UniqueName: \"kubernetes.io/secret/409b8d00-553f-43cb-8805-64a5931be933-ovn-cert\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl"
Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.477721 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/409b8d00-553f-43cb-8805-64a5931be933-ovn-node-metrics-cert\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl"
Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.478709 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/409b8d00-553f-43cb-8805-64a5931be933-env-overrides\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl"
Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:59.691239 2125 projected.go:290] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:59.691265 2125 projected.go:290] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:59.691319 2125 projected.go:196] Error preparing data for projected volume kube-api-access-7nhww for pod openshift-network-diagnostics/network-check-target-b2mxx: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:59.691387 2125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5acce570-9f3b-4dab-9fed-169a4c110f8c-kube-api-access-7nhww podName:5acce570-9f3b-4dab-9fed-169a4c110f8c nodeName:}" failed. No retries permitted until 2023-02-23 15:43:00.191369657 +0000 UTC m=+18.893899345 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-7nhww" (UniqueName: "kubernetes.io/projected/5acce570-9f3b-4dab-9fed-169a4c110f8c-kube-api-access-7nhww") pod "network-check-target-b2mxx" (UID: "5acce570-9f3b-4dab-9fed-169a4c110f8c") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.692773 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9xlt\" (UniqueName: \"kubernetes.io/projected/409b8d00-553f-43cb-8805-64a5931be933-kube-api-access-k9xlt\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl"
Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.708872 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr2sj\" (UniqueName: \"kubernetes.io/projected/2c47bc3e-0247-4d47-80e3-c168262e7976-kube-api-access-hr2sj\") pod \"multus-additional-cni-plugins-p9nj2\" (UID: \"2c47bc3e-0247-4d47-80e3-c168262e7976\") " pod="openshift-multus/multus-additional-cni-plugins-p9nj2"
Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.708944 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqpfc\" (UniqueName: \"kubernetes.io/projected/ecd261a9-4d88-4e3d-aa47-803a685b6569-kube-api-access-jqpfc\") pod \"node-ca-wdtzq\" (UID: \"ecd261a9-4d88-4e3d-aa47-803a685b6569\") " pod="openshift-image-registry/node-ca-wdtzq"
Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.713162 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-74mgq\" (UniqueName: \"kubernetes.io/projected/507b846f-eb8a-4ca3-9d5f-e4d9f18eca32-kube-api-access-74mgq\") pod \"node-resolver-pgc9j\" (UID: \"507b846f-eb8a-4ca3-9d5f-e4d9f18eca32\") " pod="openshift-dns/node-resolver-pgc9j"
Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.718942 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-m29j2\" (UniqueName: \"kubernetes.io/projected/b97e7fe5-fe52-4769-bb52-fc233e05c05e-kube-api-access-m29j2\") pod \"machine-config-daemon-d5wlc\" (UID: \"b97e7fe5-fe52-4769-bb52-fc233e05c05e\") " pod="openshift-machine-config-operator/machine-config-daemon-d5wlc"
Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.719034 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-796v8\" (UniqueName: \"kubernetes.io/projected/07267a40-e316-4a88-91a5-11bc06672f23-kube-api-access-796v8\") pod \"tuned-bjpgx\" (UID: \"07267a40-e316-4a88-91a5-11bc06672f23\") " pod="openshift-cluster-node-tuning-operator/tuned-bjpgx"
Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.720463 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhxvk\" (UniqueName: \"kubernetes.io/projected/6d75c369-887c-42d2-94c1-40cd36f882c3-kube-api-access-xhxvk\") pod \"aws-ebs-csi-driver-node-5hqp4\" (UID: \"6d75c369-887c-42d2-94c1-40cd36f882c3\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4"
Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.736517 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4glw\" (UniqueName: \"kubernetes.io/projected/ffd2cee3-1bae-4941-8015-2b3ade383d85-kube-api-access-v4glw\") pod \"multus-gr76d\" (UID: \"ffd2cee3-1bae-4941-8015-2b3ade383d85\") " pod="openshift-multus/multus-gr76d"
Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.737125 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jwlz\" (UniqueName: \"kubernetes.io/projected/9cd26ba5-46e4-40b5-81e6-74079153d58d-kube-api-access-2jwlz\") pod \"network-metrics-daemon-5hc5d\" (UID: \"9cd26ba5-46e4-40b5-81e6-74079153d58d\") " pod="openshift-multus/network-metrics-daemon-5hc5d"
Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.827407 2125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-pgc9j"
Feb 23 15:42:59 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:42:59.828329683Z" level=info msg="Running pod sandbox: openshift-dns/node-resolver-pgc9j/POD" id=4797da69-5e28-40eb-802a-ecdfcf8a8a26 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 15:42:59 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:42:59.828670866Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.831603 2125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-gr76d"
Feb 23 15:42:59 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:42:59.831984947Z" level=info msg="Running pod sandbox: openshift-multus/multus-gr76d/POD" id=61cee36e-9aaa-4722-8a52-b3a9604f8e5c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 15:42:59 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:42:59.832022973Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.838570 2125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl"
Feb 23 15:42:59 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:42:59.838849609Z" level=info msg="Running pod sandbox: openshift-ovn-kubernetes/ovnkube-node-qc5bl/POD" id=47dc33dd-7ce7-41a9-9a0f-22f5b73e73a1 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 15:42:59 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:42:59.838879674Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.843996 2125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-wdtzq"
Feb 23 15:42:59 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:42:59.845880974Z" level=info msg="Running pod sandbox: openshift-image-registry/node-ca-wdtzq/POD" id=8a148e9d-35ac-469a-8961-cedcd40e505a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 15:42:59 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:42:59.845908772Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.853483 2125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-d5wlc"
Feb 23 15:42:59 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:42:59.853713516Z" level=info msg="Running pod sandbox: openshift-machine-config-operator/machine-config-daemon-d5wlc/POD" id=2c006ca1-ec37-4ecd-8429-35a3583e7250 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 15:42:59 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:42:59.853748541Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.855929 2125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4"
Feb 23 15:42:59 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:42:59.856139992Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4/POD" id=6d884d66-f62e-408d-bf38-1d4c879a765c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 15:42:59 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:42:59.856171272Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.864441 2125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-p9nj2"
Feb 23 15:42:59 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:42:59.864668976Z" level=info msg="Running pod sandbox: openshift-multus/multus-additional-cni-plugins-p9nj2/POD" id=4556eca8-0cf3-4983-a08f-099c2ef7e376 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 15:42:59 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:42:59.864701243Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.869872 2125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-bjpgx"
Feb 23 15:42:59 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:42:59.870083058Z" level=info msg="Running pod sandbox: openshift-cluster-node-tuning-operator/tuned-bjpgx/POD" id=61cae3f7-e648-4723-b914-720a9500b558 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 15:42:59 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:42:59.870112147Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:42:59.965496 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9cd26ba5-46e4-40b5-81e6-74079153d58d-metrics-certs\") pod \"network-metrics-daemon-5hc5d\" (UID: \"9cd26ba5-46e4-40b5-81e6-74079153d58d\") " pod="openshift-multus/network-metrics-daemon-5hc5d"
Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:59.965603 2125 secret.go:192] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 23 15:42:59 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:42:59.965665 2125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9cd26ba5-46e4-40b5-81e6-74079153d58d-metrics-certs podName:9cd26ba5-46e4-40b5-81e6-74079153d58d nodeName:}" failed. No retries permitted until 2023-02-23 15:43:00.965640042 +0000 UTC m=+19.668169719 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9cd26ba5-46e4-40b5-81e6-74079153d58d-metrics-certs") pod "network-metrics-daemon-5hc5d" (UID: "9cd26ba5-46e4-40b5-81e6-74079153d58d") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 23 15:43:00 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:00.267797 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-7nhww\" (UniqueName: \"kubernetes.io/projected/5acce570-9f3b-4dab-9fed-169a4c110f8c-kube-api-access-7nhww\") pod \"network-check-target-b2mxx\" (UID: \"5acce570-9f3b-4dab-9fed-169a4c110f8c\") " pod="openshift-network-diagnostics/network-check-target-b2mxx"
Feb 23 15:43:00 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:00.267900 2125 projected.go:290] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 23 15:43:00 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:00.267912 2125 projected.go:290] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 23 15:43:00 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:00.267919 2125 projected.go:196] Error preparing data for projected volume kube-api-access-7nhww for pod openshift-network-diagnostics/network-check-target-b2mxx: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 23 15:43:00 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:00.267956 2125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5acce570-9f3b-4dab-9fed-169a4c110f8c-kube-api-access-7nhww podName:5acce570-9f3b-4dab-9fed-169a4c110f8c nodeName:}" failed. No retries permitted until 2023-02-23 15:43:01.267943454 +0000 UTC m=+19.970473134 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-7nhww" (UniqueName: "kubernetes.io/projected/5acce570-9f3b-4dab-9fed-169a4c110f8c-kube-api-access-7nhww") pod "network-check-target-b2mxx" (UID: "5acce570-9f3b-4dab-9fed-169a4c110f8c") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 23 15:43:00 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:00.395079 2125 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hc5d" podUID=9cd26ba5-46e4-40b5-81e6-74079153d58d
Feb 23 15:43:00 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:00.395182 2125 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-b2mxx" podUID=5acce570-9f3b-4dab-9fed-169a4c110f8c
Feb 23 15:43:00 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:00.972203 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9cd26ba5-46e4-40b5-81e6-74079153d58d-metrics-certs\") pod \"network-metrics-daemon-5hc5d\" (UID: \"9cd26ba5-46e4-40b5-81e6-74079153d58d\") " pod="openshift-multus/network-metrics-daemon-5hc5d"
Feb 23 15:43:00 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:00.972324 2125 secret.go:192] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 23 15:43:00 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:00.972370 2125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9cd26ba5-46e4-40b5-81e6-74079153d58d-metrics-certs podName:9cd26ba5-46e4-40b5-81e6-74079153d58d nodeName:}" failed. No retries permitted until 2023-02-23 15:43:02.972356769 +0000 UTC m=+21.674886440 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9cd26ba5-46e4-40b5-81e6-74079153d58d-metrics-certs") pod "network-metrics-daemon-5hc5d" (UID: "9cd26ba5-46e4-40b5-81e6-74079153d58d") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 23 15:43:01 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:01.274158 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-7nhww\" (UniqueName: \"kubernetes.io/projected/5acce570-9f3b-4dab-9fed-169a4c110f8c-kube-api-access-7nhww\") pod \"network-check-target-b2mxx\" (UID: \"5acce570-9f3b-4dab-9fed-169a4c110f8c\") " pod="openshift-network-diagnostics/network-check-target-b2mxx"
Feb 23 15:43:01 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:01.274330 2125 projected.go:290] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 23 15:43:01 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:01.274351 2125 projected.go:290] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 23 15:43:01 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:01.274360 2125 projected.go:196] Error preparing data for projected volume kube-api-access-7nhww for pod openshift-network-diagnostics/network-check-target-b2mxx: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 23 15:43:01 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:01.274402 2125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5acce570-9f3b-4dab-9fed-169a4c110f8c-kube-api-access-7nhww podName:5acce570-9f3b-4dab-9fed-169a4c110f8c nodeName:}" failed. No retries permitted until 2023-02-23 15:43:03.274390201 +0000 UTC m=+21.976919861 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-7nhww" (UniqueName: "kubernetes.io/projected/5acce570-9f3b-4dab-9fed-169a4c110f8c-kube-api-access-7nhww") pod "network-check-target-b2mxx" (UID: "5acce570-9f3b-4dab-9fed-169a4c110f8c") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.613361715Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=4556eca8-0cf3-4983-a08f-099c2ef7e376 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.618342588Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=6d884d66-f62e-408d-bf38-1d4c879a765c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.641987926Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=61cee36e-9aaa-4722-8a52-b3a9604f8e5c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 15:43:01 ip-10-0-136-68 kubenswrapper[2125]: W0223 15:43:01.650413 2125 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d75c369_887c_42d2_94c1_40cd36f882c3.slice/crio-19bce822018f6e51c5cbfd0e2e3748745954602abb6a774120b1f9e6f36281ae.scope WatchSource:0}: Error finding container 19bce822018f6e51c5cbfd0e2e3748745954602abb6a774120b1f9e6f36281ae: Status 404 returned error can't find the container with id 19bce822018f6e51c5cbfd0e2e3748745954602abb6a774120b1f9e6f36281ae
Feb 23 15:43:01 ip-10-0-136-68 kubenswrapper[2125]: W0223 15:43:01.651354 2125 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c47bc3e_0247_4d47_80e3_c168262e7976.slice/crio-c4770f7d080720c7718c7bc24396b3024d3f2c4a814b8d4e347d5deb8d6959b6.scope WatchSource:0}: Error finding container c4770f7d080720c7718c7bc24396b3024d3f2c4a814b8d4e347d5deb8d6959b6: Status 404 returned error can't find the container with id c4770f7d080720c7718c7bc24396b3024d3f2c4a814b8d4e347d5deb8d6959b6
Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.651713580Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=4797da69-5e28-40eb-802a-ecdfcf8a8a26 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.652764175Z" level=info msg="Ran pod sandbox 19bce822018f6e51c5cbfd0e2e3748745954602abb6a774120b1f9e6f36281ae with infra container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4/POD" id=6d884d66-f62e-408d-bf38-1d4c879a765c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.652859568Z" level=info msg="Ran pod sandbox c4770f7d080720c7718c7bc24396b3024d3f2c4a814b8d4e347d5deb8d6959b6 with infra container: openshift-multus/multus-additional-cni-plugins-p9nj2/POD" id=4556eca8-0cf3-4983-a08f-099c2ef7e376 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.653908240Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f8c316907a417429b45d4dbd9871b7f65b3629737ac88305860b53d964cc1211" id=629bc97f-8bf3-4722-a67b-de827e2a5497 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.654015654Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c09f9b6acade83cd0cf46f184be48f3fe4a7aa4a4e1b3030d71f36ae6845677" id=ab1fccdd-6b61-4da3-bda6-8b8535ce4d5e name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.654066918Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f8c316907a417429b45d4dbd9871b7f65b3629737ac88305860b53d964cc1211 not found" id=629bc97f-8bf3-4722-a67b-de827e2a5497 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.654229179Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c09f9b6acade83cd0cf46f184be48f3fe4a7aa4a4e1b3030d71f36ae6845677 not found" id=ab1fccdd-6b61-4da3-bda6-8b8535ce4d5e name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:01 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:01.655675 2125 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 23 15:43:01 ip-10-0-136-68 kubenswrapper[2125]: W0223 15:43:01.655778 2125 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podffd2cee3_1bae_4941_8015_2b3ade383d85.slice/crio-2d4ebea480481fe0194b5656ac47e06df6cd5d12530000111fd05928c843d523.scope WatchSource:0}: Error finding container 2d4ebea480481fe0194b5656ac47e06df6cd5d12530000111fd05928c843d523: Status 404 returned error can't find the container with id 2d4ebea480481fe0194b5656ac47e06df6cd5d12530000111fd05928c843d523
Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.656033925Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c09f9b6acade83cd0cf46f184be48f3fe4a7aa4a4e1b3030d71f36ae6845677" id=0bf8d9a2-1e77-4bb5-b845-7222a28a6d80 name=/runtime.v1.ImageService/PullImage
Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.656124450Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f8c316907a417429b45d4dbd9871b7f65b3629737ac88305860b53d964cc1211" id=1582f711-d610-42a8-b585-3b2dcbc75e8e name=/runtime.v1.ImageService/PullImage
Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.657040996Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f8c316907a417429b45d4dbd9871b7f65b3629737ac88305860b53d964cc1211\""
Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.657085433Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c09f9b6acade83cd0cf46f184be48f3fe4a7aa4a4e1b3030d71f36ae6845677\""
Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.657435263Z" level=info msg="Ran pod sandbox 2d4ebea480481fe0194b5656ac47e06df6cd5d12530000111fd05928c843d523 with infra container: openshift-multus/multus-gr76d/POD" id=61cee36e-9aaa-4722-8a52-b3a9604f8e5c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.658062381Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af74b3b3f154a6e6124da5499d2d5b52c0d3cfd8f092df598e1d9ca1ada07b94" id=a1a27bf4-4e06-4014-abb3-7c7aa1326779 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.660558439Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=2c006ca1-ec37-4ecd-8429-35a3583e7250 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.660600553Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af74b3b3f154a6e6124da5499d2d5b52c0d3cfd8f092df598e1d9ca1ada07b94 not found" id=a1a27bf4-4e06-4014-abb3-7c7aa1326779 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.661054012Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af74b3b3f154a6e6124da5499d2d5b52c0d3cfd8f092df598e1d9ca1ada07b94" id=3b76e290-8706-410f-9d33-871d8c86e4f9 name=/runtime.v1.ImageService/PullImage
Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.661781261Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af74b3b3f154a6e6124da5499d2d5b52c0d3cfd8f092df598e1d9ca1ada07b94\""
Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.664353722Z" level=info msg="Ran pod sandbox 47661104fee69cd1b9061426289cf385f5b6d7911621b551126dbbdb3ae0f1bb with infra container: openshift-dns/node-resolver-pgc9j/POD" id=4797da69-5e28-40eb-802a-ecdfcf8a8a26 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.664839253Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d9feb297a0007232cd124c9d0c94360b6ad35b81350b2b55469efc14fda48c72" id=dd43e527-bd79-470d-96cc-8b3c6d9dfdfa name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.667142161Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=47dc33dd-7ce7-41a9-9a0f-22f5b73e73a1 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.667200061Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d9feb297a0007232cd124c9d0c94360b6ad35b81350b2b55469efc14fda48c72 not found" id=dd43e527-bd79-470d-96cc-8b3c6d9dfdfa name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.667601601Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d9feb297a0007232cd124c9d0c94360b6ad35b81350b2b55469efc14fda48c72" id=5b1c8be8-e4b8-4679-a64f-23808f09d1b7 name=/runtime.v1.ImageService/PullImage
Feb 23 15:43:01 ip-10-0-136-68 kubenswrapper[2125]: W0223 15:43:01.668357 2125 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb97e7fe5_fe52_4769_bb52_fc233e05c05e.slice/crio-7ef0a0e8143714f998828dd150fede6ba44e51b9926781a8714cc5fb0746afc8.scope WatchSource:0}: Error finding container 7ef0a0e8143714f998828dd150fede6ba44e51b9926781a8714cc5fb0746afc8: Status 404 returned error can't find the container with id 7ef0a0e8143714f998828dd150fede6ba44e51b9926781a8714cc5fb0746afc8
Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.668469251Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d9feb297a0007232cd124c9d0c94360b6ad35b81350b2b55469efc14fda48c72\""
Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.669190287Z" level=info msg="Ran pod sandbox 7ef0a0e8143714f998828dd150fede6ba44e51b9926781a8714cc5fb0746afc8 with infra container: openshift-machine-config-operator/machine-config-daemon-d5wlc/POD" id=2c006ca1-ec37-4ecd-8429-35a3583e7250 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.669648049Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33b9e1b6e5c77f3e083119aa70ed79556540eb896e3b1f4f07792f213e06286a" id=7926d475-973c-41ce-a4ad-789b0b2c656d name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.670620688Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=8a148e9d-35ac-469a-8961-cedcd40e505a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.670687533Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:b6b4f5d89be886f7fe1b314e271801bcae46a3912b44c41a3565ca13b6db4e66,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33b9e1b6e5c77f3e083119aa70ed79556540eb896e3b1f4f07792f213e06286a],Size_:537394443,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=7926d475-973c-41ce-a4ad-789b0b2c656d name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.671119257Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33b9e1b6e5c77f3e083119aa70ed79556540eb896e3b1f4f07792f213e06286a" id=58479216-8dd5-4624-8d6e-ee9dd0308fef name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.671218324Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:b6b4f5d89be886f7fe1b314e271801bcae46a3912b44c41a3565ca13b6db4e66,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33b9e1b6e5c77f3e083119aa70ed79556540eb896e3b1f4f07792f213e06286a],Size_:537394443,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=58479216-8dd5-4624-8d6e-ee9dd0308fef name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:01 ip-10-0-136-68 kubenswrapper[2125]: W0223 15:43:01.671744 2125 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod409b8d00_553f_43cb_8805_64a5931be933.slice/crio-324987fc5946df3d7849b3d1f0580a276e1156c9514ce8a7c0f7f829492b3e32.scope WatchSource:0}: Error finding container 324987fc5946df3d7849b3d1f0580a276e1156c9514ce8a7c0f7f829492b3e32: Status 404 returned error can't find the container with id 324987fc5946df3d7849b3d1f0580a276e1156c9514ce8a7c0f7f829492b3e32
Feb 23 15:43:01
ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.671883661Z" level=info msg="Creating container: openshift-machine-config-operator/machine-config-daemon-d5wlc/machine-config-daemon" id=ca098cba-c470-4644-9dc2-5ce68ef9c753 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.671954839Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.673051208Z" level=info msg="Ran pod sandbox 324987fc5946df3d7849b3d1f0580a276e1156c9514ce8a7c0f7f829492b3e32 with infra container: openshift-ovn-kubernetes/ovnkube-node-qc5bl/POD" id=47dc33dd-7ce7-41a9-9a0f-22f5b73e73a1 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.673517433Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:22ae12dfd7dcfb1b7a929bd4eba3405464b9e8439f62719569c136b6f3388521" id=64eaaf5f-bd26-4096-adc3-0bb1080d32b6 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.674395309Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=61cae3f7-e648-4723-b914-720a9500b558 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.674668852Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:22ae12dfd7dcfb1b7a929bd4eba3405464b9e8439f62719569c136b6f3388521 not found" id=64eaaf5f-bd26-4096-adc3-0bb1080d32b6 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.676609549Z" level=info msg="Ran pod sandbox 01ac120e6f0fdd3040e8bdaa8e582520e75a16d62910ceec0a560196072d627a with infra container: 
openshift-image-registry/node-ca-wdtzq/POD" id=8a148e9d-35ac-469a-8961-cedcd40e505a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.676770671Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:22ae12dfd7dcfb1b7a929bd4eba3405464b9e8439f62719569c136b6f3388521" id=89589251-ef2c-44fe-bd9f-d7ecf59d41c5 name=/runtime.v1.ImageService/PullImage Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.677083901Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:462953440366660026537c18defeaf1f0e85dde5d1231aa35ab26e7e996959ae" id=ad502524-6872-4edb-9ae4-9f500164a1ca name=/runtime.v1.ImageService/ImageStatus Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.684821118Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:462953440366660026537c18defeaf1f0e85dde5d1231aa35ab26e7e996959ae not found" id=ad502524-6872-4edb-9ae4-9f500164a1ca name=/runtime.v1.ImageService/ImageStatus Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.685232444Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:462953440366660026537c18defeaf1f0e85dde5d1231aa35ab26e7e996959ae" id=43e1dd95-9175-4362-8e1c-3d23effef8b6 name=/runtime.v1.ImageService/PullImage Feb 23 15:43:01 ip-10-0-136-68 kubenswrapper[2125]: W0223 15:43:01.685854 2125 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07267a40_e316_4a88_91a5_11bc06672f23.slice/crio-c180d7555eeaadb7b53631213d4f92f29e9df605b9662939a8ad7cac193a73bd.scope WatchSource:0}: Error finding container c180d7555eeaadb7b53631213d4f92f29e9df605b9662939a8ad7cac193a73bd: Status 404 returned error can't find the container with id c180d7555eeaadb7b53631213d4f92f29e9df605b9662939a8ad7cac193a73bd Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: 
time="2023-02-23 15:43:01.686073394Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:462953440366660026537c18defeaf1f0e85dde5d1231aa35ab26e7e996959ae\"" Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.686073309Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:22ae12dfd7dcfb1b7a929bd4eba3405464b9e8439f62719569c136b6f3388521\"" Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.686921748Z" level=info msg="Ran pod sandbox c180d7555eeaadb7b53631213d4f92f29e9df605b9662939a8ad7cac193a73bd with infra container: openshift-cluster-node-tuning-operator/tuned-bjpgx/POD" id=61cae3f7-e648-4723-b914-720a9500b558 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.687358747Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d69f987d8bd10c16f608e7b0f3e6bb52d5d62147af51f716a3d982c7eae0b83f" id=d4d410eb-67fe-40ff-96f0-b808c7e7446b name=/runtime.v1.ImageService/ImageStatus Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.687474751Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d69f987d8bd10c16f608e7b0f3e6bb52d5d62147af51f716a3d982c7eae0b83f not found" id=d4d410eb-67fe-40ff-96f0-b808c7e7446b name=/runtime.v1.ImageService/ImageStatus Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.687820611Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d69f987d8bd10c16f608e7b0f3e6bb52d5d62147af51f716a3d982c7eae0b83f" id=a3f73bcd-842e-4cd7-a787-6e12aa570b51 name=/runtime.v1.ImageService/PullImage Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.688550303Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d69f987d8bd10c16f608e7b0f3e6bb52d5d62147af51f716a3d982c7eae0b83f\"" Feb 23 15:43:01 ip-10-0-136-68 
systemd[1]: Started crio-conmon-69f9cd415e4e9bca49ba6ec5503ea9fe66e652868e3a2bde2c26833f37e1b7c3.scope. Feb 23 15:43:01 ip-10-0-136-68 systemd[1]: Started libcontainer container 69f9cd415e4e9bca49ba6ec5503ea9fe66e652868e3a2bde2c26833f37e1b7c3. Feb 23 15:43:01 ip-10-0-136-68 kernel: cgroup: cgroup: disabling cgroup2 socket matching due to net_prio or net_cls activation Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.890535090Z" level=info msg="Created container 69f9cd415e4e9bca49ba6ec5503ea9fe66e652868e3a2bde2c26833f37e1b7c3: openshift-machine-config-operator/machine-config-daemon-d5wlc/machine-config-daemon" id=ca098cba-c470-4644-9dc2-5ce68ef9c753 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.891048162Z" level=info msg="Starting container: 69f9cd415e4e9bca49ba6ec5503ea9fe66e652868e3a2bde2c26833f37e1b7c3" id=8843c8cf-2df8-4262-92b5-c8b217786c2a name=/runtime.v1.RuntimeService/StartContainer Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.898694691Z" level=info msg="Started container" PID=2269 containerID=69f9cd415e4e9bca49ba6ec5503ea9fe66e652868e3a2bde2c26833f37e1b7c3 description=openshift-machine-config-operator/machine-config-daemon-d5wlc/machine-config-daemon id=8843c8cf-2df8-4262-92b5-c8b217786c2a name=/runtime.v1.RuntimeService/StartContainer sandboxID=7ef0a0e8143714f998828dd150fede6ba44e51b9926781a8714cc5fb0746afc8 Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.918305559Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce1de860b639d4831bb74b0271aed6882b45febb37f60bcf8a31157b87ce0495" id=5232d9e1-3fdb-4c53-93ee-b48c2d544b91 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.918459164Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce1de860b639d4831bb74b0271aed6882b45febb37f60bcf8a31157b87ce0495 
not found" id=5232d9e1-3fdb-4c53-93ee-b48c2d544b91 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.919070849Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce1de860b639d4831bb74b0271aed6882b45febb37f60bcf8a31157b87ce0495" id=f7e81be6-b54e-4b54-bc88-575eb6348692 name=/runtime.v1.ImageService/PullImage Feb 23 15:43:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:01.920149519Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce1de860b639d4831bb74b0271aed6882b45febb37f60bcf8a31157b87ce0495\"" Feb 23 15:43:02 ip-10-0-136-68 systemd[1]: Reloading. Feb 23 15:43:02 ip-10-0-136-68 coreos-platform-chrony: /run/coreos-platform-chrony.conf already exists; skipping Feb 23 15:43:02 ip-10-0-136-68 systemd[1]: /usr/lib/systemd/system/bootupd.service:22: Unknown lvalue 'ProtectHostname' in section 'Service' Feb 23 15:43:02 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:02.395657 2125 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-b2mxx" podUID=5acce570-9f3b-4dab-9fed-169a4c110f8c Feb 23 15:43:02 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:02.395907 2125 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5hc5d" podUID=9cd26ba5-46e4-40b5-81e6-74079153d58d Feb 23 15:43:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:02.423042 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gr76d" event=&{ID:ffd2cee3-1bae-4941-8015-2b3ade383d85 Type:ContainerStarted Data:2d4ebea480481fe0194b5656ac47e06df6cd5d12530000111fd05928c843d523} Feb 23 15:43:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:02.424498 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4" event=&{ID:6d75c369-887c-42d2-94c1-40cd36f882c3 Type:ContainerStarted Data:19bce822018f6e51c5cbfd0e2e3748745954602abb6a774120b1f9e6f36281ae} Feb 23 15:43:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:02.424996 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p9nj2" event=&{ID:2c47bc3e-0247-4d47-80e3-c168262e7976 Type:ContainerStarted Data:c4770f7d080720c7718c7bc24396b3024d3f2c4a814b8d4e347d5deb8d6959b6} Feb 23 15:43:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:02.425578 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-bjpgx" event=&{ID:07267a40-e316-4a88-91a5-11bc06672f23 Type:ContainerStarted Data:c180d7555eeaadb7b53631213d4f92f29e9df605b9662939a8ad7cac193a73bd} Feb 23 15:43:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:02.426058 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-wdtzq" event=&{ID:ecd261a9-4d88-4e3d-aa47-803a685b6569 Type:ContainerStarted Data:01ac120e6f0fdd3040e8bdaa8e582520e75a16d62910ceec0a560196072d627a} Feb 23 15:43:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:02.426583 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" event=&{ID:409b8d00-553f-43cb-8805-64a5931be933 Type:ContainerStarted 
Data:324987fc5946df3d7849b3d1f0580a276e1156c9514ce8a7c0f7f829492b3e32} Feb 23 15:43:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:02.427585 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d5wlc" event=&{ID:b97e7fe5-fe52-4769-bb52-fc233e05c05e Type:ContainerStarted Data:69f9cd415e4e9bca49ba6ec5503ea9fe66e652868e3a2bde2c26833f37e1b7c3} Feb 23 15:43:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:02.427607 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d5wlc" event=&{ID:b97e7fe5-fe52-4769-bb52-fc233e05c05e Type:ContainerStarted Data:7ef0a0e8143714f998828dd150fede6ba44e51b9926781a8714cc5fb0746afc8} Feb 23 15:43:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:02.428172 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-pgc9j" event=&{ID:507b846f-eb8a-4ca3-9d5f-e4d9f18eca32 Type:ContainerStarted Data:47661104fee69cd1b9061426289cf385f5b6d7911621b551126dbbdb3ae0f1bb} Feb 23 15:43:02 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:02.462225 2125 kubelet.go:2396] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 23 15:43:02 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:02.521596622Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f8c316907a417429b45d4dbd9871b7f65b3629737ac88305860b53d964cc1211\"" Feb 23 15:43:02 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:02.530396771Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d69f987d8bd10c16f608e7b0f3e6bb52d5d62147af51f716a3d982c7eae0b83f\"" Feb 23 15:43:02 ip-10-0-136-68 systemd[1]: Starting rpm-ostree System Management Daemon... 
Feb 23 15:43:02 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:02.541448918Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c09f9b6acade83cd0cf46f184be48f3fe4a7aa4a4e1b3030d71f36ae6845677\"" Feb 23 15:43:02 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:02.545723455Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af74b3b3f154a6e6124da5499d2d5b52c0d3cfd8f092df598e1d9ca1ada07b94\"" Feb 23 15:43:02 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:02.557450446Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d9feb297a0007232cd124c9d0c94360b6ad35b81350b2b55469efc14fda48c72\"" Feb 23 15:43:02 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:02.562516532Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:22ae12dfd7dcfb1b7a929bd4eba3405464b9e8439f62719569c136b6f3388521\"" Feb 23 15:43:02 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:02.569321546Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:462953440366660026537c18defeaf1f0e85dde5d1231aa35ab26e7e996959ae\"" Feb 23 15:43:02 ip-10-0-136-68 rpm-ostree[2350]: Reading config file '/etc/rpm-ostreed.conf' Feb 23 15:43:02 ip-10-0-136-68 dbus-daemon[958]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.124' (uid=0 pid=2350 comm="/usr/bin/rpm-ostree start-daemon " label="system_u:system_r:install_t:s0") Feb 23 15:43:02 ip-10-0-136-68 systemd[1]: Starting Authorization Manager... 
Feb 23 15:43:02 ip-10-0-136-68 polkitd[2354]: Started polkitd version 0.115 Feb 23 15:43:02 ip-10-0-136-68 polkitd[2354]: Loading rules from directory /etc/polkit-1/rules.d Feb 23 15:43:02 ip-10-0-136-68 polkitd[2354]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 23 15:43:02 ip-10-0-136-68 polkitd[2354]: Finished loading, compiling and executing 3 rules Feb 23 15:43:02 ip-10-0-136-68 dbus-daemon[958]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 23 15:43:02 ip-10-0-136-68 polkitd[2354]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 23 15:43:02 ip-10-0-136-68 systemd[1]: Started Authorization Manager. Feb 23 15:43:02 ip-10-0-136-68 rpm-ostree[2350]: In idle state; will auto-exit in 62 seconds Feb 23 15:43:02 ip-10-0-136-68 systemd[1]: Started rpm-ostree System Management Daemon. Feb 23 15:43:02 ip-10-0-136-68 rpm-ostree[2350]: client(id:cli dbus:1.127 unit:crio-69f9cd415e4e9bca49ba6ec5503ea9fe66e652868e3a2bde2c26833f37e1b7c3.scope uid:0) added; new total=1 Feb 23 15:43:02 ip-10-0-136-68 rpm-ostree[2350]: client(id:cli dbus:1.127 unit:crio-69f9cd415e4e9bca49ba6ec5503ea9fe66e652868e3a2bde2c26833f37e1b7c3.scope uid:0) vanished; remaining=0 Feb 23 15:43:02 ip-10-0-136-68 rpm-ostree[2350]: In idle state; will auto-exit in 62 seconds Feb 23 15:43:02 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:02.799010813Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce1de860b639d4831bb74b0271aed6882b45febb37f60bcf8a31157b87ce0495\"" Feb 23 15:43:02 ip-10-0-136-68 root[2368]: machine-config-daemon[2269]: Starting to manage node: ip-10-0-136-68.us-west-2.compute.internal Feb 23 15:43:02 ip-10-0-136-68 rpm-ostree[2350]: client(id:machine-config-operator dbus:1.128 unit:crio-69f9cd415e4e9bca49ba6ec5503ea9fe66e652868e3a2bde2c26833f37e1b7c3.scope uid:0) added; new total=1 Feb 23 15:43:02 ip-10-0-136-68 rpm-ostree[2350]: client(id:machine-config-operator dbus:1.128 
unit:crio-69f9cd415e4e9bca49ba6ec5503ea9fe66e652868e3a2bde2c26833f37e1b7c3.scope uid:0) vanished; remaining=0 Feb 23 15:43:02 ip-10-0-136-68 rpm-ostree[2350]: In idle state; will auto-exit in 62 seconds Feb 23 15:43:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:02.984082 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9cd26ba5-46e4-40b5-81e6-74079153d58d-metrics-certs\") pod \"network-metrics-daemon-5hc5d\" (UID: \"9cd26ba5-46e4-40b5-81e6-74079153d58d\") " pod="openshift-multus/network-metrics-daemon-5hc5d" Feb 23 15:43:02 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:02.984209 2125 secret.go:192] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 15:43:02 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:02.984269 2125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9cd26ba5-46e4-40b5-81e6-74079153d58d-metrics-certs podName:9cd26ba5-46e4-40b5-81e6-74079153d58d nodeName:}" failed. No retries permitted until 2023-02-23 15:43:06.984252727 +0000 UTC m=+25.686782407 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9cd26ba5-46e4-40b5-81e6-74079153d58d-metrics-certs") pod "network-metrics-daemon-5hc5d" (UID: "9cd26ba5-46e4-40b5-81e6-74079153d58d") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 15:43:03 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:03.285249 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-7nhww\" (UniqueName: \"kubernetes.io/projected/5acce570-9f3b-4dab-9fed-169a4c110f8c-kube-api-access-7nhww\") pod \"network-check-target-b2mxx\" (UID: \"5acce570-9f3b-4dab-9fed-169a4c110f8c\") " pod="openshift-network-diagnostics/network-check-target-b2mxx" Feb 23 15:43:03 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:03.285426 2125 projected.go:290] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 15:43:03 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:03.285449 2125 projected.go:290] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 15:43:03 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:03.285459 2125 projected.go:196] Error preparing data for projected volume kube-api-access-7nhww for pod openshift-network-diagnostics/network-check-target-b2mxx: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 15:43:03 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:03.285515 2125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5acce570-9f3b-4dab-9fed-169a4c110f8c-kube-api-access-7nhww podName:5acce570-9f3b-4dab-9fed-169a4c110f8c nodeName:}" failed. No retries permitted until 2023-02-23 15:43:07.285498441 +0000 UTC m=+25.988028114 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-7nhww" (UniqueName: "kubernetes.io/projected/5acce570-9f3b-4dab-9fed-169a4c110f8c-kube-api-access-7nhww") pod "network-check-target-b2mxx" (UID: "5acce570-9f3b-4dab-9fed-169a4c110f8c") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 15:43:03 ip-10-0-136-68 rpm-ostree[2350]: client(id:machine-config-operator dbus:1.129 unit:crio-69f9cd415e4e9bca49ba6ec5503ea9fe66e652868e3a2bde2c26833f37e1b7c3.scope uid:0) added; new total=1 Feb 23 15:43:04 ip-10-0-136-68 kernel: EXT4-fs (nvme0n1p3): re-mounted. Opts: Feb 23 15:43:04 ip-10-0-136-68 rpm-ostree[2350]: Locked sysroot Feb 23 15:43:04 ip-10-0-136-68 rpm-ostree[2350]: Initiated txn Cleanup for client(id:machine-config-operator dbus:1.129 unit:crio-69f9cd415e4e9bca49ba6ec5503ea9fe66e652868e3a2bde2c26833f37e1b7c3.scope uid:0): /org/projectatomic/rpmostree1/rhcos Feb 23 15:43:04 ip-10-0-136-68 rpm-ostree[2350]: Process [pid: 2380 uid: 0 unit: crio-69f9cd415e4e9bca49ba6ec5503ea9fe66e652868e3a2bde2c26833f37e1b7c3.scope] connected to transaction progress Feb 23 15:43:04 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:04.395475 2125 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-b2mxx" podUID=5acce570-9f3b-4dab-9fed-169a4c110f8c Feb 23 15:43:04 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:04.396620 2125 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hc5d" podUID=9cd26ba5-46e4-40b5-81e6-74079153d58d Feb 23 15:43:04 ip-10-0-136-68 rpm-ostree[2350]: Bootloader updated; bootconfig swap: yes; bootversion: boot.1.1, deployment count change: -1 Feb 23 15:43:06 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:06.396019 2125 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-b2mxx" podUID=5acce570-9f3b-4dab-9fed-169a4c110f8c Feb 23 15:43:06 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:06.396442 2125 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5hc5d" podUID=9cd26ba5-46e4-40b5-81e6-74079153d58d Feb 23 15:43:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:07.012349 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9cd26ba5-46e4-40b5-81e6-74079153d58d-metrics-certs\") pod \"network-metrics-daemon-5hc5d\" (UID: \"9cd26ba5-46e4-40b5-81e6-74079153d58d\") " pod="openshift-multus/network-metrics-daemon-5hc5d" Feb 23 15:43:07 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:07.012473 2125 secret.go:192] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 15:43:07 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:07.012526 2125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9cd26ba5-46e4-40b5-81e6-74079153d58d-metrics-certs podName:9cd26ba5-46e4-40b5-81e6-74079153d58d nodeName:}" failed. No retries permitted until 2023-02-23 15:43:15.01250918 +0000 UTC m=+33.715038853 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9cd26ba5-46e4-40b5-81e6-74079153d58d-metrics-certs") pod "network-metrics-daemon-5hc5d" (UID: "9cd26ba5-46e4-40b5-81e6-74079153d58d") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 23 15:43:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:07.314948 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-7nhww\" (UniqueName: \"kubernetes.io/projected/5acce570-9f3b-4dab-9fed-169a4c110f8c-kube-api-access-7nhww\") pod \"network-check-target-b2mxx\" (UID: \"5acce570-9f3b-4dab-9fed-169a4c110f8c\") " pod="openshift-network-diagnostics/network-check-target-b2mxx"
Feb 23 15:43:07 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:07.315129 2125 projected.go:290] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 23 15:43:07 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:07.315153 2125 projected.go:290] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 23 15:43:07 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:07.315163 2125 projected.go:196] Error preparing data for projected volume kube-api-access-7nhww for pod openshift-network-diagnostics/network-check-target-b2mxx: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 23 15:43:07 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:07.315315 2125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5acce570-9f3b-4dab-9fed-169a4c110f8c-kube-api-access-7nhww podName:5acce570-9f3b-4dab-9fed-169a4c110f8c nodeName:}" failed. No retries permitted until 2023-02-23 15:43:15.315196566 +0000 UTC m=+34.017726233 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-7nhww" (UniqueName: "kubernetes.io/projected/5acce570-9f3b-4dab-9fed-169a4c110f8c-kube-api-access-7nhww") pod "network-check-target-b2mxx" (UID: "5acce570-9f3b-4dab-9fed-169a4c110f8c") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 23 15:43:07 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:07.485925 2125 kubelet.go:2396] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 23 15:43:07 ip-10-0-136-68 systemd[1]: systemd-hostnamed.service: Succeeded.
Feb 23 15:43:07 ip-10-0-136-68 systemd[1]: systemd-hostnamed.service: Consumed 39ms CPU time
Feb 23 15:43:08 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:08.396107 2125 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hc5d" podUID=9cd26ba5-46e4-40b5-81e6-74079153d58d
Feb 23 15:43:08 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:08.396514 2125 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-b2mxx" podUID=5acce570-9f3b-4dab-9fed-169a4c110f8c
Feb 23 15:43:09 ip-10-0-136-68 rpm-ostree[2350]: Pruned container image layers: 0
Feb 23 15:43:10 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:10.396549 2125 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hc5d" podUID=9cd26ba5-46e4-40b5-81e6-74079153d58d
Feb 23 15:43:10 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:10.396648 2125 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-b2mxx" podUID=5acce570-9f3b-4dab-9fed-169a4c110f8c
Feb 23 15:43:12 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:12.394981 2125 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-b2mxx" podUID=5acce570-9f3b-4dab-9fed-169a4c110f8c
Feb 23 15:43:12 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:12.395800 2125 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hc5d" podUID=9cd26ba5-46e4-40b5-81e6-74079153d58d
Feb 23 15:43:12 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:12.487768 2125 kubelet.go:2396] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 23 15:43:12 ip-10-0-136-68 rpm-ostree[2350]: Txn Cleanup on /org/projectatomic/rpmostree1/rhcos successful
Feb 23 15:43:12 ip-10-0-136-68 rpm-ostree[2350]: Unlocked sysroot
Feb 23 15:43:12 ip-10-0-136-68 rpm-ostree[2350]: Process [pid: 2380 uid: 0 unit: crio-69f9cd415e4e9bca49ba6ec5503ea9fe66e652868e3a2bde2c26833f37e1b7c3.scope] disconnected from transaction progress
Feb 23 15:43:12 ip-10-0-136-68 rpm-ostree[2350]: client(id:machine-config-operator dbus:1.129 unit:crio-69f9cd415e4e9bca49ba6ec5503ea9fe66e652868e3a2bde2c26833f37e1b7c3.scope uid:0) vanished; remaining=0
Feb 23 15:43:12 ip-10-0-136-68 rpm-ostree[2350]: In idle state; will auto-exit in 64 seconds
Feb 23 15:43:13 ip-10-0-136-68 root[2451]: machine-config-daemon[2269]: No bootstrap pivot required; unlinking bootstrap node annotations
Feb 23 15:43:14 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:14.395038 2125 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hc5d" podUID=9cd26ba5-46e4-40b5-81e6-74079153d58d
Feb 23 15:43:14 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:14.395461 2125 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-b2mxx" podUID=5acce570-9f3b-4dab-9fed-169a4c110f8c
Feb 23 15:43:15 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:15.068330 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9cd26ba5-46e4-40b5-81e6-74079153d58d-metrics-certs\") pod \"network-metrics-daemon-5hc5d\" (UID: \"9cd26ba5-46e4-40b5-81e6-74079153d58d\") " pod="openshift-multus/network-metrics-daemon-5hc5d"
Feb 23 15:43:15 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:15.068459 2125 secret.go:192] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 23 15:43:15 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:15.068513 2125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9cd26ba5-46e4-40b5-81e6-74079153d58d-metrics-certs podName:9cd26ba5-46e4-40b5-81e6-74079153d58d nodeName:}" failed. No retries permitted until 2023-02-23 15:43:31.068495868 +0000 UTC m=+49.771025530 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9cd26ba5-46e4-40b5-81e6-74079153d58d-metrics-certs") pod "network-metrics-daemon-5hc5d" (UID: "9cd26ba5-46e4-40b5-81e6-74079153d58d") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 23 15:43:15 ip-10-0-136-68 rpm-ostree[2350]: client(id:machine-config-operator dbus:1.139 unit:crio-69f9cd415e4e9bca49ba6ec5503ea9fe66e652868e3a2bde2c26833f37e1b7c3.scope uid:0) added; new total=1
Feb 23 15:43:15 ip-10-0-136-68 rpm-ostree[2350]: client(id:machine-config-operator dbus:1.139 unit:crio-69f9cd415e4e9bca49ba6ec5503ea9fe66e652868e3a2bde2c26833f37e1b7c3.scope uid:0) vanished; remaining=0
Feb 23 15:43:15 ip-10-0-136-68 rpm-ostree[2350]: In idle state; will auto-exit in 60 seconds
Feb 23 15:43:15 ip-10-0-136-68 root[2473]: machine-config-daemon[2269]: Validated on-disk state
Feb 23 15:43:15 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:15.371498 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-7nhww\" (UniqueName: \"kubernetes.io/projected/5acce570-9f3b-4dab-9fed-169a4c110f8c-kube-api-access-7nhww\") pod \"network-check-target-b2mxx\" (UID: \"5acce570-9f3b-4dab-9fed-169a4c110f8c\") " pod="openshift-network-diagnostics/network-check-target-b2mxx"
Feb 23 15:43:15 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:15.371606 2125 projected.go:290] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 23 15:43:15 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:15.371622 2125 projected.go:290] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 23 15:43:15 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:15.371632 2125 projected.go:196] Error preparing data for projected volume kube-api-access-7nhww for pod openshift-network-diagnostics/network-check-target-b2mxx: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 23 15:43:15 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:15.371679 2125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5acce570-9f3b-4dab-9fed-169a4c110f8c-kube-api-access-7nhww podName:5acce570-9f3b-4dab-9fed-169a4c110f8c nodeName:}" failed. No retries permitted until 2023-02-23 15:43:31.371668279 +0000 UTC m=+50.074197950 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-7nhww" (UniqueName: "kubernetes.io/projected/5acce570-9f3b-4dab-9fed-169a4c110f8c-kube-api-access-7nhww") pod "network-check-target-b2mxx" (UID: "5acce570-9f3b-4dab-9fed-169a4c110f8c") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 23 15:43:16 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:16.395105 2125 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-b2mxx" podUID=5acce570-9f3b-4dab-9fed-169a4c110f8c
Feb 23 15:43:16 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:16.395212 2125 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hc5d" podUID=9cd26ba5-46e4-40b5-81e6-74079153d58d
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.013302840Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:462953440366660026537c18defeaf1f0e85dde5d1231aa35ab26e7e996959ae" id=43e1dd95-9175-4362-8e1c-3d23effef8b6 name=/runtime.v1.ImageService/PullImage
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.014530108Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:462953440366660026537c18defeaf1f0e85dde5d1231aa35ab26e7e996959ae" id=850aee10-16c9-4721-a096-46746df3d464 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.022545682Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:b8111819f25b8194478d55593ca125a634ee92d9d5e61866f09e80f1b59af18b,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:462953440366660026537c18defeaf1f0e85dde5d1231aa35ab26e7e996959ae],Size_:428240621,Uid:&Int64Value{Value:1001,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=850aee10-16c9-4721-a096-46746df3d464 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.024010845Z" level=info msg="Creating container: openshift-image-registry/node-ca-wdtzq/node-ca" id=fc586bde-0744-4380-9bba-67dd60cc00f7 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.024096900Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.047670942Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce1de860b639d4831bb74b0271aed6882b45febb37f60bcf8a31157b87ce0495" id=f7e81be6-b54e-4b54-bc88-575eb6348692 name=/runtime.v1.ImageService/PullImage
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.048135532Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce1de860b639d4831bb74b0271aed6882b45febb37f60bcf8a31157b87ce0495" id=4017478e-461b-4625-9519-84b70b3de4f7 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.048158047Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af74b3b3f154a6e6124da5499d2d5b52c0d3cfd8f092df598e1d9ca1ada07b94" id=3b76e290-8706-410f-9d33-871d8c86e4f9 name=/runtime.v1.ImageService/PullImage
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.048572366Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af74b3b3f154a6e6124da5499d2d5b52c0d3cfd8f092df598e1d9ca1ada07b94" id=71c2158b-aea6-465b-91f7-c77c9d086ed7 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.055588874Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d9feb297a0007232cd124c9d0c94360b6ad35b81350b2b55469efc14fda48c72" id=5b1c8be8-e4b8-4679-a64f-23808f09d1b7 name=/runtime.v1.ImageService/PullImage
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.055955753Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d9feb297a0007232cd124c9d0c94360b6ad35b81350b2b55469efc14fda48c72" id=0158a5b2-008e-488c-abee-db94013a1b28 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.056086197Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:51a3c087e00a8d3916cceaab8f2064078ba13c2bdd41a167107c7318b2bff862,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d9feb297a0007232cd124c9d0c94360b6ad35b81350b2b55469efc14fda48c72],Size_:480914545,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=0158a5b2-008e-488c-abee-db94013a1b28 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.056822179Z" level=info msg="Creating container: openshift-dns/node-resolver-pgc9j/dns-node-resolver" id=38d6cc8a-ce82-467c-9de1-985fb218dc3a name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.056944143Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.058550849Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:d62409a7cb62709d4c960d445d571fc81522092e7114db20e1ad84bf57c96f31,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce1de860b639d4831bb74b0271aed6882b45febb37f60bcf8a31157b87ce0495],Size_:353087996,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=4017478e-461b-4625-9519-84b70b3de4f7 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.058648414Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f8c316907a417429b45d4dbd9871b7f65b3629737ac88305860b53d964cc1211" id=1582f711-d610-42a8-b585-3b2dcbc75e8e name=/runtime.v1.ImageService/PullImage
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.058748110Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c09f9b6acade83cd0cf46f184be48f3fe4a7aa4a4e1b3030d71f36ae6845677" id=0bf8d9a2-1e77-4bb5-b845-7222a28a6d80 name=/runtime.v1.ImageService/PullImage
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.059757854Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:3040fba25f1de00fc7180165bb6fe53ee7a27a50b0d5da5af3a7e0d26700e224,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af74b3b3f154a6e6124da5499d2d5b52c0d3cfd8f092df598e1d9ca1ada07b94],Size_:487631698,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=71c2158b-aea6-465b-91f7-c77c9d086ed7 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.060668834Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d69f987d8bd10c16f608e7b0f3e6bb52d5d62147af51f716a3d982c7eae0b83f" id=a3f73bcd-842e-4cd7-a787-6e12aa570b51 name=/runtime.v1.ImageService/PullImage
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.061590935Z" level=info msg="Creating container: openshift-multus/multus-gr76d/kube-multus" id=f1c66fe5-e026-4cf8-9f54-45a071236b8b name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.061746383Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.061879517Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c09f9b6acade83cd0cf46f184be48f3fe4a7aa4a4e1b3030d71f36ae6845677" id=3f063d79-f2ae-4116-8f92-f7b356b054d0 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.061993587Z" level=info msg="Creating container: openshift-machine-config-operator/machine-config-daemon-d5wlc/oauth-proxy" id=3aa1566a-7b87-41e6-973b-40a78d3b7241 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.062100779Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.062170886Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f8c316907a417429b45d4dbd9871b7f65b3629737ac88305860b53d964cc1211" id=e0abe26f-c96e-459e-b0c4-23c279725113 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.073570321Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:b5b4a5c846650de70c23db1f0578a6656eada15483b87f39bace9bab24bf86dd,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f8c316907a417429b45d4dbd9871b7f65b3629737ac88305860b53d964cc1211],Size_:433653219,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=e0abe26f-c96e-459e-b0c4-23c279725113 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.074882150Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9ba712ec683a435fa3ef8304fb00385fae95fbc045a82b8d2a9dc39ecd09e344,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c09f9b6acade83cd0cf46f184be48f3fe4a7aa4a4e1b3030d71f36ae6845677],Size_:438806970,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=3f063d79-f2ae-4116-8f92-f7b356b054d0 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.075707866Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:22ae12dfd7dcfb1b7a929bd4eba3405464b9e8439f62719569c136b6f3388521" id=89589251-ef2c-44fe-bd9f-d7ecf59d41c5 name=/runtime.v1.ImageService/PullImage
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.075874157Z" level=info msg="Creating container: openshift-multus/multus-additional-cni-plugins-p9nj2/egress-router-binary-copy" id=4cb70e97-d894-4b02-bdca-5ccf6067ca36 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.075957172Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.076006761Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d69f987d8bd10c16f608e7b0f3e6bb52d5d62147af51f716a3d982c7eae0b83f" id=6cd169ce-884f-4460-9513-0d0222efae0b name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.076096447Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4/csi-driver" id=2bdab6ea-97c8-4a97-8e66-2ecea0c44cdf name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.076160727Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.076214902Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:22ae12dfd7dcfb1b7a929bd4eba3405464b9e8439f62719569c136b6f3388521" id=28bceb4a-f20a-4645-837e-d494a8fac580 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.088895033Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9b22b7f24f1449861f254aea709cfcb21aecd8231d265d09aee8f99af215aa53,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:22ae12dfd7dcfb1b7a929bd4eba3405464b9e8439f62719569c136b6f3388521],Size_:1123099489,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=28bceb4a-f20a-4645-837e-d494a8fac580 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.092303894Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-node-qc5bl/ovn-controller" id=08182d49-3ce2-4a1b-8f87-577cfb0af666 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.092391382Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 15:43:17 ip-10-0-136-68 systemd[1]: Started crio-conmon-532060bd7464ded47ac6c220512d7ce160f4d1930ce612a7fec9bd1c797c50c4.scope.
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.130445320Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:da914cc3ef13e76e0445e95dcaf766ba4641f9f983cbc16823ff667af167973f,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d69f987d8bd10c16f608e7b0f3e6bb52d5d62147af51f716a3d982c7eae0b83f],Size_:602733635,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=6cd169ce-884f-4460-9513-0d0222efae0b name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.131963367Z" level=info msg="Creating container: openshift-cluster-node-tuning-operator/tuned-bjpgx/tuned" id=a1d16873-1cda-4d13-8ce0-e36dbeecae53 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.132038842Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 15:43:17 ip-10-0-136-68 systemd[1]: Started crio-conmon-7ed25ccda440b1407317cffea802770d961748c145f541fad8b075aa9818b84d.scope.
Feb 23 15:43:17 ip-10-0-136-68 systemd[1]: Started libcontainer container 532060bd7464ded47ac6c220512d7ce160f4d1930ce612a7fec9bd1c797c50c4.
Feb 23 15:43:17 ip-10-0-136-68 systemd[1]: Started crio-conmon-2b26cbfc007385f29997882b899af5fbb6cc332ce1ef7773d98623c961bfacf4.scope.
Feb 23 15:43:17 ip-10-0-136-68 systemd[1]: Started libcontainer container 7ed25ccda440b1407317cffea802770d961748c145f541fad8b075aa9818b84d.
Feb 23 15:43:17 ip-10-0-136-68 systemd[1]: Started libcontainer container 2b26cbfc007385f29997882b899af5fbb6cc332ce1ef7773d98623c961bfacf4.
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.217432932Z" level=info msg="Created container 532060bd7464ded47ac6c220512d7ce160f4d1930ce612a7fec9bd1c797c50c4: openshift-machine-config-operator/machine-config-daemon-d5wlc/oauth-proxy" id=3aa1566a-7b87-41e6-973b-40a78d3b7241 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.217433827Z" level=info msg="Created container 7ed25ccda440b1407317cffea802770d961748c145f541fad8b075aa9818b84d: openshift-ovn-kubernetes/ovnkube-node-qc5bl/ovn-controller" id=08182d49-3ce2-4a1b-8f87-577cfb0af666 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.218046074Z" level=info msg="Starting container: 532060bd7464ded47ac6c220512d7ce160f4d1930ce612a7fec9bd1c797c50c4" id=7d87c08e-17aa-42aa-847a-23a4333d3c7f name=/runtime.v1.RuntimeService/StartContainer
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.218088496Z" level=info msg="Starting container: 7ed25ccda440b1407317cffea802770d961748c145f541fad8b075aa9818b84d" id=43f360da-d42a-4af5-89b3-b0bf58590eb2 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.218374088Z" level=info msg="Created container 2b26cbfc007385f29997882b899af5fbb6cc332ce1ef7773d98623c961bfacf4: openshift-cluster-node-tuning-operator/tuned-bjpgx/tuned" id=a1d16873-1cda-4d13-8ce0-e36dbeecae53 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.218665579Z" level=info msg="Starting container: 2b26cbfc007385f29997882b899af5fbb6cc332ce1ef7773d98623c961bfacf4" id=841ce6e3-8d08-4982-a75d-69852c0fa0bf name=/runtime.v1.RuntimeService/StartContainer
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.223803543Z" level=info msg="Started container" PID=2511 containerID=532060bd7464ded47ac6c220512d7ce160f4d1930ce612a7fec9bd1c797c50c4 description=openshift-machine-config-operator/machine-config-daemon-d5wlc/oauth-proxy id=7d87c08e-17aa-42aa-847a-23a4333d3c7f name=/runtime.v1.RuntimeService/StartContainer sandboxID=7ef0a0e8143714f998828dd150fede6ba44e51b9926781a8714cc5fb0746afc8
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.230215265Z" level=info msg="Started container" PID=2542 containerID=2b26cbfc007385f29997882b899af5fbb6cc332ce1ef7773d98623c961bfacf4 description=openshift-cluster-node-tuning-operator/tuned-bjpgx/tuned id=841ce6e3-8d08-4982-a75d-69852c0fa0bf name=/runtime.v1.RuntimeService/StartContainer sandboxID=c180d7555eeaadb7b53631213d4f92f29e9df605b9662939a8ad7cac193a73bd
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.231408592Z" level=info msg="Started container" PID=2534 containerID=7ed25ccda440b1407317cffea802770d961748c145f541fad8b075aa9818b84d description=openshift-ovn-kubernetes/ovnkube-node-qc5bl/ovn-controller id=43f360da-d42a-4af5-89b3-b0bf58590eb2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=324987fc5946df3d7849b3d1f0580a276e1156c9514ce8a7c0f7f829492b3e32
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.240645340Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:22ae12dfd7dcfb1b7a929bd4eba3405464b9e8439f62719569c136b6f3388521" id=620e9b2a-8c51-4cd1-89f0-23d84a7c61ed name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.242127864Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9b22b7f24f1449861f254aea709cfcb21aecd8231d265d09aee8f99af215aa53,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:22ae12dfd7dcfb1b7a929bd4eba3405464b9e8439f62719569c136b6f3388521],Size_:1123099489,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=620e9b2a-8c51-4cd1-89f0-23d84a7c61ed name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.244147910Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:22ae12dfd7dcfb1b7a929bd4eba3405464b9e8439f62719569c136b6f3388521" id=9df89bfc-0930-40c1-b226-25d94f2ad779 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.246184541Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9b22b7f24f1449861f254aea709cfcb21aecd8231d265d09aee8f99af215aa53,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:22ae12dfd7dcfb1b7a929bd4eba3405464b9e8439f62719569c136b6f3388521],Size_:1123099489,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=9df89bfc-0930-40c1-b226-25d94f2ad779 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.247447063Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-node-qc5bl/ovn-acl-logging" id=a67350ca-161c-4336-bc01-e264aead4519 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.248265307Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 15:43:17 ip-10-0-136-68 systemd[1]: Started crio-conmon-b509d8436ded4c4e37d521123b7454d90a82f068fb8c15c885117dcd35577654.scope.
Feb 23 15:43:17 ip-10-0-136-68 systemd[1]: Started libcontainer container b509d8436ded4c4e37d521123b7454d90a82f068fb8c15c885117dcd35577654.
Feb 23 15:43:17 ip-10-0-136-68 NetworkManager[1149]: [1677166997.3205] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/7)
Feb 23 15:43:17 ip-10-0-136-68 NetworkManager[1149]: [1677166997.3213] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Feb 23 15:43:17 ip-10-0-136-68 NetworkManager[1149]: [1677166997.3216] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/9)
Feb 23 15:43:17 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00070|bridge|INFO|bridge br-int: added interface br-int on port 65534
Feb 23 15:43:17 ip-10-0-136-68 kernel: device br-int entered promiscuous mode
Feb 23 15:43:17 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00071|bridge|INFO|bridge br-int: using datapath ID 00001e70f2fd6495
Feb 23 15:43:17 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00072|connmgr|INFO|br-int: added service controller "punix:/var/run/openvswitch/br-int.mgmt"
Feb 23 15:43:17 ip-10-0-136-68 systemd-udevd[2646]: Using default interface naming scheme 'rhel-8.0'.
Feb 23 15:43:17 ip-10-0-136-68 systemd-udevd[2646]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.342862699Z" level=info msg="Created container b509d8436ded4c4e37d521123b7454d90a82f068fb8c15c885117dcd35577654: openshift-ovn-kubernetes/ovnkube-node-qc5bl/ovn-acl-logging" id=a67350ca-161c-4336-bc01-e264aead4519 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.343227995Z" level=info msg="Starting container: b509d8436ded4c4e37d521123b7454d90a82f068fb8c15c885117dcd35577654" id=68b44506-5f07-4933-bfed-d5dcfaf4878f name=/runtime.v1.RuntimeService/StartContainer
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.349039715Z" level=info msg="Started container" PID=2635 containerID=b509d8436ded4c4e37d521123b7454d90a82f068fb8c15c885117dcd35577654 description=openshift-ovn-kubernetes/ovnkube-node-qc5bl/ovn-acl-logging id=68b44506-5f07-4933-bfed-d5dcfaf4878f name=/runtime.v1.RuntimeService/StartContainer sandboxID=324987fc5946df3d7849b3d1f0580a276e1156c9514ce8a7c0f7f829492b3e32
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.358846404Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=25b789ee-68c5-47c1-8760-1f499939e15f name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.359030024Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3 not found" id=25b789ee-68c5-47c1-8760-1f499939e15f name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.359784759Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=1dbae23b-3079-4eaf-a8cb-6aa230c046c2 name=/runtime.v1.ImageService/PullImage
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.361119972Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3\""
Feb 23 15:43:17 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:17.489001 2125 kubelet.go:2396] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 23 15:43:17 ip-10-0-136-68 systemd[1]: Started crio-conmon-29882cfc4e7f08a2fc37c946bff27fcbd17f38017f7f183aa6268a1121e4478d.scope.
Feb 23 15:43:17 ip-10-0-136-68 systemd[1]: Started crio-conmon-4820220c599fd37621891944ed9fe141cf1c5d64afe7fb4083404f96a1f62593.scope.
Feb 23 15:43:17 ip-10-0-136-68 systemd[1]: Started crio-conmon-901a63c3ed3ab8ea539d1972e9b14545869b9d69cc62bd62d8fa69b3762f3583.scope.
Feb 23 15:43:17 ip-10-0-136-68 systemd[1]: Started crio-conmon-dbee715502369174db42d97b2506ad563352efe30d5cb9c9ccd42d69d480bda2.scope.
Feb 23 15:43:17 ip-10-0-136-68 systemd[1]: Started crio-conmon-0d96ff9d4729d98a0282336bb6314c646dd47938255774920eae33e5bda1f10d.scope.
Feb 23 15:43:17 ip-10-0-136-68 systemd[1]: Started libcontainer container 29882cfc4e7f08a2fc37c946bff27fcbd17f38017f7f183aa6268a1121e4478d.
Feb 23 15:43:17 ip-10-0-136-68 systemd[1]: Started libcontainer container 4820220c599fd37621891944ed9fe141cf1c5d64afe7fb4083404f96a1f62593.
Feb 23 15:43:17 ip-10-0-136-68 systemd[1]: Started libcontainer container dbee715502369174db42d97b2506ad563352efe30d5cb9c9ccd42d69d480bda2.
Feb 23 15:43:17 ip-10-0-136-68 systemd[1]: Started libcontainer container 901a63c3ed3ab8ea539d1972e9b14545869b9d69cc62bd62d8fa69b3762f3583.
Feb 23 15:43:17 ip-10-0-136-68 systemd[1]: Started libcontainer container 0d96ff9d4729d98a0282336bb6314c646dd47938255774920eae33e5bda1f10d.
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.635874983Z" level=info msg="Created container 29882cfc4e7f08a2fc37c946bff27fcbd17f38017f7f183aa6268a1121e4478d: openshift-multus/multus-gr76d/kube-multus" id=f1c66fe5-e026-4cf8-9f54-45a071236b8b name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.636377655Z" level=info msg="Starting container: 29882cfc4e7f08a2fc37c946bff27fcbd17f38017f7f183aa6268a1121e4478d" id=ce40828f-a822-4251-9813-d5ff62e916be name=/runtime.v1.RuntimeService/StartContainer
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.639577115Z" level=info msg="Created container dbee715502369174db42d97b2506ad563352efe30d5cb9c9ccd42d69d480bda2: openshift-dns/node-resolver-pgc9j/dns-node-resolver" id=38d6cc8a-ce82-467c-9de1-985fb218dc3a name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.639878773Z" level=info msg="Starting container: dbee715502369174db42d97b2506ad563352efe30d5cb9c9ccd42d69d480bda2" id=c71f9824-fdcb-46cf-bd8e-6c113ea5d096 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.641097845Z" level=info msg="Created container 4820220c599fd37621891944ed9fe141cf1c5d64afe7fb4083404f96a1f62593: openshift-image-registry/node-ca-wdtzq/node-ca" id=fc586bde-0744-4380-9bba-67dd60cc00f7 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.641388073Z" level=info msg="Starting container: 4820220c599fd37621891944ed9fe141cf1c5d64afe7fb4083404f96a1f62593" id=d1fcdecf-53ce-4b2c-8046-22535af9c988 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.644336794Z" level=info msg="Started container" PID=2737 containerID=29882cfc4e7f08a2fc37c946bff27fcbd17f38017f7f183aa6268a1121e4478d description=openshift-multus/multus-gr76d/kube-multus id=ce40828f-a822-4251-9813-d5ff62e916be name=/runtime.v1.RuntimeService/StartContainer sandboxID=2d4ebea480481fe0194b5656ac47e06df6cd5d12530000111fd05928c843d523
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.646067896Z" level=info msg="Created container 0d96ff9d4729d98a0282336bb6314c646dd47938255774920eae33e5bda1f10d: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4/csi-driver" id=2bdab6ea-97c8-4a97-8e66-2ecea0c44cdf name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.646787596Z" level=info msg="Starting container: 0d96ff9d4729d98a0282336bb6314c646dd47938255774920eae33e5bda1f10d" id=315a62c7-584a-49b7-9403-630b979843ed name=/runtime.v1.RuntimeService/StartContainer
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.652245767Z" level=info msg="Started container" PID=2751 containerID=dbee715502369174db42d97b2506ad563352efe30d5cb9c9ccd42d69d480bda2 description=openshift-dns/node-resolver-pgc9j/dns-node-resolver id=c71f9824-fdcb-46cf-bd8e-6c113ea5d096 name=/runtime.v1.RuntimeService/StartContainer sandboxID=47661104fee69cd1b9061426289cf385f5b6d7911621b551126dbbdb3ae0f1bb
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.661506091Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/upgrade_300d5b01-d329-434b-a7c0-66054e1db8a6\""
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.661805136Z" level=info msg="Updated default CNI network name to "
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.662967186Z" level=info msg="Started container" PID=2769 containerID=0d96ff9d4729d98a0282336bb6314c646dd47938255774920eae33e5bda1f10d description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4/csi-driver id=315a62c7-584a-49b7-9403-630b979843ed name=/runtime.v1.RuntimeService/StartContainer sandboxID=19bce822018f6e51c5cbfd0e2e3748745954602abb6a774120b1f9e6f36281ae
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.664338754Z" level=info msg="Started container" PID=2744 containerID=4820220c599fd37621891944ed9fe141cf1c5d64afe7fb4083404f96a1f62593 description=openshift-image-registry/node-ca-wdtzq/node-ca id=d1fcdecf-53ce-4b2c-8046-22535af9c988 name=/runtime.v1.RuntimeService/StartContainer sandboxID=01ac120e6f0fdd3040e8bdaa8e582520e75a16d62910ceec0a560196072d627a
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.666036452Z" level=info msg="Created container 901a63c3ed3ab8ea539d1972e9b14545869b9d69cc62bd62d8fa69b3762f3583: openshift-multus/multus-additional-cni-plugins-p9nj2/egress-router-binary-copy" id=4cb70e97-d894-4b02-bdca-5ccf6067ca36 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.668348651Z" level=info msg="Starting container: 901a63c3ed3ab8ea539d1972e9b14545869b9d69cc62bd62d8fa69b3762f3583" id=956de970-d521-4b3e-87da-ce3e17a3d2d3 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.677775534Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:211d1573308c92367f9e7e0e6ffb06257748dc81666b0629e8c210a172f13629" id=2b2768f4-e76b-4fe3-83ca-00b91a78001c name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.677984520Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:211d1573308c92367f9e7e0e6ffb06257748dc81666b0629e8c210a172f13629 not found" id=2b2768f4-e76b-4fe3-83ca-00b91a78001c name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.678602014Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:211d1573308c92367f9e7e0e6ffb06257748dc81666b0629e8c210a172f13629" id=dcd12615-6309-4595-afeb-a719d1dcee02 name=/runtime.v1.ImageService/PullImage
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.680059345Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:211d1573308c92367f9e7e0e6ffb06257748dc81666b0629e8c210a172f13629\""
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.689190878Z" level=info msg="Started container" PID=2762 containerID=901a63c3ed3ab8ea539d1972e9b14545869b9d69cc62bd62d8fa69b3762f3583 description=openshift-multus/multus-additional-cni-plugins-p9nj2/egress-router-binary-copy id=956de970-d521-4b3e-87da-ce3e17a3d2d3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c4770f7d080720c7718c7bc24396b3024d3f2c4a814b8d4e347d5deb8d6959b6
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.690413800Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/upgrade_208d8474-95a2-47f0-84f3-5316acf8f1a6\""
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.690509763Z" level=info msg="Updated default CNI network name to "
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.694513564Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/multus\""
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.694543978Z" level=info msg="Updated default CNI network name to "
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.695888301Z" level=info msg="CNI monitoring event REMOVE \"/var/lib/cni/bin/upgrade_300d5b01-d329-434b-a7c0-66054e1db8a6\""
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.705489028Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/egress-router\""
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.705518970Z" level=info msg="Updated default CNI network name to "
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.707597024Z" level=info msg="CNI monitoring event REMOVE \"/var/lib/cni/bin/upgrade_208d8474-95a2-47f0-84f3-5316acf8f1a6\""
Feb 23 15:43:17 ip-10-0-136-68 systemd[1]: crio-901a63c3ed3ab8ea539d1972e9b14545869b9d69cc62bd62d8fa69b3762f3583.scope: Succeeded.
Feb 23 15:43:17 ip-10-0-136-68 systemd[1]: crio-901a63c3ed3ab8ea539d1972e9b14545869b9d69cc62bd62d8fa69b3762f3583.scope: Consumed 38ms CPU time
Feb 23 15:43:17 ip-10-0-136-68 systemd[1]: crio-conmon-901a63c3ed3ab8ea539d1972e9b14545869b9d69cc62bd62d8fa69b3762f3583.scope: Succeeded.
Feb 23 15:43:17 ip-10-0-136-68 systemd[1]: crio-conmon-901a63c3ed3ab8ea539d1972e9b14545869b9d69cc62bd62d8fa69b3762f3583.scope: Consumed 18ms CPU time
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.765081782Z" level=info msg="CNI monitoring event CREATE \"/etc/kubernetes/cni/net.d/multus.d\""
Feb 23 15:43:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:17.765137385Z" level=info msg="Updated default CNI network name to "
Feb 23 15:43:18 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:18.035001 2125 generic.go:296] "Generic (PLEG): container finished" podID=2c47bc3e-0247-4d47-80e3-c168262e7976 containerID="901a63c3ed3ab8ea539d1972e9b14545869b9d69cc62bd62d8fa69b3762f3583" exitCode=0
Feb 23 15:43:18 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:18.035057 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p9nj2" event=&{ID:2c47bc3e-0247-4d47-80e3-c168262e7976 Type:ContainerDied Data:901a63c3ed3ab8ea539d1972e9b14545869b9d69cc62bd62d8fa69b3762f3583}
Feb 23 15:43:18 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:18.035446332Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c4eebf870d0fd0d64a7a9238dc6121ad2810e000fbfd3effa05b637800405f63" id=b33ae3fb-bb26-4572-a220-4bea638ba4af name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:18 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:18.035634856Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c4eebf870d0fd0d64a7a9238dc6121ad2810e000fbfd3effa05b637800405f63 not found" id=b33ae3fb-bb26-4572-a220-4bea638ba4af name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:18 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:18.036061569Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c4eebf870d0fd0d64a7a9238dc6121ad2810e000fbfd3effa05b637800405f63" id=6b0fa843-360c-41aa-a4eb-ccc41963b27d name=/runtime.v1.ImageService/PullImage
Feb 23 15:43:18 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:18.036620 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-bjpgx" event=&{ID:07267a40-e316-4a88-91a5-11bc06672f23 Type:ContainerStarted Data:2b26cbfc007385f29997882b899af5fbb6cc332ce1ef7773d98623c961bfacf4}
Feb 23 15:43:18 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:18.037121274Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c4eebf870d0fd0d64a7a9238dc6121ad2810e000fbfd3effa05b637800405f63\""
Feb 23 15:43:18 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:18.037961 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-wdtzq" event=&{ID:ecd261a9-4d88-4e3d-aa47-803a685b6569 Type:ContainerStarted Data:4820220c599fd37621891944ed9fe141cf1c5d64afe7fb4083404f96a1f62593}
Feb 23 15:43:18 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:18.039236 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" event=&{ID:409b8d00-553f-43cb-8805-64a5931be933 Type:ContainerStarted Data:b509d8436ded4c4e37d521123b7454d90a82f068fb8c15c885117dcd35577654}
Feb 23 15:43:18 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:18.039256 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" event=&{ID:409b8d00-553f-43cb-8805-64a5931be933 Type:ContainerStarted Data:7ed25ccda440b1407317cffea802770d961748c145f541fad8b075aa9818b84d}
Feb 23 15:43:18 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:18.040103 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d5wlc" event=&{ID:b97e7fe5-fe52-4769-bb52-fc233e05c05e Type:ContainerStarted Data:532060bd7464ded47ac6c220512d7ce160f4d1930ce612a7fec9bd1c797c50c4}
Feb 23 15:43:18 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:18.040782 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-pgc9j" event=&{ID:507b846f-eb8a-4ca3-9d5f-e4d9f18eca32 Type:ContainerStarted Data:dbee715502369174db42d97b2506ad563352efe30d5cb9c9ccd42d69d480bda2}
Feb 23 15:43:18 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:18.041920 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gr76d" event=&{ID:ffd2cee3-1bae-4941-8015-2b3ade383d85 Type:ContainerStarted Data:29882cfc4e7f08a2fc37c946bff27fcbd17f38017f7f183aa6268a1121e4478d}
Feb 23 15:43:18 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:18.042508 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4" event=&{ID:6d75c369-887c-42d2-94c1-40cd36f882c3 Type:ContainerStarted Data:0d96ff9d4729d98a0282336bb6314c646dd47938255774920eae33e5bda1f10d}
Feb 23 15:43:18 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:18.241625224Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3\""
Feb 23 15:43:18 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:18.394630 2125 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hc5d" podUID=9cd26ba5-46e4-40b5-81e6-74079153d58d
Feb 23 15:43:18 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:18.394774 2125 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-b2mxx" podUID=5acce570-9f3b-4dab-9fed-169a4c110f8c
Feb 23 15:43:18 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:18.507807461Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:211d1573308c92367f9e7e0e6ffb06257748dc81666b0629e8c210a172f13629\""
Feb 23 15:43:18 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:18.921731694Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c4eebf870d0fd0d64a7a9238dc6121ad2810e000fbfd3effa05b637800405f63\""
Feb 23 15:43:20 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:20.394664 2125 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-b2mxx" podUID=5acce570-9f3b-4dab-9fed-169a4c110f8c
Feb 23 15:43:20 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:20.394693 2125 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hc5d" podUID=9cd26ba5-46e4-40b5-81e6-74079153d58d
Feb 23 15:43:20 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:20.513507755Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=1dbae23b-3079-4eaf-a8cb-6aa230c046c2 name=/runtime.v1.ImageService/PullImage
Feb 23 15:43:20 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:20.514354193Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=0b5aeb55-ddfc-4af5-a73f-50c2e41d5562 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:20 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:20.515921651Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=0b5aeb55-ddfc-4af5-a73f-50c2e41d5562 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:20 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:20.516829703Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-node-qc5bl/kube-rbac-proxy" id=3a836b70-b9a3-4915-8539-49d981fb6969 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:43:20 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:20.517001393Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 15:43:20 ip-10-0-136-68 systemd[1]: Started crio-conmon-c5f7cc8140aee5330785bd4d197373455bff7c22bbd7f0840b2113148bb0f33b.scope.
Feb 23 15:43:20 ip-10-0-136-68 systemd[1]: Started libcontainer container c5f7cc8140aee5330785bd4d197373455bff7c22bbd7f0840b2113148bb0f33b.
Feb 23 15:43:20 ip-10-0-136-68 systemd[1]: run-runc-c5f7cc8140aee5330785bd4d197373455bff7c22bbd7f0840b2113148bb0f33b-runc.zo0zbS.mount: Succeeded.
Feb 23 15:43:20 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:20.858341239Z" level=info msg="Created container c5f7cc8140aee5330785bd4d197373455bff7c22bbd7f0840b2113148bb0f33b: openshift-ovn-kubernetes/ovnkube-node-qc5bl/kube-rbac-proxy" id=3a836b70-b9a3-4915-8539-49d981fb6969 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:43:20 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:20.858867436Z" level=info msg="Starting container: c5f7cc8140aee5330785bd4d197373455bff7c22bbd7f0840b2113148bb0f33b" id=2b857273-ef68-4f39-af73-a856f1a6ffa2 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 15:43:20 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:20.866855341Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:211d1573308c92367f9e7e0e6ffb06257748dc81666b0629e8c210a172f13629" id=dcd12615-6309-4595-afeb-a719d1dcee02 name=/runtime.v1.ImageService/PullImage
Feb 23 15:43:20 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:20.869152027Z" level=info msg="Started container" PID=3079 containerID=c5f7cc8140aee5330785bd4d197373455bff7c22bbd7f0840b2113148bb0f33b description=openshift-ovn-kubernetes/ovnkube-node-qc5bl/kube-rbac-proxy id=2b857273-ef68-4f39-af73-a856f1a6ffa2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=324987fc5946df3d7849b3d1f0580a276e1156c9514ce8a7c0f7f829492b3e32
Feb 23 15:43:20 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:20.870346372Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:211d1573308c92367f9e7e0e6ffb06257748dc81666b0629e8c210a172f13629" id=ef1aea90-a36e-4a60-a219-e320179938c9 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:20 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:20.874339564Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2eecc69f1e9928cfda977963566305773afcd02e4e8704a5b84734739604a8ea,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:211d1573308c92367f9e7e0e6ffb06257748dc81666b0629e8c210a172f13629],Size_:366234876,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=ef1aea90-a36e-4a60-a219-e320179938c9 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:20 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:20.875194533Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4/csi-node-driver-registrar" id=31789408-4257-4e63-ae0a-f58740e6a0dc name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:43:20 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:20.875340144Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 15:43:20 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:20.928404365Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=888d70ac-08b1-4704-933d-94abbca76ae1 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:20 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:20.938267343Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=888d70ac-08b1-4704-933d-94abbca76ae1 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:20 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:20.939143097Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=d5773b39-738f-4140-bc78-a3659e196751 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:20 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:20.940576221Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=d5773b39-738f-4140-bc78-a3659e196751 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:20 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:20.941373245Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-node-qc5bl/kube-rbac-proxy-ovn-metrics" id=4935137b-b4f2-4ed1-93c0-422f8013fa0c name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:43:20 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:20.941491359Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 15:43:21 ip-10-0-136-68 systemd[1]: Started crio-conmon-6e7a97653d824ec5b01b77eaedebcfa9718e07078a3f1d75a5ed3092cfb1f83f.scope.
Feb 23 15:43:21 ip-10-0-136-68 systemd[1]: Started crio-conmon-35ef8eadfa1d96a217874ea7aa4a8d3d2dbdf7cd34bb8073fd6cfb47078dd3b3.scope.
Feb 23 15:43:21 ip-10-0-136-68 systemd[1]: Started libcontainer container 6e7a97653d824ec5b01b77eaedebcfa9718e07078a3f1d75a5ed3092cfb1f83f.
Feb 23 15:43:21 ip-10-0-136-68 systemd[1]: Started libcontainer container 35ef8eadfa1d96a217874ea7aa4a8d3d2dbdf7cd34bb8073fd6cfb47078dd3b3.
Feb 23 15:43:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:21.047591 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" event=&{ID:409b8d00-553f-43cb-8805-64a5931be933 Type:ContainerStarted Data:c5f7cc8140aee5330785bd4d197373455bff7c22bbd7f0840b2113148bb0f33b}
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.273326986Z" level=info msg="Created container 35ef8eadfa1d96a217874ea7aa4a8d3d2dbdf7cd34bb8073fd6cfb47078dd3b3: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4/csi-node-driver-registrar" id=31789408-4257-4e63-ae0a-f58740e6a0dc name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.273328818Z" level=info msg="Created container 6e7a97653d824ec5b01b77eaedebcfa9718e07078a3f1d75a5ed3092cfb1f83f: openshift-ovn-kubernetes/ovnkube-node-qc5bl/kube-rbac-proxy-ovn-metrics" id=4935137b-b4f2-4ed1-93c0-422f8013fa0c name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.273862319Z" level=info msg="Starting container: 35ef8eadfa1d96a217874ea7aa4a8d3d2dbdf7cd34bb8073fd6cfb47078dd3b3" id=975a3f99-4360-4d33-9fee-567c4224d6d5 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.274062483Z" level=info msg="Starting container: 6e7a97653d824ec5b01b77eaedebcfa9718e07078a3f1d75a5ed3092cfb1f83f" id=6bb524df-2370-4759-84f5-2a0d32acad41 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.285206995Z" level=info msg="Started container" PID=3142 containerID=6e7a97653d824ec5b01b77eaedebcfa9718e07078a3f1d75a5ed3092cfb1f83f description=openshift-ovn-kubernetes/ovnkube-node-qc5bl/kube-rbac-proxy-ovn-metrics id=6bb524df-2370-4759-84f5-2a0d32acad41 name=/runtime.v1.RuntimeService/StartContainer sandboxID=324987fc5946df3d7849b3d1f0580a276e1156c9514ce8a7c0f7f829492b3e32
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.286736206Z" level=info msg="Started container" PID=3143 containerID=35ef8eadfa1d96a217874ea7aa4a8d3d2dbdf7cd34bb8073fd6cfb47078dd3b3 description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4/csi-node-driver-registrar id=975a3f99-4360-4d33-9fee-567c4224d6d5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=19bce822018f6e51c5cbfd0e2e3748745954602abb6a774120b1f9e6f36281ae
Feb 23 15:43:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:21.300854 2125 plugin_watcher.go:203] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock"
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.335996409Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:22ae12dfd7dcfb1b7a929bd4eba3405464b9e8439f62719569c136b6f3388521" id=f17ad815-226f-40ef-abaf-14463d9d188b name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.338154425Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9b22b7f24f1449861f254aea709cfcb21aecd8231d265d09aee8f99af215aa53,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:22ae12dfd7dcfb1b7a929bd4eba3405464b9e8439f62719569c136b6f3388521],Size_:1123099489,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=f17ad815-226f-40ef-abaf-14463d9d188b name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.339158745Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:22ae12dfd7dcfb1b7a929bd4eba3405464b9e8439f62719569c136b6f3388521" id=5e821e77-7483-4020-83f8-4007cf28bcfe name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.341160965Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9b22b7f24f1449861f254aea709cfcb21aecd8231d265d09aee8f99af215aa53,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:22ae12dfd7dcfb1b7a929bd4eba3405464b9e8439f62719569c136b6f3388521],Size_:1123099489,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=5e821e77-7483-4020-83f8-4007cf28bcfe name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.342548945Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-node-qc5bl/ovnkube-node" id=7002e626-e810-4e75-b6cd-7fc21ef04e89 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.342648135Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.357793053Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21bbb5586c93ddc2b847c2a4971c6e0264ab6ea641b4d4079c863ce4f87b3b3d" id=55f8f3fd-80e8-464d-b046-cbe6add58777 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:21.490095 2125 reconciler.go:164] "OperationExecutor.RegisterPlugin started" plugin={SocketPath:/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock Timestamp:2023-02-23 15:43:21.300879472 +0000 UTC m=+40.003409143 Handler: Name:}
Feb 23 15:43:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:21.492806 2125 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: ebs.csi.aws.com endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock versions: 1.0.0
Feb 23 15:43:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:21.492846 2125 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: ebs.csi.aws.com at endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.516032947Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21bbb5586c93ddc2b847c2a4971c6e0264ab6ea641b4d4079c863ce4f87b3b3d not found" id=55f8f3fd-80e8-464d-b046-cbe6add58777 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.517235660Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21bbb5586c93ddc2b847c2a4971c6e0264ab6ea641b4d4079c863ce4f87b3b3d" id=83cf2b4d-5447-491e-a21d-1b93cbd5c56c name=/runtime.v1.ImageService/PullImage
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.519235434Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21bbb5586c93ddc2b847c2a4971c6e0264ab6ea641b4d4079c863ce4f87b3b3d\""
Feb 23 15:43:21 ip-10-0-136-68 systemd[1]: Started crio-conmon-434bc76ecbe42021fc40f1c0739badd563ca7f187e1c106f228683f5b566bff3.scope.
Feb 23 15:43:21 ip-10-0-136-68 systemd[1]: Started libcontainer container 434bc76ecbe42021fc40f1c0739badd563ca7f187e1c106f228683f5b566bff3.
Feb 23 15:43:21 ip-10-0-136-68 systemd[1]: run-runc-434bc76ecbe42021fc40f1c0739badd563ca7f187e1c106f228683f5b566bff3-runc.fLP76N.mount: Succeeded.
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.808744943Z" level=info msg="Created container 434bc76ecbe42021fc40f1c0739badd563ca7f187e1c106f228683f5b566bff3: openshift-ovn-kubernetes/ovnkube-node-qc5bl/ovnkube-node" id=7002e626-e810-4e75-b6cd-7fc21ef04e89 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.809471230Z" level=info msg="Starting container: 434bc76ecbe42021fc40f1c0739badd563ca7f187e1c106f228683f5b566bff3" id=2ae832c1-7494-47c4-b525-34f292f7cecd name=/runtime.v1.RuntimeService/StartContainer
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.818373391Z" level=info msg="Started container" PID=3226 containerID=434bc76ecbe42021fc40f1c0739badd563ca7f187e1c106f228683f5b566bff3 description=openshift-ovn-kubernetes/ovnkube-node-qc5bl/ovnkube-node id=2ae832c1-7494-47c4-b525-34f292f7cecd name=/runtime.v1.RuntimeService/StartContainer sandboxID=324987fc5946df3d7849b3d1f0580a276e1156c9514ce8a7c0f7f829492b3e32
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.818598225Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.818666390Z" level=info msg="Updated default CNI network name to "
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.818705497Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.818725416Z" level=info msg="Updated default CNI network name to "
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.818829091Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.818910467Z" level=info msg="Updated default CNI network name to "
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.818927784Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.818946228Z" level=info msg="Updated default CNI network name to "
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.818961528Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.818978810Z" level=info msg="Updated default CNI network name to "
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.819060865Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.819124815Z" level=info msg="Updated default CNI network name to "
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.819172788Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.819238266Z" level=info msg="Updated default CNI network name to "
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.819316419Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.819378945Z" level=info msg="Updated default CNI network name to "
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.819506355Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.819541791Z" level=info msg="Updated default CNI network name to "
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.819556603Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.819573173Z" level=info msg="Updated default CNI network name to "
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.819587614Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.819607508Z" level=info msg="Updated default CNI network name to "
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.819688619Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.819745994Z" level=info msg="Updated default CNI network name to "
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.819789920Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.819839569Z" level=info msg="Updated default CNI network name to "
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.819885069Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.819935916Z" level=info msg="Updated default CNI network name to "
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.819987622Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.820040053Z" level=info msg="Updated default CNI network name to "
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.820089009Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.820142796Z" level=info msg="Updated default CNI network name to "
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.820186215Z" level=info msg="CNI monitoring event WRITE
\"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.820233888Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.820278799Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.820355898Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.820408941Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.820459048Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.820572866Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.820645114Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.820696834Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.820752777Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.820773450Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.820789917Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.820817063Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.820835560Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 
ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.820876995Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.820896433Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.820961565Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.820988838Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.821018905Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.821037496Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.821082100Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.821105653Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.821160328Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.821183360Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.821243692Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.821267777Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.821328624Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: 
time="2023-02-23 15:43:21.821352972Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.821383598Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.821402267Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.821458236Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.821481983Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.821535423Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.821561277Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.821616924Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.821641588Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.821715406Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.821740496Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.821791145Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.821825703Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.821857229Z" level=info msg="CNI 
monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.821879922Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.821921701Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.821944092Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.821983282Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.822005437Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.822058334Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.822082250Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.822136997Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.822159804Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.822208560Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.822230539Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.822315547Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.822353230Z" level=info msg="Updated default CNI network 
name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.822413983Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.822441950Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.822488633Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.822514252Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.822564526Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.822590814Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.822660409Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.822721295Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.822766516Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.822822595Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.822874235Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.822924493Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.823339775Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 
ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.823452771Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.823503600Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.823561600Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.823610794Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.823665908Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.823715760Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.823770848Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.823810538Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.823867812Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.823915501Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.823961902Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.824016856Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.824073811Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 
15:43:21.824115942Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.824165596Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.824204356Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.824259200Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.824322330Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.824382380Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.824423333Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.824478280Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.824517719Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.824565243Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.824611747Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.824659848Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.824708557Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.824761210Z" 
level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.824811450Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.824859685Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.824904626Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.824958215Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.824999755Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.825049574Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.825099669Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.825152815Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.825192199Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.825246078Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.825312304Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.825372302Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.825421079Z" level=info msg="CNI monitoring event WRITE 
\"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.825474225Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.825519451Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.825568947Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.825614351Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.825717657Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.825767736Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.825818845Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.825858307Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.825908519Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.825952499Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.826001361Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.826047730Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.826102028Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 
ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.826147171Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.826199920Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.826245966Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.826315053Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.826368243Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.826418499Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.826459355Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.826506600Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.826556498Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.826605249Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.826626961Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.826645311Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.826733454Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: 
time="2023-02-23 15:43:21.826944229Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.826965839Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.826986129Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.827000800Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.827016405Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.827092795Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.827116825Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.827182098Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.827239481Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.827309762Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.827366621Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.827409551Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.827449238Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.827469397Z" level=info msg="CNI 
monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.827486628Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.827514030Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.827545403Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.827625469Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.827712924Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.827732383Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.827752767Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.827769648Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.827786385Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.827834053Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.827858156Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.827894140Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.827916783Z" level=info msg="Updated default CNI network 
name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.827958849Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.827978618Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.828020374Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.828040535Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.828101182Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.828123114Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.828138633Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.828157176Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.828212777Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.828235397Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.828300937Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.828324298Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.828401746Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 
ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.828461982Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.828485631Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.828505127Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 
ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.837362222Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.837385024Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.837428333Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.837451506Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.837499218Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.837521319Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.837571351Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.837594879Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.837637508Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.837670348Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.837697488Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.837716299Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.837752261Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: 
time="2023-02-23 15:43:21.837775754Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.837816631Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.837838582Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.837918178Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.837940267Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.837953361Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.837970891Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.838000350Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.838018177Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.838065163Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.838086703Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.838127189Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.838150015Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.838189945Z" level=info msg="CNI 
monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.838214442Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.838254976Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.838277632Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.838374199Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.838398066Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.838411608Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.838429305Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.838462974Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.838485955Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.838526944Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.838551674Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.838590036Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.838613452Z" level=info msg="Updated default CNI network 
name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.838654025Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.838676373Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.838755712Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.838778290Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.838791579Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.838808339Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.838833631Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.838851405Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.838898563Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.838920433Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.838964248Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.838986469Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.839027077Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 
ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.839049222Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.839089319Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.839112138Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.839192537Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.839215447Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.839228694Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.839247102Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.839272100Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.839332069Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.839350022Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.839365270Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.839414984Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.839429025Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 
15:43:21.839481618Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.839499967Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.839545796Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.839563495Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.839608463Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.839626619Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.839668650Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.839687096Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.839732767Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.839756455Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.839794799Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.839818539Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.839859087Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.839881559Z" 
level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.839922928Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.839945020Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.839986752Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.840009381Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.840053026Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.840074998Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.840155177Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.840178134Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.840191172Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.840207893Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.840240373Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.840260320Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.840318142Z" level=info msg="CNI monitoring event WRITE 
\"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.840341097Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.840399136Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.840449054Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.840489868Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.840538293Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.840574841Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.840623811Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.840663671Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.840707279Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.840747392Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.840791962Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.840834106Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.840878247Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 
ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.840921634Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.840972046Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.841013097Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.841057373Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.841097823Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.841140772Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.841183328Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.841229701Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.841270257Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.841334722Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.841382502Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.841427257Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.841468330Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: 
time="2023-02-23 15:43:21.841513926Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.841560793Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.841605271Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.841645664Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.841700311Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.841736585Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.841780213Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.841818914Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.841865018Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.841904355Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.841947391Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.841989283Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.842034996Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.842077068Z" level=info msg="CNI 
monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.842121971Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.842157404Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.842177969Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.842227302Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.842267481Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.842306633Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.842329153Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.842398471Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.842421796Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.842435616Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.842640219Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.842663181Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.842681650Z" level=info msg="Updated default CNI network 
name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.842695359Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.842725965Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.842770960Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.842792235Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.842835938Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.842856454Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.842900756Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.842921988Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.842961060Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.843000366Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.843028938Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.843047141Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.843094474Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 
ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.843115289Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.843157191Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.843177377Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.843220374Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.843241345Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.843296911Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.843320790Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.843368813Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.843390209Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.843428509Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.843465592Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.843489028Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.843506891Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 
15:43:21.843561286Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.843582294Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.843623625Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.843645310Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.843682469Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.843702395Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.843748776Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.843771177Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.843814213Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.843835467Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.843877882Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.843898620Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.843943000Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.843963044Z" 
level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.844006776Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.844027706Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.844070999Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.844092265Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.844135786Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.844156006Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.844211860Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.844232304Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.844265443Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.844301039Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.844351751Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.844372431Z" level=info msg="Updated default CNI network name to " Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.844420598Z" level=info msg="CNI monitoring event WRITE 
\"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.844440491Z" level=info msg="Updated default CNI network name to "
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.844489511Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.844510527Z" level=info msg="Updated default CNI network name to "
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.844541962Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.844563531Z" level=info msg="Updated default CNI network name to "
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.844604563Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.844626919Z" level=info msg="Updated default CNI network name to "
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.844672896Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.844694187Z" level=info msg="Updated default CNI network name to "
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.844736309Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.844756755Z" level=info msg="Updated default CNI network name to "
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.844785508Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Feb 23 15:43:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:21.844806665Z" level=info msg="Updated default CNI network name to "
Feb 23 15:43:22 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:22.049985 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4" event=&{ID:6d75c369-887c-42d2-94c1-40cd36f882c3 Type:ContainerStarted Data:35ef8eadfa1d96a217874ea7aa4a8d3d2dbdf7cd34bb8073fd6cfb47078dd3b3}
Feb 23 15:43:22 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:22.051651 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" event=&{ID:409b8d00-553f-43cb-8805-64a5931be933 Type:ContainerStarted Data:434bc76ecbe42021fc40f1c0739badd563ca7f187e1c106f228683f5b566bff3}
Feb 23 15:43:22 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:22.051674 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" event=&{ID:409b8d00-553f-43cb-8805-64a5931be933 Type:ContainerStarted Data:6e7a97653d824ec5b01b77eaedebcfa9718e07078a3f1d75a5ed3092cfb1f83f}
Feb 23 15:43:22 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:22.051797 2125 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl"
Feb 23 15:43:22 ip-10-0-136-68 ovsdb-server[1062]: ovs|00029|stream_ssl|ERR|SSL_use_certificate_file: error:02001002:system library:fopen:No such file or directory
Feb 23 15:43:22 ip-10-0-136-68 ovsdb-server[1062]: ovs|00030|stream_ssl|ERR|SSL_use_PrivateKey_file: error:20074002:BIO routines:file_ctrl:system lib
Feb 23 15:43:22 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00073|stream_ssl|ERR|SSL_use_certificate_file: error:02001002:system library:fopen:No such file or directory
Feb 23 15:43:22 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00074|stream_ssl|ERR|SSL_use_PrivateKey_file: error:20074002:BIO routines:file_ctrl:system lib
Feb 23 15:43:22 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00075|stream_ssl|ERR|failed to load client certificates from /ovn-ca/ca-bundle.crt: error:140AD002:SSL routines:SSL_CTX_use_certificate_file:system lib
Feb 23 15:43:22 ip-10-0-136-68 NetworkManager[1149]: [1677167002.2170] manager: (ovn-b823f7-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Feb 23 15:43:22 ip-10-0-136-68 NetworkManager[1149]: [1677167002.2177] manager: (ovn-061a07-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/11)
Feb 23 15:43:22 ip-10-0-136-68 NetworkManager[1149]: [1677167002.2181] manager: (ovn-5a9c4f-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/12)
Feb 23 15:43:22 ip-10-0-136-68 NetworkManager[1149]: [1677167002.2185] manager: (ovn-7dfb31-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/13)
Feb 23 15:43:22 ip-10-0-136-68 NetworkManager[1149]: [1677167002.2245] manager: (ovn-k8s-mp0): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Feb 23 15:43:22 ip-10-0-136-68 NetworkManager[1149]: [1677167002.2249] manager: (ovn-k8s-mp0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/15)
Feb 23 15:43:22 ip-10-0-136-68 kernel: device genev_sys_6081 entered promiscuous mode
Feb 23 15:43:22 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00076|bridge|INFO|bridge br-int: added interface ovn-7dfb31-0 on port 1
Feb 23 15:43:22 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00077|bridge|INFO|bridge br-int: added interface ovn-5a9c4f-0 on port 2
Feb 23 15:43:22 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00078|bridge|INFO|bridge br-int: added interface ovn-061a07-0 on port 3
Feb 23 15:43:22 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00079|bridge|INFO|bridge br-int: added interface ovn-b823f7-0 on port 4
Feb 23 15:43:22 ip-10-0-136-68 systemd-udevd[3293]: Using default interface naming scheme 'rhel-8.0'.
Feb 23 15:43:22 ip-10-0-136-68 systemd-udevd[3293]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 23 15:43:22 ip-10-0-136-68 systemd-udevd[3293]: Could not generate persistent MAC address for genev_sys_6081: No such file or directory
Feb 23 15:43:22 ip-10-0-136-68 NetworkManager[1149]: [1677167002.2335] device (genev_sys_6081): carrier: link connected
Feb 23 15:43:22 ip-10-0-136-68 NetworkManager[1149]: [1677167002.2338] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/16)
Feb 23 15:43:22 ip-10-0-136-68 kernel: device ovn-k8s-mp0 entered promiscuous mode
Feb 23 15:43:22 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00080|netdev|WARN|failed to set MTU for network device ovn-k8s-mp0: No such device
Feb 23 15:43:22 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00081|bridge|INFO|bridge br-int: added interface ovn-k8s-mp0 on port 5
Feb 23 15:43:22 ip-10-0-136-68 systemd-udevd[3297]: Using default interface naming scheme 'rhel-8.0'.
Feb 23 15:43:22 ip-10-0-136-68 systemd-udevd[3297]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 23 15:43:22 ip-10-0-136-68 systemd-udevd[3297]: Could not generate persistent MAC address for ovn-k8s-mp0: No such file or directory
Feb 23 15:43:22 ip-10-0-136-68 NetworkManager[1149]: [1677167002.2640] device (ovn-k8s-mp0): carrier: link connected
Feb 23 15:43:22 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:22.362041008Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21bbb5586c93ddc2b847c2a4971c6e0264ab6ea641b4d4079c863ce4f87b3b3d\""
Feb 23 15:43:22 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:22.398737 2125 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hc5d" podUID=9cd26ba5-46e4-40b5-81e6-74079153d58d
Feb 23 15:43:22 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:22.398833 2125 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-b2mxx" podUID=5acce570-9f3b-4dab-9fed-169a4c110f8c
Feb 23 15:43:22 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:22.490834 2125 kubelet.go:2396] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 23 15:43:23 ip-10-0-136-68 NetworkManager[1149]: [1677167003.0080] manager: (patch-br-ex_ip-10-0-136-68.us-west-2.compute.internal-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/17)
Feb 23 15:43:23 ip-10-0-136-68 NetworkManager[1149]: [1677167003.0084] manager: (patch-br-int-to-br-ex_ip-10-0-136-68.us-west-2.compute.internal): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/18)
Feb 23 15:43:23 ip-10-0-136-68 NetworkManager[1149]: [1677167003.0088] manager: (patch-br-ex_ip-10-0-136-68.us-west-2.compute.internal-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Feb 23 15:43:23 ip-10-0-136-68 NetworkManager[1149]: [1677167003.0092] manager: (patch-br-int-to-br-ex_ip-10-0-136-68.us-west-2.compute.internal): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/20)
Feb 23 15:43:23 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00082|bridge|INFO|bridge br-ex: added interface patch-br-ex_ip-10-0-136-68.us-west-2.compute.internal-to-br-int on port 2
Feb 23 15:43:23 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00083|bridge|INFO|bridge br-int: added interface patch-br-int-to-br-ex_ip-10-0-136-68.us-west-2.compute.internal on port 6
Feb 23 15:43:23 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00084|connmgr|INFO|br-ex<->unix#11: 28 flow_mods in the last 0 s (28 adds)
Feb 23 15:43:23 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:23.820003160Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c4eebf870d0fd0d64a7a9238dc6121ad2810e000fbfd3effa05b637800405f63" id=6b0fa843-360c-41aa-a4eb-ccc41963b27d name=/runtime.v1.ImageService/PullImage
Feb 23 15:43:23 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:23.821357086Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c4eebf870d0fd0d64a7a9238dc6121ad2810e000fbfd3effa05b637800405f63" id=5406f878-1eeb-40a3-8c0e-a42a95b8bbc8 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:23 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:23.822243219Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:329f0052933f8d4a512b68f715fe001d1d60ee1ef6897dd333ea86e4fd331fc7,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c4eebf870d0fd0d64a7a9238dc6121ad2810e000fbfd3effa05b637800405f63],Size_:574266870,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=5406f878-1eeb-40a3-8c0e-a42a95b8bbc8 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:23 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:23.822839983Z" level=info msg="Creating container: openshift-multus/multus-additional-cni-plugins-p9nj2/cni-plugins" id=175b5ee6-a3ea-4242-b0f4-d194589bfb40 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:43:23 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:23.822913087Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 15:43:23 ip-10-0-136-68 systemd[1]: Started crio-conmon-95fe9d9951a757ba360cddabd5537cd812b718bf1041b1e98378ffc26547f2b2.scope.
Feb 23 15:43:23 ip-10-0-136-68 systemd[1]: Started libcontainer container 95fe9d9951a757ba360cddabd5537cd812b718bf1041b1e98378ffc26547f2b2.
Feb 23 15:43:23 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:23.953606418Z" level=info msg="CNI monitoring event CREATE \"/etc/kubernetes/cni/net.d/00-multus.conf\""
Feb 23 15:43:23 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:23.963581043Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 15:43:23 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:23.963602222Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 15:43:23 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:23.963622963Z" level=info msg="CNI monitoring event WRITE \"/etc/kubernetes/cni/net.d/00-multus.conf\""
Feb 23 15:43:23 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:23.970602124Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 15:43:23 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:23.970620678Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 15:43:23 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:23.970630706Z" level=info msg="CNI monitoring event CHMOD \"/etc/kubernetes/cni/net.d/00-multus.conf\""
Feb 23 15:43:23 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:23.970643206Z" level=info msg="CNI monitoring event CHMOD \"/etc/kubernetes/cni/net.d/00-multus.conf\""
Feb 23 15:43:23 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:23.978400162Z" level=info msg="Created container 95fe9d9951a757ba360cddabd5537cd812b718bf1041b1e98378ffc26547f2b2: openshift-multus/multus-additional-cni-plugins-p9nj2/cni-plugins" id=175b5ee6-a3ea-4242-b0f4-d194589bfb40 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:43:23 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:23.978780162Z" level=info msg="Starting container: 95fe9d9951a757ba360cddabd5537cd812b718bf1041b1e98378ffc26547f2b2" id=bbceac0c-8597-4b57-ad0a-68b192940047 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 15:43:23 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:23.984334026Z" level=info msg="Started container" PID=3579 containerID=95fe9d9951a757ba360cddabd5537cd812b718bf1041b1e98378ffc26547f2b2 description=openshift-multus/multus-additional-cni-plugins-p9nj2/cni-plugins id=bbceac0c-8597-4b57-ad0a-68b192940047 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c4770f7d080720c7718c7bc24396b3024d3f2c4a814b8d4e347d5deb8d6959b6
Feb 23 15:43:23 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:23.988212942Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/upgrade_3b0664be-0500-4a32-ad45-3bcb30c98692\""
Feb 23 15:43:23 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:23.997333045Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 15:43:23 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:23.997352153Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.018540747Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/bandwidth\""
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.026634620Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.026659208Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.026673329Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/bridge\""
Feb 23 15:43:24 ip-10-0-136-68 systemd[1]: crio-95fe9d9951a757ba360cddabd5537cd812b718bf1041b1e98378ffc26547f2b2.scope: Succeeded.
Feb 23 15:43:24 ip-10-0-136-68 systemd[1]: crio-95fe9d9951a757ba360cddabd5537cd812b718bf1041b1e98378ffc26547f2b2.scope: Consumed 51ms CPU time
Feb 23 15:43:24 ip-10-0-136-68 systemd[1]: crio-conmon-95fe9d9951a757ba360cddabd5537cd812b718bf1041b1e98378ffc26547f2b2.scope: Succeeded.
Feb 23 15:43:24 ip-10-0-136-68 systemd[1]: crio-conmon-95fe9d9951a757ba360cddabd5537cd812b718bf1041b1e98378ffc26547f2b2.scope: Consumed 19ms CPU time
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.040566410Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.040585563Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.040595770Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/dhcp\""
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.048276223Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.048321272Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.048335552Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/firewall\""
Feb 23 15:43:24 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:24.055695 2125 generic.go:296] "Generic (PLEG): container finished" podID=2c47bc3e-0247-4d47-80e3-c168262e7976 containerID="95fe9d9951a757ba360cddabd5537cd812b718bf1041b1e98378ffc26547f2b2" exitCode=0
Feb 23 15:43:24 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:24.055744 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p9nj2" event=&{ID:2c47bc3e-0247-4d47-80e3-c168262e7976 Type:ContainerDied Data:95fe9d9951a757ba360cddabd5537cd812b718bf1041b1e98378ffc26547f2b2}
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.056386835Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc563779400e9013d40b1e6762a57e5177d0e62e28ecc372d5265e30a70f64c9" id=0ff640c0-ea34-4e62-9c94-3c05873e9363 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.056565988Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc563779400e9013d40b1e6762a57e5177d0e62e28ecc372d5265e30a70f64c9 not found" id=0ff640c0-ea34-4e62-9c94-3c05873e9363 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.056966890Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.056986637Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.057001433Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/host-device\""
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.057018726Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc563779400e9013d40b1e6762a57e5177d0e62e28ecc372d5265e30a70f64c9" id=3321cfd0-86c5-4245-8347-1a1034538e70 name=/runtime.v1.ImageService/PullImage
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.058193033Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc563779400e9013d40b1e6762a57e5177d0e62e28ecc372d5265e30a70f64c9\""
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.063226541Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.063242330Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.063250710Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/host-local\""
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.068779823Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.068796992Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.068805096Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/ipvlan\""
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.074883804Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.074904289Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.074916099Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/loopback\""
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.080743043Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.080761110Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.080769925Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/macvlan\""
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.086692151Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.086709706Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.086718253Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/portmap\""
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.092892955Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.092910247Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.092918884Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/ptp\""
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.099520641Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.099536845Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.099546011Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/sbr\""
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.106653337Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.106673545Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.106685565Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/static\""
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.115640408Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.115658284Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.115667357Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/tuning\""
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.122870210Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.122887485Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.122895414Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/vlan\""
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.131600111Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.131617143Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.131625870Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/vrf\""
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.140462312Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.140486011Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.140499699Z" level=info msg="CNI monitoring event REMOVE \"/var/lib/cni/bin/upgrade_3b0664be-0500-4a32-ad45-3bcb30c98692\""
Feb 23 15:43:24 ip-10-0-136-68 NetworkManager[1149]: [1677167004.2668] manager: (ovn-72cfee-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Feb 23 15:43:24 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00085|bridge|INFO|bridge br-int: added interface ovn-72cfee-0 on port 7
Feb 23 15:43:24 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:24.395265 2125 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hc5d" podUID=9cd26ba5-46e4-40b5-81e6-74079153d58d
Feb 23 15:43:24 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:24.395566 2125 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-b2mxx" podUID=5acce570-9f3b-4dab-9fed-169a4c110f8c
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.432645829Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21bbb5586c93ddc2b847c2a4971c6e0264ab6ea641b4d4079c863ce4f87b3b3d" id=83cf2b4d-5447-491e-a21d-1b93cbd5c56c name=/runtime.v1.ImageService/PullImage
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.433228705Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21bbb5586c93ddc2b847c2a4971c6e0264ab6ea641b4d4079c863ce4f87b3b3d" id=c970375e-6859-41a5-8099-98db60d8a5c0 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.434121797Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:40bae28f97f8f229b5a02594c733e50dcbce35d0113ede4c94c66a0320c493a8,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21bbb5586c93ddc2b847c2a4971c6e0264ab6ea641b4d4079c863ce4f87b3b3d],Size_:364222717,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=c970375e-6859-41a5-8099-98db60d8a5c0 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.434617309Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4/csi-liveness-probe" id=5055e867-e594-4b61-b643-56aff48042c5 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.434702116Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 15:43:24 ip-10-0-136-68 systemd[1]: Started crio-conmon-9cacd2022bb32e91792019f864eb5ec430ac26b4d3f2223bca19e54ed2cc3854.scope.
Feb 23 15:43:24 ip-10-0-136-68 systemd[1]: Started libcontainer container 9cacd2022bb32e91792019f864eb5ec430ac26b4d3f2223bca19e54ed2cc3854.
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.565692288Z" level=info msg="Created container 9cacd2022bb32e91792019f864eb5ec430ac26b4d3f2223bca19e54ed2cc3854: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4/csi-liveness-probe" id=5055e867-e594-4b61-b643-56aff48042c5 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.566037042Z" level=info msg="Starting container: 9cacd2022bb32e91792019f864eb5ec430ac26b4d3f2223bca19e54ed2cc3854" id=0810a673-3c16-4667-8f76-4feda0f281c8 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.570610561Z" level=info msg="Started container" PID=3791 containerID=9cacd2022bb32e91792019f864eb5ec430ac26b4d3f2223bca19e54ed2cc3854 description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4/csi-liveness-probe id=0810a673-3c16-4667-8f76-4feda0f281c8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=19bce822018f6e51c5cbfd0e2e3748745954602abb6a774120b1f9e6f36281ae
Feb 23 15:43:24 ip-10-0-136-68 systemd[1]: run-runc-9cacd2022bb32e91792019f864eb5ec430ac26b4d3f2223bca19e54ed2cc3854-runc.Lt5pob.mount: Succeeded.
Feb 23 15:43:24 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:24.933555142Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc563779400e9013d40b1e6762a57e5177d0e62e28ecc372d5265e30a70f64c9\""
Feb 23 15:43:25 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:25.058778 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4" event=&{ID:6d75c369-887c-42d2-94c1-40cd36f882c3 Type:ContainerStarted Data:9cacd2022bb32e91792019f864eb5ec430ac26b4d3f2223bca19e54ed2cc3854}
Feb 23 15:43:25 ip-10-0-136-68 root[3827]: machine-config-daemon[2269]: Update completed for config rendered-worker-bf4ec4b50e39e8c4372076d80a2ff138 and node has been successfully uncordoned
Feb 23 15:43:25 ip-10-0-136-68 logger[3828]: rendered-worker-bf4ec4b50e39e8c4372076d80a2ff138
Feb 23 15:43:26 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:26.395294 2125 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-b2mxx" podUID=5acce570-9f3b-4dab-9fed-169a4c110f8c
Feb 23 15:43:26 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:43:26.395318 2125 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hc5d" podUID=9cd26ba5-46e4-40b5-81e6-74079153d58d
Feb 23 15:43:26 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:26.545391991Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc563779400e9013d40b1e6762a57e5177d0e62e28ecc372d5265e30a70f64c9" id=3321cfd0-86c5-4245-8347-1a1034538e70 name=/runtime.v1.ImageService/PullImage
Feb 23 15:43:26 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:26.546024742Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc563779400e9013d40b1e6762a57e5177d0e62e28ecc372d5265e30a70f64c9" id=31569f0c-fc66-4135-b290-a8f341b9a671 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:26 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:26.546902193Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:4863f207d59fce067b864451f5c7b0dca685f5a63af45f9e51cbee61b04172bd,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc563779400e9013d40b1e6762a57e5177d0e62e28ecc372d5265e30a70f64c9],Size_:352688251,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=31569f0c-fc66-4135-b290-a8f341b9a671 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:26 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:26.547476984Z" level=info msg="Creating container: openshift-multus/multus-additional-cni-plugins-p9nj2/bond-cni-plugin" id=7ff94192-f799-498f-a648-8922b2d43dec name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:43:26 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:26.547569210Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 15:43:26 ip-10-0-136-68 systemd[1]: Started crio-conmon-a66fdc121b13f9f26ba68c3dbe58c3adde928f4b28ddbfa2c240ec341278edf0.scope.
Feb 23 15:43:26 ip-10-0-136-68 systemd[1]: Started libcontainer container a66fdc121b13f9f26ba68c3dbe58c3adde928f4b28ddbfa2c240ec341278edf0.
Feb 23 15:43:26 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:26.693582830Z" level=info msg="Created container a66fdc121b13f9f26ba68c3dbe58c3adde928f4b28ddbfa2c240ec341278edf0: openshift-multus/multus-additional-cni-plugins-p9nj2/bond-cni-plugin" id=7ff94192-f799-498f-a648-8922b2d43dec name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:43:26 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:26.693976791Z" level=info msg="Starting container: a66fdc121b13f9f26ba68c3dbe58c3adde928f4b28ddbfa2c240ec341278edf0" id=05e88936-c502-4ca4-aa29-1764b4f36259 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 15:43:26 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:26.710913812Z" level=info msg="Started container" PID=3858 containerID=a66fdc121b13f9f26ba68c3dbe58c3adde928f4b28ddbfa2c240ec341278edf0 description=openshift-multus/multus-additional-cni-plugins-p9nj2/bond-cni-plugin id=05e88936-c502-4ca4-aa29-1764b4f36259 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c4770f7d080720c7718c7bc24396b3024d3f2c4a814b8d4e347d5deb8d6959b6
Feb 23 15:43:26 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:26.716135717Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/upgrade_ca955565-2f0c-4a64-bec1-f0dbce44785f\""
Feb 23 15:43:26 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:26.724515227Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 15:43:26 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:26.724535594Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 15:43:26 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:26.724548929Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/bond\""
Feb 23 15:43:26 ip-10-0-136-68 systemd[1]: crio-a66fdc121b13f9f26ba68c3dbe58c3adde928f4b28ddbfa2c240ec341278edf0.scope: Succeeded.
Feb 23 15:43:26 ip-10-0-136-68 systemd[1]: crio-a66fdc121b13f9f26ba68c3dbe58c3adde928f4b28ddbfa2c240ec341278edf0.scope: Consumed 26ms CPU time Feb 23 15:43:26 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:26.734457206Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 15:43:26 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:26.734481891Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 15:43:26 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:26.734494308Z" level=info msg="CNI monitoring event REMOVE \"/var/lib/cni/bin/upgrade_ca955565-2f0c-4a64-bec1-f0dbce44785f\"" Feb 23 15:43:26 ip-10-0-136-68 systemd[1]: crio-conmon-a66fdc121b13f9f26ba68c3dbe58c3adde928f4b28ddbfa2c240ec341278edf0.scope: Succeeded. Feb 23 15:43:26 ip-10-0-136-68 systemd[1]: crio-conmon-a66fdc121b13f9f26ba68c3dbe58c3adde928f4b28ddbfa2c240ec341278edf0.scope: Consumed 20ms CPU time Feb 23 15:43:27 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:27.063260 2125 generic.go:296] "Generic (PLEG): container finished" podID=2c47bc3e-0247-4d47-80e3-c168262e7976 containerID="a66fdc121b13f9f26ba68c3dbe58c3adde928f4b28ddbfa2c240ec341278edf0" exitCode=0 Feb 23 15:43:27 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:27.063448 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p9nj2" event=&{ID:2c47bc3e-0247-4d47-80e3-c168262e7976 Type:ContainerDied Data:a66fdc121b13f9f26ba68c3dbe58c3adde928f4b28ddbfa2c240ec341278edf0} Feb 23 15:43:27 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:27.063780509Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67fc5e92be6392fc47e7343940b712ef5e85ec5edb5a7d64d0fbf30ebe4b3903" id=d51db48b-e0e0-457b-878f-81a8ea14cced name=/runtime.v1.ImageService/ImageStatus Feb 23 15:43:27 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:27.064097672Z" level=info msg="Image 
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67fc5e92be6392fc47e7343940b712ef5e85ec5edb5a7d64d0fbf30ebe4b3903 not found" id=d51db48b-e0e0-457b-878f-81a8ea14cced name=/runtime.v1.ImageService/ImageStatus Feb 23 15:43:27 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:27.064504203Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67fc5e92be6392fc47e7343940b712ef5e85ec5edb5a7d64d0fbf30ebe4b3903" id=f272e63c-6869-4bd5-8299-96ca4976876e name=/runtime.v1.ImageService/PullImage Feb 23 15:43:27 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:27.065424315Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67fc5e92be6392fc47e7343940b712ef5e85ec5edb5a7d64d0fbf30ebe4b3903\"" Feb 23 15:43:27 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:27.956845105Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67fc5e92be6392fc47e7343940b712ef5e85ec5edb5a7d64d0fbf30ebe4b3903\"" Feb 23 15:43:28 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-pod5acce570_9f3b_4dab_9fed_169a4c110f8c.slice. Feb 23 15:43:28 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-pod9cd26ba5_46e4_40b5_81e6_74079153d58d.slice. Feb 23 15:43:29 ip-10-0-136-68 systemd[1]: run-runc-434bc76ecbe42021fc40f1c0739badd563ca7f187e1c106f228683f5b566bff3-runc.ziNxMf.mount: Succeeded. 
Feb 23 15:43:29 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:29.940921 2125 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 15:43:30 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:30.501631916Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67fc5e92be6392fc47e7343940b712ef5e85ec5edb5a7d64d0fbf30ebe4b3903" id=f272e63c-6869-4bd5-8299-96ca4976876e name=/runtime.v1.ImageService/PullImage Feb 23 15:43:30 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:30.502718397Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67fc5e92be6392fc47e7343940b712ef5e85ec5edb5a7d64d0fbf30ebe4b3903" id=a8ffb7d8-9160-4858-b530-5e69185f0fc9 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:43:30 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:30.503559019Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8e340e90f6e3a45f51b38ed888230331ab048c37137d84bb37e5141844371f76,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67fc5e92be6392fc47e7343940b712ef5e85ec5edb5a7d64d0fbf30ebe4b3903],Size_:317193941,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=a8ffb7d8-9160-4858-b530-5e69185f0fc9 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:43:30 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:30.504067779Z" level=info msg="Creating container: openshift-multus/multus-additional-cni-plugins-p9nj2/routeoverride-cni" id=2f354f2a-fe0b-4969-8a4e-f13af4d7b3cb name=/runtime.v1.RuntimeService/CreateContainer Feb 23 15:43:30 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:30.504149781Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 15:43:30 ip-10-0-136-68 systemd[1]: Started crio-conmon-b8fa7c0a41b41ced332eb765dfad5ffb7f81c2e03e6cca2e914d6a89fb30a9ac.scope. 
Feb 23 15:43:30 ip-10-0-136-68 systemd[1]: Started libcontainer container b8fa7c0a41b41ced332eb765dfad5ffb7f81c2e03e6cca2e914d6a89fb30a9ac. Feb 23 15:43:30 ip-10-0-136-68 systemd[1]: run-runc-b8fa7c0a41b41ced332eb765dfad5ffb7f81c2e03e6cca2e914d6a89fb30a9ac-runc.JhYinu.mount: Succeeded. Feb 23 15:43:30 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:30.964531786Z" level=info msg="Created container b8fa7c0a41b41ced332eb765dfad5ffb7f81c2e03e6cca2e914d6a89fb30a9ac: openshift-multus/multus-additional-cni-plugins-p9nj2/routeoverride-cni" id=2f354f2a-fe0b-4969-8a4e-f13af4d7b3cb name=/runtime.v1.RuntimeService/CreateContainer Feb 23 15:43:30 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:30.965010362Z" level=info msg="Starting container: b8fa7c0a41b41ced332eb765dfad5ffb7f81c2e03e6cca2e914d6a89fb30a9ac" id=cfd25b69-25df-4a48-a23f-caa6086a5020 name=/runtime.v1.RuntimeService/StartContainer Feb 23 15:43:30 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:30.970582388Z" level=info msg="Started container" PID=3989 containerID=b8fa7c0a41b41ced332eb765dfad5ffb7f81c2e03e6cca2e914d6a89fb30a9ac description=openshift-multus/multus-additional-cni-plugins-p9nj2/routeoverride-cni id=cfd25b69-25df-4a48-a23f-caa6086a5020 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c4770f7d080720c7718c7bc24396b3024d3f2c4a814b8d4e347d5deb8d6959b6 Feb 23 15:43:30 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:30.975711601Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/upgrade_598cdfae-fd82-4a31-ac19-aea6be018621\"" Feb 23 15:43:30 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:30.983477757Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 15:43:30 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:30.983496547Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 15:43:30 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:30.983506959Z" level=info msg="CNI monitoring 
event CREATE \"/var/lib/cni/bin/route-override\"" Feb 23 15:43:30 ip-10-0-136-68 systemd[1]: crio-b8fa7c0a41b41ced332eb765dfad5ffb7f81c2e03e6cca2e914d6a89fb30a9ac.scope: Succeeded. Feb 23 15:43:30 ip-10-0-136-68 systemd[1]: crio-b8fa7c0a41b41ced332eb765dfad5ffb7f81c2e03e6cca2e914d6a89fb30a9ac.scope: Consumed 26ms CPU time Feb 23 15:43:30 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:30.993024297Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 15:43:30 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:30.993045491Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 15:43:30 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:30.993061773Z" level=info msg="CNI monitoring event REMOVE \"/var/lib/cni/bin/upgrade_598cdfae-fd82-4a31-ac19-aea6be018621\"" Feb 23 15:43:30 ip-10-0-136-68 systemd[1]: crio-conmon-b8fa7c0a41b41ced332eb765dfad5ffb7f81c2e03e6cca2e914d6a89fb30a9ac.scope: Succeeded. Feb 23 15:43:30 ip-10-0-136-68 systemd[1]: crio-conmon-b8fa7c0a41b41ced332eb765dfad5ffb7f81c2e03e6cca2e914d6a89fb30a9ac.scope: Consumed 21ms CPU time Feb 23 15:43:31 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:31.070421 2125 generic.go:296] "Generic (PLEG): container finished" podID=2c47bc3e-0247-4d47-80e3-c168262e7976 containerID="b8fa7c0a41b41ced332eb765dfad5ffb7f81c2e03e6cca2e914d6a89fb30a9ac" exitCode=0 Feb 23 15:43:31 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:31.070463 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p9nj2" event=&{ID:2c47bc3e-0247-4d47-80e3-c168262e7976 Type:ContainerDied Data:b8fa7c0a41b41ced332eb765dfad5ffb7f81c2e03e6cca2e914d6a89fb30a9ac} Feb 23 15:43:31 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:31.070905971Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5edef2b057f0fb062044d99c8bd3906443525e1bfe91574f66f1479ff5e26f66" 
id=38209774-5e11-4d4a-967c-48f8ec076418 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:43:31 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:31.071059546Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5edef2b057f0fb062044d99c8bd3906443525e1bfe91574f66f1479ff5e26f66 not found" id=38209774-5e11-4d4a-967c-48f8ec076418 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:43:31 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:31.071529618Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5edef2b057f0fb062044d99c8bd3906443525e1bfe91574f66f1479ff5e26f66" id=fa694614-cbfc-40b8-99f7-0a360bf7e5ce name=/runtime.v1.ImageService/PullImage Feb 23 15:43:31 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:31.072366629Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5edef2b057f0fb062044d99c8bd3906443525e1bfe91574f66f1479ff5e26f66\"" Feb 23 15:43:31 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:31.082609 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9cd26ba5-46e4-40b5-81e6-74079153d58d-metrics-certs\") pod \"network-metrics-daemon-5hc5d\" (UID: \"9cd26ba5-46e4-40b5-81e6-74079153d58d\") " pod="openshift-multus/network-metrics-daemon-5hc5d" Feb 23 15:43:31 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:31.088368 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9cd26ba5-46e4-40b5-81e6-74079153d58d-metrics-certs\") pod \"network-metrics-daemon-5hc5d\" (UID: \"9cd26ba5-46e4-40b5-81e6-74079153d58d\") " pod="openshift-multus/network-metrics-daemon-5hc5d" Feb 23 15:43:31 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:31.110434 2125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hc5d" Feb 23 15:43:31 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:31.111046117Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-5hc5d/POD" id=b99b3206-e394-4eaf-bf65-747f692cde2f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 15:43:31 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:31.111097878Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 15:43:31 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:31.126466935Z" level=info msg="Got pod network &{Name:network-metrics-daemon-5hc5d Namespace:openshift-multus ID:f879576786b088968433f4f9b9154c2d899bf400bae1da485c2314f3fec845f3 UID:9cd26ba5-46e4-40b5-81e6-74079153d58d NetNS:/var/run/netns/bd432261-d919-463e-9ad8-453be2170666 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 15:43:31 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:31.126496266Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-5hc5d to CNI network \"multus-cni-network\" (type=multus)" Feb 23 15:43:31 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): f879576786b0889: link is not ready Feb 23 15:43:31 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready Feb 23 15:43:31 ip-10-0-136-68 NetworkManager[1149]: [1677167011.2536] manager: (f879576786b0889): new Veth device (/org/freedesktop/NetworkManager/Devices/22) Feb 23 15:43:31 ip-10-0-136-68 NetworkManager[1149]: [1677167011.2544] device (f879576786b0889): carrier: link connected Feb 23 15:43:31 ip-10-0-136-68 systemd-udevd[4080]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. 
Feb 23 15:43:31 ip-10-0-136-68 systemd-udevd[4080]: Could not generate persistent MAC address for f879576786b0889: No such file or directory Feb 23 15:43:31 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 23 15:43:31 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): f879576786b0889: link becomes ready Feb 23 15:43:31 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00086|bridge|INFO|bridge br-int: added interface f879576786b0889 on port 8 Feb 23 15:43:31 ip-10-0-136-68 NetworkManager[1149]: [1677167011.2728] manager: (f879576786b0889): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/23) Feb 23 15:43:31 ip-10-0-136-68 kernel: device f879576786b0889 entered promiscuous mode Feb 23 15:43:31 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:31.324941 2125 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-multus/network-metrics-daemon-5hc5d] Feb 23 15:43:31 ip-10-0-136-68 crio[2086]: I0223 15:43:31.245400 4066 ovs.go:90] Maximum command line arguments set to: 191102 Feb 23 15:43:31 ip-10-0-136-68 crio[2086]: 2023-02-23T15:43:31Z [verbose] Add: openshift-multus:network-metrics-daemon-5hc5d:9cd26ba5-46e4-40b5-81e6-74079153d58d:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"f879576786b0889","mac":"fe:ac:41:90:74:4f"},{"name":"eth0","mac":"0a:58:0a:81:02:03","sandbox":"/var/run/netns/bd432261-d919-463e-9ad8-453be2170666"}],"ips":[{"version":"4","interface":1,"address":"10.129.2.3/23","gateway":"10.129.2.1"}],"dns":{}} Feb 23 15:43:31 ip-10-0-136-68 crio[2086]: I0223 15:43:31.303941 4059 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-multus", Name:"network-metrics-daemon-5hc5d", UID:"9cd26ba5-46e4-40b5-81e6-74079153d58d", APIVersion:"v1", ResourceVersion:"21324", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.129.2.3/23] from ovn-kubernetes Feb 23 15:43:31 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:31.327681851Z" level=info 
msg="Got pod network &{Name:network-metrics-daemon-5hc5d Namespace:openshift-multus ID:f879576786b088968433f4f9b9154c2d899bf400bae1da485c2314f3fec845f3 UID:9cd26ba5-46e4-40b5-81e6-74079153d58d NetNS:/var/run/netns/bd432261-d919-463e-9ad8-453be2170666 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 15:43:31 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:31.327800691Z" level=info msg="Checking pod openshift-multus_network-metrics-daemon-5hc5d for CNI network multus-cni-network (type=multus)" Feb 23 15:43:31 ip-10-0-136-68 kubenswrapper[2125]: W0223 15:43:31.329530 2125 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9cd26ba5_46e4_40b5_81e6_74079153d58d.slice/crio-f879576786b088968433f4f9b9154c2d899bf400bae1da485c2314f3fec845f3.scope WatchSource:0}: Error finding container f879576786b088968433f4f9b9154c2d899bf400bae1da485c2314f3fec845f3: Status 404 returned error can't find the container with id f879576786b088968433f4f9b9154c2d899bf400bae1da485c2314f3fec845f3 Feb 23 15:43:31 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:31.330691255Z" level=info msg="Ran pod sandbox f879576786b088968433f4f9b9154c2d899bf400bae1da485c2314f3fec845f3 with infra container: openshift-multus/network-metrics-daemon-5hc5d/POD" id=b99b3206-e394-4eaf-bf65-747f692cde2f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 15:43:31 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:31.331406511Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf28fab2b37250a8897dd2b133fdd73a1782045aec3bbc36cc87bc8c6ef0c7c1" id=99d9b036-57a8-4641-a04e-dc7cbb46f5b1 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:43:31 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:31.331554587Z" level=info msg="Image 
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf28fab2b37250a8897dd2b133fdd73a1782045aec3bbc36cc87bc8c6ef0c7c1 not found" id=99d9b036-57a8-4641-a04e-dc7cbb46f5b1 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:43:31 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:31.332045010Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf28fab2b37250a8897dd2b133fdd73a1782045aec3bbc36cc87bc8c6ef0c7c1" id=c53070a5-7cde-4ccb-b641-306092ccecff name=/runtime.v1.ImageService/PullImage Feb 23 15:43:31 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:31.332899318Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf28fab2b37250a8897dd2b133fdd73a1782045aec3bbc36cc87bc8c6ef0c7c1\"" Feb 23 15:43:31 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:31.385037 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-7nhww\" (UniqueName: \"kubernetes.io/projected/5acce570-9f3b-4dab-9fed-169a4c110f8c-kube-api-access-7nhww\") pod \"network-check-target-b2mxx\" (UID: \"5acce570-9f3b-4dab-9fed-169a4c110f8c\") " pod="openshift-network-diagnostics/network-check-target-b2mxx" Feb 23 15:43:31 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:31.387026 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nhww\" (UniqueName: \"kubernetes.io/projected/5acce570-9f3b-4dab-9fed-169a4c110f8c-kube-api-access-7nhww\") pod \"network-check-target-b2mxx\" (UID: \"5acce570-9f3b-4dab-9fed-169a4c110f8c\") " pod="openshift-network-diagnostics/network-check-target-b2mxx" Feb 23 15:43:31 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:31.405994 2125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-b2mxx" Feb 23 15:43:31 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:31.406373443Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-b2mxx/POD" id=206aca36-0a6d-4649-be8e-614c152bb5d9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 15:43:31 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:31.406429840Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 15:43:31 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:31.422419394Z" level=info msg="Got pod network &{Name:network-check-target-b2mxx Namespace:openshift-network-diagnostics ID:0c751590d84e3dc481ea416fe23ca37243e592f82e01975b9e82ac6743f91e9c UID:5acce570-9f3b-4dab-9fed-169a4c110f8c NetNS:/var/run/netns/f36753b3-0496-4a07-9706-b1775a079ccf Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 15:43:31 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:31.422595074Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-b2mxx to CNI network \"multus-cni-network\" (type=multus)" Feb 23 15:43:31 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): 0c751590d84e3dc: link is not ready Feb 23 15:43:31 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): 0c751590d84e3dc: link becomes ready Feb 23 15:43:31 ip-10-0-136-68 NetworkManager[1149]: [1677167011.5397] manager: (0c751590d84e3dc): new Veth device (/org/freedesktop/NetworkManager/Devices/24) Feb 23 15:43:31 ip-10-0-136-68 systemd-udevd[4114]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. 
Feb 23 15:43:31 ip-10-0-136-68 systemd-udevd[4114]: Could not generate persistent MAC address for 0c751590d84e3dc: No such file or directory Feb 23 15:43:31 ip-10-0-136-68 NetworkManager[1149]: [1677167011.5404] device (0c751590d84e3dc): carrier: link connected Feb 23 15:43:31 ip-10-0-136-68 kernel: device 0c751590d84e3dc entered promiscuous mode Feb 23 15:43:31 ip-10-0-136-68 NetworkManager[1149]: [1677167011.5582] manager: (0c751590d84e3dc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25) Feb 23 15:43:31 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00087|bridge|INFO|bridge br-int: added interface 0c751590d84e3dc on port 9 Feb 23 15:43:31 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:31.605556 2125 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-network-diagnostics/network-check-target-b2mxx] Feb 23 15:43:31 ip-10-0-136-68 crio[2086]: I0223 15:43:31.536812 4104 ovs.go:90] Maximum command line arguments set to: 191102 Feb 23 15:43:31 ip-10-0-136-68 crio[2086]: 2023-02-23T15:43:31Z [verbose] Add: openshift-network-diagnostics:network-check-target-b2mxx:5acce570-9f3b-4dab-9fed-169a4c110f8c:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"0c751590d84e3dc","mac":"92:d7:f4:25:95:30"},{"name":"eth0","mac":"0a:58:0a:81:02:04","sandbox":"/var/run/netns/f36753b3-0496-4a07-9706-b1775a079ccf"}],"ips":[{"version":"4","interface":1,"address":"10.129.2.4/23","gateway":"10.129.2.1"}],"dns":{}} Feb 23 15:43:31 ip-10-0-136-68 crio[2086]: I0223 15:43:31.590427 4097 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-network-diagnostics", Name:"network-check-target-b2mxx", UID:"5acce570-9f3b-4dab-9fed-169a4c110f8c", APIVersion:"v1", ResourceVersion:"21349", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.129.2.4/23] from ovn-kubernetes Feb 23 15:43:31 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:31.606902670Z" level=info msg="Got pod network 
&{Name:network-check-target-b2mxx Namespace:openshift-network-diagnostics ID:0c751590d84e3dc481ea416fe23ca37243e592f82e01975b9e82ac6743f91e9c UID:5acce570-9f3b-4dab-9fed-169a4c110f8c NetNS:/var/run/netns/f36753b3-0496-4a07-9706-b1775a079ccf Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 15:43:31 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:31.607023629Z" level=info msg="Checking pod openshift-network-diagnostics_network-check-target-b2mxx for CNI network multus-cni-network (type=multus)" Feb 23 15:43:31 ip-10-0-136-68 kubenswrapper[2125]: W0223 15:43:31.608446 2125 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5acce570_9f3b_4dab_9fed_169a4c110f8c.slice/crio-0c751590d84e3dc481ea416fe23ca37243e592f82e01975b9e82ac6743f91e9c.scope WatchSource:0}: Error finding container 0c751590d84e3dc481ea416fe23ca37243e592f82e01975b9e82ac6743f91e9c: Status 404 returned error can't find the container with id 0c751590d84e3dc481ea416fe23ca37243e592f82e01975b9e82ac6743f91e9c Feb 23 15:43:31 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:31.609571086Z" level=info msg="Ran pod sandbox 0c751590d84e3dc481ea416fe23ca37243e592f82e01975b9e82ac6743f91e9c with infra container: openshift-network-diagnostics/network-check-target-b2mxx/POD" id=206aca36-0a6d-4649-be8e-614c152bb5d9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 15:43:31 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:31.610217769Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:12689c58126296eadc7e46ef53bd571e445459a42516711155470fe35c1ccd60" id=6902800c-8ace-49ac-afc2-310a902ca19f name=/runtime.v1.ImageService/ImageStatus Feb 23 15:43:31 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:31.610374783Z" level=info msg="Image 
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:12689c58126296eadc7e46ef53bd571e445459a42516711155470fe35c1ccd60 not found" id=6902800c-8ace-49ac-afc2-310a902ca19f name=/runtime.v1.ImageService/ImageStatus Feb 23 15:43:31 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:31.610807808Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:12689c58126296eadc7e46ef53bd571e445459a42516711155470fe35c1ccd60" id=b306ef23-7d7e-44e9-a23e-e1e57539aff5 name=/runtime.v1.ImageService/PullImage Feb 23 15:43:31 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:31.611611012Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:12689c58126296eadc7e46ef53bd571e445459a42516711155470fe35c1ccd60\"" Feb 23 15:43:31 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:31.956144247Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5edef2b057f0fb062044d99c8bd3906443525e1bfe91574f66f1479ff5e26f66\"" Feb 23 15:43:32 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:32.072838 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5hc5d" event=&{ID:9cd26ba5-46e4-40b5-81e6-74079153d58d Type:ContainerStarted Data:f879576786b088968433f4f9b9154c2d899bf400bae1da485c2314f3fec845f3} Feb 23 15:43:32 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:32.073317 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-b2mxx" event=&{ID:5acce570-9f3b-4dab-9fed-169a4c110f8c Type:ContainerStarted Data:0c751590d84e3dc481ea416fe23ca37243e592f82e01975b9e82ac6743f91e9c} Feb 23 15:43:32 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:32.191393643Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf28fab2b37250a8897dd2b133fdd73a1782045aec3bbc36cc87bc8c6ef0c7c1\"" Feb 23 15:43:32 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00088|connmgr|INFO|br-int<->unix#2: 2490 flow_mods 
in the 9 s starting 10 s ago (2337 adds, 145 deletes, 8 modifications) Feb 23 15:43:32 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:32.497043493Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:12689c58126296eadc7e46ef53bd571e445459a42516711155470fe35c1ccd60\"" Feb 23 15:43:34 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:34.499254646Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf28fab2b37250a8897dd2b133fdd73a1782045aec3bbc36cc87bc8c6ef0c7c1" id=c53070a5-7cde-4ccb-b641-306092ccecff name=/runtime.v1.ImageService/PullImage Feb 23 15:43:34 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:34.500470808Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf28fab2b37250a8897dd2b133fdd73a1782045aec3bbc36cc87bc8c6ef0c7c1" id=83c50975-2e62-4474-8b0b-d6cc8de33d8c name=/runtime.v1.ImageService/ImageStatus Feb 23 15:43:34 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:34.501969487Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:cf970f9f630b6d1f93b0d1fe248cb85574ebbdcdf0eb41f96f3b817528af45c4,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf28fab2b37250a8897dd2b133fdd73a1782045aec3bbc36cc87bc8c6ef0c7c1],Size_:385370431,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=83c50975-2e62-4474-8b0b-d6cc8de33d8c name=/runtime.v1.ImageService/ImageStatus Feb 23 15:43:34 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:34.505845542Z" level=info msg="Creating container: openshift-multus/network-metrics-daemon-5hc5d/network-metrics-daemon" id=ceb062a7-7815-48b9-bf13-837aa7c745af name=/runtime.v1.RuntimeService/CreateContainer Feb 23 15:43:34 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:34.505941821Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 15:43:34 ip-10-0-136-68 systemd[1]: Started 
crio-conmon-cc8ac0b3221bee2c93bcd633a65f084396c7fcebdd0cc982a5e812aebc66f372.scope. Feb 23 15:43:34 ip-10-0-136-68 systemd[1]: Started libcontainer container cc8ac0b3221bee2c93bcd633a65f084396c7fcebdd0cc982a5e812aebc66f372. Feb 23 15:43:34 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:34.763356300Z" level=info msg="Created container cc8ac0b3221bee2c93bcd633a65f084396c7fcebdd0cc982a5e812aebc66f372: openshift-multus/network-metrics-daemon-5hc5d/network-metrics-daemon" id=ceb062a7-7815-48b9-bf13-837aa7c745af name=/runtime.v1.RuntimeService/CreateContainer Feb 23 15:43:34 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:34.773125671Z" level=info msg="Starting container: cc8ac0b3221bee2c93bcd633a65f084396c7fcebdd0cc982a5e812aebc66f372" id=adf378a6-a2fb-4468-8c94-d2f160286f74 name=/runtime.v1.RuntimeService/StartContainer Feb 23 15:43:34 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:34.799134477Z" level=info msg="Started container" PID=4241 containerID=cc8ac0b3221bee2c93bcd633a65f084396c7fcebdd0cc982a5e812aebc66f372 description=openshift-multus/network-metrics-daemon-5hc5d/network-metrics-daemon id=adf378a6-a2fb-4468-8c94-d2f160286f74 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f879576786b088968433f4f9b9154c2d899bf400bae1da485c2314f3fec845f3 Feb 23 15:43:34 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:34.821510421Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=bc2602b9-c1ab-46da-b3c9-22a3f4931fcf name=/runtime.v1.ImageService/ImageStatus Feb 23 15:43:34 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:34.821852795Z" level=info msg="Image status: 
&ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=bc2602b9-c1ab-46da-b3c9-22a3f4931fcf name=/runtime.v1.ImageService/ImageStatus Feb 23 15:43:34 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:34.831350860Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=054515ee-407a-4b1b-b7eb-6742dd0ab897 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:43:34 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:34.831495014Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=054515ee-407a-4b1b-b7eb-6742dd0ab897 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:43:34 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:34.832093895Z" level=info msg="Creating container: openshift-multus/network-metrics-daemon-5hc5d/kube-rbac-proxy" id=bbcabc48-18c6-4a98-ab26-0d6d31919b56 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 15:43:34 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:34.832190781Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 15:43:35 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:35.082502 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5hc5d" event=&{ID:9cd26ba5-46e4-40b5-81e6-74079153d58d 
Type:ContainerStarted Data:cc8ac0b3221bee2c93bcd633a65f084396c7fcebdd0cc982a5e812aebc66f372} Feb 23 15:43:35 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:35.896134 2125 kubelet_node_status.go:590] "Recording event message for node" node="ip-10-0-136-68.us-west-2.compute.internal" event="NodeReady" Feb 23 15:43:35 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:35.964167 2125 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-dns/dns-default-h4ftg] Feb 23 15:43:35 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:35.964209 2125 topology_manager.go:205] "Topology Admit Handler" Feb 23 15:43:35 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:35.964602 2125 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-ingress-canary/ingress-canary-p47qk] Feb 23 15:43:35 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:35.964630 2125 topology_manager.go:205] "Topology Admit Handler" Feb 23 15:43:35 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-podc072a683_1031_40cb_a1bc_1dac71bca46b.slice. Feb 23 15:43:35 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:35.983661 2125 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-dns/dns-default-h4ftg] Feb 23 15:43:35 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-poda704838c_aeb5_4709_b91c_2460423203a4.slice. 
Feb 23 15:43:36 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:36.001595 2125 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-ingress-canary/ingress-canary-p47qk] Feb 23 15:43:36 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:36.128956 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfmxf\" (UniqueName: \"kubernetes.io/projected/a704838c-aeb5-4709-b91c-2460423203a4-kube-api-access-nfmxf\") pod \"ingress-canary-p47qk\" (UID: \"a704838c-aeb5-4709-b91c-2460423203a4\") " pod="openshift-ingress-canary/ingress-canary-p47qk" Feb 23 15:43:36 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:36.129001 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c072a683-1031-40cb-a1bc-1dac71bca46b-config-volume\") pod \"dns-default-h4ftg\" (UID: \"c072a683-1031-40cb-a1bc-1dac71bca46b\") " pod="openshift-dns/dns-default-h4ftg" Feb 23 15:43:36 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:36.129031 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2zwz\" (UniqueName: \"kubernetes.io/projected/c072a683-1031-40cb-a1bc-1dac71bca46b-kube-api-access-w2zwz\") pod \"dns-default-h4ftg\" (UID: \"c072a683-1031-40cb-a1bc-1dac71bca46b\") " pod="openshift-dns/dns-default-h4ftg" Feb 23 15:43:36 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:36.129153 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c072a683-1031-40cb-a1bc-1dac71bca46b-metrics-tls\") pod \"dns-default-h4ftg\" (UID: \"c072a683-1031-40cb-a1bc-1dac71bca46b\") " pod="openshift-dns/dns-default-h4ftg" Feb 23 15:43:36 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:36.229399 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-w2zwz\" (UniqueName: 
\"kubernetes.io/projected/c072a683-1031-40cb-a1bc-1dac71bca46b-kube-api-access-w2zwz\") pod \"dns-default-h4ftg\" (UID: \"c072a683-1031-40cb-a1bc-1dac71bca46b\") " pod="openshift-dns/dns-default-h4ftg" Feb 23 15:43:36 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:36.229451 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c072a683-1031-40cb-a1bc-1dac71bca46b-metrics-tls\") pod \"dns-default-h4ftg\" (UID: \"c072a683-1031-40cb-a1bc-1dac71bca46b\") " pod="openshift-dns/dns-default-h4ftg" Feb 23 15:43:36 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:36.229501 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-nfmxf\" (UniqueName: \"kubernetes.io/projected/a704838c-aeb5-4709-b91c-2460423203a4-kube-api-access-nfmxf\") pod \"ingress-canary-p47qk\" (UID: \"a704838c-aeb5-4709-b91c-2460423203a4\") " pod="openshift-ingress-canary/ingress-canary-p47qk" Feb 23 15:43:36 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:36.229533 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c072a683-1031-40cb-a1bc-1dac71bca46b-config-volume\") pod \"dns-default-h4ftg\" (UID: \"c072a683-1031-40cb-a1bc-1dac71bca46b\") " pod="openshift-dns/dns-default-h4ftg" Feb 23 15:43:36 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:36.230020 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c072a683-1031-40cb-a1bc-1dac71bca46b-config-volume\") pod \"dns-default-h4ftg\" (UID: \"c072a683-1031-40cb-a1bc-1dac71bca46b\") " pod="openshift-dns/dns-default-h4ftg" Feb 23 15:43:36 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:36.232862 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c072a683-1031-40cb-a1bc-1dac71bca46b-metrics-tls\") pod \"dns-default-h4ftg\" 
(UID: \"c072a683-1031-40cb-a1bc-1dac71bca46b\") " pod="openshift-dns/dns-default-h4ftg" Feb 23 15:43:36 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:36.250042 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2zwz\" (UniqueName: \"kubernetes.io/projected/c072a683-1031-40cb-a1bc-1dac71bca46b-kube-api-access-w2zwz\") pod \"dns-default-h4ftg\" (UID: \"c072a683-1031-40cb-a1bc-1dac71bca46b\") " pod="openshift-dns/dns-default-h4ftg" Feb 23 15:43:36 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:36.254649 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfmxf\" (UniqueName: \"kubernetes.io/projected/a704838c-aeb5-4709-b91c-2460423203a4-kube-api-access-nfmxf\") pod \"ingress-canary-p47qk\" (UID: \"a704838c-aeb5-4709-b91c-2460423203a4\") " pod="openshift-ingress-canary/ingress-canary-p47qk" Feb 23 15:43:36 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:36.283635 2125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-h4ftg" Feb 23 15:43:36 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:36.284015431Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-h4ftg/POD" id=856bec92-aa6d-48c0-bf75-d5ac6152d28c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 15:43:36 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:36.284066196Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 15:43:36 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:36.293632 2125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-p47qk" Feb 23 15:43:36 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:36.293934959Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-p47qk/POD" id=943aea56-c7ec-4f95-ad2c-0ce1b498ec26 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 15:43:36 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:36.293973946Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 15:43:36 ip-10-0-136-68 systemd[1]: Started crio-conmon-4a39f7e2bf3eb17fdd0d26fc1d6c124ca21c4a6c4547787c09eccfa411e099af.scope. Feb 23 15:43:36 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:36.497508647Z" level=info msg="Got pod network &{Name:ingress-canary-p47qk Namespace:openshift-ingress-canary ID:35539c92883319ba9303dff4d62e621858c65895d17a031283ea570d6c0ffd51 UID:a704838c-aeb5-4709-b91c-2460423203a4 NetNS:/var/run/netns/66097094-74f3-4cd1-b8ec-0513bfaa3c62 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 15:43:36 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:36.497536951Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-p47qk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 15:43:36 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:36.507540767Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5edef2b057f0fb062044d99c8bd3906443525e1bfe91574f66f1479ff5e26f66" id=fa694614-cbfc-40b8-99f7-0a360bf7e5ce name=/runtime.v1.ImageService/PullImage Feb 23 15:43:36 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:36.507568333Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:12689c58126296eadc7e46ef53bd571e445459a42516711155470fe35c1ccd60" id=b306ef23-7d7e-44e9-a23e-e1e57539aff5 name=/runtime.v1.ImageService/PullImage Feb 23 15:43:36 ip-10-0-136-68 crio[2086]: time="2023-02-23 
15:43:36.508582603Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5edef2b057f0fb062044d99c8bd3906443525e1bfe91574f66f1479ff5e26f66" id=80e966b6-c53d-4e13-8055-bab984c0b903 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:43:36 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:36.509732830Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a051c2cfc108e960dd12d60bc4ee074be58ba53de890a6f33ab7bada80d30890,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5edef2b057f0fb062044d99c8bd3906443525e1bfe91574f66f1479ff5e26f66],Size_:476595411,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=80e966b6-c53d-4e13-8055-bab984c0b903 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:43:36 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:36.509811697Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:12689c58126296eadc7e46ef53bd571e445459a42516711155470fe35c1ccd60" id=07cf06bc-0956-4564-8227-4b10de019c2c name=/runtime.v1.ImageService/ImageStatus Feb 23 15:43:36 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:36.509817275Z" level=info msg="Got pod network &{Name:dns-default-h4ftg Namespace:openshift-dns ID:ff0a102645f986a9a470ba4bb4d46e818f68b24e5af270b266a49a64d8462689 UID:c072a683-1031-40cb-a1bc-1dac71bca46b NetNS:/var/run/netns/fe9ac55c-60a6-4c99-8e53-9a8d9c2dc37f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 15:43:36 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:36.510477480Z" level=info msg="Adding pod openshift-dns_dns-default-h4ftg to CNI network \"multus-cni-network\" (type=multus)" Feb 23 15:43:36 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:36.510836805Z" level=info msg="Creating container: openshift-multus/multus-additional-cni-plugins-p9nj2/whereabouts-cni-bincopy" id=650c35fd-4e3d-4ccb-bb28-6faf47f35c93 
name=/runtime.v1.RuntimeService/CreateContainer Feb 23 15:43:36 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:36.510934672Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 15:43:36 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:36.510847841Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:fbfabc25c264657111b70d2537c63f40bd1221c9fa96f133a4ea4c49f2c732ee,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:12689c58126296eadc7e46ef53bd571e445459a42516711155470fe35c1ccd60],Size_:512530138,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=07cf06bc-0956-4564-8227-4b10de019c2c name=/runtime.v1.ImageService/ImageStatus Feb 23 15:43:36 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:36.511876953Z" level=info msg="Creating container: openshift-network-diagnostics/network-check-target-b2mxx/network-check-target-container" id=da0e1d7c-4c71-42b1-85cd-f9ab896b9890 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 15:43:36 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:36.511951729Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 15:43:36 ip-10-0-136-68 systemd[1]: Started libcontainer container 4a39f7e2bf3eb17fdd0d26fc1d6c124ca21c4a6c4547787c09eccfa411e099af. Feb 23 15:43:36 ip-10-0-136-68 systemd[1]: Started crio-conmon-01dde28329bb6eb1962baf04ecf2191809aca8308fa489e787809cca31606ca9.scope. Feb 23 15:43:36 ip-10-0-136-68 systemd[1]: Started crio-conmon-474b4bf7722a950bdd74d5e6f36f8be012f5ee141657325ea4b4ee0cc4ae9925.scope. Feb 23 15:43:36 ip-10-0-136-68 systemd[1]: Started libcontainer container 01dde28329bb6eb1962baf04ecf2191809aca8308fa489e787809cca31606ca9. Feb 23 15:43:36 ip-10-0-136-68 systemd[1]: Started libcontainer container 474b4bf7722a950bdd74d5e6f36f8be012f5ee141657325ea4b4ee0cc4ae9925. 
Feb 23 15:43:36 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:36.633811951Z" level=info msg="Created container 4a39f7e2bf3eb17fdd0d26fc1d6c124ca21c4a6c4547787c09eccfa411e099af: openshift-multus/network-metrics-daemon-5hc5d/kube-rbac-proxy" id=bbcabc48-18c6-4a98-ab26-0d6d31919b56 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 15:43:36 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:36.638434206Z" level=info msg="Starting container: 4a39f7e2bf3eb17fdd0d26fc1d6c124ca21c4a6c4547787c09eccfa411e099af" id=2280a079-83c6-400d-85e3-92a02c47e590 name=/runtime.v1.RuntimeService/StartContainer Feb 23 15:43:36 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:36.676422917Z" level=info msg="Started container" PID=4349 containerID=4a39f7e2bf3eb17fdd0d26fc1d6c124ca21c4a6c4547787c09eccfa411e099af description=openshift-multus/network-metrics-daemon-5hc5d/kube-rbac-proxy id=2280a079-83c6-400d-85e3-92a02c47e590 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f879576786b088968433f4f9b9154c2d899bf400bae1da485c2314f3fec845f3 Feb 23 15:43:36 ip-10-0-136-68 systemd-udevd[4437]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. 
Feb 23 15:43:36 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): 35539c92883319b: link is not ready Feb 23 15:43:36 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready Feb 23 15:43:36 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 23 15:43:36 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): 35539c92883319b: link becomes ready Feb 23 15:43:36 ip-10-0-136-68 systemd-udevd[4437]: Could not generate persistent MAC address for 35539c92883319b: No such file or directory Feb 23 15:43:36 ip-10-0-136-68 NetworkManager[1149]: [1677167016.7066] device (35539c92883319b): carrier: link connected Feb 23 15:43:36 ip-10-0-136-68 NetworkManager[1149]: [1677167016.7068] manager: (35539c92883319b): new Veth device (/org/freedesktop/NetworkManager/Devices/26) Feb 23 15:43:36 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): ff0a102645f986a: link is not ready Feb 23 15:43:36 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): ff0a102645f986a: link becomes ready Feb 23 15:43:36 ip-10-0-136-68 NetworkManager[1149]: [1677167016.7187] manager: (ff0a102645f986a): new Veth device (/org/freedesktop/NetworkManager/Devices/27) Feb 23 15:43:36 ip-10-0-136-68 NetworkManager[1149]: [1677167016.7191] device (ff0a102645f986a): carrier: link connected Feb 23 15:43:36 ip-10-0-136-68 systemd-udevd[4449]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. 
Feb 23 15:43:36 ip-10-0-136-68 systemd-udevd[4449]: Could not generate persistent MAC address for ff0a102645f986a: No such file or directory Feb 23 15:43:36 ip-10-0-136-68 NetworkManager[1149]: [1677167016.7320] manager: (35539c92883319b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28) Feb 23 15:43:36 ip-10-0-136-68 kernel: device 35539c92883319b entered promiscuous mode Feb 23 15:43:36 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00089|bridge|INFO|bridge br-int: added interface 35539c92883319b on port 10 Feb 23 15:43:36 ip-10-0-136-68 NetworkManager[1149]: [1677167016.7504] manager: (ff0a102645f986a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29) Feb 23 15:43:36 ip-10-0-136-68 kernel: device ff0a102645f986a entered promiscuous mode Feb 23 15:43:36 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00090|bridge|INFO|bridge br-int: added interface ff0a102645f986a on port 11 Feb 23 15:43:36 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:36.761344657Z" level=info msg="Created container 474b4bf7722a950bdd74d5e6f36f8be012f5ee141657325ea4b4ee0cc4ae9925: openshift-multus/multus-additional-cni-plugins-p9nj2/whereabouts-cni-bincopy" id=650c35fd-4e3d-4ccb-bb28-6faf47f35c93 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 15:43:36 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:36.761928014Z" level=info msg="Starting container: 474b4bf7722a950bdd74d5e6f36f8be012f5ee141657325ea4b4ee0cc4ae9925" id=52dc1885-bdc0-4ef3-b871-e7e4bada2b11 name=/runtime.v1.RuntimeService/StartContainer Feb 23 15:43:36 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:36.780389345Z" level=info msg="Started container" PID=4414 containerID=474b4bf7722a950bdd74d5e6f36f8be012f5ee141657325ea4b4ee0cc4ae9925 description=openshift-multus/multus-additional-cni-plugins-p9nj2/whereabouts-cni-bincopy id=52dc1885-bdc0-4ef3-b871-e7e4bada2b11 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c4770f7d080720c7718c7bc24396b3024d3f2c4a814b8d4e347d5deb8d6959b6 Feb 23 
15:43:36 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:36.783169574Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/upgrade_6212f810-0377-4e56-b1df-6525ff0c64da\"" Feb 23 15:43:36 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:36.798131121Z" level=info msg="Created container 01dde28329bb6eb1962baf04ecf2191809aca8308fa489e787809cca31606ca9: openshift-network-diagnostics/network-check-target-b2mxx/network-check-target-container" id=da0e1d7c-4c71-42b1-85cd-f9ab896b9890 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 15:43:36 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:36.798717681Z" level=info msg="Starting container: 01dde28329bb6eb1962baf04ecf2191809aca8308fa489e787809cca31606ca9" id=3b280256-7f32-447d-88ed-d6b241641d5b name=/runtime.v1.RuntimeService/StartContainer Feb 23 15:43:36 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:36.805388254Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 15:43:36 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:36.805416551Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 15:43:36 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:36.812097718Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/whereabouts\"" Feb 23 15:43:36 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:36.814233 2125 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-ingress-canary/ingress-canary-p47qk] Feb 23 15:43:36 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:36.814481489Z" level=info msg="Started container" PID=4393 containerID=01dde28329bb6eb1962baf04ecf2191809aca8308fa489e787809cca31606ca9 description=openshift-network-diagnostics/network-check-target-b2mxx/network-check-target-container id=3b280256-7f32-447d-88ed-d6b241641d5b name=/runtime.v1.RuntimeService/StartContainer sandboxID=0c751590d84e3dc481ea416fe23ca37243e592f82e01975b9e82ac6743f91e9c Feb 23 15:43:36 ip-10-0-136-68 
crio[2086]: I0223 15:43:36.702613 4357 ovs.go:90] Maximum command line arguments set to: 191102 Feb 23 15:43:36 ip-10-0-136-68 crio[2086]: 2023-02-23T15:43:36Z [verbose] Add: openshift-ingress-canary:ingress-canary-p47qk:a704838c-aeb5-4709-b91c-2460423203a4:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"35539c92883319b","mac":"82:05:6e:c5:cc:f5"},{"name":"eth0","mac":"0a:58:0a:81:02:05","sandbox":"/var/run/netns/66097094-74f3-4cd1-b8ec-0513bfaa3c62"}],"ips":[{"version":"4","interface":1,"address":"10.129.2.5/23","gateway":"10.129.2.1"}],"dns":{}} Feb 23 15:43:36 ip-10-0-136-68 crio[2086]: I0223 15:43:36.799821 4333 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-ingress-canary", Name:"ingress-canary-p47qk", UID:"a704838c-aeb5-4709-b91c-2460423203a4", APIVersion:"v1", ResourceVersion:"22989", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.129.2.5/23] from ovn-kubernetes Feb 23 15:43:36 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:36.817264028Z" level=info msg="Got pod network &{Name:ingress-canary-p47qk Namespace:openshift-ingress-canary ID:35539c92883319ba9303dff4d62e621858c65895d17a031283ea570d6c0ffd51 UID:a704838c-aeb5-4709-b91c-2460423203a4 NetNS:/var/run/netns/66097094-74f3-4cd1-b8ec-0513bfaa3c62 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 15:43:36 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:36.817432059Z" level=info msg="Checking pod openshift-ingress-canary_ingress-canary-p47qk for CNI network multus-cni-network (type=multus)" Feb 23 15:43:36 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:36.820247474Z" level=info msg="Ran pod sandbox 35539c92883319ba9303dff4d62e621858c65895d17a031283ea570d6c0ffd51 with infra container: openshift-ingress-canary/ingress-canary-p47qk/POD" id=943aea56-c7ec-4f95-ad2c-0ce1b498ec26 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 15:43:36 ip-10-0-136-68 
crio[2086]: time="2023-02-23 15:43:36.822653920Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0a30a6f76b38acd66311d81d93514ebc034c2177a187e3a5381bbce2c30775ea" id=f11e7a80-fac9-43ab-9c03-693f208b087d name=/runtime.v1.ImageService/ImageStatus Feb 23 15:43:36 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:36.822808634Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0a30a6f76b38acd66311d81d93514ebc034c2177a187e3a5381bbce2c30775ea not found" id=f11e7a80-fac9-43ab-9c03-693f208b087d name=/runtime.v1.ImageService/ImageStatus Feb 23 15:43:36 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:36.823600081Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0a30a6f76b38acd66311d81d93514ebc034c2177a187e3a5381bbce2c30775ea" id=2d7b53d6-933a-402e-9b57-aeaa3afac146 name=/runtime.v1.ImageService/PullImage Feb 23 15:43:36 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:36.824478670Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0a30a6f76b38acd66311d81d93514ebc034c2177a187e3a5381bbce2c30775ea\"" Feb 23 15:43:36 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:36.827203596Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 15:43:36 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:36.827225871Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 15:43:36 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:36.827241989Z" level=info msg="CNI monitoring event REMOVE \"/var/lib/cni/bin/upgrade_6212f810-0377-4e56-b1df-6525ff0c64da\"" Feb 23 15:43:36 ip-10-0-136-68 systemd[1]: crio-474b4bf7722a950bdd74d5e6f36f8be012f5ee141657325ea4b4ee0cc4ae9925.scope: Succeeded. 
Feb 23 15:43:36 ip-10-0-136-68 systemd[1]: crio-474b4bf7722a950bdd74d5e6f36f8be012f5ee141657325ea4b4ee0cc4ae9925.scope: Consumed 46ms CPU time Feb 23 15:43:36 ip-10-0-136-68 systemd[1]: crio-conmon-474b4bf7722a950bdd74d5e6f36f8be012f5ee141657325ea4b4ee0cc4ae9925.scope: Succeeded. Feb 23 15:43:36 ip-10-0-136-68 systemd[1]: crio-conmon-474b4bf7722a950bdd74d5e6f36f8be012f5ee141657325ea4b4ee0cc4ae9925.scope: Consumed 16ms CPU time Feb 23 15:43:36 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:36.845237 2125 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-dns/dns-default-h4ftg] Feb 23 15:43:36 ip-10-0-136-68 crio[2086]: I0223 15:43:36.714272 4363 ovs.go:90] Maximum command line arguments set to: 191102 Feb 23 15:43:36 ip-10-0-136-68 crio[2086]: 2023-02-23T15:43:36Z [verbose] Add: openshift-dns:dns-default-h4ftg:c072a683-1031-40cb-a1bc-1dac71bca46b:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"ff0a102645f986a","mac":"56:44:6c:c0:ab:1a"},{"name":"eth0","mac":"0a:58:0a:81:02:06","sandbox":"/var/run/netns/fe9ac55c-60a6-4c99-8e53-9a8d9c2dc37f"}],"ips":[{"version":"4","interface":1,"address":"10.129.2.6/23","gateway":"10.129.2.1"}],"dns":{}} Feb 23 15:43:36 ip-10-0-136-68 crio[2086]: I0223 15:43:36.827830 4343 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-dns", Name:"dns-default-h4ftg", UID:"c072a683-1031-40cb-a1bc-1dac71bca46b", APIVersion:"v1", ResourceVersion:"22987", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.129.2.6/23] from ovn-kubernetes Feb 23 15:43:36 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:36.846601934Z" level=info msg="Got pod network &{Name:dns-default-h4ftg Namespace:openshift-dns ID:ff0a102645f986a9a470ba4bb4d46e818f68b24e5af270b266a49a64d8462689 UID:c072a683-1031-40cb-a1bc-1dac71bca46b NetNS:/var/run/netns/fe9ac55c-60a6-4c99-8e53-9a8d9c2dc37f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] 
Aliases:map[]}" Feb 23 15:43:36 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:36.846695297Z" level=info msg="Checking pod openshift-dns_dns-default-h4ftg for CNI network multus-cni-network (type=multus)" Feb 23 15:43:36 ip-10-0-136-68 kubenswrapper[2125]: W0223 15:43:36.847919 2125 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc072a683_1031_40cb_a1bc_1dac71bca46b.slice/crio-ff0a102645f986a9a470ba4bb4d46e818f68b24e5af270b266a49a64d8462689.scope WatchSource:0}: Error finding container ff0a102645f986a9a470ba4bb4d46e818f68b24e5af270b266a49a64d8462689: Status 404 returned error can't find the container with id ff0a102645f986a9a470ba4bb4d46e818f68b24e5af270b266a49a64d8462689 Feb 23 15:43:36 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:36.848916308Z" level=info msg="Ran pod sandbox ff0a102645f986a9a470ba4bb4d46e818f68b24e5af270b266a49a64d8462689 with infra container: openshift-dns/dns-default-h4ftg/POD" id=856bec92-aa6d-48c0-bf75-d5ac6152d28c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 15:43:36 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:36.849528300Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e8ad4781031b2bada8fe04aba3ba732530afa9b048c6e874934bc4bfefac8be" id=2a735274-1e38-4e1f-9e45-f7c48007c314 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:43:36 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:36.849662552Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e8ad4781031b2bada8fe04aba3ba732530afa9b048c6e874934bc4bfefac8be not found" id=2a735274-1e38-4e1f-9e45-f7c48007c314 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:43:36 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:36.850113717Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e8ad4781031b2bada8fe04aba3ba732530afa9b048c6e874934bc4bfefac8be" id=ac64c059-a54d-4f4e-b456-aa84c781b412 
name=/runtime.v1.ImageService/PullImage Feb 23 15:43:36 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:36.850919180Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e8ad4781031b2bada8fe04aba3ba732530afa9b048c6e874934bc4bfefac8be\"" Feb 23 15:43:37 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:37.086357 2125 generic.go:296] "Generic (PLEG): container finished" podID=2c47bc3e-0247-4d47-80e3-c168262e7976 containerID="474b4bf7722a950bdd74d5e6f36f8be012f5ee141657325ea4b4ee0cc4ae9925" exitCode=0 Feb 23 15:43:37 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:37.086416 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p9nj2" event=&{ID:2c47bc3e-0247-4d47-80e3-c168262e7976 Type:ContainerDied Data:474b4bf7722a950bdd74d5e6f36f8be012f5ee141657325ea4b4ee0cc4ae9925} Feb 23 15:43:37 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:37.086787804Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5edef2b057f0fb062044d99c8bd3906443525e1bfe91574f66f1479ff5e26f66" id=a9ca7caf-8d32-4b87-b04f-4f757513bdbc name=/runtime.v1.ImageService/ImageStatus Feb 23 15:43:37 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:37.086947109Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a051c2cfc108e960dd12d60bc4ee074be58ba53de890a6f33ab7bada80d30890,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5edef2b057f0fb062044d99c8bd3906443525e1bfe91574f66f1479ff5e26f66],Size_:476595411,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=a9ca7caf-8d32-4b87-b04f-4f757513bdbc name=/runtime.v1.ImageService/ImageStatus Feb 23 15:43:37 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:37.087371041Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5edef2b057f0fb062044d99c8bd3906443525e1bfe91574f66f1479ff5e26f66" id=a59d33d6-2078-486e-a1f7-c565d5048a61 
name=/runtime.v1.ImageService/ImageStatus Feb 23 15:43:37 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:37.087524 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-b2mxx" event=&{ID:5acce570-9f3b-4dab-9fed-169a4c110f8c Type:ContainerStarted Data:01dde28329bb6eb1962baf04ecf2191809aca8308fa489e787809cca31606ca9} Feb 23 15:43:37 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:37.087521353Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a051c2cfc108e960dd12d60bc4ee074be58ba53de890a6f33ab7bada80d30890,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5edef2b057f0fb062044d99c8bd3906443525e1bfe91574f66f1479ff5e26f66],Size_:476595411,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=a59d33d6-2078-486e-a1f7-c565d5048a61 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:43:37 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:37.087644 2125 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-b2mxx" Feb 23 15:43:37 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:37.088307971Z" level=info msg="Creating container: openshift-multus/multus-additional-cni-plugins-p9nj2/whereabouts-cni" id=6e2b6b68-0bcf-4bc3-af33-d603d6baac8c name=/runtime.v1.RuntimeService/CreateContainer Feb 23 15:43:37 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:37.088397829Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 15:43:37 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:37.088467 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-h4ftg" event=&{ID:c072a683-1031-40cb-a1bc-1dac71bca46b Type:ContainerStarted Data:ff0a102645f986a9a470ba4bb4d46e818f68b24e5af270b266a49a64d8462689} Feb 23 15:43:37 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:37.089506 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/network-metrics-daemon-5hc5d" event=&{ID:9cd26ba5-46e4-40b5-81e6-74079153d58d Type:ContainerStarted Data:4a39f7e2bf3eb17fdd0d26fc1d6c124ca21c4a6c4547787c09eccfa411e099af}
Feb 23 15:43:37 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:37.090195 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-p47qk" event=&{ID:a704838c-aeb5-4709-b91c-2460423203a4 Type:ContainerStarted Data:35539c92883319ba9303dff4d62e621858c65895d17a031283ea570d6c0ffd51}
Feb 23 15:43:37 ip-10-0-136-68 systemd[1]: Started crio-conmon-9cf91d5d29cd6e9f0a99332a17bfada6f9dcff672372f98a35e100910a2ac7b5.scope.
Feb 23 15:43:37 ip-10-0-136-68 systemd[1]: Started libcontainer container 9cf91d5d29cd6e9f0a99332a17bfada6f9dcff672372f98a35e100910a2ac7b5.
Feb 23 15:43:37 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:37.208377226Z" level=info msg="Created container 9cf91d5d29cd6e9f0a99332a17bfada6f9dcff672372f98a35e100910a2ac7b5: openshift-multus/multus-additional-cni-plugins-p9nj2/whereabouts-cni" id=6e2b6b68-0bcf-4bc3-af33-d603d6baac8c name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:43:37 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:37.208760758Z" level=info msg="Starting container: 9cf91d5d29cd6e9f0a99332a17bfada6f9dcff672372f98a35e100910a2ac7b5" id=513caa3e-28ba-4f21-9e43-fcec158313b2 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 15:43:37 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:37.213782441Z" level=info msg="Started container" PID=4565 containerID=9cf91d5d29cd6e9f0a99332a17bfada6f9dcff672372f98a35e100910a2ac7b5 description=openshift-multus/multus-additional-cni-plugins-p9nj2/whereabouts-cni id=513caa3e-28ba-4f21-9e43-fcec158313b2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c4770f7d080720c7718c7bc24396b3024d3f2c4a814b8d4e347d5deb8d6959b6
Feb 23 15:43:37 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:37.215454065Z" level=info msg="CNI monitoring event CREATE \"/etc/kubernetes/cni/net.d/whereabouts.d\""
Feb 23 15:43:37 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:37.223364331Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 15:43:37 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:37.223388349Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 15:43:37 ip-10-0-136-68 systemd[1]: crio-9cf91d5d29cd6e9f0a99332a17bfada6f9dcff672372f98a35e100910a2ac7b5.scope: Succeeded.
Feb 23 15:43:37 ip-10-0-136-68 systemd[1]: crio-9cf91d5d29cd6e9f0a99332a17bfada6f9dcff672372f98a35e100910a2ac7b5.scope: Consumed 25ms CPU time
Feb 23 15:43:37 ip-10-0-136-68 systemd[1]: crio-conmon-9cf91d5d29cd6e9f0a99332a17bfada6f9dcff672372f98a35e100910a2ac7b5.scope: Succeeded.
Feb 23 15:43:37 ip-10-0-136-68 systemd[1]: crio-conmon-9cf91d5d29cd6e9f0a99332a17bfada6f9dcff672372f98a35e100910a2ac7b5.scope: Consumed 17ms CPU time
Feb 23 15:43:37 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:37.670201002Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0a30a6f76b38acd66311d81d93514ebc034c2177a187e3a5381bbce2c30775ea\""
Feb 23 15:43:37 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:37.715417735Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e8ad4781031b2bada8fe04aba3ba732530afa9b048c6e874934bc4bfefac8be\""
Feb 23 15:43:38 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:38.093031 2125 generic.go:296] "Generic (PLEG): container finished" podID=2c47bc3e-0247-4d47-80e3-c168262e7976 containerID="9cf91d5d29cd6e9f0a99332a17bfada6f9dcff672372f98a35e100910a2ac7b5" exitCode=0
Feb 23 15:43:38 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:38.093138 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p9nj2" event=&{ID:2c47bc3e-0247-4d47-80e3-c168262e7976 Type:ContainerDied Data:9cf91d5d29cd6e9f0a99332a17bfada6f9dcff672372f98a35e100910a2ac7b5}
Feb 23 15:43:38 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:38.093825673Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af74b3b3f154a6e6124da5499d2d5b52c0d3cfd8f092df598e1d9ca1ada07b94" id=4b87882e-f7d3-41dd-97af-37152fe6a85b name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:38 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:38.094022320Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:3040fba25f1de00fc7180165bb6fe53ee7a27a50b0d5da5af3a7e0d26700e224,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af74b3b3f154a6e6124da5499d2d5b52c0d3cfd8f092df598e1d9ca1ada07b94],Size_:487631698,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=4b87882e-f7d3-41dd-97af-37152fe6a85b name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:38 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:38.094536917Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af74b3b3f154a6e6124da5499d2d5b52c0d3cfd8f092df598e1d9ca1ada07b94" id=8e77fef8-196d-45ec-bc8c-f2000f8b0b55 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:38 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:38.094679994Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:3040fba25f1de00fc7180165bb6fe53ee7a27a50b0d5da5af3a7e0d26700e224,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af74b3b3f154a6e6124da5499d2d5b52c0d3cfd8f092df598e1d9ca1ada07b94],Size_:487631698,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=8e77fef8-196d-45ec-bc8c-f2000f8b0b55 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:38 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:38.095192161Z" level=info msg="Creating container: openshift-multus/multus-additional-cni-plugins-p9nj2/kube-multus-additional-cni-plugins" id=4d1db7b8-2e06-40a2-b308-a7e0981ed870 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:43:38 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:38.095304256Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 15:43:38 ip-10-0-136-68 systemd[1]: Started crio-conmon-2c9afadfaefb33bc9960d887430757ab296b84b4f8f2e38a7d5a562d440648d2.scope.
Feb 23 15:43:38 ip-10-0-136-68 systemd[1]: Started libcontainer container 2c9afadfaefb33bc9960d887430757ab296b84b4f8f2e38a7d5a562d440648d2.
Feb 23 15:43:38 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:38.297509379Z" level=info msg="Created container 2c9afadfaefb33bc9960d887430757ab296b84b4f8f2e38a7d5a562d440648d2: openshift-multus/multus-additional-cni-plugins-p9nj2/kube-multus-additional-cni-plugins" id=4d1db7b8-2e06-40a2-b308-a7e0981ed870 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:43:38 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:38.297913907Z" level=info msg="Starting container: 2c9afadfaefb33bc9960d887430757ab296b84b4f8f2e38a7d5a562d440648d2" id=05bbe390-3e53-4c12-be04-ccb5f063aa80 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 15:43:38 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:38.303144984Z" level=info msg="Started container" PID=4642 containerID=2c9afadfaefb33bc9960d887430757ab296b84b4f8f2e38a7d5a562d440648d2 description=openshift-multus/multus-additional-cni-plugins-p9nj2/kube-multus-additional-cni-plugins id=05bbe390-3e53-4c12-be04-ccb5f063aa80 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c4770f7d080720c7718c7bc24396b3024d3f2c4a814b8d4e347d5deb8d6959b6
Feb 23 15:43:38 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00091|connmgr|INFO|br-ex<->unix#14: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:43:39 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:39.096025 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p9nj2" event=&{ID:2c47bc3e-0247-4d47-80e3-c168262e7976 Type:ContainerStarted Data:2c9afadfaefb33bc9960d887430757ab296b84b4f8f2e38a7d5a562d440648d2}
Feb 23 15:43:40 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:40.542995980Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0a30a6f76b38acd66311d81d93514ebc034c2177a187e3a5381bbce2c30775ea" id=2d7b53d6-933a-402e-9b57-aeaa3afac146 name=/runtime.v1.ImageService/PullImage
Feb 23 15:43:40 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:40.543595296Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0a30a6f76b38acd66311d81d93514ebc034c2177a187e3a5381bbce2c30775ea" id=b07e81e1-2efa-4e81-98fe-2c1d26105175 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:40 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:40.544516035Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9fc5d3aadae42f5e9abc5ec66e804749d31c450fba1d3668b87deba226f99d0b,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0a30a6f76b38acd66311d81d93514ebc034c2177a187e3a5381bbce2c30775ea],Size_:431318980,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=b07e81e1-2efa-4e81-98fe-2c1d26105175 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:40 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:40.545031284Z" level=info msg="Creating container: openshift-ingress-canary/ingress-canary-p47qk/serve-healthcheck-canary" id=db8a1a48-b202-4a30-a0bb-dc680f9f5275 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:43:40 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:40.545105233Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 15:43:40 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:40.545331313Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e8ad4781031b2bada8fe04aba3ba732530afa9b048c6e874934bc4bfefac8be" id=ac64c059-a54d-4f4e-b456-aa84c781b412 name=/runtime.v1.ImageService/PullImage
Feb 23 15:43:40 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:40.545732520Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e8ad4781031b2bada8fe04aba3ba732530afa9b048c6e874934bc4bfefac8be" id=eccbf7f7-5f48-437c-8f93-64ef3777f34e name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:40 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:40.546589091Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:cfff9cbcc1f35a742dfed618d177db6bcfa2a1dc53d3f92391463dfd25565a0c,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e8ad4781031b2bada8fe04aba3ba732530afa9b048c6e874934bc4bfefac8be],Size_:417970927,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=eccbf7f7-5f48-437c-8f93-64ef3777f34e name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:40 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:40.547059269Z" level=info msg="Creating container: openshift-dns/dns-default-h4ftg/dns" id=2c125c2a-f4ca-45be-aad8-a20e1950fe03 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:43:40 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:40.547121504Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 15:43:40 ip-10-0-136-68 systemd[1]: Started crio-conmon-6c416d5c623aca70659a2b828f0d1ddff8c2215a5ed94cd2d3df6302ee724c80.scope.
Feb 23 15:43:40 ip-10-0-136-68 systemd[1]: Started libcontainer container 6c416d5c623aca70659a2b828f0d1ddff8c2215a5ed94cd2d3df6302ee724c80.
Feb 23 15:43:40 ip-10-0-136-68 systemd[1]: Started crio-conmon-2ae211733b9d38f87c856cc5d2197f3933006872bbf3d858ad27eb7b5cc2415c.scope.
Feb 23 15:43:40 ip-10-0-136-68 systemd[1]: Started libcontainer container 2ae211733b9d38f87c856cc5d2197f3933006872bbf3d858ad27eb7b5cc2415c.
Feb 23 15:43:40 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:40.688375568Z" level=info msg="Created container 6c416d5c623aca70659a2b828f0d1ddff8c2215a5ed94cd2d3df6302ee724c80: openshift-ingress-canary/ingress-canary-p47qk/serve-healthcheck-canary" id=db8a1a48-b202-4a30-a0bb-dc680f9f5275 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:43:40 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:40.688797912Z" level=info msg="Starting container: 6c416d5c623aca70659a2b828f0d1ddff8c2215a5ed94cd2d3df6302ee724c80" id=ee4b080b-0cc1-4a4f-814d-8006659f6fd5 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 15:43:40 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:40.695163797Z" level=info msg="Started container" PID=4731 containerID=6c416d5c623aca70659a2b828f0d1ddff8c2215a5ed94cd2d3df6302ee724c80 description=openshift-ingress-canary/ingress-canary-p47qk/serve-healthcheck-canary id=ee4b080b-0cc1-4a4f-814d-8006659f6fd5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=35539c92883319ba9303dff4d62e621858c65895d17a031283ea570d6c0ffd51
Feb 23 15:43:40 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:40.711767917Z" level=info msg="Created container 2ae211733b9d38f87c856cc5d2197f3933006872bbf3d858ad27eb7b5cc2415c: openshift-dns/dns-default-h4ftg/dns" id=2c125c2a-f4ca-45be-aad8-a20e1950fe03 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:43:40 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:40.712927457Z" level=info msg="Starting container: 2ae211733b9d38f87c856cc5d2197f3933006872bbf3d858ad27eb7b5cc2415c" id=760809bd-0428-44d6-b854-0d1f7e47dcff name=/runtime.v1.RuntimeService/StartContainer
Feb 23 15:43:40 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:40.718355628Z" level=info msg="Started container" PID=4749 containerID=2ae211733b9d38f87c856cc5d2197f3933006872bbf3d858ad27eb7b5cc2415c description=openshift-dns/dns-default-h4ftg/dns id=760809bd-0428-44d6-b854-0d1f7e47dcff name=/runtime.v1.RuntimeService/StartContainer sandboxID=ff0a102645f986a9a470ba4bb4d46e818f68b24e5af270b266a49a64d8462689
Feb 23 15:43:40 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:40.727605182Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=3d0d44f5-0253-4498-8de1-befe032e9bca name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:40 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:40.727783941Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=3d0d44f5-0253-4498-8de1-befe032e9bca name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:40 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:40.729063193Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=e6f91464-e6eb-46ed-8cab-2986720b52df name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:40 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:40.729243512Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=e6f91464-e6eb-46ed-8cab-2986720b52df name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:40 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:40.730027658Z" level=info msg="Creating container: openshift-dns/dns-default-h4ftg/kube-rbac-proxy" id=b7547ed4-7127-415e-8cc4-e32c7e2971e4 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:43:40 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:40.730103028Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 15:43:40 ip-10-0-136-68 systemd[1]: Started crio-conmon-bfc013878334f77e66b5ee2a9bd1b1d89d15e37cbfe933e68e4481f20f438f64.scope.
Feb 23 15:43:40 ip-10-0-136-68 systemd[1]: Started libcontainer container bfc013878334f77e66b5ee2a9bd1b1d89d15e37cbfe933e68e4481f20f438f64.
Feb 23 15:43:40 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:40.859329882Z" level=info msg="Created container bfc013878334f77e66b5ee2a9bd1b1d89d15e37cbfe933e68e4481f20f438f64: openshift-dns/dns-default-h4ftg/kube-rbac-proxy" id=b7547ed4-7127-415e-8cc4-e32c7e2971e4 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:43:40 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:40.859790784Z" level=info msg="Starting container: bfc013878334f77e66b5ee2a9bd1b1d89d15e37cbfe933e68e4481f20f438f64" id=7d387abd-3cb8-4ddc-bb9d-532a2195354e name=/runtime.v1.RuntimeService/StartContainer
Feb 23 15:43:40 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:40.870353760Z" level=info msg="Started container" PID=4820 containerID=bfc013878334f77e66b5ee2a9bd1b1d89d15e37cbfe933e68e4481f20f438f64 description=openshift-dns/dns-default-h4ftg/kube-rbac-proxy id=7d387abd-3cb8-4ddc-bb9d-532a2195354e name=/runtime.v1.RuntimeService/StartContainer sandboxID=ff0a102645f986a9a470ba4bb4d46e818f68b24e5af270b266a49a64d8462689
Feb 23 15:43:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:41.100716 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-h4ftg" event=&{ID:c072a683-1031-40cb-a1bc-1dac71bca46b Type:ContainerStarted Data:bfc013878334f77e66b5ee2a9bd1b1d89d15e37cbfe933e68e4481f20f438f64}
Feb 23 15:43:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:41.100749 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-h4ftg" event=&{ID:c072a683-1031-40cb-a1bc-1dac71bca46b Type:ContainerStarted Data:2ae211733b9d38f87c856cc5d2197f3933006872bbf3d858ad27eb7b5cc2415c}
Feb 23 15:43:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:41.101699 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-p47qk" event=&{ID:a704838c-aeb5-4709-b91c-2460423203a4 Type:ContainerStarted Data:6c416d5c623aca70659a2b828f0d1ddff8c2215a5ed94cd2d3df6302ee724c80}
Feb 23 15:43:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:41.156505 2125 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-monitoring/prometheus-operator-admission-webhook-6854f48657-9dfhm]
Feb 23 15:43:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:41.156540 2125 topology_manager.go:205] "Topology Admit Handler"
Feb 23 15:43:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:41.156847 2125 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-image-registry/image-registry-5f79c9c848-f9wqq]
Feb 23 15:43:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:41.156880 2125 topology_manager.go:205] "Topology Admit Handler"
Feb 23 15:43:41 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-pod27c4fe09_e4f7_452d_9364_2daec20710ff.slice.
Feb 23 15:43:41 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-podcb82d201_6c85_46b2_9687_01dcb20bf97b.slice.
Feb 23 15:43:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:41.239512 2125 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-monitoring/prometheus-operator-admission-webhook-6854f48657-9dfhm]
Feb 23 15:43:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:41.250059 2125 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-image-registry/image-registry-5f79c9c848-f9wqq]
Feb 23 15:43:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:41.263383 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cb82d201-6c85-46b2-9687-01dcb20bf97b-trusted-ca\") pod \"image-registry-5f79c9c848-f9wqq\" (UID: \"cb82d201-6c85-46b2-9687-01dcb20bf97b\") " pod="openshift-image-registry/image-registry-5f79c9c848-f9wqq"
Feb 23 15:43:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:41.263442 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cb82d201-6c85-46b2-9687-01dcb20bf97b-registry-tls\") pod \"image-registry-5f79c9c848-f9wqq\" (UID: \"cb82d201-6c85-46b2-9687-01dcb20bf97b\") " pod="openshift-image-registry/image-registry-5f79c9c848-f9wqq"
Feb 23 15:43:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:41.263521 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cb82d201-6c85-46b2-9687-01dcb20bf97b-installation-pull-secrets\") pod \"image-registry-5f79c9c848-f9wqq\" (UID: \"cb82d201-6c85-46b2-9687-01dcb20bf97b\") " pod="openshift-image-registry/image-registry-5f79c9c848-f9wqq"
Feb 23 15:43:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:41.263677 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/27c4fe09-e4f7-452d-9364-2daec20710ff-tls-certificates\") pod \"prometheus-operator-admission-webhook-6854f48657-9dfhm\" (UID: \"27c4fe09-e4f7-452d-9364-2daec20710ff\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-6854f48657-9dfhm"
Feb 23 15:43:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:41.263723 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cb82d201-6c85-46b2-9687-01dcb20bf97b-bound-sa-token\") pod \"image-registry-5f79c9c848-f9wqq\" (UID: \"cb82d201-6c85-46b2-9687-01dcb20bf97b\") " pod="openshift-image-registry/image-registry-5f79c9c848-f9wqq"
Feb 23 15:43:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:41.263749 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x46ll\" (UniqueName: \"kubernetes.io/projected/cb82d201-6c85-46b2-9687-01dcb20bf97b-kube-api-access-x46ll\") pod \"image-registry-5f79c9c848-f9wqq\" (UID: \"cb82d201-6c85-46b2-9687-01dcb20bf97b\") " pod="openshift-image-registry/image-registry-5f79c9c848-f9wqq"
Feb 23 15:43:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:41.263795 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cb82d201-6c85-46b2-9687-01dcb20bf97b-ca-trust-extracted\") pod \"image-registry-5f79c9c848-f9wqq\" (UID: \"cb82d201-6c85-46b2-9687-01dcb20bf97b\") " pod="openshift-image-registry/image-registry-5f79c9c848-f9wqq"
Feb 23 15:43:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:41.263877 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/cb82d201-6c85-46b2-9687-01dcb20bf97b-image-registry-private-configuration\") pod \"image-registry-5f79c9c848-f9wqq\" (UID: \"cb82d201-6c85-46b2-9687-01dcb20bf97b\") " pod="openshift-image-registry/image-registry-5f79c9c848-f9wqq"
Feb 23 15:43:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:41.263928 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cb82d201-6c85-46b2-9687-01dcb20bf97b-registry-certificates\") pod \"image-registry-5f79c9c848-f9wqq\" (UID: \"cb82d201-6c85-46b2-9687-01dcb20bf97b\") " pod="openshift-image-registry/image-registry-5f79c9c848-f9wqq"
Feb 23 15:43:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:41.364081 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cb82d201-6c85-46b2-9687-01dcb20bf97b-registry-tls\") pod \"image-registry-5f79c9c848-f9wqq\" (UID: \"cb82d201-6c85-46b2-9687-01dcb20bf97b\") " pod="openshift-image-registry/image-registry-5f79c9c848-f9wqq"
Feb 23 15:43:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:41.364113 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cb82d201-6c85-46b2-9687-01dcb20bf97b-installation-pull-secrets\") pod \"image-registry-5f79c9c848-f9wqq\" (UID: \"cb82d201-6c85-46b2-9687-01dcb20bf97b\") " pod="openshift-image-registry/image-registry-5f79c9c848-f9wqq"
Feb 23 15:43:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:41.364140 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/27c4fe09-e4f7-452d-9364-2daec20710ff-tls-certificates\") pod \"prometheus-operator-admission-webhook-6854f48657-9dfhm\" (UID: \"27c4fe09-e4f7-452d-9364-2daec20710ff\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-6854f48657-9dfhm"
Feb 23 15:43:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:41.364170 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cb82d201-6c85-46b2-9687-01dcb20bf97b-bound-sa-token\") pod \"image-registry-5f79c9c848-f9wqq\" (UID: \"cb82d201-6c85-46b2-9687-01dcb20bf97b\") " pod="openshift-image-registry/image-registry-5f79c9c848-f9wqq"
Feb 23 15:43:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:41.364199 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-x46ll\" (UniqueName: \"kubernetes.io/projected/cb82d201-6c85-46b2-9687-01dcb20bf97b-kube-api-access-x46ll\") pod \"image-registry-5f79c9c848-f9wqq\" (UID: \"cb82d201-6c85-46b2-9687-01dcb20bf97b\") " pod="openshift-image-registry/image-registry-5f79c9c848-f9wqq"
Feb 23 15:43:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:41.364229 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cb82d201-6c85-46b2-9687-01dcb20bf97b-ca-trust-extracted\") pod \"image-registry-5f79c9c848-f9wqq\" (UID: \"cb82d201-6c85-46b2-9687-01dcb20bf97b\") " pod="openshift-image-registry/image-registry-5f79c9c848-f9wqq"
Feb 23 15:43:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:41.364257 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/cb82d201-6c85-46b2-9687-01dcb20bf97b-image-registry-private-configuration\") pod \"image-registry-5f79c9c848-f9wqq\" (UID: \"cb82d201-6c85-46b2-9687-01dcb20bf97b\") " pod="openshift-image-registry/image-registry-5f79c9c848-f9wqq"
Feb 23 15:43:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:41.364300 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cb82d201-6c85-46b2-9687-01dcb20bf97b-registry-certificates\") pod \"image-registry-5f79c9c848-f9wqq\" (UID: \"cb82d201-6c85-46b2-9687-01dcb20bf97b\") " pod="openshift-image-registry/image-registry-5f79c9c848-f9wqq"
Feb 23 15:43:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:41.364330 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cb82d201-6c85-46b2-9687-01dcb20bf97b-trusted-ca\") pod \"image-registry-5f79c9c848-f9wqq\" (UID: \"cb82d201-6c85-46b2-9687-01dcb20bf97b\") " pod="openshift-image-registry/image-registry-5f79c9c848-f9wqq"
Feb 23 15:43:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:41.364834 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cb82d201-6c85-46b2-9687-01dcb20bf97b-ca-trust-extracted\") pod \"image-registry-5f79c9c848-f9wqq\" (UID: \"cb82d201-6c85-46b2-9687-01dcb20bf97b\") " pod="openshift-image-registry/image-registry-5f79c9c848-f9wqq"
Feb 23 15:43:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:41.365049 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cb82d201-6c85-46b2-9687-01dcb20bf97b-trusted-ca\") pod \"image-registry-5f79c9c848-f9wqq\" (UID: \"cb82d201-6c85-46b2-9687-01dcb20bf97b\") " pod="openshift-image-registry/image-registry-5f79c9c848-f9wqq"
Feb 23 15:43:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:41.365362 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cb82d201-6c85-46b2-9687-01dcb20bf97b-registry-certificates\") pod \"image-registry-5f79c9c848-f9wqq\" (UID: \"cb82d201-6c85-46b2-9687-01dcb20bf97b\") " pod="openshift-image-registry/image-registry-5f79c9c848-f9wqq"
Feb 23 15:43:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:41.366510 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cb82d201-6c85-46b2-9687-01dcb20bf97b-registry-tls\") pod \"image-registry-5f79c9c848-f9wqq\" (UID: \"cb82d201-6c85-46b2-9687-01dcb20bf97b\") " pod="openshift-image-registry/image-registry-5f79c9c848-f9wqq"
Feb 23 15:43:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:41.367098 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cb82d201-6c85-46b2-9687-01dcb20bf97b-installation-pull-secrets\") pod \"image-registry-5f79c9c848-f9wqq\" (UID: \"cb82d201-6c85-46b2-9687-01dcb20bf97b\") " pod="openshift-image-registry/image-registry-5f79c9c848-f9wqq"
Feb 23 15:43:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:41.367098 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/cb82d201-6c85-46b2-9687-01dcb20bf97b-image-registry-private-configuration\") pod \"image-registry-5f79c9c848-f9wqq\" (UID: \"cb82d201-6c85-46b2-9687-01dcb20bf97b\") " pod="openshift-image-registry/image-registry-5f79c9c848-f9wqq"
Feb 23 15:43:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:41.367513 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/27c4fe09-e4f7-452d-9364-2daec20710ff-tls-certificates\") pod \"prometheus-operator-admission-webhook-6854f48657-9dfhm\" (UID: \"27c4fe09-e4f7-452d-9364-2daec20710ff\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-6854f48657-9dfhm"
Feb 23 15:43:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:41.379140 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-x46ll\" (UniqueName: \"kubernetes.io/projected/cb82d201-6c85-46b2-9687-01dcb20bf97b-kube-api-access-x46ll\") pod \"image-registry-5f79c9c848-f9wqq\" (UID: \"cb82d201-6c85-46b2-9687-01dcb20bf97b\") " pod="openshift-image-registry/image-registry-5f79c9c848-f9wqq"
Feb 23 15:43:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:41.383872 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cb82d201-6c85-46b2-9687-01dcb20bf97b-bound-sa-token\") pod \"image-registry-5f79c9c848-f9wqq\" (UID: \"cb82d201-6c85-46b2-9687-01dcb20bf97b\") " pod="openshift-image-registry/image-registry-5f79c9c848-f9wqq"
Feb 23 15:43:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:41.468413 2125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-6854f48657-9dfhm"
Feb 23 15:43:41 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:41.468780220Z" level=info msg="Running pod sandbox: openshift-monitoring/prometheus-operator-admission-webhook-6854f48657-9dfhm/POD" id=f00b0182-a2f6-4905-b8c5-97769f59dda0 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 15:43:41 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:41.468840671Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 15:43:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:41.474055 2125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5f79c9c848-f9wqq"
Feb 23 15:43:41 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:41.474357253Z" level=info msg="Running pod sandbox: openshift-image-registry/image-registry-5f79c9c848-f9wqq/POD" id=e43d7562-bed7-4cb7-9d21-7fca0453303f name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 15:43:41 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:41.474406852Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 15:43:41 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:41.489325190Z" level=info msg="Got pod network &{Name:prometheus-operator-admission-webhook-6854f48657-9dfhm Namespace:openshift-monitoring ID:9cc61114cb7d29152edce4075f158f65bd86c98e5dd8a06e31ff2b390a41f2c8 UID:27c4fe09-e4f7-452d-9364-2daec20710ff NetNS:/var/run/netns/18443f28-f254-4391-ad17-a04b8bf831a6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 15:43:41 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:41.489348921Z" level=info msg="Adding pod openshift-monitoring_prometheus-operator-admission-webhook-6854f48657-9dfhm to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 15:43:41 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:41.504118693Z" level=info msg="Got pod network &{Name:image-registry-5f79c9c848-f9wqq Namespace:openshift-image-registry ID:95bfaaf9735ddc38d1e2fa578051b1d31d557143706d6710a65b8a99d259ac8c UID:cb82d201-6c85-46b2-9687-01dcb20bf97b NetNS:/var/run/netns/9c3b2d60-aa85-41e3-819c-a009d3296b0e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 15:43:41 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:41.504139287Z" level=info msg="Adding pod openshift-image-registry_image-registry-5f79c9c848-f9wqq to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 15:43:41 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): 9cc61114cb7d291: link is not ready
Feb 23 15:43:41 ip-10-0-136-68 systemd-udevd[4899]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 23 15:43:41 ip-10-0-136-68 systemd-udevd[4899]: Could not generate persistent MAC address for 9cc61114cb7d291: No such file or directory
Feb 23 15:43:41 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): 9cc61114cb7d291: link becomes ready
Feb 23 15:43:41 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): 95bfaaf9735ddc3: link is not ready
Feb 23 15:43:41 ip-10-0-136-68 systemd-udevd[4911]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 23 15:43:41 ip-10-0-136-68 systemd-udevd[4911]: Could not generate persistent MAC address for 95bfaaf9735ddc3: No such file or directory
Feb 23 15:43:41 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): 95bfaaf9735ddc3: link becomes ready
Feb 23 15:43:41 ip-10-0-136-68 NetworkManager[1149]: [1677167021.6402] device (9cc61114cb7d291): carrier: link connected
Feb 23 15:43:41 ip-10-0-136-68 NetworkManager[1149]: [1677167021.6406] manager: (9cc61114cb7d291): new Veth device (/org/freedesktop/NetworkManager/Devices/30)
Feb 23 15:43:41 ip-10-0-136-68 NetworkManager[1149]: [1677167021.6415] device (95bfaaf9735ddc3): carrier: link connected
Feb 23 15:43:41 ip-10-0-136-68 NetworkManager[1149]: [1677167021.6417] manager: (95bfaaf9735ddc3): new Veth device (/org/freedesktop/NetworkManager/Devices/31)
Feb 23 15:43:41 ip-10-0-136-68 kernel: device 9cc61114cb7d291 entered promiscuous mode
Feb 23 15:43:41 ip-10-0-136-68 NetworkManager[1149]: [1677167021.6480] manager: (9cc61114cb7d291): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/32)
Feb 23 15:43:41 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00092|bridge|INFO|bridge br-int: added interface 9cc61114cb7d291 on port 12
Feb 23 15:43:41 ip-10-0-136-68 kernel: device 95bfaaf9735ddc3 entered promiscuous mode
Feb 23 15:43:41 ip-10-0-136-68 NetworkManager[1149]: [1677167021.6678] manager: (95bfaaf9735ddc3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Feb 23 15:43:41 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00093|bridge|INFO|bridge br-int: added interface 95bfaaf9735ddc3 on port 13
Feb 23 15:43:41 ip-10-0-136-68 crio[2086]: I0223 15:43:41.623560 4877 ovs.go:90] Maximum command line arguments set to: 191102
Feb 23 15:43:41 ip-10-0-136-68 crio[2086]: 2023-02-23T15:43:41Z [verbose] Add: openshift-monitoring:prometheus-operator-admission-webhook-6854f48657-9dfhm:27c4fe09-e4f7-452d-9364-2daec20710ff:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"9cc61114cb7d291","mac":"82:24:b4:a1:48:1a"},{"name":"eth0","mac":"0a:58:0a:81:02:08","sandbox":"/var/run/netns/18443f28-f254-4391-ad17-a04b8bf831a6"}],"ips":[{"version":"4","interface":1,"address":"10.129.2.8/23","gateway":"10.129.2.1"}],"dns":{}}
Feb 23 15:43:41 ip-10-0-136-68 crio[2086]: I0223 15:43:41.693758 4864 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-monitoring", Name:"prometheus-operator-admission-webhook-6854f48657-9dfhm", UID:"27c4fe09-e4f7-452d-9364-2daec20710ff", APIVersion:"v1", ResourceVersion:"23129", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.129.2.8/23] from ovn-kubernetes
Feb 23 15:43:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:41.710079 2125 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-monitoring/prometheus-operator-admission-webhook-6854f48657-9dfhm]
Feb 23 15:43:41 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:41.710567627Z" level=info msg="Got pod network &{Name:prometheus-operator-admission-webhook-6854f48657-9dfhm Namespace:openshift-monitoring ID:9cc61114cb7d29152edce4075f158f65bd86c98e5dd8a06e31ff2b390a41f2c8 UID:27c4fe09-e4f7-452d-9364-2daec20710ff NetNS:/var/run/netns/18443f28-f254-4391-ad17-a04b8bf831a6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[]
Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 15:43:41 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:41.710715971Z" level=info msg="Checking pod openshift-monitoring_prometheus-operator-admission-webhook-6854f48657-9dfhm for CNI network multus-cni-network (type=multus)" Feb 23 15:43:41 ip-10-0-136-68 kubenswrapper[2125]: W0223 15:43:41.712332 2125 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27c4fe09_e4f7_452d_9364_2daec20710ff.slice/crio-9cc61114cb7d29152edce4075f158f65bd86c98e5dd8a06e31ff2b390a41f2c8.scope WatchSource:0}: Error finding container 9cc61114cb7d29152edce4075f158f65bd86c98e5dd8a06e31ff2b390a41f2c8: Status 404 returned error can't find the container with id 9cc61114cb7d29152edce4075f158f65bd86c98e5dd8a06e31ff2b390a41f2c8 Feb 23 15:43:41 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:41.713512220Z" level=info msg="Ran pod sandbox 9cc61114cb7d29152edce4075f158f65bd86c98e5dd8a06e31ff2b390a41f2c8 with infra container: openshift-monitoring/prometheus-operator-admission-webhook-6854f48657-9dfhm/POD" id=f00b0182-a2f6-4905-b8c5-97769f59dda0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 15:43:41 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:41.714227765Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d39203b28bfd776f78b1a186dd1085d8558816616a7df2f65f0d7140b4867e83" id=787390cd-7ec3-47ae-a5ec-051041963fb7 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:43:41 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:41.714408430Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d39203b28bfd776f78b1a186dd1085d8558816616a7df2f65f0d7140b4867e83 not found" id=787390cd-7ec3-47ae-a5ec-051041963fb7 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:43:41 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:41.714761175Z" level=info msg="Pulling image: 
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d39203b28bfd776f78b1a186dd1085d8558816616a7df2f65f0d7140b4867e83" id=01ef6b2d-84f9-4420-8698-16a059573625 name=/runtime.v1.ImageService/PullImage Feb 23 15:43:41 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:41.716257830Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d39203b28bfd776f78b1a186dd1085d8558816616a7df2f65f0d7140b4867e83\"" Feb 23 15:43:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:41.737185 2125 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-image-registry/image-registry-5f79c9c848-f9wqq] Feb 23 15:43:41 ip-10-0-136-68 crio[2086]: I0223 15:43:41.631080 4885 ovs.go:90] Maximum command line arguments set to: 191102 Feb 23 15:43:41 ip-10-0-136-68 crio[2086]: 2023-02-23T15:43:41Z [verbose] Add: openshift-image-registry:image-registry-5f79c9c848-f9wqq:cb82d201-6c85-46b2-9687-01dcb20bf97b:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"95bfaaf9735ddc3","mac":"8e:b9:e4:9c:f7:7c"},{"name":"eth0","mac":"0a:58:0a:81:02:07","sandbox":"/var/run/netns/9c3b2d60-aa85-41e3-819c-a009d3296b0e"}],"ips":[{"version":"4","interface":1,"address":"10.129.2.7/23","gateway":"10.129.2.1"}],"dns":{}} Feb 23 15:43:41 ip-10-0-136-68 crio[2086]: I0223 15:43:41.703760 4871 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-image-registry", Name:"image-registry-5f79c9c848-f9wqq", UID:"cb82d201-6c85-46b2-9687-01dcb20bf97b", APIVersion:"v1", ResourceVersion:"23130", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.129.2.7/23] from ovn-kubernetes Feb 23 15:43:41 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:41.738589943Z" level=info msg="Got pod network &{Name:image-registry-5f79c9c848-f9wqq Namespace:openshift-image-registry ID:95bfaaf9735ddc38d1e2fa578051b1d31d557143706d6710a65b8a99d259ac8c UID:cb82d201-6c85-46b2-9687-01dcb20bf97b NetNS:/var/run/netns/9c3b2d60-aa85-41e3-819c-a009d3296b0e 
Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 15:43:41 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:41.738688115Z" level=info msg="Checking pod openshift-image-registry_image-registry-5f79c9c848-f9wqq for CNI network multus-cni-network (type=multus)" Feb 23 15:43:41 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:41.741262675Z" level=info msg="Ran pod sandbox 95bfaaf9735ddc38d1e2fa578051b1d31d557143706d6710a65b8a99d259ac8c with infra container: openshift-image-registry/image-registry-5f79c9c848-f9wqq/POD" id=e43d7562-bed7-4cb7-9d21-7fca0453303f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 15:43:41 ip-10-0-136-68 kubenswrapper[2125]: W0223 15:43:41.741371 2125 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb82d201_6c85_46b2_9687_01dcb20bf97b.slice/crio-95bfaaf9735ddc38d1e2fa578051b1d31d557143706d6710a65b8a99d259ac8c.scope WatchSource:0}: Error finding container 95bfaaf9735ddc38d1e2fa578051b1d31d557143706d6710a65b8a99d259ac8c: Status 404 returned error can't find the container with id 95bfaaf9735ddc38d1e2fa578051b1d31d557143706d6710a65b8a99d259ac8c Feb 23 15:43:41 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:41.741910175Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:462953440366660026537c18defeaf1f0e85dde5d1231aa35ab26e7e996959ae" id=04a3bfda-b93e-4817-b699-5284589798f7 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:43:41 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:41.742072630Z" level=info msg="Image status: 
&ImageStatusResponse{Image:&Image{Id:b8111819f25b8194478d55593ca125a634ee92d9d5e61866f09e80f1b59af18b,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:462953440366660026537c18defeaf1f0e85dde5d1231aa35ab26e7e996959ae],Size_:428240621,Uid:&Int64Value{Value:1001,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=04a3bfda-b93e-4817-b699-5284589798f7 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:43:41 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:41.742564119Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:462953440366660026537c18defeaf1f0e85dde5d1231aa35ab26e7e996959ae" id=6b5bac8e-ad6f-46e9-a49d-fb1cc995ed34 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:43:41 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:41.742709484Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:b8111819f25b8194478d55593ca125a634ee92d9d5e61866f09e80f1b59af18b,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:462953440366660026537c18defeaf1f0e85dde5d1231aa35ab26e7e996959ae],Size_:428240621,Uid:&Int64Value{Value:1001,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=6b5bac8e-ad6f-46e9-a49d-fb1cc995ed34 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:43:41 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:41.743467019Z" level=info msg="Creating container: openshift-image-registry/image-registry-5f79c9c848-f9wqq/registry" id=567bb706-2d86-40ba-a3d8-86daed8c7226 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 15:43:41 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:41.743542264Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 15:43:41 ip-10-0-136-68 systemd[1]: Started crio-conmon-acc9bfa13f55b1fc1f8a861016673fb26097df2dec005d1acfd1ff58bbb015ea.scope. 
Feb 23 15:43:41 ip-10-0-136-68 systemd[1]: Started libcontainer container acc9bfa13f55b1fc1f8a861016673fb26097df2dec005d1acfd1ff58bbb015ea.
Feb 23 15:43:41 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:41.849714250Z" level=info msg="Created container acc9bfa13f55b1fc1f8a861016673fb26097df2dec005d1acfd1ff58bbb015ea: openshift-image-registry/image-registry-5f79c9c848-f9wqq/registry" id=567bb706-2d86-40ba-a3d8-86daed8c7226 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:43:41 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:41.850037965Z" level=info msg="Starting container: acc9bfa13f55b1fc1f8a861016673fb26097df2dec005d1acfd1ff58bbb015ea" id=dbc18700-efab-4503-83a9-ed2113fdd497 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 15:43:41 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:41.855746188Z" level=info msg="Started container" PID=4945 containerID=acc9bfa13f55b1fc1f8a861016673fb26097df2dec005d1acfd1ff58bbb015ea description=openshift-image-registry/image-registry-5f79c9c848-f9wqq/registry id=dbc18700-efab-4503-83a9-ed2113fdd497 name=/runtime.v1.RuntimeService/StartContainer sandboxID=95bfaaf9735ddc38d1e2fa578051b1d31d557143706d6710a65b8a99d259ac8c
Feb 23 15:43:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:42.104342 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-6854f48657-9dfhm" event=&{ID:27c4fe09-e4f7-452d-9364-2daec20710ff Type:ContainerStarted Data:9cc61114cb7d29152edce4075f158f65bd86c98e5dd8a06e31ff2b390a41f2c8}
Feb 23 15:43:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:42.105953 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5f79c9c848-f9wqq" event=&{ID:cb82d201-6c85-46b2-9687-01dcb20bf97b Type:ContainerStarted Data:acc9bfa13f55b1fc1f8a861016673fb26097df2dec005d1acfd1ff58bbb015ea}
Feb 23 15:43:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:42.105978 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5f79c9c848-f9wqq" event=&{ID:cb82d201-6c85-46b2-9687-01dcb20bf97b Type:ContainerStarted Data:95bfaaf9735ddc38d1e2fa578051b1d31d557143706d6710a65b8a99d259ac8c}
Feb 23 15:43:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:42.106365 2125 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-5f79c9c848-f9wqq"
Feb 23 15:43:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:42.106423 2125 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-h4ftg"
Feb 23 15:43:42 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:42.553585735Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d39203b28bfd776f78b1a186dd1085d8558816616a7df2f65f0d7140b4867e83\""
Feb 23 15:43:44 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:44.615850308Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d39203b28bfd776f78b1a186dd1085d8558816616a7df2f65f0d7140b4867e83" id=01ef6b2d-84f9-4420-8698-16a059573625 name=/runtime.v1.ImageService/PullImage
Feb 23 15:43:44 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:44.616495115Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d39203b28bfd776f78b1a186dd1085d8558816616a7df2f65f0d7140b4867e83" id=b81e94f9-43e1-438d-b48c-ca5d60fb840e name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:44 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:44.617471041Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:7349fb94605b9a588404c2db5677b270dcc908f8f25eb5d9372a2dfca6163d88,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d39203b28bfd776f78b1a186dd1085d8558816616a7df2f65f0d7140b4867e83],Size_:388514099,Uid:&Int64Value{Value:1001,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=b81e94f9-43e1-438d-b48c-ca5d60fb840e name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:44 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:44.618105625Z" level=info msg="Creating container: openshift-monitoring/prometheus-operator-admission-webhook-6854f48657-9dfhm/prometheus-operator-admission-webhook" id=00396539-63fb-47df-8ef1-9a31922b41a7 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:43:44 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:44.618199194Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 15:43:44 ip-10-0-136-68 systemd[1]: Started crio-conmon-fd7ac301e9b79402445ce11600eb0d6f51163eca247e97b4c659e39f4b762c35.scope.
Feb 23 15:43:44 ip-10-0-136-68 systemd[1]: Started libcontainer container fd7ac301e9b79402445ce11600eb0d6f51163eca247e97b4c659e39f4b762c35.
Feb 23 15:43:44 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:44.747249018Z" level=info msg="Created container fd7ac301e9b79402445ce11600eb0d6f51163eca247e97b4c659e39f4b762c35: openshift-monitoring/prometheus-operator-admission-webhook-6854f48657-9dfhm/prometheus-operator-admission-webhook" id=00396539-63fb-47df-8ef1-9a31922b41a7 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:43:44 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:44.747625592Z" level=info msg="Starting container: fd7ac301e9b79402445ce11600eb0d6f51163eca247e97b4c659e39f4b762c35" id=62a1e421-2568-4489-939a-30e17a097bf0 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 15:43:44 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:44.765504094Z" level=info msg="Started container" PID=5030 containerID=fd7ac301e9b79402445ce11600eb0d6f51163eca247e97b4c659e39f4b762c35 description=openshift-monitoring/prometheus-operator-admission-webhook-6854f48657-9dfhm/prometheus-operator-admission-webhook id=62a1e421-2568-4489-939a-30e17a097bf0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9cc61114cb7d29152edce4075f158f65bd86c98e5dd8a06e31ff2b390a41f2c8
Feb 23 15:43:45 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:45.111847 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-6854f48657-9dfhm" event=&{ID:27c4fe09-e4f7-452d-9364-2daec20710ff Type:ContainerStarted Data:fd7ac301e9b79402445ce11600eb0d6f51163eca247e97b4c659e39f4b762c35}
Feb 23 15:43:45 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:45.112125 2125 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-6854f48657-9dfhm"
Feb 23 15:43:45 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:45.115846 2125 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-6854f48657-9dfhm"
Feb 23 15:43:50 ip-10-0-136-68 systemd[1]: run-runc-434bc76ecbe42021fc40f1c0739badd563ca7f187e1c106f228683f5b566bff3-runc.M7wYS9.mount: Succeeded.
Feb 23 15:43:51 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:51.285058 2125 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-h4ftg"
Feb 23 15:43:51 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00094|connmgr|INFO|br-ex<->unix#17: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:43:52 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:52.058463 2125 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-monitoring/node-exporter-hw8fk]
Feb 23 15:43:52 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:52.058507 2125 topology_manager.go:205] "Topology Admit Handler"
Feb 23 15:43:52 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-pod75f4efab_251e_4aa5_97d6_4a2a27025ae1.slice.
Feb 23 15:43:52 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:52.135838 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/75f4efab-251e-4aa5-97d6-4a2a27025ae1-node-exporter-tls\") pod \"node-exporter-hw8fk\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") " pod="openshift-monitoring/node-exporter-hw8fk"
Feb 23 15:43:52 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:52.135871 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/75f4efab-251e-4aa5-97d6-4a2a27025ae1-node-exporter-wtmp\") pod \"node-exporter-hw8fk\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") " pod="openshift-monitoring/node-exporter-hw8fk"
Feb 23 15:43:52 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:52.135888 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/75f4efab-251e-4aa5-97d6-4a2a27025ae1-metrics-client-ca\") pod \"node-exporter-hw8fk\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") " pod="openshift-monitoring/node-exporter-hw8fk"
Feb 23 15:43:52 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:52.135913 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/75f4efab-251e-4aa5-97d6-4a2a27025ae1-root\") pod \"node-exporter-hw8fk\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") " pod="openshift-monitoring/node-exporter-hw8fk"
Feb 23 15:43:52 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:52.136034 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdk85\" (UniqueName: \"kubernetes.io/projected/75f4efab-251e-4aa5-97d6-4a2a27025ae1-kube-api-access-vdk85\") pod \"node-exporter-hw8fk\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") " pod="openshift-monitoring/node-exporter-hw8fk"
Feb 23 15:43:52 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:52.136069 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/75f4efab-251e-4aa5-97d6-4a2a27025ae1-node-exporter-textfile\") pod \"node-exporter-hw8fk\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") " pod="openshift-monitoring/node-exporter-hw8fk"
Feb 23 15:43:52 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:52.136099 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/75f4efab-251e-4aa5-97d6-4a2a27025ae1-sys\") pod \"node-exporter-hw8fk\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") " pod="openshift-monitoring/node-exporter-hw8fk"
Feb 23 15:43:52 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:52.136121 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/75f4efab-251e-4aa5-97d6-4a2a27025ae1-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-hw8fk\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") " pod="openshift-monitoring/node-exporter-hw8fk"
Feb 23 15:43:52 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:52.236808 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/75f4efab-251e-4aa5-97d6-4a2a27025ae1-node-exporter-tls\") pod \"node-exporter-hw8fk\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") " pod="openshift-monitoring/node-exporter-hw8fk"
Feb 23 15:43:52 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:52.236843 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/75f4efab-251e-4aa5-97d6-4a2a27025ae1-node-exporter-wtmp\") pod \"node-exporter-hw8fk\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") " pod="openshift-monitoring/node-exporter-hw8fk"
Feb 23 15:43:52 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:52.236860 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/75f4efab-251e-4aa5-97d6-4a2a27025ae1-metrics-client-ca\") pod \"node-exporter-hw8fk\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") " pod="openshift-monitoring/node-exporter-hw8fk"
Feb 23 15:43:52 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:52.236887 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/75f4efab-251e-4aa5-97d6-4a2a27025ae1-root\") pod \"node-exporter-hw8fk\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") " pod="openshift-monitoring/node-exporter-hw8fk"
Feb 23 15:43:52 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:52.236916 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-vdk85\" (UniqueName: \"kubernetes.io/projected/75f4efab-251e-4aa5-97d6-4a2a27025ae1-kube-api-access-vdk85\") pod \"node-exporter-hw8fk\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") " pod="openshift-monitoring/node-exporter-hw8fk"
Feb 23 15:43:52 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:52.236944 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/75f4efab-251e-4aa5-97d6-4a2a27025ae1-node-exporter-textfile\") pod \"node-exporter-hw8fk\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") " pod="openshift-monitoring/node-exporter-hw8fk"
Feb 23 15:43:52 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:52.236970 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/75f4efab-251e-4aa5-97d6-4a2a27025ae1-sys\") pod \"node-exporter-hw8fk\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") " pod="openshift-monitoring/node-exporter-hw8fk"
Feb 23 15:43:52 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:52.237012 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/75f4efab-251e-4aa5-97d6-4a2a27025ae1-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-hw8fk\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") " pod="openshift-monitoring/node-exporter-hw8fk"
Feb 23 15:43:52 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:52.237051 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/75f4efab-251e-4aa5-97d6-4a2a27025ae1-node-exporter-wtmp\") pod \"node-exporter-hw8fk\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") " pod="openshift-monitoring/node-exporter-hw8fk"
Feb 23 15:43:52 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:52.237360 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/75f4efab-251e-4aa5-97d6-4a2a27025ae1-root\") pod \"node-exporter-hw8fk\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") " pod="openshift-monitoring/node-exporter-hw8fk"
Feb 23 15:43:52 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:52.237424 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/75f4efab-251e-4aa5-97d6-4a2a27025ae1-sys\") pod \"node-exporter-hw8fk\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") " pod="openshift-monitoring/node-exporter-hw8fk"
Feb 23 15:43:52 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:52.237547 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/75f4efab-251e-4aa5-97d6-4a2a27025ae1-metrics-client-ca\") pod \"node-exporter-hw8fk\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") " pod="openshift-monitoring/node-exporter-hw8fk"
Feb 23 15:43:52 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:52.237573 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/75f4efab-251e-4aa5-97d6-4a2a27025ae1-node-exporter-textfile\") pod \"node-exporter-hw8fk\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") " pod="openshift-monitoring/node-exporter-hw8fk"
Feb 23 15:43:52 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:52.238694 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/75f4efab-251e-4aa5-97d6-4a2a27025ae1-node-exporter-tls\") pod \"node-exporter-hw8fk\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") " pod="openshift-monitoring/node-exporter-hw8fk"
Feb 23 15:43:52 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:52.238993 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/75f4efab-251e-4aa5-97d6-4a2a27025ae1-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-hw8fk\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") " pod="openshift-monitoring/node-exporter-hw8fk"
Feb 23 15:43:52 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:52.256810 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdk85\" (UniqueName: \"kubernetes.io/projected/75f4efab-251e-4aa5-97d6-4a2a27025ae1-kube-api-access-vdk85\") pod \"node-exporter-hw8fk\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") " pod="openshift-monitoring/node-exporter-hw8fk"
Feb 23 15:43:52 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:52.369768 2125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-hw8fk"
Feb 23 15:43:52 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:52.370184473Z" level=info msg="Running pod sandbox: openshift-monitoring/node-exporter-hw8fk/POD" id=db33133f-b21d-4a6d-b7b8-811cca3e01e3 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 15:43:52 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:52.370240679Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 15:43:52 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00095|connmgr|INFO|br-ex<->unix#20: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:43:52 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:52.652595904Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=db33133f-b21d-4a6d-b7b8-811cca3e01e3 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 15:43:52 ip-10-0-136-68 kubenswrapper[2125]: W0223 15:43:52.654974 2125 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75f4efab_251e_4aa5_97d6_4a2a27025ae1.slice/crio-02a316932c00fb6bdfefc407d2c59b98b8c247a18e3653917b91a6127f62d7df.scope WatchSource:0}: Error finding container 02a316932c00fb6bdfefc407d2c59b98b8c247a18e3653917b91a6127f62d7df: Status 404 returned error can't find the container with id 02a316932c00fb6bdfefc407d2c59b98b8c247a18e3653917b91a6127f62d7df
Feb 23 15:43:52 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:52.656057497Z" level=info msg="Ran pod sandbox 02a316932c00fb6bdfefc407d2c59b98b8c247a18e3653917b91a6127f62d7df with infra container: openshift-monitoring/node-exporter-hw8fk/POD" id=db33133f-b21d-4a6d-b7b8-811cca3e01e3 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 15:43:52 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:52.656787057Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:470c222e4c921794e900cd92995b11e0ac46448568333bd61346580285d28613" id=4bf4be5e-22cc-4655-a773-e223772ac0c4 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:52 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:52.656932209Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:470c222e4c921794e900cd92995b11e0ac46448568333bd61346580285d28613 not found" id=4bf4be5e-22cc-4655-a773-e223772ac0c4 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:52 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:52.657497713Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:470c222e4c921794e900cd92995b11e0ac46448568333bd61346580285d28613" id=27761ca0-0ed9-421d-ba4d-9459d4dbafc6 name=/runtime.v1.ImageService/PullImage
Feb 23 15:43:52 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:52.658477635Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:470c222e4c921794e900cd92995b11e0ac46448568333bd61346580285d28613\""
Feb 23 15:43:52 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00096|connmgr|INFO|br-ex<->unix#23: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:43:53 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:53.124242 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-hw8fk" event=&{ID:75f4efab-251e-4aa5-97d6-4a2a27025ae1 Type:ContainerStarted Data:02a316932c00fb6bdfefc407d2c59b98b8c247a18e3653917b91a6127f62d7df}
Feb 23 15:43:53 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:53.537799798Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:470c222e4c921794e900cd92995b11e0ac46448568333bd61346580285d28613\""
Feb 23 15:43:54 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00097|connmgr|INFO|br-ex<->unix#31: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:43:55 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:55.392440649Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:470c222e4c921794e900cd92995b11e0ac46448568333bd61346580285d28613" id=27761ca0-0ed9-421d-ba4d-9459d4dbafc6 name=/runtime.v1.ImageService/PullImage
Feb 23 15:43:55 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:55.393102888Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:470c222e4c921794e900cd92995b11e0ac46448568333bd61346580285d28613" id=49e21c64-583e-4de1-8efb-c61efb0fbd45 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:55 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:55.393947085Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:05b7e8a1fbf3debab1b6ffc89b3540da9556cf7f25a65af04bd4766ad373fac6,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:470c222e4c921794e900cd92995b11e0ac46448568333bd61346580285d28613],Size_:332676464,Uid:nil,Username:nobody,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=49e21c64-583e-4de1-8efb-c61efb0fbd45 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:55 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:55.394531735Z" level=info msg="Creating container: openshift-monitoring/node-exporter-hw8fk/init-textfile" id=2694b08f-ced4-495a-84b6-1f77b603744b name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:43:55 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:55.394613980Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 15:43:55 ip-10-0-136-68 systemd[1]: Started crio-conmon-e9f55e4ed6d7420444ee25c54cefa43c8e8ef15d1200f4a5598c4041c60cf6e6.scope.
Feb 23 15:43:55 ip-10-0-136-68 systemd[1]: Started libcontainer container e9f55e4ed6d7420444ee25c54cefa43c8e8ef15d1200f4a5598c4041c60cf6e6.
Feb 23 15:43:55 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:55.471667347Z" level=info msg="Created container e9f55e4ed6d7420444ee25c54cefa43c8e8ef15d1200f4a5598c4041c60cf6e6: openshift-monitoring/node-exporter-hw8fk/init-textfile" id=2694b08f-ced4-495a-84b6-1f77b603744b name=/runtime.v1.RuntimeService/CreateContainer Feb 23 15:43:55 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:55.472017081Z" level=info msg="Starting container: e9f55e4ed6d7420444ee25c54cefa43c8e8ef15d1200f4a5598c4041c60cf6e6" id=d72f8be4-9cde-4662-99d0-ddd40863bf47 name=/runtime.v1.RuntimeService/StartContainer Feb 23 15:43:55 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:55.477042067Z" level=info msg="Started container" PID=5241 containerID=e9f55e4ed6d7420444ee25c54cefa43c8e8ef15d1200f4a5598c4041c60cf6e6 description=openshift-monitoring/node-exporter-hw8fk/init-textfile id=d72f8be4-9cde-4662-99d0-ddd40863bf47 name=/runtime.v1.RuntimeService/StartContainer sandboxID=02a316932c00fb6bdfefc407d2c59b98b8c247a18e3653917b91a6127f62d7df Feb 23 15:43:55 ip-10-0-136-68 systemd[1]: crio-e9f55e4ed6d7420444ee25c54cefa43c8e8ef15d1200f4a5598c4041c60cf6e6.scope: Succeeded. Feb 23 15:43:55 ip-10-0-136-68 systemd[1]: crio-e9f55e4ed6d7420444ee25c54cefa43c8e8ef15d1200f4a5598c4041c60cf6e6.scope: Consumed 59ms CPU time Feb 23 15:43:55 ip-10-0-136-68 systemd[1]: crio-conmon-e9f55e4ed6d7420444ee25c54cefa43c8e8ef15d1200f4a5598c4041c60cf6e6.scope: Succeeded. 
Feb 23 15:43:55 ip-10-0-136-68 systemd[1]: crio-conmon-e9f55e4ed6d7420444ee25c54cefa43c8e8ef15d1200f4a5598c4041c60cf6e6.scope: Consumed 18ms CPU time Feb 23 15:43:56 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:56.130710 2125 generic.go:296] "Generic (PLEG): container finished" podID=75f4efab-251e-4aa5-97d6-4a2a27025ae1 containerID="e9f55e4ed6d7420444ee25c54cefa43c8e8ef15d1200f4a5598c4041c60cf6e6" exitCode=0 Feb 23 15:43:56 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:56.130751 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-hw8fk" event=&{ID:75f4efab-251e-4aa5-97d6-4a2a27025ae1 Type:ContainerDied Data:e9f55e4ed6d7420444ee25c54cefa43c8e8ef15d1200f4a5598c4041c60cf6e6} Feb 23 15:43:56 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:56.131173671Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:470c222e4c921794e900cd92995b11e0ac46448568333bd61346580285d28613" id=85fb4ed2-6b38-4dc4-9971-12eb42065335 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:43:56 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:56.132125519Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:05b7e8a1fbf3debab1b6ffc89b3540da9556cf7f25a65af04bd4766ad373fac6,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:470c222e4c921794e900cd92995b11e0ac46448568333bd61346580285d28613],Size_:332676464,Uid:nil,Username:nobody,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=85fb4ed2-6b38-4dc4-9971-12eb42065335 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:43:56 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:56.133370676Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:470c222e4c921794e900cd92995b11e0ac46448568333bd61346580285d28613" id=0497c70f-bace-4b3b-b979-84313e1773de name=/runtime.v1.ImageService/ImageStatus Feb 23 15:43:56 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:56.134338585Z" level=info 
msg="Image status: &ImageStatusResponse{Image:&Image{Id:05b7e8a1fbf3debab1b6ffc89b3540da9556cf7f25a65af04bd4766ad373fac6,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:470c222e4c921794e900cd92995b11e0ac46448568333bd61346580285d28613],Size_:332676464,Uid:nil,Username:nobody,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=0497c70f-bace-4b3b-b979-84313e1773de name=/runtime.v1.ImageService/ImageStatus Feb 23 15:43:56 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:56.135617110Z" level=info msg="Creating container: openshift-monitoring/node-exporter-hw8fk/node-exporter" id=dc6517f0-e750-453b-b667-e7e015ee21fb name=/runtime.v1.RuntimeService/CreateContainer Feb 23 15:43:56 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:56.135714575Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 15:43:56 ip-10-0-136-68 systemd[1]: Started crio-conmon-e7959419652f5b24a8403d86d3e316ec0f4bfc4e05009d430020eca12b89b14d.scope. Feb 23 15:43:56 ip-10-0-136-68 systemd[1]: run-runc-e7959419652f5b24a8403d86d3e316ec0f4bfc4e05009d430020eca12b89b14d-runc.14ZhKg.mount: Succeeded. Feb 23 15:43:56 ip-10-0-136-68 systemd[1]: Started libcontainer container e7959419652f5b24a8403d86d3e316ec0f4bfc4e05009d430020eca12b89b14d. 
Feb 23 15:43:56 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:56.222856283Z" level=info msg="Created container e7959419652f5b24a8403d86d3e316ec0f4bfc4e05009d430020eca12b89b14d: openshift-monitoring/node-exporter-hw8fk/node-exporter" id=dc6517f0-e750-453b-b667-e7e015ee21fb name=/runtime.v1.RuntimeService/CreateContainer Feb 23 15:43:56 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:56.223305647Z" level=info msg="Starting container: e7959419652f5b24a8403d86d3e316ec0f4bfc4e05009d430020eca12b89b14d" id=47b935d5-4a48-484a-a559-d46402286ef7 name=/runtime.v1.RuntimeService/StartContainer Feb 23 15:43:56 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:56.228441598Z" level=info msg="Started container" PID=5368 containerID=e7959419652f5b24a8403d86d3e316ec0f4bfc4e05009d430020eca12b89b14d description=openshift-monitoring/node-exporter-hw8fk/node-exporter id=47b935d5-4a48-484a-a559-d46402286ef7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=02a316932c00fb6bdfefc407d2c59b98b8c247a18e3653917b91a6127f62d7df Feb 23 15:43:56 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:56.235349082Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=057bff6a-7653-4c6e-b01a-2e6eecf6af39 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:43:56 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:56.235488710Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=057bff6a-7653-4c6e-b01a-2e6eecf6af39 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:43:56 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:56.235937479Z" level=info 
msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=768dddb4-8cba-4c06-9ce6-11287e227af5 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:43:56 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:56.236045237Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=768dddb4-8cba-4c06-9ce6-11287e227af5 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:43:56 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:56.236651985Z" level=info msg="Creating container: openshift-monitoring/node-exporter-hw8fk/kube-rbac-proxy" id=78547eca-e4cf-40df-8dd5-a667d9bd4ec1 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 15:43:56 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:56.236743889Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 15:43:56 ip-10-0-136-68 systemd[1]: Started crio-conmon-14c03d26750fe00987600f26e5fde9507871bca7fdd8d2fbe974f06debddb2bb.scope. Feb 23 15:43:56 ip-10-0-136-68 systemd[1]: Started libcontainer container 14c03d26750fe00987600f26e5fde9507871bca7fdd8d2fbe974f06debddb2bb. 
Feb 23 15:43:56 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:56.315466755Z" level=info msg="Created container 14c03d26750fe00987600f26e5fde9507871bca7fdd8d2fbe974f06debddb2bb: openshift-monitoring/node-exporter-hw8fk/kube-rbac-proxy" id=78547eca-e4cf-40df-8dd5-a667d9bd4ec1 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 15:43:56 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:56.315812055Z" level=info msg="Starting container: 14c03d26750fe00987600f26e5fde9507871bca7fdd8d2fbe974f06debddb2bb" id=891a1cb6-48af-4185-bf12-2384f4e11738 name=/runtime.v1.RuntimeService/StartContainer Feb 23 15:43:56 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:56.321309056Z" level=info msg="Started container" PID=5410 containerID=14c03d26750fe00987600f26e5fde9507871bca7fdd8d2fbe974f06debddb2bb description=openshift-monitoring/node-exporter-hw8fk/kube-rbac-proxy id=891a1cb6-48af-4185-bf12-2384f4e11738 name=/runtime.v1.RuntimeService/StartContainer sandboxID=02a316932c00fb6bdfefc407d2c59b98b8c247a18e3653917b91a6127f62d7df Feb 23 15:43:57 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:57.132936 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-hw8fk" event=&{ID:75f4efab-251e-4aa5-97d6-4a2a27025ae1 Type:ContainerStarted Data:14c03d26750fe00987600f26e5fde9507871bca7fdd8d2fbe974f06debddb2bb} Feb 23 15:43:57 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:57.132968 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-hw8fk" event=&{ID:75f4efab-251e-4aa5-97d6-4a2a27025ae1 Type:ContainerStarted Data:e7959419652f5b24a8403d86d3e316ec0f4bfc4e05009d430020eca12b89b14d} Feb 23 15:43:57 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:57.309378 2125 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-monitoring/telemeter-client-675d948766-44b26] Feb 23 15:43:57 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:57.309422 2125 topology_manager.go:205] "Topology Admit Handler" Feb 23 15:43:57 
ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-pod38f8ec67_c68b_4783_9d06_95eb33506398.slice. Feb 23 15:43:57 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:57.467054 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38f8ec67-c68b-4783-9d06-95eb33506398-telemeter-trusted-ca-bundle\") pod \"telemeter-client-675d948766-44b26\" (UID: \"38f8ec67-c68b-4783-9d06-95eb33506398\") " pod="openshift-monitoring/telemeter-client-675d948766-44b26" Feb 23 15:43:57 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:57.467132 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/38f8ec67-c68b-4783-9d06-95eb33506398-secret-telemeter-client\") pod \"telemeter-client-675d948766-44b26\" (UID: \"38f8ec67-c68b-4783-9d06-95eb33506398\") " pod="openshift-monitoring/telemeter-client-675d948766-44b26" Feb 23 15:43:57 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:57.467163 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/38f8ec67-c68b-4783-9d06-95eb33506398-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-675d948766-44b26\" (UID: \"38f8ec67-c68b-4783-9d06-95eb33506398\") " pod="openshift-monitoring/telemeter-client-675d948766-44b26" Feb 23 15:43:57 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:57.467181 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/38f8ec67-c68b-4783-9d06-95eb33506398-telemeter-client-tls\") pod \"telemeter-client-675d948766-44b26\" (UID: \"38f8ec67-c68b-4783-9d06-95eb33506398\") " pod="openshift-monitoring/telemeter-client-675d948766-44b26" Feb 23 15:43:57 
ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:57.467199 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/38f8ec67-c68b-4783-9d06-95eb33506398-metrics-client-ca\") pod \"telemeter-client-675d948766-44b26\" (UID: \"38f8ec67-c68b-4783-9d06-95eb33506398\") " pod="openshift-monitoring/telemeter-client-675d948766-44b26" Feb 23 15:43:57 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:57.467256 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mxtt\" (UniqueName: \"kubernetes.io/projected/38f8ec67-c68b-4783-9d06-95eb33506398-kube-api-access-2mxtt\") pod \"telemeter-client-675d948766-44b26\" (UID: \"38f8ec67-c68b-4783-9d06-95eb33506398\") " pod="openshift-monitoring/telemeter-client-675d948766-44b26" Feb 23 15:43:57 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:57.467295 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38f8ec67-c68b-4783-9d06-95eb33506398-serving-certs-ca-bundle\") pod \"telemeter-client-675d948766-44b26\" (UID: \"38f8ec67-c68b-4783-9d06-95eb33506398\") " pod="openshift-monitoring/telemeter-client-675d948766-44b26" Feb 23 15:43:57 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:57.509583 2125 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-monitoring/telemeter-client-675d948766-44b26] Feb 23 15:43:57 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:57.568321 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/38f8ec67-c68b-4783-9d06-95eb33506398-telemeter-client-tls\") pod \"telemeter-client-675d948766-44b26\" (UID: \"38f8ec67-c68b-4783-9d06-95eb33506398\") " pod="openshift-monitoring/telemeter-client-675d948766-44b26" Feb 23 15:43:57 ip-10-0-136-68 kubenswrapper[2125]: I0223 
15:43:57.568363 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/38f8ec67-c68b-4783-9d06-95eb33506398-metrics-client-ca\") pod \"telemeter-client-675d948766-44b26\" (UID: \"38f8ec67-c68b-4783-9d06-95eb33506398\") " pod="openshift-monitoring/telemeter-client-675d948766-44b26" Feb 23 15:43:57 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:57.568391 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-2mxtt\" (UniqueName: \"kubernetes.io/projected/38f8ec67-c68b-4783-9d06-95eb33506398-kube-api-access-2mxtt\") pod \"telemeter-client-675d948766-44b26\" (UID: \"38f8ec67-c68b-4783-9d06-95eb33506398\") " pod="openshift-monitoring/telemeter-client-675d948766-44b26" Feb 23 15:43:57 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:57.568416 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38f8ec67-c68b-4783-9d06-95eb33506398-serving-certs-ca-bundle\") pod \"telemeter-client-675d948766-44b26\" (UID: \"38f8ec67-c68b-4783-9d06-95eb33506398\") " pod="openshift-monitoring/telemeter-client-675d948766-44b26" Feb 23 15:43:57 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:57.568482 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38f8ec67-c68b-4783-9d06-95eb33506398-telemeter-trusted-ca-bundle\") pod \"telemeter-client-675d948766-44b26\" (UID: \"38f8ec67-c68b-4783-9d06-95eb33506398\") " pod="openshift-monitoring/telemeter-client-675d948766-44b26" Feb 23 15:43:57 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:57.568507 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/38f8ec67-c68b-4783-9d06-95eb33506398-secret-telemeter-client\") pod \"telemeter-client-675d948766-44b26\" (UID: 
\"38f8ec67-c68b-4783-9d06-95eb33506398\") " pod="openshift-monitoring/telemeter-client-675d948766-44b26" Feb 23 15:43:57 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:57.568537 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/38f8ec67-c68b-4783-9d06-95eb33506398-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-675d948766-44b26\" (UID: \"38f8ec67-c68b-4783-9d06-95eb33506398\") " pod="openshift-monitoring/telemeter-client-675d948766-44b26" Feb 23 15:43:57 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:57.569077 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38f8ec67-c68b-4783-9d06-95eb33506398-serving-certs-ca-bundle\") pod \"telemeter-client-675d948766-44b26\" (UID: \"38f8ec67-c68b-4783-9d06-95eb33506398\") " pod="openshift-monitoring/telemeter-client-675d948766-44b26" Feb 23 15:43:57 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:57.569077 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/38f8ec67-c68b-4783-9d06-95eb33506398-metrics-client-ca\") pod \"telemeter-client-675d948766-44b26\" (UID: \"38f8ec67-c68b-4783-9d06-95eb33506398\") " pod="openshift-monitoring/telemeter-client-675d948766-44b26" Feb 23 15:43:57 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:57.569324 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38f8ec67-c68b-4783-9d06-95eb33506398-telemeter-trusted-ca-bundle\") pod \"telemeter-client-675d948766-44b26\" (UID: \"38f8ec67-c68b-4783-9d06-95eb33506398\") " pod="openshift-monitoring/telemeter-client-675d948766-44b26" Feb 23 15:43:57 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:57.570647 2125 operation_generator.go:730] "MountVolume.SetUp 
succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/38f8ec67-c68b-4783-9d06-95eb33506398-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-675d948766-44b26\" (UID: \"38f8ec67-c68b-4783-9d06-95eb33506398\") " pod="openshift-monitoring/telemeter-client-675d948766-44b26" Feb 23 15:43:57 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:57.571134 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/38f8ec67-c68b-4783-9d06-95eb33506398-telemeter-client-tls\") pod \"telemeter-client-675d948766-44b26\" (UID: \"38f8ec67-c68b-4783-9d06-95eb33506398\") " pod="openshift-monitoring/telemeter-client-675d948766-44b26" Feb 23 15:43:57 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:57.571217 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/38f8ec67-c68b-4783-9d06-95eb33506398-secret-telemeter-client\") pod \"telemeter-client-675d948766-44b26\" (UID: \"38f8ec67-c68b-4783-9d06-95eb33506398\") " pod="openshift-monitoring/telemeter-client-675d948766-44b26" Feb 23 15:43:57 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:57.607819 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mxtt\" (UniqueName: \"kubernetes.io/projected/38f8ec67-c68b-4783-9d06-95eb33506398-kube-api-access-2mxtt\") pod \"telemeter-client-675d948766-44b26\" (UID: \"38f8ec67-c68b-4783-9d06-95eb33506398\") " pod="openshift-monitoring/telemeter-client-675d948766-44b26" Feb 23 15:43:57 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:57.620455 2125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/telemeter-client-675d948766-44b26" Feb 23 15:43:57 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:57.620857201Z" level=info msg="Running pod sandbox: openshift-monitoring/telemeter-client-675d948766-44b26/POD" id=fe053dac-ed0c-4217-9bd5-1b8e433f64ac name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 15:43:57 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:57.620908850Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 15:43:57 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:57.637151267Z" level=info msg="Got pod network &{Name:telemeter-client-675d948766-44b26 Namespace:openshift-monitoring ID:371d339c2a21dacabc72f32475cd502a6c85f1c6b15537476830da5f7871b8c6 UID:38f8ec67-c68b-4783-9d06-95eb33506398 NetNS:/var/run/netns/1e8991bd-00bf-4b0d-9875-34d62bb269d4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 15:43:57 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:57.637179400Z" level=info msg="Adding pod openshift-monitoring_telemeter-client-675d948766-44b26 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 15:43:57 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): 371d339c2a21dac: link is not ready Feb 23 15:43:57 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): 371d339c2a21dac: link becomes ready Feb 23 15:43:57 ip-10-0-136-68 NetworkManager[1149]: [1677167037.7583] manager: (371d339c2a21dac): new Veth device (/org/freedesktop/NetworkManager/Devices/34) Feb 23 15:43:57 ip-10-0-136-68 NetworkManager[1149]: [1677167037.7588] device (371d339c2a21dac): carrier: link connected Feb 23 15:43:57 ip-10-0-136-68 systemd-udevd[5471]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. 
Feb 23 15:43:57 ip-10-0-136-68 systemd-udevd[5471]: Could not generate persistent MAC address for 371d339c2a21dac: No such file or directory Feb 23 15:43:57 ip-10-0-136-68 kernel: device 371d339c2a21dac entered promiscuous mode Feb 23 15:43:57 ip-10-0-136-68 NetworkManager[1149]: [1677167037.7803] manager: (371d339c2a21dac): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35) Feb 23 15:43:57 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00098|bridge|INFO|bridge br-int: added interface 371d339c2a21dac on port 14 Feb 23 15:43:57 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:57.857793 2125 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-monitoring/telemeter-client-675d948766-44b26] Feb 23 15:43:57 ip-10-0-136-68 crio[2086]: I0223 15:43:57.755955 5461 ovs.go:90] Maximum command line arguments set to: 191102 Feb 23 15:43:57 ip-10-0-136-68 crio[2086]: 2023-02-23T15:43:57Z [verbose] Add: openshift-monitoring:telemeter-client-675d948766-44b26:38f8ec67-c68b-4783-9d06-95eb33506398:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"371d339c2a21dac","mac":"32:b1:ee:14:24:2b"},{"name":"eth0","mac":"0a:58:0a:81:02:09","sandbox":"/var/run/netns/1e8991bd-00bf-4b0d-9875-34d62bb269d4"}],"ips":[{"version":"4","interface":1,"address":"10.129.2.9/23","gateway":"10.129.2.1"}],"dns":{}} Feb 23 15:43:57 ip-10-0-136-68 crio[2086]: I0223 15:43:57.815845 5454 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-monitoring", Name:"telemeter-client-675d948766-44b26", UID:"38f8ec67-c68b-4783-9d06-95eb33506398", APIVersion:"v1", ResourceVersion:"23913", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.129.2.9/23] from ovn-kubernetes Feb 23 15:43:57 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:57.859568454Z" level=info msg="Got pod network &{Name:telemeter-client-675d948766-44b26 Namespace:openshift-monitoring ID:371d339c2a21dacabc72f32475cd502a6c85f1c6b15537476830da5f7871b8c6 
UID:38f8ec67-c68b-4783-9d06-95eb33506398 NetNS:/var/run/netns/1e8991bd-00bf-4b0d-9875-34d62bb269d4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 15:43:57 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:57.859722346Z" level=info msg="Checking pod openshift-monitoring_telemeter-client-675d948766-44b26 for CNI network multus-cni-network (type=multus)" Feb 23 15:43:57 ip-10-0-136-68 kubenswrapper[2125]: W0223 15:43:57.861470 2125 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38f8ec67_c68b_4783_9d06_95eb33506398.slice/crio-371d339c2a21dacabc72f32475cd502a6c85f1c6b15537476830da5f7871b8c6.scope WatchSource:0}: Error finding container 371d339c2a21dacabc72f32475cd502a6c85f1c6b15537476830da5f7871b8c6: Status 404 returned error can't find the container with id 371d339c2a21dacabc72f32475cd502a6c85f1c6b15537476830da5f7871b8c6 Feb 23 15:43:57 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:57.862684661Z" level=info msg="Ran pod sandbox 371d339c2a21dacabc72f32475cd502a6c85f1c6b15537476830da5f7871b8c6 with infra container: openshift-monitoring/telemeter-client-675d948766-44b26/POD" id=fe053dac-ed0c-4217-9bd5-1b8e433f64ac name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 15:43:57 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:57.863385951Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:02c9c2a2ef5941156a767d40abe900a669a8237a8444a3974b360c326a21ffc3" id=024f958a-fd88-40d3-a0e2-d6fbb50f36f0 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:43:57 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:57.863536061Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:02c9c2a2ef5941156a767d40abe900a669a8237a8444a3974b360c326a21ffc3 not found" id=024f958a-fd88-40d3-a0e2-d6fbb50f36f0 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:43:57 ip-10-0-136-68 
crio[2086]: time="2023-02-23 15:43:57.864042255Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:02c9c2a2ef5941156a767d40abe900a669a8237a8444a3974b360c326a21ffc3" id=7f3ab278-7024-4075-b7d2-1da892c43649 name=/runtime.v1.ImageService/PullImage Feb 23 15:43:57 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:57.864861452Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:02c9c2a2ef5941156a767d40abe900a669a8237a8444a3974b360c326a21ffc3\"" Feb 23 15:43:58 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:58.052260 2125 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-monitoring/alertmanager-main-0] Feb 23 15:43:58 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:58.052341 2125 topology_manager.go:205] "Topology Admit Handler" Feb 23 15:43:58 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-pod3ac4a081_240b_441a_af97_d682fecb3ae7.slice. Feb 23 15:43:58 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:58.122518 2125 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-monitoring/alertmanager-main-0] Feb 23 15:43:58 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:58.135614 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-675d948766-44b26" event=&{ID:38f8ec67-c68b-4783-9d06-95eb33506398 Type:ContainerStarted Data:371d339c2a21dacabc72f32475cd502a6c85f1c6b15537476830da5f7871b8c6} Feb 23 15:43:58 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:58.172934 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/3ac4a081-240b-441a-af97-d682fecb3ae7-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"3ac4a081-240b-441a-af97-d682fecb3ae7\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 15:43:58 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:58.172973 2125 
reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3ac4a081-240b-441a-af97-d682fecb3ae7-config-out\") pod \"alertmanager-main-0\" (UID: \"3ac4a081-240b-441a-af97-d682fecb3ae7\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 15:43:58 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:58.172993 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ac4a081-240b-441a-af97-d682fecb3ae7-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"3ac4a081-240b-441a-af97-d682fecb3ae7\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 15:43:58 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:58.173019 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3ac4a081-240b-441a-af97-d682fecb3ae7-tls-assets\") pod \"alertmanager-main-0\" (UID: \"3ac4a081-240b-441a-af97-d682fecb3ae7\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 15:43:58 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:58.173077 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/3ac4a081-240b-441a-af97-d682fecb3ae7-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"3ac4a081-240b-441a-af97-d682fecb3ae7\") " pod="openshift-monitoring/alertmanager-main-0" Feb 23 15:43:58 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:58.173135 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/3ac4a081-240b-441a-af97-d682fecb3ae7-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"3ac4a081-240b-441a-af97-d682fecb3ae7\") " 
pod="openshift-monitoring/alertmanager-main-0"
Feb 23 15:43:58 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:58.173154 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/3ac4a081-240b-441a-af97-d682fecb3ae7-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"3ac4a081-240b-441a-af97-d682fecb3ae7\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 15:43:58 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:58.173176 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-proxy\" (UniqueName: \"kubernetes.io/secret/3ac4a081-240b-441a-af97-d682fecb3ae7-secret-alertmanager-main-proxy\") pod \"alertmanager-main-0\" (UID: \"3ac4a081-240b-441a-af97-d682fecb3ae7\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 15:43:58 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:58.173203 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3ac4a081-240b-441a-af97-d682fecb3ae7-web-config\") pod \"alertmanager-main-0\" (UID: \"3ac4a081-240b-441a-af97-d682fecb3ae7\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 15:43:58 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:58.173254 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9kht\" (UniqueName: \"kubernetes.io/projected/3ac4a081-240b-441a-af97-d682fecb3ae7-kube-api-access-v9kht\") pod \"alertmanager-main-0\" (UID: \"3ac4a081-240b-441a-af97-d682fecb3ae7\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 15:43:58 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:58.173357 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/3ac4a081-240b-441a-af97-d682fecb3ae7-config-volume\") pod \"alertmanager-main-0\" (UID: \"3ac4a081-240b-441a-af97-d682fecb3ae7\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 15:43:58 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:58.173387 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3ac4a081-240b-441a-af97-d682fecb3ae7-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"3ac4a081-240b-441a-af97-d682fecb3ae7\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 15:43:58 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:58.273960 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3ac4a081-240b-441a-af97-d682fecb3ae7-config-out\") pod \"alertmanager-main-0\" (UID: \"3ac4a081-240b-441a-af97-d682fecb3ae7\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 15:43:58 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:58.274004 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ac4a081-240b-441a-af97-d682fecb3ae7-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"3ac4a081-240b-441a-af97-d682fecb3ae7\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 15:43:58 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:58.274029 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3ac4a081-240b-441a-af97-d682fecb3ae7-tls-assets\") pod \"alertmanager-main-0\" (UID: \"3ac4a081-240b-441a-af97-d682fecb3ae7\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 15:43:58 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:58.274058 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/3ac4a081-240b-441a-af97-d682fecb3ae7-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"3ac4a081-240b-441a-af97-d682fecb3ae7\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 15:43:58 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:58.274084 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/3ac4a081-240b-441a-af97-d682fecb3ae7-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"3ac4a081-240b-441a-af97-d682fecb3ae7\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 15:43:58 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:58.274123 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/3ac4a081-240b-441a-af97-d682fecb3ae7-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"3ac4a081-240b-441a-af97-d682fecb3ae7\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 15:43:58 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:58.274153 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-proxy\" (UniqueName: \"kubernetes.io/secret/3ac4a081-240b-441a-af97-d682fecb3ae7-secret-alertmanager-main-proxy\") pod \"alertmanager-main-0\" (UID: \"3ac4a081-240b-441a-af97-d682fecb3ae7\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 15:43:58 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:58.274179 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3ac4a081-240b-441a-af97-d682fecb3ae7-web-config\") pod \"alertmanager-main-0\" (UID: \"3ac4a081-240b-441a-af97-d682fecb3ae7\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 15:43:58 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:58.274240 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-v9kht\" (UniqueName: \"kubernetes.io/projected/3ac4a081-240b-441a-af97-d682fecb3ae7-kube-api-access-v9kht\") pod \"alertmanager-main-0\" (UID: \"3ac4a081-240b-441a-af97-d682fecb3ae7\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 15:43:58 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:58.274273 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/3ac4a081-240b-441a-af97-d682fecb3ae7-config-volume\") pod \"alertmanager-main-0\" (UID: \"3ac4a081-240b-441a-af97-d682fecb3ae7\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 15:43:58 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:58.274318 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3ac4a081-240b-441a-af97-d682fecb3ae7-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"3ac4a081-240b-441a-af97-d682fecb3ae7\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 15:43:58 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:58.274371 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/3ac4a081-240b-441a-af97-d682fecb3ae7-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"3ac4a081-240b-441a-af97-d682fecb3ae7\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 15:43:58 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:58.274377 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3ac4a081-240b-441a-af97-d682fecb3ae7-config-out\") pod \"alertmanager-main-0\" (UID: \"3ac4a081-240b-441a-af97-d682fecb3ae7\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 15:43:58 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:58.274815 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ac4a081-240b-441a-af97-d682fecb3ae7-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"3ac4a081-240b-441a-af97-d682fecb3ae7\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 15:43:58 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:58.275175 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/3ac4a081-240b-441a-af97-d682fecb3ae7-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"3ac4a081-240b-441a-af97-d682fecb3ae7\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 15:43:58 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:58.275406 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3ac4a081-240b-441a-af97-d682fecb3ae7-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"3ac4a081-240b-441a-af97-d682fecb3ae7\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 15:43:58 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:58.276913 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/3ac4a081-240b-441a-af97-d682fecb3ae7-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"3ac4a081-240b-441a-af97-d682fecb3ae7\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 15:43:58 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:58.277206 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3ac4a081-240b-441a-af97-d682fecb3ae7-tls-assets\") pod \"alertmanager-main-0\" (UID: \"3ac4a081-240b-441a-af97-d682fecb3ae7\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 15:43:58 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:58.277313 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/3ac4a081-240b-441a-af97-d682fecb3ae7-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"3ac4a081-240b-441a-af97-d682fecb3ae7\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 15:43:58 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:58.277384 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/3ac4a081-240b-441a-af97-d682fecb3ae7-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"3ac4a081-240b-441a-af97-d682fecb3ae7\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 15:43:58 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:58.278540 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/3ac4a081-240b-441a-af97-d682fecb3ae7-config-volume\") pod \"alertmanager-main-0\" (UID: \"3ac4a081-240b-441a-af97-d682fecb3ae7\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 15:43:58 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:58.279234 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-proxy\" (UniqueName: \"kubernetes.io/secret/3ac4a081-240b-441a-af97-d682fecb3ae7-secret-alertmanager-main-proxy\") pod \"alertmanager-main-0\" (UID: \"3ac4a081-240b-441a-af97-d682fecb3ae7\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 15:43:58 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:58.279266 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3ac4a081-240b-441a-af97-d682fecb3ae7-web-config\") pod \"alertmanager-main-0\" (UID: \"3ac4a081-240b-441a-af97-d682fecb3ae7\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 15:43:58 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:58.289708 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9kht\" (UniqueName: \"kubernetes.io/projected/3ac4a081-240b-441a-af97-d682fecb3ae7-kube-api-access-v9kht\") pod \"alertmanager-main-0\" (UID: \"3ac4a081-240b-441a-af97-d682fecb3ae7\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 23 15:43:58 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:58.367815 2125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Feb 23 15:43:58 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:58.368267842Z" level=info msg="Running pod sandbox: openshift-monitoring/alertmanager-main-0/POD" id=7499b9bc-9717-42b3-a1cb-d36b6fb01452 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 15:43:58 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:58.368334388Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 15:43:58 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:58.383440750Z" level=info msg="Got pod network &{Name:alertmanager-main-0 Namespace:openshift-monitoring ID:fefe52f4c0671eab0be744708d6f2fdf3974dd556275bbdf477c3bb9235603be UID:3ac4a081-240b-441a-af97-d682fecb3ae7 NetNS:/var/run/netns/25ff3566-52b5-4f2b-a3c1-a0c8d76cdece Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 15:43:58 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:58.383480140Z" level=info msg="Adding pod openshift-monitoring_alertmanager-main-0 to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 15:43:58 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): fefe52f4c0671ea: link is not ready
Feb 23 15:43:58 ip-10-0-136-68 NetworkManager[1149]: [1677167038.5003] manager: (fefe52f4c0671ea): new Veth device (/org/freedesktop/NetworkManager/Devices/36)
Feb 23 15:43:58 ip-10-0-136-68 systemd-udevd[5521]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 23 15:43:58 ip-10-0-136-68 systemd-udevd[5521]: Could not generate persistent MAC address for fefe52f4c0671ea: No such file or directory
Feb 23 15:43:58 ip-10-0-136-68 NetworkManager[1149]: [1677167038.5008] device (fefe52f4c0671ea): carrier: link connected
Feb 23 15:43:58 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): fefe52f4c0671ea: link becomes ready
Feb 23 15:43:58 ip-10-0-136-68 kernel: device fefe52f4c0671ea entered promiscuous mode
Feb 23 15:43:58 ip-10-0-136-68 NetworkManager[1149]: [1677167038.5194] manager: (fefe52f4c0671ea): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Feb 23 15:43:58 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00099|bridge|INFO|bridge br-int: added interface fefe52f4c0671ea on port 15
Feb 23 15:43:58 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:58.577522 2125 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-monitoring/alertmanager-main-0]
Feb 23 15:43:58 ip-10-0-136-68 crio[2086]: I0223 15:43:58.498002 5511 ovs.go:90] Maximum command line arguments set to: 191102
Feb 23 15:43:58 ip-10-0-136-68 crio[2086]: 2023-02-23T15:43:58Z [verbose] Add: openshift-monitoring:alertmanager-main-0:3ac4a081-240b-441a-af97-d682fecb3ae7:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"fefe52f4c0671ea","mac":"d6:2a:b7:dc:89:bd"},{"name":"eth0","mac":"0a:58:0a:81:02:0a","sandbox":"/var/run/netns/25ff3566-52b5-4f2b-a3c1-a0c8d76cdece"}],"ips":[{"version":"4","interface":1,"address":"10.129.2.10/23","gateway":"10.129.2.1"}],"dns":{}}
Feb 23 15:43:58 ip-10-0-136-68 crio[2086]: I0223 15:43:58.558808 5504 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-monitoring", Name:"alertmanager-main-0", UID:"3ac4a081-240b-441a-af97-d682fecb3ae7", APIVersion:"v1", ResourceVersion:"23968", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.129.2.10/23] from ovn-kubernetes
Feb 23 15:43:58 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:58.578647247Z" level=info msg="Got pod network &{Name:alertmanager-main-0 Namespace:openshift-monitoring ID:fefe52f4c0671eab0be744708d6f2fdf3974dd556275bbdf477c3bb9235603be UID:3ac4a081-240b-441a-af97-d682fecb3ae7 NetNS:/var/run/netns/25ff3566-52b5-4f2b-a3c1-a0c8d76cdece Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 15:43:58 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:58.578779780Z" level=info msg="Checking pod openshift-monitoring_alertmanager-main-0 for CNI network multus-cni-network (type=multus)"
Feb 23 15:43:58 ip-10-0-136-68 kubenswrapper[2125]: W0223 15:43:58.581507 2125 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ac4a081_240b_441a_af97_d682fecb3ae7.slice/crio-fefe52f4c0671eab0be744708d6f2fdf3974dd556275bbdf477c3bb9235603be.scope WatchSource:0}: Error finding container fefe52f4c0671eab0be744708d6f2fdf3974dd556275bbdf477c3bb9235603be: Status 404 returned error can't find the container with id fefe52f4c0671eab0be744708d6f2fdf3974dd556275bbdf477c3bb9235603be
Feb 23 15:43:58 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:58.582165016Z" level=info msg="Ran pod sandbox fefe52f4c0671eab0be744708d6f2fdf3974dd556275bbdf477c3bb9235603be with infra container: openshift-monitoring/alertmanager-main-0/POD" id=7499b9bc-9717-42b3-a1cb-d36b6fb01452 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 15:43:58 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:58.582903768Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fb9be9744b6ce1f692325241b066184282ea7ee06dbd191a0f19b55b0ad8736" id=3a6be0fe-2271-43e0-88a0-f0ffde489803 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:58 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:58.583044891Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fb9be9744b6ce1f692325241b066184282ea7ee06dbd191a0f19b55b0ad8736 not found" id=3a6be0fe-2271-43e0-88a0-f0ffde489803 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:58 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:58.583534454Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fb9be9744b6ce1f692325241b066184282ea7ee06dbd191a0f19b55b0ad8736" id=c5f804a6-c0cd-4df3-95f2-4f946d4afcef name=/runtime.v1.ImageService/PullImage
Feb 23 15:43:58 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:58.584477178Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fb9be9744b6ce1f692325241b066184282ea7ee06dbd191a0f19b55b0ad8736\""
Feb 23 15:43:58 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:58.722182505Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:02c9c2a2ef5941156a767d40abe900a669a8237a8444a3974b360c326a21ffc3\""
Feb 23 15:43:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:59.067424 2125 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-monitoring/thanos-querier-8654d9f96d-892l6]
Feb 23 15:43:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:59.067470 2125 topology_manager.go:205] "Topology Admit Handler"
Feb 23 15:43:59 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-podb64af5e5_e41c_4886_a88b_39556a3f4b21.slice.
Feb 23 15:43:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:59.120270 2125 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-monitoring/thanos-querier-8654d9f96d-892l6]
Feb 23 15:43:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:59.138226 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event=&{ID:3ac4a081-240b-441a-af97-d682fecb3ae7 Type:ContainerStarted Data:fefe52f4c0671eab0be744708d6f2fdf3974dd556275bbdf477c3bb9235603be}
Feb 23 15:43:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:59.181366 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-oauth-cookie\" (UniqueName: \"kubernetes.io/secret/b64af5e5-e41c-4886-a88b-39556a3f4b21-secret-thanos-querier-oauth-cookie\") pod \"thanos-querier-8654d9f96d-892l6\" (UID: \"b64af5e5-e41c-4886-a88b-39556a3f4b21\") " pod="openshift-monitoring/thanos-querier-8654d9f96d-892l6"
Feb 23 15:43:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:59.181406 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/b64af5e5-e41c-4886-a88b-39556a3f4b21-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-8654d9f96d-892l6\" (UID: \"b64af5e5-e41c-4886-a88b-39556a3f4b21\") " pod="openshift-monitoring/thanos-querier-8654d9f96d-892l6"
Feb 23 15:43:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:59.181439 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b64af5e5-e41c-4886-a88b-39556a3f4b21-metrics-client-ca\") pod \"thanos-querier-8654d9f96d-892l6\" (UID: \"b64af5e5-e41c-4886-a88b-39556a3f4b21\") " pod="openshift-monitoring/thanos-querier-8654d9f96d-892l6"
Feb 23 15:43:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:59.181497 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzg2g\" (UniqueName: \"kubernetes.io/projected/b64af5e5-e41c-4886-a88b-39556a3f4b21-kube-api-access-dzg2g\") pod \"thanos-querier-8654d9f96d-892l6\" (UID: \"b64af5e5-e41c-4886-a88b-39556a3f4b21\") " pod="openshift-monitoring/thanos-querier-8654d9f96d-892l6"
Feb 23 15:43:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:59.181518 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b64af5e5-e41c-4886-a88b-39556a3f4b21-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-8654d9f96d-892l6\" (UID: \"b64af5e5-e41c-4886-a88b-39556a3f4b21\") " pod="openshift-monitoring/thanos-querier-8654d9f96d-892l6"
Feb 23 15:43:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:59.181622 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/b64af5e5-e41c-4886-a88b-39556a3f4b21-secret-grpc-tls\") pod \"thanos-querier-8654d9f96d-892l6\" (UID: \"b64af5e5-e41c-4886-a88b-39556a3f4b21\") " pod="openshift-monitoring/thanos-querier-8654d9f96d-892l6"
Feb 23 15:43:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:59.181692 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/b64af5e5-e41c-4886-a88b-39556a3f4b21-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-8654d9f96d-892l6\" (UID: \"b64af5e5-e41c-4886-a88b-39556a3f4b21\") " pod="openshift-monitoring/thanos-querier-8654d9f96d-892l6"
Feb 23 15:43:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:59.181724 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-querier-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b64af5e5-e41c-4886-a88b-39556a3f4b21-thanos-querier-trusted-ca-bundle\") pod \"thanos-querier-8654d9f96d-892l6\" (UID: \"b64af5e5-e41c-4886-a88b-39556a3f4b21\") " pod="openshift-monitoring/thanos-querier-8654d9f96d-892l6"
Feb 23 15:43:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:59.181791 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/b64af5e5-e41c-4886-a88b-39556a3f4b21-secret-thanos-querier-tls\") pod \"thanos-querier-8654d9f96d-892l6\" (UID: \"b64af5e5-e41c-4886-a88b-39556a3f4b21\") " pod="openshift-monitoring/thanos-querier-8654d9f96d-892l6"
Feb 23 15:43:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:59.282735 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/b64af5e5-e41c-4886-a88b-39556a3f4b21-secret-thanos-querier-tls\") pod \"thanos-querier-8654d9f96d-892l6\" (UID: \"b64af5e5-e41c-4886-a88b-39556a3f4b21\") " pod="openshift-monitoring/thanos-querier-8654d9f96d-892l6"
Feb 23 15:43:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:59.282792 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-oauth-cookie\" (UniqueName: \"kubernetes.io/secret/b64af5e5-e41c-4886-a88b-39556a3f4b21-secret-thanos-querier-oauth-cookie\") pod \"thanos-querier-8654d9f96d-892l6\" (UID: \"b64af5e5-e41c-4886-a88b-39556a3f4b21\") " pod="openshift-monitoring/thanos-querier-8654d9f96d-892l6"
Feb 23 15:43:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:59.282823 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/b64af5e5-e41c-4886-a88b-39556a3f4b21-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-8654d9f96d-892l6\" (UID: \"b64af5e5-e41c-4886-a88b-39556a3f4b21\") " pod="openshift-monitoring/thanos-querier-8654d9f96d-892l6"
Feb 23 15:43:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:59.282850 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b64af5e5-e41c-4886-a88b-39556a3f4b21-metrics-client-ca\") pod \"thanos-querier-8654d9f96d-892l6\" (UID: \"b64af5e5-e41c-4886-a88b-39556a3f4b21\") " pod="openshift-monitoring/thanos-querier-8654d9f96d-892l6"
Feb 23 15:43:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:59.282880 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-dzg2g\" (UniqueName: \"kubernetes.io/projected/b64af5e5-e41c-4886-a88b-39556a3f4b21-kube-api-access-dzg2g\") pod \"thanos-querier-8654d9f96d-892l6\" (UID: \"b64af5e5-e41c-4886-a88b-39556a3f4b21\") " pod="openshift-monitoring/thanos-querier-8654d9f96d-892l6"
Feb 23 15:43:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:59.282915 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b64af5e5-e41c-4886-a88b-39556a3f4b21-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-8654d9f96d-892l6\" (UID: \"b64af5e5-e41c-4886-a88b-39556a3f4b21\") " pod="openshift-monitoring/thanos-querier-8654d9f96d-892l6"
Feb 23 15:43:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:59.282942 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/b64af5e5-e41c-4886-a88b-39556a3f4b21-secret-grpc-tls\") pod \"thanos-querier-8654d9f96d-892l6\" (UID: \"b64af5e5-e41c-4886-a88b-39556a3f4b21\") " pod="openshift-monitoring/thanos-querier-8654d9f96d-892l6"
Feb 23 15:43:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:59.282978 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/b64af5e5-e41c-4886-a88b-39556a3f4b21-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-8654d9f96d-892l6\" (UID: \"b64af5e5-e41c-4886-a88b-39556a3f4b21\") " pod="openshift-monitoring/thanos-querier-8654d9f96d-892l6"
Feb 23 15:43:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:59.283004 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"thanos-querier-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b64af5e5-e41c-4886-a88b-39556a3f4b21-thanos-querier-trusted-ca-bundle\") pod \"thanos-querier-8654d9f96d-892l6\" (UID: \"b64af5e5-e41c-4886-a88b-39556a3f4b21\") " pod="openshift-monitoring/thanos-querier-8654d9f96d-892l6"
Feb 23 15:43:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:59.283736 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"thanos-querier-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b64af5e5-e41c-4886-a88b-39556a3f4b21-thanos-querier-trusted-ca-bundle\") pod \"thanos-querier-8654d9f96d-892l6\" (UID: \"b64af5e5-e41c-4886-a88b-39556a3f4b21\") " pod="openshift-monitoring/thanos-querier-8654d9f96d-892l6"
Feb 23 15:43:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:59.283768 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b64af5e5-e41c-4886-a88b-39556a3f4b21-metrics-client-ca\") pod \"thanos-querier-8654d9f96d-892l6\" (UID: \"b64af5e5-e41c-4886-a88b-39556a3f4b21\") " pod="openshift-monitoring/thanos-querier-8654d9f96d-892l6"
Feb 23 15:43:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:59.285539 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b64af5e5-e41c-4886-a88b-39556a3f4b21-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-8654d9f96d-892l6\" (UID: \"b64af5e5-e41c-4886-a88b-39556a3f4b21\") " pod="openshift-monitoring/thanos-querier-8654d9f96d-892l6"
Feb 23 15:43:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:59.285723 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/b64af5e5-e41c-4886-a88b-39556a3f4b21-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-8654d9f96d-892l6\" (UID: \"b64af5e5-e41c-4886-a88b-39556a3f4b21\") " pod="openshift-monitoring/thanos-querier-8654d9f96d-892l6"
Feb 23 15:43:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:59.285828 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/b64af5e5-e41c-4886-a88b-39556a3f4b21-secret-thanos-querier-tls\") pod \"thanos-querier-8654d9f96d-892l6\" (UID: \"b64af5e5-e41c-4886-a88b-39556a3f4b21\") " pod="openshift-monitoring/thanos-querier-8654d9f96d-892l6"
Feb 23 15:43:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:59.286119 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-oauth-cookie\" (UniqueName: \"kubernetes.io/secret/b64af5e5-e41c-4886-a88b-39556a3f4b21-secret-thanos-querier-oauth-cookie\") pod \"thanos-querier-8654d9f96d-892l6\" (UID: \"b64af5e5-e41c-4886-a88b-39556a3f4b21\") " pod="openshift-monitoring/thanos-querier-8654d9f96d-892l6"
Feb 23 15:43:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:59.287252 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/b64af5e5-e41c-4886-a88b-39556a3f4b21-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-8654d9f96d-892l6\" (UID: \"b64af5e5-e41c-4886-a88b-39556a3f4b21\") " pod="openshift-monitoring/thanos-querier-8654d9f96d-892l6"
Feb 23 15:43:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:59.287466 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/b64af5e5-e41c-4886-a88b-39556a3f4b21-secret-grpc-tls\") pod \"thanos-querier-8654d9f96d-892l6\" (UID: \"b64af5e5-e41c-4886-a88b-39556a3f4b21\") " pod="openshift-monitoring/thanos-querier-8654d9f96d-892l6"
Feb 23 15:43:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:59.301853 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzg2g\" (UniqueName: \"kubernetes.io/projected/b64af5e5-e41c-4886-a88b-39556a3f4b21-kube-api-access-dzg2g\") pod \"thanos-querier-8654d9f96d-892l6\" (UID: \"b64af5e5-e41c-4886-a88b-39556a3f4b21\") " pod="openshift-monitoring/thanos-querier-8654d9f96d-892l6"
Feb 23 15:43:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:59.379425 2125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-8654d9f96d-892l6"
Feb 23 15:43:59 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:59.379898828Z" level=info msg="Running pod sandbox: openshift-monitoring/thanos-querier-8654d9f96d-892l6/POD" id=afa7fd9e-5b51-4e26-903a-3054d8ecde0b name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 15:43:59 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:59.379963491Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 15:43:59 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:59.395073202Z" level=info msg="Got pod network &{Name:thanos-querier-8654d9f96d-892l6 Namespace:openshift-monitoring ID:8a6086c30905e9778328b4943c655c0e9e8c2c54372aff32cd2923ed14af9ed1 UID:b64af5e5-e41c-4886-a88b-39556a3f4b21 NetNS:/var/run/netns/aab5b03a-6d12-49fb-9628-2d412137c7fb Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 15:43:59 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:59.395100770Z" level=info msg="Adding pod openshift-monitoring_thanos-querier-8654d9f96d-892l6 to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 15:43:59 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:59.465561812Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fb9be9744b6ce1f692325241b066184282ea7ee06dbd191a0f19b55b0ad8736\""
Feb 23 15:43:59 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): 8a6086c30905e97: link is not ready
Feb 23 15:43:59 ip-10-0-136-68 systemd-udevd[5569]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 23 15:43:59 ip-10-0-136-68 systemd-udevd[5569]: Could not generate persistent MAC address for 8a6086c30905e97: No such file or directory
Feb 23 15:43:59 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): 8a6086c30905e97: link becomes ready
Feb 23 15:43:59 ip-10-0-136-68 NetworkManager[1149]: [1677167039.5257] device (8a6086c30905e97): carrier: link connected
Feb 23 15:43:59 ip-10-0-136-68 NetworkManager[1149]: [1677167039.5260] manager: (8a6086c30905e97): new Veth device (/org/freedesktop/NetworkManager/Devices/38)
Feb 23 15:43:59 ip-10-0-136-68 kernel: device 8a6086c30905e97 entered promiscuous mode
Feb 23 15:43:59 ip-10-0-136-68 NetworkManager[1149]: [1677167039.5465] manager: (8a6086c30905e97): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/39)
Feb 23 15:43:59 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00100|bridge|INFO|bridge br-int: added interface 8a6086c30905e97 on port 16
Feb 23 15:43:59 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:43:59.606048 2125 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-monitoring/thanos-querier-8654d9f96d-892l6]
Feb 23 15:43:59 ip-10-0-136-68 crio[2086]: I0223 15:43:59.521872 5559 ovs.go:90] Maximum command line arguments set to: 191102
Feb 23 15:43:59 ip-10-0-136-68 crio[2086]: 2023-02-23T15:43:59Z [verbose] Add: openshift-monitoring:thanos-querier-8654d9f96d-892l6:b64af5e5-e41c-4886-a88b-39556a3f4b21:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"8a6086c30905e97","mac":"5a:da:ed:4b:0a:05"},{"name":"eth0","mac":"0a:58:0a:81:02:0b","sandbox":"/var/run/netns/aab5b03a-6d12-49fb-9628-2d412137c7fb"}],"ips":[{"version":"4","interface":1,"address":"10.129.2.11/23","gateway":"10.129.2.1"}],"dns":{}}
Feb 23 15:43:59 ip-10-0-136-68 crio[2086]: I0223 15:43:59.584450 5552 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-monitoring", Name:"thanos-querier-8654d9f96d-892l6", UID:"b64af5e5-e41c-4886-a88b-39556a3f4b21", APIVersion:"v1", ResourceVersion:"24005", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.129.2.11/23] from ovn-kubernetes
Feb 23 15:43:59 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:59.607157465Z" level=info msg="Got pod network &{Name:thanos-querier-8654d9f96d-892l6 Namespace:openshift-monitoring ID:8a6086c30905e9778328b4943c655c0e9e8c2c54372aff32cd2923ed14af9ed1 UID:b64af5e5-e41c-4886-a88b-39556a3f4b21 NetNS:/var/run/netns/aab5b03a-6d12-49fb-9628-2d412137c7fb Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 15:43:59 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:59.607300107Z" level=info msg="Checking pod openshift-monitoring_thanos-querier-8654d9f96d-892l6 for CNI network multus-cni-network (type=multus)"
Feb 23 15:43:59 ip-10-0-136-68 kubenswrapper[2125]: W0223 15:43:59.608773 2125 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb64af5e5_e41c_4886_a88b_39556a3f4b21.slice/crio-8a6086c30905e9778328b4943c655c0e9e8c2c54372aff32cd2923ed14af9ed1.scope WatchSource:0}: Error finding container 8a6086c30905e9778328b4943c655c0e9e8c2c54372aff32cd2923ed14af9ed1: Status 404 returned error can't find the container with id 8a6086c30905e9778328b4943c655c0e9e8c2c54372aff32cd2923ed14af9ed1
Feb 23 15:43:59 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:59.609821830Z" level=info msg="Ran pod sandbox 8a6086c30905e9778328b4943c655c0e9e8c2c54372aff32cd2923ed14af9ed1 with infra container: openshift-monitoring/thanos-querier-8654d9f96d-892l6/POD" id=afa7fd9e-5b51-4e26-903a-3054d8ecde0b name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 15:43:59 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:59.610583090Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:927fcfe37d62457bfd0932c16a51342e303d9f92d19244b389bb08c30b1b5bec" id=258a4ab4-49fb-457b-bf4a-d4fcc7984e78 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:59 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:59.610725411Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:927fcfe37d62457bfd0932c16a51342e303d9f92d19244b389bb08c30b1b5bec not found" id=258a4ab4-49fb-457b-bf4a-d4fcc7984e78 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:43:59 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:59.611217034Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:927fcfe37d62457bfd0932c16a51342e303d9f92d19244b389bb08c30b1b5bec" id=aa68a32e-d766-4f16-81a8-ec2ba54de84b name=/runtime.v1.ImageService/PullImage
Feb 23 15:43:59 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:43:59.612009288Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:927fcfe37d62457bfd0932c16a51342e303d9f92d19244b389bb08c30b1b5bec\""
Feb 23 15:44:00 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:00.140276 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8654d9f96d-892l6" event=&{ID:b64af5e5-e41c-4886-a88b-39556a3f4b21 Type:ContainerStarted Data:8a6086c30905e9778328b4943c655c0e9e8c2c54372aff32cd2923ed14af9ed1}
Feb 23 15:44:00 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:00.533837311Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:927fcfe37d62457bfd0932c16a51342e303d9f92d19244b389bb08c30b1b5bec\""
Feb 23 15:44:00 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:00.928051261Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:02c9c2a2ef5941156a767d40abe900a669a8237a8444a3974b360c326a21ffc3" id=7f3ab278-7024-4075-b7d2-1da892c43649 name=/runtime.v1.ImageService/PullImage
Feb 23 15:44:00 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:00.928800148Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:02c9c2a2ef5941156a767d40abe900a669a8237a8444a3974b360c326a21ffc3" id=e9334fce-ae6a-454d-a191-809028046587 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:44:00 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:00.929749546Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:c472af0112c52366e300281db2d68047eae1bcf754b45b82cd27754682baa05d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:02c9c2a2ef5941156a767d40abe900a669a8237a8444a3974b360c326a21ffc3],Size_:395287840,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=e9334fce-ae6a-454d-a191-809028046587 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:44:00 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:00.931534453Z" level=info msg="Creating container: openshift-monitoring/telemeter-client-675d948766-44b26/telemeter-client" id=bf0ea562-766e-4e61-b63b-944030688dd2 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:44:00 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:00.931643496Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 15:44:00 ip-10-0-136-68 systemd[1]: Started crio-conmon-b281e29e0f86a35608fd84582645011098d8b627136b99702b86457377d0f7da.scope.
Feb 23 15:44:00 ip-10-0-136-68 systemd[1]: Started libcontainer container b281e29e0f86a35608fd84582645011098d8b627136b99702b86457377d0f7da.
Feb 23 15:44:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:01.069992832Z" level=info msg="Created container b281e29e0f86a35608fd84582645011098d8b627136b99702b86457377d0f7da: openshift-monitoring/telemeter-client-675d948766-44b26/telemeter-client" id=bf0ea562-766e-4e61-b63b-944030688dd2 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:44:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:01.070439718Z" level=info msg="Starting container: b281e29e0f86a35608fd84582645011098d8b627136b99702b86457377d0f7da" id=a54dddf5-d6ce-4f79-8417-0f157ba85763 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 15:44:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:01.088809584Z" level=info msg="Started container" PID=5631 containerID=b281e29e0f86a35608fd84582645011098d8b627136b99702b86457377d0f7da description=openshift-monitoring/telemeter-client-675d948766-44b26/telemeter-client id=a54dddf5-d6ce-4f79-8417-0f157ba85763 name=/runtime.v1.RuntimeService/StartContainer sandboxID=371d339c2a21dacabc72f32475cd502a6c85f1c6b15537476830da5f7871b8c6
Feb 23 15:44:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:01.099071446Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:54d3aea3fb46824c7af49ff90c30c44c5409b64745f649c1e19f43c58dab6008" id=86310c6d-dd5e-4ce7-8a57-aa1fd8c9d93f name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:44:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:01.099230398Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:54d3aea3fb46824c7af49ff90c30c44c5409b64745f649c1e19f43c58dab6008 not found" id=86310c6d-dd5e-4ce7-8a57-aa1fd8c9d93f name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:44:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:01.099756387Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:54d3aea3fb46824c7af49ff90c30c44c5409b64745f649c1e19f43c58dab6008" id=d1f5d9dd-e7cc-482e-beec-f1234bdd7483 name=/runtime.v1.ImageService/PullImage
Feb 23 15:44:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:01.100698436Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:54d3aea3fb46824c7af49ff90c30c44c5409b64745f649c1e19f43c58dab6008\""
Feb 23 15:44:01 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:01.142919 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-675d948766-44b26" event=&{ID:38f8ec67-c68b-4783-9d06-95eb33506398 Type:ContainerStarted Data:b281e29e0f86a35608fd84582645011098d8b627136b99702b86457377d0f7da}
Feb 23 15:44:01 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:01.478792 2125 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5f79c9c848-f9wqq"
Feb 23 15:44:01 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:01.957276 2125 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-monitoring/prometheus-k8s-0]
Feb 23 15:44:01 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:01.957349 2125 topology_manager.go:205] "Topology Admit Handler"
Feb 23 15:44:01 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-pod6dabd947_ddab_4fdb_9e78_cb27f3551554.slice.
Feb 23 15:44:01 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:01.992497271Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:54d3aea3fb46824c7af49ff90c30c44c5409b64745f649c1e19f43c58dab6008\""
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.067631 2125 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-monitoring/prometheus-k8s-0]
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.102832 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6dabd947-ddab-4fdb-9e78-cb27f3551554-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.102887 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-etcd-client-certs\" (UniqueName: \"kubernetes.io/secret/6dabd947-ddab-4fdb-9e78-cb27f3551554-secret-kube-etcd-client-certs\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.102920 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/6dabd947-ddab-4fdb-9e78-cb27f3551554-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.102945 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-proxy\" (UniqueName: \"kubernetes.io/secret/6dabd947-ddab-4fdb-9e78-cb27f3551554-secret-prometheus-k8s-proxy\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.102973 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl2lv\" (UniqueName: \"kubernetes.io/projected/6dabd947-ddab-4fdb-9e78-cb27f3551554-kube-api-access-vl2lv\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.102998 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6dabd947-ddab-4fdb-9e78-cb27f3551554-web-config\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.103030 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6dabd947-ddab-4fdb-9e78-cb27f3551554-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.103065 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6dabd947-ddab-4fdb-9e78-cb27f3551554-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.103094 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/6dabd947-ddab-4fdb-9e78-cb27f3551554-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.103132 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6dabd947-ddab-4fdb-9e78-cb27f3551554-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.103167 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/6dabd947-ddab-4fdb-9e78-cb27f3551554-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.103196 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6dabd947-ddab-4fdb-9e78-cb27f3551554-config\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.103224 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6dabd947-ddab-4fdb-9e78-cb27f3551554-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.103256 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6dabd947-ddab-4fdb-9e78-cb27f3551554-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.103308 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6dabd947-ddab-4fdb-9e78-cb27f3551554-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.103341 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/6dabd947-ddab-4fdb-9e78-cb27f3551554-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.103372 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6dabd947-ddab-4fdb-9e78-cb27f3551554-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.103404 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6dabd947-ddab-4fdb-9e78-cb27f3551554-config-out\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.103440 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6dabd947-ddab-4fdb-9e78-cb27f3551554-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.205009 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-kube-etcd-client-certs\" (UniqueName: \"kubernetes.io/secret/6dabd947-ddab-4fdb-9e78-cb27f3551554-secret-kube-etcd-client-certs\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.205072 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/6dabd947-ddab-4fdb-9e78-cb27f3551554-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.205108 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-proxy\" (UniqueName: \"kubernetes.io/secret/6dabd947-ddab-4fdb-9e78-cb27f3551554-secret-prometheus-k8s-proxy\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.205145 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-vl2lv\" (UniqueName: \"kubernetes.io/projected/6dabd947-ddab-4fdb-9e78-cb27f3551554-kube-api-access-vl2lv\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.205179 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6dabd947-ddab-4fdb-9e78-cb27f3551554-web-config\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.205213 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6dabd947-ddab-4fdb-9e78-cb27f3551554-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.205244 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/6dabd947-ddab-4fdb-9e78-cb27f3551554-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.205277 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6dabd947-ddab-4fdb-9e78-cb27f3551554-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.205329 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6dabd947-ddab-4fdb-9e78-cb27f3551554-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.205366 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/6dabd947-ddab-4fdb-9e78-cb27f3551554-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.205399 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6dabd947-ddab-4fdb-9e78-cb27f3551554-config\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.205426 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6dabd947-ddab-4fdb-9e78-cb27f3551554-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.205461 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6dabd947-ddab-4fdb-9e78-cb27f3551554-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.205493 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6dabd947-ddab-4fdb-9e78-cb27f3551554-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.205523 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/6dabd947-ddab-4fdb-9e78-cb27f3551554-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.205553 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6dabd947-ddab-4fdb-9e78-cb27f3551554-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.205580 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6dabd947-ddab-4fdb-9e78-cb27f3551554-config-out\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.205612 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6dabd947-ddab-4fdb-9e78-cb27f3551554-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.205644 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6dabd947-ddab-4fdb-9e78-cb27f3551554-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.205790 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/6dabd947-ddab-4fdb-9e78-cb27f3551554-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.205968 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6dabd947-ddab-4fdb-9e78-cb27f3551554-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.206350 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6dabd947-ddab-4fdb-9e78-cb27f3551554-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.206742 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6dabd947-ddab-4fdb-9e78-cb27f3551554-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.207221 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6dabd947-ddab-4fdb-9e78-cb27f3551554-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.207415 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6dabd947-ddab-4fdb-9e78-cb27f3551554-config-out\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.207734 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6dabd947-ddab-4fdb-9e78-cb27f3551554-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.209675 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6dabd947-ddab-4fdb-9e78-cb27f3551554-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.212718 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/6dabd947-ddab-4fdb-9e78-cb27f3551554-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.214318 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-kube-etcd-client-certs\" (UniqueName: \"kubernetes.io/secret/6dabd947-ddab-4fdb-9e78-cb27f3551554-secret-kube-etcd-client-certs\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.223938 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6dabd947-ddab-4fdb-9e78-cb27f3551554-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.226608 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-proxy\" (UniqueName: \"kubernetes.io/secret/6dabd947-ddab-4fdb-9e78-cb27f3551554-secret-prometheus-k8s-proxy\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.229767 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6dabd947-ddab-4fdb-9e78-cb27f3551554-web-config\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.232003 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6dabd947-ddab-4fdb-9e78-cb27f3551554-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.233856 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/6dabd947-ddab-4fdb-9e78-cb27f3551554-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.235543 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6dabd947-ddab-4fdb-9e78-cb27f3551554-config\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.237000 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6dabd947-ddab-4fdb-9e78-cb27f3551554-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.238596 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/6dabd947-ddab-4fdb-9e78-cb27f3551554-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.240369 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-vl2lv\" (UniqueName: \"kubernetes.io/projected/6dabd947-ddab-4fdb-9e78-cb27f3551554-kube-api-access-vl2lv\") pod \"prometheus-k8s-0\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.276478 2125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:44:02 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:02.277032125Z" level=info msg="Running pod sandbox: openshift-monitoring/prometheus-k8s-0/POD" id=ecd01ff0-0686-48b5-bed7-c8c74c126a7e name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 15:44:02 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:02.277093864Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 15:44:02 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:02.657329877Z" level=info msg="Got pod network &{Name:prometheus-k8s-0 Namespace:openshift-monitoring ID:e274954033b5b3194b29410871984fb0b7367e347a5b59b7f27389c2a0694918 UID:6dabd947-ddab-4fdb-9e78-cb27f3551554 NetNS:/var/run/netns/f75b780b-87af-4e85-b9a2-72004867fd14 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 15:44:02 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:02.657360799Z" level=info msg="Adding pod openshift-monitoring_prometheus-k8s-0 to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 15:44:02 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:02.685216105Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fb9be9744b6ce1f692325241b066184282ea7ee06dbd191a0f19b55b0ad8736" id=c5f804a6-c0cd-4df3-95f2-4f946d4afcef name=/runtime.v1.ImageService/PullImage
Feb 23 15:44:02 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:02.689070683Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fb9be9744b6ce1f692325241b066184282ea7ee06dbd191a0f19b55b0ad8736" id=27a9844d-50ae-4971-83d6-f2595a0704ee name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:44:02 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:02.690101422Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ee2df3b6c5f959807b3fab8b0b30c981e2f43ef273dfbbbf5bb9a469aeeb3d8d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fb9be9744b6ce1f692325241b066184282ea7ee06dbd191a0f19b55b0ad8736],Size_:367066685,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=27a9844d-50ae-4971-83d6-f2595a0704ee name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:44:02 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:02.691129175Z" level=info msg="Creating container: openshift-monitoring/alertmanager-main-0/alertmanager" id=341d6c63-1826-4e20-9f3e-731758457b2f name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:44:02 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:02.691234601Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 15:44:02 ip-10-0-136-68 systemd[1]: Started crio-conmon-52c92d084eb346ba5baa43a2f9bdb0d4783fad46439e46e4ffcd6604d0a5e68c.scope.
Feb 23 15:44:02 ip-10-0-136-68 systemd[1]: Started libcontainer container 52c92d084eb346ba5baa43a2f9bdb0d4783fad46439e46e4ffcd6604d0a5e68c.
Feb 23 15:44:02 ip-10-0-136-68 systemd-udevd[5737]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 23 15:44:02 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): e274954033b5b31: link is not ready Feb 23 15:44:02 ip-10-0-136-68 systemd-udevd[5737]: Could not generate persistent MAC address for e274954033b5b31: No such file or directory Feb 23 15:44:02 ip-10-0-136-68 NetworkManager[1149]: [1677167042.8093] manager: (e274954033b5b31): new Veth device (/org/freedesktop/NetworkManager/Devices/40) Feb 23 15:44:02 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready Feb 23 15:44:02 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 23 15:44:02 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): e274954033b5b31: link becomes ready Feb 23 15:44:02 ip-10-0-136-68 NetworkManager[1149]: [1677167042.8116] device (e274954033b5b31): carrier: link connected Feb 23 15:44:02 ip-10-0-136-68 NetworkManager[1149]: [1677167042.8345] manager: (e274954033b5b31): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41) Feb 23 15:44:02 ip-10-0-136-68 kernel: device e274954033b5b31 entered promiscuous mode Feb 23 15:44:02 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00101|bridge|INFO|bridge br-int: added interface e274954033b5b31 on port 17 Feb 23 15:44:02 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:02.852830662Z" level=info msg="Created container 52c92d084eb346ba5baa43a2f9bdb0d4783fad46439e46e4ffcd6604d0a5e68c: openshift-monitoring/alertmanager-main-0/alertmanager" id=341d6c63-1826-4e20-9f3e-731758457b2f name=/runtime.v1.RuntimeService/CreateContainer Feb 23 15:44:02 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:02.854177979Z" level=info msg="Starting container: 52c92d084eb346ba5baa43a2f9bdb0d4783fad46439e46e4ffcd6604d0a5e68c" id=cea03e08-4cd8-4a23-8760-3503cceb90df name=/runtime.v1.RuntimeService/StartContainer Feb 23 15:44:02 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:02.876111973Z" level=info msg="Started container" PID=5729 
containerID=52c92d084eb346ba5baa43a2f9bdb0d4783fad46439e46e4ffcd6604d0a5e68c description=openshift-monitoring/alertmanager-main-0/alertmanager id=cea03e08-4cd8-4a23-8760-3503cceb90df name=/runtime.v1.RuntimeService/StartContainer sandboxID=fefe52f4c0671eab0be744708d6f2fdf3974dd556275bbdf477c3bb9235603be Feb 23 15:44:02 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:02.887795899Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:54d3aea3fb46824c7af49ff90c30c44c5409b64745f649c1e19f43c58dab6008" id=c7942994-e650-4d2c-b118-14a2940c3569 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:44:02 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:02.887985141Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:54d3aea3fb46824c7af49ff90c30c44c5409b64745f649c1e19f43c58dab6008 not found" id=c7942994-e650-4d2c-b118-14a2940c3569 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:44:02 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:02.888681036Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:54d3aea3fb46824c7af49ff90c30c44c5409b64745f649c1e19f43c58dab6008" id=28e5000a-8abb-44bc-89c1-2a33bcc17fdc name=/runtime.v1.ImageService/PullImage Feb 23 15:44:02 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:02.889490410Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:54d3aea3fb46824c7af49ff90c30c44c5409b64745f649c1e19f43c58dab6008\"" Feb 23 15:44:02 ip-10-0-136-68 crio[2086]: I0223 15:44:02.806438 5707 ovs.go:90] Maximum command line arguments set to: 191102 Feb 23 15:44:02 ip-10-0-136-68 crio[2086]: 2023-02-23T15:44:02Z [verbose] Add: openshift-monitoring:prometheus-k8s-0:6dabd947-ddab-4fdb-9e78-cb27f3551554:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"e274954033b5b31","mac":"72:15:97:80:bf:41"},{"name":"eth0","mac":"0a:58:0a:81:02:0c","sandbox":"/var/run/netns/f75b780b-87af-4e85-b9a2-72004867fd14"}],"ips":[{"version":"4","interface":1,"address":"10.129.2.12/23","gateway":"10.129.2.1"}],"dns":{}} Feb 23 15:44:02 ip-10-0-136-68 crio[2086]: I0223 15:44:02.877768 5698 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-monitoring", Name:"prometheus-k8s-0", UID:"6dabd947-ddab-4fdb-9e78-cb27f3551554", APIVersion:"v1", ResourceVersion:"24103", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.129.2.12/23] from ovn-kubernetes Feb 23 15:44:02 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:02.904893457Z" level=info msg="Got pod network &{Name:prometheus-k8s-0 Namespace:openshift-monitoring ID:e274954033b5b3194b29410871984fb0b7367e347a5b59b7f27389c2a0694918 UID:6dabd947-ddab-4fdb-9e78-cb27f3551554 NetNS:/var/run/netns/f75b780b-87af-4e85-b9a2-72004867fd14 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 15:44:02 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:02.905030185Z" level=info msg="Checking pod openshift-monitoring_prometheus-k8s-0 for CNI network multus-cni-network (type=multus)" Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:02.905065 2125 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-monitoring/prometheus-k8s-0] Feb 23 15:44:02 ip-10-0-136-68 kubenswrapper[2125]: W0223 15:44:02.906737 2125 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6dabd947_ddab_4fdb_9e78_cb27f3551554.slice/crio-e274954033b5b3194b29410871984fb0b7367e347a5b59b7f27389c2a0694918.scope WatchSource:0}: Error finding container e274954033b5b3194b29410871984fb0b7367e347a5b59b7f27389c2a0694918: Status 404 returned error can't find the container with id 
e274954033b5b3194b29410871984fb0b7367e347a5b59b7f27389c2a0694918 Feb 23 15:44:02 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:02.908747446Z" level=info msg="Ran pod sandbox e274954033b5b3194b29410871984fb0b7367e347a5b59b7f27389c2a0694918 with infra container: openshift-monitoring/prometheus-k8s-0/POD" id=ecd01ff0-0686-48b5-bed7-c8c74c126a7e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 15:44:02 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:02.909845734Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:54d3aea3fb46824c7af49ff90c30c44c5409b64745f649c1e19f43c58dab6008" id=c6ea2f6a-5550-42cc-abbc-b5ac83c2d3a3 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:44:02 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:02.909996159Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:54d3aea3fb46824c7af49ff90c30c44c5409b64745f649c1e19f43c58dab6008 not found" id=c6ea2f6a-5550-42cc-abbc-b5ac83c2d3a3 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:44:02 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:02.910482074Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:54d3aea3fb46824c7af49ff90c30c44c5409b64745f649c1e19f43c58dab6008" id=d6dd277a-0820-47a5-a0f3-528d5b750569 name=/runtime.v1.ImageService/PullImage Feb 23 15:44:02 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:02.911666340Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:54d3aea3fb46824c7af49ff90c30c44c5409b64745f649c1e19f43c58dab6008\"" Feb 23 15:44:03 ip-10-0-136-68 conmon[5714]: conmon 52c92d084eb346ba5baa : container 5729 exited with status 1 Feb 23 15:44:03 ip-10-0-136-68 systemd[1]: crio-52c92d084eb346ba5baa43a2f9bdb0d4783fad46439e46e4ffcd6604d0a5e68c.scope: Succeeded. 
Feb 23 15:44:03 ip-10-0-136-68 systemd[1]: crio-52c92d084eb346ba5baa43a2f9bdb0d4783fad46439e46e4ffcd6604d0a5e68c.scope: Consumed 91ms CPU time Feb 23 15:44:03 ip-10-0-136-68 systemd[1]: crio-conmon-52c92d084eb346ba5baa43a2f9bdb0d4783fad46439e46e4ffcd6604d0a5e68c.scope: Succeeded. Feb 23 15:44:03 ip-10-0-136-68 systemd[1]: crio-conmon-52c92d084eb346ba5baa43a2f9bdb0d4783fad46439e46e4ffcd6604d0a5e68c.scope: Consumed 21ms CPU time Feb 23 15:44:03 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:03.146758 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event=&{ID:6dabd947-ddab-4fdb-9e78-cb27f3551554 Type:ContainerStarted Data:e274954033b5b3194b29410871984fb0b7367e347a5b59b7f27389c2a0694918} Feb 23 15:44:03 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:03.147587 2125 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_3ac4a081-240b-441a-af97-d682fecb3ae7/alertmanager/0.log" Feb 23 15:44:03 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:03.147633 2125 generic.go:296] "Generic (PLEG): container finished" podID=3ac4a081-240b-441a-af97-d682fecb3ae7 containerID="52c92d084eb346ba5baa43a2f9bdb0d4783fad46439e46e4ffcd6604d0a5e68c" exitCode=1 Feb 23 15:44:03 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:03.147654 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event=&{ID:3ac4a081-240b-441a-af97-d682fecb3ae7 Type:ContainerDied Data:52c92d084eb346ba5baa43a2f9bdb0d4783fad46439e46e4ffcd6604d0a5e68c} Feb 23 15:44:03 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:03.458244931Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:927fcfe37d62457bfd0932c16a51342e303d9f92d19244b389bb08c30b1b5bec" id=aa68a32e-d766-4f16-81a8-ec2ba54de84b name=/runtime.v1.ImageService/PullImage Feb 23 15:44:03 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:03.458985238Z" level=info msg="Checking image status: 
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:927fcfe37d62457bfd0932c16a51342e303d9f92d19244b389bb08c30b1b5bec" id=03e0b99d-b1f1-4cb7-9d9e-71580bf272b0 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:44:03 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:03.459866125Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:7e0949c572f36eadc2058a4a75e85ef222e1a401c4ecc7fd34e193cad494cab5,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:927fcfe37d62457bfd0932c16a51342e303d9f92d19244b389bb08c30b1b5bec],Size_:426731013,Uid:nil,Username:nobody,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=03e0b99d-b1f1-4cb7-9d9e-71580bf272b0 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:44:03 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:03.460569330Z" level=info msg="Creating container: openshift-monitoring/thanos-querier-8654d9f96d-892l6/thanos-query" id=5d6f1455-9d27-4e0f-b6eb-5f0559bdfb25 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 15:44:03 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:03.460645179Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 15:44:03 ip-10-0-136-68 systemd[1]: Started crio-conmon-9a1b60140efe12b393780f2fd66235c0cf6be125b52b60b9605fbccb504832c8.scope. Feb 23 15:44:03 ip-10-0-136-68 systemd[1]: run-runc-9a1b60140efe12b393780f2fd66235c0cf6be125b52b60b9605fbccb504832c8-runc.vhXPje.mount: Succeeded. Feb 23 15:44:03 ip-10-0-136-68 systemd[1]: Started libcontainer container 9a1b60140efe12b393780f2fd66235c0cf6be125b52b60b9605fbccb504832c8. 
Feb 23 15:44:03 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:03.574687583Z" level=info msg="Created container 9a1b60140efe12b393780f2fd66235c0cf6be125b52b60b9605fbccb504832c8: openshift-monitoring/thanos-querier-8654d9f96d-892l6/thanos-query" id=5d6f1455-9d27-4e0f-b6eb-5f0559bdfb25 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 15:44:03 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:03.575126887Z" level=info msg="Starting container: 9a1b60140efe12b393780f2fd66235c0cf6be125b52b60b9605fbccb504832c8" id=ddbc56ff-759d-4d3a-86a3-c1db6d8bf8d4 name=/runtime.v1.RuntimeService/StartContainer Feb 23 15:44:03 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:03.581668009Z" level=info msg="Started container" PID=5820 containerID=9a1b60140efe12b393780f2fd66235c0cf6be125b52b60b9605fbccb504832c8 description=openshift-monitoring/thanos-querier-8654d9f96d-892l6/thanos-query id=ddbc56ff-759d-4d3a-86a3-c1db6d8bf8d4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8a6086c30905e9778328b4943c655c0e9e8c2c54372aff32cd2923ed14af9ed1 Feb 23 15:44:03 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:03.591189915Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce1de860b639d4831bb74b0271aed6882b45febb37f60bcf8a31157b87ce0495" id=4042718c-4588-4f9e-a063-3037fdd3782a name=/runtime.v1.ImageService/ImageStatus Feb 23 15:44:03 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:03.591389103Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:d62409a7cb62709d4c960d445d571fc81522092e7114db20e1ad84bf57c96f31,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce1de860b639d4831bb74b0271aed6882b45febb37f60bcf8a31157b87ce0495],Size_:353087996,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=4042718c-4588-4f9e-a063-3037fdd3782a name=/runtime.v1.ImageService/ImageStatus Feb 23 15:44:03 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:03.592879480Z" level=info 
msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce1de860b639d4831bb74b0271aed6882b45febb37f60bcf8a31157b87ce0495" id=54bffb97-e07a-4c4f-bb71-556efa084502 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:44:03 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:03.593035613Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:d62409a7cb62709d4c960d445d571fc81522092e7114db20e1ad84bf57c96f31,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce1de860b639d4831bb74b0271aed6882b45febb37f60bcf8a31157b87ce0495],Size_:353087996,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=54bffb97-e07a-4c4f-bb71-556efa084502 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:44:03 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:03.593739449Z" level=info msg="Creating container: openshift-monitoring/thanos-querier-8654d9f96d-892l6/oauth-proxy" id=7ed95812-a954-4126-99d0-810d8c6dca6d name=/runtime.v1.RuntimeService/CreateContainer Feb 23 15:44:03 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:03.593846601Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 15:44:03 ip-10-0-136-68 systemd[1]: Started crio-conmon-bbf1306bcd9ff05a37236005b802aa46a8396168bcd305011cac3c3232d499a5.scope. Feb 23 15:44:03 ip-10-0-136-68 systemd[1]: Started libcontainer container bbf1306bcd9ff05a37236005b802aa46a8396168bcd305011cac3c3232d499a5. 
Feb 23 15:44:03 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:03.712563715Z" level=info msg="Created container bbf1306bcd9ff05a37236005b802aa46a8396168bcd305011cac3c3232d499a5: openshift-monitoring/thanos-querier-8654d9f96d-892l6/oauth-proxy" id=7ed95812-a954-4126-99d0-810d8c6dca6d name=/runtime.v1.RuntimeService/CreateContainer Feb 23 15:44:03 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:03.713004840Z" level=info msg="Starting container: bbf1306bcd9ff05a37236005b802aa46a8396168bcd305011cac3c3232d499a5" id=27a83002-9139-4c5e-a50c-e80f555c95eb name=/runtime.v1.RuntimeService/StartContainer Feb 23 15:44:03 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:03.719224994Z" level=info msg="Started container" PID=5869 containerID=bbf1306bcd9ff05a37236005b802aa46a8396168bcd305011cac3c3232d499a5 description=openshift-monitoring/thanos-querier-8654d9f96d-892l6/oauth-proxy id=27a83002-9139-4c5e-a50c-e80f555c95eb name=/runtime.v1.RuntimeService/StartContainer sandboxID=8a6086c30905e9778328b4943c655c0e9e8c2c54372aff32cd2923ed14af9ed1 Feb 23 15:44:03 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:03.729801941Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=0129afbf-90a5-4aff-9c69-759a26217d2b name=/runtime.v1.ImageService/ImageStatus Feb 23 15:44:03 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:03.729963023Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=0129afbf-90a5-4aff-9c69-759a26217d2b name=/runtime.v1.ImageService/ImageStatus Feb 23 15:44:03 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:03.730712895Z" 
level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=e15ef2dc-6969-4464-ac5d-7db260ab2f5d name=/runtime.v1.ImageService/ImageStatus Feb 23 15:44:03 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:03.730846216Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=e15ef2dc-6969-4464-ac5d-7db260ab2f5d name=/runtime.v1.ImageService/ImageStatus Feb 23 15:44:03 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:03.731601075Z" level=info msg="Creating container: openshift-monitoring/thanos-querier-8654d9f96d-892l6/kube-rbac-proxy" id=e5f08eb9-3248-4b63-a1ab-2ffb4d9ae729 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 15:44:03 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:03.731693709Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 15:44:03 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:03.743083907Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:54d3aea3fb46824c7af49ff90c30c44c5409b64745f649c1e19f43c58dab6008\"" Feb 23 15:44:03 ip-10-0-136-68 systemd[1]: Started crio-conmon-d461b2a5bdb94e8630b89b525f3f963207c52b64660501567576486b96284064.scope. Feb 23 15:44:03 ip-10-0-136-68 systemd[1]: Started libcontainer container d461b2a5bdb94e8630b89b525f3f963207c52b64660501567576486b96284064. 
Feb 23 15:44:03 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:03.790330876Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:54d3aea3fb46824c7af49ff90c30c44c5409b64745f649c1e19f43c58dab6008\"" Feb 23 15:44:03 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:03.863223349Z" level=info msg="Created container d461b2a5bdb94e8630b89b525f3f963207c52b64660501567576486b96284064: openshift-monitoring/thanos-querier-8654d9f96d-892l6/kube-rbac-proxy" id=e5f08eb9-3248-4b63-a1ab-2ffb4d9ae729 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 15:44:03 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:03.863760165Z" level=info msg="Starting container: d461b2a5bdb94e8630b89b525f3f963207c52b64660501567576486b96284064" id=cf49f220-6892-4a06-bf3b-eecd706fe529 name=/runtime.v1.RuntimeService/StartContainer Feb 23 15:44:03 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:03.874802550Z" level=info msg="Started container" PID=5913 containerID=d461b2a5bdb94e8630b89b525f3f963207c52b64660501567576486b96284064 description=openshift-monitoring/thanos-querier-8654d9f96d-892l6/kube-rbac-proxy id=cf49f220-6892-4a06-bf3b-eecd706fe529 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8a6086c30905e9778328b4943c655c0e9e8c2c54372aff32cd2923ed14af9ed1 Feb 23 15:44:03 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:03.893137796Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1ac41d1f9f68b478647df3bed79ae2cd3ca1e8fb93e31de0fb82c2c9e6ff6fed" id=aa066523-6b48-4fde-9b6c-68ff6ff325b2 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:44:03 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:03.893349103Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1ac41d1f9f68b478647df3bed79ae2cd3ca1e8fb93e31de0fb82c2c9e6ff6fed not found" id=aa066523-6b48-4fde-9b6c-68ff6ff325b2 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:44:03 ip-10-0-136-68 crio[2086]: time="2023-02-23 
15:44:03.894836313Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1ac41d1f9f68b478647df3bed79ae2cd3ca1e8fb93e31de0fb82c2c9e6ff6fed" id=aa6c84bc-87e7-498d-9762-40aa6f6a8cee name=/runtime.v1.ImageService/PullImage Feb 23 15:44:03 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:03.895781222Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1ac41d1f9f68b478647df3bed79ae2cd3ca1e8fb93e31de0fb82c2c9e6ff6fed\"" Feb 23 15:44:03 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:03.993859928Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:54d3aea3fb46824c7af49ff90c30c44c5409b64745f649c1e19f43c58dab6008" id=d1f5d9dd-e7cc-482e-beec-f1234bdd7483 name=/runtime.v1.ImageService/PullImage Feb 23 15:44:03 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:03.995494084Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:54d3aea3fb46824c7af49ff90c30c44c5409b64745f649c1e19f43c58dab6008" id=e2dc5593-51d9-40fd-ba12-b07b4cff2a21 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:44:03 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:03.996463894Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ba9942c6b8556723b2ced172e5fb9c827a5aacee27aa66e7952a42d2b636240d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:54d3aea3fb46824c7af49ff90c30c44c5409b64745f649c1e19f43c58dab6008],Size_:359319908,Uid:&Int64Value{Value:1001,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=e2dc5593-51d9-40fd-ba12-b07b4cff2a21 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:44:03 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:03.997081066Z" level=info msg="Creating container: openshift-monitoring/telemeter-client-675d948766-44b26/reload" id=13f1a413-1a05-4baa-8909-ce6fd09462fd name=/runtime.v1.RuntimeService/CreateContainer Feb 23 15:44:03 ip-10-0-136-68 crio[2086]: time="2023-02-23 
15:44:03.997157154Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 15:44:04 ip-10-0-136-68 systemd[1]: Started crio-conmon-29314d4a1c76968c319f38a1e8ff6536eb151d6c7d7ebcba484834db4b4f1152.scope. Feb 23 15:44:04 ip-10-0-136-68 systemd[1]: Started libcontainer container 29314d4a1c76968c319f38a1e8ff6536eb151d6c7d7ebcba484834db4b4f1152. Feb 23 15:44:04 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:04.150587 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8654d9f96d-892l6" event=&{ID:b64af5e5-e41c-4886-a88b-39556a3f4b21 Type:ContainerStarted Data:d461b2a5bdb94e8630b89b525f3f963207c52b64660501567576486b96284064} Feb 23 15:44:04 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:04.150623 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8654d9f96d-892l6" event=&{ID:b64af5e5-e41c-4886-a88b-39556a3f4b21 Type:ContainerStarted Data:bbf1306bcd9ff05a37236005b802aa46a8396168bcd305011cac3c3232d499a5} Feb 23 15:44:04 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:04.150635 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8654d9f96d-892l6" event=&{ID:b64af5e5-e41c-4886-a88b-39556a3f4b21 Type:ContainerStarted Data:9a1b60140efe12b393780f2fd66235c0cf6be125b52b60b9605fbccb504832c8} Feb 23 15:44:04 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:04.254241434Z" level=info msg="Created container 29314d4a1c76968c319f38a1e8ff6536eb151d6c7d7ebcba484834db4b4f1152: openshift-monitoring/telemeter-client-675d948766-44b26/reload" id=13f1a413-1a05-4baa-8909-ce6fd09462fd name=/runtime.v1.RuntimeService/CreateContainer Feb 23 15:44:04 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:04.254705393Z" level=info msg="Starting container: 29314d4a1c76968c319f38a1e8ff6536eb151d6c7d7ebcba484834db4b4f1152" id=a5c1b63b-d790-4e2b-b0c2-fb56f9362ec4 name=/runtime.v1.RuntimeService/StartContainer Feb 23 15:44:04 
ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:04.261265184Z" level=info msg="Started container" PID=5964 containerID=29314d4a1c76968c319f38a1e8ff6536eb151d6c7d7ebcba484834db4b4f1152 description=openshift-monitoring/telemeter-client-675d948766-44b26/reload id=a5c1b63b-d790-4e2b-b0c2-fb56f9362ec4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=371d339c2a21dacabc72f32475cd502a6c85f1c6b15537476830da5f7871b8c6 Feb 23 15:44:04 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:04.269313759Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=c112a954-a4ee-4832-af98-0894bde33733 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:44:04 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:04.269465154Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=c112a954-a4ee-4832-af98-0894bde33733 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:44:04 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:04.269963170Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=36309b19-5937-4e09-a23b-da29d2cf4a0c name=/runtime.v1.ImageService/ImageStatus Feb 23 15:44:04 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:04.270078123Z" level=info msg="Image status: 
&ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=36309b19-5937-4e09-a23b-da29d2cf4a0c name=/runtime.v1.ImageService/ImageStatus Feb 23 15:44:04 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:04.270765643Z" level=info msg="Creating container: openshift-monitoring/telemeter-client-675d948766-44b26/kube-rbac-proxy" id=613478d0-98cc-45bd-8d5b-7a706441c508 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 15:44:04 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:04.270868911Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 15:44:04 ip-10-0-136-68 systemd[1]: Started crio-conmon-94bdf33dfcce86819453e831fc906daae77b4db41db2f8adcdcf6512b546c23e.scope. Feb 23 15:44:04 ip-10-0-136-68 systemd[1]: run-runc-94bdf33dfcce86819453e831fc906daae77b4db41db2f8adcdcf6512b546c23e-runc.LREhK2.mount: Succeeded. Feb 23 15:44:04 ip-10-0-136-68 systemd[1]: Started libcontainer container 94bdf33dfcce86819453e831fc906daae77b4db41db2f8adcdcf6512b546c23e. 
Feb 23 15:44:04 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:04.382071875Z" level=info msg="Created container 94bdf33dfcce86819453e831fc906daae77b4db41db2f8adcdcf6512b546c23e: openshift-monitoring/telemeter-client-675d948766-44b26/kube-rbac-proxy" id=613478d0-98cc-45bd-8d5b-7a706441c508 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 15:44:04 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:04.382481487Z" level=info msg="Starting container: 94bdf33dfcce86819453e831fc906daae77b4db41db2f8adcdcf6512b546c23e" id=24c76813-8e19-4508-9dad-cdb341499f4c name=/runtime.v1.RuntimeService/StartContainer Feb 23 15:44:04 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:04.388245720Z" level=info msg="Started container" PID=6012 containerID=94bdf33dfcce86819453e831fc906daae77b4db41db2f8adcdcf6512b546c23e description=openshift-monitoring/telemeter-client-675d948766-44b26/kube-rbac-proxy id=24c76813-8e19-4508-9dad-cdb341499f4c name=/runtime.v1.RuntimeService/StartContainer sandboxID=371d339c2a21dacabc72f32475cd502a6c85f1c6b15537476830da5f7871b8c6 Feb 23 15:44:04 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:04.759773337Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1ac41d1f9f68b478647df3bed79ae2cd3ca1e8fb93e31de0fb82c2c9e6ff6fed\"" Feb 23 15:44:04 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:04.945726680Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:54d3aea3fb46824c7af49ff90c30c44c5409b64745f649c1e19f43c58dab6008" id=28e5000a-8abb-44bc-89c1-2a33bcc17fdc name=/runtime.v1.ImageService/PullImage Feb 23 15:44:04 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:04.947211790Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:54d3aea3fb46824c7af49ff90c30c44c5409b64745f649c1e19f43c58dab6008" id=e7a14557-e80a-46b5-a955-9f66bf7cd11d name=/runtime.v1.ImageService/ImageStatus Feb 23 15:44:04 ip-10-0-136-68 crio[2086]: time="2023-02-23 
15:44:04.948194845Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ba9942c6b8556723b2ced172e5fb9c827a5aacee27aa66e7952a42d2b636240d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:54d3aea3fb46824c7af49ff90c30c44c5409b64745f649c1e19f43c58dab6008],Size_:359319908,Uid:&Int64Value{Value:1001,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=e7a14557-e80a-46b5-a955-9f66bf7cd11d name=/runtime.v1.ImageService/ImageStatus Feb 23 15:44:04 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:04.948929693Z" level=info msg="Creating container: openshift-monitoring/alertmanager-main-0/config-reloader" id=a3821365-d4f9-43d4-b51f-59a9a57ecbcc name=/runtime.v1.RuntimeService/CreateContainer Feb 23 15:44:04 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:04.949025192Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 15:44:04 ip-10-0-136-68 systemd[1]: Started crio-conmon-9809a1abde73c559475a32ef16160b19cdd5785df6dfc27074a6164722c6b565.scope. 
Feb 23 15:44:04 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:04.971528633Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:54d3aea3fb46824c7af49ff90c30c44c5409b64745f649c1e19f43c58dab6008" id=d6dd277a-0820-47a5-a0f3-528d5b750569 name=/runtime.v1.ImageService/PullImage Feb 23 15:44:04 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:04.972030052Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:54d3aea3fb46824c7af49ff90c30c44c5409b64745f649c1e19f43c58dab6008" id=4434069c-bfff-41fe-b518-06f8687ab113 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:44:04 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:04.972979052Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ba9942c6b8556723b2ced172e5fb9c827a5aacee27aa66e7952a42d2b636240d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:54d3aea3fb46824c7af49ff90c30c44c5409b64745f649c1e19f43c58dab6008],Size_:359319908,Uid:&Int64Value{Value:1001,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=4434069c-bfff-41fe-b518-06f8687ab113 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:44:04 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:04.973637852Z" level=info msg="Creating container: openshift-monitoring/prometheus-k8s-0/init-config-reloader" id=38b170d8-cfe7-4780-855e-c57d6d896f26 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 15:44:04 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:04.973723765Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 15:44:04 ip-10-0-136-68 systemd[1]: Started libcontainer container 9809a1abde73c559475a32ef16160b19cdd5785df6dfc27074a6164722c6b565. Feb 23 15:44:04 ip-10-0-136-68 systemd[1]: Started crio-conmon-57b61bc35883c39764a71a9a8c76d49d56fdffb4a02ee8be50a147d22a684adb.scope. 
Feb 23 15:44:05 ip-10-0-136-68 systemd[1]: Started libcontainer container 57b61bc35883c39764a71a9a8c76d49d56fdffb4a02ee8be50a147d22a684adb.
Feb 23 15:44:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:05.056473851Z" level=info msg="Created container 9809a1abde73c559475a32ef16160b19cdd5785df6dfc27074a6164722c6b565: openshift-monitoring/alertmanager-main-0/config-reloader" id=a3821365-d4f9-43d4-b51f-59a9a57ecbcc name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:44:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:05.056842628Z" level=info msg="Starting container: 9809a1abde73c559475a32ef16160b19cdd5785df6dfc27074a6164722c6b565" id=7a45127c-bd96-4906-a79e-3820d51e657b name=/runtime.v1.RuntimeService/StartContainer
Feb 23 15:44:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:05.062216047Z" level=info msg="Started container" PID=6074 containerID=9809a1abde73c559475a32ef16160b19cdd5785df6dfc27074a6164722c6b565 description=openshift-monitoring/alertmanager-main-0/config-reloader id=7a45127c-bd96-4906-a79e-3820d51e657b name=/runtime.v1.RuntimeService/StartContainer sandboxID=fefe52f4c0671eab0be744708d6f2fdf3974dd556275bbdf477c3bb9235603be
Feb 23 15:44:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:05.069657298Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce1de860b639d4831bb74b0271aed6882b45febb37f60bcf8a31157b87ce0495" id=d81058d8-ea39-4209-ac87-3612d9616a5a name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:44:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:05.069822561Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:d62409a7cb62709d4c960d445d571fc81522092e7114db20e1ad84bf57c96f31,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce1de860b639d4831bb74b0271aed6882b45febb37f60bcf8a31157b87ce0495],Size_:353087996,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=d81058d8-ea39-4209-ac87-3612d9616a5a name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:44:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:05.070388969Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce1de860b639d4831bb74b0271aed6882b45febb37f60bcf8a31157b87ce0495" id=af831573-bfe2-43f5-829b-cd97ad8f4979 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:44:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:05.070541868Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:d62409a7cb62709d4c960d445d571fc81522092e7114db20e1ad84bf57c96f31,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce1de860b639d4831bb74b0271aed6882b45febb37f60bcf8a31157b87ce0495],Size_:353087996,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=af831573-bfe2-43f5-829b-cd97ad8f4979 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:44:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:05.071260418Z" level=info msg="Creating container: openshift-monitoring/alertmanager-main-0/alertmanager-proxy" id=d23c3c58-1d5a-49a7-8abd-64967be44510 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:44:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:05.071370883Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 15:44:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:05.080071009Z" level=info msg="Created container 57b61bc35883c39764a71a9a8c76d49d56fdffb4a02ee8be50a147d22a684adb: openshift-monitoring/prometheus-k8s-0/init-config-reloader" id=38b170d8-cfe7-4780-855e-c57d6d896f26 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:44:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:05.080556270Z" level=info msg="Starting container: 57b61bc35883c39764a71a9a8c76d49d56fdffb4a02ee8be50a147d22a684adb" id=cd0b1723-9d6f-496d-bda7-74fab746a232 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 15:44:05 ip-10-0-136-68 systemd[1]: Started crio-conmon-576dacf73a229c9814c676f2be48e1a950d9aa46e191775dac565c11c3d5ef36.scope.
Feb 23 15:44:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:05.093161094Z" level=info msg="Started container" PID=6093 containerID=57b61bc35883c39764a71a9a8c76d49d56fdffb4a02ee8be50a147d22a684adb description=openshift-monitoring/prometheus-k8s-0/init-config-reloader id=cd0b1723-9d6f-496d-bda7-74fab746a232 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e274954033b5b3194b29410871984fb0b7367e347a5b59b7f27389c2a0694918
Feb 23 15:44:05 ip-10-0-136-68 systemd[1]: Started libcontainer container 576dacf73a229c9814c676f2be48e1a950d9aa46e191775dac565c11c3d5ef36.
Feb 23 15:44:05 ip-10-0-136-68 systemd[1]: crio-57b61bc35883c39764a71a9a8c76d49d56fdffb4a02ee8be50a147d22a684adb.scope: Succeeded.
Feb 23 15:44:05 ip-10-0-136-68 systemd[1]: crio-57b61bc35883c39764a71a9a8c76d49d56fdffb4a02ee8be50a147d22a684adb.scope: Consumed 23ms CPU time
Feb 23 15:44:05 ip-10-0-136-68 systemd[1]: crio-conmon-57b61bc35883c39764a71a9a8c76d49d56fdffb4a02ee8be50a147d22a684adb.scope: Succeeded.
Feb 23 15:44:05 ip-10-0-136-68 systemd[1]: crio-conmon-57b61bc35883c39764a71a9a8c76d49d56fdffb4a02ee8be50a147d22a684adb.scope: Consumed 19ms CPU time
Feb 23 15:44:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:05.153603 2125 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_3ac4a081-240b-441a-af97-d682fecb3ae7/alertmanager/0.log"
Feb 23 15:44:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:05.153682 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event=&{ID:3ac4a081-240b-441a-af97-d682fecb3ae7 Type:ContainerStarted Data:9809a1abde73c559475a32ef16160b19cdd5785df6dfc27074a6164722c6b565}
Feb 23 15:44:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:05.154886 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-675d948766-44b26" event=&{ID:38f8ec67-c68b-4783-9d06-95eb33506398 Type:ContainerStarted Data:94bdf33dfcce86819453e831fc906daae77b4db41db2f8adcdcf6512b546c23e}
Feb 23 15:44:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:05.154913 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-675d948766-44b26" event=&{ID:38f8ec67-c68b-4783-9d06-95eb33506398 Type:ContainerStarted Data:29314d4a1c76968c319f38a1e8ff6536eb151d6c7d7ebcba484834db4b4f1152}
Feb 23 15:44:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:05.155276856Z" level=info msg="Created container 576dacf73a229c9814c676f2be48e1a950d9aa46e191775dac565c11c3d5ef36: openshift-monitoring/alertmanager-main-0/alertmanager-proxy" id=d23c3c58-1d5a-49a7-8abd-64967be44510 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:44:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:05.155567188Z" level=info msg="Starting container: 576dacf73a229c9814c676f2be48e1a950d9aa46e191775dac565c11c3d5ef36" id=7e3b87dc-6bbd-4cb0-ad60-60abb568e3a2 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 15:44:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:05.155762 2125 generic.go:296] "Generic (PLEG): container finished" podID=6dabd947-ddab-4fdb-9e78-cb27f3551554 containerID="57b61bc35883c39764a71a9a8c76d49d56fdffb4a02ee8be50a147d22a684adb" exitCode=0
Feb 23 15:44:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:05.155790 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event=&{ID:6dabd947-ddab-4fdb-9e78-cb27f3551554 Type:ContainerDied Data:57b61bc35883c39764a71a9a8c76d49d56fdffb4a02ee8be50a147d22a684adb}
Feb 23 15:44:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:05.156300614Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec4febdf3d180251e9d97c5039560c32d0513739dc469fa684af2fa5fe4de434" id=ebefa7f9-1f40-4b6f-8291-b7d53261177c name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:44:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:05.156453096Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec4febdf3d180251e9d97c5039560c32d0513739dc469fa684af2fa5fe4de434 not found" id=ebefa7f9-1f40-4b6f-8291-b7d53261177c name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:44:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:05.156899340Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec4febdf3d180251e9d97c5039560c32d0513739dc469fa684af2fa5fe4de434" id=f4f5a12c-71c3-42b9-b3d9-615b145ae71c name=/runtime.v1.ImageService/PullImage
Feb 23 15:44:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:05.157763217Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec4febdf3d180251e9d97c5039560c32d0513739dc469fa684af2fa5fe4de434\""
Feb 23 15:44:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:05.161514792Z" level=info msg="Started container" PID=6158 containerID=576dacf73a229c9814c676f2be48e1a950d9aa46e191775dac565c11c3d5ef36 description=openshift-monitoring/alertmanager-main-0/alertmanager-proxy id=7e3b87dc-6bbd-4cb0-ad60-60abb568e3a2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fefe52f4c0671eab0be744708d6f2fdf3974dd556275bbdf477c3bb9235603be
Feb 23 15:44:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:05.169574834Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=93d74593-3ffe-4534-ac1d-ec8519934086 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:44:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:05.169762279Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=93d74593-3ffe-4534-ac1d-ec8519934086 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:44:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:05.170354329Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=015f8a07-004b-4db8-b782-33505ffb4066 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:44:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:05.170489090Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=015f8a07-004b-4db8-b782-33505ffb4066 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:44:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:05.171672590Z" level=info msg="Creating container: openshift-monitoring/alertmanager-main-0/kube-rbac-proxy" id=f110c6e0-4f26-4109-8f88-1db7b68d7628 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:44:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:05.171773023Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 15:44:05 ip-10-0-136-68 systemd[1]: Started crio-conmon-079cc4a1c1b5dfcefc02abb5f62bfc8df39ec7cb44dfc198aa7bd571d6a79e9c.scope.
Feb 23 15:44:05 ip-10-0-136-68 systemd[1]: Started libcontainer container 079cc4a1c1b5dfcefc02abb5f62bfc8df39ec7cb44dfc198aa7bd571d6a79e9c.
Feb 23 15:44:05 ip-10-0-136-68 systemd[1]: run-runc-434bc76ecbe42021fc40f1c0739badd563ca7f187e1c106f228683f5b566bff3-runc.TyQ1ao.mount: Succeeded.
Feb 23 15:44:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:05.265010868Z" level=info msg="Created container 079cc4a1c1b5dfcefc02abb5f62bfc8df39ec7cb44dfc198aa7bd571d6a79e9c: openshift-monitoring/alertmanager-main-0/kube-rbac-proxy" id=f110c6e0-4f26-4109-8f88-1db7b68d7628 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:44:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:05.265411414Z" level=info msg="Starting container: 079cc4a1c1b5dfcefc02abb5f62bfc8df39ec7cb44dfc198aa7bd571d6a79e9c" id=4d01dd7d-afaf-434b-a9df-0cf3c353b770 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 15:44:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:05.270901886Z" level=info msg="Started container" PID=6225 containerID=079cc4a1c1b5dfcefc02abb5f62bfc8df39ec7cb44dfc198aa7bd571d6a79e9c description=openshift-monitoring/alertmanager-main-0/kube-rbac-proxy id=4d01dd7d-afaf-434b-a9df-0cf3c353b770 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fefe52f4c0671eab0be744708d6f2fdf3974dd556275bbdf477c3bb9235603be
Feb 23 15:44:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:05.281968926Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=660cc336-d5b7-401c-96a6-473d1d110f48 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:44:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:05.282141077Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=660cc336-d5b7-401c-96a6-473d1d110f48 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:44:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:05.289042816Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=2186e8bb-24a8-454e-90a5-dc3357ad7355 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:44:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:05.289220560Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=2186e8bb-24a8-454e-90a5-dc3357ad7355 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:44:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:05.290035957Z" level=info msg="Creating container: openshift-monitoring/alertmanager-main-0/kube-rbac-proxy-metric" id=63ebf167-fd63-4bbf-b75b-b31e5abdc57b name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:44:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:05.290135273Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 15:44:05 ip-10-0-136-68 systemd[1]: Started crio-conmon-cd257d71bd8628055502cce3d5a72d71454f209dc2a53dcb5acc57e391829a74.scope.
Feb 23 15:44:05 ip-10-0-136-68 systemd[1]: Started libcontainer container cd257d71bd8628055502cce3d5a72d71454f209dc2a53dcb5acc57e391829a74.
Feb 23 15:44:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:05.381110289Z" level=info msg="Created container cd257d71bd8628055502cce3d5a72d71454f209dc2a53dcb5acc57e391829a74: openshift-monitoring/alertmanager-main-0/kube-rbac-proxy-metric" id=63ebf167-fd63-4bbf-b75b-b31e5abdc57b name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:44:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:05.381491357Z" level=info msg="Starting container: cd257d71bd8628055502cce3d5a72d71454f209dc2a53dcb5acc57e391829a74" id=546fc263-25e0-4d5a-988e-539ffe3ea595 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 15:44:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:05.386519816Z" level=info msg="Started container" PID=6271 containerID=cd257d71bd8628055502cce3d5a72d71454f209dc2a53dcb5acc57e391829a74 description=openshift-monitoring/alertmanager-main-0/kube-rbac-proxy-metric id=546fc263-25e0-4d5a-988e-539ffe3ea595 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fefe52f4c0671eab0be744708d6f2fdf3974dd556275bbdf477c3bb9235603be
Feb 23 15:44:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:05.396092962Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1ac41d1f9f68b478647df3bed79ae2cd3ca1e8fb93e31de0fb82c2c9e6ff6fed" id=efb394be-b6b8-4ee4-8f20-ce663078f002 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:44:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:05.396245970Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1ac41d1f9f68b478647df3bed79ae2cd3ca1e8fb93e31de0fb82c2c9e6ff6fed not found" id=efb394be-b6b8-4ee4-8f20-ce663078f002 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:44:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:05.396826573Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1ac41d1f9f68b478647df3bed79ae2cd3ca1e8fb93e31de0fb82c2c9e6ff6fed" id=e89df325-d451-4247-979e-2e702b20a100 name=/runtime.v1.ImageService/PullImage
Feb 23 15:44:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:05.397792882Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1ac41d1f9f68b478647df3bed79ae2cd3ca1e8fb93e31de0fb82c2c9e6ff6fed\""
Feb 23 15:44:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:06.054985418Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec4febdf3d180251e9d97c5039560c32d0513739dc469fa684af2fa5fe4de434\""
Feb 23 15:44:06 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:06.159479 2125 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_3ac4a081-240b-441a-af97-d682fecb3ae7/alertmanager/0.log"
Feb 23 15:44:06 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:06.159539 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event=&{ID:3ac4a081-240b-441a-af97-d682fecb3ae7 Type:ContainerStarted Data:cd257d71bd8628055502cce3d5a72d71454f209dc2a53dcb5acc57e391829a74}
Feb 23 15:44:06 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:06.159560 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event=&{ID:3ac4a081-240b-441a-af97-d682fecb3ae7 Type:ContainerStarted Data:079cc4a1c1b5dfcefc02abb5f62bfc8df39ec7cb44dfc198aa7bd571d6a79e9c}
Feb 23 15:44:06 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:06.159573 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event=&{ID:3ac4a081-240b-441a-af97-d682fecb3ae7 Type:ContainerStarted Data:576dacf73a229c9814c676f2be48e1a950d9aa46e191775dac565c11c3d5ef36}
Feb 23 15:44:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:06.279181413Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1ac41d1f9f68b478647df3bed79ae2cd3ca1e8fb93e31de0fb82c2c9e6ff6fed\""
Feb 23 15:44:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:06.665114250Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1ac41d1f9f68b478647df3bed79ae2cd3ca1e8fb93e31de0fb82c2c9e6ff6fed" id=aa6c84bc-87e7-498d-9762-40aa6f6a8cee name=/runtime.v1.ImageService/PullImage
Feb 23 15:44:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:06.665819914Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1ac41d1f9f68b478647df3bed79ae2cd3ca1e8fb93e31de0fb82c2c9e6ff6fed" id=75769e97-e0a6-41b8-9d32-3dfb181357ec name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:44:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:06.666547952Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:4b5544f2b1fb54d82b04ad030305d937195d3556ba12e42d312ef4784079861b,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1ac41d1f9f68b478647df3bed79ae2cd3ca1e8fb93e31de0fb82c2c9e6ff6fed],Size_:325560759,Uid:&Int64Value{Value:1001,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=75769e97-e0a6-41b8-9d32-3dfb181357ec name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:44:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:06.667111132Z" level=info msg="Creating container: openshift-monitoring/thanos-querier-8654d9f96d-892l6/prom-label-proxy" id=47491bc4-7fcc-443a-be4f-378a74939ff8 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:44:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:06.667181637Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 15:44:06 ip-10-0-136-68 systemd[1]: Started crio-conmon-6ec22d9e3a7948b71a8662137640422227165ab747ce1a332f058d44afb09b79.scope.
Feb 23 15:44:06 ip-10-0-136-68 systemd[1]: run-runc-6ec22d9e3a7948b71a8662137640422227165ab747ce1a332f058d44afb09b79-runc.2S0t9F.mount: Succeeded.
Feb 23 15:44:06 ip-10-0-136-68 systemd[1]: Started libcontainer container 6ec22d9e3a7948b71a8662137640422227165ab747ce1a332f058d44afb09b79.
Feb 23 15:44:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:06.802095410Z" level=info msg="Created container 6ec22d9e3a7948b71a8662137640422227165ab747ce1a332f058d44afb09b79: openshift-monitoring/thanos-querier-8654d9f96d-892l6/prom-label-proxy" id=47491bc4-7fcc-443a-be4f-378a74939ff8 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:44:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:06.802490092Z" level=info msg="Starting container: 6ec22d9e3a7948b71a8662137640422227165ab747ce1a332f058d44afb09b79" id=8a4df074-b4e3-467f-8fc4-6d54d4350638 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 15:44:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:06.820508239Z" level=info msg="Started container" PID=6333 containerID=6ec22d9e3a7948b71a8662137640422227165ab747ce1a332f058d44afb09b79 description=openshift-monitoring/thanos-querier-8654d9f96d-892l6/prom-label-proxy id=8a4df074-b4e3-467f-8fc4-6d54d4350638 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8a6086c30905e9778328b4943c655c0e9e8c2c54372aff32cd2923ed14af9ed1
Feb 23 15:44:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:06.829308747Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=76dea290-aa68-40c7-8023-68421e9786b8 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:44:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:06.829487417Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=76dea290-aa68-40c7-8023-68421e9786b8 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:44:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:06.830035368Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=b5d86705-e22e-4e79-842f-80c2144135ca name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:44:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:06.830157755Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=b5d86705-e22e-4e79-842f-80c2144135ca name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:44:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:06.830834270Z" level=info msg="Creating container: openshift-monitoring/thanos-querier-8654d9f96d-892l6/kube-rbac-proxy-rules" id=73883808-4544-453f-90a4-e76344b07e03 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:44:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:06.830915130Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 15:44:06 ip-10-0-136-68 systemd[1]: Started crio-conmon-ad4569ece3c8dbfbd816f5aae7fb4f4f3ee974e02cab9086fd8e502453513833.scope.
Feb 23 15:44:06 ip-10-0-136-68 systemd[1]: Started libcontainer container ad4569ece3c8dbfbd816f5aae7fb4f4f3ee974e02cab9086fd8e502453513833.
Feb 23 15:44:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:06.940006331Z" level=info msg="Created container ad4569ece3c8dbfbd816f5aae7fb4f4f3ee974e02cab9086fd8e502453513833: openshift-monitoring/thanos-querier-8654d9f96d-892l6/kube-rbac-proxy-rules" id=73883808-4544-453f-90a4-e76344b07e03 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:44:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:06.940390499Z" level=info msg="Starting container: ad4569ece3c8dbfbd816f5aae7fb4f4f3ee974e02cab9086fd8e502453513833" id=e6ff8567-a18d-4f8e-a7a9-b452ec42368d name=/runtime.v1.RuntimeService/StartContainer
Feb 23 15:44:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:06.945755792Z" level=info msg="Started container" PID=6377 containerID=ad4569ece3c8dbfbd816f5aae7fb4f4f3ee974e02cab9086fd8e502453513833 description=openshift-monitoring/thanos-querier-8654d9f96d-892l6/kube-rbac-proxy-rules id=e6ff8567-a18d-4f8e-a7a9-b452ec42368d name=/runtime.v1.RuntimeService/StartContainer sandboxID=8a6086c30905e9778328b4943c655c0e9e8c2c54372aff32cd2923ed14af9ed1
Feb 23 15:44:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:06.953367855Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=985b87fa-06ec-4e7a-a3fc-f5d916f36f4c name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:44:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:06.953512496Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=985b87fa-06ec-4e7a-a3fc-f5d916f36f4c name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:44:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:06.954051128Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=08ac0a60-109d-4c99-8b30-ff0c47feb926 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:44:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:06.954200154Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=08ac0a60-109d-4c99-8b30-ff0c47feb926 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:44:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:06.954868090Z" level=info msg="Creating container: openshift-monitoring/thanos-querier-8654d9f96d-892l6/kube-rbac-proxy-metrics" id=575cfc95-d694-438a-89cd-fea1823659a5 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:44:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:06.954965255Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 15:44:06 ip-10-0-136-68 systemd[1]: Started crio-conmon-a74ef9ae477239bf005007e9df5cd3c09410b16013de777146bb48eb8458d2ec.scope.
Feb 23 15:44:06 ip-10-0-136-68 systemd[1]: Started libcontainer container a74ef9ae477239bf005007e9df5cd3c09410b16013de777146bb48eb8458d2ec.
Feb 23 15:44:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:07.065392183Z" level=info msg="Created container a74ef9ae477239bf005007e9df5cd3c09410b16013de777146bb48eb8458d2ec: openshift-monitoring/thanos-querier-8654d9f96d-892l6/kube-rbac-proxy-metrics" id=575cfc95-d694-438a-89cd-fea1823659a5 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:44:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:07.065775657Z" level=info msg="Starting container: a74ef9ae477239bf005007e9df5cd3c09410b16013de777146bb48eb8458d2ec" id=bb7620fd-9da3-41ea-a7f6-0230b6dfce0c name=/runtime.v1.RuntimeService/StartContainer
Feb 23 15:44:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:07.071253649Z" level=info msg="Started container" PID=6422 containerID=a74ef9ae477239bf005007e9df5cd3c09410b16013de777146bb48eb8458d2ec description=openshift-monitoring/thanos-querier-8654d9f96d-892l6/kube-rbac-proxy-metrics id=bb7620fd-9da3-41ea-a7f6-0230b6dfce0c name=/runtime.v1.RuntimeService/StartContainer sandboxID=8a6086c30905e9778328b4943c655c0e9e8c2c54372aff32cd2923ed14af9ed1
Feb 23 15:44:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:07.163214 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8654d9f96d-892l6" event=&{ID:b64af5e5-e41c-4886-a88b-39556a3f4b21 Type:ContainerStarted Data:a74ef9ae477239bf005007e9df5cd3c09410b16013de777146bb48eb8458d2ec}
Feb 23 15:44:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:07.163241 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8654d9f96d-892l6" event=&{ID:b64af5e5-e41c-4886-a88b-39556a3f4b21 Type:ContainerStarted Data:ad4569ece3c8dbfbd816f5aae7fb4f4f3ee974e02cab9086fd8e502453513833}
Feb 23 15:44:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:07.163249 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8654d9f96d-892l6" event=&{ID:b64af5e5-e41c-4886-a88b-39556a3f4b21 Type:ContainerStarted Data:6ec22d9e3a7948b71a8662137640422227165ab747ce1a332f058d44afb09b79}
Feb 23 15:44:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:07.487762862Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1ac41d1f9f68b478647df3bed79ae2cd3ca1e8fb93e31de0fb82c2c9e6ff6fed" id=e89df325-d451-4247-979e-2e702b20a100 name=/runtime.v1.ImageService/PullImage
Feb 23 15:44:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:07.488380303Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1ac41d1f9f68b478647df3bed79ae2cd3ca1e8fb93e31de0fb82c2c9e6ff6fed" id=00aca473-ebb6-47b6-bf82-8029ec481710 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:44:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:07.488510708Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:4b5544f2b1fb54d82b04ad030305d937195d3556ba12e42d312ef4784079861b,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1ac41d1f9f68b478647df3bed79ae2cd3ca1e8fb93e31de0fb82c2c9e6ff6fed],Size_:325560759,Uid:&Int64Value{Value:1001,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=00aca473-ebb6-47b6-bf82-8029ec481710 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:44:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:07.489113720Z" level=info msg="Creating container: openshift-monitoring/alertmanager-main-0/prom-label-proxy" id=82f171b3-d134-44dc-a74e-8141c5b65a9f name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:44:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:07.489207462Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 15:44:07 ip-10-0-136-68 systemd[1]: Started crio-conmon-7f90f42615a665c5a9b7a22d2326628070f1164790b68c71cf37239a15de2319.scope.
Feb 23 15:44:07 ip-10-0-136-68 systemd[1]: Started libcontainer container 7f90f42615a665c5a9b7a22d2326628070f1164790b68c71cf37239a15de2319.
Feb 23 15:44:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:07.576544435Z" level=info msg="Created container 7f90f42615a665c5a9b7a22d2326628070f1164790b68c71cf37239a15de2319: openshift-monitoring/alertmanager-main-0/prom-label-proxy" id=82f171b3-d134-44dc-a74e-8141c5b65a9f name=/runtime.v1.RuntimeService/CreateContainer Feb 23 15:44:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:07.577079224Z" level=info msg="Starting container: 7f90f42615a665c5a9b7a22d2326628070f1164790b68c71cf37239a15de2319" id=85023431-8f12-4013-9f09-86267c5e16b4 name=/runtime.v1.RuntimeService/StartContainer Feb 23 15:44:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:07.583189307Z" level=info msg="Started container" PID=6474 containerID=7f90f42615a665c5a9b7a22d2326628070f1164790b68c71cf37239a15de2319 description=openshift-monitoring/alertmanager-main-0/prom-label-proxy id=85023431-8f12-4013-9f09-86267c5e16b4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fefe52f4c0671eab0be744708d6f2fdf3974dd556275bbdf477c3bb9235603be Feb 23 15:44:07 ip-10-0-136-68 systemd[1]: run-runc-ad4569ece3c8dbfbd816f5aae7fb4f4f3ee974e02cab9086fd8e502453513833-runc.ZZapvm.mount: Succeeded. 
Feb 23 15:44:08 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:08.171706 2125 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_3ac4a081-240b-441a-af97-d682fecb3ae7/alertmanager/0.log" Feb 23 15:44:08 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:08.172058 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event=&{ID:3ac4a081-240b-441a-af97-d682fecb3ae7 Type:ContainerStarted Data:7f90f42615a665c5a9b7a22d2326628070f1164790b68c71cf37239a15de2319} Feb 23 15:44:08 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:08.172067 2125 scope.go:115] "RemoveContainer" containerID="52c92d084eb346ba5baa43a2f9bdb0d4783fad46439e46e4ffcd6604d0a5e68c" Feb 23 15:44:08 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:08.172084 2125 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-8654d9f96d-892l6" Feb 23 15:44:08 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:08.173765287Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fb9be9744b6ce1f692325241b066184282ea7ee06dbd191a0f19b55b0ad8736" id=00912925-c46c-4c97-929e-70b7707fe33e name=/runtime.v1.ImageService/ImageStatus Feb 23 15:44:08 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:08.173968175Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ee2df3b6c5f959807b3fab8b0b30c981e2f43ef273dfbbbf5bb9a469aeeb3d8d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fb9be9744b6ce1f692325241b066184282ea7ee06dbd191a0f19b55b0ad8736],Size_:367066685,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=00912925-c46c-4c97-929e-70b7707fe33e name=/runtime.v1.ImageService/ImageStatus Feb 23 15:44:08 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:08.176042422Z" level=info msg="Checking image status: 
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fb9be9744b6ce1f692325241b066184282ea7ee06dbd191a0f19b55b0ad8736" id=4e27c122-534a-4e68-b8b6-2b18c038f342 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:44:08 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:08.176204968Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ee2df3b6c5f959807b3fab8b0b30c981e2f43ef273dfbbbf5bb9a469aeeb3d8d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fb9be9744b6ce1f692325241b066184282ea7ee06dbd191a0f19b55b0ad8736],Size_:367066685,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=4e27c122-534a-4e68-b8b6-2b18c038f342 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:44:08 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:08.178213856Z" level=info msg="Creating container: openshift-monitoring/alertmanager-main-0/alertmanager" id=cae1f37c-faf7-474c-a8c3-81ae08eec89f name=/runtime.v1.RuntimeService/CreateContainer Feb 23 15:44:08 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:08.178329852Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 15:44:08 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:08.179986 2125 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-8654d9f96d-892l6" Feb 23 15:44:08 ip-10-0-136-68 systemd[1]: Started crio-conmon-c2586b634081d6205f0a674b91294e0a6950ad212c6735ec0f682f4bc485f055.scope. Feb 23 15:44:08 ip-10-0-136-68 systemd[1]: Started libcontainer container c2586b634081d6205f0a674b91294e0a6950ad212c6735ec0f682f4bc485f055. 
Feb 23 15:44:08 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:08.311491575Z" level=info msg="Created container c2586b634081d6205f0a674b91294e0a6950ad212c6735ec0f682f4bc485f055: openshift-monitoring/alertmanager-main-0/alertmanager" id=cae1f37c-faf7-474c-a8c3-81ae08eec89f name=/runtime.v1.RuntimeService/CreateContainer Feb 23 15:44:08 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:08.311882089Z" level=info msg="Starting container: c2586b634081d6205f0a674b91294e0a6950ad212c6735ec0f682f4bc485f055" id=2511a3d9-6ad1-497c-bc62-18200eb04490 name=/runtime.v1.RuntimeService/StartContainer Feb 23 15:44:08 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:08.319630812Z" level=info msg="Started container" PID=6528 containerID=c2586b634081d6205f0a674b91294e0a6950ad212c6735ec0f682f4bc485f055 description=openshift-monitoring/alertmanager-main-0/alertmanager id=2511a3d9-6ad1-497c-bc62-18200eb04490 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fefe52f4c0671eab0be744708d6f2fdf3974dd556275bbdf477c3bb9235603be Feb 23 15:44:08 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:08.368888 2125 kubelet.go:2229] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/alertmanager-main-0" Feb 23 15:44:08 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:08.368941 2125 kubelet.go:2229] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/alertmanager-main-0" Feb 23 15:44:09 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:09.179769 2125 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_3ac4a081-240b-441a-af97-d682fecb3ae7/alertmanager/0.log" Feb 23 15:44:09 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:09.179903 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event=&{ID:3ac4a081-240b-441a-af97-d682fecb3ae7 Type:ContainerStarted Data:c2586b634081d6205f0a674b91294e0a6950ad212c6735ec0f682f4bc485f055} Feb 23 15:44:09 
ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:09.325758704Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec4febdf3d180251e9d97c5039560c32d0513739dc469fa684af2fa5fe4de434" id=f4f5a12c-71c3-42b9-b3d9-615b145ae71c name=/runtime.v1.ImageService/PullImage Feb 23 15:44:09 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:09.329315870Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec4febdf3d180251e9d97c5039560c32d0513739dc469fa684af2fa5fe4de434" id=6ec5fd84-0e26-4a41-99ad-13134dd1611a name=/runtime.v1.ImageService/ImageStatus Feb 23 15:44:09 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:09.330241609Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:7d6a7a794d1c53f9801c5c0cd31acc0bbeac302f72326d692b09c25b56dec99d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec4febdf3d180251e9d97c5039560c32d0513739dc469fa684af2fa5fe4de434],Size_:466962930,Uid:nil,Username:nobody,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=6ec5fd84-0e26-4a41-99ad-13134dd1611a name=/runtime.v1.ImageService/ImageStatus Feb 23 15:44:09 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:09.331081751Z" level=info msg="Creating container: openshift-monitoring/prometheus-k8s-0/prometheus" id=069a3a20-d75f-49c6-8533-e3bd7ba801c7 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 15:44:09 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:09.331156521Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 15:44:09 ip-10-0-136-68 systemd[1]: Started crio-conmon-711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5.scope. Feb 23 15:44:09 ip-10-0-136-68 systemd[1]: run-runc-711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5-runc.VQw6L1.mount: Succeeded. 
Feb 23 15:44:09 ip-10-0-136-68 systemd[1]: Started libcontainer container 711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5. Feb 23 15:44:09 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:09.453631789Z" level=info msg="Created container 711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5: openshift-monitoring/prometheus-k8s-0/prometheus" id=069a3a20-d75f-49c6-8533-e3bd7ba801c7 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 15:44:09 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:09.454034057Z" level=info msg="Starting container: 711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5" id=37565f96-b736-49a4-adcb-a3ac3bdc843a name=/runtime.v1.RuntimeService/StartContainer Feb 23 15:44:09 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:09.471016631Z" level=info msg="Started container" PID=6591 containerID=711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5 description=openshift-monitoring/prometheus-k8s-0/prometheus id=37565f96-b736-49a4-adcb-a3ac3bdc843a name=/runtime.v1.RuntimeService/StartContainer sandboxID=e274954033b5b3194b29410871984fb0b7367e347a5b59b7f27389c2a0694918 Feb 23 15:44:09 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:09.490129498Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:54d3aea3fb46824c7af49ff90c30c44c5409b64745f649c1e19f43c58dab6008" id=affae813-4a0c-4893-b3ff-725bbe97e670 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:44:09 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:09.490320547Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ba9942c6b8556723b2ced172e5fb9c827a5aacee27aa66e7952a42d2b636240d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:54d3aea3fb46824c7af49ff90c30c44c5409b64745f649c1e19f43c58dab6008],Size_:359319908,Uid:&Int64Value{Value:1001,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=affae813-4a0c-4893-b3ff-725bbe97e670 
name=/runtime.v1.ImageService/ImageStatus Feb 23 15:44:09 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:09.490871682Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:54d3aea3fb46824c7af49ff90c30c44c5409b64745f649c1e19f43c58dab6008" id=f91faf0e-a6b2-49d5-9da9-cb5ccca7643a name=/runtime.v1.ImageService/ImageStatus Feb 23 15:44:09 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:09.491026009Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ba9942c6b8556723b2ced172e5fb9c827a5aacee27aa66e7952a42d2b636240d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:54d3aea3fb46824c7af49ff90c30c44c5409b64745f649c1e19f43c58dab6008],Size_:359319908,Uid:&Int64Value{Value:1001,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=f91faf0e-a6b2-49d5-9da9-cb5ccca7643a name=/runtime.v1.ImageService/ImageStatus Feb 23 15:44:09 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:09.491720118Z" level=info msg="Creating container: openshift-monitoring/prometheus-k8s-0/config-reloader" id=70b73075-5116-407f-8677-6bd3d3126478 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 15:44:09 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:09.491822979Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 15:44:09 ip-10-0-136-68 systemd[1]: Started crio-conmon-c35ca486c77b849659ffb75e4ce93cbe332e848a5e72ec6385e26f8804bfab2d.scope. Feb 23 15:44:09 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00102|connmgr|INFO|br-ex<->unix#34: 2 flow_mods in the last 0 s (2 adds) Feb 23 15:44:09 ip-10-0-136-68 systemd[1]: Started libcontainer container c35ca486c77b849659ffb75e4ce93cbe332e848a5e72ec6385e26f8804bfab2d. 
Feb 23 15:44:09 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:09.650927253Z" level=info msg="Created container c35ca486c77b849659ffb75e4ce93cbe332e848a5e72ec6385e26f8804bfab2d: openshift-monitoring/prometheus-k8s-0/config-reloader" id=70b73075-5116-407f-8677-6bd3d3126478 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 15:44:09 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:09.651335359Z" level=info msg="Starting container: c35ca486c77b849659ffb75e4ce93cbe332e848a5e72ec6385e26f8804bfab2d" id=d731ee9f-de40-456d-b086-4768bb1bea47 name=/runtime.v1.RuntimeService/StartContainer Feb 23 15:44:09 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:09.664576759Z" level=info msg="Started container" PID=6640 containerID=c35ca486c77b849659ffb75e4ce93cbe332e848a5e72ec6385e26f8804bfab2d description=openshift-monitoring/prometheus-k8s-0/config-reloader id=d731ee9f-de40-456d-b086-4768bb1bea47 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e274954033b5b3194b29410871984fb0b7367e347a5b59b7f27389c2a0694918 Feb 23 15:44:09 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:09.683421061Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:927fcfe37d62457bfd0932c16a51342e303d9f92d19244b389bb08c30b1b5bec" id=4fbedec6-0a49-4baa-8487-3a14d9f0a283 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:44:09 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:09.683629170Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:7e0949c572f36eadc2058a4a75e85ef222e1a401c4ecc7fd34e193cad494cab5,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:927fcfe37d62457bfd0932c16a51342e303d9f92d19244b389bb08c30b1b5bec],Size_:426731013,Uid:nil,Username:nobody,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=4fbedec6-0a49-4baa-8487-3a14d9f0a283 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:44:09 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:09.685813783Z" level=info msg="Checking image status: 
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:927fcfe37d62457bfd0932c16a51342e303d9f92d19244b389bb08c30b1b5bec" id=e9f49fcf-3ceb-423c-b116-333fd22e7259 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:44:09 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:09.685996654Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:7e0949c572f36eadc2058a4a75e85ef222e1a401c4ecc7fd34e193cad494cab5,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:927fcfe37d62457bfd0932c16a51342e303d9f92d19244b389bb08c30b1b5bec],Size_:426731013,Uid:nil,Username:nobody,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=e9f49fcf-3ceb-423c-b116-333fd22e7259 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:44:09 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:09.686846093Z" level=info msg="Creating container: openshift-monitoring/prometheus-k8s-0/thanos-sidecar" id=d7dbf0b5-6086-42f2-bfbd-ad9377f15e10 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 15:44:09 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:09.686951051Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 15:44:09 ip-10-0-136-68 systemd[1]: Started crio-conmon-901a40d69abe3181a0069657d88694b420c1339fbc1b0c456fe6eb2cb882a957.scope. Feb 23 15:44:09 ip-10-0-136-68 systemd[1]: Started libcontainer container 901a40d69abe3181a0069657d88694b420c1339fbc1b0c456fe6eb2cb882a957. 
Feb 23 15:44:09 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:09.827346414Z" level=info msg="Created container 901a40d69abe3181a0069657d88694b420c1339fbc1b0c456fe6eb2cb882a957: openshift-monitoring/prometheus-k8s-0/thanos-sidecar" id=d7dbf0b5-6086-42f2-bfbd-ad9377f15e10 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 15:44:09 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:09.827791686Z" level=info msg="Starting container: 901a40d69abe3181a0069657d88694b420c1339fbc1b0c456fe6eb2cb882a957" id=b7ef95fc-5c22-47c0-af67-0eaf0623db41 name=/runtime.v1.RuntimeService/StartContainer Feb 23 15:44:09 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:09.848818392Z" level=info msg="Started container" PID=6684 containerID=901a40d69abe3181a0069657d88694b420c1339fbc1b0c456fe6eb2cb882a957 description=openshift-monitoring/prometheus-k8s-0/thanos-sidecar id=b7ef95fc-5c22-47c0-af67-0eaf0623db41 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e274954033b5b3194b29410871984fb0b7367e347a5b59b7f27389c2a0694918 Feb 23 15:44:09 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:09.862489857Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce1de860b639d4831bb74b0271aed6882b45febb37f60bcf8a31157b87ce0495" id=1fc36f79-0e69-4c49-b307-cd220725d2cd name=/runtime.v1.ImageService/ImageStatus Feb 23 15:44:09 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:09.862679679Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:d62409a7cb62709d4c960d445d571fc81522092e7114db20e1ad84bf57c96f31,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce1de860b639d4831bb74b0271aed6882b45febb37f60bcf8a31157b87ce0495],Size_:353087996,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=1fc36f79-0e69-4c49-b307-cd220725d2cd name=/runtime.v1.ImageService/ImageStatus Feb 23 15:44:09 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:09.863356327Z" level=info msg="Checking image status: 
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce1de860b639d4831bb74b0271aed6882b45febb37f60bcf8a31157b87ce0495" id=cd3b7120-34c3-4e3b-b46d-1d494151c1ea name=/runtime.v1.ImageService/ImageStatus Feb 23 15:44:09 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:09.863502011Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:d62409a7cb62709d4c960d445d571fc81522092e7114db20e1ad84bf57c96f31,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce1de860b639d4831bb74b0271aed6882b45febb37f60bcf8a31157b87ce0495],Size_:353087996,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=cd3b7120-34c3-4e3b-b46d-1d494151c1ea name=/runtime.v1.ImageService/ImageStatus Feb 23 15:44:09 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:09.864328968Z" level=info msg="Creating container: openshift-monitoring/prometheus-k8s-0/prometheus-proxy" id=2e6639bd-4354-4446-9f26-e8f15e241762 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 15:44:09 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:09.864430584Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 15:44:09 ip-10-0-136-68 systemd[1]: Started crio-conmon-4fab39e9c0d935f4418358ba565378bbdf29c988c618b79f7a54fe57406468ca.scope. Feb 23 15:44:09 ip-10-0-136-68 systemd[1]: Started libcontainer container 4fab39e9c0d935f4418358ba565378bbdf29c988c618b79f7a54fe57406468ca. 
Feb 23 15:44:10 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:10.015943798Z" level=info msg="Created container 4fab39e9c0d935f4418358ba565378bbdf29c988c618b79f7a54fe57406468ca: openshift-monitoring/prometheus-k8s-0/prometheus-proxy" id=2e6639bd-4354-4446-9f26-e8f15e241762 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 15:44:10 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:10.016605616Z" level=info msg="Starting container: 4fab39e9c0d935f4418358ba565378bbdf29c988c618b79f7a54fe57406468ca" id=21f74460-eae5-4472-a330-a39330571500 name=/runtime.v1.RuntimeService/StartContainer Feb 23 15:44:10 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:10.025059914Z" level=info msg="Started container" PID=6752 containerID=4fab39e9c0d935f4418358ba565378bbdf29c988c618b79f7a54fe57406468ca description=openshift-monitoring/prometheus-k8s-0/prometheus-proxy id=21f74460-eae5-4472-a330-a39330571500 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e274954033b5b3194b29410871984fb0b7367e347a5b59b7f27389c2a0694918 Feb 23 15:44:10 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:10.037052300Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=ef19ab78-7eca-4fcb-9bf5-518bab301491 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:44:10 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:10.037201391Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=ef19ab78-7eca-4fcb-9bf5-518bab301491 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:44:10 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:10.037880505Z" level=info 
msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=8bfa347a-f1c1-42d6-a444-8e2ce2ad6a97 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:44:10 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:10.037989560Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=8bfa347a-f1c1-42d6-a444-8e2ce2ad6a97 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:44:10 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:10.038641878Z" level=info msg="Creating container: openshift-monitoring/prometheus-k8s-0/kube-rbac-proxy" id=e42c5a7d-33d7-437b-a2fc-9a00dad1c63d name=/runtime.v1.RuntimeService/CreateContainer Feb 23 15:44:10 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:10.038744685Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 15:44:10 ip-10-0-136-68 systemd[1]: Started crio-conmon-dfc8637879dc0c499d111b5b370bef5cd5bc65e8220424b06733959c470e0489.scope. Feb 23 15:44:10 ip-10-0-136-68 systemd[1]: Started libcontainer container dfc8637879dc0c499d111b5b370bef5cd5bc65e8220424b06733959c470e0489. 
Feb 23 15:44:10 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:10.122046055Z" level=info msg="Created container dfc8637879dc0c499d111b5b370bef5cd5bc65e8220424b06733959c470e0489: openshift-monitoring/prometheus-k8s-0/kube-rbac-proxy" id=e42c5a7d-33d7-437b-a2fc-9a00dad1c63d name=/runtime.v1.RuntimeService/CreateContainer Feb 23 15:44:10 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:10.122400719Z" level=info msg="Starting container: dfc8637879dc0c499d111b5b370bef5cd5bc65e8220424b06733959c470e0489" id=1b5a04fd-03dc-4874-9158-b3d9608c3744 name=/runtime.v1.RuntimeService/StartContainer Feb 23 15:44:10 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:10.127955769Z" level=info msg="Started container" PID=6796 containerID=dfc8637879dc0c499d111b5b370bef5cd5bc65e8220424b06733959c470e0489 description=openshift-monitoring/prometheus-k8s-0/kube-rbac-proxy id=1b5a04fd-03dc-4874-9158-b3d9608c3744 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e274954033b5b3194b29410871984fb0b7367e347a5b59b7f27389c2a0694918 Feb 23 15:44:10 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:10.134276551Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=b7e3f20d-0fb6-4941-8362-3b3e2f923bc0 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:44:10 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:10.134449359Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=b7e3f20d-0fb6-4941-8362-3b3e2f923bc0 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:44:10 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:10.134937957Z" level=info msg="Checking 
image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=b8f685c9-37fc-4457-a562-c2929fc948de name=/runtime.v1.ImageService/ImageStatus Feb 23 15:44:10 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:10.135066889Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=b8f685c9-37fc-4457-a562-c2929fc948de name=/runtime.v1.ImageService/ImageStatus Feb 23 15:44:10 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:10.136401246Z" level=info msg="Creating container: openshift-monitoring/prometheus-k8s-0/kube-rbac-proxy-thanos" id=411574a7-50e9-4a86-98fa-480677466e2e name=/runtime.v1.RuntimeService/CreateContainer Feb 23 15:44:10 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:10.136494957Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 15:44:10 ip-10-0-136-68 systemd[1]: Started crio-conmon-1348f91f45926830378fe0ce3a4ff7f09832f35cd573d1d219dbbf7458920de4.scope. Feb 23 15:44:10 ip-10-0-136-68 systemd[1]: Started libcontainer container 1348f91f45926830378fe0ce3a4ff7f09832f35cd573d1d219dbbf7458920de4. 
Feb 23 15:44:10 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:10.183672 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event=&{ID:6dabd947-ddab-4fdb-9e78-cb27f3551554 Type:ContainerStarted Data:dfc8637879dc0c499d111b5b370bef5cd5bc65e8220424b06733959c470e0489} Feb 23 15:44:10 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:10.183702 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event=&{ID:6dabd947-ddab-4fdb-9e78-cb27f3551554 Type:ContainerStarted Data:4fab39e9c0d935f4418358ba565378bbdf29c988c618b79f7a54fe57406468ca} Feb 23 15:44:10 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:10.183716 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event=&{ID:6dabd947-ddab-4fdb-9e78-cb27f3551554 Type:ContainerStarted Data:901a40d69abe3181a0069657d88694b420c1339fbc1b0c456fe6eb2cb882a957} Feb 23 15:44:10 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:10.183737 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event=&{ID:6dabd947-ddab-4fdb-9e78-cb27f3551554 Type:ContainerStarted Data:c35ca486c77b849659ffb75e4ce93cbe332e848a5e72ec6385e26f8804bfab2d} Feb 23 15:44:10 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:10.183750 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event=&{ID:6dabd947-ddab-4fdb-9e78-cb27f3551554 Type:ContainerStarted Data:711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5} Feb 23 15:44:10 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:10.223154940Z" level=info msg="Created container 1348f91f45926830378fe0ce3a4ff7f09832f35cd573d1d219dbbf7458920de4: openshift-monitoring/prometheus-k8s-0/kube-rbac-proxy-thanos" id=411574a7-50e9-4a86-98fa-480677466e2e name=/runtime.v1.RuntimeService/CreateContainer Feb 23 15:44:10 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:10.223481731Z" level=info 
msg="Starting container: 1348f91f45926830378fe0ce3a4ff7f09832f35cd573d1d219dbbf7458920de4" id=93bc82b6-9650-471f-bdcd-805f710a8cac name=/runtime.v1.RuntimeService/StartContainer Feb 23 15:44:10 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:44:10.229588848Z" level=info msg="Started container" PID=6840 containerID=1348f91f45926830378fe0ce3a4ff7f09832f35cd573d1d219dbbf7458920de4 description=openshift-monitoring/prometheus-k8s-0/kube-rbac-proxy-thanos id=93bc82b6-9650-471f-bdcd-805f710a8cac name=/runtime.v1.RuntimeService/StartContainer sandboxID=e274954033b5b3194b29410871984fb0b7367e347a5b59b7f27389c2a0694918 Feb 23 15:44:11 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:11.192735 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event=&{ID:6dabd947-ddab-4fdb-9e78-cb27f3551554 Type:ContainerStarted Data:1348f91f45926830378fe0ce3a4ff7f09832f35cd573d1d219dbbf7458920de4} Feb 23 15:44:11 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:11.408930 2125 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-b2mxx" Feb 23 15:44:12 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:12.277928 2125 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Feb 23 15:44:14 ip-10-0-136-68 systemd[1]: run-runc-434bc76ecbe42021fc40f1c0739badd563ca7f187e1c106f228683f5b566bff3-runc.Knlu3H.mount: Succeeded. Feb 23 15:44:15 ip-10-0-136-68 rpm-ostree[2350]: In idle state; will auto-exit in 60 seconds Feb 23 15:44:15 ip-10-0-136-68 systemd[1]: rpm-ostreed.service: Succeeded. 
Feb 23 15:44:15 ip-10-0-136-68 systemd[1]: rpm-ostreed.service: Consumed 1.681s CPU time Feb 23 15:44:17 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:17.277491 2125 kubelet.go:2229] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Feb 23 15:44:17 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:17.352386 2125 kubelet.go:2229] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Feb 23 15:44:18 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:18.244704 2125 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Feb 23 15:44:18 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:18.368973 2125 kubelet.go:2229] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/alertmanager-main-0" Feb 23 15:44:24 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00103|connmgr|INFO|br-ex<->unix#42: 2 flow_mods in the last 0 s (2 adds) Feb 23 15:44:27 ip-10-0-136-68 systemd[1]: run-runc-711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5-runc.yFNaWj.mount: Succeeded. Feb 23 15:44:28 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:44:28.408720 2125 kubelet.go:2229] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/alertmanager-main-0" Feb 23 15:44:32 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00104|connmgr|INFO|br-int<->unix#2: 867 flow_mods in the 56 s starting 59 s ago (665 adds, 202 deletes) Feb 23 15:44:32 ip-10-0-136-68 systemd[1]: run-runc-711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5-runc.S4gbEN.mount: Succeeded. Feb 23 15:44:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00105|connmgr|INFO|br-ex<->unix#46: 2 flow_mods in the last 0 s (2 adds) Feb 23 15:44:47 ip-10-0-136-68 systemd[1]: run-runc-711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5-runc.LyZxqc.mount: Succeeded. 
Feb 23 15:44:54 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00106|connmgr|INFO|br-ex<->unix#55: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:45:02 ip-10-0-136-68 systemd[1]: run-runc-711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5-runc.XRMb2v.mount: Succeeded.
Feb 23 15:45:04 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:04.608615 2125 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-monitoring/prometheus-k8s-0]
Feb 23 15:45:04 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:04.609464 2125 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID=6dabd947-ddab-4fdb-9e78-cb27f3551554 containerName="prometheus" containerID="cri-o://711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5" gracePeriod=600
Feb 23 15:45:04 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:04.609993 2125 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID=6dabd947-ddab-4fdb-9e78-cb27f3551554 containerName="kube-rbac-proxy-thanos" containerID="cri-o://1348f91f45926830378fe0ce3a4ff7f09832f35cd573d1d219dbbf7458920de4" gracePeriod=600
Feb 23 15:45:04 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:04.610129 2125 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID=6dabd947-ddab-4fdb-9e78-cb27f3551554 containerName="kube-rbac-proxy" containerID="cri-o://dfc8637879dc0c499d111b5b370bef5cd5bc65e8220424b06733959c470e0489" gracePeriod=600
Feb 23 15:45:04 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:04.610176 2125 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID=6dabd947-ddab-4fdb-9e78-cb27f3551554 containerName="prometheus-proxy" containerID="cri-o://4fab39e9c0d935f4418358ba565378bbdf29c988c618b79f7a54fe57406468ca" gracePeriod=600
Feb 23 15:45:04 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:04.610209 2125 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID=6dabd947-ddab-4fdb-9e78-cb27f3551554 containerName="thanos-sidecar" containerID="cri-o://901a40d69abe3181a0069657d88694b420c1339fbc1b0c456fe6eb2cb882a957" gracePeriod=600
Feb 23 15:45:04 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:04.610242 2125 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID=6dabd947-ddab-4fdb-9e78-cb27f3551554 containerName="config-reloader" containerID="cri-o://c35ca486c77b849659ffb75e4ce93cbe332e848a5e72ec6385e26f8804bfab2d" gracePeriod=600
Feb 23 15:45:04 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:04.610700778Z" level=info msg="Stopping container: c35ca486c77b849659ffb75e4ce93cbe332e848a5e72ec6385e26f8804bfab2d (timeout: 600s)" id=160a72cd-69e3-44d1-8b5e-984f0b058b41 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 15:45:04 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:04.610945948Z" level=info msg="Stopping container: 1348f91f45926830378fe0ce3a4ff7f09832f35cd573d1d219dbbf7458920de4 (timeout: 600s)" id=ce9c4095-a1ed-49e5-9d10-b641a55fcf69 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 15:45:04 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:04.610968699Z" level=info msg="Stopping container: dfc8637879dc0c499d111b5b370bef5cd5bc65e8220424b06733959c470e0489 (timeout: 600s)" id=639bb3d8-5f8a-44c4-8bc4-c6047810adea name=/runtime.v1.RuntimeService/StopContainer
Feb 23 15:45:04 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:04.610956367Z" level=info msg="Stopping container: 4fab39e9c0d935f4418358ba565378bbdf29c988c618b79f7a54fe57406468ca (timeout: 600s)" id=25361cb0-6c84-49e0-9dbd-dbcb63cafb60 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 15:45:04 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:04.611097330Z" level=info msg="Stopping container: 901a40d69abe3181a0069657d88694b420c1339fbc1b0c456fe6eb2cb882a957 (timeout: 600s)" id=b82d2b3b-7211-44ab-9622-075876f56dc5 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 15:45:04 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:04.611129952Z" level=info msg="Stopping container: 711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5 (timeout: 600s)" id=db419934-1d2c-48cb-872d-b8823bc8f02a name=/runtime.v1.RuntimeService/StopContainer
Feb 23 15:45:04 ip-10-0-136-68 conmon[6625]: conmon c35ca486c77b849659ff : container 6640 exited with status 2
Feb 23 15:45:04 ip-10-0-136-68 conmon[6739]: conmon 4fab39e9c0d935f44183 : container 6752 exited with status 2
Feb 23 15:45:04 ip-10-0-136-68 systemd[1]: crio-4fab39e9c0d935f4418358ba565378bbdf29c988c618b79f7a54fe57406468ca.scope: Succeeded.
Feb 23 15:45:04 ip-10-0-136-68 systemd[1]: crio-4fab39e9c0d935f4418358ba565378bbdf29c988c618b79f7a54fe57406468ca.scope: Consumed 115ms CPU time
Feb 23 15:45:04 ip-10-0-136-68 systemd[1]: crio-conmon-c35ca486c77b849659ffb75e4ce93cbe332e848a5e72ec6385e26f8804bfab2d.scope: Succeeded.
Feb 23 15:45:04 ip-10-0-136-68 systemd[1]: crio-conmon-c35ca486c77b849659ffb75e4ce93cbe332e848a5e72ec6385e26f8804bfab2d.scope: Consumed 20ms CPU time
Feb 23 15:45:04 ip-10-0-136-68 systemd[1]: crio-conmon-4fab39e9c0d935f4418358ba565378bbdf29c988c618b79f7a54fe57406468ca.scope: Succeeded.
Feb 23 15:45:04 ip-10-0-136-68 systemd[1]: crio-conmon-4fab39e9c0d935f4418358ba565378bbdf29c988c618b79f7a54fe57406468ca.scope: Consumed 20ms CPU time
Feb 23 15:45:04 ip-10-0-136-68 systemd[1]: crio-c35ca486c77b849659ffb75e4ce93cbe332e848a5e72ec6385e26f8804bfab2d.scope: Succeeded.
Feb 23 15:45:04 ip-10-0-136-68 systemd[1]: crio-c35ca486c77b849659ffb75e4ce93cbe332e848a5e72ec6385e26f8804bfab2d.scope: Consumed 58ms CPU time
Feb 23 15:45:04 ip-10-0-136-68 systemd[1]: crio-901a40d69abe3181a0069657d88694b420c1339fbc1b0c456fe6eb2cb882a957.scope: Succeeded.
Feb 23 15:45:04 ip-10-0-136-68 systemd[1]: crio-901a40d69abe3181a0069657d88694b420c1339fbc1b0c456fe6eb2cb882a957.scope: Consumed 143ms CPU time
Feb 23 15:45:04 ip-10-0-136-68 systemd[1]: crio-conmon-901a40d69abe3181a0069657d88694b420c1339fbc1b0c456fe6eb2cb882a957.scope: Succeeded.
Feb 23 15:45:04 ip-10-0-136-68 systemd[1]: crio-conmon-901a40d69abe3181a0069657d88694b420c1339fbc1b0c456fe6eb2cb882a957.scope: Consumed 19ms CPU time
Feb 23 15:45:04 ip-10-0-136-68 systemd[1]: crio-711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5.scope: Succeeded.
Feb 23 15:45:04 ip-10-0-136-68 systemd[1]: crio-711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5.scope: Consumed 8.507s CPU time
Feb 23 15:45:04 ip-10-0-136-68 systemd[1]: crio-conmon-711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5.scope: Succeeded.
Feb 23 15:45:04 ip-10-0-136-68 systemd[1]: crio-conmon-711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5.scope: Consumed 22ms CPU time
Feb 23 15:45:04 ip-10-0-136-68 systemd[1]: crio-1348f91f45926830378fe0ce3a4ff7f09832f35cd573d1d219dbbf7458920de4.scope: Succeeded.
Feb 23 15:45:04 ip-10-0-136-68 systemd[1]: crio-1348f91f45926830378fe0ce3a4ff7f09832f35cd573d1d219dbbf7458920de4.scope: Consumed 59ms CPU time
Feb 23 15:45:04 ip-10-0-136-68 systemd[1]: crio-conmon-1348f91f45926830378fe0ce3a4ff7f09832f35cd573d1d219dbbf7458920de4.scope: Succeeded.
Feb 23 15:45:04 ip-10-0-136-68 systemd[1]: crio-conmon-1348f91f45926830378fe0ce3a4ff7f09832f35cd573d1d219dbbf7458920de4.scope: Consumed 18ms CPU time
Feb 23 15:45:04 ip-10-0-136-68 systemd[1]: crio-dfc8637879dc0c499d111b5b370bef5cd5bc65e8220424b06733959c470e0489.scope: Succeeded.
Feb 23 15:45:04 ip-10-0-136-68 systemd[1]: crio-dfc8637879dc0c499d111b5b370bef5cd5bc65e8220424b06733959c470e0489.scope: Consumed 64ms CPU time
Feb 23 15:45:04 ip-10-0-136-68 systemd[1]: crio-conmon-dfc8637879dc0c499d111b5b370bef5cd5bc65e8220424b06733959c470e0489.scope: Succeeded.
Feb 23 15:45:04 ip-10-0-136-68 systemd[1]: crio-conmon-dfc8637879dc0c499d111b5b370bef5cd5bc65e8220424b06733959c470e0489.scope: Consumed 18ms CPU time
Feb 23 15:45:04 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-67240926a3c602002772f3224ec5bec0dc6105574e242b7be40f55feba276526-merged.mount: Succeeded.
Feb 23 15:45:04 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-07aafa9858c620c90f07b6ff82b04e2681e6ada748f2325a7969e3d3710fd3c3-merged.mount: Succeeded.
Feb 23 15:45:04 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:04.782148964Z" level=info msg="Stopped container 4fab39e9c0d935f4418358ba565378bbdf29c988c618b79f7a54fe57406468ca: openshift-monitoring/prometheus-k8s-0/prometheus-proxy" id=25361cb0-6c84-49e0-9dbd-dbcb63cafb60 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 15:45:04 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-a0391e4fa164b7a25bbff61681982f8ad05e84d425762ff8d0f55da41ffa26c0-merged.mount: Succeeded.
Feb 23 15:45:04 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:04.788729531Z" level=info msg="Stopped container 1348f91f45926830378fe0ce3a4ff7f09832f35cd573d1d219dbbf7458920de4: openshift-monitoring/prometheus-k8s-0/kube-rbac-proxy-thanos" id=ce9c4095-a1ed-49e5-9d10-b641a55fcf69 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 15:45:04 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:04.792061110Z" level=info msg="Stopped container c35ca486c77b849659ffb75e4ce93cbe332e848a5e72ec6385e26f8804bfab2d: openshift-monitoring/prometheus-k8s-0/config-reloader" id=160a72cd-69e3-44d1-8b5e-984f0b058b41 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 15:45:04 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-82259f9efa5fe23e1daa86f77cb6ed55ba3e77ace6f4c285b821c6b47e01671e-merged.mount: Succeeded.
Feb 23 15:45:04 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:04.803931468Z" level=info msg="Stopped container dfc8637879dc0c499d111b5b370bef5cd5bc65e8220424b06733959c470e0489: openshift-monitoring/prometheus-k8s-0/kube-rbac-proxy" id=639bb3d8-5f8a-44c4-8bc4-c6047810adea name=/runtime.v1.RuntimeService/StopContainer
Feb 23 15:45:04 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-ec34290eb13dc07c7aa3f957022dc76237ff8b4e358fa68b7fa44ec9755b1566-merged.mount: Succeeded.
Feb 23 15:45:04 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:04.809379481Z" level=info msg="Stopped container 711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5: openshift-monitoring/prometheus-k8s-0/prometheus" id=db419934-1d2c-48cb-872d-b8823bc8f02a name=/runtime.v1.RuntimeService/StopContainer
Feb 23 15:45:04 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:04.816231218Z" level=info msg="Stopped container 901a40d69abe3181a0069657d88694b420c1339fbc1b0c456fe6eb2cb882a957: openshift-monitoring/prometheus-k8s-0/thanos-sidecar" id=b82d2b3b-7211-44ab-9622-075876f56dc5 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 15:45:04 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:04.816603293Z" level=info msg="Stopping pod sandbox: e274954033b5b3194b29410871984fb0b7367e347a5b59b7f27389c2a0694918" id=b4461647-fea7-4aef-ad77-b49028030a6a name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 15:45:04 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:04.816745197Z" level=info msg="Got pod network &{Name:prometheus-k8s-0 Namespace:openshift-monitoring ID:e274954033b5b3194b29410871984fb0b7367e347a5b59b7f27389c2a0694918 UID:6dabd947-ddab-4fdb-9e78-cb27f3551554 NetNS:/var/run/netns/f75b780b-87af-4e85-b9a2-72004867fd14 Networks:[{Name:multus-cni-network Ifname:eth0}] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 15:45:04 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:04.816845602Z" level=info msg="Deleting pod openshift-monitoring_prometheus-k8s-0 from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 15:45:04 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00107|bridge|INFO|bridge br-int: deleted interface e274954033b5b31 on port 17
Feb 23 15:45:04 ip-10-0-136-68 kernel: device e274954033b5b31 left promiscuous mode
Feb 23 15:45:05 ip-10-0-136-68 crio[2086]: 2023-02-23T15:45:04Z [verbose] Del: openshift-monitoring:prometheus-k8s-0:6dabd947-ddab-4fdb-9e78-cb27f3551554:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","dns":{},"ipam":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxage":5,"logfile-maxbackups":5,"logfile-maxsize":100,"name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay"}
Feb 23 15:45:05 ip-10-0-136-68 crio[2086]: I0223 15:45:04.945969 8136 ovs.go:90] Maximum command line arguments set to: 191102
Feb 23 15:45:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:05.218434292Z" level=info msg="Stopped pod sandbox: e274954033b5b3194b29410871984fb0b7367e347a5b59b7f27389c2a0694918" id=b4461647-fea7-4aef-ad77-b49028030a6a name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.224136 2125 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_6dabd947-ddab-4fdb-9e78-cb27f3551554/prometheus-proxy/0.log"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.224606 2125 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_6dabd947-ddab-4fdb-9e78-cb27f3551554/config-reloader/0.log"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.295965 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6dabd947-ddab-4fdb-9e78-cb27f3551554-configmap-serving-certs-ca-bundle\") pod \"6dabd947-ddab-4fdb-9e78-cb27f3551554\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") "
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.295999 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6dabd947-ddab-4fdb-9e78-cb27f3551554-configmap-metrics-client-ca\") pod \"6dabd947-ddab-4fdb-9e78-cb27f3551554\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") "
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.296027 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6dabd947-ddab-4fdb-9e78-cb27f3551554-prometheus-trusted-ca-bundle\") pod \"6dabd947-ddab-4fdb-9e78-cb27f3551554\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") "
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.296050 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-proxy\" (UniqueName: \"kubernetes.io/secret/6dabd947-ddab-4fdb-9e78-cb27f3551554-secret-prometheus-k8s-proxy\") pod \"6dabd947-ddab-4fdb-9e78-cb27f3551554\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") "
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.296081 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6dabd947-ddab-4fdb-9e78-cb27f3551554-config\") pod \"6dabd947-ddab-4fdb-9e78-cb27f3551554\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") "
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.296107 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6dabd947-ddab-4fdb-9e78-cb27f3551554-web-config\") pod \"6dabd947-ddab-4fdb-9e78-cb27f3551554\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") "
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.296137 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-kube-etcd-client-certs\" (UniqueName: \"kubernetes.io/secret/6dabd947-ddab-4fdb-9e78-cb27f3551554-secret-kube-etcd-client-certs\") pod \"6dabd947-ddab-4fdb-9e78-cb27f3551554\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") "
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.296162 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/6dabd947-ddab-4fdb-9e78-cb27f3551554-secret-prometheus-k8s-tls\") pod \"6dabd947-ddab-4fdb-9e78-cb27f3551554\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") "
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.296188 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6dabd947-ddab-4fdb-9e78-cb27f3551554-prometheus-k8s-rulefiles-0\") pod \"6dabd947-ddab-4fdb-9e78-cb27f3551554\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") "
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.296217 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/6dabd947-ddab-4fdb-9e78-cb27f3551554-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"6dabd947-ddab-4fdb-9e78-cb27f3551554\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") "
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: W0223 15:45:05.296204 2125 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/6dabd947-ddab-4fdb-9e78-cb27f3551554/volumes/kubernetes.io~configmap/configmap-metrics-client-ca: clearQuota called, but quotas disabled
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: W0223 15:45:05.296225 2125 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/6dabd947-ddab-4fdb-9e78-cb27f3551554/volumes/kubernetes.io~configmap/configmap-serving-certs-ca-bundle: clearQuota called, but quotas disabled
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.296401 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dabd947-ddab-4fdb-9e78-cb27f3551554-configmap-metrics-client-ca" (OuterVolumeSpecName: "configmap-metrics-client-ca") pod "6dabd947-ddab-4fdb-9e78-cb27f3551554" (UID: "6dabd947-ddab-4fdb-9e78-cb27f3551554"). InnerVolumeSpecName "configmap-metrics-client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.296418 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dabd947-ddab-4fdb-9e78-cb27f3551554-configmap-serving-certs-ca-bundle" (OuterVolumeSpecName: "configmap-serving-certs-ca-bundle") pod "6dabd947-ddab-4fdb-9e78-cb27f3551554" (UID: "6dabd947-ddab-4fdb-9e78-cb27f3551554"). InnerVolumeSpecName "configmap-serving-certs-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.296247 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6dabd947-ddab-4fdb-9e78-cb27f3551554-secret-kube-rbac-proxy\") pod \"6dabd947-ddab-4fdb-9e78-cb27f3551554\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") "
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.296483 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6dabd947-ddab-4fdb-9e78-cb27f3551554-configmap-kubelet-serving-ca-bundle\") pod \"6dabd947-ddab-4fdb-9e78-cb27f3551554\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") "
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.296512 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6dabd947-ddab-4fdb-9e78-cb27f3551554-config-out\") pod \"6dabd947-ddab-4fdb-9e78-cb27f3551554\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") "
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.296544 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/6dabd947-ddab-4fdb-9e78-cb27f3551554-prometheus-k8s-db\") pod \"6dabd947-ddab-4fdb-9e78-cb27f3551554\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") "
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.296572 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/6dabd947-ddab-4fdb-9e78-cb27f3551554-secret-grpc-tls\") pod \"6dabd947-ddab-4fdb-9e78-cb27f3551554\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") "
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.296611 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6dabd947-ddab-4fdb-9e78-cb27f3551554-secret-metrics-client-certs\") pod \"6dabd947-ddab-4fdb-9e78-cb27f3551554\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") "
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: W0223 15:45:05.296621 2125 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/6dabd947-ddab-4fdb-9e78-cb27f3551554/volumes/kubernetes.io~configmap/prometheus-trusted-ca-bundle: clearQuota called, but quotas disabled
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.296644 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6dabd947-ddab-4fdb-9e78-cb27f3551554-tls-assets\") pod \"6dabd947-ddab-4fdb-9e78-cb27f3551554\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") "
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.296670 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6dabd947-ddab-4fdb-9e78-cb27f3551554-metrics-client-ca\") pod \"6dabd947-ddab-4fdb-9e78-cb27f3551554\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") "
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.296701 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vl2lv\" (UniqueName: \"kubernetes.io/projected/6dabd947-ddab-4fdb-9e78-cb27f3551554-kube-api-access-vl2lv\") pod \"6dabd947-ddab-4fdb-9e78-cb27f3551554\" (UID: \"6dabd947-ddab-4fdb-9e78-cb27f3551554\") "
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.296780 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dabd947-ddab-4fdb-9e78-cb27f3551554-prometheus-trusted-ca-bundle" (OuterVolumeSpecName: "prometheus-trusted-ca-bundle") pod "6dabd947-ddab-4fdb-9e78-cb27f3551554" (UID: "6dabd947-ddab-4fdb-9e78-cb27f3551554"). InnerVolumeSpecName "prometheus-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.296828 2125 reconciler.go:399] "Volume detached for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6dabd947-ddab-4fdb-9e78-cb27f3551554-configmap-serving-certs-ca-bundle\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.296855 2125 reconciler.go:399] "Volume detached for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6dabd947-ddab-4fdb-9e78-cb27f3551554-configmap-metrics-client-ca\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: W0223 15:45:05.296950 2125 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/6dabd947-ddab-4fdb-9e78-cb27f3551554/volumes/kubernetes.io~configmap/configmap-kubelet-serving-ca-bundle: clearQuota called, but quotas disabled
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: W0223 15:45:05.296994 2125 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/6dabd947-ddab-4fdb-9e78-cb27f3551554/volumes/kubernetes.io~configmap/prometheus-k8s-rulefiles-0: clearQuota called, but quotas disabled
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.297075 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dabd947-ddab-4fdb-9e78-cb27f3551554-configmap-kubelet-serving-ca-bundle" (OuterVolumeSpecName: "configmap-kubelet-serving-ca-bundle") pod "6dabd947-ddab-4fdb-9e78-cb27f3551554" (UID: "6dabd947-ddab-4fdb-9e78-cb27f3551554"). InnerVolumeSpecName "configmap-kubelet-serving-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: W0223 15:45:05.297150 2125 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/6dabd947-ddab-4fdb-9e78-cb27f3551554/volumes/kubernetes.io~empty-dir/config-out: clearQuota called, but quotas disabled
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.297254 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6dabd947-ddab-4fdb-9e78-cb27f3551554-config-out" (OuterVolumeSpecName: "config-out") pod "6dabd947-ddab-4fdb-9e78-cb27f3551554" (UID: "6dabd947-ddab-4fdb-9e78-cb27f3551554"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: W0223 15:45:05.297335 2125 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/6dabd947-ddab-4fdb-9e78-cb27f3551554/volumes/kubernetes.io~configmap/metrics-client-ca: clearQuota called, but quotas disabled
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: W0223 15:45:05.297389 2125 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/6dabd947-ddab-4fdb-9e78-cb27f3551554/volumes/kubernetes.io~empty-dir/prometheus-k8s-db: clearQuota called, but quotas disabled
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.297467 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dabd947-ddab-4fdb-9e78-cb27f3551554-metrics-client-ca" (OuterVolumeSpecName: "metrics-client-ca") pod "6dabd947-ddab-4fdb-9e78-cb27f3551554" (UID: "6dabd947-ddab-4fdb-9e78-cb27f3551554"). InnerVolumeSpecName "metrics-client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.298620 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dabd947-ddab-4fdb-9e78-cb27f3551554-prometheus-k8s-rulefiles-0" (OuterVolumeSpecName: "prometheus-k8s-rulefiles-0") pod "6dabd947-ddab-4fdb-9e78-cb27f3551554" (UID: "6dabd947-ddab-4fdb-9e78-cb27f3551554"). InnerVolumeSpecName "prometheus-k8s-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.298699 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6dabd947-ddab-4fdb-9e78-cb27f3551554-prometheus-k8s-db" (OuterVolumeSpecName: "prometheus-k8s-db") pod "6dabd947-ddab-4fdb-9e78-cb27f3551554" (UID: "6dabd947-ddab-4fdb-9e78-cb27f3551554"). InnerVolumeSpecName "prometheus-k8s-db". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.303661 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dabd947-ddab-4fdb-9e78-cb27f3551554-secret-prometheus-k8s-tls" (OuterVolumeSpecName: "secret-prometheus-k8s-tls") pod "6dabd947-ddab-4fdb-9e78-cb27f3551554" (UID: "6dabd947-ddab-4fdb-9e78-cb27f3551554"). InnerVolumeSpecName "secret-prometheus-k8s-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.303717 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dabd947-ddab-4fdb-9e78-cb27f3551554-secret-kube-etcd-client-certs" (OuterVolumeSpecName: "secret-kube-etcd-client-certs") pod "6dabd947-ddab-4fdb-9e78-cb27f3551554" (UID: "6dabd947-ddab-4fdb-9e78-cb27f3551554"). InnerVolumeSpecName "secret-kube-etcd-client-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.304186 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dabd947-ddab-4fdb-9e78-cb27f3551554-secret-kube-rbac-proxy" (OuterVolumeSpecName: "secret-kube-rbac-proxy") pod "6dabd947-ddab-4fdb-9e78-cb27f3551554" (UID: "6dabd947-ddab-4fdb-9e78-cb27f3551554"). InnerVolumeSpecName "secret-kube-rbac-proxy". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.305522 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dabd947-ddab-4fdb-9e78-cb27f3551554-secret-prometheus-k8s-proxy" (OuterVolumeSpecName: "secret-prometheus-k8s-proxy") pod "6dabd947-ddab-4fdb-9e78-cb27f3551554" (UID: "6dabd947-ddab-4fdb-9e78-cb27f3551554"). InnerVolumeSpecName "secret-prometheus-k8s-proxy". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.305560 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dabd947-ddab-4fdb-9e78-cb27f3551554-config" (OuterVolumeSpecName: "config") pod "6dabd947-ddab-4fdb-9e78-cb27f3551554" (UID: "6dabd947-ddab-4fdb-9e78-cb27f3551554"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.305704 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dabd947-ddab-4fdb-9e78-cb27f3551554-secret-metrics-client-certs" (OuterVolumeSpecName: "secret-metrics-client-certs") pod "6dabd947-ddab-4fdb-9e78-cb27f3551554" (UID: "6dabd947-ddab-4fdb-9e78-cb27f3551554"). InnerVolumeSpecName "secret-metrics-client-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.308209 2125 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_6dabd947-ddab-4fdb-9e78-cb27f3551554/prometheus-proxy/0.log"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.308605 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dabd947-ddab-4fdb-9e78-cb27f3551554-kube-api-access-vl2lv" (OuterVolumeSpecName: "kube-api-access-vl2lv") pod "6dabd947-ddab-4fdb-9e78-cb27f3551554" (UID: "6dabd947-ddab-4fdb-9e78-cb27f3551554"). InnerVolumeSpecName "kube-api-access-vl2lv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.308618 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dabd947-ddab-4fdb-9e78-cb27f3551554-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "6dabd947-ddab-4fdb-9e78-cb27f3551554" (UID: "6dabd947-ddab-4fdb-9e78-cb27f3551554"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.308937 2125 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_6dabd947-ddab-4fdb-9e78-cb27f3551554/config-reloader/0.log"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.309346 2125 generic.go:296] "Generic (PLEG): container finished" podID=6dabd947-ddab-4fdb-9e78-cb27f3551554 containerID="1348f91f45926830378fe0ce3a4ff7f09832f35cd573d1d219dbbf7458920de4" exitCode=0
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.309367 2125 generic.go:296] "Generic (PLEG): container finished" podID=6dabd947-ddab-4fdb-9e78-cb27f3551554 containerID="dfc8637879dc0c499d111b5b370bef5cd5bc65e8220424b06733959c470e0489" exitCode=0
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.309379 2125 generic.go:296] "Generic (PLEG): container finished" podID=6dabd947-ddab-4fdb-9e78-cb27f3551554 containerID="4fab39e9c0d935f4418358ba565378bbdf29c988c618b79f7a54fe57406468ca" exitCode=2
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.309390 2125 generic.go:296] "Generic (PLEG): container finished" podID=6dabd947-ddab-4fdb-9e78-cb27f3551554 containerID="901a40d69abe3181a0069657d88694b420c1339fbc1b0c456fe6eb2cb882a957" exitCode=0
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.309402 2125 generic.go:296] "Generic (PLEG): container finished" podID=6dabd947-ddab-4fdb-9e78-cb27f3551554 containerID="c35ca486c77b849659ffb75e4ce93cbe332e848a5e72ec6385e26f8804bfab2d" exitCode=2
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.309414 2125 generic.go:296] "Generic (PLEG): container finished" podID=6dabd947-ddab-4fdb-9e78-cb27f3551554 containerID="711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5" exitCode=0
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.309437 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event=&{ID:6dabd947-ddab-4fdb-9e78-cb27f3551554 Type:ContainerDied Data:1348f91f45926830378fe0ce3a4ff7f09832f35cd573d1d219dbbf7458920de4}
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.309462 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event=&{ID:6dabd947-ddab-4fdb-9e78-cb27f3551554 Type:ContainerDied Data:dfc8637879dc0c499d111b5b370bef5cd5bc65e8220424b06733959c470e0489}
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.309477 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event=&{ID:6dabd947-ddab-4fdb-9e78-cb27f3551554 Type:ContainerDied Data:4fab39e9c0d935f4418358ba565378bbdf29c988c618b79f7a54fe57406468ca}
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.309491 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event=&{ID:6dabd947-ddab-4fdb-9e78-cb27f3551554 Type:ContainerDied Data:901a40d69abe3181a0069657d88694b420c1339fbc1b0c456fe6eb2cb882a957}
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.309504 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event=&{ID:6dabd947-ddab-4fdb-9e78-cb27f3551554 Type:ContainerDied Data:c35ca486c77b849659ffb75e4ce93cbe332e848a5e72ec6385e26f8804bfab2d}
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.309518 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event=&{ID:6dabd947-ddab-4fdb-9e78-cb27f3551554 Type:ContainerDied Data:711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5}
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.309533 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event=&{ID:6dabd947-ddab-4fdb-9e78-cb27f3551554 Type:ContainerDied Data:e274954033b5b3194b29410871984fb0b7367e347a5b59b7f27389c2a0694918}
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.309563 2125 scope.go:115] "RemoveContainer" containerID="1348f91f45926830378fe0ce3a4ff7f09832f35cd573d1d219dbbf7458920de4"
Feb 23 15:45:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:05.310239633Z" level=info msg="Removing container: 1348f91f45926830378fe0ce3a4ff7f09832f35cd573d1d219dbbf7458920de4" id=fcaf0bb9-486e-4379-9c1c-739bec98bb75 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.310638 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dabd947-ddab-4fdb-9e78-cb27f3551554-secret-grpc-tls" (OuterVolumeSpecName: "secret-grpc-tls") pod "6dabd947-ddab-4fdb-9e78-cb27f3551554" (UID: "6dabd947-ddab-4fdb-9e78-cb27f3551554"). InnerVolumeSpecName "secret-grpc-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.310653 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dabd947-ddab-4fdb-9e78-cb27f3551554-secret-prometheus-k8s-thanos-sidecar-tls" (OuterVolumeSpecName: "secret-prometheus-k8s-thanos-sidecar-tls") pod "6dabd947-ddab-4fdb-9e78-cb27f3551554" (UID: "6dabd947-ddab-4fdb-9e78-cb27f3551554"). InnerVolumeSpecName "secret-prometheus-k8s-thanos-sidecar-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.315678 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dabd947-ddab-4fdb-9e78-cb27f3551554-web-config" (OuterVolumeSpecName: "web-config") pod "6dabd947-ddab-4fdb-9e78-cb27f3551554" (UID: "6dabd947-ddab-4fdb-9e78-cb27f3551554"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 15:45:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:05.326922399Z" level=info msg="Removed container 1348f91f45926830378fe0ce3a4ff7f09832f35cd573d1d219dbbf7458920de4: openshift-monitoring/prometheus-k8s-0/kube-rbac-proxy-thanos" id=fcaf0bb9-486e-4379-9c1c-739bec98bb75 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.327159 2125 scope.go:115] "RemoveContainer" containerID="dfc8637879dc0c499d111b5b370bef5cd5bc65e8220424b06733959c470e0489"
Feb 23 15:45:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:05.327777562Z" level=info msg="Removing container: dfc8637879dc0c499d111b5b370bef5cd5bc65e8220424b06733959c470e0489" id=73354c83-d68f-481c-8de6-2ec315ec5d35 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 15:45:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:05.344144550Z" level=info msg="Removed container dfc8637879dc0c499d111b5b370bef5cd5bc65e8220424b06733959c470e0489: openshift-monitoring/prometheus-k8s-0/kube-rbac-proxy" id=73354c83-d68f-481c-8de6-2ec315ec5d35 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.344306 2125 scope.go:115] "RemoveContainer" containerID="4fab39e9c0d935f4418358ba565378bbdf29c988c618b79f7a54fe57406468ca"
Feb 23 15:45:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:05.344928938Z" level=info msg="Removing container: 4fab39e9c0d935f4418358ba565378bbdf29c988c618b79f7a54fe57406468ca" id=13c3779e-38ed-429f-bb48-3a5161f06aa3 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 15:45:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:05.359798864Z" level=info msg="Removed container 4fab39e9c0d935f4418358ba565378bbdf29c988c618b79f7a54fe57406468ca: openshift-monitoring/prometheus-k8s-0/prometheus-proxy" id=13c3779e-38ed-429f-bb48-3a5161f06aa3 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 15:45:05 ip-10-0-136-68 
kubenswrapper[2125]: I0223 15:45:05.359967 2125 scope.go:115] "RemoveContainer" containerID="901a40d69abe3181a0069657d88694b420c1339fbc1b0c456fe6eb2cb882a957" Feb 23 15:45:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:05.360584512Z" level=info msg="Removing container: 901a40d69abe3181a0069657d88694b420c1339fbc1b0c456fe6eb2cb882a957" id=8b94bd73-e99e-4eb9-8568-e56f4eeea1ac name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 15:45:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:05.374925633Z" level=info msg="Removed container 901a40d69abe3181a0069657d88694b420c1339fbc1b0c456fe6eb2cb882a957: openshift-monitoring/prometheus-k8s-0/thanos-sidecar" id=8b94bd73-e99e-4eb9-8568-e56f4eeea1ac name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.375091 2125 scope.go:115] "RemoveContainer" containerID="c35ca486c77b849659ffb75e4ce93cbe332e848a5e72ec6385e26f8804bfab2d" Feb 23 15:45:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:05.375760030Z" level=info msg="Removing container: c35ca486c77b849659ffb75e4ce93cbe332e848a5e72ec6385e26f8804bfab2d" id=b460c8e3-69e5-4cbe-81b3-9309e5233750 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 15:45:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:05.389754048Z" level=info msg="Removed container c35ca486c77b849659ffb75e4ce93cbe332e848a5e72ec6385e26f8804bfab2d: openshift-monitoring/prometheus-k8s-0/config-reloader" id=b460c8e3-69e5-4cbe-81b3-9309e5233750 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.389936 2125 scope.go:115] "RemoveContainer" containerID="711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5" Feb 23 15:45:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:05.390643466Z" level=info msg="Removing container: 711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5" id=56052bae-ec15-4c67-9b12-e3d3691da6b9 name=/runtime.v1.RuntimeService/RemoveContainer 
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.397390 2125 reconciler.go:399] "Volume detached for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6dabd947-ddab-4fdb-9e78-cb27f3551554-prometheus-k8s-rulefiles-0\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.397418 2125 reconciler.go:399] "Volume detached for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/6dabd947-ddab-4fdb-9e78-cb27f3551554-secret-prometheus-k8s-thanos-sidecar-tls\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.397434 2125 reconciler.go:399] "Volume detached for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6dabd947-ddab-4fdb-9e78-cb27f3551554-secret-kube-rbac-proxy\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.397450 2125 reconciler.go:399] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6dabd947-ddab-4fdb-9e78-cb27f3551554-config-out\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.397464 2125 reconciler.go:399] "Volume detached for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/6dabd947-ddab-4fdb-9e78-cb27f3551554-prometheus-k8s-db\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.397478 2125 reconciler.go:399] "Volume detached for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6dabd947-ddab-4fdb-9e78-cb27f3551554-configmap-kubelet-serving-ca-bundle\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 
15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.397493 2125 reconciler.go:399] "Volume detached for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/6dabd947-ddab-4fdb-9e78-cb27f3551554-secret-grpc-tls\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.397509 2125 reconciler.go:399] "Volume detached for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6dabd947-ddab-4fdb-9e78-cb27f3551554-secret-metrics-client-certs\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.397524 2125 reconciler.go:399] "Volume detached for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6dabd947-ddab-4fdb-9e78-cb27f3551554-metrics-client-ca\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.397539 2125 reconciler.go:399] "Volume detached for volume \"kube-api-access-vl2lv\" (UniqueName: \"kubernetes.io/projected/6dabd947-ddab-4fdb-9e78-cb27f3551554-kube-api-access-vl2lv\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.397553 2125 reconciler.go:399] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6dabd947-ddab-4fdb-9e78-cb27f3551554-tls-assets\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.397570 2125 reconciler.go:399] "Volume detached for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6dabd947-ddab-4fdb-9e78-cb27f3551554-prometheus-trusted-ca-bundle\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.397584 2125 
reconciler.go:399] "Volume detached for volume \"secret-prometheus-k8s-proxy\" (UniqueName: \"kubernetes.io/secret/6dabd947-ddab-4fdb-9e78-cb27f3551554-secret-prometheus-k8s-proxy\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.397600 2125 reconciler.go:399] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/6dabd947-ddab-4fdb-9e78-cb27f3551554-config\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.397614 2125 reconciler.go:399] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6dabd947-ddab-4fdb-9e78-cb27f3551554-web-config\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.397631 2125 reconciler.go:399] "Volume detached for volume \"secret-kube-etcd-client-certs\" (UniqueName: \"kubernetes.io/secret/6dabd947-ddab-4fdb-9e78-cb27f3551554-secret-kube-etcd-client-certs\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.397646 2125 reconciler.go:399] "Volume detached for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/6dabd947-ddab-4fdb-9e78-cb27f3551554-secret-prometheus-k8s-tls\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 15:45:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:05.408437847Z" level=info msg="Removed container 711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5: openshift-monitoring/prometheus-k8s-0/prometheus" id=56052bae-ec15-4c67-9b12-e3d3691da6b9 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.408633 2125 scope.go:115] "RemoveContainer" 
containerID="57b61bc35883c39764a71a9a8c76d49d56fdffb4a02ee8be50a147d22a684adb" Feb 23 15:45:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:05.409277948Z" level=info msg="Removing container: 57b61bc35883c39764a71a9a8c76d49d56fdffb4a02ee8be50a147d22a684adb" id=60bf3972-06f5-420b-9ab5-a335c659a5c4 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 15:45:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:05.430690777Z" level=info msg="Removed container 57b61bc35883c39764a71a9a8c76d49d56fdffb4a02ee8be50a147d22a684adb: openshift-monitoring/prometheus-k8s-0/init-config-reloader" id=60bf3972-06f5-420b-9ab5-a335c659a5c4 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.430886 2125 scope.go:115] "RemoveContainer" containerID="1348f91f45926830378fe0ce3a4ff7f09832f35cd573d1d219dbbf7458920de4" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:45:05.431156 2125 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1348f91f45926830378fe0ce3a4ff7f09832f35cd573d1d219dbbf7458920de4\": container with ID starting with 1348f91f45926830378fe0ce3a4ff7f09832f35cd573d1d219dbbf7458920de4 not found: ID does not exist" containerID="1348f91f45926830378fe0ce3a4ff7f09832f35cd573d1d219dbbf7458920de4" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.431185 2125 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:1348f91f45926830378fe0ce3a4ff7f09832f35cd573d1d219dbbf7458920de4} err="failed to get container status \"1348f91f45926830378fe0ce3a4ff7f09832f35cd573d1d219dbbf7458920de4\": rpc error: code = NotFound desc = could not find container \"1348f91f45926830378fe0ce3a4ff7f09832f35cd573d1d219dbbf7458920de4\": container with ID starting with 1348f91f45926830378fe0ce3a4ff7f09832f35cd573d1d219dbbf7458920de4 not found: ID does not exist" Feb 23 15:45:05 ip-10-0-136-68 
kubenswrapper[2125]: I0223 15:45:05.431192 2125 scope.go:115] "RemoveContainer" containerID="dfc8637879dc0c499d111b5b370bef5cd5bc65e8220424b06733959c470e0489" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:45:05.431426 2125 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfc8637879dc0c499d111b5b370bef5cd5bc65e8220424b06733959c470e0489\": container with ID starting with dfc8637879dc0c499d111b5b370bef5cd5bc65e8220424b06733959c470e0489 not found: ID does not exist" containerID="dfc8637879dc0c499d111b5b370bef5cd5bc65e8220424b06733959c470e0489" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.431475 2125 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:dfc8637879dc0c499d111b5b370bef5cd5bc65e8220424b06733959c470e0489} err="failed to get container status \"dfc8637879dc0c499d111b5b370bef5cd5bc65e8220424b06733959c470e0489\": rpc error: code = NotFound desc = could not find container \"dfc8637879dc0c499d111b5b370bef5cd5bc65e8220424b06733959c470e0489\": container with ID starting with dfc8637879dc0c499d111b5b370bef5cd5bc65e8220424b06733959c470e0489 not found: ID does not exist" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.431498 2125 scope.go:115] "RemoveContainer" containerID="4fab39e9c0d935f4418358ba565378bbdf29c988c618b79f7a54fe57406468ca" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:45:05.431663 2125 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fab39e9c0d935f4418358ba565378bbdf29c988c618b79f7a54fe57406468ca\": container with ID starting with 4fab39e9c0d935f4418358ba565378bbdf29c988c618b79f7a54fe57406468ca not found: ID does not exist" containerID="4fab39e9c0d935f4418358ba565378bbdf29c988c618b79f7a54fe57406468ca" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.431688 2125 
pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:4fab39e9c0d935f4418358ba565378bbdf29c988c618b79f7a54fe57406468ca} err="failed to get container status \"4fab39e9c0d935f4418358ba565378bbdf29c988c618b79f7a54fe57406468ca\": rpc error: code = NotFound desc = could not find container \"4fab39e9c0d935f4418358ba565378bbdf29c988c618b79f7a54fe57406468ca\": container with ID starting with 4fab39e9c0d935f4418358ba565378bbdf29c988c618b79f7a54fe57406468ca not found: ID does not exist" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.431698 2125 scope.go:115] "RemoveContainer" containerID="901a40d69abe3181a0069657d88694b420c1339fbc1b0c456fe6eb2cb882a957" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:45:05.431868 2125 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"901a40d69abe3181a0069657d88694b420c1339fbc1b0c456fe6eb2cb882a957\": container with ID starting with 901a40d69abe3181a0069657d88694b420c1339fbc1b0c456fe6eb2cb882a957 not found: ID does not exist" containerID="901a40d69abe3181a0069657d88694b420c1339fbc1b0c456fe6eb2cb882a957" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.431893 2125 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:901a40d69abe3181a0069657d88694b420c1339fbc1b0c456fe6eb2cb882a957} err="failed to get container status \"901a40d69abe3181a0069657d88694b420c1339fbc1b0c456fe6eb2cb882a957\": rpc error: code = NotFound desc = could not find container \"901a40d69abe3181a0069657d88694b420c1339fbc1b0c456fe6eb2cb882a957\": container with ID starting with 901a40d69abe3181a0069657d88694b420c1339fbc1b0c456fe6eb2cb882a957 not found: ID does not exist" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.431901 2125 scope.go:115] "RemoveContainer" containerID="c35ca486c77b849659ffb75e4ce93cbe332e848a5e72ec6385e26f8804bfab2d" Feb 23 15:45:05 
ip-10-0-136-68 kubenswrapper[2125]: E0223 15:45:05.432115 2125 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c35ca486c77b849659ffb75e4ce93cbe332e848a5e72ec6385e26f8804bfab2d\": container with ID starting with c35ca486c77b849659ffb75e4ce93cbe332e848a5e72ec6385e26f8804bfab2d not found: ID does not exist" containerID="c35ca486c77b849659ffb75e4ce93cbe332e848a5e72ec6385e26f8804bfab2d" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.432132 2125 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:c35ca486c77b849659ffb75e4ce93cbe332e848a5e72ec6385e26f8804bfab2d} err="failed to get container status \"c35ca486c77b849659ffb75e4ce93cbe332e848a5e72ec6385e26f8804bfab2d\": rpc error: code = NotFound desc = could not find container \"c35ca486c77b849659ffb75e4ce93cbe332e848a5e72ec6385e26f8804bfab2d\": container with ID starting with c35ca486c77b849659ffb75e4ce93cbe332e848a5e72ec6385e26f8804bfab2d not found: ID does not exist" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.432139 2125 scope.go:115] "RemoveContainer" containerID="711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:45:05.432323 2125 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5\": container with ID starting with 711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5 not found: ID does not exist" containerID="711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.432345 2125 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5} err="failed to get 
container status \"711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5\": rpc error: code = NotFound desc = could not find container \"711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5\": container with ID starting with 711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5 not found: ID does not exist" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.432354 2125 scope.go:115] "RemoveContainer" containerID="57b61bc35883c39764a71a9a8c76d49d56fdffb4a02ee8be50a147d22a684adb" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:45:05.432523 2125 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57b61bc35883c39764a71a9a8c76d49d56fdffb4a02ee8be50a147d22a684adb\": container with ID starting with 57b61bc35883c39764a71a9a8c76d49d56fdffb4a02ee8be50a147d22a684adb not found: ID does not exist" containerID="57b61bc35883c39764a71a9a8c76d49d56fdffb4a02ee8be50a147d22a684adb" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.432550 2125 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:57b61bc35883c39764a71a9a8c76d49d56fdffb4a02ee8be50a147d22a684adb} err="failed to get container status \"57b61bc35883c39764a71a9a8c76d49d56fdffb4a02ee8be50a147d22a684adb\": rpc error: code = NotFound desc = could not find container \"57b61bc35883c39764a71a9a8c76d49d56fdffb4a02ee8be50a147d22a684adb\": container with ID starting with 57b61bc35883c39764a71a9a8c76d49d56fdffb4a02ee8be50a147d22a684adb not found: ID does not exist" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.432559 2125 scope.go:115] "RemoveContainer" containerID="1348f91f45926830378fe0ce3a4ff7f09832f35cd573d1d219dbbf7458920de4" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.432706 2125 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o 
ID:1348f91f45926830378fe0ce3a4ff7f09832f35cd573d1d219dbbf7458920de4} err="failed to get container status \"1348f91f45926830378fe0ce3a4ff7f09832f35cd573d1d219dbbf7458920de4\": rpc error: code = NotFound desc = could not find container \"1348f91f45926830378fe0ce3a4ff7f09832f35cd573d1d219dbbf7458920de4\": container with ID starting with 1348f91f45926830378fe0ce3a4ff7f09832f35cd573d1d219dbbf7458920de4 not found: ID does not exist" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.432721 2125 scope.go:115] "RemoveContainer" containerID="dfc8637879dc0c499d111b5b370bef5cd5bc65e8220424b06733959c470e0489" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.432898 2125 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:dfc8637879dc0c499d111b5b370bef5cd5bc65e8220424b06733959c470e0489} err="failed to get container status \"dfc8637879dc0c499d111b5b370bef5cd5bc65e8220424b06733959c470e0489\": rpc error: code = NotFound desc = could not find container \"dfc8637879dc0c499d111b5b370bef5cd5bc65e8220424b06733959c470e0489\": container with ID starting with dfc8637879dc0c499d111b5b370bef5cd5bc65e8220424b06733959c470e0489 not found: ID does not exist" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.432911 2125 scope.go:115] "RemoveContainer" containerID="4fab39e9c0d935f4418358ba565378bbdf29c988c618b79f7a54fe57406468ca" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.433054 2125 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:4fab39e9c0d935f4418358ba565378bbdf29c988c618b79f7a54fe57406468ca} err="failed to get container status \"4fab39e9c0d935f4418358ba565378bbdf29c988c618b79f7a54fe57406468ca\": rpc error: code = NotFound desc = could not find container \"4fab39e9c0d935f4418358ba565378bbdf29c988c618b79f7a54fe57406468ca\": container with ID starting with 4fab39e9c0d935f4418358ba565378bbdf29c988c618b79f7a54fe57406468ca not found: ID does not 
exist" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.433069 2125 scope.go:115] "RemoveContainer" containerID="901a40d69abe3181a0069657d88694b420c1339fbc1b0c456fe6eb2cb882a957" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.433206 2125 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:901a40d69abe3181a0069657d88694b420c1339fbc1b0c456fe6eb2cb882a957} err="failed to get container status \"901a40d69abe3181a0069657d88694b420c1339fbc1b0c456fe6eb2cb882a957\": rpc error: code = NotFound desc = could not find container \"901a40d69abe3181a0069657d88694b420c1339fbc1b0c456fe6eb2cb882a957\": container with ID starting with 901a40d69abe3181a0069657d88694b420c1339fbc1b0c456fe6eb2cb882a957 not found: ID does not exist" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.433224 2125 scope.go:115] "RemoveContainer" containerID="c35ca486c77b849659ffb75e4ce93cbe332e848a5e72ec6385e26f8804bfab2d" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.433419 2125 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:c35ca486c77b849659ffb75e4ce93cbe332e848a5e72ec6385e26f8804bfab2d} err="failed to get container status \"c35ca486c77b849659ffb75e4ce93cbe332e848a5e72ec6385e26f8804bfab2d\": rpc error: code = NotFound desc = could not find container \"c35ca486c77b849659ffb75e4ce93cbe332e848a5e72ec6385e26f8804bfab2d\": container with ID starting with c35ca486c77b849659ffb75e4ce93cbe332e848a5e72ec6385e26f8804bfab2d not found: ID does not exist" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.433435 2125 scope.go:115] "RemoveContainer" containerID="711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.433590 2125 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o 
ID:711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5} err="failed to get container status \"711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5\": rpc error: code = NotFound desc = could not find container \"711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5\": container with ID starting with 711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5 not found: ID does not exist" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.433604 2125 scope.go:115] "RemoveContainer" containerID="57b61bc35883c39764a71a9a8c76d49d56fdffb4a02ee8be50a147d22a684adb" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.433742 2125 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:57b61bc35883c39764a71a9a8c76d49d56fdffb4a02ee8be50a147d22a684adb} err="failed to get container status \"57b61bc35883c39764a71a9a8c76d49d56fdffb4a02ee8be50a147d22a684adb\": rpc error: code = NotFound desc = could not find container \"57b61bc35883c39764a71a9a8c76d49d56fdffb4a02ee8be50a147d22a684adb\": container with ID starting with 57b61bc35883c39764a71a9a8c76d49d56fdffb4a02ee8be50a147d22a684adb not found: ID does not exist" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.433757 2125 scope.go:115] "RemoveContainer" containerID="1348f91f45926830378fe0ce3a4ff7f09832f35cd573d1d219dbbf7458920de4" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.433886 2125 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:1348f91f45926830378fe0ce3a4ff7f09832f35cd573d1d219dbbf7458920de4} err="failed to get container status \"1348f91f45926830378fe0ce3a4ff7f09832f35cd573d1d219dbbf7458920de4\": rpc error: code = NotFound desc = could not find container \"1348f91f45926830378fe0ce3a4ff7f09832f35cd573d1d219dbbf7458920de4\": container with ID starting with 1348f91f45926830378fe0ce3a4ff7f09832f35cd573d1d219dbbf7458920de4 not found: ID does not 
exist" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.433899 2125 scope.go:115] "RemoveContainer" containerID="dfc8637879dc0c499d111b5b370bef5cd5bc65e8220424b06733959c470e0489" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.434033 2125 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:dfc8637879dc0c499d111b5b370bef5cd5bc65e8220424b06733959c470e0489} err="failed to get container status \"dfc8637879dc0c499d111b5b370bef5cd5bc65e8220424b06733959c470e0489\": rpc error: code = NotFound desc = could not find container \"dfc8637879dc0c499d111b5b370bef5cd5bc65e8220424b06733959c470e0489\": container with ID starting with dfc8637879dc0c499d111b5b370bef5cd5bc65e8220424b06733959c470e0489 not found: ID does not exist" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.434056 2125 scope.go:115] "RemoveContainer" containerID="4fab39e9c0d935f4418358ba565378bbdf29c988c618b79f7a54fe57406468ca" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.434221 2125 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:4fab39e9c0d935f4418358ba565378bbdf29c988c618b79f7a54fe57406468ca} err="failed to get container status \"4fab39e9c0d935f4418358ba565378bbdf29c988c618b79f7a54fe57406468ca\": rpc error: code = NotFound desc = could not find container \"4fab39e9c0d935f4418358ba565378bbdf29c988c618b79f7a54fe57406468ca\": container with ID starting with 4fab39e9c0d935f4418358ba565378bbdf29c988c618b79f7a54fe57406468ca not found: ID does not exist" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.434233 2125 scope.go:115] "RemoveContainer" containerID="901a40d69abe3181a0069657d88694b420c1339fbc1b0c456fe6eb2cb882a957" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.434381 2125 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o 
ID:901a40d69abe3181a0069657d88694b420c1339fbc1b0c456fe6eb2cb882a957} err="failed to get container status \"901a40d69abe3181a0069657d88694b420c1339fbc1b0c456fe6eb2cb882a957\": rpc error: code = NotFound desc = could not find container \"901a40d69abe3181a0069657d88694b420c1339fbc1b0c456fe6eb2cb882a957\": container with ID starting with 901a40d69abe3181a0069657d88694b420c1339fbc1b0c456fe6eb2cb882a957 not found: ID does not exist"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.434392 2125 scope.go:115] "RemoveContainer" containerID="c35ca486c77b849659ffb75e4ce93cbe332e848a5e72ec6385e26f8804bfab2d"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.434547 2125 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:c35ca486c77b849659ffb75e4ce93cbe332e848a5e72ec6385e26f8804bfab2d} err="failed to get container status \"c35ca486c77b849659ffb75e4ce93cbe332e848a5e72ec6385e26f8804bfab2d\": rpc error: code = NotFound desc = could not find container \"c35ca486c77b849659ffb75e4ce93cbe332e848a5e72ec6385e26f8804bfab2d\": container with ID starting with c35ca486c77b849659ffb75e4ce93cbe332e848a5e72ec6385e26f8804bfab2d not found: ID does not exist"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.434563 2125 scope.go:115] "RemoveContainer" containerID="711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.434700 2125 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5} err="failed to get container status \"711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5\": rpc error: code = NotFound desc = could not find container \"711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5\": container with ID starting with 711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5 not found: ID does not exist"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.434715 2125 scope.go:115] "RemoveContainer" containerID="57b61bc35883c39764a71a9a8c76d49d56fdffb4a02ee8be50a147d22a684adb"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.434845 2125 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:57b61bc35883c39764a71a9a8c76d49d56fdffb4a02ee8be50a147d22a684adb} err="failed to get container status \"57b61bc35883c39764a71a9a8c76d49d56fdffb4a02ee8be50a147d22a684adb\": rpc error: code = NotFound desc = could not find container \"57b61bc35883c39764a71a9a8c76d49d56fdffb4a02ee8be50a147d22a684adb\": container with ID starting with 57b61bc35883c39764a71a9a8c76d49d56fdffb4a02ee8be50a147d22a684adb not found: ID does not exist"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.434859 2125 scope.go:115] "RemoveContainer" containerID="1348f91f45926830378fe0ce3a4ff7f09832f35cd573d1d219dbbf7458920de4"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.435030 2125 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:1348f91f45926830378fe0ce3a4ff7f09832f35cd573d1d219dbbf7458920de4} err="failed to get container status \"1348f91f45926830378fe0ce3a4ff7f09832f35cd573d1d219dbbf7458920de4\": rpc error: code = NotFound desc = could not find container \"1348f91f45926830378fe0ce3a4ff7f09832f35cd573d1d219dbbf7458920de4\": container with ID starting with 1348f91f45926830378fe0ce3a4ff7f09832f35cd573d1d219dbbf7458920de4 not found: ID does not exist"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.435044 2125 scope.go:115] "RemoveContainer" containerID="dfc8637879dc0c499d111b5b370bef5cd5bc65e8220424b06733959c470e0489"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.435178 2125 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:dfc8637879dc0c499d111b5b370bef5cd5bc65e8220424b06733959c470e0489} err="failed to get container status \"dfc8637879dc0c499d111b5b370bef5cd5bc65e8220424b06733959c470e0489\": rpc error: code = NotFound desc = could not find container \"dfc8637879dc0c499d111b5b370bef5cd5bc65e8220424b06733959c470e0489\": container with ID starting with dfc8637879dc0c499d111b5b370bef5cd5bc65e8220424b06733959c470e0489 not found: ID does not exist"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.435192 2125 scope.go:115] "RemoveContainer" containerID="4fab39e9c0d935f4418358ba565378bbdf29c988c618b79f7a54fe57406468ca"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.435345 2125 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:4fab39e9c0d935f4418358ba565378bbdf29c988c618b79f7a54fe57406468ca} err="failed to get container status \"4fab39e9c0d935f4418358ba565378bbdf29c988c618b79f7a54fe57406468ca\": rpc error: code = NotFound desc = could not find container \"4fab39e9c0d935f4418358ba565378bbdf29c988c618b79f7a54fe57406468ca\": container with ID starting with 4fab39e9c0d935f4418358ba565378bbdf29c988c618b79f7a54fe57406468ca not found: ID does not exist"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.435359 2125 scope.go:115] "RemoveContainer" containerID="901a40d69abe3181a0069657d88694b420c1339fbc1b0c456fe6eb2cb882a957"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.435527 2125 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:901a40d69abe3181a0069657d88694b420c1339fbc1b0c456fe6eb2cb882a957} err="failed to get container status \"901a40d69abe3181a0069657d88694b420c1339fbc1b0c456fe6eb2cb882a957\": rpc error: code = NotFound desc = could not find container \"901a40d69abe3181a0069657d88694b420c1339fbc1b0c456fe6eb2cb882a957\": container with ID starting with 901a40d69abe3181a0069657d88694b420c1339fbc1b0c456fe6eb2cb882a957 not found: ID does not exist"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.435542 2125 scope.go:115] "RemoveContainer" containerID="c35ca486c77b849659ffb75e4ce93cbe332e848a5e72ec6385e26f8804bfab2d"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.435714 2125 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:c35ca486c77b849659ffb75e4ce93cbe332e848a5e72ec6385e26f8804bfab2d} err="failed to get container status \"c35ca486c77b849659ffb75e4ce93cbe332e848a5e72ec6385e26f8804bfab2d\": rpc error: code = NotFound desc = could not find container \"c35ca486c77b849659ffb75e4ce93cbe332e848a5e72ec6385e26f8804bfab2d\": container with ID starting with c35ca486c77b849659ffb75e4ce93cbe332e848a5e72ec6385e26f8804bfab2d not found: ID does not exist"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.435729 2125 scope.go:115] "RemoveContainer" containerID="711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.435859 2125 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5} err="failed to get container status \"711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5\": rpc error: code = NotFound desc = could not find container \"711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5\": container with ID starting with 711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5 not found: ID does not exist"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.435877 2125 scope.go:115] "RemoveContainer" containerID="57b61bc35883c39764a71a9a8c76d49d56fdffb4a02ee8be50a147d22a684adb"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.436004 2125 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:57b61bc35883c39764a71a9a8c76d49d56fdffb4a02ee8be50a147d22a684adb} err="failed to get container status \"57b61bc35883c39764a71a9a8c76d49d56fdffb4a02ee8be50a147d22a684adb\": rpc error: code = NotFound desc = could not find container \"57b61bc35883c39764a71a9a8c76d49d56fdffb4a02ee8be50a147d22a684adb\": container with ID starting with 57b61bc35883c39764a71a9a8c76d49d56fdffb4a02ee8be50a147d22a684adb not found: ID does not exist"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.436015 2125 scope.go:115] "RemoveContainer" containerID="1348f91f45926830378fe0ce3a4ff7f09832f35cd573d1d219dbbf7458920de4"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.436158 2125 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:1348f91f45926830378fe0ce3a4ff7f09832f35cd573d1d219dbbf7458920de4} err="failed to get container status \"1348f91f45926830378fe0ce3a4ff7f09832f35cd573d1d219dbbf7458920de4\": rpc error: code = NotFound desc = could not find container \"1348f91f45926830378fe0ce3a4ff7f09832f35cd573d1d219dbbf7458920de4\": container with ID starting with 1348f91f45926830378fe0ce3a4ff7f09832f35cd573d1d219dbbf7458920de4 not found: ID does not exist"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.436172 2125 scope.go:115] "RemoveContainer" containerID="dfc8637879dc0c499d111b5b370bef5cd5bc65e8220424b06733959c470e0489"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.436338 2125 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:dfc8637879dc0c499d111b5b370bef5cd5bc65e8220424b06733959c470e0489} err="failed to get container status \"dfc8637879dc0c499d111b5b370bef5cd5bc65e8220424b06733959c470e0489\": rpc error: code = NotFound desc = could not find container \"dfc8637879dc0c499d111b5b370bef5cd5bc65e8220424b06733959c470e0489\": container with ID starting with dfc8637879dc0c499d111b5b370bef5cd5bc65e8220424b06733959c470e0489 not found: ID does not exist"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.436350 2125 scope.go:115] "RemoveContainer" containerID="4fab39e9c0d935f4418358ba565378bbdf29c988c618b79f7a54fe57406468ca"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.436508 2125 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:4fab39e9c0d935f4418358ba565378bbdf29c988c618b79f7a54fe57406468ca} err="failed to get container status \"4fab39e9c0d935f4418358ba565378bbdf29c988c618b79f7a54fe57406468ca\": rpc error: code = NotFound desc = could not find container \"4fab39e9c0d935f4418358ba565378bbdf29c988c618b79f7a54fe57406468ca\": container with ID starting with 4fab39e9c0d935f4418358ba565378bbdf29c988c618b79f7a54fe57406468ca not found: ID does not exist"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.436519 2125 scope.go:115] "RemoveContainer" containerID="901a40d69abe3181a0069657d88694b420c1339fbc1b0c456fe6eb2cb882a957"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.436680 2125 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:901a40d69abe3181a0069657d88694b420c1339fbc1b0c456fe6eb2cb882a957} err="failed to get container status \"901a40d69abe3181a0069657d88694b420c1339fbc1b0c456fe6eb2cb882a957\": rpc error: code = NotFound desc = could not find container \"901a40d69abe3181a0069657d88694b420c1339fbc1b0c456fe6eb2cb882a957\": container with ID starting with 901a40d69abe3181a0069657d88694b420c1339fbc1b0c456fe6eb2cb882a957 not found: ID does not exist"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.436695 2125 scope.go:115] "RemoveContainer" containerID="c35ca486c77b849659ffb75e4ce93cbe332e848a5e72ec6385e26f8804bfab2d"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.436861 2125 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:c35ca486c77b849659ffb75e4ce93cbe332e848a5e72ec6385e26f8804bfab2d} err="failed to get container status \"c35ca486c77b849659ffb75e4ce93cbe332e848a5e72ec6385e26f8804bfab2d\": rpc error: code = NotFound desc = could not find container \"c35ca486c77b849659ffb75e4ce93cbe332e848a5e72ec6385e26f8804bfab2d\": container with ID starting with c35ca486c77b849659ffb75e4ce93cbe332e848a5e72ec6385e26f8804bfab2d not found: ID does not exist"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.436875 2125 scope.go:115] "RemoveContainer" containerID="711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.437050 2125 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5} err="failed to get container status \"711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5\": rpc error: code = NotFound desc = could not find container \"711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5\": container with ID starting with 711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5 not found: ID does not exist"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.437064 2125 scope.go:115] "RemoveContainer" containerID="57b61bc35883c39764a71a9a8c76d49d56fdffb4a02ee8be50a147d22a684adb"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.437234 2125 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:57b61bc35883c39764a71a9a8c76d49d56fdffb4a02ee8be50a147d22a684adb} err="failed to get container status \"57b61bc35883c39764a71a9a8c76d49d56fdffb4a02ee8be50a147d22a684adb\": rpc error: code = NotFound desc = could not find container \"57b61bc35883c39764a71a9a8c76d49d56fdffb4a02ee8be50a147d22a684adb\": container with ID starting with 57b61bc35883c39764a71a9a8c76d49d56fdffb4a02ee8be50a147d22a684adb not found: ID does not exist"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.437246 2125 scope.go:115] "RemoveContainer" containerID="1348f91f45926830378fe0ce3a4ff7f09832f35cd573d1d219dbbf7458920de4"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.437406 2125 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:1348f91f45926830378fe0ce3a4ff7f09832f35cd573d1d219dbbf7458920de4} err="failed to get container status \"1348f91f45926830378fe0ce3a4ff7f09832f35cd573d1d219dbbf7458920de4\": rpc error: code = NotFound desc = could not find container \"1348f91f45926830378fe0ce3a4ff7f09832f35cd573d1d219dbbf7458920de4\": container with ID starting with 1348f91f45926830378fe0ce3a4ff7f09832f35cd573d1d219dbbf7458920de4 not found: ID does not exist"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.437423 2125 scope.go:115] "RemoveContainer" containerID="dfc8637879dc0c499d111b5b370bef5cd5bc65e8220424b06733959c470e0489"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.437577 2125 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:dfc8637879dc0c499d111b5b370bef5cd5bc65e8220424b06733959c470e0489} err="failed to get container status \"dfc8637879dc0c499d111b5b370bef5cd5bc65e8220424b06733959c470e0489\": rpc error: code = NotFound desc = could not find container \"dfc8637879dc0c499d111b5b370bef5cd5bc65e8220424b06733959c470e0489\": container with ID starting with dfc8637879dc0c499d111b5b370bef5cd5bc65e8220424b06733959c470e0489 not found: ID does not exist"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.437600 2125 scope.go:115] "RemoveContainer" containerID="4fab39e9c0d935f4418358ba565378bbdf29c988c618b79f7a54fe57406468ca"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.437732 2125 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:4fab39e9c0d935f4418358ba565378bbdf29c988c618b79f7a54fe57406468ca} err="failed to get container status \"4fab39e9c0d935f4418358ba565378bbdf29c988c618b79f7a54fe57406468ca\": rpc error: code = NotFound desc = could not find container \"4fab39e9c0d935f4418358ba565378bbdf29c988c618b79f7a54fe57406468ca\": container with ID starting with 4fab39e9c0d935f4418358ba565378bbdf29c988c618b79f7a54fe57406468ca not found: ID does not exist"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.437745 2125 scope.go:115] "RemoveContainer" containerID="901a40d69abe3181a0069657d88694b420c1339fbc1b0c456fe6eb2cb882a957"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.437902 2125 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:901a40d69abe3181a0069657d88694b420c1339fbc1b0c456fe6eb2cb882a957} err="failed to get container status \"901a40d69abe3181a0069657d88694b420c1339fbc1b0c456fe6eb2cb882a957\": rpc error: code = NotFound desc = could not find container \"901a40d69abe3181a0069657d88694b420c1339fbc1b0c456fe6eb2cb882a957\": container with ID starting with 901a40d69abe3181a0069657d88694b420c1339fbc1b0c456fe6eb2cb882a957 not found: ID does not exist"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.437913 2125 scope.go:115] "RemoveContainer" containerID="c35ca486c77b849659ffb75e4ce93cbe332e848a5e72ec6385e26f8804bfab2d"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.438063 2125 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:c35ca486c77b849659ffb75e4ce93cbe332e848a5e72ec6385e26f8804bfab2d} err="failed to get container status \"c35ca486c77b849659ffb75e4ce93cbe332e848a5e72ec6385e26f8804bfab2d\": rpc error: code = NotFound desc = could not find container \"c35ca486c77b849659ffb75e4ce93cbe332e848a5e72ec6385e26f8804bfab2d\": container with ID starting with c35ca486c77b849659ffb75e4ce93cbe332e848a5e72ec6385e26f8804bfab2d not found: ID does not exist"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.438078 2125 scope.go:115] "RemoveContainer" containerID="711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.438235 2125 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5} err="failed to get container status \"711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5\": rpc error: code = NotFound desc = could not find container \"711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5\": container with ID starting with 711c6ba0f322af83f3c889531dfb14bf8d039b67cf895f7acfaecb2bfba5b0f5 not found: ID does not exist"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.438250 2125 scope.go:115] "RemoveContainer" containerID="57b61bc35883c39764a71a9a8c76d49d56fdffb4a02ee8be50a147d22a684adb"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.438397 2125 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:57b61bc35883c39764a71a9a8c76d49d56fdffb4a02ee8be50a147d22a684adb} err="failed to get container status \"57b61bc35883c39764a71a9a8c76d49d56fdffb4a02ee8be50a147d22a684adb\": rpc error: code = NotFound desc = could not find container \"57b61bc35883c39764a71a9a8c76d49d56fdffb4a02ee8be50a147d22a684adb\": container with ID starting with 57b61bc35883c39764a71a9a8c76d49d56fdffb4a02ee8be50a147d22a684adb not found: ID does not exist"
Feb 23 15:45:05 ip-10-0-136-68 systemd[1]: Removed slice libcontainer container kubepods-burstable-pod6dabd947_ddab_4fdb_9e78_cb27f3551554.slice.
Feb 23 15:45:05 ip-10-0-136-68 systemd[1]: kubepods-burstable-pod6dabd947_ddab_4fdb_9e78_cb27f3551554.slice: Consumed 9.113s CPU time
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.631572 2125 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-monitoring/prometheus-k8s-0]
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.640202 2125 kubelet.go:2129] "SyncLoop REMOVE" source="api" pods=[openshift-monitoring/prometheus-k8s-0]
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.689637 2125 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-monitoring/prometheus-k8s-0]
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.689686 2125 topology_manager.go:205] "Topology Admit Handler"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:45:05.689753 2125 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6dabd947-ddab-4fdb-9e78-cb27f3551554" containerName="prometheus-proxy"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.689762 2125 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dabd947-ddab-4fdb-9e78-cb27f3551554" containerName="prometheus-proxy"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:45:05.689776 2125 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6dabd947-ddab-4fdb-9e78-cb27f3551554" containerName="kube-rbac-proxy-thanos"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.689782 2125 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dabd947-ddab-4fdb-9e78-cb27f3551554" containerName="kube-rbac-proxy-thanos"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:45:05.689791 2125 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6dabd947-ddab-4fdb-9e78-cb27f3551554" containerName="config-reloader"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.689807 2125 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dabd947-ddab-4fdb-9e78-cb27f3551554" containerName="config-reloader"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:45:05.689814 2125 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6dabd947-ddab-4fdb-9e78-cb27f3551554" containerName="kube-rbac-proxy"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.689819 2125 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dabd947-ddab-4fdb-9e78-cb27f3551554" containerName="kube-rbac-proxy"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:45:05.689828 2125 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6dabd947-ddab-4fdb-9e78-cb27f3551554" containerName="thanos-sidecar"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.689835 2125 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dabd947-ddab-4fdb-9e78-cb27f3551554" containerName="thanos-sidecar"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:45:05.689846 2125 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6dabd947-ddab-4fdb-9e78-cb27f3551554" containerName="init-config-reloader"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.689852 2125 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dabd947-ddab-4fdb-9e78-cb27f3551554" containerName="init-config-reloader"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:45:05.689859 2125 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6dabd947-ddab-4fdb-9e78-cb27f3551554" containerName="prometheus"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.689866 2125 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dabd947-ddab-4fdb-9e78-cb27f3551554" containerName="prometheus"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.689911 2125 memory_manager.go:345] "RemoveStaleState removing state" podUID="6dabd947-ddab-4fdb-9e78-cb27f3551554" containerName="thanos-sidecar"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.689921 2125 memory_manager.go:345] "RemoveStaleState removing state" podUID="6dabd947-ddab-4fdb-9e78-cb27f3551554" containerName="kube-rbac-proxy"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.689929 2125 memory_manager.go:345] "RemoveStaleState removing state" podUID="6dabd947-ddab-4fdb-9e78-cb27f3551554" containerName="prometheus-proxy"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.689938 2125 memory_manager.go:345] "RemoveStaleState removing state" podUID="6dabd947-ddab-4fdb-9e78-cb27f3551554" containerName="config-reloader"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.689947 2125 memory_manager.go:345] "RemoveStaleState removing state" podUID="6dabd947-ddab-4fdb-9e78-cb27f3551554" containerName="kube-rbac-proxy-thanos"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.689955 2125 memory_manager.go:345] "RemoveStaleState removing state" podUID="6dabd947-ddab-4fdb-9e78-cb27f3551554" containerName="prometheus"
Feb 23 15:45:05 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-pod44953449_e4f6_497a_b6bf_73fbdc9381b7.slice.
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.724591 2125 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-monitoring/prometheus-k8s-0]
Feb 23 15:45:05 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-4fe37a564235ddc1e81bac88e8fa88b56ef748e11b06363fa87b70e5fd9b4a00-merged.mount: Succeeded.
Feb 23 15:45:05 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-6dabd947\x2dddab\x2d4fdb\x2d9e78\x2dcb27f3551554-volume\x2dsubpaths-web\x2dconfig-prometheus-5.mount: Succeeded.
Feb 23 15:45:05 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-96de0024d6ea01ddf2021c942b3072f8b876b4c12ccb6cc770e3b4af096fff08-merged.mount: Succeeded.
Feb 23 15:45:05 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-d41988b8c6137e3e50388cde291abb7678217a2d1343ce6595c0c9b145a51b9a-merged.mount: Succeeded.
Feb 23 15:45:05 ip-10-0-136-68 systemd[1]: run-netns-f75b780b\x2d87af\x2d4e85\x2db9a2\x2d72004867fd14.mount: Succeeded.
Feb 23 15:45:05 ip-10-0-136-68 systemd[1]: run-ipcns-f75b780b\x2d87af\x2d4e85\x2db9a2\x2d72004867fd14.mount: Succeeded.
Feb 23 15:45:05 ip-10-0-136-68 systemd[1]: run-utsns-f75b780b\x2d87af\x2d4e85\x2db9a2\x2d72004867fd14.mount: Succeeded.
Feb 23 15:45:05 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-e274954033b5b3194b29410871984fb0b7367e347a5b59b7f27389c2a0694918-userdata-shm.mount: Succeeded.
Feb 23 15:45:05 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-6dabd947\x2dddab\x2d4fdb\x2d9e78\x2dcb27f3551554-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvl2lv.mount: Succeeded.
Feb 23 15:45:05 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-6dabd947\x2dddab\x2d4fdb\x2d9e78\x2dcb27f3551554-volumes-kubernetes.io\x7esecret-secret\x2dprometheus\x2dk8s\x2dtls.mount: Succeeded.
Feb 23 15:45:05 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-6dabd947\x2dddab\x2d4fdb\x2d9e78\x2dcb27f3551554-volumes-kubernetes.io\x7eprojected-tls\x2dassets.mount: Succeeded.
Feb 23 15:45:05 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-6dabd947\x2dddab\x2d4fdb\x2d9e78\x2dcb27f3551554-volumes-kubernetes.io\x7esecret-config.mount: Succeeded.
Feb 23 15:45:05 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-6dabd947\x2dddab\x2d4fdb\x2d9e78\x2dcb27f3551554-volumes-kubernetes.io\x7esecret-secret\x2dprometheus\x2dk8s\x2dthanos\x2dsidecar\x2dtls.mount: Succeeded.
Feb 23 15:45:05 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-6dabd947\x2dddab\x2d4fdb\x2d9e78\x2dcb27f3551554-volumes-kubernetes.io\x7esecret-secret\x2dkube\x2drbac\x2dproxy.mount: Succeeded.
Feb 23 15:45:05 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-6dabd947\x2dddab\x2d4fdb\x2d9e78\x2dcb27f3551554-volumes-kubernetes.io\x7esecret-web\x2dconfig.mount: Succeeded.
Feb 23 15:45:05 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-6dabd947\x2dddab\x2d4fdb\x2d9e78\x2dcb27f3551554-volumes-kubernetes.io\x7esecret-secret\x2dprometheus\x2dk8s\x2dproxy.mount: Succeeded.
Feb 23 15:45:05 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-6dabd947\x2dddab\x2d4fdb\x2d9e78\x2dcb27f3551554-volumes-kubernetes.io\x7esecret-secret\x2dmetrics\x2dclient\x2dcerts.mount: Succeeded.
Feb 23 15:45:05 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-6dabd947\x2dddab\x2d4fdb\x2d9e78\x2dcb27f3551554-volumes-kubernetes.io\x7esecret-secret\x2dkube\x2detcd\x2dclient\x2dcerts.mount: Succeeded.
Feb 23 15:45:05 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-6dabd947\x2dddab\x2d4fdb\x2d9e78\x2dcb27f3551554-volumes-kubernetes.io\x7esecret-secret\x2dgrpc\x2dtls.mount: Succeeded.
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.799953 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/44953449-e4f6-497a-b6bf-73fbdc9381b7-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.800002 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/44953449-e4f6-497a-b6bf-73fbdc9381b7-config\") pod \"prometheus-k8s-0\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.800027 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/44953449-e4f6-497a-b6bf-73fbdc9381b7-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.800184 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/44953449-e4f6-497a-b6bf-73fbdc9381b7-config-out\") pod \"prometheus-k8s-0\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.800220 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/44953449-e4f6-497a-b6bf-73fbdc9381b7-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.800256 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44953449-e4f6-497a-b6bf-73fbdc9381b7-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.800280 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/44953449-e4f6-497a-b6bf-73fbdc9381b7-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.800365 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/44953449-e4f6-497a-b6bf-73fbdc9381b7-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.800393 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-proxy\" (UniqueName: \"kubernetes.io/secret/44953449-e4f6-497a-b6bf-73fbdc9381b7-secret-prometheus-k8s-proxy\") pod \"prometheus-k8s-0\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.800410 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/44953449-e4f6-497a-b6bf-73fbdc9381b7-web-config\") pod \"prometheus-k8s-0\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.800437 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtqf6\" (UniqueName: \"kubernetes.io/projected/44953449-e4f6-497a-b6bf-73fbdc9381b7-kube-api-access-rtqf6\") pod \"prometheus-k8s-0\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.800482 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-etcd-client-certs\" (UniqueName: \"kubernetes.io/secret/44953449-e4f6-497a-b6bf-73fbdc9381b7-secret-kube-etcd-client-certs\") pod \"prometheus-k8s-0\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.800540 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44953449-e4f6-497a-b6bf-73fbdc9381b7-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.800588 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/44953449-e4f6-497a-b6bf-73fbdc9381b7-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.800628 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/44953449-e4f6-497a-b6bf-73fbdc9381b7-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.800672 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/44953449-e4f6-497a-b6bf-73fbdc9381b7-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.800715 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/44953449-e4f6-497a-b6bf-73fbdc9381b7-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.800770 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/44953449-e4f6-497a-b6bf-73fbdc9381b7-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.800790 2125 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44953449-e4f6-497a-b6bf-73fbdc9381b7-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.901274 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/44953449-e4f6-497a-b6bf-73fbdc9381b7-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.901349 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/44953449-e4f6-497a-b6bf-73fbdc9381b7-config\") pod \"prometheus-k8s-0\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.901377 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/44953449-e4f6-497a-b6bf-73fbdc9381b7-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.901404 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/44953449-e4f6-497a-b6bf-73fbdc9381b7-config-out\") pod \"prometheus-k8s-0\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.901429 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/44953449-e4f6-497a-b6bf-73fbdc9381b7-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.901459 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44953449-e4f6-497a-b6bf-73fbdc9381b7-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.901482 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/44953449-e4f6-497a-b6bf-73fbdc9381b7-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.901513 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/44953449-e4f6-497a-b6bf-73fbdc9381b7-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.901541 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-proxy\" (UniqueName: \"kubernetes.io/secret/44953449-e4f6-497a-b6bf-73fbdc9381b7-secret-prometheus-k8s-proxy\") pod \"prometheus-k8s-0\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.901565 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/44953449-e4f6-497a-b6bf-73fbdc9381b7-web-config\") pod \"prometheus-k8s-0\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.901593 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-rtqf6\" (UniqueName: \"kubernetes.io/projected/44953449-e4f6-497a-b6bf-73fbdc9381b7-kube-api-access-rtqf6\") pod \"prometheus-k8s-0\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.901618 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-kube-etcd-client-certs\" (UniqueName: \"kubernetes.io/secret/44953449-e4f6-497a-b6bf-73fbdc9381b7-secret-kube-etcd-client-certs\") pod \"prometheus-k8s-0\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.901649 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44953449-e4f6-497a-b6bf-73fbdc9381b7-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") "
pod="openshift-monitoring/prometheus-k8s-0" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.901675 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/44953449-e4f6-497a-b6bf-73fbdc9381b7-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.901699 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/44953449-e4f6-497a-b6bf-73fbdc9381b7-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.901726 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/44953449-e4f6-497a-b6bf-73fbdc9381b7-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.901752 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/44953449-e4f6-497a-b6bf-73fbdc9381b7-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.901777 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/44953449-e4f6-497a-b6bf-73fbdc9381b7-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " 
pod="openshift-monitoring/prometheus-k8s-0" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.901816 2125 reconciler.go:269] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44953449-e4f6-497a-b6bf-73fbdc9381b7-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.902388 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44953449-e4f6-497a-b6bf-73fbdc9381b7-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.902428 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/44953449-e4f6-497a-b6bf-73fbdc9381b7-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.902639 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/44953449-e4f6-497a-b6bf-73fbdc9381b7-config-out\") pod \"prometheus-k8s-0\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.903010 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44953449-e4f6-497a-b6bf-73fbdc9381b7-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: 
\"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.903199 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/44953449-e4f6-497a-b6bf-73fbdc9381b7-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.903358 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/44953449-e4f6-497a-b6bf-73fbdc9381b7-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.903585 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/44953449-e4f6-497a-b6bf-73fbdc9381b7-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.903751 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44953449-e4f6-497a-b6bf-73fbdc9381b7-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.905906 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/44953449-e4f6-497a-b6bf-73fbdc9381b7-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: 
\"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.906659 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/44953449-e4f6-497a-b6bf-73fbdc9381b7-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.907120 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/44953449-e4f6-497a-b6bf-73fbdc9381b7-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.907529 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/44953449-e4f6-497a-b6bf-73fbdc9381b7-config\") pod \"prometheus-k8s-0\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.907847 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/44953449-e4f6-497a-b6bf-73fbdc9381b7-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.908214 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-kube-etcd-client-certs\" (UniqueName: \"kubernetes.io/secret/44953449-e4f6-497a-b6bf-73fbdc9381b7-secret-kube-etcd-client-certs\") pod \"prometheus-k8s-0\" (UID: 
\"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.908845 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/44953449-e4f6-497a-b6bf-73fbdc9381b7-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.909643 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/44953449-e4f6-497a-b6bf-73fbdc9381b7-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.909968 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-proxy\" (UniqueName: \"kubernetes.io/secret/44953449-e4f6-497a-b6bf-73fbdc9381b7-secret-prometheus-k8s-proxy\") pod \"prometheus-k8s-0\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.910611 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/44953449-e4f6-497a-b6bf-73fbdc9381b7-web-config\") pod \"prometheus-k8s-0\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 23 15:45:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:05.919327 2125 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtqf6\" (UniqueName: \"kubernetes.io/projected/44953449-e4f6-497a-b6bf-73fbdc9381b7-kube-api-access-rtqf6\") pod \"prometheus-k8s-0\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") " 
pod="openshift-monitoring/prometheus-k8s-0" Feb 23 15:45:06 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:06.003492 2125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 23 15:45:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:06.004016183Z" level=info msg="Running pod sandbox: openshift-monitoring/prometheus-k8s-0/POD" id=8329cea7-2b52-4b65-a0d2-e6ee0f6f079c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 15:45:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:06.004065450Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 15:45:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:06.021051264Z" level=info msg="Got pod network &{Name:prometheus-k8s-0 Namespace:openshift-monitoring ID:7d9bb22d3d6b32a0be618472ab15ad75f3da5566db899185fe66a17d6fe48fea UID:44953449-e4f6-497a-b6bf-73fbdc9381b7 NetNS:/var/run/netns/2c559b52-3d31-49d8-80de-874e670a2653 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 15:45:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:06.021087941Z" level=info msg="Adding pod openshift-monitoring_prometheus-k8s-0 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 15:45:06 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): 7d9bb22d3d6b32a: link is not ready Feb 23 15:45:06 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready Feb 23 15:45:06 ip-10-0-136-68 systemd-udevd[8256]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. 
Feb 23 15:45:06 ip-10-0-136-68 systemd-udevd[8256]: Could not generate persistent MAC address for 7d9bb22d3d6b32a: No such file or directory
Feb 23 15:45:06 ip-10-0-136-68 NetworkManager[1149]: [1677167106.1461] device (7d9bb22d3d6b32a): carrier: link connected
Feb 23 15:45:06 ip-10-0-136-68 NetworkManager[1149]: [1677167106.1464] manager: (7d9bb22d3d6b32a): new Veth device (/org/freedesktop/NetworkManager/Devices/42)
Feb 23 15:45:06 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 23 15:45:06 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): 7d9bb22d3d6b32a: link becomes ready
Feb 23 15:45:06 ip-10-0-136-68 kernel: device 7d9bb22d3d6b32a entered promiscuous mode
Feb 23 15:45:06 ip-10-0-136-68 NetworkManager[1149]: [1677167106.1655] manager: (7d9bb22d3d6b32a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Feb 23 15:45:06 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00108|bridge|INFO|bridge br-int: added interface 7d9bb22d3d6b32a on port 18
Feb 23 15:45:06 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:06.227796 2125 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-monitoring/prometheus-k8s-0]
Feb 23 15:45:06 ip-10-0-136-68 crio[2086]: I0223 15:45:06.142950 8246 ovs.go:90] Maximum command line arguments set to: 191102
Feb 23 15:45:06 ip-10-0-136-68 crio[2086]: 2023-02-23T15:45:06Z [verbose] Add: openshift-monitoring:prometheus-k8s-0:44953449-e4f6-497a-b6bf-73fbdc9381b7:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"7d9bb22d3d6b32a","mac":"da:1e:c8:ba:ed:91"},{"name":"eth0","mac":"0a:58:0a:81:02:0d","sandbox":"/var/run/netns/2c559b52-3d31-49d8-80de-874e670a2653"}],"ips":[{"version":"4","interface":1,"address":"10.129.2.13/23","gateway":"10.129.2.1"}],"dns":{}}
Feb 23 15:45:06 ip-10-0-136-68 crio[2086]: I0223 15:45:06.200914 8239 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-monitoring", Name:"prometheus-k8s-0", UID:"44953449-e4f6-497a-b6bf-73fbdc9381b7", APIVersion:"v1", ResourceVersion:"25378", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.129.2.13/23] from ovn-kubernetes
Feb 23 15:45:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:06.229466783Z" level=info msg="Got pod network &{Name:prometheus-k8s-0 Namespace:openshift-monitoring ID:7d9bb22d3d6b32a0be618472ab15ad75f3da5566db899185fe66a17d6fe48fea UID:44953449-e4f6-497a-b6bf-73fbdc9381b7 NetNS:/var/run/netns/2c559b52-3d31-49d8-80de-874e670a2653 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 15:45:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:06.229581337Z" level=info msg="Checking pod openshift-monitoring_prometheus-k8s-0 for CNI network multus-cni-network (type=multus)"
Feb 23 15:45:06 ip-10-0-136-68 kubenswrapper[2125]: W0223 15:45:06.231135 2125 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44953449_e4f6_497a_b6bf_73fbdc9381b7.slice/crio-7d9bb22d3d6b32a0be618472ab15ad75f3da5566db899185fe66a17d6fe48fea.scope WatchSource:0}: Error finding container 7d9bb22d3d6b32a0be618472ab15ad75f3da5566db899185fe66a17d6fe48fea: Status 404 returned error can't find the container with id 7d9bb22d3d6b32a0be618472ab15ad75f3da5566db899185fe66a17d6fe48fea
Feb 23 15:45:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:06.232433329Z" level=info msg="Ran pod sandbox 7d9bb22d3d6b32a0be618472ab15ad75f3da5566db899185fe66a17d6fe48fea with infra container: openshift-monitoring/prometheus-k8s-0/POD" id=8329cea7-2b52-4b65-a0d2-e6ee0f6f079c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 15:45:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:06.233225706Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:54d3aea3fb46824c7af49ff90c30c44c5409b64745f649c1e19f43c58dab6008" id=dc2acef0-2961-4ed2-8c17-6e171f48e4d9 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:45:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:06.233408389Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ba9942c6b8556723b2ced172e5fb9c827a5aacee27aa66e7952a42d2b636240d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:54d3aea3fb46824c7af49ff90c30c44c5409b64745f649c1e19f43c58dab6008],Size_:359319908,Uid:&Int64Value{Value:1001,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=dc2acef0-2961-4ed2-8c17-6e171f48e4d9 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:45:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:06.234457207Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:54d3aea3fb46824c7af49ff90c30c44c5409b64745f649c1e19f43c58dab6008" id=f9033d46-f79e-4b1f-82a1-bb4edb47df10 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:45:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:06.234616045Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ba9942c6b8556723b2ced172e5fb9c827a5aacee27aa66e7952a42d2b636240d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:54d3aea3fb46824c7af49ff90c30c44c5409b64745f649c1e19f43c58dab6008],Size_:359319908,Uid:&Int64Value{Value:1001,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=f9033d46-f79e-4b1f-82a1-bb4edb47df10 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:45:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:06.235340011Z" level=info msg="Creating container: openshift-monitoring/prometheus-k8s-0/init-config-reloader" id=9fac5450-4a4a-4044-92ad-250536b71d0a name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:45:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:06.235432293Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 15:45:06 ip-10-0-136-68 systemd[1]: Started crio-conmon-2bbfa625be42b3c089448f057ecd3391b8f4b834b3e68c9dade0c47820619485.scope.
Feb 23 15:45:06 ip-10-0-136-68 systemd[1]: Started libcontainer container 2bbfa625be42b3c089448f057ecd3391b8f4b834b3e68c9dade0c47820619485.
Feb 23 15:45:06 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:06.312788 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event=&{ID:44953449-e4f6-497a-b6bf-73fbdc9381b7 Type:ContainerStarted Data:7d9bb22d3d6b32a0be618472ab15ad75f3da5566db899185fe66a17d6fe48fea}
Feb 23 15:45:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:06.323384475Z" level=info msg="Created container 2bbfa625be42b3c089448f057ecd3391b8f4b834b3e68c9dade0c47820619485: openshift-monitoring/prometheus-k8s-0/init-config-reloader" id=9fac5450-4a4a-4044-92ad-250536b71d0a name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:45:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:06.323753796Z" level=info msg="Starting container: 2bbfa625be42b3c089448f057ecd3391b8f4b834b3e68c9dade0c47820619485" id=e04dd69a-072f-4c81-b0f8-9f49e5fe58e8 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 15:45:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:06.331572783Z" level=info msg="Started container" PID=8287 containerID=2bbfa625be42b3c089448f057ecd3391b8f4b834b3e68c9dade0c47820619485 description=openshift-monitoring/prometheus-k8s-0/init-config-reloader id=e04dd69a-072f-4c81-b0f8-9f49e5fe58e8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7d9bb22d3d6b32a0be618472ab15ad75f3da5566db899185fe66a17d6fe48fea
Feb 23 15:45:06 ip-10-0-136-68 systemd[1]: crio-2bbfa625be42b3c089448f057ecd3391b8f4b834b3e68c9dade0c47820619485.scope: Succeeded.
Feb 23 15:45:06 ip-10-0-136-68 systemd[1]: crio-2bbfa625be42b3c089448f057ecd3391b8f4b834b3e68c9dade0c47820619485.scope: Consumed 24ms CPU time
Feb 23 15:45:06 ip-10-0-136-68 systemd[1]: crio-conmon-2bbfa625be42b3c089448f057ecd3391b8f4b834b3e68c9dade0c47820619485.scope: Succeeded.
Feb 23 15:45:06 ip-10-0-136-68 systemd[1]: crio-conmon-2bbfa625be42b3c089448f057ecd3391b8f4b834b3e68c9dade0c47820619485.scope: Consumed 19ms CPU time
Feb 23 15:45:06 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:06.398350 2125 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=6dabd947-ddab-4fdb-9e78-cb27f3551554 path="/var/lib/kubelet/pods/6dabd947-ddab-4fdb-9e78-cb27f3551554/volumes"
Feb 23 15:45:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:07.316152 2125 generic.go:296] "Generic (PLEG): container finished" podID=44953449-e4f6-497a-b6bf-73fbdc9381b7 containerID="2bbfa625be42b3c089448f057ecd3391b8f4b834b3e68c9dade0c47820619485" exitCode=0
Feb 23 15:45:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:07.316188 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event=&{ID:44953449-e4f6-497a-b6bf-73fbdc9381b7 Type:ContainerDied Data:2bbfa625be42b3c089448f057ecd3391b8f4b834b3e68c9dade0c47820619485}
Feb 23 15:45:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:07.316719614Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec4febdf3d180251e9d97c5039560c32d0513739dc469fa684af2fa5fe4de434" id=41d1097d-d788-41c9-bc4d-e987f10b686c name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:45:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:07.316863295Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:7d6a7a794d1c53f9801c5c0cd31acc0bbeac302f72326d692b09c25b56dec99d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec4febdf3d180251e9d97c5039560c32d0513739dc469fa684af2fa5fe4de434],Size_:466962930,Uid:nil,Username:nobody,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=41d1097d-d788-41c9-bc4d-e987f10b686c name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:45:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:07.320538159Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec4febdf3d180251e9d97c5039560c32d0513739dc469fa684af2fa5fe4de434" id=d853c960-2e7f-4929-b1c0-417ebf08e461 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:45:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:07.320710263Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:7d6a7a794d1c53f9801c5c0cd31acc0bbeac302f72326d692b09c25b56dec99d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec4febdf3d180251e9d97c5039560c32d0513739dc469fa684af2fa5fe4de434],Size_:466962930,Uid:nil,Username:nobody,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=d853c960-2e7f-4929-b1c0-417ebf08e461 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:45:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:07.321750613Z" level=info msg="Creating container: openshift-monitoring/prometheus-k8s-0/prometheus" id=f0e890c8-0e1f-4fdd-a7cd-4374359d7428 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:45:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:07.321874146Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 15:45:07 ip-10-0-136-68 systemd[1]: Started crio-conmon-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9.scope.
Feb 23 15:45:07 ip-10-0-136-68 systemd[1]: Started libcontainer container ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9.
Feb 23 15:45:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:07.429440521Z" level=info msg="Created container ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9: openshift-monitoring/prometheus-k8s-0/prometheus" id=f0e890c8-0e1f-4fdd-a7cd-4374359d7428 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:45:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:07.429864599Z" level=info msg="Starting container: ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9" id=b7933d6a-3220-410c-9526-81b3cfb491e7 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 15:45:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:07.453362959Z" level=info msg="Started container" PID=8354 containerID=ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9 description=openshift-monitoring/prometheus-k8s-0/prometheus id=b7933d6a-3220-410c-9526-81b3cfb491e7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7d9bb22d3d6b32a0be618472ab15ad75f3da5566db899185fe66a17d6fe48fea
Feb 23 15:45:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:07.462127939Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:54d3aea3fb46824c7af49ff90c30c44c5409b64745f649c1e19f43c58dab6008" id=dafcae1b-f9a5-4a53-b01d-16dbf3062b16 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:45:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:07.462356175Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ba9942c6b8556723b2ced172e5fb9c827a5aacee27aa66e7952a42d2b636240d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:54d3aea3fb46824c7af49ff90c30c44c5409b64745f649c1e19f43c58dab6008],Size_:359319908,Uid:&Int64Value{Value:1001,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=dafcae1b-f9a5-4a53-b01d-16dbf3062b16 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:45:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:07.462919478Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:54d3aea3fb46824c7af49ff90c30c44c5409b64745f649c1e19f43c58dab6008" id=af9c4051-3e69-4826-beeb-de342af04658 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:45:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:07.463073256Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ba9942c6b8556723b2ced172e5fb9c827a5aacee27aa66e7952a42d2b636240d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:54d3aea3fb46824c7af49ff90c30c44c5409b64745f649c1e19f43c58dab6008],Size_:359319908,Uid:&Int64Value{Value:1001,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=af9c4051-3e69-4826-beeb-de342af04658 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:45:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:07.463780465Z" level=info msg="Creating container: openshift-monitoring/prometheus-k8s-0/config-reloader" id=a1f50000-8ec4-4199-afcb-02642652ca9b name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:45:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:07.463884629Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 15:45:07 ip-10-0-136-68 systemd[1]: Started crio-conmon-84be385df2fdf41e3622fffb41e65e3068ec0852f16638e9b6669515b7c875fd.scope.
Feb 23 15:45:07 ip-10-0-136-68 systemd[1]: Started libcontainer container 84be385df2fdf41e3622fffb41e65e3068ec0852f16638e9b6669515b7c875fd.
Feb 23 15:45:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:07.591536985Z" level=info msg="Created container 84be385df2fdf41e3622fffb41e65e3068ec0852f16638e9b6669515b7c875fd: openshift-monitoring/prometheus-k8s-0/config-reloader" id=a1f50000-8ec4-4199-afcb-02642652ca9b name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:45:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:07.595250507Z" level=info msg="Starting container: 84be385df2fdf41e3622fffb41e65e3068ec0852f16638e9b6669515b7c875fd" id=ff3fa7cc-39e2-4844-b949-f5d224818a32 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 15:45:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:07.628053598Z" level=info msg="Started container" PID=8399 containerID=84be385df2fdf41e3622fffb41e65e3068ec0852f16638e9b6669515b7c875fd description=openshift-monitoring/prometheus-k8s-0/config-reloader id=ff3fa7cc-39e2-4844-b949-f5d224818a32 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7d9bb22d3d6b32a0be618472ab15ad75f3da5566db899185fe66a17d6fe48fea
Feb 23 15:45:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:07.648543011Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:927fcfe37d62457bfd0932c16a51342e303d9f92d19244b389bb08c30b1b5bec" id=9a06effc-98bc-4c49-8157-7c0ca57aa06b name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:45:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:07.648823638Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:7e0949c572f36eadc2058a4a75e85ef222e1a401c4ecc7fd34e193cad494cab5,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:927fcfe37d62457bfd0932c16a51342e303d9f92d19244b389bb08c30b1b5bec],Size_:426731013,Uid:nil,Username:nobody,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=9a06effc-98bc-4c49-8157-7c0ca57aa06b name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:45:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:07.649604235Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:927fcfe37d62457bfd0932c16a51342e303d9f92d19244b389bb08c30b1b5bec" id=c1ef0efe-1b7c-4b58-8e2c-39f2a53a74fd name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:45:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:07.649731534Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:7e0949c572f36eadc2058a4a75e85ef222e1a401c4ecc7fd34e193cad494cab5,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:927fcfe37d62457bfd0932c16a51342e303d9f92d19244b389bb08c30b1b5bec],Size_:426731013,Uid:nil,Username:nobody,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=c1ef0efe-1b7c-4b58-8e2c-39f2a53a74fd name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:45:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:07.650462565Z" level=info msg="Creating container: openshift-monitoring/prometheus-k8s-0/thanos-sidecar" id=9277728b-84a9-44a9-8053-746f5fd51915 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 15:45:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:07.650530896Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 15:45:07 ip-10-0-136-68 systemd[1]: Started crio-conmon-0ee72837a22915d2270060bc8bc01dcf5dfd455dcbb42126e1c67b30b3e100df.scope.
Feb 23 15:45:07 ip-10-0-136-68 systemd[1]: Started libcontainer container 0ee72837a22915d2270060bc8bc01dcf5dfd455dcbb42126e1c67b30b3e100df.
Feb 23 15:45:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:07.950631207Z" level=info msg="Created container 0ee72837a22915d2270060bc8bc01dcf5dfd455dcbb42126e1c67b30b3e100df: openshift-monitoring/prometheus-k8s-0/thanos-sidecar" id=9277728b-84a9-44a9-8053-746f5fd51915 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 15:45:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:07.951075390Z" level=info msg="Starting container: 0ee72837a22915d2270060bc8bc01dcf5dfd455dcbb42126e1c67b30b3e100df" id=4bb6b330-3cbf-44e9-bd5b-3cb7245dd845 name=/runtime.v1.RuntimeService/StartContainer Feb 23 15:45:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:07.967428208Z" level=info msg="Started container" PID=8450 containerID=0ee72837a22915d2270060bc8bc01dcf5dfd455dcbb42126e1c67b30b3e100df description=openshift-monitoring/prometheus-k8s-0/thanos-sidecar id=4bb6b330-3cbf-44e9-bd5b-3cb7245dd845 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7d9bb22d3d6b32a0be618472ab15ad75f3da5566db899185fe66a17d6fe48fea Feb 23 15:45:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:07.982366857Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce1de860b639d4831bb74b0271aed6882b45febb37f60bcf8a31157b87ce0495" id=4f2a427b-6544-4804-90c4-a685403e4055 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:45:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:07.982548947Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:d62409a7cb62709d4c960d445d571fc81522092e7114db20e1ad84bf57c96f31,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce1de860b639d4831bb74b0271aed6882b45febb37f60bcf8a31157b87ce0495],Size_:353087996,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=4f2a427b-6544-4804-90c4-a685403e4055 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:45:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:07.983996159Z" level=info msg="Checking image status: 
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce1de860b639d4831bb74b0271aed6882b45febb37f60bcf8a31157b87ce0495" id=5b147f41-ef4f-4a4b-8ed9-e85f4de135ed name=/runtime.v1.ImageService/ImageStatus Feb 23 15:45:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:07.984136466Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:d62409a7cb62709d4c960d445d571fc81522092e7114db20e1ad84bf57c96f31,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce1de860b639d4831bb74b0271aed6882b45febb37f60bcf8a31157b87ce0495],Size_:353087996,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=5b147f41-ef4f-4a4b-8ed9-e85f4de135ed name=/runtime.v1.ImageService/ImageStatus Feb 23 15:45:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:07.984953080Z" level=info msg="Creating container: openshift-monitoring/prometheus-k8s-0/prometheus-proxy" id=d76bc1a0-0e95-4f80-9a9a-cb8ff25d6c97 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 15:45:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:07.985035526Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 15:45:08 ip-10-0-136-68 systemd[1]: Started crio-conmon-e0786e1c591679de011e49dbda0b424701e11e394b89019bfe08d72435181f9b.scope. Feb 23 15:45:08 ip-10-0-136-68 systemd[1]: run-runc-e0786e1c591679de011e49dbda0b424701e11e394b89019bfe08d72435181f9b-runc.MmneUl.mount: Succeeded. Feb 23 15:45:08 ip-10-0-136-68 systemd[1]: Started libcontainer container e0786e1c591679de011e49dbda0b424701e11e394b89019bfe08d72435181f9b. 
Feb 23 15:45:08 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:08.111871417Z" level=info msg="Created container e0786e1c591679de011e49dbda0b424701e11e394b89019bfe08d72435181f9b: openshift-monitoring/prometheus-k8s-0/prometheus-proxy" id=d76bc1a0-0e95-4f80-9a9a-cb8ff25d6c97 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 15:45:08 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:08.112342853Z" level=info msg="Starting container: e0786e1c591679de011e49dbda0b424701e11e394b89019bfe08d72435181f9b" id=ec3db274-e4d9-4064-b98f-3538db267385 name=/runtime.v1.RuntimeService/StartContainer Feb 23 15:45:08 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:08.118184091Z" level=info msg="Started container" PID=8500 containerID=e0786e1c591679de011e49dbda0b424701e11e394b89019bfe08d72435181f9b description=openshift-monitoring/prometheus-k8s-0/prometheus-proxy id=ec3db274-e4d9-4064-b98f-3538db267385 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7d9bb22d3d6b32a0be618472ab15ad75f3da5566db899185fe66a17d6fe48fea Feb 23 15:45:08 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:08.126932499Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=d1b34ca4-99ff-4a21-8c43-f3dd262c37a5 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:45:08 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:08.127083762Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=d1b34ca4-99ff-4a21-8c43-f3dd262c37a5 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:45:08 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:08.127668355Z" level=info 
msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=6a3a57a1-4dfd-47a5-b605-2d85ead0cb48 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:45:08 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:08.127770063Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=6a3a57a1-4dfd-47a5-b605-2d85ead0cb48 name=/runtime.v1.ImageService/ImageStatus Feb 23 15:45:08 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:08.128531916Z" level=info msg="Creating container: openshift-monitoring/prometheus-k8s-0/kube-rbac-proxy" id=a87717c8-35cd-44a5-ad35-b22b5c4b7374 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 15:45:08 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:08.128594111Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 15:45:08 ip-10-0-136-68 systemd[1]: Started crio-conmon-f6ec70951d846db90a5f182543bbcee52a7a42dd83b37387fdcfa08f18d7c2d4.scope. Feb 23 15:45:08 ip-10-0-136-68 systemd[1]: Started libcontainer container f6ec70951d846db90a5f182543bbcee52a7a42dd83b37387fdcfa08f18d7c2d4. 
Feb 23 15:45:08 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:08.229790007Z" level=info msg="Created container f6ec70951d846db90a5f182543bbcee52a7a42dd83b37387fdcfa08f18d7c2d4: openshift-monitoring/prometheus-k8s-0/kube-rbac-proxy" id=a87717c8-35cd-44a5-ad35-b22b5c4b7374 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 15:45:08 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:08.230199287Z" level=info msg="Starting container: f6ec70951d846db90a5f182543bbcee52a7a42dd83b37387fdcfa08f18d7c2d4" id=3bb3edc1-4078-4105-b25c-fef5da1abc8c name=/runtime.v1.RuntimeService/StartContainer Feb 23 15:45:08 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:08.236165653Z" level=info msg="Started container" PID=8544 containerID=f6ec70951d846db90a5f182543bbcee52a7a42dd83b37387fdcfa08f18d7c2d4 description=openshift-monitoring/prometheus-k8s-0/kube-rbac-proxy id=3bb3edc1-4078-4105-b25c-fef5da1abc8c name=/runtime.v1.RuntimeService/StartContainer sandboxID=7d9bb22d3d6b32a0be618472ab15ad75f3da5566db899185fe66a17d6fe48fea Feb 23 15:45:08 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:08.245035872Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=4603dbe5-dba9-4ed3-ac11-25fd2edc95bb name=/runtime.v1.ImageService/ImageStatus Feb 23 15:45:08 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:08.245221421Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=4603dbe5-dba9-4ed3-ac11-25fd2edc95bb name=/runtime.v1.ImageService/ImageStatus Feb 23 15:45:08 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:08.245845887Z" level=info msg="Checking 
image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=3834a89a-33dc-4976-9bbb-058cd9fbc46f name=/runtime.v1.ImageService/ImageStatus Feb 23 15:45:08 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:08.245984640Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=3834a89a-33dc-4976-9bbb-058cd9fbc46f name=/runtime.v1.ImageService/ImageStatus Feb 23 15:45:08 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:08.246795941Z" level=info msg="Creating container: openshift-monitoring/prometheus-k8s-0/kube-rbac-proxy-thanos" id=d8bae21d-69a0-49aa-a8d4-60912dcd28c5 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 15:45:08 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:08.246870847Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 15:45:08 ip-10-0-136-68 systemd[1]: Started crio-conmon-6fb9e075075963e536ffbe1ace0af85fd92436c790ae86c0801797b201e917a0.scope. Feb 23 15:45:08 ip-10-0-136-68 systemd[1]: Started libcontainer container 6fb9e075075963e536ffbe1ace0af85fd92436c790ae86c0801797b201e917a0. 
Feb 23 15:45:08 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:08.320129 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event=&{ID:44953449-e4f6-497a-b6bf-73fbdc9381b7 Type:ContainerStarted Data:f6ec70951d846db90a5f182543bbcee52a7a42dd83b37387fdcfa08f18d7c2d4} Feb 23 15:45:08 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:08.320162 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event=&{ID:44953449-e4f6-497a-b6bf-73fbdc9381b7 Type:ContainerStarted Data:e0786e1c591679de011e49dbda0b424701e11e394b89019bfe08d72435181f9b} Feb 23 15:45:08 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:08.320173 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event=&{ID:44953449-e4f6-497a-b6bf-73fbdc9381b7 Type:ContainerStarted Data:0ee72837a22915d2270060bc8bc01dcf5dfd455dcbb42126e1c67b30b3e100df} Feb 23 15:45:08 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:08.320187 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event=&{ID:44953449-e4f6-497a-b6bf-73fbdc9381b7 Type:ContainerStarted Data:84be385df2fdf41e3622fffb41e65e3068ec0852f16638e9b6669515b7c875fd} Feb 23 15:45:08 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:08.320199 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event=&{ID:44953449-e4f6-497a-b6bf-73fbdc9381b7 Type:ContainerStarted Data:ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9} Feb 23 15:45:08 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:08.335158718Z" level=info msg="Created container 6fb9e075075963e536ffbe1ace0af85fd92436c790ae86c0801797b201e917a0: openshift-monitoring/prometheus-k8s-0/kube-rbac-proxy-thanos" id=d8bae21d-69a0-49aa-a8d4-60912dcd28c5 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 15:45:08 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:08.335468211Z" level=info 
msg="Starting container: 6fb9e075075963e536ffbe1ace0af85fd92436c790ae86c0801797b201e917a0" id=0d204803-8234-4b08-bfb5-0d7d8788f5f0 name=/runtime.v1.RuntimeService/StartContainer Feb 23 15:45:08 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:08.341043514Z" level=info msg="Started container" PID=8590 containerID=6fb9e075075963e536ffbe1ace0af85fd92436c790ae86c0801797b201e917a0 description=openshift-monitoring/prometheus-k8s-0/kube-rbac-proxy-thanos id=0d204803-8234-4b08-bfb5-0d7d8788f5f0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7d9bb22d3d6b32a0be618472ab15ad75f3da5566db899185fe66a17d6fe48fea Feb 23 15:45:09 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:09.324421 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event=&{ID:44953449-e4f6-497a-b6bf-73fbdc9381b7 Type:ContainerStarted Data:6fb9e075075963e536ffbe1ace0af85fd92436c790ae86c0801797b201e917a0} Feb 23 15:45:09 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00109|connmgr|INFO|br-ex<->unix#58: 2 flow_mods in the last 0 s (2 adds) Feb 23 15:45:11 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:11.004176 2125 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Feb 23 15:45:19 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:19.592092 2125 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-monitoring/alertmanager-main-0] Feb 23 15:45:19 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:19.592374 2125 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID=3ac4a081-240b-441a-af97-d682fecb3ae7 containerName="config-reloader" containerID="cri-o://9809a1abde73c559475a32ef16160b19cdd5785df6dfc27074a6164722c6b565" gracePeriod=120 Feb 23 15:45:19 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:19.592624 2125 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" 
podUID=3ac4a081-240b-441a-af97-d682fecb3ae7 containerName="alertmanager" containerID="cri-o://c2586b634081d6205f0a674b91294e0a6950ad212c6735ec0f682f4bc485f055" gracePeriod=120 Feb 23 15:45:19 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:19.592690 2125 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID=3ac4a081-240b-441a-af97-d682fecb3ae7 containerName="prom-label-proxy" containerID="cri-o://7f90f42615a665c5a9b7a22d2326628070f1164790b68c71cf37239a15de2319" gracePeriod=120 Feb 23 15:45:19 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:19.592733 2125 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID=3ac4a081-240b-441a-af97-d682fecb3ae7 containerName="kube-rbac-proxy-metric" containerID="cri-o://cd257d71bd8628055502cce3d5a72d71454f209dc2a53dcb5acc57e391829a74" gracePeriod=120 Feb 23 15:45:19 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:19.592775 2125 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID=3ac4a081-240b-441a-af97-d682fecb3ae7 containerName="kube-rbac-proxy" containerID="cri-o://079cc4a1c1b5dfcefc02abb5f62bfc8df39ec7cb44dfc198aa7bd571d6a79e9c" gracePeriod=120 Feb 23 15:45:19 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:19.592811 2125 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID=3ac4a081-240b-441a-af97-d682fecb3ae7 containerName="alertmanager-proxy" containerID="cri-o://576dacf73a229c9814c676f2be48e1a950d9aa46e191775dac565c11c3d5ef36" gracePeriod=120 Feb 23 15:45:19 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:19.593201266Z" level=info msg="Stopping container: 576dacf73a229c9814c676f2be48e1a950d9aa46e191775dac565c11c3d5ef36 (timeout: 120s)" id=c20d31ac-a5de-4023-af3b-d72a84b7a6db name=/runtime.v1.RuntimeService/StopContainer Feb 23 15:45:19 
ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:19.594711476Z" level=info msg="Stopping container: 7f90f42615a665c5a9b7a22d2326628070f1164790b68c71cf37239a15de2319 (timeout: 120s)" id=c5db3eb5-f3b4-45d8-aa29-9d88333b842e name=/runtime.v1.RuntimeService/StopContainer Feb 23 15:45:19 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:19.594872779Z" level=info msg="Stopping container: 079cc4a1c1b5dfcefc02abb5f62bfc8df39ec7cb44dfc198aa7bd571d6a79e9c (timeout: 120s)" id=7d10bf64-78d3-4adb-aff3-1b08a38188e0 name=/runtime.v1.RuntimeService/StopContainer Feb 23 15:45:19 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:19.594973351Z" level=info msg="Stopping container: cd257d71bd8628055502cce3d5a72d71454f209dc2a53dcb5acc57e391829a74 (timeout: 120s)" id=8810e2fc-ebde-4dda-afe6-7109b774fd5a name=/runtime.v1.RuntimeService/StopContainer Feb 23 15:45:19 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:19.595045805Z" level=info msg="Stopping container: 9809a1abde73c559475a32ef16160b19cdd5785df6dfc27074a6164722c6b565 (timeout: 120s)" id=7cf9b613-b294-47fc-99c8-c9de6aa7aefd name=/runtime.v1.RuntimeService/StopContainer Feb 23 15:45:19 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:19.595134907Z" level=info msg="Stopping container: c2586b634081d6205f0a674b91294e0a6950ad212c6735ec0f682f4bc485f055 (timeout: 120s)" id=d05dc80e-217e-48ce-9fa2-1dfe889ed8e9 name=/runtime.v1.RuntimeService/StopContainer Feb 23 15:45:19 ip-10-0-136-68 conmon[6140]: conmon 576dacf73a229c9814c6 : container 6158 exited with status 2 Feb 23 15:45:19 ip-10-0-136-68 systemd[1]: crio-576dacf73a229c9814c676f2be48e1a950d9aa46e191775dac565c11c3d5ef36.scope: Succeeded. Feb 23 15:45:19 ip-10-0-136-68 systemd[1]: crio-576dacf73a229c9814c676f2be48e1a950d9aa46e191775dac565c11c3d5ef36.scope: Consumed 120ms CPU time Feb 23 15:45:19 ip-10-0-136-68 systemd[1]: crio-conmon-576dacf73a229c9814c676f2be48e1a950d9aa46e191775dac565c11c3d5ef36.scope: Succeeded. 
Feb 23 15:45:19 ip-10-0-136-68 systemd[1]: crio-conmon-576dacf73a229c9814c676f2be48e1a950d9aa46e191775dac565c11c3d5ef36.scope: Consumed 19ms CPU time Feb 23 15:45:19 ip-10-0-136-68 systemd[1]: crio-079cc4a1c1b5dfcefc02abb5f62bfc8df39ec7cb44dfc198aa7bd571d6a79e9c.scope: Succeeded. Feb 23 15:45:19 ip-10-0-136-68 systemd[1]: crio-079cc4a1c1b5dfcefc02abb5f62bfc8df39ec7cb44dfc198aa7bd571d6a79e9c.scope: Consumed 41ms CPU time Feb 23 15:45:19 ip-10-0-136-68 systemd[1]: crio-conmon-079cc4a1c1b5dfcefc02abb5f62bfc8df39ec7cb44dfc198aa7bd571d6a79e9c.scope: Succeeded. Feb 23 15:45:19 ip-10-0-136-68 systemd[1]: crio-conmon-079cc4a1c1b5dfcefc02abb5f62bfc8df39ec7cb44dfc198aa7bd571d6a79e9c.scope: Consumed 19ms CPU time Feb 23 15:45:19 ip-10-0-136-68 systemd[1]: crio-7f90f42615a665c5a9b7a22d2326628070f1164790b68c71cf37239a15de2319.scope: Succeeded. Feb 23 15:45:19 ip-10-0-136-68 systemd[1]: crio-7f90f42615a665c5a9b7a22d2326628070f1164790b68c71cf37239a15de2319.scope: Consumed 22ms CPU time Feb 23 15:45:19 ip-10-0-136-68 systemd[1]: crio-conmon-7f90f42615a665c5a9b7a22d2326628070f1164790b68c71cf37239a15de2319.scope: Succeeded. Feb 23 15:45:19 ip-10-0-136-68 systemd[1]: crio-conmon-7f90f42615a665c5a9b7a22d2326628070f1164790b68c71cf37239a15de2319.scope: Consumed 20ms CPU time Feb 23 15:45:19 ip-10-0-136-68 conmon[6062]: conmon 9809a1abde73c559475a : container 6074 exited with status 2 Feb 23 15:45:19 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-0b9849925112c183e5e5c4d718a4b1b204e8ed3626942df8716b4e22d6ef3bf5-merged.mount: Succeeded. Feb 23 15:45:19 ip-10-0-136-68 systemd[1]: crio-9809a1abde73c559475a32ef16160b19cdd5785df6dfc27074a6164722c6b565.scope: Succeeded. Feb 23 15:45:19 ip-10-0-136-68 systemd[1]: crio-9809a1abde73c559475a32ef16160b19cdd5785df6dfc27074a6164722c6b565.scope: Consumed 28ms CPU time Feb 23 15:45:19 ip-10-0-136-68 systemd[1]: crio-conmon-9809a1abde73c559475a32ef16160b19cdd5785df6dfc27074a6164722c6b565.scope: Succeeded. 
Feb 23 15:45:19 ip-10-0-136-68 systemd[1]: crio-conmon-9809a1abde73c559475a32ef16160b19cdd5785df6dfc27074a6164722c6b565.scope: Consumed 20ms CPU time Feb 23 15:45:19 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:19.639459392Z" level=info msg="Stopped container 576dacf73a229c9814c676f2be48e1a950d9aa46e191775dac565c11c3d5ef36: openshift-monitoring/alertmanager-main-0/alertmanager-proxy" id=c20d31ac-a5de-4023-af3b-d72a84b7a6db name=/runtime.v1.RuntimeService/StopContainer Feb 23 15:45:19 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-d9da77c6904e6415f709f14fd2c746d4ca96defc3136558db931801029d30160-merged.mount: Succeeded. Feb 23 15:45:19 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-59c9ded2d5d8a8164879ea29ba6476393967ae2d23a39f48c2aace8069cca37a-merged.mount: Succeeded. Feb 23 15:45:19 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:19.746312062Z" level=info msg="Stopped container 079cc4a1c1b5dfcefc02abb5f62bfc8df39ec7cb44dfc198aa7bd571d6a79e9c: openshift-monitoring/alertmanager-main-0/kube-rbac-proxy" id=7d10bf64-78d3-4adb-aff3-1b08a38188e0 name=/runtime.v1.RuntimeService/StopContainer Feb 23 15:45:19 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-088da6cefbd714ce9e7ba8d1b72f3fca4ce3fbbecb0761606a1791d61c56850a-merged.mount: Succeeded. 
Feb 23 15:45:19 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:19.754272194Z" level=info msg="Stopped container 9809a1abde73c559475a32ef16160b19cdd5785df6dfc27074a6164722c6b565: openshift-monitoring/alertmanager-main-0/config-reloader" id=7cf9b613-b294-47fc-99c8-c9de6aa7aefd name=/runtime.v1.RuntimeService/StopContainer Feb 23 15:45:19 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:19.757594372Z" level=info msg="Stopped container 7f90f42615a665c5a9b7a22d2326628070f1164790b68c71cf37239a15de2319: openshift-monitoring/alertmanager-main-0/prom-label-proxy" id=c5db3eb5-f3b4-45d8-aa29-9d88333b842e name=/runtime.v1.RuntimeService/StopContainer Feb 23 15:45:20 ip-10-0-136-68 systemd[1]: crio-c2586b634081d6205f0a674b91294e0a6950ad212c6735ec0f682f4bc485f055.scope: Succeeded. Feb 23 15:45:20 ip-10-0-136-68 systemd[1]: crio-c2586b634081d6205f0a674b91294e0a6950ad212c6735ec0f682f4bc485f055.scope: Consumed 180ms CPU time Feb 23 15:45:20 ip-10-0-136-68 systemd[1]: crio-conmon-c2586b634081d6205f0a674b91294e0a6950ad212c6735ec0f682f4bc485f055.scope: Succeeded. 
Feb 23 15:45:20 ip-10-0-136-68 systemd[1]: crio-conmon-c2586b634081d6205f0a674b91294e0a6950ad212c6735ec0f682f4bc485f055.scope: Consumed 21ms CPU time Feb 23 15:45:20 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:20.247155353Z" level=info msg="Stopped container c2586b634081d6205f0a674b91294e0a6950ad212c6735ec0f682f4bc485f055: openshift-monitoring/alertmanager-main-0/alertmanager" id=d05dc80e-217e-48ce-9fa2-1dfe889ed8e9 name=/runtime.v1.RuntimeService/StopContainer Feb 23 15:45:20 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:20.356238 2125 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_3ac4a081-240b-441a-af97-d682fecb3ae7/alertmanager-proxy/0.log" Feb 23 15:45:20 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:20.356523 2125 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_3ac4a081-240b-441a-af97-d682fecb3ae7/config-reloader/0.log" Feb 23 15:45:20 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:20.356762 2125 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_3ac4a081-240b-441a-af97-d682fecb3ae7/alertmanager/0.log" Feb 23 15:45:20 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:20.356798 2125 generic.go:296] "Generic (PLEG): container finished" podID=3ac4a081-240b-441a-af97-d682fecb3ae7 containerID="c2586b634081d6205f0a674b91294e0a6950ad212c6735ec0f682f4bc485f055" exitCode=0 Feb 23 15:45:20 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:20.356807 2125 generic.go:296] "Generic (PLEG): container finished" podID=3ac4a081-240b-441a-af97-d682fecb3ae7 containerID="7f90f42615a665c5a9b7a22d2326628070f1164790b68c71cf37239a15de2319" exitCode=0 Feb 23 15:45:20 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:20.356814 2125 generic.go:296] "Generic (PLEG): container finished" podID=3ac4a081-240b-441a-af97-d682fecb3ae7 containerID="079cc4a1c1b5dfcefc02abb5f62bfc8df39ec7cb44dfc198aa7bd571d6a79e9c" exitCode=0 Feb 23 
15:45:20 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:20.356822 2125 generic.go:296] "Generic (PLEG): container finished" podID=3ac4a081-240b-441a-af97-d682fecb3ae7 containerID="576dacf73a229c9814c676f2be48e1a950d9aa46e191775dac565c11c3d5ef36" exitCode=2 Feb 23 15:45:20 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:20.356829 2125 generic.go:296] "Generic (PLEG): container finished" podID=3ac4a081-240b-441a-af97-d682fecb3ae7 containerID="9809a1abde73c559475a32ef16160b19cdd5785df6dfc27074a6164722c6b565" exitCode=2 Feb 23 15:45:20 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:20.356850 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event=&{ID:3ac4a081-240b-441a-af97-d682fecb3ae7 Type:ContainerDied Data:c2586b634081d6205f0a674b91294e0a6950ad212c6735ec0f682f4bc485f055} Feb 23 15:45:20 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:20.356874 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event=&{ID:3ac4a081-240b-441a-af97-d682fecb3ae7 Type:ContainerDied Data:7f90f42615a665c5a9b7a22d2326628070f1164790b68c71cf37239a15de2319} Feb 23 15:45:20 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:20.356884 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event=&{ID:3ac4a081-240b-441a-af97-d682fecb3ae7 Type:ContainerDied Data:079cc4a1c1b5dfcefc02abb5f62bfc8df39ec7cb44dfc198aa7bd571d6a79e9c} Feb 23 15:45:20 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:20.356892 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event=&{ID:3ac4a081-240b-441a-af97-d682fecb3ae7 Type:ContainerDied Data:576dacf73a229c9814c676f2be48e1a950d9aa46e191775dac565c11c3d5ef36} Feb 23 15:45:20 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:20.356899 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" 
event=&{ID:3ac4a081-240b-441a-af97-d682fecb3ae7 Type:ContainerDied Data:9809a1abde73c559475a32ef16160b19cdd5785df6dfc27074a6164722c6b565} Feb 23 15:45:20 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:20.356910 2125 scope.go:115] "RemoveContainer" containerID="52c92d084eb346ba5baa43a2f9bdb0d4783fad46439e46e4ffcd6604d0a5e68c" Feb 23 15:45:20 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:20.357544927Z" level=info msg="Removing container: 52c92d084eb346ba5baa43a2f9bdb0d4783fad46439e46e4ffcd6604d0a5e68c" id=cb124f52-c5a2-4bfd-a107-fab8e12c0c14 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 15:45:20 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:20.379498575Z" level=info msg="Removed container 52c92d084eb346ba5baa43a2f9bdb0d4783fad46439e46e4ffcd6604d0a5e68c: openshift-monitoring/alertmanager-main-0/alertmanager" id=cb124f52-c5a2-4bfd-a107-fab8e12c0c14 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 15:45:20 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-c1ca95929c5a216067d6165b105dd24c9666765071d1248b24f6eb819e14f7f2-merged.mount: Succeeded. Feb 23 15:45:20 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-b4a446f4046b63586139ba55db1a3163497faf6c86bf02b9ba83ed0c1356ed8f-merged.mount: Succeeded. Feb 23 15:45:20 ip-10-0-136-68 systemd[1]: crio-cd257d71bd8628055502cce3d5a72d71454f209dc2a53dcb5acc57e391829a74.scope: Succeeded. Feb 23 15:45:20 ip-10-0-136-68 systemd[1]: crio-cd257d71bd8628055502cce3d5a72d71454f209dc2a53dcb5acc57e391829a74.scope: Consumed 66ms CPU time Feb 23 15:45:20 ip-10-0-136-68 systemd[1]: crio-conmon-cd257d71bd8628055502cce3d5a72d71454f209dc2a53dcb5acc57e391829a74.scope: Succeeded. Feb 23 15:45:20 ip-10-0-136-68 systemd[1]: crio-conmon-cd257d71bd8628055502cce3d5a72d71454f209dc2a53dcb5acc57e391829a74.scope: Consumed 19ms CPU time Feb 23 15:45:20 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-943464c6b5d548df097bce885cbd0737d7fce19df1b9eab6cc2182909b62c60e-merged.mount: Succeeded. 
Feb 23 15:45:20 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:20.743728394Z" level=info msg="Stopped container cd257d71bd8628055502cce3d5a72d71454f209dc2a53dcb5acc57e391829a74: openshift-monitoring/alertmanager-main-0/kube-rbac-proxy-metric" id=8810e2fc-ebde-4dda-afe6-7109b774fd5a name=/runtime.v1.RuntimeService/StopContainer Feb 23 15:45:20 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:20.744121764Z" level=info msg="Stopping pod sandbox: fefe52f4c0671eab0be744708d6f2fdf3974dd556275bbdf477c3bb9235603be" id=3c8da46a-ab21-412e-aab7-2e8dd4c1ec83 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 15:45:20 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:20.744262792Z" level=info msg="Got pod network &{Name:alertmanager-main-0 Namespace:openshift-monitoring ID:fefe52f4c0671eab0be744708d6f2fdf3974dd556275bbdf477c3bb9235603be UID:3ac4a081-240b-441a-af97-d682fecb3ae7 NetNS:/var/run/netns/25ff3566-52b5-4f2b-a3c1-a0c8d76cdece Networks:[{Name:multus-cni-network Ifname:eth0}] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 15:45:20 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:20.744393391Z" level=info msg="Deleting pod openshift-monitoring_alertmanager-main-0 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 15:45:20 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00110|bridge|INFO|bridge br-int: deleted interface fefe52f4c0671ea on port 15 Feb 23 15:45:20 ip-10-0-136-68 kernel: device fefe52f4c0671ea left promiscuous mode Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.004164 2125 kubelet.go:2229] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Feb 23 15:45:21 ip-10-0-136-68 crio[2086]: 2023-02-23T15:45:20Z [verbose] Del: openshift-monitoring:alertmanager-main-0:3ac4a081-240b-441a-af97-d682fecb3ae7:ovn-kubernetes:eth0 
{"cniVersion":"0.4.0","dns":{},"ipam":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxage":5,"logfile-maxbackups":5,"logfile-maxsize":100,"name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay"}
Feb 23 15:45:21 ip-10-0-136-68 crio[2086]: I0223 15:45:20.883474 8986 ovs.go:90] Maximum command line arguments set to: 191102
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.055079 2125 kubelet.go:2229] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:45:21 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-8baa967fe91dd1f66d3a9c77cf6fc179135b628b3a0269b244809de16f891585-merged.mount: Succeeded.
Feb 23 15:45:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:21.121370039Z" level=info msg="Stopped pod sandbox: fefe52f4c0671eab0be744708d6f2fdf3974dd556275bbdf477c3bb9235603be" id=3c8da46a-ab21-412e-aab7-2e8dd4c1ec83 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.127149 2125 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_3ac4a081-240b-441a-af97-d682fecb3ae7/alertmanager-proxy/0.log"
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.127465 2125 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_3ac4a081-240b-441a-af97-d682fecb3ae7/config-reloader/0.log"
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.241757 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-main-proxy\" (UniqueName: \"kubernetes.io/secret/3ac4a081-240b-441a-af97-d682fecb3ae7-secret-alertmanager-main-proxy\") pod \"3ac4a081-240b-441a-af97-d682fecb3ae7\" (UID: \"3ac4a081-240b-441a-af97-d682fecb3ae7\") "
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.241810 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/3ac4a081-240b-441a-af97-d682fecb3ae7-secret-alertmanager-kube-rbac-proxy-metric\") pod \"3ac4a081-240b-441a-af97-d682fecb3ae7\" (UID: \"3ac4a081-240b-441a-af97-d682fecb3ae7\") "
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.241839 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/3ac4a081-240b-441a-af97-d682fecb3ae7-secret-alertmanager-main-tls\") pod \"3ac4a081-240b-441a-af97-d682fecb3ae7\" (UID: \"3ac4a081-240b-441a-af97-d682fecb3ae7\") "
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.241866 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3ac4a081-240b-441a-af97-d682fecb3ae7-config-out\") pod \"3ac4a081-240b-441a-af97-d682fecb3ae7\" (UID: \"3ac4a081-240b-441a-af97-d682fecb3ae7\") "
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.241894 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/3ac4a081-240b-441a-af97-d682fecb3ae7-alertmanager-main-db\") pod \"3ac4a081-240b-441a-af97-d682fecb3ae7\" (UID: \"3ac4a081-240b-441a-af97-d682fecb3ae7\") "
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.241924 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/3ac4a081-240b-441a-af97-d682fecb3ae7-secret-alertmanager-kube-rbac-proxy\") pod \"3ac4a081-240b-441a-af97-d682fecb3ae7\" (UID: \"3ac4a081-240b-441a-af97-d682fecb3ae7\") "
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.241952 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3ac4a081-240b-441a-af97-d682fecb3ae7-metrics-client-ca\") pod \"3ac4a081-240b-441a-af97-d682fecb3ae7\" (UID: \"3ac4a081-240b-441a-af97-d682fecb3ae7\") "
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.241981 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9kht\" (UniqueName: \"kubernetes.io/projected/3ac4a081-240b-441a-af97-d682fecb3ae7-kube-api-access-v9kht\") pod \"3ac4a081-240b-441a-af97-d682fecb3ae7\" (UID: \"3ac4a081-240b-441a-af97-d682fecb3ae7\") "
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.242013 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ac4a081-240b-441a-af97-d682fecb3ae7-alertmanager-trusted-ca-bundle\") pod \"3ac4a081-240b-441a-af97-d682fecb3ae7\" (UID: \"3ac4a081-240b-441a-af97-d682fecb3ae7\") "
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.242042 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3ac4a081-240b-441a-af97-d682fecb3ae7-tls-assets\") pod \"3ac4a081-240b-441a-af97-d682fecb3ae7\" (UID: \"3ac4a081-240b-441a-af97-d682fecb3ae7\") "
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.242067 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/3ac4a081-240b-441a-af97-d682fecb3ae7-config-volume\") pod \"3ac4a081-240b-441a-af97-d682fecb3ae7\" (UID: \"3ac4a081-240b-441a-af97-d682fecb3ae7\") "
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.242099 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3ac4a081-240b-441a-af97-d682fecb3ae7-web-config\") pod \"3ac4a081-240b-441a-af97-d682fecb3ae7\" (UID: \"3ac4a081-240b-441a-af97-d682fecb3ae7\") "
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: W0223 15:45:21.242225 2125 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/3ac4a081-240b-441a-af97-d682fecb3ae7/volumes/kubernetes.io~configmap/alertmanager-trusted-ca-bundle: clearQuota called, but quotas disabled
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: W0223 15:45:21.242401 2125 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/3ac4a081-240b-441a-af97-d682fecb3ae7/volumes/kubernetes.io~empty-dir/config-out: clearQuota called, but quotas disabled
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.242433 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ac4a081-240b-441a-af97-d682fecb3ae7-alertmanager-trusted-ca-bundle" (OuterVolumeSpecName: "alertmanager-trusted-ca-bundle") pod "3ac4a081-240b-441a-af97-d682fecb3ae7" (UID: "3ac4a081-240b-441a-af97-d682fecb3ae7"). InnerVolumeSpecName "alertmanager-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.242487 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ac4a081-240b-441a-af97-d682fecb3ae7-config-out" (OuterVolumeSpecName: "config-out") pod "3ac4a081-240b-441a-af97-d682fecb3ae7" (UID: "3ac4a081-240b-441a-af97-d682fecb3ae7"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: W0223 15:45:21.242539 2125 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/3ac4a081-240b-441a-af97-d682fecb3ae7/volumes/kubernetes.io~configmap/metrics-client-ca: clearQuota called, but quotas disabled
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: W0223 15:45:21.242583 2125 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/3ac4a081-240b-441a-af97-d682fecb3ae7/volumes/kubernetes.io~empty-dir/alertmanager-main-db: clearQuota called, but quotas disabled
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.242651 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ac4a081-240b-441a-af97-d682fecb3ae7-alertmanager-main-db" (OuterVolumeSpecName: "alertmanager-main-db") pod "3ac4a081-240b-441a-af97-d682fecb3ae7" (UID: "3ac4a081-240b-441a-af97-d682fecb3ae7"). InnerVolumeSpecName "alertmanager-main-db". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.242669 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ac4a081-240b-441a-af97-d682fecb3ae7-metrics-client-ca" (OuterVolumeSpecName: "metrics-client-ca") pod "3ac4a081-240b-441a-af97-d682fecb3ae7" (UID: "3ac4a081-240b-441a-af97-d682fecb3ae7"). InnerVolumeSpecName "metrics-client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.249565 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ac4a081-240b-441a-af97-d682fecb3ae7-secret-alertmanager-main-proxy" (OuterVolumeSpecName: "secret-alertmanager-main-proxy") pod "3ac4a081-240b-441a-af97-d682fecb3ae7" (UID: "3ac4a081-240b-441a-af97-d682fecb3ae7"). InnerVolumeSpecName "secret-alertmanager-main-proxy". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.249589 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ac4a081-240b-441a-af97-d682fecb3ae7-secret-alertmanager-main-tls" (OuterVolumeSpecName: "secret-alertmanager-main-tls") pod "3ac4a081-240b-441a-af97-d682fecb3ae7" (UID: "3ac4a081-240b-441a-af97-d682fecb3ae7"). InnerVolumeSpecName "secret-alertmanager-main-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.250604 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ac4a081-240b-441a-af97-d682fecb3ae7-secret-alertmanager-kube-rbac-proxy" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy") pod "3ac4a081-240b-441a-af97-d682fecb3ae7" (UID: "3ac4a081-240b-441a-af97-d682fecb3ae7"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.250631 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ac4a081-240b-441a-af97-d682fecb3ae7-secret-alertmanager-kube-rbac-proxy-metric" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy-metric") pod "3ac4a081-240b-441a-af97-d682fecb3ae7" (UID: "3ac4a081-240b-441a-af97-d682fecb3ae7"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy-metric". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.255527 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ac4a081-240b-441a-af97-d682fecb3ae7-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "3ac4a081-240b-441a-af97-d682fecb3ae7" (UID: "3ac4a081-240b-441a-af97-d682fecb3ae7"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.257481 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ac4a081-240b-441a-af97-d682fecb3ae7-kube-api-access-v9kht" (OuterVolumeSpecName: "kube-api-access-v9kht") pod "3ac4a081-240b-441a-af97-d682fecb3ae7" (UID: "3ac4a081-240b-441a-af97-d682fecb3ae7"). InnerVolumeSpecName "kube-api-access-v9kht". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.258493 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ac4a081-240b-441a-af97-d682fecb3ae7-config-volume" (OuterVolumeSpecName: "config-volume") pod "3ac4a081-240b-441a-af97-d682fecb3ae7" (UID: "3ac4a081-240b-441a-af97-d682fecb3ae7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.264480 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ac4a081-240b-441a-af97-d682fecb3ae7-web-config" (OuterVolumeSpecName: "web-config") pod "3ac4a081-240b-441a-af97-d682fecb3ae7" (UID: "3ac4a081-240b-441a-af97-d682fecb3ae7"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.343338 2125 reconciler.go:399] "Volume detached for volume \"secret-alertmanager-main-proxy\" (UniqueName: \"kubernetes.io/secret/3ac4a081-240b-441a-af97-d682fecb3ae7-secret-alertmanager-main-proxy\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.343363 2125 reconciler.go:399] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/3ac4a081-240b-441a-af97-d682fecb3ae7-secret-alertmanager-kube-rbac-proxy-metric\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.343376 2125 reconciler.go:399] "Volume detached for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/3ac4a081-240b-441a-af97-d682fecb3ae7-secret-alertmanager-main-tls\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.343389 2125 reconciler.go:399] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3ac4a081-240b-441a-af97-d682fecb3ae7-config-out\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.343402 2125 reconciler.go:399] "Volume detached for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/3ac4a081-240b-441a-af97-d682fecb3ae7-alertmanager-main-db\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.343415 2125 reconciler.go:399] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/3ac4a081-240b-441a-af97-d682fecb3ae7-secret-alertmanager-kube-rbac-proxy\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.343429 2125 reconciler.go:399] "Volume detached for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3ac4a081-240b-441a-af97-d682fecb3ae7-metrics-client-ca\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.343443 2125 reconciler.go:399] "Volume detached for volume \"kube-api-access-v9kht\" (UniqueName: \"kubernetes.io/projected/3ac4a081-240b-441a-af97-d682fecb3ae7-kube-api-access-v9kht\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.343458 2125 reconciler.go:399] "Volume detached for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ac4a081-240b-441a-af97-d682fecb3ae7-alertmanager-trusted-ca-bundle\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.343473 2125 reconciler.go:399] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3ac4a081-240b-441a-af97-d682fecb3ae7-tls-assets\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.343487 2125 reconciler.go:399] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/3ac4a081-240b-441a-af97-d682fecb3ae7-config-volume\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.343501 2125 reconciler.go:399] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3ac4a081-240b-441a-af97-d682fecb3ae7-web-config\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.360605 2125 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_3ac4a081-240b-441a-af97-d682fecb3ae7/alertmanager-proxy/0.log"
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.360876 2125 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_3ac4a081-240b-441a-af97-d682fecb3ae7/config-reloader/0.log"
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.360913 2125 generic.go:296] "Generic (PLEG): container finished" podID=3ac4a081-240b-441a-af97-d682fecb3ae7 containerID="cd257d71bd8628055502cce3d5a72d71454f209dc2a53dcb5acc57e391829a74" exitCode=0
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.360996 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event=&{ID:3ac4a081-240b-441a-af97-d682fecb3ae7 Type:ContainerDied Data:cd257d71bd8628055502cce3d5a72d71454f209dc2a53dcb5acc57e391829a74}
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.361031 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event=&{ID:3ac4a081-240b-441a-af97-d682fecb3ae7 Type:ContainerDied Data:fefe52f4c0671eab0be744708d6f2fdf3974dd556275bbdf477c3bb9235603be}
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.361051 2125 scope.go:115] "RemoveContainer" containerID="c2586b634081d6205f0a674b91294e0a6950ad212c6735ec0f682f4bc485f055"
Feb 23 15:45:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:21.362210133Z" level=info msg="Removing container: c2586b634081d6205f0a674b91294e0a6950ad212c6735ec0f682f4bc485f055" id=53f7402d-6606-4635-8e5e-651a88e9ccc2 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 15:45:21 ip-10-0-136-68 systemd[1]: Removed slice libcontainer container kubepods-burstable-pod3ac4a081_240b_441a_af97_d682fecb3ae7.slice.
Feb 23 15:45:21 ip-10-0-136-68 systemd[1]: kubepods-burstable-pod3ac4a081_240b_441a_af97_d682fecb3ae7.slice: Consumed 693ms CPU time
Feb 23 15:45:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:21.382790832Z" level=info msg="Removed container c2586b634081d6205f0a674b91294e0a6950ad212c6735ec0f682f4bc485f055: openshift-monitoring/alertmanager-main-0/alertmanager" id=53f7402d-6606-4635-8e5e-651a88e9ccc2 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 15:45:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:21.383558281Z" level=info msg="Removing container: 7f90f42615a665c5a9b7a22d2326628070f1164790b68c71cf37239a15de2319" id=cbef8173-f6d6-4028-ace4-8efa4c1d9f29 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.382940 2125 scope.go:115] "RemoveContainer" containerID="7f90f42615a665c5a9b7a22d2326628070f1164790b68c71cf37239a15de2319"
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.398441 2125 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0"
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.399635 2125 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-monitoring/alertmanager-main-0]
Feb 23 15:45:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:21.402150947Z" level=info msg="Removed container 7f90f42615a665c5a9b7a22d2326628070f1164790b68c71cf37239a15de2319: openshift-monitoring/alertmanager-main-0/prom-label-proxy" id=cbef8173-f6d6-4028-ace4-8efa4c1d9f29 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.403382 2125 scope.go:115] "RemoveContainer" containerID="cd257d71bd8628055502cce3d5a72d71454f209dc2a53dcb5acc57e391829a74"
Feb 23 15:45:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:21.404070135Z" level=info msg="Removing container: cd257d71bd8628055502cce3d5a72d71454f209dc2a53dcb5acc57e391829a74" id=7488279d-8ebf-49c7-a802-28264a2c734d name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.406087 2125 kubelet.go:2129] "SyncLoop REMOVE" source="api" pods=[openshift-monitoring/alertmanager-main-0]
Feb 23 15:45:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:21.419323646Z" level=info msg="Removed container cd257d71bd8628055502cce3d5a72d71454f209dc2a53dcb5acc57e391829a74: openshift-monitoring/alertmanager-main-0/kube-rbac-proxy-metric" id=7488279d-8ebf-49c7-a802-28264a2c734d name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.419451 2125 scope.go:115] "RemoveContainer" containerID="079cc4a1c1b5dfcefc02abb5f62bfc8df39ec7cb44dfc198aa7bd571d6a79e9c"
Feb 23 15:45:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:21.420023981Z" level=info msg="Removing container: 079cc4a1c1b5dfcefc02abb5f62bfc8df39ec7cb44dfc198aa7bd571d6a79e9c" id=17c197d2-7786-4249-a8cb-21ee5b519b2f name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 15:45:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:21.448574063Z" level=info msg="Removed container 079cc4a1c1b5dfcefc02abb5f62bfc8df39ec7cb44dfc198aa7bd571d6a79e9c: openshift-monitoring/alertmanager-main-0/kube-rbac-proxy" id=17c197d2-7786-4249-a8cb-21ee5b519b2f name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.448783 2125 scope.go:115] "RemoveContainer" containerID="576dacf73a229c9814c676f2be48e1a950d9aa46e191775dac565c11c3d5ef36"
Feb 23 15:45:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:21.449515425Z" level=info msg="Removing container: 576dacf73a229c9814c676f2be48e1a950d9aa46e191775dac565c11c3d5ef36" id=d2e62d1b-ee09-4385-a7df-a9a0e7d84fb7 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 15:45:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:21.464769986Z" level=info msg="Removed container 576dacf73a229c9814c676f2be48e1a950d9aa46e191775dac565c11c3d5ef36: openshift-monitoring/alertmanager-main-0/alertmanager-proxy" id=d2e62d1b-ee09-4385-a7df-a9a0e7d84fb7 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.464911 2125 scope.go:115] "RemoveContainer" containerID="9809a1abde73c559475a32ef16160b19cdd5785df6dfc27074a6164722c6b565"
Feb 23 15:45:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:21.465535447Z" level=info msg="Removing container: 9809a1abde73c559475a32ef16160b19cdd5785df6dfc27074a6164722c6b565" id=63931927-7a47-4eb6-98b5-1449eb74b19c name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 15:45:21 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:21.483819637Z" level=info msg="Removed container 9809a1abde73c559475a32ef16160b19cdd5785df6dfc27074a6164722c6b565: openshift-monitoring/alertmanager-main-0/config-reloader" id=63931927-7a47-4eb6-98b5-1449eb74b19c name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.483958 2125 scope.go:115] "RemoveContainer" containerID="c2586b634081d6205f0a674b91294e0a6950ad212c6735ec0f682f4bc485f055"
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:45:21.484157 2125 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2586b634081d6205f0a674b91294e0a6950ad212c6735ec0f682f4bc485f055\": container with ID starting with c2586b634081d6205f0a674b91294e0a6950ad212c6735ec0f682f4bc485f055 not found: ID does not exist" containerID="c2586b634081d6205f0a674b91294e0a6950ad212c6735ec0f682f4bc485f055"
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.484191 2125 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:c2586b634081d6205f0a674b91294e0a6950ad212c6735ec0f682f4bc485f055} err="failed to get container status \"c2586b634081d6205f0a674b91294e0a6950ad212c6735ec0f682f4bc485f055\": rpc error: code = NotFound desc = could not find container \"c2586b634081d6205f0a674b91294e0a6950ad212c6735ec0f682f4bc485f055\": container with ID starting with c2586b634081d6205f0a674b91294e0a6950ad212c6735ec0f682f4bc485f055 not found: ID does not exist"
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.484204 2125 scope.go:115] "RemoveContainer" containerID="7f90f42615a665c5a9b7a22d2326628070f1164790b68c71cf37239a15de2319"
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:45:21.484396 2125 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f90f42615a665c5a9b7a22d2326628070f1164790b68c71cf37239a15de2319\": container with ID starting with 7f90f42615a665c5a9b7a22d2326628070f1164790b68c71cf37239a15de2319 not found: ID does not exist" containerID="7f90f42615a665c5a9b7a22d2326628070f1164790b68c71cf37239a15de2319"
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.484423 2125 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:7f90f42615a665c5a9b7a22d2326628070f1164790b68c71cf37239a15de2319} err="failed to get container status \"7f90f42615a665c5a9b7a22d2326628070f1164790b68c71cf37239a15de2319\": rpc error: code = NotFound desc = could not find container \"7f90f42615a665c5a9b7a22d2326628070f1164790b68c71cf37239a15de2319\": container with ID starting with 7f90f42615a665c5a9b7a22d2326628070f1164790b68c71cf37239a15de2319 not found: ID does not exist"
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.484435 2125 scope.go:115] "RemoveContainer" containerID="cd257d71bd8628055502cce3d5a72d71454f209dc2a53dcb5acc57e391829a74"
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:45:21.484633 2125 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd257d71bd8628055502cce3d5a72d71454f209dc2a53dcb5acc57e391829a74\": container with ID starting with cd257d71bd8628055502cce3d5a72d71454f209dc2a53dcb5acc57e391829a74 not found: ID does not exist" containerID="cd257d71bd8628055502cce3d5a72d71454f209dc2a53dcb5acc57e391829a74"
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.484650 2125 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:cd257d71bd8628055502cce3d5a72d71454f209dc2a53dcb5acc57e391829a74} err="failed to get container status \"cd257d71bd8628055502cce3d5a72d71454f209dc2a53dcb5acc57e391829a74\": rpc error: code = NotFound desc = could not find container \"cd257d71bd8628055502cce3d5a72d71454f209dc2a53dcb5acc57e391829a74\": container with ID starting with cd257d71bd8628055502cce3d5a72d71454f209dc2a53dcb5acc57e391829a74 not found: ID does not exist"
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.484657 2125 scope.go:115] "RemoveContainer" containerID="079cc4a1c1b5dfcefc02abb5f62bfc8df39ec7cb44dfc198aa7bd571d6a79e9c"
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:45:21.484810 2125 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"079cc4a1c1b5dfcefc02abb5f62bfc8df39ec7cb44dfc198aa7bd571d6a79e9c\": container with ID starting with 079cc4a1c1b5dfcefc02abb5f62bfc8df39ec7cb44dfc198aa7bd571d6a79e9c not found: ID does not exist" containerID="079cc4a1c1b5dfcefc02abb5f62bfc8df39ec7cb44dfc198aa7bd571d6a79e9c"
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.484833 2125 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:079cc4a1c1b5dfcefc02abb5f62bfc8df39ec7cb44dfc198aa7bd571d6a79e9c} err="failed to get container status \"079cc4a1c1b5dfcefc02abb5f62bfc8df39ec7cb44dfc198aa7bd571d6a79e9c\": rpc error: code = NotFound desc = could not find container \"079cc4a1c1b5dfcefc02abb5f62bfc8df39ec7cb44dfc198aa7bd571d6a79e9c\": container with ID starting with 079cc4a1c1b5dfcefc02abb5f62bfc8df39ec7cb44dfc198aa7bd571d6a79e9c not found: ID does not exist"
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.484840 2125 scope.go:115] "RemoveContainer" containerID="576dacf73a229c9814c676f2be48e1a950d9aa46e191775dac565c11c3d5ef36"
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:45:21.484986 2125 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"576dacf73a229c9814c676f2be48e1a950d9aa46e191775dac565c11c3d5ef36\": container with ID starting with 576dacf73a229c9814c676f2be48e1a950d9aa46e191775dac565c11c3d5ef36 not found: ID does not exist" containerID="576dacf73a229c9814c676f2be48e1a950d9aa46e191775dac565c11c3d5ef36"
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.485002 2125 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:576dacf73a229c9814c676f2be48e1a950d9aa46e191775dac565c11c3d5ef36} err="failed to get container status \"576dacf73a229c9814c676f2be48e1a950d9aa46e191775dac565c11c3d5ef36\": rpc error: code = NotFound desc = could not find container \"576dacf73a229c9814c676f2be48e1a950d9aa46e191775dac565c11c3d5ef36\": container with ID starting with 576dacf73a229c9814c676f2be48e1a950d9aa46e191775dac565c11c3d5ef36 not found: ID does not exist"
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.485008 2125 scope.go:115] "RemoveContainer" containerID="9809a1abde73c559475a32ef16160b19cdd5785df6dfc27074a6164722c6b565"
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: E0223 15:45:21.485176 2125 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9809a1abde73c559475a32ef16160b19cdd5785df6dfc27074a6164722c6b565\": container with ID starting with 9809a1abde73c559475a32ef16160b19cdd5785df6dfc27074a6164722c6b565 not found: ID does not exist" containerID="9809a1abde73c559475a32ef16160b19cdd5785df6dfc27074a6164722c6b565"
Feb 23 15:45:21 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:21.485198 2125 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:9809a1abde73c559475a32ef16160b19cdd5785df6dfc27074a6164722c6b565} err="failed to get container status \"9809a1abde73c559475a32ef16160b19cdd5785df6dfc27074a6164722c6b565\": rpc error: code = NotFound desc = could not find container \"9809a1abde73c559475a32ef16160b19cdd5785df6dfc27074a6164722c6b565\": container with ID starting with 9809a1abde73c559475a32ef16160b19cdd5785df6dfc27074a6164722c6b565 not found: ID does not exist"
Feb 23 15:45:21 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-3ac4a081\x2d240b\x2d441a\x2daf97\x2dd682fecb3ae7-volume\x2dsubpaths-web\x2dconfig-alertmanager-9.mount: Succeeded.
Feb 23 15:45:21 ip-10-0-136-68 systemd[1]: run-netns-25ff3566\x2d52b5\x2d4f2b\x2da3c1\x2da0c8d76cdece.mount: Succeeded.
Feb 23 15:45:21 ip-10-0-136-68 systemd[1]: run-ipcns-25ff3566\x2d52b5\x2d4f2b\x2da3c1\x2da0c8d76cdece.mount: Succeeded.
Feb 23 15:45:21 ip-10-0-136-68 systemd[1]: run-utsns-25ff3566\x2d52b5\x2d4f2b\x2da3c1\x2da0c8d76cdece.mount: Succeeded.
Feb 23 15:45:21 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-fefe52f4c0671eab0be744708d6f2fdf3974dd556275bbdf477c3bb9235603be-userdata-shm.mount: Succeeded.
Feb 23 15:45:21 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-3ac4a081\x2d240b\x2d441a\x2daf97\x2dd682fecb3ae7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv9kht.mount: Succeeded.
Feb 23 15:45:21 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-3ac4a081\x2d240b\x2d441a\x2daf97\x2dd682fecb3ae7-volumes-kubernetes.io\x7esecret-web\x2dconfig.mount: Succeeded.
Feb 23 15:45:21 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-3ac4a081\x2d240b\x2d441a\x2daf97\x2dd682fecb3ae7-volumes-kubernetes.io\x7esecret-secret\x2dalertmanager\x2dmain\x2dproxy.mount: Succeeded.
Feb 23 15:45:21 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-3ac4a081\x2d240b\x2d441a\x2daf97\x2dd682fecb3ae7-volumes-kubernetes.io\x7esecret-config\x2dvolume.mount: Succeeded.
Feb 23 15:45:21 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-3ac4a081\x2d240b\x2d441a\x2daf97\x2dd682fecb3ae7-volumes-kubernetes.io\x7esecret-secret\x2dalertmanager\x2dkube\x2drbac\x2dproxy\x2dmetric.mount: Succeeded.
Feb 23 15:45:21 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-3ac4a081\x2d240b\x2d441a\x2daf97\x2dd682fecb3ae7-volumes-kubernetes.io\x7esecret-secret\x2dalertmanager\x2dmain\x2dtls.mount: Succeeded.
Feb 23 15:45:21 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-3ac4a081\x2d240b\x2d441a\x2daf97\x2dd682fecb3ae7-volumes-kubernetes.io\x7eprojected-tls\x2dassets.mount: Succeeded.
Feb 23 15:45:21 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-3ac4a081\x2d240b\x2d441a\x2daf97\x2dd682fecb3ae7-volumes-kubernetes.io\x7esecret-secret\x2dalertmanager\x2dkube\x2drbac\x2dproxy.mount: Succeeded.
Feb 23 15:45:22 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:45:22.397793 2125 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=3ac4a081-240b-441a-af97-d682fecb3ae7 path="/var/lib/kubelet/pods/3ac4a081-240b-441a-af97-d682fecb3ae7/volumes"
Feb 23 15:45:24 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00111|connmgr|INFO|br-ex<->unix#67: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:45:31 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.WW1cmW.mount: Succeeded.
Feb 23 15:45:32 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00112|connmgr|INFO|br-int<->unix#2: 402 flow_mods in the 46 s starting 52 s ago (176 adds, 226 deletes)
Feb 23 15:45:36 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.Ripr6J.mount: Succeeded.
Feb 23 15:45:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00113|connmgr|INFO|br-ex<->unix#71: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:45:42 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:42.256211534Z" level=info msg="Stopping pod sandbox: fefe52f4c0671eab0be744708d6f2fdf3974dd556275bbdf477c3bb9235603be" id=d8e928cf-bb84-4e7c-8084-f238bea96dc5 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 15:45:42 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:42.256251754Z" level=info msg="Stopped pod sandbox (already stopped): fefe52f4c0671eab0be744708d6f2fdf3974dd556275bbdf477c3bb9235603be" id=d8e928cf-bb84-4e7c-8084-f238bea96dc5 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 15:45:42 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:42.256623397Z" level=info msg="Removing pod sandbox: fefe52f4c0671eab0be744708d6f2fdf3974dd556275bbdf477c3bb9235603be" id=404ed91d-d68e-435f-bc79-451785c6e6b0 name=/runtime.v1.RuntimeService/RemovePodSandbox
Feb 23 15:45:42 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:42.317919801Z" level=info msg="Removed pod sandbox: fefe52f4c0671eab0be744708d6f2fdf3974dd556275bbdf477c3bb9235603be" id=404ed91d-d68e-435f-bc79-451785c6e6b0 name=/runtime.v1.RuntimeService/RemovePodSandbox
Feb 23 15:45:42 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:42.322451494Z" level=info msg="Stopping pod sandbox: e274954033b5b3194b29410871984fb0b7367e347a5b59b7f27389c2a0694918" id=099a9133-257a-42e1-854c-0615024a0d5e name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 15:45:42 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:42.322491093Z" level=info msg="Stopped pod sandbox (already stopped): e274954033b5b3194b29410871984fb0b7367e347a5b59b7f27389c2a0694918" id=099a9133-257a-42e1-854c-0615024a0d5e name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 15:45:42 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:42.322839645Z" level=info msg="Removing pod sandbox: e274954033b5b3194b29410871984fb0b7367e347a5b59b7f27389c2a0694918" id=d0f49430-6604-46b7-a164-5877f4138030 name=/runtime.v1.RuntimeService/RemovePodSandbox
Feb 23 15:45:42 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:45:42.338725810Z" level=info msg="Removed pod sandbox: e274954033b5b3194b29410871984fb0b7367e347a5b59b7f27389c2a0694918" id=d0f49430-6604-46b7-a164-5877f4138030 name=/runtime.v1.RuntimeService/RemovePodSandbox
Feb 23 15:45:54 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00114|connmgr|INFO|br-ex<->unix#80: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:46:06 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.wORtl5.mount: Succeeded.
Feb 23 15:46:09 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00115|connmgr|INFO|br-ex<->unix#84: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:46:24 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00116|connmgr|INFO|br-ex<->unix#93: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:46:32 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00117|connmgr|INFO|br-int<->unix#2: 102 flow_mods in the 49 s starting 57 s ago (67 adds, 35 deletes)
Feb 23 15:46:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00118|connmgr|INFO|br-ex<->unix#97: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:46:54 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00119|connmgr|INFO|br-ex<->unix#106: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:47:09 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00120|connmgr|INFO|br-ex<->unix#110: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:47:16 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.UmqdBW.mount: Succeeded.
Feb 23 15:47:24 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00121|connmgr|INFO|br-ex<->unix#119: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:47:32 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00122|connmgr|INFO|br-int<->unix#2: 62 flow_mods in the last 52 s (31 adds, 31 deletes)
Feb 23 15:47:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00123|connmgr|INFO|br-ex<->unix#123: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:47:42 ip-10-0-136-68 kubenswrapper[2125]: I0223 15:47:42.191269 2125 kubelet.go:1343] "Image garbage collection succeeded"
Feb 23 15:47:42 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:47:42.307678545Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be64c744b92d1b1df463d84d8277c38069f3ce4e8e95ce84d4a6ffac6dc53710" id=638727c3-d078-4f70-a70e-431a79bbeb76 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:47:42 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:47:42.307882095Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:884319081b9650108dd2a638921522ef3df145e924d566325c975ca28709af4c,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be64c744b92d1b1df463d84d8277c38069f3ce4e8e95ce84d4a6ffac6dc53710],Size_:350038335,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=638727c3-d078-4f70-a70e-431a79bbeb76 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:47:54 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00124|connmgr|INFO|br-ex<->unix#132: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:48:01 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.Ly9kUb.mount: Succeeded.
Feb 23 15:48:09 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00125|connmgr|INFO|br-ex<->unix#136: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:48:24 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00126|connmgr|INFO|br-ex<->unix#146: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:48:32 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00127|connmgr|INFO|br-int<->unix#2: 61 flow_mods in the 48 s starting 58 s ago (32 adds, 29 deletes)
Feb 23 15:48:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00128|connmgr|INFO|br-ex<->unix#150: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:48:54 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00129|connmgr|INFO|br-ex<->unix#159: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:49:09 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00130|connmgr|INFO|br-ex<->unix#163: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:49:24 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00131|connmgr|INFO|br-ex<->unix#172: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:49:32 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00132|connmgr|INFO|br-int<->unix#2: 38 flow_mods in the 39 s starting 49 s ago (16 adds, 22 deletes)
Feb 23 15:49:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00133|connmgr|INFO|br-ex<->unix#176: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:49:54 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00134|connmgr|INFO|br-ex<->unix#185: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:50:06 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.WRCi4T.mount: Succeeded.
Feb 23 15:50:09 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00135|connmgr|INFO|br-ex<->unix#189: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:50:24 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00136|connmgr|INFO|br-ex<->unix#198: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:50:32 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00137|connmgr|INFO|br-int<->unix#2: 18 flow_mods in the 45 s starting 46 s ago (11 adds, 7 deletes)
Feb 23 15:50:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00138|connmgr|INFO|br-ex<->unix#202: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:50:41 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.cjULbx.mount: Succeeded.
Feb 23 15:50:54 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00139|connmgr|INFO|br-ex<->unix#211: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:51:09 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00140|connmgr|INFO|br-ex<->unix#215: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:51:24 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00141|connmgr|INFO|br-ex<->unix#224: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:51:32 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00142|connmgr|INFO|br-int<->unix#2: 38 flow_mods in the last 49 s (18 adds, 20 deletes)
Feb 23 15:51:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00143|connmgr|INFO|br-ex<->unix#228: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:51:54 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00144|connmgr|INFO|br-ex<->unix#237: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:52:09 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00145|connmgr|INFO|br-ex<->unix#241: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:52:24 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00146|connmgr|INFO|br-ex<->unix#250: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:52:31 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.KwAM5A.mount: Succeeded.
Feb 23 15:52:32 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00147|connmgr|INFO|br-int<->unix#2: 22 flow_mods in the 50 s starting 57 s ago (12 adds, 10 deletes)
Feb 23 15:52:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00148|connmgr|INFO|br-ex<->unix#254: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:52:42 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:52:42.310503459Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be64c744b92d1b1df463d84d8277c38069f3ce4e8e95ce84d4a6ffac6dc53710" id=b860e3f2-159c-4fd7-8488-0e3c8a22b48a name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:52:42 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:52:42.310712896Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:884319081b9650108dd2a638921522ef3df145e924d566325c975ca28709af4c,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be64c744b92d1b1df463d84d8277c38069f3ce4e8e95ce84d4a6ffac6dc53710],Size_:350038335,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=b860e3f2-159c-4fd7-8488-0e3c8a22b48a name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:52:46 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.OyGjRF.mount: Succeeded.
Feb 23 15:52:54 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00149|connmgr|INFO|br-ex<->unix#263: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:53:09 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00150|connmgr|INFO|br-ex<->unix#267: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:53:24 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00151|connmgr|INFO|br-ex<->unix#277: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:53:32 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00152|connmgr|INFO|br-int<->unix#2: 36 flow_mods in the 40 s starting 41 s ago (17 adds, 19 deletes)
Feb 23 15:53:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00153|connmgr|INFO|br-ex<->unix#281: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:53:51 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.dJLyWM.mount: Succeeded.
Feb 23 15:53:54 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00154|connmgr|INFO|br-ex<->unix#290: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:54:09 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00155|connmgr|INFO|br-ex<->unix#294: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:54:24 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00156|connmgr|INFO|br-ex<->unix#303: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:54:31 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.wham7Z.mount: Succeeded.
Feb 23 15:54:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00157|connmgr|INFO|br-ex<->unix#307: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:54:46 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.b8nca6.mount: Succeeded.
Feb 23 15:54:54 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00158|connmgr|INFO|br-ex<->unix#316: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:55:01 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.q57Dw0.mount: Succeeded.
Feb 23 15:55:06 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.uJuM8m.mount: Succeeded.
Feb 23 15:55:09 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00159|connmgr|INFO|br-ex<->unix#320: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:55:17 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00160|connmgr|INFO|br-int<->unix#2: 14 flow_mods in the 5 s starting 10 s ago (7 adds, 7 deletes)
Feb 23 15:55:24 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00161|connmgr|INFO|br-ex<->unix#329: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:55:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00162|connmgr|INFO|br-ex<->unix#333: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:55:54 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00163|connmgr|INFO|br-ex<->unix#342: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:56:09 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00164|connmgr|INFO|br-ex<->unix#346: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:56:17 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00165|connmgr|INFO|br-int<->unix#2: 7 flow_mods 49 s ago (4 adds, 3 deletes)
Feb 23 15:56:24 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00166|connmgr|INFO|br-ex<->unix#355: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:56:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00167|connmgr|INFO|br-ex<->unix#359: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:56:54 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00168|connmgr|INFO|br-ex<->unix#368: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:57:09 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00169|connmgr|INFO|br-ex<->unix#372: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:57:21 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.pT4frX.mount: Succeeded.
Feb 23 15:57:24 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00170|connmgr|INFO|br-ex<->unix#381: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:57:25 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00171|connmgr|INFO|br-int<->unix#2: 1 flow_mods 10 s ago (1 adds)
Feb 23 15:57:29 ip-10-0-136-68 systemd[1]: Starting Cleanup of Temporary Directories...
Feb 23 15:57:29 ip-10-0-136-68 systemd-tmpfiles[23565]: [/usr/lib/tmpfiles.d/pkg-dbus-daemon.conf:1] Duplicate line for path "/var/lib/dbus", ignoring.
Feb 23 15:57:29 ip-10-0-136-68 systemd-tmpfiles[23565]: [/usr/lib/tmpfiles.d/tmp.conf:12] Duplicate line for path "/var/tmp", ignoring.
Feb 23 15:57:29 ip-10-0-136-68 systemd-tmpfiles[23565]: [/usr/lib/tmpfiles.d/var.conf:14] Duplicate line for path "/var/log", ignoring.
Feb 23 15:57:29 ip-10-0-136-68 systemd-tmpfiles[23565]: [/usr/lib/tmpfiles.d/var.conf:19] Duplicate line for path "/var/cache", ignoring.
Feb 23 15:57:29 ip-10-0-136-68 systemd-tmpfiles[23565]: [/usr/lib/tmpfiles.d/var.conf:21] Duplicate line for path "/var/lib", ignoring.
Feb 23 15:57:29 ip-10-0-136-68 systemd-tmpfiles[23565]: [/usr/lib/tmpfiles.d/var.conf:23] Duplicate line for path "/var/spool", ignoring.
Feb 23 15:57:29 ip-10-0-136-68 systemd[1]: systemd-tmpfiles-clean.service: Succeeded.
Feb 23 15:57:29 ip-10-0-136-68 systemd[1]: Started Cleanup of Temporary Directories.
Feb 23 15:57:29 ip-10-0-136-68 systemd[1]: systemd-tmpfiles-clean.service: Consumed 15ms CPU time
Feb 23 15:57:36 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.queziv.mount: Succeeded.
Feb 23 15:57:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00172|connmgr|INFO|br-ex<->unix#385: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:57:42 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:57:42.313677488Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be64c744b92d1b1df463d84d8277c38069f3ce4e8e95ce84d4a6ffac6dc53710" id=d7723ff4-d258-4e8a-aa39-5e1610a4a3a3 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:57:42 ip-10-0-136-68 crio[2086]: time="2023-02-23 15:57:42.313887760Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:884319081b9650108dd2a638921522ef3df145e924d566325c975ca28709af4c,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be64c744b92d1b1df463d84d8277c38069f3ce4e8e95ce84d4a6ffac6dc53710],Size_:350038335,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=d7723ff4-d258-4e8a-aa39-5e1610a4a3a3 name=/runtime.v1.ImageService/ImageStatus
Feb 23 15:57:54 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00173|connmgr|INFO|br-ex<->unix#394: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:58:09 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00174|connmgr|INFO|br-ex<->unix#398: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:58:24 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00175|connmgr|INFO|br-ex<->unix#408: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:58:25 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00176|connmgr|INFO|br-int<->unix#2: 1 flow_mods 56 s ago (1 deletes)
Feb 23 15:58:31 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.oNAHVV.mount: Succeeded.
Feb 23 15:58:36 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.mqwNXK.mount: Succeeded.
Feb 23 15:58:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00177|connmgr|INFO|br-ex<->unix#412: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:58:51 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.kmcjYE.mount: Succeeded.
Feb 23 15:58:54 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00178|connmgr|INFO|br-ex<->unix#421: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:59:01 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.gKZI6e.mount: Succeeded.
Feb 23 15:59:09 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00179|connmgr|INFO|br-ex<->unix#425: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:59:24 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00180|connmgr|INFO|br-ex<->unix#434: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:59:25 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00181|connmgr|INFO|br-int<->unix#2: 2 flow_mods in the 14 s starting 46 s ago (1 adds, 1 deletes)
Feb 23 15:59:26 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.89zDcx.mount: Succeeded.
Feb 23 15:59:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00182|connmgr|INFO|br-ex<->unix#438: 2 flow_mods in the last 0 s (2 adds)
Feb 23 15:59:54 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00183|connmgr|INFO|br-ex<->unix#447: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:00:09 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00184|connmgr|INFO|br-ex<->unix#451: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:00:24 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00185|connmgr|INFO|br-ex<->unix#460: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:00:25 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00186|connmgr|INFO|br-int<->unix#2: 8 flow_mods in the 5 s starting 25 s ago (4 adds, 4 deletes)
Feb 23 16:00:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00187|connmgr|INFO|br-ex<->unix#464: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:00:54 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00188|connmgr|INFO|br-ex<->unix#473: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:01:09 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00189|connmgr|INFO|br-ex<->unix#477: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:01:24 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00190|connmgr|INFO|br-ex<->unix#486: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:01:31 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.Isgssk.mount: Succeeded.
Feb 23 16:01:36 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.zNbDMs.mount: Succeeded.
Feb 23 16:01:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00191|connmgr|INFO|br-ex<->unix#490: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:01:40 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00192|connmgr|INFO|br-int<->unix#2: 1 flow_mods 10 s ago (1 adds)
Feb 23 16:01:54 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00193|connmgr|INFO|br-ex<->unix#499: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:01:56 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.aNT2pC.mount: Succeeded.
Feb 23 16:02:01 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.erHy8H.mount: Succeeded.
Feb 23 16:02:06 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.433xbF.mount: Succeeded.
Feb 23 16:02:09 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00194|connmgr|INFO|br-ex<->unix#503: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:02:24 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00195|connmgr|INFO|br-ex<->unix#512: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:02:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00196|connmgr|INFO|br-ex<->unix#516: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:02:40 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00197|connmgr|INFO|br-int<->unix#2: 1 flow_mods 56 s ago (1 deletes)
Feb 23 16:02:42 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:02:42.316708253Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be64c744b92d1b1df463d84d8277c38069f3ce4e8e95ce84d4a6ffac6dc53710" id=96875a73-d723-4aed-9d01-959135a731be name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:02:42 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:02:42.316868835Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:884319081b9650108dd2a638921522ef3df145e924d566325c975ca28709af4c,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be64c744b92d1b1df463d84d8277c38069f3ce4e8e95ce84d4a6ffac6dc53710],Size_:350038335,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=96875a73-d723-4aed-9d01-959135a731be name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:02:46 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.jKlnon.mount: Succeeded.
Feb 23 16:02:54 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00198|connmgr|INFO|br-ex<->unix#525: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:03:09 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00199|connmgr|INFO|br-ex<->unix#529: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:03:11 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.IjQkUf.mount: Succeeded.
Feb 23 16:03:16 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.9zzP8p.mount: Succeeded.
Feb 23 16:03:24 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00200|connmgr|INFO|br-ex<->unix#539: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:03:36 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.6QWZhc.mount: Succeeded.
Feb 23 16:03:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00201|connmgr|INFO|br-ex<->unix#543: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:03:54 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00202|connmgr|INFO|br-ex<->unix#552: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:04:06 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.qynEU4.mount: Succeeded.
Feb 23 16:04:09 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00203|connmgr|INFO|br-ex<->unix#556: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:04:24 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00204|connmgr|INFO|br-ex<->unix#565: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:04:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00205|connmgr|INFO|br-ex<->unix#569: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:04:54 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00206|connmgr|INFO|br-ex<->unix#578: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:05:09 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00207|connmgr|INFO|br-ex<->unix#582: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:05:16 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00208|connmgr|INFO|br-int<->unix#2: 1 flow_mods 10 s ago (1 adds)
Feb 23 16:05:24 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00209|connmgr|INFO|br-ex<->unix#591: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:05:31 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.VDdaAA.mount: Succeeded.
Feb 23 16:05:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00210|connmgr|INFO|br-ex<->unix#595: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:05:54 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00211|connmgr|INFO|br-ex<->unix#604: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:06:09 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00212|connmgr|INFO|br-ex<->unix#608: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:06:16 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00213|connmgr|INFO|br-int<->unix#2: 1 flow_mods 56 s ago (1 deletes)
Feb 23 16:06:24 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00214|connmgr|INFO|br-ex<->unix#617: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:06:31 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.XYlaLL.mount: Succeeded.
Feb 23 16:06:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00215|connmgr|INFO|br-ex<->unix#621: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:06:54 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00216|connmgr|INFO|br-ex<->unix#630: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:07:01 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.aTL0CX.mount: Succeeded.
Feb 23 16:07:09 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00217|connmgr|INFO|br-ex<->unix#634: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:07:24 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00218|connmgr|INFO|br-ex<->unix#643: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:07:26 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.tYSRi1.mount: Succeeded.
Feb 23 16:07:29 ip-10-0-136-68 systemd[1]: run-runc-434bc76ecbe42021fc40f1c0739badd563ca7f187e1c106f228683f5b566bff3-runc.d8bZUC.mount: Succeeded.
Feb 23 16:07:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00219|connmgr|INFO|br-ex<->unix#647: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:07:42 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:07:42.320175240Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be64c744b92d1b1df463d84d8277c38069f3ce4e8e95ce84d4a6ffac6dc53710" id=f4a13e5b-38d4-49a5-924e-7b53be5e0f03 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:07:42 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:07:42.320394554Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:884319081b9650108dd2a638921522ef3df145e924d566325c975ca28709af4c,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be64c744b92d1b1df463d84d8277c38069f3ce4e8e95ce84d4a6ffac6dc53710],Size_:350038335,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=f4a13e5b-38d4-49a5-924e-7b53be5e0f03 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:07:49 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00220|connmgr|INFO|br-int<->unix#2: 1 flow_mods 10 s ago (1 adds)
Feb 23 16:07:54 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00221|connmgr|INFO|br-ex<->unix#656: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:08:09 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00222|connmgr|INFO|br-ex<->unix#660: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:08:24 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00223|connmgr|INFO|br-ex<->unix#670: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:08:31 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.4wa34n.mount: Succeeded.
Feb 23 16:08:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00224|connmgr|INFO|br-ex<->unix#674: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:08:41 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.yrGD5e.mount: Succeeded.
Feb 23 16:08:49 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00225|connmgr|INFO|br-int<->unix#2: 1 flow_mods 56 s ago (1 deletes)
Feb 23 16:08:54 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00226|connmgr|INFO|br-ex<->unix#683: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:09:09 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00227|connmgr|INFO|br-ex<->unix#687: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:09:24 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00228|connmgr|INFO|br-ex<->unix#696: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:09:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00229|connmgr|INFO|br-ex<->unix#700: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:09:49 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00230|connmgr|INFO|br-int<->unix#2: 2 flow_mods in the 14 s starting 34 s ago (1 adds, 1 deletes)
Feb 23 16:09:54 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00231|connmgr|INFO|br-ex<->unix#709: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:10:09 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00232|connmgr|INFO|br-ex<->unix#713: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:10:24 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00233|connmgr|INFO|br-ex<->unix#722: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:10:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00234|connmgr|INFO|br-ex<->unix#726: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:10:54 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00235|connmgr|INFO|br-ex<->unix#735: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:11:09 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00236|connmgr|INFO|br-ex<->unix#739: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:11:16 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.SRBoaW.mount: Succeeded.
Feb 23 16:11:24 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00237|connmgr|INFO|br-ex<->unix#748: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:11:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00238|connmgr|INFO|br-ex<->unix#752: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:11:49 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00239|connmgr|INFO|br-int<->unix#2: 1 flow_mods 10 s ago (1 adds)
Feb 23 16:11:54 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00240|connmgr|INFO|br-ex<->unix#761: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:12:01 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.w7cT9A.mount: Succeeded.
Feb 23 16:12:06 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.d1NzSj.mount: Succeeded.
Feb 23 16:12:09 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00241|connmgr|INFO|br-ex<->unix#765: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:12:24 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00242|connmgr|INFO|br-ex<->unix#774: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:12:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00243|connmgr|INFO|br-ex<->unix#778: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:12:40 ip-10-0-136-68 NetworkManager[1149]: [1677168760.6313] dhcp4 (br-ex): state changed new lease, address=10.0.136.68
Feb 23 16:12:42 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:12:42.322673942Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be64c744b92d1b1df463d84d8277c38069f3ce4e8e95ce84d4a6ffac6dc53710" id=814e4a9f-1899-41e0-ad35-c331f8fbdd8d name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:12:42 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:12:42.322840786Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:884319081b9650108dd2a638921522ef3df145e924d566325c975ca28709af4c,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be64c744b92d1b1df463d84d8277c38069f3ce4e8e95ce84d4a6ffac6dc53710],Size_:350038335,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=814e4a9f-1899-41e0-ad35-c331f8fbdd8d name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:12:49 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00244|connmgr|INFO|br-int<->unix#2: 1 flow_mods 55 s ago (1 deletes)
Feb 23 16:12:54 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00245|connmgr|INFO|br-ex<->unix#787: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:13:09 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00246|connmgr|INFO|br-ex<->unix#791: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:13:24 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00247|connmgr|INFO|br-ex<->unix#801: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:13:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00248|connmgr|INFO|br-ex<->unix#805: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:13:54 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00249|connmgr|INFO|br-ex<->unix#814: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:14:09 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00250|connmgr|INFO|br-ex<->unix#818: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:14:24 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00251|connmgr|INFO|br-ex<->unix#827: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:14:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00252|connmgr|INFO|br-ex<->unix#831: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:14:54 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00253|connmgr|INFO|br-ex<->unix#840: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:15:09 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00254|connmgr|INFO|br-ex<->unix#844: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:15:10 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00255|connmgr|INFO|br-int<->unix#2: 8 flow_mods in the 4 s starting 10 s ago (4 adds, 4 deletes)
Feb 23 16:15:24 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00256|connmgr|INFO|br-ex<->unix#853: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:15:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00257|connmgr|INFO|br-ex<->unix#857: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:15:54 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00258|connmgr|INFO|br-ex<->unix#866: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:16:09 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00259|connmgr|INFO|br-ex<->unix#870: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:16:24 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00260|connmgr|INFO|br-ex<->unix#879: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:16:31 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.QK2G77.mount: Succeeded.
Feb 23 16:16:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00261|connmgr|INFO|br-ex<->unix#883: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:16:54 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00262|connmgr|INFO|br-ex<->unix#892: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:17:09 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00263|connmgr|INFO|br-ex<->unix#896: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:17:24 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00264|connmgr|INFO|br-ex<->unix#905: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:17:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00265|connmgr|INFO|br-ex<->unix#909: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:17:42 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:17:42.325158414Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be64c744b92d1b1df463d84d8277c38069f3ce4e8e95ce84d4a6ffac6dc53710" id=57203be6-a78f-4e99-8f47-9f23874b364b name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:17:42 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:17:42.325369134Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:884319081b9650108dd2a638921522ef3df145e924d566325c975ca28709af4c,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be64c744b92d1b1df463d84d8277c38069f3ce4e8e95ce84d4a6ffac6dc53710],Size_:350038335,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=57203be6-a78f-4e99-8f47-9f23874b364b name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:17:46 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.Dcq1lf.mount: Succeeded.
Feb 23 16:17:54 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00266|connmgr|INFO|br-ex<->unix#918: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:18:01 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.9EYlgn.mount: Succeeded.
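The CRI-O `ImageStatus` entries above repeat the same digest and size every five minutes. As an analysis aid (not part of any tooling shown in the log), a minimal sketch that pulls the digest and `Size_` field out of one of these lines, so repeated checks can be de-duplicated when skimming the journal:

```python
import re

def image_status(line):
    """Extract (digest, size_bytes) from a CRI-O 'Image status' log line."""
    digest = re.search(r"sha256:([0-9a-f]{64})", line)
    size = re.search(r"Size_:(\d+)", line)
    if not (digest and size):
        return None
    return digest.group(1), int(size.group(1))

line = ('msg="Image status: &ImageStatusResponse{Image:&Image{'
        'RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@'
        'sha256:be64c744b92d1b1df463d84d8277c38069f3ce4e'
        '8e95ce84d4a6ffac6dc53710],Size_:350038335,}}"')
print(image_status(line))
```

The regexes key only on the `sha256:` digest and the `Size_:` field, so they tolerate the rest of the protobuf-style dump changing shape.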
Feb 23 16:18:09 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00267|connmgr|INFO|br-ex<->unix#922: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:18:24 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00268|connmgr|INFO|br-ex<->unix#932: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:18:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00269|connmgr|INFO|br-ex<->unix#936: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:18:54 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00270|connmgr|INFO|br-ex<->unix#945: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:18:58 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00271|connmgr|INFO|br-int<->unix#2: 1 flow_mods 10 s ago (1 adds)
Feb 23 16:19:09 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00272|connmgr|INFO|br-ex<->unix#949: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:19:24 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00273|connmgr|INFO|br-ex<->unix#958: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:19:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00274|connmgr|INFO|br-ex<->unix#962: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:19:54 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00275|connmgr|INFO|br-ex<->unix#971: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:19:58 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00276|connmgr|INFO|br-int<->unix#2: 5 flow_mods in the 55 s starting 56 s ago (2 adds, 3 deletes)
Feb 23 16:20:01 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.4zsrqD.mount: Succeeded.
Feb 23 16:20:09 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00277|connmgr|INFO|br-ex<->unix#975: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:20:24 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00278|connmgr|INFO|br-ex<->unix#984: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:20:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00279|connmgr|INFO|br-ex<->unix#988: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:20:54 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00280|connmgr|INFO|br-ex<->unix#997: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:21:09 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00281|connmgr|INFO|br-ex<->unix#1001: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:21:24 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00282|connmgr|INFO|br-ex<->unix#1010: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:21:31 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.FC0jMd.mount: Succeeded.
Feb 23 16:21:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00283|connmgr|INFO|br-ex<->unix#1014: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:21:41 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.x8md6F.mount: Succeeded.
Feb 23 16:21:46 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.2D8ezp.mount: Succeeded.
Feb 23 16:21:54 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00284|connmgr|INFO|br-ex<->unix#1023: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:22:09 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00285|connmgr|INFO|br-ex<->unix#1027: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:22:14 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00286|connmgr|INFO|br-int<->unix#2: 1 flow_mods 10 s ago (1 adds)
Feb 23 16:22:24 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00287|connmgr|INFO|br-ex<->unix#1036: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:22:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00288|connmgr|INFO|br-ex<->unix#1040: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:22:42 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:22:42.328069743Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be64c744b92d1b1df463d84d8277c38069f3ce4e8e95ce84d4a6ffac6dc53710" id=8d17cb41-8422-4077-98cc-9fec3666c216 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:22:42 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:22:42.328252179Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:884319081b9650108dd2a638921522ef3df145e924d566325c975ca28709af4c,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be64c744b92d1b1df463d84d8277c38069f3ce4e8e95ce84d4a6ffac6dc53710],Size_:350038335,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=8d17cb41-8422-4077-98cc-9fec3666c216 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:22:54 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00289|connmgr|INFO|br-ex<->unix#1049: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:23:09 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00290|connmgr|INFO|br-ex<->unix#1053: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:23:14 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00291|connmgr|INFO|br-int<->unix#2: 1 flow_mods 56 s ago (1 deletes)
Feb 23 16:23:24 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00292|connmgr|INFO|br-ex<->unix#1063: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:23:31 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.2ev42t.mount: Succeeded.
Feb 23 16:23:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00293|connmgr|INFO|br-ex<->unix#1067: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:23:54 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00294|connmgr|INFO|br-ex<->unix#1076: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:24:09 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00295|connmgr|INFO|br-ex<->unix#1080: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:24:16 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.p1xYrO.mount: Succeeded.
Feb 23 16:24:24 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00296|connmgr|INFO|br-ex<->unix#1089: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:24:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00297|connmgr|INFO|br-ex<->unix#1093: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:24:41 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.ozwY3Z.mount: Succeeded.
Feb 23 16:24:54 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00298|connmgr|INFO|br-ex<->unix#1102: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:25:09 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00299|connmgr|INFO|br-ex<->unix#1106: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:25:24 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00300|connmgr|INFO|br-ex<->unix#1115: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:25:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00301|connmgr|INFO|br-ex<->unix#1119: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:25:54 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00302|connmgr|INFO|br-ex<->unix#1128: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:26:09 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00303|connmgr|INFO|br-ex<->unix#1132: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:26:16 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.2pcZS2.mount: Succeeded.
Feb 23 16:26:21 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.4SPdAG.mount: Succeeded.
Feb 23 16:26:24 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00304|connmgr|INFO|br-ex<->unix#1141: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:26:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00305|connmgr|INFO|br-ex<->unix#1145: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:26:54 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00306|connmgr|INFO|br-ex<->unix#1154: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:26:56 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.1Er2sL.mount: Succeeded.
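The bulk of this journal is the same `ovs-vswitchd` connmgr pattern: `br-ex` gets 2 flow_mods roughly every 15 seconds, `br-int` a periodic trickle. As a hypothetical analysis helper (not part of OVS itself), a minimal sketch that tallies flow_mods per bridge from lines shaped like the ones above:

```python
import re

# Matches e.g. "ovs|00226|connmgr|INFO|br-ex<->unix#683: 2 flow_mods ..."
CONNMGR_RE = re.compile(
    r"ovs\|\d+\|connmgr\|INFO\|(?P<bridge>[\w-]+)<->\S+: "
    r"(?P<count>\d+) flow_mods"
)

def count_flow_mods(lines):
    """Return a {bridge: total_flow_mods} tally from journal lines."""
    totals = {}
    for line in lines:
        m = CONNMGR_RE.search(line)
        if m:
            bridge = m.group("bridge")
            totals[bridge] = totals.get(bridge, 0) + int(m.group("count"))
    return totals

sample = [
    "Feb 23 16:08:54 ip-10-0-136-68 ovs-vswitchd[1135]: "
    "ovs|00226|connmgr|INFO|br-ex<->unix#683: 2 flow_mods in the last 0 s (2 adds)",
    "Feb 23 16:08:49 ip-10-0-136-68 ovs-vswitchd[1135]: "
    "ovs|00225|connmgr|INFO|br-int<->unix#2: 1 flow_mods 56 s ago (1 deletes)",
]
print(count_flow_mods(sample))  # → {'br-ex': 2, 'br-int': 1}
```

Feeding it the whole journal (e.g. `journalctl -u ovs-vswitchd --no-pager` output) would show whether the per-bridge churn rate changes around the drain at 16:30.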
Feb 23 16:27:09 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00307|connmgr|INFO|br-ex<->unix#1158: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:27:24 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00308|connmgr|INFO|br-ex<->unix#1167: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:27:36 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.ZAcykB.mount: Succeeded.
Feb 23 16:27:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00309|connmgr|INFO|br-ex<->unix#1171: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:27:41 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.ZWQ3aO.mount: Succeeded.
Feb 23 16:27:42 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:27:42.330810920Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be64c744b92d1b1df463d84d8277c38069f3ce4e8e95ce84d4a6ffac6dc53710" id=8bf42f40-57c3-485c-a859-51c0f0321b70 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:27:42 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:27:42.330989693Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:884319081b9650108dd2a638921522ef3df145e924d566325c975ca28709af4c,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be64c744b92d1b1df463d84d8277c38069f3ce4e8e95ce84d4a6ffac6dc53710],Size_:350038335,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=8bf42f40-57c3-485c-a859-51c0f0321b70 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:27:54 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00310|connmgr|INFO|br-ex<->unix#1180: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:28:09 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00311|connmgr|INFO|br-ex<->unix#1184: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:28:24 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00312|connmgr|INFO|br-ex<->unix#1194: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:28:31 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.vauQvJ.mount: Succeeded.
Feb 23 16:28:36 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.ZPM33Q.mount: Succeeded.
Feb 23 16:28:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00313|connmgr|INFO|br-ex<->unix#1198: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:28:54 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00314|connmgr|INFO|br-ex<->unix#1207: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:29:06 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.sYSOz2.mount: Succeeded.
Feb 23 16:29:09 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00315|connmgr|INFO|br-ex<->unix#1211: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:29:16 ip-10-0-136-68 systemd[1]: run-runc-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9-runc.jHRBVe.mount: Succeeded.
Feb 23 16:29:24 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00316|connmgr|INFO|br-ex<->unix#1220: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:29:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00317|connmgr|INFO|br-ex<->unix#1224: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:29:54 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00318|connmgr|INFO|br-ex<->unix#1233: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:30:01 ip-10-0-136-68 systemd[1]: Starting rpm-ostree System Management Daemon...
Feb 23 16:30:01 ip-10-0-136-68 rpm-ostree[62359]: Reading config file '/etc/rpm-ostreed.conf'
Feb 23 16:30:01 ip-10-0-136-68 rpm-ostree[62359]: In idle state; will auto-exit in 60 seconds
Feb 23 16:30:01 ip-10-0-136-68 systemd[1]: Started rpm-ostree System Management Daemon.
Feb 23 16:30:01 ip-10-0-136-68 rpm-ostree[62359]: client(id:machine-config-operator dbus:1.311 unit:crio-69f9cd415e4e9bca49ba6ec5503ea9fe66e652868e3a2bde2c26833f37e1b7c3.scope uid:0) added; new total=1
Feb 23 16:30:01 ip-10-0-136-68 rpm-ostree[62359]: client(id:machine-config-operator dbus:1.311 unit:crio-69f9cd415e4e9bca49ba6ec5503ea9fe66e652868e3a2bde2c26833f37e1b7c3.scope uid:0) vanished; remaining=0
Feb 23 16:30:01 ip-10-0-136-68 rpm-ostree[62359]: In idle state; will auto-exit in 64 seconds
Feb 23 16:30:01 ip-10-0-136-68 root[62374]: machine-config-daemon[2269]: Starting update from rendered-worker-bf4ec4b50e39e8c4372076d80a2ff138 to rendered-worker-897f2f3c67d20d57713bd47f68251b36: &{osUpdate:false kargs:false fips:false passwd:false files:false units:false kernelType:true extensions:false}
Feb 23 16:30:01 ip-10-0-136-68 root[62375]: machine-config-daemon[2269]: Update prepared; requesting cordon and drain via annotation to controller
Feb 23 16:30:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:05.060771 2125 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-monitoring/prometheus-operator-admission-webhook-6854f48657-9dfhm]
Feb 23 16:30:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:05.060950 2125 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/prometheus-operator-admission-webhook-6854f48657-9dfhm" podUID=27c4fe09-e4f7-452d-9364-2daec20710ff containerName="prometheus-operator-admission-webhook" containerID="cri-o://fd7ac301e9b79402445ce11600eb0d6f51163eca247e97b4c659e39f4b762c35" gracePeriod=30
Feb 23 16:30:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:05.061397701Z" level=info msg="Stopping container: fd7ac301e9b79402445ce11600eb0d6f51163eca247e97b4c659e39f4b762c35 (timeout: 30s)" id=b8294bcc-030d-4dd3-b9d5-b8143bc3e5eb name=/runtime.v1.RuntimeService/StopContainer
Feb 23 16:30:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:05.068935 2125 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-monitoring/telemeter-client-675d948766-44b26]
Feb 23 16:30:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:05.069112 2125 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/telemeter-client-675d948766-44b26" podUID=38f8ec67-c68b-4783-9d06-95eb33506398 containerName="telemeter-client" containerID="cri-o://b281e29e0f86a35608fd84582645011098d8b627136b99702b86457377d0f7da" gracePeriod=30
Feb 23 16:30:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:05.069204 2125 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/telemeter-client-675d948766-44b26" podUID=38f8ec67-c68b-4783-9d06-95eb33506398 containerName="kube-rbac-proxy" containerID="cri-o://94bdf33dfcce86819453e831fc906daae77b4db41db2f8adcdcf6512b546c23e" gracePeriod=30
Feb 23 16:30:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:05.069245 2125 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/telemeter-client-675d948766-44b26" podUID=38f8ec67-c68b-4783-9d06-95eb33506398 containerName="reload" containerID="cri-o://29314d4a1c76968c319f38a1e8ff6536eb151d6c7d7ebcba484834db4b4f1152" gracePeriod=30
Feb 23 16:30:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:05.069465854Z" level=info msg="Stopping container: 29314d4a1c76968c319f38a1e8ff6536eb151d6c7d7ebcba484834db4b4f1152 (timeout: 30s)" id=b2342af9-f2a3-4e15-820e-082f3e2211f3 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 16:30:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:05.069646693Z" level=info msg="Stopping container: b281e29e0f86a35608fd84582645011098d8b627136b99702b86457377d0f7da (timeout: 30s)" id=7834a135-e6af-4cec-90b4-9217ee0c064f name=/runtime.v1.RuntimeService/StopContainer
Feb 23 16:30:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:05.069765502Z" level=info msg="Stopping container: 94bdf33dfcce86819453e831fc906daae77b4db41db2f8adcdcf6512b546c23e (timeout: 30s)" id=cf8742c1-af57-4658-8a08-164cf3327748 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 16:30:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:05.078464 2125 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-image-registry/image-registry-5f79c9c848-f9wqq]
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: crio-fd7ac301e9b79402445ce11600eb0d6f51163eca247e97b4c659e39f4b762c35.scope: Succeeded.
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: crio-fd7ac301e9b79402445ce11600eb0d6f51163eca247e97b4c659e39f4b762c35.scope: Consumed 3.238s CPU time
Feb 23 16:30:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:05.084357 2125 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-monitoring/thanos-querier-8654d9f96d-892l6]
Feb 23 16:30:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:05.084567 2125 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/thanos-querier-8654d9f96d-892l6" podUID=b64af5e5-e41c-4886-a88b-39556a3f4b21 containerName="thanos-query" containerID="cri-o://9a1b60140efe12b393780f2fd66235c0cf6be125b52b60b9605fbccb504832c8" gracePeriod=120
Feb 23 16:30:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:05.084666 2125 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/thanos-querier-8654d9f96d-892l6" podUID=b64af5e5-e41c-4886-a88b-39556a3f4b21 containerName="kube-rbac-proxy-metrics" containerID="cri-o://a74ef9ae477239bf005007e9df5cd3c09410b16013de777146bb48eb8458d2ec" gracePeriod=120
Feb 23 16:30:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:05.084709 2125 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/thanos-querier-8654d9f96d-892l6" podUID=b64af5e5-e41c-4886-a88b-39556a3f4b21 containerName="kube-rbac-proxy-rules" containerID="cri-o://ad4569ece3c8dbfbd816f5aae7fb4f4f3ee974e02cab9086fd8e502453513833" gracePeriod=120
Feb 23 16:30:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:05.084752 2125 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/thanos-querier-8654d9f96d-892l6" podUID=b64af5e5-e41c-4886-a88b-39556a3f4b21 containerName="prom-label-proxy" containerID="cri-o://6ec22d9e3a7948b71a8662137640422227165ab747ce1a332f058d44afb09b79" gracePeriod=120
Feb 23 16:30:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:05.084792 2125 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/thanos-querier-8654d9f96d-892l6" podUID=b64af5e5-e41c-4886-a88b-39556a3f4b21 containerName="kube-rbac-proxy" containerID="cri-o://d461b2a5bdb94e8630b89b525f3f963207c52b64660501567576486b96284064" gracePeriod=120
Feb 23 16:30:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:05.084857 2125 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/thanos-querier-8654d9f96d-892l6" podUID=b64af5e5-e41c-4886-a88b-39556a3f4b21 containerName="oauth-proxy" containerID="cri-o://bbf1306bcd9ff05a37236005b802aa46a8396168bcd305011cac3c3232d499a5" gracePeriod=120
Feb 23 16:30:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:05.086866 2125 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-monitoring/prometheus-k8s-0]
Feb 23 16:30:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:05.087099 2125 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID=44953449-e4f6-497a-b6bf-73fbdc9381b7 containerName="prometheus" containerID="cri-o://ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9" gracePeriod=600
Feb 23 16:30:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:05.087217 2125 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID=44953449-e4f6-497a-b6bf-73fbdc9381b7 containerName="kube-rbac-proxy-thanos" containerID="cri-o://6fb9e075075963e536ffbe1ace0af85fd92436c790ae86c0801797b201e917a0" gracePeriod=600
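The machine-config-daemon line at 16:30:01 logs its change summary as a Go struct dump, `&{osUpdate:false kargs:false ... kernelType:true extensions:false}`; here only `kernelType` changed, which is what triggers the cordon and drain that follows. As a hypothetical helper (not MCD tooling), a minimal sketch that turns that summary into a dict so a script can list what actually changed:

```python
import re

def parse_mcd_diff(msg):
    """Parse the MCD '&{field:bool ...}' change summary into {field: bool}."""
    body = re.search(r"&\{([^}]*)\}", msg)
    if not body:
        return {}
    pairs = (field.split(":", 1) for field in body.group(1).split())
    return {k: v == "true" for k, v in pairs}

msg = ("Starting update from rendered-worker-bf4ec4b50e39e8c4372076d80a2ff138 "
       "to rendered-worker-897f2f3c67d20d57713bd47f68251b36: "
       "&{osUpdate:false kargs:false fips:false passwd:false files:false "
       "units:false kernelType:true extensions:false}")
changed = [k for k, v in parse_mcd_diff(msg).items() if v]
print(changed)  # → ['kernelType']
```

The parser only assumes space-separated `name:true|false` fields inside the braces, which is all the logged summary contains.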
Feb 23 16:30:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:05.087269 2125 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID=44953449-e4f6-497a-b6bf-73fbdc9381b7 containerName="kube-rbac-proxy" containerID="cri-o://f6ec70951d846db90a5f182543bbcee52a7a42dd83b37387fdcfa08f18d7c2d4" gracePeriod=600
Feb 23 16:30:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:05.087332 2125 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID=44953449-e4f6-497a-b6bf-73fbdc9381b7 containerName="prometheus-proxy" containerID="cri-o://e0786e1c591679de011e49dbda0b424701e11e394b89019bfe08d72435181f9b" gracePeriod=600
Feb 23 16:30:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:05.087378 2125 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID=44953449-e4f6-497a-b6bf-73fbdc9381b7 containerName="thanos-sidecar" containerID="cri-o://0ee72837a22915d2270060bc8bc01dcf5dfd455dcbb42126e1c67b30b3e100df" gracePeriod=600
Feb 23 16:30:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:05.087419 2125 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID=44953449-e4f6-497a-b6bf-73fbdc9381b7 containerName="config-reloader" containerID="cri-o://84be385df2fdf41e3622fffb41e65e3068ec0852f16638e9b6669515b7c875fd" gracePeriod=600
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: crio-conmon-fd7ac301e9b79402445ce11600eb0d6f51163eca247e97b4c659e39f4b762c35.scope: Succeeded.
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: crio-conmon-fd7ac301e9b79402445ce11600eb0d6f51163eca247e97b4c659e39f4b762c35.scope: Consumed 20ms CPU time
Feb 23 16:30:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:05.094707362Z" level=info msg="Stopping container: 9a1b60140efe12b393780f2fd66235c0cf6be125b52b60b9605fbccb504832c8 (timeout: 120s)" id=eb0925aa-01d6-472a-989f-bbd13a48a506 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 16:30:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:05.094777515Z" level=info msg="Stopping container: 84be385df2fdf41e3622fffb41e65e3068ec0852f16638e9b6669515b7c875fd (timeout: 600s)" id=84251bd0-26de-49f2-a49c-d80427eb1038 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 16:30:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:05.094962106Z" level=info msg="Stopping container: bbf1306bcd9ff05a37236005b802aa46a8396168bcd305011cac3c3232d499a5 (timeout: 120s)" id=b89e5369-6fde-47bc-8f33-7978f11c8c2d name=/runtime.v1.RuntimeService/StopContainer
Feb 23 16:30:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:05.095041052Z" level=info msg="Stopping container: a74ef9ae477239bf005007e9df5cd3c09410b16013de777146bb48eb8458d2ec (timeout: 120s)" id=6074808d-7104-4372-95bb-c31c518c7049 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 16:30:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:05.095123281Z" level=info msg="Stopping container: ad4569ece3c8dbfbd816f5aae7fb4f4f3ee974e02cab9086fd8e502453513833 (timeout: 120s)" id=e935e69b-3df9-47f5-82a0-af3dde7ba3ff name=/runtime.v1.RuntimeService/StopContainer
Feb 23 16:30:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:05.095193377Z" level=info msg="Stopping container: 6ec22d9e3a7948b71a8662137640422227165ab747ce1a332f058d44afb09b79 (timeout: 120s)" id=a9dc5c11-9c52-453a-b4af-9d8fcc6afdf3 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 16:30:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:05.095257603Z" level=info msg="Stopping container: d461b2a5bdb94e8630b89b525f3f963207c52b64660501567576486b96284064 (timeout: 120s)" id=5b5653fe-e89e-42ae-8a81-7741f05389b6 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 16:30:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:05.095391911Z" level=info msg="Stopping container: f6ec70951d846db90a5f182543bbcee52a7a42dd83b37387fdcfa08f18d7c2d4 (timeout: 600s)" id=1bd68f29-3e0a-4079-a260-537060a16a14 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 16:30:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:05.095455794Z" level=info msg="Stopping container: ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9 (timeout: 600s)" id=9bf787fb-de27-4093-b5ad-00891da5e550 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 16:30:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:05.095807331Z" level=info msg="Stopping container: 6fb9e075075963e536ffbe1ace0af85fd92436c790ae86c0801797b201e917a0 (timeout: 600s)" id=362149d9-76a9-4805-bcdc-ceccf77a2aee name=/runtime.v1.RuntimeService/StopContainer
Feb 23 16:30:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:05.096916179Z" level=info msg="Stopping container: 0ee72837a22915d2270060bc8bc01dcf5dfd455dcbb42126e1c67b30b3e100df (timeout: 600s)" id=b4e1dda2-79c9-4ace-a5bf-28806d7f1a2d name=/runtime.v1.RuntimeService/StopContainer
Feb 23 16:30:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:05.097101931Z" level=info msg="Stopping container: e0786e1c591679de011e49dbda0b424701e11e394b89019bfe08d72435181f9b (timeout: 600s)" id=5d474411-704d-4771-b99c-0ecf6493e95c name=/runtime.v1.RuntimeService/StopContainer
Feb 23 16:30:05 ip-10-0-136-68 conmon[5619]: conmon b281e29e0f86a35608fd : container 5631 exited with status 2
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: run-runc-acc9bfa13f55b1fc1f8a861016673fb26097df2dec005d1acfd1ff58bbb015ea-runc.jjZw67.mount: Succeeded.
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: crio-conmon-b281e29e0f86a35608fd84582645011098d8b627136b99702b86457377d0f7da.scope: Succeeded.
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: crio-conmon-b281e29e0f86a35608fd84582645011098d8b627136b99702b86457377d0f7da.scope: Consumed 21ms CPU time
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: crio-b281e29e0f86a35608fd84582645011098d8b627136b99702b86457377d0f7da.scope: Succeeded.
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: crio-b281e29e0f86a35608fd84582645011098d8b627136b99702b86457377d0f7da.scope: Consumed 513ms CPU time
Feb 23 16:30:05 ip-10-0-136-68 conmon[5952]: conmon 29314d4a1c76968c319f : container 5964 exited with status 2
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: crio-29314d4a1c76968c319f38a1e8ff6536eb151d6c7d7ebcba484834db4b4f1152.scope: Succeeded.
Feb 23 16:30:05 ip-10-0-136-68 conmon[8385]: conmon 84be385df2fdf41e3622 : container 8399 exited with status 2
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: crio-29314d4a1c76968c319f38a1e8ff6536eb151d6c7d7ebcba484834db4b4f1152.scope: Consumed 69ms CPU time
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: crio-conmon-29314d4a1c76968c319f38a1e8ff6536eb151d6c7d7ebcba484834db4b4f1152.scope: Succeeded.
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: crio-conmon-29314d4a1c76968c319f38a1e8ff6536eb151d6c7d7ebcba484834db4b4f1152.scope: Consumed 20ms CPU time
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: crio-84be385df2fdf41e3622fffb41e65e3068ec0852f16638e9b6669515b7c875fd.scope: Succeeded.
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: crio-84be385df2fdf41e3622fffb41e65e3068ec0852f16638e9b6669515b7c875fd.scope: Consumed 288ms CPU time
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: crio-conmon-84be385df2fdf41e3622fffb41e65e3068ec0852f16638e9b6669515b7c875fd.scope: Succeeded.
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: crio-conmon-84be385df2fdf41e3622fffb41e65e3068ec0852f16638e9b6669515b7c875fd.scope: Consumed 20ms CPU time
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: crio-ad4569ece3c8dbfbd816f5aae7fb4f4f3ee974e02cab9086fd8e502453513833.scope: Succeeded.
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: crio-ad4569ece3c8dbfbd816f5aae7fb4f4f3ee974e02cab9086fd8e502453513833.scope: Consumed 115ms CPU time
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: crio-9a1b60140efe12b393780f2fd66235c0cf6be125b52b60b9605fbccb504832c8.scope: Succeeded.
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: crio-9a1b60140efe12b393780f2fd66235c0cf6be125b52b60b9605fbccb504832c8.scope: Consumed 1.566s CPU time
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: crio-conmon-9a1b60140efe12b393780f2fd66235c0cf6be125b52b60b9605fbccb504832c8.scope: Succeeded.
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: crio-conmon-9a1b60140efe12b393780f2fd66235c0cf6be125b52b60b9605fbccb504832c8.scope: Consumed 22ms CPU time
Feb 23 16:30:05 ip-10-0-136-68 conmon[5854]: conmon bbf1306bcd9ff05a3723 : container 5869 exited with status 2
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: crio-6ec22d9e3a7948b71a8662137640422227165ab747ce1a332f058d44afb09b79.scope: Succeeded.
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: crio-6ec22d9e3a7948b71a8662137640422227165ab747ce1a332f058d44afb09b79.scope: Consumed 43ms CPU time
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: crio-bbf1306bcd9ff05a37236005b802aa46a8396168bcd305011cac3c3232d499a5.scope: Succeeded.
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: crio-bbf1306bcd9ff05a37236005b802aa46a8396168bcd305011cac3c3232d499a5.scope: Consumed 5.020s CPU time
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: crio-d461b2a5bdb94e8630b89b525f3f963207c52b64660501567576486b96284064.scope: Succeeded.
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: crio-d461b2a5bdb94e8630b89b525f3f963207c52b64660501567576486b96284064.scope: Consumed 116ms CPU time
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: crio-conmon-6ec22d9e3a7948b71a8662137640422227165ab747ce1a332f058d44afb09b79.scope: Succeeded.
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: crio-conmon-6ec22d9e3a7948b71a8662137640422227165ab747ce1a332f058d44afb09b79.scope: Consumed 20ms CPU time
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: crio-conmon-ad4569ece3c8dbfbd816f5aae7fb4f4f3ee974e02cab9086fd8e502453513833.scope: Succeeded.
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: crio-conmon-ad4569ece3c8dbfbd816f5aae7fb4f4f3ee974e02cab9086fd8e502453513833.scope: Consumed 19ms CPU time
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: crio-conmon-d461b2a5bdb94e8630b89b525f3f963207c52b64660501567576486b96284064.scope: Succeeded.
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: crio-conmon-d461b2a5bdb94e8630b89b525f3f963207c52b64660501567576486b96284064.scope: Consumed 20ms CPU time
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: crio-conmon-bbf1306bcd9ff05a37236005b802aa46a8396168bcd305011cac3c3232d499a5.scope: Succeeded.
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: crio-conmon-bbf1306bcd9ff05a37236005b802aa46a8396168bcd305011cac3c3232d499a5.scope: Consumed 21ms CPU time
Feb 23 16:30:05 ip-10-0-136-68 conmon[8487]: conmon e0786e1c591679de011e : container 8500 exited with status 2
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: crio-conmon-e0786e1c591679de011e49dbda0b424701e11e394b89019bfe08d72435181f9b.scope: Succeeded.
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: crio-conmon-e0786e1c591679de011e49dbda0b424701e11e394b89019bfe08d72435181f9b.scope: Consumed 20ms CPU time
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: crio-e0786e1c591679de011e49dbda0b424701e11e394b89019bfe08d72435181f9b.scope: Succeeded.
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: crio-e0786e1c591679de011e49dbda0b424701e11e394b89019bfe08d72435181f9b.scope: Consumed 5.482s CPU time
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: crio-0ee72837a22915d2270060bc8bc01dcf5dfd455dcbb42126e1c67b30b3e100df.scope: Succeeded.
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: crio-0ee72837a22915d2270060bc8bc01dcf5dfd455dcbb42126e1c67b30b3e100df.scope: Consumed 3.124s CPU time
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-cf9d91e0d2d5ceb8336299f13a9a2bf23927fd74baf685856811610d76be08c2-merged.mount: Succeeded.
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-cf9d91e0d2d5ceb8336299f13a9a2bf23927fd74baf685856811610d76be08c2-merged.mount: Consumed 0 CPU time
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: crio-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9.scope: Succeeded.
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: crio-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9.scope: Consumed 3min 12.518s CPU time
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: crio-conmon-0ee72837a22915d2270060bc8bc01dcf5dfd455dcbb42126e1c67b30b3e100df.scope: Succeeded.
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: crio-conmon-0ee72837a22915d2270060bc8bc01dcf5dfd455dcbb42126e1c67b30b3e100df.scope: Consumed 21ms CPU time
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: crio-conmon-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9.scope: Succeeded.
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: crio-conmon-ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9.scope: Consumed 22ms CPU time
Feb 23 16:30:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:05.255112845Z" level=info msg="Stopped container b281e29e0f86a35608fd84582645011098d8b627136b99702b86457377d0f7da: openshift-monitoring/telemeter-client-675d948766-44b26/telemeter-client" id=7834a135-e6af-4cec-90b4-9217ee0c064f name=/runtime.v1.RuntimeService/StopContainer
Feb 23 16:30:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:05.268932975Z" level=info msg="Stopped container fd7ac301e9b79402445ce11600eb0d6f51163eca247e97b4c659e39f4b762c35: openshift-monitoring/prometheus-operator-admission-webhook-6854f48657-9dfhm/prometheus-operator-admission-webhook" id=b8294bcc-030d-4dd3-b9d5-b8143bc3e5eb name=/runtime.v1.RuntimeService/StopContainer
Feb 23 16:30:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:05.269237665Z" level=info msg="Stopping pod sandbox: 9cc61114cb7d29152edce4075f158f65bd86c98e5dd8a06e31ff2b390a41f2c8" id=c2b99377-37f5-4e22-a0e8-78a20f0cef54 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 16:30:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:05.269434864Z" level=info msg="Got pod network &{Name:prometheus-operator-admission-webhook-6854f48657-9dfhm Namespace:openshift-monitoring ID:9cc61114cb7d29152edce4075f158f65bd86c98e5dd8a06e31ff2b390a41f2c8 UID:27c4fe09-e4f7-452d-9364-2daec20710ff NetNS:/var/run/netns/18443f28-f254-4391-ad17-a04b8bf831a6 Networks:[{Name:multus-cni-network Ifname:eth0}] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 16:30:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:05.269558719Z" level=info msg="Deleting pod openshift-monitoring_prometheus-operator-admission-webhook-6854f48657-9dfhm from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 16:30:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:05.295987082Z" level=info msg="Stopped container 29314d4a1c76968c319f38a1e8ff6536eb151d6c7d7ebcba484834db4b4f1152: openshift-monitoring/telemeter-client-675d948766-44b26/reload" id=b2342af9-f2a3-4e15-820e-082f3e2211f3 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 16:30:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:05.309630904Z" level=info msg="Stopped container 84be385df2fdf41e3622fffb41e65e3068ec0852f16638e9b6669515b7c875fd: openshift-monitoring/prometheus-k8s-0/config-reloader" id=84251bd0-26de-49f2-a49c-d80427eb1038 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 16:30:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:05.333006883Z" level=info msg="Stopped container bbf1306bcd9ff05a37236005b802aa46a8396168bcd305011cac3c3232d499a5: openshift-monitoring/thanos-querier-8654d9f96d-892l6/oauth-proxy" id=b89e5369-6fde-47bc-8f33-7978f11c8c2d name=/runtime.v1.RuntimeService/StopContainer
Feb 23 16:30:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:05.348579889Z" level=info msg="Stopped container 6ec22d9e3a7948b71a8662137640422227165ab747ce1a332f058d44afb09b79: openshift-monitoring/thanos-querier-8654d9f96d-892l6/prom-label-proxy" id=a9dc5c11-9c52-453a-b4af-9d8fcc6afdf3 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 16:30:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:05.373277061Z" level=info msg="Stopped container 9a1b60140efe12b393780f2fd66235c0cf6be125b52b60b9605fbccb504832c8: openshift-monitoring/thanos-querier-8654d9f96d-892l6/thanos-query" id=eb0925aa-01d6-472a-989f-bbd13a48a506 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 16:30:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:05.404561782Z" level=info msg="Stopped container ad4569ece3c8dbfbd816f5aae7fb4f4f3ee974e02cab9086fd8e502453513833: openshift-monitoring/thanos-querier-8654d9f96d-892l6/kube-rbac-proxy-rules" id=e935e69b-3df9-47f5-82a0-af3dde7ba3ff name=/runtime.v1.RuntimeService/StopContainer
Feb 23 16:30:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:05.435543 2125 generic.go:296] "Generic (PLEG): container finished" podID=38f8ec67-c68b-4783-9d06-95eb33506398 containerID="29314d4a1c76968c319f38a1e8ff6536eb151d6c7d7ebcba484834db4b4f1152" exitCode=2
Feb 23 16:30:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:05.435568 2125 generic.go:296] "Generic (PLEG): container finished" podID=38f8ec67-c68b-4783-9d06-95eb33506398 containerID="b281e29e0f86a35608fd84582645011098d8b627136b99702b86457377d0f7da" exitCode=2
Feb 23 16:30:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:05.435610 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-675d948766-44b26" event=&{ID:38f8ec67-c68b-4783-9d06-95eb33506398 Type:ContainerDied Data:29314d4a1c76968c319f38a1e8ff6536eb151d6c7d7ebcba484834db4b4f1152}
Feb 23 16:30:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:05.435645 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-675d948766-44b26" event=&{ID:38f8ec67-c68b-4783-9d06-95eb33506398 Type:ContainerDied Data:b281e29e0f86a35608fd84582645011098d8b627136b99702b86457377d0f7da}
Feb 23 16:30:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:05.436905 2125 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_44953449-e4f6-497a-b6bf-73fbdc9381b7/prometheus-proxy/0.log"
Feb 23 16:30:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:05.437427 2125 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_44953449-e4f6-497a-b6bf-73fbdc9381b7/config-reloader/0.log"
Feb 23 16:30:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:05.437895 2125 generic.go:296] "Generic (PLEG): container finished" podID=44953449-e4f6-497a-b6bf-73fbdc9381b7 containerID="e0786e1c591679de011e49dbda0b424701e11e394b89019bfe08d72435181f9b" exitCode=2
Feb 23 16:30:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:05.437914 2125 generic.go:296] "Generic (PLEG): container finished" podID=44953449-e4f6-497a-b6bf-73fbdc9381b7 containerID="0ee72837a22915d2270060bc8bc01dcf5dfd455dcbb42126e1c67b30b3e100df" exitCode=0
Feb 23 16:30:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:05.437920 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event=&{ID:44953449-e4f6-497a-b6bf-73fbdc9381b7 Type:ContainerDied Data:e0786e1c591679de011e49dbda0b424701e11e394b89019bfe08d72435181f9b}
Feb 23 16:30:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:05.437928 2125 generic.go:296] "Generic (PLEG): container finished" podID=44953449-e4f6-497a-b6bf-73fbdc9381b7 containerID="84be385df2fdf41e3622fffb41e65e3068ec0852f16638e9b6669515b7c875fd" exitCode=2
Feb 23 16:30:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:05.437942 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event=&{ID:44953449-e4f6-497a-b6bf-73fbdc9381b7 Type:ContainerDied Data:0ee72837a22915d2270060bc8bc01dcf5dfd455dcbb42126e1c67b30b3e100df}
Feb 23 16:30:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:05.437941 2125 generic.go:296] "Generic (PLEG): container finished" podID=44953449-e4f6-497a-b6bf-73fbdc9381b7 containerID="ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9" exitCode=0
Feb 23 16:30:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:05.437955 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event=&{ID:44953449-e4f6-497a-b6bf-73fbdc9381b7 Type:ContainerDied Data:84be385df2fdf41e3622fffb41e65e3068ec0852f16638e9b6669515b7c875fd}
Feb 23 16:30:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:05.437968 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event=&{ID:44953449-e4f6-497a-b6bf-73fbdc9381b7 Type:ContainerDied Data:ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9}
Feb 23 16:30:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:05.438716 2125 generic.go:296] "Generic (PLEG): container finished" podID=27c4fe09-e4f7-452d-9364-2daec20710ff containerID="fd7ac301e9b79402445ce11600eb0d6f51163eca247e97b4c659e39f4b762c35" exitCode=0
Feb 23 16:30:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:05.438762 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-6854f48657-9dfhm" event=&{ID:27c4fe09-e4f7-452d-9364-2daec20710ff Type:ContainerDied Data:fd7ac301e9b79402445ce11600eb0d6f51163eca247e97b4c659e39f4b762c35}
Feb 23 16:30:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:05.440625 2125 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-8654d9f96d-892l6_b64af5e5-e41c-4886-a88b-39556a3f4b21/oauth-proxy/0.log"
Feb 23 16:30:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:05.440969 2125 generic.go:296] "Generic (PLEG): container finished" podID=b64af5e5-e41c-4886-a88b-39556a3f4b21 containerID="ad4569ece3c8dbfbd816f5aae7fb4f4f3ee974e02cab9086fd8e502453513833" exitCode=0
Feb 23 16:30:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:05.440985 2125 generic.go:296] "Generic (PLEG): container finished" podID=b64af5e5-e41c-4886-a88b-39556a3f4b21 containerID="6ec22d9e3a7948b71a8662137640422227165ab747ce1a332f058d44afb09b79" exitCode=0
Feb 23 16:30:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:05.440992 2125 generic.go:296] "Generic (PLEG): container finished" podID=b64af5e5-e41c-4886-a88b-39556a3f4b21 containerID="d461b2a5bdb94e8630b89b525f3f963207c52b64660501567576486b96284064" exitCode=0
Feb 23 16:30:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:05.441000 2125 generic.go:296] "Generic (PLEG): container finished" podID=b64af5e5-e41c-4886-a88b-39556a3f4b21 containerID="bbf1306bcd9ff05a37236005b802aa46a8396168bcd305011cac3c3232d499a5" exitCode=2
Feb 23 16:30:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:05.441012 2125 generic.go:296] "Generic (PLEG): container finished" podID=b64af5e5-e41c-4886-a88b-39556a3f4b21 containerID="9a1b60140efe12b393780f2fd66235c0cf6be125b52b60b9605fbccb504832c8" exitCode=0
Feb 23 16:30:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:05.441026 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8654d9f96d-892l6" event=&{ID:b64af5e5-e41c-4886-a88b-39556a3f4b21 Type:ContainerDied Data:ad4569ece3c8dbfbd816f5aae7fb4f4f3ee974e02cab9086fd8e502453513833}
Feb 23 16:30:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:05.441039 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8654d9f96d-892l6" event=&{ID:b64af5e5-e41c-4886-a88b-39556a3f4b21 Type:ContainerDied Data:6ec22d9e3a7948b71a8662137640422227165ab747ce1a332f058d44afb09b79}
Feb 23 16:30:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:05.441048 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8654d9f96d-892l6" event=&{ID:b64af5e5-e41c-4886-a88b-39556a3f4b21 Type:ContainerDied Data:d461b2a5bdb94e8630b89b525f3f963207c52b64660501567576486b96284064}
Feb 23 16:30:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:05.441055 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8654d9f96d-892l6" event=&{ID:b64af5e5-e41c-4886-a88b-39556a3f4b21 Type:ContainerDied Data:bbf1306bcd9ff05a37236005b802aa46a8396168bcd305011cac3c3232d499a5}
Feb 23 16:30:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:05.441062 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8654d9f96d-892l6" event=&{ID:b64af5e5-e41c-4886-a88b-39556a3f4b21 Type:ContainerDied Data:9a1b60140efe12b393780f2fd66235c0cf6be125b52b60b9605fbccb504832c8}
Feb 23 16:30:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:05.442115437Z" level=info msg="Stopped container d461b2a5bdb94e8630b89b525f3f963207c52b64660501567576486b96284064: openshift-monitoring/thanos-querier-8654d9f96d-892l6/kube-rbac-proxy" id=5b5653fe-e89e-42ae-8a81-7741f05389b6 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 16:30:05 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00319|bridge|INFO|bridge br-int: deleted interface 9cc61114cb7d291 on port 12
Feb 23 16:30:05 ip-10-0-136-68 kernel: device 9cc61114cb7d291 left promiscuous mode
Feb 23 16:30:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:05.555051690Z" level=info msg="Stopped container e0786e1c591679de011e49dbda0b424701e11e394b89019bfe08d72435181f9b: openshift-monitoring/prometheus-k8s-0/prometheus-proxy" id=5d474411-704d-4771-b99c-0ecf6493e95c name=/runtime.v1.RuntimeService/StopContainer
Feb 23 16:30:05 ip-10-0-136-68 crio[2086]: 2023-02-23T16:30:05Z [verbose] Del: openshift-monitoring:prometheus-operator-admission-webhook-6854f48657-9dfhm:27c4fe09-e4f7-452d-9364-2daec20710ff:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","dns":{},"ipam":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxage":5,"logfile-maxbackups":5,"logfile-maxsize":100,"name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay"}
Feb 23 16:30:05 ip-10-0-136-68 crio[2086]: I0223 16:30:05.438732 62780 ovs.go:90] Maximum command line arguments set to: 191102
Feb 23 16:30:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:05.617895820Z" level=info msg="Stopped container ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9: openshift-monitoring/prometheus-k8s-0/prometheus" id=9bf787fb-de27-4093-b5ad-00891da5e550 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 16:30:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:05.635226500Z" level=info msg="Stopped container 0ee72837a22915d2270060bc8bc01dcf5dfd455dcbb42126e1c67b30b3e100df: openshift-monitoring/prometheus-k8s-0/thanos-sidecar" id=b4e1dda2-79c9-4ace-a5bf-28806d7f1a2d name=/runtime.v1.RuntimeService/StopContainer
Feb 23 16:30:05 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:05.692466633Z" level=info msg="Stopped pod sandbox: 9cc61114cb7d29152edce4075f158f65bd86c98e5dd8a06e31ff2b390a41f2c8" id=c2b99377-37f5-4e22-a0e8-78a20f0cef54 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 16:30:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:05.816603 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/27c4fe09-e4f7-452d-9364-2daec20710ff-tls-certificates\") pod \"27c4fe09-e4f7-452d-9364-2daec20710ff\" (UID: \"27c4fe09-e4f7-452d-9364-2daec20710ff\") "
Feb 23 16:30:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:05.826571 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27c4fe09-e4f7-452d-9364-2daec20710ff-tls-certificates" (OuterVolumeSpecName: "tls-certificates") pod "27c4fe09-e4f7-452d-9364-2daec20710ff" (UID: "27c4fe09-e4f7-452d-9364-2daec20710ff"). InnerVolumeSpecName "tls-certificates". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-7a16757ba5672307bf4eb80a6ef08f57b251f1195cae5494005a217357370559-merged.mount: Succeeded.
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-7a16757ba5672307bf4eb80a6ef08f57b251f1195cae5494005a217357370559-merged.mount: Consumed 0 CPU time
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-06ff5897116ca5e4569404be2da043dc034024bc755bc1db25a16bea9e8f771e-merged.mount: Succeeded.
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-06ff5897116ca5e4569404be2da043dc034024bc755bc1db25a16bea9e8f771e-merged.mount: Consumed 0 CPU time
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-5091c510c37327fd798991d6d0de81dc59f050d22d546cf08658ba9ba958ea54-merged.mount: Succeeded.
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-5091c510c37327fd798991d6d0de81dc59f050d22d546cf08658ba9ba958ea54-merged.mount: Consumed 0 CPU time
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-940cb53df1fd7517ed42b64994b9102cd9909f1ca43d374f9de0f30ccb4dc524-merged.mount: Succeeded.
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-940cb53df1fd7517ed42b64994b9102cd9909f1ca43d374f9de0f30ccb4dc524-merged.mount: Consumed 0 CPU time
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-7d238ff14b91218e81589b8fce34f95b57f038de4bab3d431b5bf8901263e9f4-merged.mount: Succeeded.
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-7d238ff14b91218e81589b8fce34f95b57f038de4bab3d431b5bf8901263e9f4-merged.mount: Consumed 0 CPU time
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-4aa5361d5800ebdaaacac6317fed78221111d7919291d57bd03a96c6235be609-merged.mount: Succeeded.
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-4aa5361d5800ebdaaacac6317fed78221111d7919291d57bd03a96c6235be609-merged.mount: Consumed 0 CPU time
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-44a159a1507efa197efbb1c78a60222baafc689503e87b42619792fcddcc0d13-merged.mount: Succeeded.
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-44a159a1507efa197efbb1c78a60222baafc689503e87b42619792fcddcc0d13-merged.mount: Consumed 0 CPU time
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-7ab8d61819928686ac4d33adcc555b3041c0cf4cb12b2fbf4094f7638050bc79-merged.mount: Succeeded.
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-7ab8d61819928686ac4d33adcc555b3041c0cf4cb12b2fbf4094f7638050bc79-merged.mount: Consumed 0 CPU time
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-41daa727c639c264cfc48af676134ce30978e3a0c8d56575310501ff696c0335-merged.mount: Succeeded.
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-41daa727c639c264cfc48af676134ce30978e3a0c8d56575310501ff696c0335-merged.mount: Consumed 0 CPU time
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-9604b0085b61c40731337cacb3d583e839fd4fba774ce9ac3c8d2e11d3152538-merged.mount: Succeeded.
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-9604b0085b61c40731337cacb3d583e839fd4fba774ce9ac3c8d2e11d3152538-merged.mount: Consumed 0 CPU time
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-cec846d90bd08d8ef1d036be95c0e6875f9b561386418fefde712db819255e3e-merged.mount: Succeeded.
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-cec846d90bd08d8ef1d036be95c0e6875f9b561386418fefde712db819255e3e-merged.mount: Consumed 0 CPU time
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-0c22a8bce9ed1cdd463269bad3fba739d3260a9eb05805c1a45c957b71cdf094-merged.mount: Succeeded.
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-0c22a8bce9ed1cdd463269bad3fba739d3260a9eb05805c1a45c957b71cdf094-merged.mount: Consumed 0 CPU time
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: run-netns-18443f28\x2df254\x2d4391\x2dad17\x2da04b8bf831a6.mount: Succeeded.
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: run-netns-18443f28\x2df254\x2d4391\x2dad17\x2da04b8bf831a6.mount: Consumed 0 CPU time
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: run-ipcns-18443f28\x2df254\x2d4391\x2dad17\x2da04b8bf831a6.mount: Succeeded.
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: run-ipcns-18443f28\x2df254\x2d4391\x2dad17\x2da04b8bf831a6.mount: Consumed 0 CPU time
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: run-utsns-18443f28\x2df254\x2d4391\x2dad17\x2da04b8bf831a6.mount: Succeeded.
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: run-utsns-18443f28\x2df254\x2d4391\x2dad17\x2da04b8bf831a6.mount: Consumed 0 CPU time
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-9cc61114cb7d29152edce4075f158f65bd86c98e5dd8a06e31ff2b390a41f2c8-userdata-shm.mount: Succeeded.
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-9cc61114cb7d29152edce4075f158f65bd86c98e5dd8a06e31ff2b390a41f2c8-userdata-shm.mount: Consumed 0 CPU time
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-27c4fe09\x2de4f7\x2d452d\x2d9364\x2d2daec20710ff-volumes-kubernetes.io\x7esecret-tls\x2dcertificates.mount: Succeeded.
Feb 23 16:30:05 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-27c4fe09\x2de4f7\x2d452d\x2d9364\x2d2daec20710ff-volumes-kubernetes.io\x7esecret-tls\x2dcertificates.mount: Consumed 0 CPU time
Feb 23 16:30:05 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:05.917596 2125 reconciler.go:399] "Volume detached for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/27c4fe09-e4f7-452d-9364-2daec20710ff-tls-certificates\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 16:30:06 ip-10-0-136-68 kubenswrapper[2125]: E0223 16:30:06.004419 2125 remote_runtime.go:734] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9 is running failed: container process not found" containerID="ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9" cmd=[sh -c if [ -x "$(command -v curl)" ]; then exec curl --fail http://localhost:9090/-/ready; elif [ -x "$(command -v wget)" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; else exit 1; fi]
Feb 23 16:30:06 ip-10-0-136-68 kubenswrapper[2125]: E0223 16:30:06.004679 2125 remote_runtime.go:734] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9 is running failed: container process not found" containerID="ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9" cmd=[sh -c if [ -x "$(command -v curl)" ]; then exec curl --fail http://localhost:9090/-/ready; elif [ -x "$(command -v wget)" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; else exit 1; fi]
Feb 23 16:30:06 ip-10-0-136-68 kubenswrapper[2125]: E0223 16:30:06.004885 2125 remote_runtime.go:734] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9 is running failed: container process not found" containerID="ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9" cmd=[sh -c if [ -x "$(command -v curl)" ]; then exec curl --fail http://localhost:9090/-/ready; elif [ -x "$(command -v wget)" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; else exit 1; fi]
Feb 23 16:30:06 ip-10-0-136-68 kubenswrapper[2125]: E0223 16:30:06.004914 2125 prober.go:111] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9 is running failed: container process not found" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID=44953449-e4f6-497a-b6bf-73fbdc9381b7 containerName="prometheus"
Feb 23 16:30:06 ip-10-0-136-68 systemd[1]: crio-94bdf33dfcce86819453e831fc906daae77b4db41db2f8adcdcf6512b546c23e.scope: Succeeded.
Feb 23 16:30:06 ip-10-0-136-68 systemd[1]: crio-94bdf33dfcce86819453e831fc906daae77b4db41db2f8adcdcf6512b546c23e.scope: Consumed 689ms CPU time
Feb 23 16:30:06 ip-10-0-136-68 systemd[1]: crio-conmon-94bdf33dfcce86819453e831fc906daae77b4db41db2f8adcdcf6512b546c23e.scope: Succeeded.
Feb 23 16:30:06 ip-10-0-136-68 systemd[1]: crio-conmon-94bdf33dfcce86819453e831fc906daae77b4db41db2f8adcdcf6512b546c23e.scope: Consumed 20ms CPU time
Feb 23 16:30:06 ip-10-0-136-68 systemd[1]: crio-6fb9e075075963e536ffbe1ace0af85fd92436c790ae86c0801797b201e917a0.scope: Succeeded.
Feb 23 16:30:06 ip-10-0-136-68 systemd[1]: crio-6fb9e075075963e536ffbe1ace0af85fd92436c790ae86c0801797b201e917a0.scope: Consumed 665ms CPU time
Feb 23 16:30:06 ip-10-0-136-68 systemd[1]: crio-conmon-6fb9e075075963e536ffbe1ace0af85fd92436c790ae86c0801797b201e917a0.scope: Succeeded.
Feb 23 16:30:06 ip-10-0-136-68 systemd[1]: crio-conmon-6fb9e075075963e536ffbe1ace0af85fd92436c790ae86c0801797b201e917a0.scope: Consumed 20ms CPU time
Feb 23 16:30:06 ip-10-0-136-68 systemd[1]: crio-a74ef9ae477239bf005007e9df5cd3c09410b16013de777146bb48eb8458d2ec.scope: Succeeded.
Feb 23 16:30:06 ip-10-0-136-68 systemd[1]: crio-a74ef9ae477239bf005007e9df5cd3c09410b16013de777146bb48eb8458d2ec.scope: Consumed 722ms CPU time
Feb 23 16:30:06 ip-10-0-136-68 systemd[1]: crio-conmon-a74ef9ae477239bf005007e9df5cd3c09410b16013de777146bb48eb8458d2ec.scope: Succeeded.
Feb 23 16:30:06 ip-10-0-136-68 systemd[1]: crio-conmon-a74ef9ae477239bf005007e9df5cd3c09410b16013de777146bb48eb8458d2ec.scope: Consumed 20ms CPU time
Feb 23 16:30:06 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-843b4c6d6ee756dac0a6fcf628972cd270b9ce02ad518312655318127c2928e7-merged.mount: Succeeded.
Feb 23 16:30:06 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-843b4c6d6ee756dac0a6fcf628972cd270b9ce02ad518312655318127c2928e7-merged.mount: Consumed 0 CPU time
Feb 23 16:30:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:06.235124842Z" level=info msg="Stopped container 94bdf33dfcce86819453e831fc906daae77b4db41db2f8adcdcf6512b546c23e: openshift-monitoring/telemeter-client-675d948766-44b26/kube-rbac-proxy" id=cf8742c1-af57-4658-8a08-164cf3327748 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 16:30:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:06.235495881Z" level=info msg="Stopping pod sandbox: 371d339c2a21dacabc72f32475cd502a6c85f1c6b15537476830da5f7871b8c6" id=f6f090ce-1e0c-4542-b9f7-fbe7feaaa358 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 16:30:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:06.235676542Z" level=info msg="Got pod network &{Name:telemeter-client-675d948766-44b26 Namespace:openshift-monitoring ID:371d339c2a21dacabc72f32475cd502a6c85f1c6b15537476830da5f7871b8c6 UID:38f8ec67-c68b-4783-9d06-95eb33506398 NetNS:/var/run/netns/1e8991bd-00bf-4b0d-9875-34d62bb269d4 Networks:[{Name:multus-cni-network Ifname:eth0}] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 16:30:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:06.235797976Z" level=info msg="Deleting pod openshift-monitoring_telemeter-client-675d948766-44b26 from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 16:30:06 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-98235e6988eae241423d86b8d2d4fd7ed4a0b6aae73158dce26277b9a3b0b966-merged.mount: Succeeded.
Feb 23 16:30:06 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-98235e6988eae241423d86b8d2d4fd7ed4a0b6aae73158dce26277b9a3b0b966-merged.mount: Consumed 0 CPU time
Feb 23 16:30:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:06.269164455Z" level=info msg="Stopped container 6fb9e075075963e536ffbe1ace0af85fd92436c790ae86c0801797b201e917a0: openshift-monitoring/prometheus-k8s-0/kube-rbac-proxy-thanos" id=362149d9-76a9-4805-bcdc-ceccf77a2aee name=/runtime.v1.RuntimeService/StopContainer
Feb 23 16:30:06 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-bd132e7b42ee82ace56d2999bc7a595b2ffde3784d0060a33992eac32ff0780b-merged.mount: Succeeded.
Feb 23 16:30:06 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-bd132e7b42ee82ace56d2999bc7a595b2ffde3784d0060a33992eac32ff0780b-merged.mount: Consumed 0 CPU time
Feb 23 16:30:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:06.280769690Z" level=info msg="Stopped container a74ef9ae477239bf005007e9df5cd3c09410b16013de777146bb48eb8458d2ec: openshift-monitoring/thanos-querier-8654d9f96d-892l6/kube-rbac-proxy-metrics" id=6074808d-7104-4372-95bb-c31c518c7049 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 16:30:06 ip-10-0-136-68 systemd[1]: crio-f6ec70951d846db90a5f182543bbcee52a7a42dd83b37387fdcfa08f18d7c2d4.scope: Succeeded.
Feb 23 16:30:06 ip-10-0-136-68 systemd[1]: crio-f6ec70951d846db90a5f182543bbcee52a7a42dd83b37387fdcfa08f18d7c2d4.scope: Consumed 727ms CPU time
Feb 23 16:30:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:06.283321500Z" level=info msg="Stopping pod sandbox: 8a6086c30905e9778328b4943c655c0e9e8c2c54372aff32cd2923ed14af9ed1" id=fb2505f9-6e81-4d2c-9488-5ce34cd89af1 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 16:30:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:06.283543415Z" level=info msg="Got pod network &{Name:thanos-querier-8654d9f96d-892l6 Namespace:openshift-monitoring ID:8a6086c30905e9778328b4943c655c0e9e8c2c54372aff32cd2923ed14af9ed1 UID:b64af5e5-e41c-4886-a88b-39556a3f4b21 NetNS:/var/run/netns/aab5b03a-6d12-49fb-9628-2d412137c7fb Networks:[{Name:multus-cni-network Ifname:eth0}] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 16:30:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:06.283687734Z" level=info msg="Deleting pod openshift-monitoring_thanos-querier-8654d9f96d-892l6 from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 16:30:06 ip-10-0-136-68 systemd[1]: crio-conmon-f6ec70951d846db90a5f182543bbcee52a7a42dd83b37387fdcfa08f18d7c2d4.scope: Succeeded.
Feb 23 16:30:06 ip-10-0-136-68 systemd[1]: crio-conmon-f6ec70951d846db90a5f182543bbcee52a7a42dd83b37387fdcfa08f18d7c2d4.scope: Consumed 19ms CPU time
Feb 23 16:30:06 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-4eb42f03dcb2508bcce6be9cc00e8c0cdcf1da689b31f4cacab609cd6b69f882-merged.mount: Succeeded.
Feb 23 16:30:06 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-4eb42f03dcb2508bcce6be9cc00e8c0cdcf1da689b31f4cacab609cd6b69f882-merged.mount: Consumed 0 CPU time
Feb 23 16:30:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:06.329643117Z" level=info msg="Stopped container f6ec70951d846db90a5f182543bbcee52a7a42dd83b37387fdcfa08f18d7c2d4: openshift-monitoring/prometheus-k8s-0/kube-rbac-proxy" id=1bd68f29-3e0a-4079-a260-537060a16a14 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 16:30:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:06.330557460Z" level=info msg="Stopping pod sandbox: 7d9bb22d3d6b32a0be618472ab15ad75f3da5566db899185fe66a17d6fe48fea" id=d6f65674-3e17-43b6-8691-534c61ab6af3 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 16:30:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:06.330730887Z" level=info msg="Got pod network &{Name:prometheus-k8s-0 Namespace:openshift-monitoring ID:7d9bb22d3d6b32a0be618472ab15ad75f3da5566db899185fe66a17d6fe48fea UID:44953449-e4f6-497a-b6bf-73fbdc9381b7 NetNS:/var/run/netns/2c559b52-3d31-49d8-80de-874e670a2653 Networks:[{Name:multus-cni-network Ifname:eth0}] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 16:30:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:06.330863007Z" level=info msg="Deleting pod openshift-monitoring_prometheus-k8s-0 from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 16:30:06 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00320|bridge|INFO|bridge br-int: deleted interface 371d339c2a21dac on port 14
Feb 23 16:30:06 ip-10-0-136-68 kernel: device 371d339c2a21dac left promiscuous mode
Feb 23 16:30:06 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:06.479560 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-6854f48657-9dfhm" event=&{ID:27c4fe09-e4f7-452d-9364-2daec20710ff Type:ContainerDied Data:9cc61114cb7d29152edce4075f158f65bd86c98e5dd8a06e31ff2b390a41f2c8}
Feb 23 16:30:06 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:06.479598 2125 scope.go:115] "RemoveContainer" containerID="fd7ac301e9b79402445ce11600eb0d6f51163eca247e97b4c659e39f4b762c35"
Feb 23 16:30:06 ip-10-0-136-68 systemd[1]: Removed slice libcontainer container kubepods-burstable-pod27c4fe09_e4f7_452d_9364_2daec20710ff.slice.
Feb 23 16:30:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:06.486121213Z" level=info msg="Removing container: fd7ac301e9b79402445ce11600eb0d6f51163eca247e97b4c659e39f4b762c35" id=7ea6f98f-0448-426e-aac4-e855fc88970f name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 16:30:06 ip-10-0-136-68 systemd[1]: kubepods-burstable-pod27c4fe09_e4f7_452d_9364_2daec20710ff.slice: Consumed 3.258s CPU time
Feb 23 16:30:06 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:06.526899 2125 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-monitoring/prometheus-operator-admission-webhook-6854f48657-9dfhm]
Feb 23 16:30:06 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:06.531158 2125 kubelet.go:2129] "SyncLoop REMOVE" source="api" pods=[openshift-monitoring/prometheus-operator-admission-webhook-6854f48657-9dfhm]
Feb 23 16:30:06 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:06.543392 2125 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-8654d9f96d-892l6_b64af5e5-e41c-4886-a88b-39556a3f4b21/oauth-proxy/0.log"
Feb 23 16:30:06 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:06.543780 2125 generic.go:296] "Generic (PLEG): container finished" podID=b64af5e5-e41c-4886-a88b-39556a3f4b21 containerID="a74ef9ae477239bf005007e9df5cd3c09410b16013de777146bb48eb8458d2ec" exitCode=0
Feb 23 16:30:06 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:06.543855 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8654d9f96d-892l6" event=&{ID:b64af5e5-e41c-4886-a88b-39556a3f4b21 Type:ContainerDied Data:a74ef9ae477239bf005007e9df5cd3c09410b16013de777146bb48eb8458d2ec}
Feb 23 16:30:06 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:06.550186 2125 generic.go:296] "Generic (PLEG): container finished" podID=38f8ec67-c68b-4783-9d06-95eb33506398 containerID="94bdf33dfcce86819453e831fc906daae77b4db41db2f8adcdcf6512b546c23e" exitCode=0
Feb 23 16:30:06 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:06.550251 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-675d948766-44b26" event=&{ID:38f8ec67-c68b-4783-9d06-95eb33506398 Type:ContainerDied Data:94bdf33dfcce86819453e831fc906daae77b4db41db2f8adcdcf6512b546c23e}
Feb 23 16:30:06 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00321|bridge|INFO|bridge br-int: deleted interface 8a6086c30905e97 on port 16
Feb 23 16:30:06 ip-10-0-136-68 kernel: device 8a6086c30905e97 left promiscuous mode
Feb 23 16:30:06 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:06.573546 2125 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_44953449-e4f6-497a-b6bf-73fbdc9381b7/prometheus-proxy/0.log"
Feb 23 16:30:06 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:06.582060 2125 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_44953449-e4f6-497a-b6bf-73fbdc9381b7/config-reloader/0.log"
Feb 23 16:30:06 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:06.587094 2125 generic.go:296] "Generic (PLEG): container finished" podID=44953449-e4f6-497a-b6bf-73fbdc9381b7 containerID="6fb9e075075963e536ffbe1ace0af85fd92436c790ae86c0801797b201e917a0" exitCode=0
Feb 23 16:30:06 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:06.587114 2125 generic.go:296] "Generic (PLEG): container finished" podID=44953449-e4f6-497a-b6bf-73fbdc9381b7 containerID="f6ec70951d846db90a5f182543bbcee52a7a42dd83b37387fdcfa08f18d7c2d4" exitCode=0
Feb 23 16:30:06 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:06.587140 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event=&{ID:44953449-e4f6-497a-b6bf-73fbdc9381b7 Type:ContainerDied Data:6fb9e075075963e536ffbe1ace0af85fd92436c790ae86c0801797b201e917a0}
Feb 23 16:30:06 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:06.587162 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event=&{ID:44953449-e4f6-497a-b6bf-73fbdc9381b7 Type:ContainerDied Data:f6ec70951d846db90a5f182543bbcee52a7a42dd83b37387fdcfa08f18d7c2d4}
Feb 23 16:30:06 ip-10-0-136-68 crio[2086]: 2023-02-23T16:30:06Z [verbose] Del: openshift-monitoring:telemeter-client-675d948766-44b26:38f8ec67-c68b-4783-9d06-95eb33506398:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","dns":{},"ipam":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxage":5,"logfile-maxbackups":5,"logfile-maxsize":100,"name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay"}
Feb 23 16:30:06 ip-10-0-136-68 crio[2086]: I0223 16:30:06.387324 62959 ovs.go:90] Maximum command line arguments set to: 191102
Feb 23 16:30:06 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00322|bridge|INFO|bridge br-int: deleted interface 7d9bb22d3d6b32a on port 18
Feb 23 16:30:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:06.627190302Z" level=info msg="Removed container fd7ac301e9b79402445ce11600eb0d6f51163eca247e97b4c659e39f4b762c35: openshift-monitoring/prometheus-operator-admission-webhook-6854f48657-9dfhm/prometheus-operator-admission-webhook" id=7ea6f98f-0448-426e-aac4-e855fc88970f name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 16:30:06 ip-10-0-136-68 kernel: device 7d9bb22d3d6b32a left promiscuous mode
Feb 23 16:30:06 ip-10-0-136-68 crio[2086]: 2023-02-23T16:30:06Z [verbose] Del: openshift-monitoring:thanos-querier-8654d9f96d-892l6:b64af5e5-e41c-4886-a88b-39556a3f4b21:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","dns":{},"ipam":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxage":5,"logfile-maxbackups":5,"logfile-maxsize":100,"name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay"}
Feb 23 16:30:06 ip-10-0-136-68 crio[2086]: I0223 16:30:06.454747 63000 ovs.go:90] Maximum command line arguments set to: 191102
Feb 23 16:30:06 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-85c7e45599d17cfd7408ccd998c3084dcde3161f7f6e9d467c89eeed537f0c61-merged.mount: Succeeded.
Feb 23 16:30:06 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-85c7e45599d17cfd7408ccd998c3084dcde3161f7f6e9d467c89eeed537f0c61-merged.mount: Consumed 0 CPU time
Feb 23 16:30:06 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-f26d7542408ab99f29fc710a230d76d524c502ce29040ac4479de8cb905c6be0-merged.mount: Succeeded.
Feb 23 16:30:06 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-f26d7542408ab99f29fc710a230d76d524c502ce29040ac4479de8cb905c6be0-merged.mount: Consumed 0 CPU time
Feb 23 16:30:06 ip-10-0-136-68 systemd[1]: run-ipcns-1e8991bd\x2d00bf\x2d4b0d\x2d9875\x2d34d62bb269d4.mount: Succeeded.
Feb 23 16:30:06 ip-10-0-136-68 systemd[1]: run-ipcns-1e8991bd\x2d00bf\x2d4b0d\x2d9875\x2d34d62bb269d4.mount: Consumed 0 CPU time
Feb 23 16:30:06 ip-10-0-136-68 systemd[1]: run-utsns-1e8991bd\x2d00bf\x2d4b0d\x2d9875\x2d34d62bb269d4.mount: Succeeded.
Feb 23 16:30:06 ip-10-0-136-68 systemd[1]: run-utsns-1e8991bd\x2d00bf\x2d4b0d\x2d9875\x2d34d62bb269d4.mount: Consumed 0 CPU time
Feb 23 16:30:06 ip-10-0-136-68 crio[2086]: 2023-02-23T16:30:06Z [verbose] Del: openshift-monitoring:prometheus-k8s-0:44953449-e4f6-497a-b6bf-73fbdc9381b7:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","dns":{},"ipam":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxage":5,"logfile-maxbackups":5,"logfile-maxsize":100,"name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay"}
Feb 23 16:30:06 ip-10-0-136-68 crio[2086]: I0223 16:30:06.602941 63017 ovs.go:90] Maximum command line arguments set to: 191102
Feb 23 16:30:06 ip-10-0-136-68 systemd[1]: run-netns-1e8991bd\x2d00bf\x2d4b0d\x2d9875\x2d34d62bb269d4.mount: Succeeded.
Feb 23 16:30:06 ip-10-0-136-68 systemd[1]: run-netns-1e8991bd\x2d00bf\x2d4b0d\x2d9875\x2d34d62bb269d4.mount: Consumed 0 CPU time
Feb 23 16:30:06 ip-10-0-136-68 systemd[1]: run-utsns-aab5b03a\x2d6d12\x2d49fb\x2d9628\x2d2d412137c7fb.mount: Succeeded.
Feb 23 16:30:06 ip-10-0-136-68 systemd[1]: run-utsns-aab5b03a\x2d6d12\x2d49fb\x2d9628\x2d2d412137c7fb.mount: Consumed 0 CPU time
Feb 23 16:30:06 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-62215cf15ef6f3b393089d33e55056735688a7bc13cff468c45f163080fc7514-merged.mount: Succeeded.
Feb 23 16:30:06 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-62215cf15ef6f3b393089d33e55056735688a7bc13cff468c45f163080fc7514-merged.mount: Consumed 0 CPU time
Feb 23 16:30:06 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-371d339c2a21dacabc72f32475cd502a6c85f1c6b15537476830da5f7871b8c6-userdata-shm.mount: Succeeded.
Feb 23 16:30:06 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-371d339c2a21dacabc72f32475cd502a6c85f1c6b15537476830da5f7871b8c6-userdata-shm.mount: Consumed 0 CPU time
Feb 23 16:30:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:06.944459556Z" level=info msg="Stopped pod sandbox: 371d339c2a21dacabc72f32475cd502a6c85f1c6b15537476830da5f7871b8c6" id=f6f090ce-1e0c-4542-b9f7-fbe7feaaa358 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 16:30:06 ip-10-0-136-68 systemd[1]: run-ipcns-aab5b03a\x2d6d12\x2d49fb\x2d9628\x2d2d412137c7fb.mount: Succeeded.
Feb 23 16:30:06 ip-10-0-136-68 systemd[1]: run-ipcns-aab5b03a\x2d6d12\x2d49fb\x2d9628\x2d2d412137c7fb.mount: Consumed 0 CPU time
Feb 23 16:30:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:06.971412584Z" level=info msg="Stopped pod sandbox: 8a6086c30905e9778328b4943c655c0e9e8c2c54372aff32cd2923ed14af9ed1" id=fb2505f9-6e81-4d2c-9488-5ce34cd89af1 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 16:30:06 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:06.976968 2125 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-8654d9f96d-892l6_b64af5e5-e41c-4886-a88b-39556a3f4b21/oauth-proxy/0.log"
Feb 23 16:30:06 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:06.987398177Z" level=info msg="Stopped pod sandbox: 7d9bb22d3d6b32a0be618472ab15ad75f3da5566db899185fe66a17d6fe48fea" id=d6f65674-3e17-43b6-8691-534c61ab6af3 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 16:30:06 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:06.991991 2125 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_44953449-e4f6-497a-b6bf-73fbdc9381b7/prometheus-proxy/0.log"
Feb 23 16:30:06 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:06.992382 2125 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_44953449-e4f6-497a-b6bf-73fbdc9381b7/config-reloader/0.log"
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.136241 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/38f8ec67-c68b-4783-9d06-95eb33506398-telemeter-client-tls\") pod \"38f8ec67-c68b-4783-9d06-95eb33506398\" (UID: \"38f8ec67-c68b-4783-9d06-95eb33506398\") "
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.136308 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"thanos-querier-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b64af5e5-e41c-4886-a88b-39556a3f4b21-thanos-querier-trusted-ca-bundle\") pod \"b64af5e5-e41c-4886-a88b-39556a3f4b21\" (UID: \"b64af5e5-e41c-4886-a88b-39556a3f4b21\") "
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.136333 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/44953449-e4f6-497a-b6bf-73fbdc9381b7-secret-grpc-tls\") pod \"44953449-e4f6-497a-b6bf-73fbdc9381b7\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") "
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.136357 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38f8ec67-c68b-4783-9d06-95eb33506398-telemeter-trusted-ca-bundle\") pod \"38f8ec67-c68b-4783-9d06-95eb33506398\" (UID: \"38f8ec67-c68b-4783-9d06-95eb33506398\") "
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.136381 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38f8ec67-c68b-4783-9d06-95eb33506398-serving-certs-ca-bundle\") pod \"38f8ec67-c68b-4783-9d06-95eb33506398\" (UID: \"38f8ec67-c68b-4783-9d06-95eb33506398\") "
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.136408 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b64af5e5-e41c-4886-a88b-39556a3f4b21-metrics-client-ca\") pod \"b64af5e5-e41c-4886-a88b-39556a3f4b21\" (UID: \"b64af5e5-e41c-4886-a88b-39556a3f4b21\") "
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.136434 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/44953449-e4f6-497a-b6bf-73fbdc9381b7-prometheus-k8s-db\") pod \"44953449-e4f6-497a-b6bf-73fbdc9381b7\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") "
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.136460 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44953449-e4f6-497a-b6bf-73fbdc9381b7-prometheus-trusted-ca-bundle\") pod \"44953449-e4f6-497a-b6bf-73fbdc9381b7\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") "
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.136486 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-thanos-querier-oauth-cookie\" (UniqueName: \"kubernetes.io/secret/b64af5e5-e41c-4886-a88b-39556a3f4b21-secret-thanos-querier-oauth-cookie\") pod \"b64af5e5-e41c-4886-a88b-39556a3f4b21\" (UID: \"b64af5e5-e41c-4886-a88b-39556a3f4b21\") "
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.136512 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44953449-e4f6-497a-b6bf-73fbdc9381b7-configmap-serving-certs-ca-bundle\") pod \"44953449-e4f6-497a-b6bf-73fbdc9381b7\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") "
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.136537 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/38f8ec67-c68b-4783-9d06-95eb33506398-metrics-client-ca\") pod \"38f8ec67-c68b-4783-9d06-95eb33506398\" (UID: \"38f8ec67-c68b-4783-9d06-95eb33506398\") "
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: W0223 16:30:07.136517 2125 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/b64af5e5-e41c-4886-a88b-39556a3f4b21/volumes/kubernetes.io~configmap/thanos-querier-trusted-ca-bundle: clearQuota called, but quotas disabled
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.136563 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mxtt\" (UniqueName: \"kubernetes.io/projected/38f8ec67-c68b-4783-9d06-95eb33506398-kube-api-access-2mxtt\") pod \"38f8ec67-c68b-4783-9d06-95eb33506398\" (UID: \"38f8ec67-c68b-4783-9d06-95eb33506398\") "
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.136587 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzg2g\" (UniqueName: \"kubernetes.io/projected/b64af5e5-e41c-4886-a88b-39556a3f4b21-kube-api-access-dzg2g\") pod \"b64af5e5-e41c-4886-a88b-39556a3f4b21\" (UID: \"b64af5e5-e41c-4886-a88b-39556a3f4b21\") "
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.136611 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/b64af5e5-e41c-4886-a88b-39556a3f4b21-secret-grpc-tls\") pod \"b64af5e5-e41c-4886-a88b-39556a3f4b21\" (UID: \"b64af5e5-e41c-4886-a88b-39556a3f4b21\") "
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.136638 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/44953449-e4f6-497a-b6bf-73fbdc9381b7-prometheus-k8s-rulefiles-0\") pod \"44953449-e4f6-497a-b6bf-73fbdc9381b7\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") "
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.136663 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/44953449-e4f6-497a-b6bf-73fbdc9381b7-web-config\") pod \"44953449-e4f6-497a-b6bf-73fbdc9381b7\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") "
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.136687 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44953449-e4f6-497a-b6bf-73fbdc9381b7-configmap-kubelet-serving-ca-bundle\") pod \"44953449-e4f6-497a-b6bf-73fbdc9381b7\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") "
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.136703 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b64af5e5-e41c-4886-a88b-39556a3f4b21-thanos-querier-trusted-ca-bundle" (OuterVolumeSpecName: "thanos-querier-trusted-ca-bundle") pod "b64af5e5-e41c-4886-a88b-39556a3f4b21" (UID: "b64af5e5-e41c-4886-a88b-39556a3f4b21"). InnerVolumeSpecName "thanos-querier-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.136717 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b64af5e5-e41c-4886-a88b-39556a3f4b21-secret-thanos-querier-kube-rbac-proxy\") pod \"b64af5e5-e41c-4886-a88b-39556a3f4b21\" (UID: \"b64af5e5-e41c-4886-a88b-39556a3f4b21\") "
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.136767 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/44953449-e4f6-497a-b6bf-73fbdc9381b7-config\") pod \"44953449-e4f6-497a-b6bf-73fbdc9381b7\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") "
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.136798 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/44953449-e4f6-497a-b6bf-73fbdc9381b7-tls-assets\") pod \"44953449-e4f6-497a-b6bf-73fbdc9381b7\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") "
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.136827 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/b64af5e5-e41c-4886-a88b-39556a3f4b21-secret-thanos-querier-tls\") pod \"b64af5e5-e41c-4886-a88b-39556a3f4b21\" (UID: \"b64af5e5-e41c-4886-a88b-39556a3f4b21\") "
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.136856 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/44953449-e4f6-497a-b6bf-73fbdc9381b7-config-out\") pod \"44953449-e4f6-497a-b6bf-73fbdc9381b7\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") "
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.136889 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/44953449-e4f6-497a-b6bf-73fbdc9381b7-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"44953449-e4f6-497a-b6bf-73fbdc9381b7\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") "
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.136916 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/44953449-e4f6-497a-b6bf-73fbdc9381b7-secret-prometheus-k8s-tls\") pod \"44953449-e4f6-497a-b6bf-73fbdc9381b7\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") "
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.136945 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtqf6\" (UniqueName: \"kubernetes.io/projected/44953449-e4f6-497a-b6bf-73fbdc9381b7-kube-api-access-rtqf6\") pod \"44953449-e4f6-497a-b6bf-73fbdc9381b7\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") "
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.136974 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/38f8ec67-c68b-4783-9d06-95eb33506398-secret-telemeter-client\") pod \"38f8ec67-c68b-4783-9d06-95eb33506398\" (UID: \"38f8ec67-c68b-4783-9d06-95eb33506398\") "
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.136999 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/44953449-e4f6-497a-b6bf-73fbdc9381b7-secret-metrics-client-certs\") pod \"44953449-e4f6-497a-b6bf-73fbdc9381b7\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") "
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.137031 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/b64af5e5-e41c-4886-a88b-39556a3f4b21-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"b64af5e5-e41c-4886-a88b-39556a3f4b21\" (UID: \"b64af5e5-e41c-4886-a88b-39556a3f4b21\") "
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.137060 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-kube-etcd-client-certs\" (UniqueName: \"kubernetes.io/secret/44953449-e4f6-497a-b6bf-73fbdc9381b7-secret-kube-etcd-client-certs\") pod \"44953449-e4f6-497a-b6bf-73fbdc9381b7\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") "
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.137090 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/44953449-e4f6-497a-b6bf-73fbdc9381b7-configmap-metrics-client-ca\") pod \"44953449-e4f6-497a-b6bf-73fbdc9381b7\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") "
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: W0223 16:30:07.137099 2125 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/38f8ec67-c68b-4783-9d06-95eb33506398/volumes/kubernetes.io~configmap/telemeter-trusted-ca-bundle: clearQuota called, but quotas disabled
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.137121 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/b64af5e5-e41c-4886-a88b-39556a3f4b21-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"b64af5e5-e41c-4886-a88b-39556a3f4b21\" (UID: \"b64af5e5-e41c-4886-a88b-39556a3f4b21\") "
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.137149 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/44953449-e4f6-497a-b6bf-73fbdc9381b7-secret-kube-rbac-proxy\") pod \"44953449-e4f6-497a-b6bf-73fbdc9381b7\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") "
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.137178 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/44953449-e4f6-497a-b6bf-73fbdc9381b7-metrics-client-ca\") pod \"44953449-e4f6-497a-b6bf-73fbdc9381b7\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") "
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.137210 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/38f8ec67-c68b-4783-9d06-95eb33506398-secret-telemeter-client-kube-rbac-proxy-config\") pod \"38f8ec67-c68b-4783-9d06-95eb33506398\" (UID: \"38f8ec67-c68b-4783-9d06-95eb33506398\") "
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.137238 2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-proxy\" (UniqueName: \"kubernetes.io/secret/44953449-e4f6-497a-b6bf-73fbdc9381b7-secret-prometheus-k8s-proxy\") pod \"44953449-e4f6-497a-b6bf-73fbdc9381b7\" (UID: \"44953449-e4f6-497a-b6bf-73fbdc9381b7\") "
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.137264 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38f8ec67-c68b-4783-9d06-95eb33506398-telemeter-trusted-ca-bundle" (OuterVolumeSpecName: "telemeter-trusted-ca-bundle") pod "38f8ec67-c68b-4783-9d06-95eb33506398" (UID: "38f8ec67-c68b-4783-9d06-95eb33506398"). InnerVolumeSpecName "telemeter-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.137375 2125 reconciler.go:399] "Volume detached for volume \"thanos-querier-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b64af5e5-e41c-4886-a88b-39556a3f4b21-thanos-querier-trusted-ca-bundle\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.137394 2125 reconciler.go:399] "Volume detached for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38f8ec67-c68b-4783-9d06-95eb33506398-telemeter-trusted-ca-bundle\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: W0223 16:30:07.137489 2125 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/38f8ec67-c68b-4783-9d06-95eb33506398/volumes/kubernetes.io~configmap/serving-certs-ca-bundle: clearQuota called, but quotas disabled
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: W0223 16:30:07.137603 2125 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/38f8ec67-c68b-4783-9d06-95eb33506398/volumes/kubernetes.io~configmap/metrics-client-ca: clearQuota called, but quotas disabled
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.137624 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38f8ec67-c68b-4783-9d06-95eb33506398-serving-certs-ca-bundle" (OuterVolumeSpecName: "serving-certs-ca-bundle") pod "38f8ec67-c68b-4783-9d06-95eb33506398" (UID: "38f8ec67-c68b-4783-9d06-95eb33506398"). InnerVolumeSpecName "serving-certs-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.137762 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38f8ec67-c68b-4783-9d06-95eb33506398-metrics-client-ca" (OuterVolumeSpecName: "metrics-client-ca") pod "38f8ec67-c68b-4783-9d06-95eb33506398" (UID: "38f8ec67-c68b-4783-9d06-95eb33506398"). InnerVolumeSpecName "metrics-client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: W0223 16:30:07.137997 2125 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/b64af5e5-e41c-4886-a88b-39556a3f4b21/volumes/kubernetes.io~configmap/metrics-client-ca: clearQuota called, but quotas disabled
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.138125 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b64af5e5-e41c-4886-a88b-39556a3f4b21-metrics-client-ca" (OuterVolumeSpecName: "metrics-client-ca") pod "b64af5e5-e41c-4886-a88b-39556a3f4b21" (UID: "b64af5e5-e41c-4886-a88b-39556a3f4b21"). InnerVolumeSpecName "metrics-client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: W0223 16:30:07.138225 2125 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/44953449-e4f6-497a-b6bf-73fbdc9381b7/volumes/kubernetes.io~empty-dir/prometheus-k8s-db: clearQuota called, but quotas disabled
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.147648 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44953449-e4f6-497a-b6bf-73fbdc9381b7-prometheus-k8s-db" (OuterVolumeSpecName: "prometheus-k8s-db") pod "44953449-e4f6-497a-b6bf-73fbdc9381b7" (UID: "44953449-e4f6-497a-b6bf-73fbdc9381b7"). InnerVolumeSpecName "prometheus-k8s-db". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: W0223 16:30:07.147741 2125 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/44953449-e4f6-497a-b6bf-73fbdc9381b7/volumes/kubernetes.io~configmap/prometheus-trusted-ca-bundle: clearQuota called, but quotas disabled
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.147848 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44953449-e4f6-497a-b6bf-73fbdc9381b7-prometheus-trusted-ca-bundle" (OuterVolumeSpecName: "prometheus-trusted-ca-bundle") pod "44953449-e4f6-497a-b6bf-73fbdc9381b7" (UID: "44953449-e4f6-497a-b6bf-73fbdc9381b7"). InnerVolumeSpecName "prometheus-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: W0223 16:30:07.150542 2125 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/44953449-e4f6-497a-b6bf-73fbdc9381b7/volumes/kubernetes.io~empty-dir/config-out: clearQuota called, but quotas disabled
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.150648 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44953449-e4f6-497a-b6bf-73fbdc9381b7-config-out" (OuterVolumeSpecName: "config-out") pod "44953449-e4f6-497a-b6bf-73fbdc9381b7" (UID: "44953449-e4f6-497a-b6bf-73fbdc9381b7"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: W0223 16:30:07.150668 2125 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/44953449-e4f6-497a-b6bf-73fbdc9381b7/volumes/kubernetes.io~configmap/configmap-serving-certs-ca-bundle: clearQuota called, but quotas disabled
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: W0223 16:30:07.150717 2125 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/44953449-e4f6-497a-b6bf-73fbdc9381b7/volumes/kubernetes.io~configmap/prometheus-k8s-rulefiles-0: clearQuota called, but quotas disabled
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.150819 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44953449-e4f6-497a-b6bf-73fbdc9381b7-configmap-serving-certs-ca-bundle" (OuterVolumeSpecName: "configmap-serving-certs-ca-bundle") pod "44953449-e4f6-497a-b6bf-73fbdc9381b7" (UID: "44953449-e4f6-497a-b6bf-73fbdc9381b7"). InnerVolumeSpecName "configmap-serving-certs-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: W0223 16:30:07.151044 2125 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/44953449-e4f6-497a-b6bf-73fbdc9381b7/volumes/kubernetes.io~configmap/configmap-metrics-client-ca: clearQuota called, but quotas disabled
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.151194 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44953449-e4f6-497a-b6bf-73fbdc9381b7-configmap-metrics-client-ca" (OuterVolumeSpecName: "configmap-metrics-client-ca") pod "44953449-e4f6-497a-b6bf-73fbdc9381b7" (UID: "44953449-e4f6-497a-b6bf-73fbdc9381b7"). InnerVolumeSpecName "configmap-metrics-client-ca".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: W0223 16:30:07.151710 2125 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/44953449-e4f6-497a-b6bf-73fbdc9381b7/volumes/kubernetes.io~configmap/metrics-client-ca: clearQuota called, but quotas disabled Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.151858 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44953449-e4f6-497a-b6bf-73fbdc9381b7-metrics-client-ca" (OuterVolumeSpecName: "metrics-client-ca") pod "44953449-e4f6-497a-b6bf-73fbdc9381b7" (UID: "44953449-e4f6-497a-b6bf-73fbdc9381b7"). InnerVolumeSpecName "metrics-client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: W0223 16:30:07.151961 2125 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/44953449-e4f6-497a-b6bf-73fbdc9381b7/volumes/kubernetes.io~configmap/configmap-kubelet-serving-ca-bundle: clearQuota called, but quotas disabled Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.152090 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44953449-e4f6-497a-b6bf-73fbdc9381b7-configmap-kubelet-serving-ca-bundle" (OuterVolumeSpecName: "configmap-kubelet-serving-ca-bundle") pod "44953449-e4f6-497a-b6bf-73fbdc9381b7" (UID: "44953449-e4f6-497a-b6bf-73fbdc9381b7"). InnerVolumeSpecName "configmap-kubelet-serving-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.152505 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44953449-e4f6-497a-b6bf-73fbdc9381b7-prometheus-k8s-rulefiles-0" (OuterVolumeSpecName: "prometheus-k8s-rulefiles-0") pod "44953449-e4f6-497a-b6bf-73fbdc9381b7" (UID: "44953449-e4f6-497a-b6bf-73fbdc9381b7"). 
InnerVolumeSpecName "prometheus-k8s-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.152669 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38f8ec67-c68b-4783-9d06-95eb33506398-telemeter-client-tls" (OuterVolumeSpecName: "telemeter-client-tls") pod "38f8ec67-c68b-4783-9d06-95eb33506398" (UID: "38f8ec67-c68b-4783-9d06-95eb33506398"). InnerVolumeSpecName "telemeter-client-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.152958 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b64af5e5-e41c-4886-a88b-39556a3f4b21-kube-api-access-dzg2g" (OuterVolumeSpecName: "kube-api-access-dzg2g") pod "b64af5e5-e41c-4886-a88b-39556a3f4b21" (UID: "b64af5e5-e41c-4886-a88b-39556a3f4b21"). InnerVolumeSpecName "kube-api-access-dzg2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.153712 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b64af5e5-e41c-4886-a88b-39556a3f4b21-secret-thanos-querier-tls" (OuterVolumeSpecName: "secret-thanos-querier-tls") pod "b64af5e5-e41c-4886-a88b-39556a3f4b21" (UID: "b64af5e5-e41c-4886-a88b-39556a3f4b21"). InnerVolumeSpecName "secret-thanos-querier-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.155648 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b64af5e5-e41c-4886-a88b-39556a3f4b21-secret-thanos-querier-kube-rbac-proxy" (OuterVolumeSpecName: "secret-thanos-querier-kube-rbac-proxy") pod "b64af5e5-e41c-4886-a88b-39556a3f4b21" (UID: "b64af5e5-e41c-4886-a88b-39556a3f4b21"). InnerVolumeSpecName "secret-thanos-querier-kube-rbac-proxy". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.155709 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b64af5e5-e41c-4886-a88b-39556a3f4b21-secret-thanos-querier-oauth-cookie" (OuterVolumeSpecName: "secret-thanos-querier-oauth-cookie") pod "b64af5e5-e41c-4886-a88b-39556a3f4b21" (UID: "b64af5e5-e41c-4886-a88b-39556a3f4b21"). InnerVolumeSpecName "secret-thanos-querier-oauth-cookie". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.158051 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44953449-e4f6-497a-b6bf-73fbdc9381b7-secret-kube-rbac-proxy" (OuterVolumeSpecName: "secret-kube-rbac-proxy") pod "44953449-e4f6-497a-b6bf-73fbdc9381b7" (UID: "44953449-e4f6-497a-b6bf-73fbdc9381b7"). InnerVolumeSpecName "secret-kube-rbac-proxy". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.158328 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b64af5e5-e41c-4886-a88b-39556a3f4b21-secret-thanos-querier-kube-rbac-proxy-rules" (OuterVolumeSpecName: "secret-thanos-querier-kube-rbac-proxy-rules") pod "b64af5e5-e41c-4886-a88b-39556a3f4b21" (UID: "b64af5e5-e41c-4886-a88b-39556a3f4b21"). InnerVolumeSpecName "secret-thanos-querier-kube-rbac-proxy-rules". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.160073 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44953449-e4f6-497a-b6bf-73fbdc9381b7-config" (OuterVolumeSpecName: "config") pod "44953449-e4f6-497a-b6bf-73fbdc9381b7" (UID: "44953449-e4f6-497a-b6bf-73fbdc9381b7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.160088 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44953449-e4f6-497a-b6bf-73fbdc9381b7-secret-prometheus-k8s-thanos-sidecar-tls" (OuterVolumeSpecName: "secret-prometheus-k8s-thanos-sidecar-tls") pod "44953449-e4f6-497a-b6bf-73fbdc9381b7" (UID: "44953449-e4f6-497a-b6bf-73fbdc9381b7"). InnerVolumeSpecName "secret-prometheus-k8s-thanos-sidecar-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.160135 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44953449-e4f6-497a-b6bf-73fbdc9381b7-secret-prometheus-k8s-proxy" (OuterVolumeSpecName: "secret-prometheus-k8s-proxy") pod "44953449-e4f6-497a-b6bf-73fbdc9381b7" (UID: "44953449-e4f6-497a-b6bf-73fbdc9381b7"). InnerVolumeSpecName "secret-prometheus-k8s-proxy". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.160135 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b64af5e5-e41c-4886-a88b-39556a3f4b21-secret-grpc-tls" (OuterVolumeSpecName: "secret-grpc-tls") pod "b64af5e5-e41c-4886-a88b-39556a3f4b21" (UID: "b64af5e5-e41c-4886-a88b-39556a3f4b21"). InnerVolumeSpecName "secret-grpc-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.160209 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44953449-e4f6-497a-b6bf-73fbdc9381b7-secret-prometheus-k8s-tls" (OuterVolumeSpecName: "secret-prometheus-k8s-tls") pod "44953449-e4f6-497a-b6bf-73fbdc9381b7" (UID: "44953449-e4f6-497a-b6bf-73fbdc9381b7"). InnerVolumeSpecName "secret-prometheus-k8s-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.160236 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38f8ec67-c68b-4783-9d06-95eb33506398-secret-telemeter-client-kube-rbac-proxy-config" (OuterVolumeSpecName: "secret-telemeter-client-kube-rbac-proxy-config") pod "38f8ec67-c68b-4783-9d06-95eb33506398" (UID: "38f8ec67-c68b-4783-9d06-95eb33506398"). InnerVolumeSpecName "secret-telemeter-client-kube-rbac-proxy-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.160326 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38f8ec67-c68b-4783-9d06-95eb33506398-kube-api-access-2mxtt" (OuterVolumeSpecName: "kube-api-access-2mxtt") pod "38f8ec67-c68b-4783-9d06-95eb33506398" (UID: "38f8ec67-c68b-4783-9d06-95eb33506398"). InnerVolumeSpecName "kube-api-access-2mxtt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.160653 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44953449-e4f6-497a-b6bf-73fbdc9381b7-secret-metrics-client-certs" (OuterVolumeSpecName: "secret-metrics-client-certs") pod "44953449-e4f6-497a-b6bf-73fbdc9381b7" (UID: "44953449-e4f6-497a-b6bf-73fbdc9381b7"). InnerVolumeSpecName "secret-metrics-client-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.160688 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44953449-e4f6-497a-b6bf-73fbdc9381b7-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "44953449-e4f6-497a-b6bf-73fbdc9381b7" (UID: "44953449-e4f6-497a-b6bf-73fbdc9381b7"). InnerVolumeSpecName "tls-assets". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.160712 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44953449-e4f6-497a-b6bf-73fbdc9381b7-secret-kube-etcd-client-certs" (OuterVolumeSpecName: "secret-kube-etcd-client-certs") pod "44953449-e4f6-497a-b6bf-73fbdc9381b7" (UID: "44953449-e4f6-497a-b6bf-73fbdc9381b7"). InnerVolumeSpecName "secret-kube-etcd-client-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.161536 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44953449-e4f6-497a-b6bf-73fbdc9381b7-secret-grpc-tls" (OuterVolumeSpecName: "secret-grpc-tls") pod "44953449-e4f6-497a-b6bf-73fbdc9381b7" (UID: "44953449-e4f6-497a-b6bf-73fbdc9381b7"). InnerVolumeSpecName "secret-grpc-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.162544 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b64af5e5-e41c-4886-a88b-39556a3f4b21-secret-thanos-querier-kube-rbac-proxy-metrics" (OuterVolumeSpecName: "secret-thanos-querier-kube-rbac-proxy-metrics") pod "b64af5e5-e41c-4886-a88b-39556a3f4b21" (UID: "b64af5e5-e41c-4886-a88b-39556a3f4b21"). InnerVolumeSpecName "secret-thanos-querier-kube-rbac-proxy-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.162562 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44953449-e4f6-497a-b6bf-73fbdc9381b7-kube-api-access-rtqf6" (OuterVolumeSpecName: "kube-api-access-rtqf6") pod "44953449-e4f6-497a-b6bf-73fbdc9381b7" (UID: "44953449-e4f6-497a-b6bf-73fbdc9381b7"). InnerVolumeSpecName "kube-api-access-rtqf6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.163626 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38f8ec67-c68b-4783-9d06-95eb33506398-secret-telemeter-client" (OuterVolumeSpecName: "secret-telemeter-client") pod "38f8ec67-c68b-4783-9d06-95eb33506398" (UID: "38f8ec67-c68b-4783-9d06-95eb33506398"). InnerVolumeSpecName "secret-telemeter-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.170521 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44953449-e4f6-497a-b6bf-73fbdc9381b7-web-config" (OuterVolumeSpecName: "web-config") pod "44953449-e4f6-497a-b6bf-73fbdc9381b7" (UID: "44953449-e4f6-497a-b6bf-73fbdc9381b7"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.238006 2125 reconciler.go:399] "Volume detached for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b64af5e5-e41c-4886-a88b-39556a3f4b21-secret-thanos-querier-kube-rbac-proxy\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.238038 2125 reconciler.go:399] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/44953449-e4f6-497a-b6bf-73fbdc9381b7-config\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.238046 2125 reconciler.go:399] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/44953449-e4f6-497a-b6bf-73fbdc9381b7-tls-assets\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.238058 2125 
reconciler.go:399] "Volume detached for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/b64af5e5-e41c-4886-a88b-39556a3f4b21-secret-thanos-querier-tls\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.238066 2125 reconciler.go:399] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/44953449-e4f6-497a-b6bf-73fbdc9381b7-config-out\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.238075 2125 reconciler.go:399] "Volume detached for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/44953449-e4f6-497a-b6bf-73fbdc9381b7-secret-prometheus-k8s-thanos-sidecar-tls\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.238084 2125 reconciler.go:399] "Volume detached for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/44953449-e4f6-497a-b6bf-73fbdc9381b7-secret-prometheus-k8s-tls\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.238092 2125 reconciler.go:399] "Volume detached for volume \"kube-api-access-rtqf6\" (UniqueName: \"kubernetes.io/projected/44953449-e4f6-497a-b6bf-73fbdc9381b7-kube-api-access-rtqf6\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.238100 2125 reconciler.go:399] "Volume detached for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/38f8ec67-c68b-4783-9d06-95eb33506398-secret-telemeter-client\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.238108 2125 reconciler.go:399] "Volume 
detached for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/44953449-e4f6-497a-b6bf-73fbdc9381b7-secret-metrics-client-certs\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.238117 2125 reconciler.go:399] "Volume detached for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/b64af5e5-e41c-4886-a88b-39556a3f4b21-secret-thanos-querier-kube-rbac-proxy-rules\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.238127 2125 reconciler.go:399] "Volume detached for volume \"secret-kube-etcd-client-certs\" (UniqueName: \"kubernetes.io/secret/44953449-e4f6-497a-b6bf-73fbdc9381b7-secret-kube-etcd-client-certs\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.238136 2125 reconciler.go:399] "Volume detached for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/44953449-e4f6-497a-b6bf-73fbdc9381b7-configmap-metrics-client-ca\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.238145 2125 reconciler.go:399] "Volume detached for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/b64af5e5-e41c-4886-a88b-39556a3f4b21-secret-thanos-querier-kube-rbac-proxy-metrics\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.238153 2125 reconciler.go:399] "Volume detached for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/44953449-e4f6-497a-b6bf-73fbdc9381b7-secret-kube-rbac-proxy\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 16:30:07 ip-10-0-136-68 
kubenswrapper[2125]: I0223 16:30:07.238161 2125 reconciler.go:399] "Volume detached for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/44953449-e4f6-497a-b6bf-73fbdc9381b7-metrics-client-ca\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.238170 2125 reconciler.go:399] "Volume detached for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/38f8ec67-c68b-4783-9d06-95eb33506398-secret-telemeter-client-kube-rbac-proxy-config\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.238178 2125 reconciler.go:399] "Volume detached for volume \"secret-prometheus-k8s-proxy\" (UniqueName: \"kubernetes.io/secret/44953449-e4f6-497a-b6bf-73fbdc9381b7-secret-prometheus-k8s-proxy\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.238186 2125 reconciler.go:399] "Volume detached for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/38f8ec67-c68b-4783-9d06-95eb33506398-telemeter-client-tls\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.238194 2125 reconciler.go:399] "Volume detached for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/44953449-e4f6-497a-b6bf-73fbdc9381b7-secret-grpc-tls\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.238202 2125 reconciler.go:399] "Volume detached for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38f8ec67-c68b-4783-9d06-95eb33506398-serving-certs-ca-bundle\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: 
I0223 16:30:07.238210 2125 reconciler.go:399] "Volume detached for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b64af5e5-e41c-4886-a88b-39556a3f4b21-metrics-client-ca\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.238217 2125 reconciler.go:399] "Volume detached for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/44953449-e4f6-497a-b6bf-73fbdc9381b7-prometheus-k8s-db\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.238225 2125 reconciler.go:399] "Volume detached for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44953449-e4f6-497a-b6bf-73fbdc9381b7-prometheus-trusted-ca-bundle\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.238234 2125 reconciler.go:399] "Volume detached for volume \"secret-thanos-querier-oauth-cookie\" (UniqueName: \"kubernetes.io/secret/b64af5e5-e41c-4886-a88b-39556a3f4b21-secret-thanos-querier-oauth-cookie\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.238242 2125 reconciler.go:399] "Volume detached for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44953449-e4f6-497a-b6bf-73fbdc9381b7-configmap-serving-certs-ca-bundle\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.238250 2125 reconciler.go:399] "Volume detached for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/38f8ec67-c68b-4783-9d06-95eb33506398-metrics-client-ca\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 
16:30:07.238258 2125 reconciler.go:399] "Volume detached for volume \"kube-api-access-2mxtt\" (UniqueName: \"kubernetes.io/projected/38f8ec67-c68b-4783-9d06-95eb33506398-kube-api-access-2mxtt\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.238265 2125 reconciler.go:399] "Volume detached for volume \"kube-api-access-dzg2g\" (UniqueName: \"kubernetes.io/projected/b64af5e5-e41c-4886-a88b-39556a3f4b21-kube-api-access-dzg2g\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.238273 2125 reconciler.go:399] "Volume detached for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/b64af5e5-e41c-4886-a88b-39556a3f4b21-secret-grpc-tls\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.238297 2125 reconciler.go:399] "Volume detached for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/44953449-e4f6-497a-b6bf-73fbdc9381b7-prometheus-k8s-rulefiles-0\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.238309 2125 reconciler.go:399] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/44953449-e4f6-497a-b6bf-73fbdc9381b7-web-config\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.238322 2125 reconciler.go:399] "Volume detached for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44953449-e4f6-497a-b6bf-73fbdc9381b7-configmap-kubelet-serving-ca-bundle\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.590413 2125 logs.go:323] "Finished parsing log 
file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_44953449-e4f6-497a-b6bf-73fbdc9381b7/prometheus-proxy/0.log" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.590904 2125 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_44953449-e4f6-497a-b6bf-73fbdc9381b7/config-reloader/0.log" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.591375 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event=&{ID:44953449-e4f6-497a-b6bf-73fbdc9381b7 Type:ContainerDied Data:7d9bb22d3d6b32a0be618472ab15ad75f3da5566db899185fe66a17d6fe48fea} Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.591409 2125 scope.go:115] "RemoveContainer" containerID="6fb9e075075963e536ffbe1ace0af85fd92436c790ae86c0801797b201e917a0" Feb 23 16:30:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:07.592061525Z" level=info msg="Removing container: 6fb9e075075963e536ffbe1ace0af85fd92436c790ae86c0801797b201e917a0" id=9670d440-9faa-4606-a4db-ce1e350e48c2 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.593639 2125 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-8654d9f96d-892l6_b64af5e5-e41c-4886-a88b-39556a3f4b21/oauth-proxy/0.log" Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.593880 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8654d9f96d-892l6" event=&{ID:b64af5e5-e41c-4886-a88b-39556a3f4b21 Type:ContainerDied Data:8a6086c30905e9778328b4943c655c0e9e8c2c54372aff32cd2923ed14af9ed1} Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.600409 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-675d948766-44b26" event=&{ID:38f8ec67-c68b-4783-9d06-95eb33506398 Type:ContainerDied 
Data:371d339c2a21dacabc72f32475cd502a6c85f1c6b15537476830da5f7871b8c6}
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: Removed slice libcontainer container kubepods-burstable-pod44953449_e4f6_497a_b6bf_73fbdc9381b7.slice.
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: kubepods-burstable-pod44953449_e4f6_497a_b6bf_73fbdc9381b7.slice: Consumed 3min 22.973s CPU time
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: Removed slice libcontainer container kubepods-burstable-podb64af5e5_e41c_4886_a88b_39556a3f4b21.slice.
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: kubepods-burstable-podb64af5e5_e41c_4886_a88b_39556a3f4b21.slice: Consumed 7.708s CPU time
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: Removed slice libcontainer container kubepods-burstable-pod38f8ec67_c68b_4783_9d06_95eb33506398.slice.
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: kubepods-burstable-pod38f8ec67_c68b_4783_9d06_95eb33506398.slice: Consumed 1.335s CPU time
Feb 23 16:30:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:07.621389609Z" level=info msg="Removed container 6fb9e075075963e536ffbe1ace0af85fd92436c790ae86c0801797b201e917a0: openshift-monitoring/prometheus-k8s-0/kube-rbac-proxy-thanos" id=9670d440-9faa-4606-a4db-ce1e350e48c2 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.621533    2125 scope.go:115] "RemoveContainer" containerID="f6ec70951d846db90a5f182543bbcee52a7a42dd83b37387fdcfa08f18d7c2d4"
Feb 23 16:30:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:07.622131549Z" level=info msg="Removing container: f6ec70951d846db90a5f182543bbcee52a7a42dd83b37387fdcfa08f18d7c2d4" id=92400f4f-663e-4046-89f1-f23f3b186569 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.634442    2125 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-monitoring/prometheus-k8s-0]
Feb 23 16:30:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:07.637937656Z" level=info msg="Removed container f6ec70951d846db90a5f182543bbcee52a7a42dd83b37387fdcfa08f18d7c2d4: openshift-monitoring/prometheus-k8s-0/kube-rbac-proxy" id=92400f4f-663e-4046-89f1-f23f3b186569 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.638077    2125 scope.go:115] "RemoveContainer" containerID="e0786e1c591679de011e49dbda0b424701e11e394b89019bfe08d72435181f9b"
Feb 23 16:30:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:07.638774551Z" level=info msg="Removing container: e0786e1c591679de011e49dbda0b424701e11e394b89019bfe08d72435181f9b" id=df895b8c-4072-414a-b0ea-83fb15bcd470 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.645058    2125 kubelet.go:2129] "SyncLoop REMOVE" source="api" pods=[openshift-monitoring/prometheus-k8s-0]
Feb 23 16:30:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:07.653646914Z" level=info msg="Removed container e0786e1c591679de011e49dbda0b424701e11e394b89019bfe08d72435181f9b: openshift-monitoring/prometheus-k8s-0/prometheus-proxy" id=df895b8c-4072-414a-b0ea-83fb15bcd470 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.653807    2125 scope.go:115] "RemoveContainer" containerID="0ee72837a22915d2270060bc8bc01dcf5dfd455dcbb42126e1c67b30b3e100df"
Feb 23 16:30:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:07.654516662Z" level=info msg="Removing container: 0ee72837a22915d2270060bc8bc01dcf5dfd455dcbb42126e1c67b30b3e100df" id=4d63c1b9-0bfc-4a15-a7cc-784ea9e3033e name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 16:30:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:07.668662807Z" level=info msg="Removed container 0ee72837a22915d2270060bc8bc01dcf5dfd455dcbb42126e1c67b30b3e100df: openshift-monitoring/prometheus-k8s-0/thanos-sidecar" id=4d63c1b9-0bfc-4a15-a7cc-784ea9e3033e name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.668804    2125 scope.go:115] "RemoveContainer" containerID="84be385df2fdf41e3622fffb41e65e3068ec0852f16638e9b6669515b7c875fd"
Feb 23 16:30:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:07.669376556Z" level=info msg="Removing container: 84be385df2fdf41e3622fffb41e65e3068ec0852f16638e9b6669515b7c875fd" id=3fa54a61-eb6c-417d-b5f1-087205c51280 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.674856    2125 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-monitoring/thanos-querier-8654d9f96d-892l6]
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.677620    2125 kubelet.go:2129] "SyncLoop REMOVE" source="api" pods=[openshift-monitoring/thanos-querier-8654d9f96d-892l6]
Feb 23 16:30:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:07.682214963Z" level=info msg="Removed container 84be385df2fdf41e3622fffb41e65e3068ec0852f16638e9b6669515b7c875fd: openshift-monitoring/prometheus-k8s-0/config-reloader" id=3fa54a61-eb6c-417d-b5f1-087205c51280 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.682390    2125 scope.go:115] "RemoveContainer" containerID="ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9"
Feb 23 16:30:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:07.683149584Z" level=info msg="Removing container: ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9" id=b5cd5f90-9054-461a-bfd8-fa392ec90c25 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 16:30:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:07.698423017Z" level=info msg="Removed container ffde97064847971ca800dd10edb3527ece408ad8c7c88ba27cde311ab90bfbb9: openshift-monitoring/prometheus-k8s-0/prometheus" id=b5cd5f90-9054-461a-bfd8-fa392ec90c25 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.698564    2125 scope.go:115] "RemoveContainer" containerID="2bbfa625be42b3c089448f057ecd3391b8f4b834b3e68c9dade0c47820619485"
Feb 23 16:30:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:07.699867259Z" level=info msg="Removing container: 2bbfa625be42b3c089448f057ecd3391b8f4b834b3e68c9dade0c47820619485" id=5627efa7-9a2b-4b70-a60e-defe73f173bb name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.712187    2125 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-monitoring/telemeter-client-675d948766-44b26]
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.719252    2125 kubelet.go:2129] "SyncLoop REMOVE" source="api" pods=[openshift-monitoring/telemeter-client-675d948766-44b26]
Feb 23 16:30:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:07.724267393Z" level=info msg="Removed container 2bbfa625be42b3c089448f057ecd3391b8f4b834b3e68c9dade0c47820619485: openshift-monitoring/prometheus-k8s-0/init-config-reloader" id=5627efa7-9a2b-4b70-a60e-defe73f173bb name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.724456    2125 scope.go:115] "RemoveContainer" containerID="a74ef9ae477239bf005007e9df5cd3c09410b16013de777146bb48eb8458d2ec"
Feb 23 16:30:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:07.724999574Z" level=info msg="Removing container: a74ef9ae477239bf005007e9df5cd3c09410b16013de777146bb48eb8458d2ec" id=246b75f6-debf-47d3-9e49-9c2fffed857b name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.733546    2125 status_manager.go:677] "Pod was deleted and then recreated, skipping status update" pod="openshift-monitoring/prometheus-k8s-0" oldPodUID=44953449-e4f6-497a-b6bf-73fbdc9381b7 podUID=bd18d6a8-c90e-4490-ae83-b1e09a3d7bf2
Feb 23 16:30:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:07.740548609Z" level=info msg="Removed container a74ef9ae477239bf005007e9df5cd3c09410b16013de777146bb48eb8458d2ec: openshift-monitoring/thanos-querier-8654d9f96d-892l6/kube-rbac-proxy-metrics" id=246b75f6-debf-47d3-9e49-9c2fffed857b name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.740734    2125 scope.go:115] "RemoveContainer" containerID="ad4569ece3c8dbfbd816f5aae7fb4f4f3ee974e02cab9086fd8e502453513833"
Feb 23 16:30:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:07.741408746Z" level=info msg="Removing container: ad4569ece3c8dbfbd816f5aae7fb4f4f3ee974e02cab9086fd8e502453513833" id=ca90ac4a-9854-4e32-a631-796dba927a10 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 16:30:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:07.755535457Z" level=info msg="Removed container ad4569ece3c8dbfbd816f5aae7fb4f4f3ee974e02cab9086fd8e502453513833: openshift-monitoring/thanos-querier-8654d9f96d-892l6/kube-rbac-proxy-rules" id=ca90ac4a-9854-4e32-a631-796dba927a10 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.755680    2125 scope.go:115] "RemoveContainer" containerID="6ec22d9e3a7948b71a8662137640422227165ab747ce1a332f058d44afb09b79"
Feb 23 16:30:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:07.756305206Z" level=info msg="Removing container: 6ec22d9e3a7948b71a8662137640422227165ab747ce1a332f058d44afb09b79" id=3e79df9f-f5f2-4210-ae54-b9969334c6a8 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 16:30:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:07.769799064Z" level=info msg="Removed container 6ec22d9e3a7948b71a8662137640422227165ab747ce1a332f058d44afb09b79: openshift-monitoring/thanos-querier-8654d9f96d-892l6/prom-label-proxy" id=3e79df9f-f5f2-4210-ae54-b9969334c6a8 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.769971    2125 scope.go:115] "RemoveContainer" containerID="d461b2a5bdb94e8630b89b525f3f963207c52b64660501567576486b96284064"
Feb 23 16:30:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:07.770545359Z" level=info msg="Removing container: d461b2a5bdb94e8630b89b525f3f963207c52b64660501567576486b96284064" id=57127713-ceb7-409c-a014-bfe82d47df03 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 16:30:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:07.785127159Z" level=info msg="Removed container d461b2a5bdb94e8630b89b525f3f963207c52b64660501567576486b96284064: openshift-monitoring/thanos-querier-8654d9f96d-892l6/kube-rbac-proxy" id=57127713-ceb7-409c-a014-bfe82d47df03 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.785360    2125 scope.go:115] "RemoveContainer" containerID="bbf1306bcd9ff05a37236005b802aa46a8396168bcd305011cac3c3232d499a5"
Feb 23 16:30:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:07.786017413Z" level=info msg="Removing container: bbf1306bcd9ff05a37236005b802aa46a8396168bcd305011cac3c3232d499a5" id=70999e68-39b2-4c63-9bc5-8a8e2c377420 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 16:30:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:07.799511981Z" level=info msg="Removed container bbf1306bcd9ff05a37236005b802aa46a8396168bcd305011cac3c3232d499a5: openshift-monitoring/thanos-querier-8654d9f96d-892l6/oauth-proxy" id=70999e68-39b2-4c63-9bc5-8a8e2c377420 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.799694    2125 scope.go:115] "RemoveContainer" containerID="9a1b60140efe12b393780f2fd66235c0cf6be125b52b60b9605fbccb504832c8"
Feb 23 16:30:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:07.800313969Z" level=info msg="Removing container: 9a1b60140efe12b393780f2fd66235c0cf6be125b52b60b9605fbccb504832c8" id=f36244f9-6a67-4e7b-8509-b041f4bac5a2 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 16:30:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:07.814186883Z" level=info msg="Removed container 9a1b60140efe12b393780f2fd66235c0cf6be125b52b60b9605fbccb504832c8: openshift-monitoring/thanos-querier-8654d9f96d-892l6/thanos-query" id=f36244f9-6a67-4e7b-8509-b041f4bac5a2 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.814366    2125 scope.go:115] "RemoveContainer" containerID="94bdf33dfcce86819453e831fc906daae77b4db41db2f8adcdcf6512b546c23e"
Feb 23 16:30:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:07.814929437Z" level=info msg="Removing container: 94bdf33dfcce86819453e831fc906daae77b4db41db2f8adcdcf6512b546c23e" id=84441721-6045-4dc0-80f8-4d0c50a3ec99 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 16:30:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:07.828426674Z" level=info msg="Removed container 94bdf33dfcce86819453e831fc906daae77b4db41db2f8adcdcf6512b546c23e: openshift-monitoring/telemeter-client-675d948766-44b26/kube-rbac-proxy" id=84441721-6045-4dc0-80f8-4d0c50a3ec99 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.828563    2125 scope.go:115] "RemoveContainer" containerID="29314d4a1c76968c319f38a1e8ff6536eb151d6c7d7ebcba484834db4b4f1152"
Feb 23 16:30:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:07.829146676Z" level=info msg="Removing container: 29314d4a1c76968c319f38a1e8ff6536eb151d6c7d7ebcba484834db4b4f1152" id=58a88f00-b1e5-4af3-80eb-7b8183ab24b8 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 16:30:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:07.842948443Z" level=info msg="Removed container 29314d4a1c76968c319f38a1e8ff6536eb151d6c7d7ebcba484834db4b4f1152: openshift-monitoring/telemeter-client-675d948766-44b26/reload" id=58a88f00-b1e5-4af3-80eb-7b8183ab24b8 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 16:30:07 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:07.843077    2125 scope.go:115] "RemoveContainer" containerID="b281e29e0f86a35608fd84582645011098d8b627136b99702b86457377d0f7da"
Feb 23 16:30:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:07.843595529Z" level=info msg="Removing container: b281e29e0f86a35608fd84582645011098d8b627136b99702b86457377d0f7da" id=dbceca09-70a1-4d64-a895-243992ce6430 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-44953449\x2de4f6\x2d497a\x2db6bf\x2d73fbdc9381b7-volume\x2dsubpaths-web\x2dconfig-prometheus-5.mount: Succeeded.
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-44953449\x2de4f6\x2d497a\x2db6bf\x2d73fbdc9381b7-volume\x2dsubpaths-web\x2dconfig-prometheus-5.mount: Consumed 0 CPU time
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-4779e361c5ed1cf598cba8637015cfc7b53fe9b10aa4be9fb0e4c975ea7fecdc-merged.mount: Succeeded.
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-4779e361c5ed1cf598cba8637015cfc7b53fe9b10aa4be9fb0e4c975ea7fecdc-merged.mount: Consumed 0 CPU time
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: run-netns-2c559b52\x2d3d31\x2d49d8\x2d80de\x2d874e670a2653.mount: Succeeded.
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: run-netns-2c559b52\x2d3d31\x2d49d8\x2d80de\x2d874e670a2653.mount: Consumed 0 CPU time
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: run-ipcns-2c559b52\x2d3d31\x2d49d8\x2d80de\x2d874e670a2653.mount: Succeeded.
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: run-ipcns-2c559b52\x2d3d31\x2d49d8\x2d80de\x2d874e670a2653.mount: Consumed 0 CPU time
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: run-utsns-2c559b52\x2d3d31\x2d49d8\x2d80de\x2d874e670a2653.mount: Succeeded.
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: run-utsns-2c559b52\x2d3d31\x2d49d8\x2d80de\x2d874e670a2653.mount: Consumed 0 CPU time
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-7d9bb22d3d6b32a0be618472ab15ad75f3da5566db899185fe66a17d6fe48fea-userdata-shm.mount: Succeeded.
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-7d9bb22d3d6b32a0be618472ab15ad75f3da5566db899185fe66a17d6fe48fea-userdata-shm.mount: Consumed 0 CPU time
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-44953449\x2de4f6\x2d497a\x2db6bf\x2d73fbdc9381b7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drtqf6.mount: Succeeded.
Feb 23 16:30:07 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:07.856376113Z" level=info msg="Removed container b281e29e0f86a35608fd84582645011098d8b627136b99702b86457377d0f7da: openshift-monitoring/telemeter-client-675d948766-44b26/telemeter-client" id=dbceca09-70a1-4d64-a895-243992ce6430 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-44953449\x2de4f6\x2d497a\x2db6bf\x2d73fbdc9381b7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drtqf6.mount: Consumed 0 CPU time
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-44953449\x2de4f6\x2d497a\x2db6bf\x2d73fbdc9381b7-volumes-kubernetes.io\x7esecret-web\x2dconfig.mount: Succeeded.
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-44953449\x2de4f6\x2d497a\x2db6bf\x2d73fbdc9381b7-volumes-kubernetes.io\x7esecret-web\x2dconfig.mount: Consumed 0 CPU time
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-44953449\x2de4f6\x2d497a\x2db6bf\x2d73fbdc9381b7-volumes-kubernetes.io\x7esecret-secret\x2dprometheus\x2dk8s\x2dproxy.mount: Succeeded.
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-44953449\x2de4f6\x2d497a\x2db6bf\x2d73fbdc9381b7-volumes-kubernetes.io\x7esecret-secret\x2dprometheus\x2dk8s\x2dproxy.mount: Consumed 0 CPU time
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-44953449\x2de4f6\x2d497a\x2db6bf\x2d73fbdc9381b7-volumes-kubernetes.io\x7esecret-secret\x2dmetrics\x2dclient\x2dcerts.mount: Succeeded.
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-44953449\x2de4f6\x2d497a\x2db6bf\x2d73fbdc9381b7-volumes-kubernetes.io\x7esecret-secret\x2dmetrics\x2dclient\x2dcerts.mount: Consumed 0 CPU time
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-44953449\x2de4f6\x2d497a\x2db6bf\x2d73fbdc9381b7-volumes-kubernetes.io\x7eprojected-tls\x2dassets.mount: Succeeded.
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-44953449\x2de4f6\x2d497a\x2db6bf\x2d73fbdc9381b7-volumes-kubernetes.io\x7eprojected-tls\x2dassets.mount: Consumed 0 CPU time
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-44953449\x2de4f6\x2d497a\x2db6bf\x2d73fbdc9381b7-volumes-kubernetes.io\x7esecret-secret\x2dprometheus\x2dk8s\x2dthanos\x2dsidecar\x2dtls.mount: Succeeded.
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-44953449\x2de4f6\x2d497a\x2db6bf\x2d73fbdc9381b7-volumes-kubernetes.io\x7esecret-secret\x2dprometheus\x2dk8s\x2dthanos\x2dsidecar\x2dtls.mount: Consumed 0 CPU time
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-44953449\x2de4f6\x2d497a\x2db6bf\x2d73fbdc9381b7-volumes-kubernetes.io\x7esecret-secret\x2dkube\x2detcd\x2dclient\x2dcerts.mount: Succeeded.
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-44953449\x2de4f6\x2d497a\x2db6bf\x2d73fbdc9381b7-volumes-kubernetes.io\x7esecret-secret\x2dkube\x2detcd\x2dclient\x2dcerts.mount: Consumed 0 CPU time
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-44953449\x2de4f6\x2d497a\x2db6bf\x2d73fbdc9381b7-volumes-kubernetes.io\x7esecret-config.mount: Succeeded.
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-44953449\x2de4f6\x2d497a\x2db6bf\x2d73fbdc9381b7-volumes-kubernetes.io\x7esecret-config.mount: Consumed 0 CPU time
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-44953449\x2de4f6\x2d497a\x2db6bf\x2d73fbdc9381b7-volumes-kubernetes.io\x7esecret-secret\x2dprometheus\x2dk8s\x2dtls.mount: Succeeded.
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-44953449\x2de4f6\x2d497a\x2db6bf\x2d73fbdc9381b7-volumes-kubernetes.io\x7esecret-secret\x2dprometheus\x2dk8s\x2dtls.mount: Consumed 0 CPU time
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-44953449\x2de4f6\x2d497a\x2db6bf\x2d73fbdc9381b7-volumes-kubernetes.io\x7esecret-secret\x2dgrpc\x2dtls.mount: Succeeded.
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-44953449\x2de4f6\x2d497a\x2db6bf\x2d73fbdc9381b7-volumes-kubernetes.io\x7esecret-secret\x2dgrpc\x2dtls.mount: Consumed 0 CPU time
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-44953449\x2de4f6\x2d497a\x2db6bf\x2d73fbdc9381b7-volumes-kubernetes.io\x7esecret-secret\x2dkube\x2drbac\x2dproxy.mount: Succeeded.
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-44953449\x2de4f6\x2d497a\x2db6bf\x2d73fbdc9381b7-volumes-kubernetes.io\x7esecret-secret\x2dkube\x2drbac\x2dproxy.mount: Consumed 0 CPU time
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: run-netns-aab5b03a\x2d6d12\x2d49fb\x2d9628\x2d2d412137c7fb.mount: Succeeded.
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: run-netns-aab5b03a\x2d6d12\x2d49fb\x2d9628\x2d2d412137c7fb.mount: Consumed 0 CPU time
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-8a6086c30905e9778328b4943c655c0e9e8c2c54372aff32cd2923ed14af9ed1-userdata-shm.mount: Succeeded.
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-8a6086c30905e9778328b4943c655c0e9e8c2c54372aff32cd2923ed14af9ed1-userdata-shm.mount: Consumed 0 CPU time
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-b64af5e5\x2de41c\x2d4886\x2da88b\x2d39556a3f4b21-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddzg2g.mount: Succeeded.
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-b64af5e5\x2de41c\x2d4886\x2da88b\x2d39556a3f4b21-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddzg2g.mount: Consumed 0 CPU time
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-b64af5e5\x2de41c\x2d4886\x2da88b\x2d39556a3f4b21-volumes-kubernetes.io\x7esecret-secret\x2dgrpc\x2dtls.mount: Succeeded.
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-b64af5e5\x2de41c\x2d4886\x2da88b\x2d39556a3f4b21-volumes-kubernetes.io\x7esecret-secret\x2dgrpc\x2dtls.mount: Consumed 0 CPU time
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-b64af5e5\x2de41c\x2d4886\x2da88b\x2d39556a3f4b21-volumes-kubernetes.io\x7esecret-secret\x2dthanos\x2dquerier\x2dkube\x2drbac\x2dproxy\x2drules.mount: Succeeded.
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-b64af5e5\x2de41c\x2d4886\x2da88b\x2d39556a3f4b21-volumes-kubernetes.io\x7esecret-secret\x2dthanos\x2dquerier\x2dkube\x2drbac\x2dproxy\x2drules.mount: Consumed 0 CPU time
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-b64af5e5\x2de41c\x2d4886\x2da88b\x2d39556a3f4b21-volumes-kubernetes.io\x7esecret-secret\x2dthanos\x2dquerier\x2doauth\x2dcookie.mount: Succeeded.
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-b64af5e5\x2de41c\x2d4886\x2da88b\x2d39556a3f4b21-volumes-kubernetes.io\x7esecret-secret\x2dthanos\x2dquerier\x2doauth\x2dcookie.mount: Consumed 0 CPU time
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-b64af5e5\x2de41c\x2d4886\x2da88b\x2d39556a3f4b21-volumes-kubernetes.io\x7esecret-secret\x2dthanos\x2dquerier\x2dkube\x2drbac\x2dproxy\x2dmetrics.mount: Succeeded.
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-b64af5e5\x2de41c\x2d4886\x2da88b\x2d39556a3f4b21-volumes-kubernetes.io\x7esecret-secret\x2dthanos\x2dquerier\x2dkube\x2drbac\x2dproxy\x2dmetrics.mount: Consumed 0 CPU time
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-b64af5e5\x2de41c\x2d4886\x2da88b\x2d39556a3f4b21-volumes-kubernetes.io\x7esecret-secret\x2dthanos\x2dquerier\x2dkube\x2drbac\x2dproxy.mount: Succeeded.
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-b64af5e5\x2de41c\x2d4886\x2da88b\x2d39556a3f4b21-volumes-kubernetes.io\x7esecret-secret\x2dthanos\x2dquerier\x2dkube\x2drbac\x2dproxy.mount: Consumed 0 CPU time
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-b64af5e5\x2de41c\x2d4886\x2da88b\x2d39556a3f4b21-volumes-kubernetes.io\x7esecret-secret\x2dthanos\x2dquerier\x2dtls.mount: Succeeded.
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-b64af5e5\x2de41c\x2d4886\x2da88b\x2d39556a3f4b21-volumes-kubernetes.io\x7esecret-secret\x2dthanos\x2dquerier\x2dtls.mount: Consumed 0 CPU time
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-38f8ec67\x2dc68b\x2d4783\x2d9d06\x2d95eb33506398-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2mxtt.mount: Succeeded.
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-38f8ec67\x2dc68b\x2d4783\x2d9d06\x2d95eb33506398-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2mxtt.mount: Consumed 0 CPU time
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-38f8ec67\x2dc68b\x2d4783\x2d9d06\x2d95eb33506398-volumes-kubernetes.io\x7esecret-telemeter\x2dclient\x2dtls.mount: Succeeded.
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-38f8ec67\x2dc68b\x2d4783\x2d9d06\x2d95eb33506398-volumes-kubernetes.io\x7esecret-telemeter\x2dclient\x2dtls.mount: Consumed 0 CPU time
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-38f8ec67\x2dc68b\x2d4783\x2d9d06\x2d95eb33506398-volumes-kubernetes.io\x7esecret-secret\x2dtelemeter\x2dclient.mount: Succeeded.
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-38f8ec67\x2dc68b\x2d4783\x2d9d06\x2d95eb33506398-volumes-kubernetes.io\x7esecret-secret\x2dtelemeter\x2dclient.mount: Consumed 0 CPU time
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-38f8ec67\x2dc68b\x2d4783\x2d9d06\x2d95eb33506398-volumes-kubernetes.io\x7esecret-secret\x2dtelemeter\x2dclient\x2dkube\x2drbac\x2dproxy\x2dconfig.mount: Succeeded.
Feb 23 16:30:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-38f8ec67\x2dc68b\x2d4783\x2d9d06\x2d95eb33506398-volumes-kubernetes.io\x7esecret-secret\x2dtelemeter\x2dclient\x2dkube\x2drbac\x2dproxy\x2dconfig.mount: Consumed 0 CPU time
Feb 23 16:30:08 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:08.395579928Z" level=info msg="Stopping pod sandbox: 8a6086c30905e9778328b4943c655c0e9e8c2c54372aff32cd2923ed14af9ed1" id=af67ee16-7fbc-4104-be64-5aaefc5002a1 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 16:30:08 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:08.395631244Z" level=info msg="Stopped pod sandbox (already stopped): 8a6086c30905e9778328b4943c655c0e9e8c2c54372aff32cd2923ed14af9ed1" id=af67ee16-7fbc-4104-be64-5aaefc5002a1 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 16:30:08 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:08.395583720Z" level=info msg="Stopping pod sandbox: 371d339c2a21dacabc72f32475cd502a6c85f1c6b15537476830da5f7871b8c6" id=7e1c5341-623a-4f53-8738-920094b3d422 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 16:30:08 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:08.395582988Z" level=info msg="Stopping pod sandbox: 7d9bb22d3d6b32a0be618472ab15ad75f3da5566db899185fe66a17d6fe48fea" id=4299da92-4275-4990-af7a-bc8b7e2a8730 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 16:30:08 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:08.395724939Z" level=info msg="Stopped pod sandbox (already stopped): 7d9bb22d3d6b32a0be618472ab15ad75f3da5566db899185fe66a17d6fe48fea" id=4299da92-4275-4990-af7a-bc8b7e2a8730 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 16:30:08 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:08.395699546Z" level=info msg="Stopped pod sandbox (already stopped): 371d339c2a21dacabc72f32475cd502a6c85f1c6b15537476830da5f7871b8c6" id=7e1c5341-623a-4f53-8738-920094b3d422 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 16:30:08 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:08.396635    2125 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=27c4fe09-e4f7-452d-9364-2daec20710ff path="/var/lib/kubelet/pods/27c4fe09-e4f7-452d-9364-2daec20710ff/volumes"
Feb 23 16:30:08 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:08.396967    2125 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=38f8ec67-c68b-4783-9d06-95eb33506398 path="/var/lib/kubelet/pods/38f8ec67-c68b-4783-9d06-95eb33506398/volumes"
Feb 23 16:30:08 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:08.397422    2125 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=44953449-e4f6-497a-b6bf-73fbdc9381b7 path="/var/lib/kubelet/pods/44953449-e4f6-497a-b6bf-73fbdc9381b7/volumes"
Feb 23 16:30:08 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:08.397959    2125 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=b64af5e5-e41c-4886-a88b-39556a3f4b21 path="/var/lib/kubelet/pods/b64af5e5-e41c-4886-a88b-39556a3f4b21/volumes"
Feb 23 16:30:09 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00323|connmgr|INFO|br-ex<->unix#1236: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:30:10 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00324|connmgr|INFO|br-int<->unix#2: 226 flow_mods in the 8 s starting 10 s ago (50 adds, 176 deletes)
Feb 23 16:30:13 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:13.531522    2125 kubelet_node_status.go:590] "Recording event message for node" node="ip-10-0-136-68.us-west-2.compute.internal" event="NodeNotSchedulable"
Feb 23 16:30:24 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00325|connmgr|INFO|br-ex<->unix#1245: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:30:30 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:30.170342    2125 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-image-registry/image-registry-5f79c9c848-f9wqq" podUID=cb82d201-6c85-46b2-9687-01dcb20bf97b containerName="registry" containerID="cri-o://acc9bfa13f55b1fc1f8a861016673fb26097df2dec005d1acfd1ff58bbb015ea" gracePeriod=55
Feb 23 16:30:30 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:30.170582212Z" level=info msg="Stopping container: acc9bfa13f55b1fc1f8a861016673fb26097df2dec005d1acfd1ff58bbb015ea (timeout: 55s)" id=72a22f43-19ef-4f15-ac25-0ef1c504c7af name=/runtime.v1.RuntimeService/StopContainer
Feb 23 16:30:31 ip-10-0-136-68 systemd[1]: crio-acc9bfa13f55b1fc1f8a861016673fb26097df2dec005d1acfd1ff58bbb015ea.scope: Succeeded.
Feb 23 16:30:31 ip-10-0-136-68 systemd[1]: crio-acc9bfa13f55b1fc1f8a861016673fb26097df2dec005d1acfd1ff58bbb015ea.scope: Consumed 3.497s CPU time
Feb 23 16:30:31 ip-10-0-136-68 systemd[1]: crio-conmon-acc9bfa13f55b1fc1f8a861016673fb26097df2dec005d1acfd1ff58bbb015ea.scope: Succeeded.
Feb 23 16:30:31 ip-10-0-136-68 systemd[1]: crio-conmon-acc9bfa13f55b1fc1f8a861016673fb26097df2dec005d1acfd1ff58bbb015ea.scope: Consumed 69ms CPU time
Feb 23 16:30:31 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-9425df1660e641c250b854112981d8f55181efc9da32c85c12c2e2b572e2a620-merged.mount: Succeeded.
Feb 23 16:30:31 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-9425df1660e641c250b854112981d8f55181efc9da32c85c12c2e2b572e2a620-merged.mount: Consumed 0 CPU time
Feb 23 16:30:31 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:31.349944115Z" level=info msg="Stopped container acc9bfa13f55b1fc1f8a861016673fb26097df2dec005d1acfd1ff58bbb015ea: openshift-image-registry/image-registry-5f79c9c848-f9wqq/registry" id=72a22f43-19ef-4f15-ac25-0ef1c504c7af name=/runtime.v1.RuntimeService/StopContainer
Feb 23 16:30:31 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:31.350300809Z" level=info msg="Stopping pod sandbox: 95bfaaf9735ddc38d1e2fa578051b1d31d557143706d6710a65b8a99d259ac8c" id=02eec4c3-a04a-4e72-9aa3-dec9546c08ce name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 16:30:31 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:31.350462873Z" level=info msg="Got pod network &{Name:image-registry-5f79c9c848-f9wqq Namespace:openshift-image-registry ID:95bfaaf9735ddc38d1e2fa578051b1d31d557143706d6710a65b8a99d259ac8c UID:cb82d201-6c85-46b2-9687-01dcb20bf97b NetNS:/var/run/netns/9c3b2d60-aa85-41e3-819c-a009d3296b0e Networks:[{Name:multus-cni-network Ifname:eth0}] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 16:30:31 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:31.350566852Z" level=info msg="Deleting pod openshift-image-registry_image-registry-5f79c9c848-f9wqq from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 16:30:31 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:31.475316    2125 patch_prober.go:29] interesting pod/image-registry-5f79c9c848-f9wqq container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.129.2.7:5000/healthz\": dial tcp 10.129.2.7:5000: connect: connection refused" start-of-body=
Feb 23 16:30:31 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:31.475367    2125 prober.go:114] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-5f79c9c848-f9wqq" podUID=cb82d201-6c85-46b2-9687-01dcb20bf97b containerName="registry" probeResult=failure output="Get \"https://10.129.2.7:5000/healthz\": dial tcp 10.129.2.7:5000: connect: connection refused"
Feb 23 16:30:31 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00326|bridge|INFO|bridge br-int: deleted interface 95bfaaf9735ddc3 on port 13
Feb 23 16:30:31 ip-10-0-136-68 kernel: device 95bfaaf9735ddc3 left promiscuous mode
Feb 23 16:30:31 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:31.664380    2125 generic.go:296] "Generic (PLEG): container finished" podID=cb82d201-6c85-46b2-9687-01dcb20bf97b containerID="acc9bfa13f55b1fc1f8a861016673fb26097df2dec005d1acfd1ff58bbb015ea" exitCode=0
Feb 23 16:30:31 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:31.664414    2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5f79c9c848-f9wqq" event=&{ID:cb82d201-6c85-46b2-9687-01dcb20bf97b Type:ContainerDied Data:acc9bfa13f55b1fc1f8a861016673fb26097df2dec005d1acfd1ff58bbb015ea}
Feb 23 16:30:31 ip-10-0-136-68 crio[2086]: 2023-02-23T16:30:31Z [verbose] Del: openshift-image-registry:image-registry-5f79c9c848-f9wqq:cb82d201-6c85-46b2-9687-01dcb20bf97b:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","dns":{},"ipam":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxage":5,"logfile-maxbackups":5,"logfile-maxsize":100,"name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay"}
Feb 23 16:30:31 ip-10-0-136-68 crio[2086]: I0223 16:30:31.470177   63536 ovs.go:90] Maximum command line arguments set to: 191102
Feb 23 16:30:31 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-e1f3572f5fa2fdda00131095498877875e8b5626087ed1ff07f1e05032e3cd8f-merged.mount: Succeeded.
Feb 23 16:30:31 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-e1f3572f5fa2fdda00131095498877875e8b5626087ed1ff07f1e05032e3cd8f-merged.mount: Consumed 0 CPU time
Feb 23 16:30:31 ip-10-0-136-68 systemd[1]: run-utsns-9c3b2d60\x2daa85\x2d41e3\x2d819c\x2da009d3296b0e.mount: Succeeded.
Feb 23 16:30:31 ip-10-0-136-68 systemd[1]: run-utsns-9c3b2d60\x2daa85\x2d41e3\x2d819c\x2da009d3296b0e.mount: Consumed 0 CPU time
Feb 23 16:30:31 ip-10-0-136-68 systemd[1]: run-ipcns-9c3b2d60\x2daa85\x2d41e3\x2d819c\x2da009d3296b0e.mount: Succeeded.
Feb 23 16:30:31 ip-10-0-136-68 systemd[1]: run-ipcns-9c3b2d60\x2daa85\x2d41e3\x2d819c\x2da009d3296b0e.mount: Consumed 0 CPU time
Feb 23 16:30:31 ip-10-0-136-68 systemd[1]: run-netns-9c3b2d60\x2daa85\x2d41e3\x2d819c\x2da009d3296b0e.mount: Succeeded.
Feb 23 16:30:31 ip-10-0-136-68 systemd[1]: run-netns-9c3b2d60\x2daa85\x2d41e3\x2d819c\x2da009d3296b0e.mount: Consumed 0 CPU time
Feb 23 16:30:31 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:31.723400106Z" level=info msg="Stopped pod sandbox: 95bfaaf9735ddc38d1e2fa578051b1d31d557143706d6710a65b8a99d259ac8c" id=02eec4c3-a04a-4e72-9aa3-dec9546c08ce name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 16:30:31 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:31.826457    2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cb82d201-6c85-46b2-9687-01dcb20bf97b-registry-certificates\") pod \"cb82d201-6c85-46b2-9687-01dcb20bf97b\" (UID: \"cb82d201-6c85-46b2-9687-01dcb20bf97b\") "
Feb 23 16:30:31 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:31.826492    2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cb82d201-6c85-46b2-9687-01dcb20bf97b-bound-sa-token\") pod \"cb82d201-6c85-46b2-9687-01dcb20bf97b\" (UID: \"cb82d201-6c85-46b2-9687-01dcb20bf97b\") "
Feb 23 16:30:31 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:31.826515    2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cb82d201-6c85-46b2-9687-01dcb20bf97b-installation-pull-secrets\") pod \"cb82d201-6c85-46b2-9687-01dcb20bf97b\" (UID: \"cb82d201-6c85-46b2-9687-01dcb20bf97b\") "
Feb 23 16:30:31 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:31.826544    2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/cb82d201-6c85-46b2-9687-01dcb20bf97b-image-registry-private-configuration\") pod \"cb82d201-6c85-46b2-9687-01dcb20bf97b\" (UID: \"cb82d201-6c85-46b2-9687-01dcb20bf97b\") "
Feb 23 16:30:31 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:31.826568    2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cb82d201-6c85-46b2-9687-01dcb20bf97b-trusted-ca\") pod \"cb82d201-6c85-46b2-9687-01dcb20bf97b\" (UID: \"cb82d201-6c85-46b2-9687-01dcb20bf97b\") "
Feb 23 16:30:31 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:31.826594    2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cb82d201-6c85-46b2-9687-01dcb20bf97b-registry-tls\") pod \"cb82d201-6c85-46b2-9687-01dcb20bf97b\" (UID: \"cb82d201-6c85-46b2-9687-01dcb20bf97b\") "
Feb 23 16:30:31 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:31.826617    2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x46ll\" (UniqueName: \"kubernetes.io/projected/cb82d201-6c85-46b2-9687-01dcb20bf97b-kube-api-access-x46ll\") pod \"cb82d201-6c85-46b2-9687-01dcb20bf97b\" (UID: \"cb82d201-6c85-46b2-9687-01dcb20bf97b\") "
Feb 23 16:30:31 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:31.826643    2125 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cb82d201-6c85-46b2-9687-01dcb20bf97b-ca-trust-extracted\") pod \"cb82d201-6c85-46b2-9687-01dcb20bf97b\" (UID: \"cb82d201-6c85-46b2-9687-01dcb20bf97b\") "
Feb 23 16:30:31 ip-10-0-136-68 kubenswrapper[2125]: W0223 16:30:31.826694    2125 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/cb82d201-6c85-46b2-9687-01dcb20bf97b/volumes/kubernetes.io~configmap/registry-certificates: clearQuota called, but quotas disabled
Feb 23 16:30:31 ip-10-0-136-68 kubenswrapper[2125]: W0223 16:30:31.826771    2125 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/cb82d201-6c85-46b2-9687-01dcb20bf97b/volumes/kubernetes.io~empty-dir/ca-trust-extracted: clearQuota called, but quotas disabled
Feb 23 16:30:31 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:31.826879    2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb82d201-6c85-46b2-9687-01dcb20bf97b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "cb82d201-6c85-46b2-9687-01dcb20bf97b" (UID: "cb82d201-6c85-46b2-9687-01dcb20bf97b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 16:30:31 ip-10-0-136-68 kubenswrapper[2125]: W0223 16:30:31.826947    2125 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/cb82d201-6c85-46b2-9687-01dcb20bf97b/volumes/kubernetes.io~configmap/trusted-ca: clearQuota called, but quotas disabled
Feb 23 16:30:31 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:31.827151    2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb82d201-6c85-46b2-9687-01dcb20bf97b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "cb82d201-6c85-46b2-9687-01dcb20bf97b" (UID: "cb82d201-6c85-46b2-9687-01dcb20bf97b"). InnerVolumeSpecName "trusted-ca".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 16:30:31 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:31.827230 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb82d201-6c85-46b2-9687-01dcb20bf97b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "cb82d201-6c85-46b2-9687-01dcb20bf97b" (UID: "cb82d201-6c85-46b2-9687-01dcb20bf97b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 16:30:31 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:31.832654 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb82d201-6c85-46b2-9687-01dcb20bf97b-kube-api-access-x46ll" (OuterVolumeSpecName: "kube-api-access-x46ll") pod "cb82d201-6c85-46b2-9687-01dcb20bf97b" (UID: "cb82d201-6c85-46b2-9687-01dcb20bf97b"). InnerVolumeSpecName "kube-api-access-x46ll". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 16:30:31 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:31.832669 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb82d201-6c85-46b2-9687-01dcb20bf97b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "cb82d201-6c85-46b2-9687-01dcb20bf97b" (UID: "cb82d201-6c85-46b2-9687-01dcb20bf97b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 16:30:31 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:31.832728 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb82d201-6c85-46b2-9687-01dcb20bf97b-image-registry-private-configuration" (OuterVolumeSpecName: "image-registry-private-configuration") pod "cb82d201-6c85-46b2-9687-01dcb20bf97b" (UID: "cb82d201-6c85-46b2-9687-01dcb20bf97b"). InnerVolumeSpecName "image-registry-private-configuration". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 16:30:31 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:31.833605 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb82d201-6c85-46b2-9687-01dcb20bf97b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "cb82d201-6c85-46b2-9687-01dcb20bf97b" (UID: "cb82d201-6c85-46b2-9687-01dcb20bf97b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 16:30:31 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:31.838553 2125 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb82d201-6c85-46b2-9687-01dcb20bf97b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "cb82d201-6c85-46b2-9687-01dcb20bf97b" (UID: "cb82d201-6c85-46b2-9687-01dcb20bf97b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 16:30:31 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:31.927046 2125 reconciler.go:399] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cb82d201-6c85-46b2-9687-01dcb20bf97b-bound-sa-token\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 16:30:31 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:31.927071 2125 reconciler.go:399] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cb82d201-6c85-46b2-9687-01dcb20bf97b-installation-pull-secrets\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 16:30:31 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:31.927082 2125 reconciler.go:399] "Volume detached for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/cb82d201-6c85-46b2-9687-01dcb20bf97b-image-registry-private-configuration\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 16:30:31 
ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:31.927090 2125 reconciler.go:399] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cb82d201-6c85-46b2-9687-01dcb20bf97b-trusted-ca\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 16:30:31 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:31.927099 2125 reconciler.go:399] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cb82d201-6c85-46b2-9687-01dcb20bf97b-registry-tls\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 16:30:31 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:31.927109 2125 reconciler.go:399] "Volume detached for volume \"kube-api-access-x46ll\" (UniqueName: \"kubernetes.io/projected/cb82d201-6c85-46b2-9687-01dcb20bf97b-kube-api-access-x46ll\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 16:30:31 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:31.927120 2125 reconciler.go:399] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cb82d201-6c85-46b2-9687-01dcb20bf97b-ca-trust-extracted\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 16:30:31 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:31.927131 2125 reconciler.go:399] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cb82d201-6c85-46b2-9687-01dcb20bf97b-registry-certificates\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 16:30:32 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-95bfaaf9735ddc38d1e2fa578051b1d31d557143706d6710a65b8a99d259ac8c-userdata-shm.mount: Succeeded. 
Feb 23 16:30:32 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-95bfaaf9735ddc38d1e2fa578051b1d31d557143706d6710a65b8a99d259ac8c-userdata-shm.mount: Consumed 0 CPU time Feb 23 16:30:32 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-cb82d201\x2d6c85\x2d46b2\x2d9687\x2d01dcb20bf97b-volumes-kubernetes.io\x7eprojected-bound\x2dsa\x2dtoken.mount: Succeeded. Feb 23 16:30:32 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-cb82d201\x2d6c85\x2d46b2\x2d9687\x2d01dcb20bf97b-volumes-kubernetes.io\x7eprojected-bound\x2dsa\x2dtoken.mount: Consumed 0 CPU time Feb 23 16:30:32 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-cb82d201\x2d6c85\x2d46b2\x2d9687\x2d01dcb20bf97b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx46ll.mount: Succeeded. Feb 23 16:30:32 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-cb82d201\x2d6c85\x2d46b2\x2d9687\x2d01dcb20bf97b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx46ll.mount: Consumed 0 CPU time Feb 23 16:30:32 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-cb82d201\x2d6c85\x2d46b2\x2d9687\x2d01dcb20bf97b-volumes-kubernetes.io\x7esecret-image\x2dregistry\x2dprivate\x2dconfiguration.mount: Succeeded. Feb 23 16:30:32 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-cb82d201\x2d6c85\x2d46b2\x2d9687\x2d01dcb20bf97b-volumes-kubernetes.io\x7esecret-image\x2dregistry\x2dprivate\x2dconfiguration.mount: Consumed 0 CPU time Feb 23 16:30:32 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-cb82d201\x2d6c85\x2d46b2\x2d9687\x2d01dcb20bf97b-volumes-kubernetes.io\x7esecret-installation\x2dpull\x2dsecrets.mount: Succeeded. Feb 23 16:30:32 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-cb82d201\x2d6c85\x2d46b2\x2d9687\x2d01dcb20bf97b-volumes-kubernetes.io\x7esecret-installation\x2dpull\x2dsecrets.mount: Consumed 0 CPU time Feb 23 16:30:32 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-cb82d201\x2d6c85\x2d46b2\x2d9687\x2d01dcb20bf97b-volumes-kubernetes.io\x7eprojected-registry\x2dtls.mount: Succeeded. 
Feb 23 16:30:32 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-cb82d201\x2d6c85\x2d46b2\x2d9687\x2d01dcb20bf97b-volumes-kubernetes.io\x7eprojected-registry\x2dtls.mount: Consumed 0 CPU time Feb 23 16:30:32 ip-10-0-136-68 systemd[1]: Removed slice libcontainer container kubepods-burstable-podcb82d201_6c85_46b2_9687_01dcb20bf97b.slice. Feb 23 16:30:32 ip-10-0-136-68 systemd[1]: kubepods-burstable-podcb82d201_6c85_46b2_9687_01dcb20bf97b.slice: Consumed 3.566s CPU time Feb 23 16:30:32 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:32.666818 2125 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5f79c9c848-f9wqq" event=&{ID:cb82d201-6c85-46b2-9687-01dcb20bf97b Type:ContainerDied Data:95bfaaf9735ddc38d1e2fa578051b1d31d557143706d6710a65b8a99d259ac8c} Feb 23 16:30:32 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:32.666855 2125 scope.go:115] "RemoveContainer" containerID="acc9bfa13f55b1fc1f8a861016673fb26097df2dec005d1acfd1ff58bbb015ea" Feb 23 16:30:32 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:32.668023577Z" level=info msg="Removing container: acc9bfa13f55b1fc1f8a861016673fb26097df2dec005d1acfd1ff58bbb015ea" id=f719c855-ac4e-4991-8be4-c9de39ed5fd3 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 16:30:32 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:32.689246 2125 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-image-registry/image-registry-5f79c9c848-f9wqq] Feb 23 16:30:32 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:32.693590 2125 kubelet.go:2129] "SyncLoop REMOVE" source="api" pods=[openshift-image-registry/image-registry-5f79c9c848-f9wqq] Feb 23 16:30:32 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:32.704041492Z" level=info msg="Removed container acc9bfa13f55b1fc1f8a861016673fb26097df2dec005d1acfd1ff58bbb015ea: openshift-image-registry/image-registry-5f79c9c848-f9wqq/registry" id=f719c855-ac4e-4991-8be4-c9de39ed5fd3 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 16:30:34 
ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:34.397009 2125 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=cb82d201-6c85-46b2-9687-01dcb20bf97b path="/var/lib/kubelet/pods/cb82d201-6c85-46b2-9687-01dcb20bf97b/volumes" Feb 23 16:30:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00327|connmgr|INFO|br-ex<->unix#1249: 2 flow_mods in the last 0 s (2 adds) Feb 23 16:30:41 ip-10-0-136-68 root[63687]: machine-config-daemon[2269]: drain complete Feb 23 16:30:41 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:30:41.782641 2125 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="can't remove non-existent inotify watch for: /etc/kubernetes/kubelet-ca.crt" Feb 23 16:30:41 ip-10-0-136-68 systemd[1]: Reloading. Feb 23 16:30:41 ip-10-0-136-68 coreos-platform-chrony: /run/coreos-platform-chrony.conf already exists; skipping Feb 23 16:30:41 ip-10-0-136-68 systemd[1]: /usr/lib/systemd/system/bootupd.service:22: Unknown lvalue 'ProtectHostname' in section 'Service' Feb 23 16:30:42 ip-10-0-136-68 systemd[1]: Reloading. Feb 23 16:30:42 ip-10-0-136-68 coreos-platform-chrony: /run/coreos-platform-chrony.conf already exists; skipping Feb 23 16:30:42 ip-10-0-136-68 systemd[1]: /usr/lib/systemd/system/bootupd.service:22: Unknown lvalue 'ProtectHostname' in section 'Service' Feb 23 16:30:42 ip-10-0-136-68 systemd[1]: Reloading. Feb 23 16:30:42 ip-10-0-136-68 coreos-platform-chrony: /run/coreos-platform-chrony.conf already exists; skipping Feb 23 16:30:42 ip-10-0-136-68 systemd[1]: /usr/lib/systemd/system/bootupd.service:22: Unknown lvalue 'ProtectHostname' in section 'Service' Feb 23 16:30:42 ip-10-0-136-68 systemd[1]: Reloading. 
Feb 23 16:30:42 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:42.446837397Z" level=info msg="Stopping pod sandbox: 8a6086c30905e9778328b4943c655c0e9e8c2c54372aff32cd2923ed14af9ed1" id=2b6e0065-cdc3-4bee-b9e1-66b8c99e33ea name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 16:30:42 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:42.446879300Z" level=info msg="Stopped pod sandbox (already stopped): 8a6086c30905e9778328b4943c655c0e9e8c2c54372aff32cd2923ed14af9ed1" id=2b6e0065-cdc3-4bee-b9e1-66b8c99e33ea name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 16:30:42 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:42.447393982Z" level=info msg="Removing pod sandbox: 8a6086c30905e9778328b4943c655c0e9e8c2c54372aff32cd2923ed14af9ed1" id=9e1f191d-a2d5-434b-9b30-dfd2179eb50b name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 16:30:42 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:42.459450916Z" level=info msg="Removed pod sandbox: 8a6086c30905e9778328b4943c655c0e9e8c2c54372aff32cd2923ed14af9ed1" id=9e1f191d-a2d5-434b-9b30-dfd2179eb50b name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 16:30:42 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:42.462022788Z" level=info msg="Stopping pod sandbox: 371d339c2a21dacabc72f32475cd502a6c85f1c6b15537476830da5f7871b8c6" id=72ab7ea2-7d94-4bb9-9775-319a0b581245 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 16:30:42 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:42.462050644Z" level=info msg="Stopped pod sandbox (already stopped): 371d339c2a21dacabc72f32475cd502a6c85f1c6b15537476830da5f7871b8c6" id=72ab7ea2-7d94-4bb9-9775-319a0b581245 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 16:30:42 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:42.462312493Z" level=info msg="Removing pod sandbox: 371d339c2a21dacabc72f32475cd502a6c85f1c6b15537476830da5f7871b8c6" id=a930852f-9d3a-47ed-b70a-cef9ccdb60d4 name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 16:30:42 ip-10-0-136-68 coreos-platform-chrony: 
/run/coreos-platform-chrony.conf already exists; skipping Feb 23 16:30:42 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:42.473913649Z" level=info msg="Removed pod sandbox: 371d339c2a21dacabc72f32475cd502a6c85f1c6b15537476830da5f7871b8c6" id=a930852f-9d3a-47ed-b70a-cef9ccdb60d4 name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 16:30:42 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:42.474133322Z" level=info msg="Stopping pod sandbox: 95bfaaf9735ddc38d1e2fa578051b1d31d557143706d6710a65b8a99d259ac8c" id=864c035d-9227-4626-8259-0132c61ca4b2 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 16:30:42 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:42.474167584Z" level=info msg="Stopped pod sandbox (already stopped): 95bfaaf9735ddc38d1e2fa578051b1d31d557143706d6710a65b8a99d259ac8c" id=864c035d-9227-4626-8259-0132c61ca4b2 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 16:30:42 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:42.474361680Z" level=info msg="Removing pod sandbox: 95bfaaf9735ddc38d1e2fa578051b1d31d557143706d6710a65b8a99d259ac8c" id=fdfc23b7-df14-4ba2-8643-297026b31026 name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 16:30:42 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:42.480853485Z" level=info msg="Removed pod sandbox: 95bfaaf9735ddc38d1e2fa578051b1d31d557143706d6710a65b8a99d259ac8c" id=fdfc23b7-df14-4ba2-8643-297026b31026 name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 16:30:42 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:42.481097940Z" level=info msg="Stopping pod sandbox: 9cc61114cb7d29152edce4075f158f65bd86c98e5dd8a06e31ff2b390a41f2c8" id=cede555a-23d1-437a-8698-9d2d56235c7a name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 16:30:42 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:42.481130557Z" level=info msg="Stopped pod sandbox (already stopped): 9cc61114cb7d29152edce4075f158f65bd86c98e5dd8a06e31ff2b390a41f2c8" id=cede555a-23d1-437a-8698-9d2d56235c7a name=/runtime.v1.RuntimeService/StopPodSandbox 
Feb 23 16:30:42 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:42.481359188Z" level=info msg="Removing pod sandbox: 9cc61114cb7d29152edce4075f158f65bd86c98e5dd8a06e31ff2b390a41f2c8" id=35cd79bd-4e45-486a-bc2c-7bb5123c7810 name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 16:30:42 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:42.488427089Z" level=info msg="Removed pod sandbox: 9cc61114cb7d29152edce4075f158f65bd86c98e5dd8a06e31ff2b390a41f2c8" id=35cd79bd-4e45-486a-bc2c-7bb5123c7810 name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 16:30:42 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:42.488671742Z" level=info msg="Stopping pod sandbox: 7d9bb22d3d6b32a0be618472ab15ad75f3da5566db899185fe66a17d6fe48fea" id=5a073428-348b-4040-a041-cf5f969403bc name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 16:30:42 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:42.488701189Z" level=info msg="Stopped pod sandbox (already stopped): 7d9bb22d3d6b32a0be618472ab15ad75f3da5566db899185fe66a17d6fe48fea" id=5a073428-348b-4040-a041-cf5f969403bc name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 16:30:42 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:42.488894643Z" level=info msg="Removing pod sandbox: 7d9bb22d3d6b32a0be618472ab15ad75f3da5566db899185fe66a17d6fe48fea" id=1e9ac872-758c-4064-9232-1473b2aae8db name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 16:30:42 ip-10-0-136-68 systemd[1]: /usr/lib/systemd/system/bootupd.service:22: Unknown lvalue 'ProtectHostname' in section 'Service' Feb 23 16:30:42 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:30:42.495457796Z" level=info msg="Removed pod sandbox: 7d9bb22d3d6b32a0be618472ab15ad75f3da5566db899185fe66a17d6fe48fea" id=1e9ac872-758c-4064-9232-1473b2aae8db name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 16:30:42 ip-10-0-136-68 systemd[1]: Reloading. 
Feb 23 16:30:42 ip-10-0-136-68 coreos-platform-chrony: /run/coreos-platform-chrony.conf already exists; skipping Feb 23 16:30:42 ip-10-0-136-68 systemd[1]: /usr/lib/systemd/system/bootupd.service:22: Unknown lvalue 'ProtectHostname' in section 'Service' Feb 23 16:30:54 ip-10-0-136-68 root[64112]: machine-config-daemon[2269]: Initiating switch from kernel default to realtime Feb 23 16:30:54 ip-10-0-136-68 root[64113]: machine-config-daemon[2269]: Switching to kernelType=realtime, invoking rpm-ostree ["override" "remove" "kernel" "kernel-core" "kernel-modules" "kernel-modules-extra" "--install" "kernel-rt-core" "--install" "kernel-rt-modules" "--install" "kernel-rt-modules-extra" "--install" "kernel-rt-kvm"] Feb 23 16:30:54 ip-10-0-136-68 rpm-ostree[62359]: client(id:machine-config-operator dbus:1.348 unit:crio-69f9cd415e4e9bca49ba6ec5503ea9fe66e652868e3a2bde2c26833f37e1b7c3.scope uid:0) added; new total=1 Feb 23 16:30:54 ip-10-0-136-68 rpm-ostree[62359]: Locked sysroot Feb 23 16:30:54 ip-10-0-136-68 rpm-ostree[62359]: Initiated txn UpdateDeployment for client(id:machine-config-operator dbus:1.348 unit:crio-69f9cd415e4e9bca49ba6ec5503ea9fe66e652868e3a2bde2c26833f37e1b7c3.scope uid:0): /org/projectatomic/rpmostree1/rhcos Feb 23 16:30:54 ip-10-0-136-68 kernel: EXT4-fs (nvme0n1p3): re-mounted. 
Opts: Feb 23 16:30:54 ip-10-0-136-68 rpm-ostree[62359]: Process [pid: 64114 uid: 0 unit: crio-69f9cd415e4e9bca49ba6ec5503ea9fe66e652868e3a2bde2c26833f37e1b7c3.scope] connected to transaction progress Feb 23 16:30:54 ip-10-0-136-68 rpm-ostree[62359]: Librepo version: 1.14.2 with CURL_GLOBAL_ACK_EINTR support (libcurl/7.61.1 OpenSSL/1.1.1k zlib/1.2.11 brotli/1.0.6 libidn2/2.2.0 libpsl/0.20.2 (+libidn2/2.2.0) libssh/0.9.6/openssl/zlib nghttp2/1.33.0) Feb 23 16:30:54 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00328|connmgr|INFO|br-ex<->unix#1258: 2 flow_mods in the last 0 s (2 adds) Feb 23 16:30:55 ip-10-0-136-68 rpm-ostree[62359]: Preparing pkg txn; enabled repos: ['coreos-extensions'] solvables: 30 Feb 23 16:30:57 ip-10-0-136-68 rpm-ostree[62359]: Imported 4 pkgs Feb 23 16:30:57 ip-10-0-136-68 rpm-ostree[62359]: Executed %post for kernel-rt-core in 90 ms Feb 23 16:31:01 ip-10-0-136-68 rpm-ostree[62359]: Executed %post for kernel-rt-modules in 3754 ms Feb 23 16:31:05 ip-10-0-136-68 rpm-ostree[62359]: Executed %post for kernel-rt-modules-extra in 3760 ms Feb 23 16:31:09 ip-10-0-136-68 rpm-ostree[62359]: Executed %post for kernel-rt-kvm in 3773 ms Feb 23 16:31:09 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00329|connmgr|INFO|br-ex<->unix#1262: 2 flow_mods in the last 0 s (2 adds) Feb 23 16:31:10 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00330|connmgr|INFO|br-int<->unix#2: 106 flow_mods in the 43 s starting 59 s ago (48 adds, 58 deletes) Feb 23 16:31:13 ip-10-0-136-68 rpm-ostree[62359]: Executed %posttrans for kernel-rt-core in 3799 ms Feb 23 16:31:13 ip-10-0-136-68 rpm-ostree[62359]: No files matched %transfiletriggerin(lib) for glibc-common Feb 23 16:31:13 ip-10-0-136-68 rpm-ostree[62359]: No files matched %transfiletriggerin(lib64) for glibc-common Feb 23 16:31:13 ip-10-0-136-68 rpm-ostree[62359]: Executed %transfiletriggerin(glibc-common) for lib, lib64, usr/lib, usr/lib64 in 190 ms; 16231 matched files Feb 23 16:31:13 ip-10-0-136-68 rpm-ostree[62359]: No files matched 
%transfiletriggerin(usr/lib64/gio/modules) for glib2 Feb 23 16:31:13 ip-10-0-136-68 rpm-ostree[62359]: No files matched %transfiletriggerin(usr/share/glib-2.0/schemas) for glib2 Feb 23 16:31:13 ip-10-0-136-68 rpm-ostree[62359]: Executed %transfiletriggerin(systemd-udev) for usr/lib/udev/hwdb.d in 73 ms; 23 matched files Feb 23 16:31:13 ip-10-0-136-68 rpm-ostree[62359]: Executed %transfiletriggerin(systemd-udev) for usr/lib/udev/rules.d in 80 ms; 76 matched files Feb 23 16:31:13 ip-10-0-136-68 rpm-ostree[62359]: Executed %transfiletriggerin(info) for usr/share/info in 100 ms; 2 matched files Feb 23 16:31:13 ip-10-0-136-68 rpm-ostree[62359]: Executed %transfiletriggerin(shared-mime-info) for usr/share/mime in 90 ms; 785 matched files Feb 23 16:31:13 ip-10-0-136-68 rpm-ostree[62359]: sanitycheck(/usr/bin/true) successful Feb 23 16:31:14 ip-10-0-136-68 rpm-ostree[62359]: Regenerating rpmdb for target Feb 23 16:31:21 ip-10-0-136-68 rpm-ostree[64762]: dracut: No '/dev/log' or 'logger' included for syslog logging Feb 23 16:31:21 ip-10-0-136-68 rpm-ostree[64762]: dracut: Executing: /usr/bin/dracut --reproducible -v --add ostree --tmpdir=/tmp/dracut -f /tmp/initramfs.img --no-hostonly --kver 4.18.0-372.43.1.rt7.200.el8_6.x86_64 Feb 23 16:31:21 ip-10-0-136-68 rpm-ostree[64762]: dracut: dracut module 'systemd-networkd' will not be installed, because it's in the list to be omitted! Feb 23 16:31:21 ip-10-0-136-68 rpm-ostree[64762]: dracut: dracut module 'busybox' will not be installed, because it's in the list to be omitted! Feb 23 16:31:21 ip-10-0-136-68 rpm-ostree[64762]: dracut: dracut module 'rngd' will not be installed, because it's in the list to be omitted! Feb 23 16:31:22 ip-10-0-136-68 rpm-ostree[64762]: dracut: dracut module 'network-legacy' will not be installed, because it's in the list to be omitted! Feb 23 16:31:22 ip-10-0-136-68 rpm-ostree[64762]: dracut: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found! 
Feb 23 16:31:22 ip-10-0-136-68 rpm-ostree[64762]: dracut: dracut module 'dmraid' will not be installed, because it's in the list to be omitted! Feb 23 16:31:22 ip-10-0-136-68 rpm-ostree[64762]: dracut: dracut module 'lvm' will not be installed, because it's in the list to be omitted! Feb 23 16:31:22 ip-10-0-136-68 rpm-ostree[64762]: dracut: dracut module 'fcoe' will not be installed, because it's in the list to be omitted! Feb 23 16:31:22 ip-10-0-136-68 rpm-ostree[64762]: dracut: dracut module 'fcoe-uefi' will not be installed, because it's in the list to be omitted! Feb 23 16:31:22 ip-10-0-136-68 rpm-ostree[64762]: dracut: dracut module 'iscsi' will not be installed, because it's in the list to be omitted! Feb 23 16:31:22 ip-10-0-136-68 rpm-ostree[64762]: dracut: dracut module 'nbd' will not be installed, because it's in the list to be omitted! Feb 23 16:31:22 ip-10-0-136-68 rpm-ostree[64762]: dracut: dracut module 'nfs' will not be installed, because it's in the list to be omitted! Feb 23 16:31:22 ip-10-0-136-68 rpm-ostree[64762]: dracut: dracut module 'biosdevname' will not be installed, because it's in the list to be omitted! Feb 23 16:31:22 ip-10-0-136-68 rpm-ostree[64762]: dracut: dracut module 'memstrack' will not be installed, because it's in the list to be omitted! 
Feb 23 16:31:22 ip-10-0-136-68 rpm-ostree[66274]: mknod: /tmp/dracut/dracut.E24gpV/initramfs/dev/null: Operation not permitted Feb 23 16:31:22 ip-10-0-136-68 rpm-ostree[66275]: mknod: /tmp/dracut/dracut.E24gpV/initramfs/dev/kmsg: Operation not permitted Feb 23 16:31:22 ip-10-0-136-68 rpm-ostree[66276]: mknod: /tmp/dracut/dracut.E24gpV/initramfs/dev/console: Operation not permitted Feb 23 16:31:22 ip-10-0-136-68 rpm-ostree[66277]: mknod: /tmp/dracut/dracut.E24gpV/initramfs/dev/random: Operation not permitted Feb 23 16:31:22 ip-10-0-136-68 rpm-ostree[66278]: mknod: /tmp/dracut/dracut.E24gpV/initramfs/dev/urandom: Operation not permitted Feb 23 16:31:22 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: bash *** Feb 23 16:31:22 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: systemd *** Feb 23 16:31:22 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: fips *** Feb 23 16:31:22 ip-10-0-136-68 rpm-ostree[66496]: mknod: /tmp/dracut/dracut.E24gpV/initramfs/dev/random: Operation not permitted Feb 23 16:31:22 ip-10-0-136-68 rpm-ostree[64762]: dracut: Cannot create /dev/random Feb 23 16:31:22 ip-10-0-136-68 rpm-ostree[64762]: dracut: To create an initramfs with fips support, dracut has to run as root Feb 23 16:31:22 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: scsi-rules *** Feb 23 16:31:22 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: systemd-initrd *** Feb 23 16:31:22 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: modsign *** Feb 23 16:31:22 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: rdma *** Feb 23 16:31:23 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: coreos-sysctl *** Feb 23 16:31:23 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: i18n *** Feb 23 16:31:24 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: ignition-godebug *** Feb 23 16:31:24 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: 
azure-udev-rules *** Feb 23 16:31:24 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: rhcos-azure-udev *** Feb 23 16:31:24 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: rhcos-need-network-manager *** Feb 23 16:31:24 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: afterburn *** Feb 23 16:31:24 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: ignition *** Feb 23 16:31:24 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: rhcos-afterburn-checkin *** Feb 23 16:31:24 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: coreos-ignition *** Feb 23 16:31:24 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: coreos-live *** Feb 23 16:31:24 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: coreos-multipath *** Feb 23 16:31:24 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: coreos-network *** Feb 23 16:31:24 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00331|connmgr|INFO|br-ex<->unix#1271: 2 flow_mods in the last 0 s (2 adds) Feb 23 16:31:24 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: network-manager *** Feb 23 16:31:24 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: coreos-multipath-fix *** Feb 23 16:31:24 ip-10-0-136-68 rpm-ostree[67211]: mkdir: cannot create directory '/usr/lib/systemd/system/multipathd.service.d': Read-only file system Feb 23 16:31:24 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: ignition-conf *** Feb 23 16:31:24 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: ignition-ostree *** Feb 23 16:31:24 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: network *** Feb 23 16:31:24 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: rhcos-fde *** Feb 23 16:31:25 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: rhcos-fips *** Feb 23 16:31:25 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: rhcos-check-luks-syntax *** 
Feb 23 16:31:25 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: ifcfg ***
Feb 23 16:31:25 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: url-lib ***
Feb 23 16:31:25 ip-10-0-136-68 rpm-ostree[64762]: /usr/lib/dracut/modules.d/45url-lib/module-setup.sh: line 33: warning: command substitution: ignored null byte in input
Feb 23 16:31:25 ip-10-0-136-68 rpm-ostree[64762]: /usr/lib/dracut/modules.d/45url-lib/module-setup.sh: line 33: warning: command substitution: ignored null byte in input
Feb 23 16:31:25 ip-10-0-136-68 rpm-ostree[64762]: /usr/lib/dracut/modules.d/45url-lib/module-setup.sh: line 33: warning: command substitution: ignored null byte in input
Feb 23 16:31:25 ip-10-0-136-68 rpm-ostree[64762]: /usr/lib/dracut/modules.d/45url-lib/module-setup.sh: line 33: warning: command substitution: ignored null byte in input
Feb 23 16:31:25 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: coreos-kernel ***
Feb 23 16:31:25 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: rdcore ***
Feb 23 16:31:25 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: rhcos-mke2fs ***
Feb 23 16:31:25 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: rhcos-tuned-bits ***
Feb 23 16:31:25 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: clevis ***
Feb 23 16:31:25 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: clevis-pin-null ***
Feb 23 16:31:25 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: clevis-pin-sss ***
Feb 23 16:31:25 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: clevis-pin-tang ***
Feb 23 16:31:25 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: clevis-pin-tpm2 ***
Feb 23 16:31:25 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: coreos-agetty-workaround ***
Feb 23 16:31:25 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: crypt ***
Feb 23 16:31:25 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: dm ***
Feb 23 16:31:25 ip-10-0-136-68 rpm-ostree[64762]: dracut: Skipping udev rule: 64-device-mapper.rules
Feb 23 16:31:25 ip-10-0-136-68 rpm-ostree[64762]: dracut: Skipping udev rule: 60-persistent-storage-dm.rules
Feb 23 16:31:25 ip-10-0-136-68 rpm-ostree[64762]: dracut: Skipping udev rule: 55-dm.rules
Feb 23 16:31:25 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: kernel-modules ***
Feb 23 16:31:28 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: kernel-modules-extra ***
Feb 23 16:31:28 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: kernel-network-modules ***
Feb 23 16:31:29 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: mdraid ***
Feb 23 16:31:29 ip-10-0-136-68 rpm-ostree[64762]: dracut: Skipping udev rule: 64-md-raid.rules
Feb 23 16:31:29 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: multipath ***
Feb 23 16:31:29 ip-10-0-136-68 rpm-ostree[64762]: dracut: Skipping udev rule: 40-multipath.rules
Feb 23 16:31:29 ip-10-0-136-68 rpm-ostree[64762]: dracut: Skipping udev rule: 56-multipath.rules
Feb 23 16:31:29 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: qemu ***
Feb 23 16:31:29 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: qemu-net ***
Feb 23 16:31:29 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: cifs ***
Feb 23 16:31:29 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: lunmask ***
Feb 23 16:31:29 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: resume ***
Feb 23 16:31:29 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: rootfs-block ***
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: terminfo ***
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: udev-rules ***
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[64762]: dracut: Skipping udev rule: 91-permissions.rules
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[64762]: dracut: Skipping udev rule: 80-drivers-modprobe.rules
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: dracut-systemd ***
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: ostree ***
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: usrmount ***
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: base ***
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: emergency-shell-setup ***
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: fs-lib ***
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: journal-conf ***
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: microcode_ctl-fw_dir_override ***
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[64762]: dracut: microcode_ctl module: mangling fw_dir
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[64762]: dracut: microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[64762]: dracut: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[69177]: intel: model '', path ' intel-ucode/*', kvers ''
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[64762]: dracut: microcode_ctl: intel: caveats check for kernel version "4.18.0-372.43.1.rt7.200.el8_6.x86_64" passed, adding "/usr/share/microcode_ctl/ucode_with_caveats/intel" to fw_dir variable
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[64762]: dracut: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[69197]: intel-06-2d-07: model 'GenuineIntel 06-2d-07', path ' intel-ucode/06-2d-07', kvers ''
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[69202]: Dependency check for required intel: calling check_caveat 'intel' '1' match_model=0
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[69202]: intel: model '', path ' intel-ucode/*', kvers ''
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[69202]: Dependency check for required intel succeeded: result=0
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[64762]: dracut: microcode_ctl: intel-06-2d-07: caveats check for kernel version "4.18.0-372.43.1.rt7.200.el8_6.x86_64" passed, adding "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07" to fw_dir variable
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[64762]: dracut: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[69222]: intel-06-4e-03: model 'GenuineIntel 06-4e-03', path ' intel-ucode/06-4e-03', kvers ''
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[69227]: Dependency check for required intel: calling check_caveat 'intel' '1' match_model=0
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[69227]: intel: model '', path ' intel-ucode/*', kvers ''
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[69227]: Dependency check for required intel succeeded: result=0
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[69222]: intel-06-4e-03: caveat is disabled in configuration
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[64762]: dracut: microcode_ctl: kernel version "4.18.0-372.43.1.rt7.200.el8_6.x86_64" failed early load check for "intel-06-4e-03", skipping
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[64762]: dracut: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[69244]: intel-06-4f-01: model 'GenuineIntel 06-4f-01', path ' intel-ucode/06-4f-01', kvers ' 4.17.0 3.10.0-894 3.10.0-862.6.1 3.10.0-693.35.1 3.10.0-514.52.1 3.10.0-327.70.1 2.6.32-754.1.1 2.6.32-573.58.1 2.6.32-504.71.1 2.6.32-431.90.1 2.6.32-358.90.1'
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[69249]: Dependency check for required intel: calling check_caveat 'intel' '1' match_model=0
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[69249]: intel: model '', path ' intel-ucode/*', kvers ''
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[69249]: Dependency check for required intel succeeded: result=0
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[69244]: intel-06-4f-01: caveat is disabled in configuration
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[64762]: dracut: microcode_ctl: kernel version "4.18.0-372.43.1.rt7.200.el8_6.x86_64" failed early load check for "intel-06-4f-01", skipping
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[64762]: dracut: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[69266]: intel-06-55-04: model 'GenuineIntel 06-55-04', path ' intel-ucode/06-55-04', kvers ''
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[69271]: Dependency check for required intel: calling check_caveat 'intel' '1' match_model=0
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[69271]: intel: model '', path ' intel-ucode/*', kvers ''
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[69271]: Dependency check for required intel succeeded: result=0
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[64762]: dracut: microcode_ctl: intel-06-55-04: caveats check for kernel version "4.18.0-372.43.1.rt7.200.el8_6.x86_64" passed, adding "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04" to fw_dir variable
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[64762]: dracut: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[69291]: intel-06-5e-03: model 'GenuineIntel 06-5e-03', path ' intel-ucode/06-5e-03', kvers ''
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[69296]: Dependency check for required intel: calling check_caveat 'intel' '1' match_model=0
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[69296]: intel: model '', path ' intel-ucode/*', kvers ''
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[69296]: Dependency check for required intel succeeded: result=0
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[64762]: dracut: microcode_ctl: intel-06-5e-03: caveats check for kernel version "4.18.0-372.43.1.rt7.200.el8_6.x86_64" passed, adding "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03" to fw_dir variable
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[64762]: dracut: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[69316]: intel-06-8c-01: model 'GenuineIntel 06-8c-01', path ' intel-ucode/06-8c-01', kvers ''
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[69321]: Dependency check for required intel: calling check_caveat 'intel' '1' match_model=0
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[69321]: intel: model '', path ' intel-ucode/*', kvers ''
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[69321]: Dependency check for required intel succeeded: result=0
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[64762]: dracut: microcode_ctl: intel-06-8c-01: caveats check for kernel version "4.18.0-372.43.1.rt7.200.el8_6.x86_64" passed, adding "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01" to fw_dir variable
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[64762]: dracut: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[69341]: intel-06-8e-9e-0x-0xca: model '', path ' intel-ucode/*', kvers ''
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[69346]: Dependency check for required intel: calling check_caveat 'intel' '1' match_model=0
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[69346]: intel: model '', path ' intel-ucode/*', kvers ''
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[69346]: Dependency check for required intel succeeded: result=0
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[69341]: intel-06-8e-9e-0x-0xca: caveat is disabled in configuration
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[64762]: dracut: microcode_ctl: kernel version "4.18.0-372.43.1.rt7.200.el8_6.x86_64" failed early load check for "intel-06-8e-9e-0x-0xca", skipping
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[64762]: dracut: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[69363]: intel-06-8e-9e-0x-dell: model '', path ' intel-ucode/*', kvers ''
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[69368]: Dependency check for required intel: calling check_caveat 'intel' '1' match_model=0
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[69368]: intel: model '', path ' intel-ucode/*', kvers ''
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[69368]: Dependency check for required intel succeeded: result=0
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[64762]: dracut: microcode_ctl: intel-06-8e-9e-0x-dell: caveats check for kernel version "4.18.0-372.43.1.rt7.200.el8_6.x86_64" passed, adding "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell" to fw_dir variable
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[64762]: dracut: microcode_ctl: final fw_dir: "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell /usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01 /usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03 /usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04 /usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07 /usr/share/microcode_ctl/ucode_with_caveats/intel /lib/firmware/updates /lib/firmware"
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including module: shutdown ***
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Including modules done ***
Feb 23 16:31:30 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Installing kernel module dependencies ***
Feb 23 16:31:31 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Installing kernel module dependencies done ***
Feb 23 16:31:31 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Resolving executable dependencies ***
Feb 23 16:31:33 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Resolving executable dependencies done***
Feb 23 16:31:33 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Hardlinking files ***
Feb 23 16:31:33 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Hardlinking files done ***
Feb 23 16:31:33 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Generating early-microcode cpio image ***
Feb 23 16:31:33 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Constructing AuthenticAMD.bin ***
Feb 23 16:31:33 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Constructing GenuineIntel.bin ***
Feb 23 16:31:33 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Constructing GenuineIntel.bin ***
Feb 23 16:31:33 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Constructing GenuineIntel.bin ***
Feb 23 16:31:33 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Constructing GenuineIntel.bin ***
Feb 23 16:31:33 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Constructing GenuineIntel.bin ***
Feb 23 16:31:33 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Constructing GenuineIntel.bin ***
Feb 23 16:31:33 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Store current command line parameters ***
Feb 23 16:31:33 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Creating image file '/tmp/initramfs.img' ***
Feb 23 16:31:39 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00332|connmgr|INFO|br-ex<->unix#1275: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:31:53 ip-10-0-136-68 rpm-ostree[64762]: dracut: *** Creating initramfs image file '/tmp/initramfs.img' done ***
Feb 23 16:31:54 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00333|connmgr|INFO|br-ex<->unix#1284: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:31:55 ip-10-0-136-68 rpm-ostree[62359]: Wrote commit: 6e36750a3dfe11507ec8e0553290aae6c3652e7ed9983ae738ef6a78206752ea; New objects: meta:57 content:30 totaling 160.1 MB)
Feb 23 16:31:56 ip-10-0-136-68 rpm-ostree[62359]: note: Deploying commit 6e36750a3dfe11507ec8e0553290aae6c3652e7ed9983ae738ef6a78206752ea which contains content in /var/lib that will be ignored.
Feb 23 16:31:56 ip-10-0-136-68 systemd[1]: Started OSTree Finalize Staged Deployment.
Feb 23 16:31:56 ip-10-0-136-68 rpm-ostree[62359]: Created new deployment /ostree/deploy/rhcos/deploy/6e36750a3dfe11507ec8e0553290aae6c3652e7ed9983ae738ef6a78206752ea.0
Feb 23 16:31:56 ip-10-0-136-68 rpm-ostree[62359]: Pruned container image layers: 0
Feb 23 16:31:57 ip-10-0-136-68 rpm-ostree[62359]: Txn UpdateDeployment on /org/projectatomic/rpmostree1/rhcos successful
Feb 23 16:31:57 ip-10-0-136-68 rpm-ostree[62359]: Unlocked sysroot
Feb 23 16:31:57 ip-10-0-136-68 rpm-ostree[62359]: Process [pid: 64114 uid: 0 unit: crio-69f9cd415e4e9bca49ba6ec5503ea9fe66e652868e3a2bde2c26833f37e1b7c3.scope] disconnected from transaction progress
Feb 23 16:31:57 ip-10-0-136-68 rpm-ostree[62359]: client(id:machine-config-operator dbus:1.348 unit:crio-69f9cd415e4e9bca49ba6ec5503ea9fe66e652868e3a2bde2c26833f37e1b7c3.scope uid:0) vanished; remaining=0
Feb 23 16:31:57 ip-10-0-136-68 rpm-ostree[62359]: In idle state; will auto-exit in 60 seconds
Feb 23 16:31:57 ip-10-0-136-68 logger[78412]: rendered-worker-897f2f3c67d20d57713bd47f68251b36
Feb 23 16:31:57 ip-10-0-136-68 root[78413]: machine-config-daemon[2269]: Rebooting node
Feb 23 16:31:57 ip-10-0-136-68 root[78414]: machine-config-daemon[2269]: initiating reboot: Node will reboot into config rendered-worker-897f2f3c67d20d57713bd47f68251b36
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Started machine-config-daemon: Node will reboot into config rendered-worker-897f2f3c67d20d57713bd47f68251b36.
Feb 23 16:31:57 ip-10-0-136-68 systemd-logind[1051]: System is rebooting.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: machine-config-daemon-reboot.service: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopped machine-config-daemon: Node will reboot into config rendered-worker-897f2f3c67d20d57713bd47f68251b36.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: machine-config-daemon-reboot.service: Consumed 7ms CPU time
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopping libcontainer container 6c416d5c623aca70659a2b828f0d1ddff8c2215a5ed94cd2d3df6302ee724c80.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopping libcontainer container 6e7a97653d824ec5b01b77eaedebcfa9718e07078a3f1d75a5ed3092cfb1f83f.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopping Restore /run/initramfs on shutdown...
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: coreos-update-ca-trust.service: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopped Run update-ca-trust.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: coreos-update-ca-trust.service: Consumed 0 CPU time
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopping libcontainer container dbee715502369174db42d97b2506ad563352efe30d5cb9c9ccd42d69d480bda2.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopping libcontainer container 4820220c599fd37621891944ed9fe141cf1c5d64afe7fb4083404f96a1f62593.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopping libcontainer container 9cacd2022bb32e91792019f864eb5ec430ac26b4d3f2223bca19e54ed2cc3854.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopping libcontainer container 29882cfc4e7f08a2fc37c946bff27fcbd17f38017f7f183aa6268a1121e4478d.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopping libcontainer container 2c9afadfaefb33bc9960d887430757ab296b84b4f8f2e38a7d5a562d440648d2.
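The microcode_ctl dracut entries above trace a simple accumulation: each caveat data directory is checked in order, and directories whose caveat check passes are prepended to the firmware search path, yielding the "final fw_dir" line. A minimal Python sketch of that flow (illustrative helper names; the real logic is shell in the microcode_ctl dracut module, not this code):

```python
# Hedged sketch of the fw_dir accumulation visible in the microcode_ctl
# log lines above; directory names and the pass/fail set are taken from
# the log, the function itself is illustrative.

DEFAULT_FW_DIR = ["/lib/firmware/updates", "/lib/firmware"]

def build_fw_dir(caveat_names, passes_check):
    """Return the final fw_dir list, most recently added entries first."""
    fw_dir = list(DEFAULT_FW_DIR)
    for name in caveat_names:
        if passes_check(name):
            # log: 'caveats check ... passed, adding "..." to fw_dir variable'
            fw_dir.insert(0, name)
        # else log: 'failed early load check for "...", skipping'
    return fw_dir

# Processing order taken from the "processing data directory" lines above.
names = ["intel", "intel-06-2d-07", "intel-06-4e-03", "intel-06-4f-01",
         "intel-06-55-04", "intel-06-5e-03", "intel-06-8c-01",
         "intel-06-8e-9e-0x-0xca", "intel-06-8e-9e-0x-dell"]
# Caveats the log reports as "disabled in configuration" on this kernel.
disabled = {"intel-06-4e-03", "intel-06-4f-01", "intel-06-8e-9e-0x-0xca"}
final_fw_dir = build_fw_dir(names, lambda n: n not in disabled)
```

With these inputs the result reproduces the ordering of the "final fw_dir" log line: caveat directories in reverse processing order, then the default firmware paths.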
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Removed slice system-sshd\x2dkeygen.slice.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: system-sshd\x2dkeygen.slice: Consumed 0 CPU time
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopped target Graphical Interface.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopping libcontainer container 2ae211733b9d38f87c856cc5d2197f3933006872bbf3d858ad27eb7b5cc2415c.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopping Authorization Manager...
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopping libcontainer container b509d8436ded4c4e37d521123b7454d90a82f068fb8c15c885117dcd35577654.
Feb 23 16:31:57 ip-10-0-136-68 conmon[2622]: conmon b509d8436ded4c4e37d5 : container 2635 exited with status 143
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopping libcontainer container e7959419652f5b24a8403d86d3e316ec0f4bfc4e05009d430020eca12b89b14d.
Feb 23 16:31:57 ip-10-0-136-68 conmon[2622]: conmon b509d8436ded4c4e37d5 : stdio_input read failed Resource temporarily unavailable
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopping rpm-ostree System Management Daemon...
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopping libcontainer container 14c03d26750fe00987600f26e5fde9507871bca7fdd8d2fbe974f06debddb2bb.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: afterburn.service: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopped Afterburn (Metadata).
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: afterburn.service: Consumed 0 CPU time
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopped target RPC Port Mapper.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopping libcontainer container 35ef8eadfa1d96a217874ea7aa4a8d3d2dbdf7cd34bb8073fd6cfb47078dd3b3.
Feb 23 16:31:57 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:31:57.503629    2125 plugin_watcher.go:215] "Removing socket path from desired state cache" path="/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock"
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopping libcontainer container c5f7cc8140aee5330785bd4d197373455bff7c22bbd7f0840b2113148bb0f33b.
Feb 23 16:31:57 ip-10-0-136-68 conmon[2518]: conmon 2b26cbfc007385f29997 : container 2542 exited with status 143
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopping libcontainer container 7ed25ccda440b1407317cffea802770d961748c145f541fad8b075aa9818b84d.
Feb 23 16:31:57 ip-10-0-136-68 kubenswrapper[2125]: I0223 16:31:57.520672    2125 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 23 16:31:57 ip-10-0-136-68 conmon[2518]: conmon 2b26cbfc007385f29997 : stdio_input read failed Resource temporarily unavailable
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopping libcontainer container 69f9cd415e4e9bca49ba6ec5503ea9fe66e652868e3a2bde2c26833f37e1b7c3.
Feb 23 16:31:57 ip-10-0-136-68 conmon[2518]: conmon 2b26cbfc007385f29997 : stdio_input read failed Resource temporarily unavailable
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopping NFS status monitor for NFSv2/3 locking....
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopping libcontainer container 01dde28329bb6eb1962baf04ecf2191809aca8308fa489e787809cca31606ca9.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopping libcontainer container cc8ac0b3221bee2c93bcd633a65f084396c7fcebdd0cc982a5e812aebc66f372.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopped target Multi-User System.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopping Kubernetes Kubelet...
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopping irqbalance daemon...
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopped target Synchronize afterburn-sshkeys@.service template instances.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: console-login-helper-messages-gensnippet-ssh-keys.service: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopped Generate SSH keys snippet for display via console-login-helper-messages.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: console-login-helper-messages-gensnippet-ssh-keys.service: Consumed 0 CPU time
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopping libcontainer container 2b26cbfc007385f29997882b899af5fbb6cc332ce1ef7773d98623c961bfacf4.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopping libcontainer container bfc013878334f77e66b5ee2a9bd1b1d89d15e37cbfe933e68e4481f20f438f64.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopping libcontainer container 532060bd7464ded47ac6c220512d7ce160f4d1930ce612a7fec9bd1c797c50c4.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopped target Remote Encrypted Volumes.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopped target Login Prompts.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopping Serial Getty on ttyS0...
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopping Getty on tty1...
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopping libcontainer container 4a39f7e2bf3eb17fdd0d26fc1d6c124ca21c4a6c4547787c09eccfa411e099af.
Feb 23 16:31:57 ip-10-0-136-68 conmon[4629]: conmon 2c9afadfaefb33bc9960 : container 4642 exited with status 143
Feb 23 16:31:57 ip-10-0-136-68 conmon[5356]: conmon e7959419652f5b24a840 : container 5368 exited with status 143
Feb 23 16:31:57 ip-10-0-136-68 conmon[3778]: conmon 9cacd2022bb32e917920 : container 3791 exited with status 2
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopping NTP client/server...
Feb 23 16:31:57 ip-10-0-136-68 conmon[4719]: conmon 6c416d5c623aca70659a : container 4731 exited with status 2
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopped target Timers.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: logrotate.timer: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopped Daily rotation of log files.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: systemd-tmpfiles-clean.timer: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopped Daily Cleanup of Temporary Directories.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: unbound-anchor.timer: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopped daily update of the root trust anchor for DNSSEC.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopping OpenSSH server daemon...
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopping Login Service...
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopping libcontainer container 0d96ff9d4729d98a0282336bb6314c646dd47938255774920eae33e5bda1f10d.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: lvm2-lvmpolld.socket: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Closed LVM2 poll daemon socket.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: lvm2-lvmpolld.socket: Consumed 0 CPU time
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopping libcontainer container 434bc76ecbe42021fc40f1c0739badd563ca7f187e1c106f228683f5b566bff3.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: irqbalance.service: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopped irqbalance daemon.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: irqbalance.service: Consumed 53ms CPU time
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: rpc-statd.service: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopped NFS status monitor for NFSv2/3 locking..
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: rpc-statd.service: Consumed 21ms CPU time
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: polkit.service: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopped Authorization Manager.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: polkit.service: Consumed 84ms CPU time
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: dracut-shutdown.service: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopped Restore /run/initramfs on shutdown.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: dracut-shutdown.service: Consumed 2ms CPU time
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-conmon-b509d8436ded4c4e37d521123b7454d90a82f068fb8c15c885117dcd35577654.scope: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-conmon-b509d8436ded4c4e37d521123b7454d90a82f068fb8c15c885117dcd35577654.scope: Consumed 19ms CPU time
Feb 23 16:31:57 ip-10-0-136-68 sshd[1152]: Received signal 15; terminating.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-e7959419652f5b24a8403d86d3e316ec0f4bfc4e05009d430020eca12b89b14d.scope: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 chronyd[960]: chronyd exiting
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopped libcontainer container e7959419652f5b24a8403d86d3e316ec0f4bfc4e05009d430020eca12b89b14d.
Feb 23 16:31:57 ip-10-0-136-68 conmon[2498]: conmon 7ed25ccda440b1407317 : container 2534 exited with status 143
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-e7959419652f5b24a8403d86d3e316ec0f4bfc4e05009d430020eca12b89b14d.scope: Consumed 6.840s CPU time
Feb 23 16:31:57 ip-10-0-136-68 conmon[2485]: conmon 532060bd7464ded47ac6 : container 2511 exited with status 143
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-2c9afadfaefb33bc9960d887430757ab296b84b4f8f2e38a7d5a562d440648d2.scope: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 conmon[2720]: conmon 0d96ff9d4729d98a0282 : container 2769 exited with status 2
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopped libcontainer container 2c9afadfaefb33bc9960d887430757ab296b84b4f8f2e38a7d5a562d440648d2.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-2c9afadfaefb33bc9960d887430757ab296b84b4f8f2e38a7d5a562d440648d2.scope: Consumed 15ms CPU time
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-conmon-2b26cbfc007385f29997882b899af5fbb6cc332ce1ef7773d98623c961bfacf4.scope: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-conmon-2b26cbfc007385f29997882b899af5fbb6cc332ce1ef7773d98623c961bfacf4.scope: Consumed 19ms CPU time
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-29882cfc4e7f08a2fc37c946bff27fcbd17f38017f7f183aa6268a1121e4478d.scope: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopped libcontainer container 29882cfc4e7f08a2fc37c946bff27fcbd17f38017f7f183aa6268a1121e4478d.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-29882cfc4e7f08a2fc37c946bff27fcbd17f38017f7f183aa6268a1121e4478d.scope: Consumed 20.912s CPU time
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-9cacd2022bb32e91792019f864eb5ec430ac26b4d3f2223bca19e54ed2cc3854.scope: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopped libcontainer container 9cacd2022bb32e91792019f864eb5ec430ac26b4d3f2223bca19e54ed2cc3854.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-9cacd2022bb32e91792019f864eb5ec430ac26b4d3f2223bca19e54ed2cc3854.scope: Consumed 402ms CPU time
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: sshd.service: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopped OpenSSH server daemon.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: sshd.service: Consumed 10ms CPU time
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-conmon-2c9afadfaefb33bc9960d887430757ab296b84b4f8f2e38a7d5a562d440648d2.scope: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-conmon-2c9afadfaefb33bc9960d887430757ab296b84b4f8f2e38a7d5a562d440648d2.scope: Consumed 17ms CPU time
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: serial-getty@ttyS0.service: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopped Serial Getty on ttyS0.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: serial-getty@ttyS0.service: Consumed 609ms CPU time
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-cc8ac0b3221bee2c93bcd633a65f084396c7fcebdd0cc982a5e812aebc66f372.scope: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopped libcontainer container cc8ac0b3221bee2c93bcd633a65f084396c7fcebdd0cc982a5e812aebc66f372.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-cc8ac0b3221bee2c93bcd633a65f084396c7fcebdd0cc982a5e812aebc66f372.scope: Consumed 1.070s CPU time
Feb 23 16:31:57 ip-10-0-136-68 conmon[4370]: conmon 01dde28329bb6eb1962b : container 4393 exited with status 2
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: getty@tty1.service: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopped Getty on tty1.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: getty@tty1.service: Consumed 3.860s CPU time
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: kubelet.service: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopped Kubernetes Kubelet.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: kubelet.service: Consumed 2min 3.147s CPU time
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: rpm-ostreed.service: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopped rpm-ostree System Management Daemon.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: rpm-ostreed.service: Consumed 1min 26.521s CPU time
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: chronyd.service: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopped NTP client/server.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: chronyd.service: Consumed 107ms CPU time
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-dbee715502369174db42d97b2506ad563352efe30d5cb9c9ccd42d69d480bda2.scope: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopped libcontainer container dbee715502369174db42d97b2506ad563352efe30d5cb9c9ccd42d69d480bda2.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-dbee715502369174db42d97b2506ad563352efe30d5cb9c9ccd42d69d480bda2.scope: Consumed 521ms CPU time
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-conmon-dbee715502369174db42d97b2506ad563352efe30d5cb9c9ccd42d69d480bda2.scope: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-conmon-dbee715502369174db42d97b2506ad563352efe30d5cb9c9ccd42d69d480bda2.scope: Consumed 16ms CPU time
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-conmon-9cacd2022bb32e91792019f864eb5ec430ac26b4d3f2223bca19e54ed2cc3854.scope: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-conmon-9cacd2022bb32e91792019f864eb5ec430ac26b4d3f2223bca19e54ed2cc3854.scope: Consumed 19ms CPU time
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-conmon-cc8ac0b3221bee2c93bcd633a65f084396c7fcebdd0cc982a5e812aebc66f372.scope: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-conmon-cc8ac0b3221bee2c93bcd633a65f084396c7fcebdd0cc982a5e812aebc66f372.scope: Consumed 19ms CPU time
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-6c416d5c623aca70659a2b828f0d1ddff8c2215a5ed94cd2d3df6302ee724c80.scope: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopped libcontainer container 6c416d5c623aca70659a2b828f0d1ddff8c2215a5ed94cd2d3df6302ee724c80.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-6c416d5c623aca70659a2b828f0d1ddff8c2215a5ed94cd2d3df6302ee724c80.scope: Consumed 221ms CPU time
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-01dde28329bb6eb1962baf04ecf2191809aca8308fa489e787809cca31606ca9.scope: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopped libcontainer container 01dde28329bb6eb1962baf04ecf2191809aca8308fa489e787809cca31606ca9.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-01dde28329bb6eb1962baf04ecf2191809aca8308fa489e787809cca31606ca9.scope: Consumed 147ms CPU time
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-conmon-e7959419652f5b24a8403d86d3e316ec0f4bfc4e05009d430020eca12b89b14d.scope: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-conmon-e7959419652f5b24a8403d86d3e316ec0f4bfc4e05009d430020eca12b89b14d.scope: Consumed 17ms CPU time
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-0d96ff9d4729d98a0282336bb6314c646dd47938255774920eae33e5bda1f10d.scope: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopped libcontainer container 0d96ff9d4729d98a0282336bb6314c646dd47938255774920eae33e5bda1f10d.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-0d96ff9d4729d98a0282336bb6314c646dd47938255774920eae33e5bda1f10d.scope: Consumed 290ms CPU time
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-35ef8eadfa1d96a217874ea7aa4a8d3d2dbdf7cd34bb8073fd6cfb47078dd3b3.scope: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopped libcontainer container 35ef8eadfa1d96a217874ea7aa4a8d3d2dbdf7cd34bb8073fd6cfb47078dd3b3.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-35ef8eadfa1d96a217874ea7aa4a8d3d2dbdf7cd34bb8073fd6cfb47078dd3b3.scope: Consumed 89ms CPU time
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-conmon-35ef8eadfa1d96a217874ea7aa4a8d3d2dbdf7cd34bb8073fd6cfb47078dd3b3.scope: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-conmon-35ef8eadfa1d96a217874ea7aa4a8d3d2dbdf7cd34bb8073fd6cfb47078dd3b3.scope: Consumed 17ms CPU time
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-conmon-532060bd7464ded47ac6c220512d7ce160f4d1930ce612a7fec9bd1c797c50c4.scope: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-conmon-532060bd7464ded47ac6c220512d7ce160f4d1930ce612a7fec9bd1c797c50c4.scope: Consumed 19ms CPU time
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-conmon-0d96ff9d4729d98a0282336bb6314c646dd47938255774920eae33e5bda1f10d.scope: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-conmon-0d96ff9d4729d98a0282336bb6314c646dd47938255774920eae33e5bda1f10d.scope: Consumed 18ms CPU time
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-conmon-29882cfc4e7f08a2fc37c946bff27fcbd17f38017f7f183aa6268a1121e4478d.scope: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-conmon-29882cfc4e7f08a2fc37c946bff27fcbd17f38017f7f183aa6268a1121e4478d.scope: Consumed 20ms CPU time
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-conmon-6c416d5c623aca70659a2b828f0d1ddff8c2215a5ed94cd2d3df6302ee724c80.scope: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-conmon-6c416d5c623aca70659a2b828f0d1ddff8c2215a5ed94cd2d3df6302ee724c80.scope: Consumed 20ms CPU time
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-conmon-4820220c599fd37621891944ed9fe141cf1c5d64afe7fb4083404f96a1f62593.scope: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-conmon-4820220c599fd37621891944ed9fe141cf1c5d64afe7fb4083404f96a1f62593.scope: Consumed 26ms CPU time
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-b509d8436ded4c4e37d521123b7454d90a82f068fb8c15c885117dcd35577654.scope: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopped libcontainer container b509d8436ded4c4e37d521123b7454d90a82f068fb8c15c885117dcd35577654.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-b509d8436ded4c4e37d521123b7454d90a82f068fb8c15c885117dcd35577654.scope: Consumed 563ms CPU time
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-conmon-69f9cd415e4e9bca49ba6ec5503ea9fe66e652868e3a2bde2c26833f37e1b7c3.scope: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-conmon-69f9cd415e4e9bca49ba6ec5503ea9fe66e652868e3a2bde2c26833f37e1b7c3.scope: Consumed 27ms CPU time
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-conmon-01dde28329bb6eb1962baf04ecf2191809aca8308fa489e787809cca31606ca9.scope: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-conmon-01dde28329bb6eb1962baf04ecf2191809aca8308fa489e787809cca31606ca9.scope: Consumed 17ms CPU time
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-7ed25ccda440b1407317cffea802770d961748c145f541fad8b075aa9818b84d.scope: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopped libcontainer container 7ed25ccda440b1407317cffea802770d961748c145f541fad8b075aa9818b84d.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-7ed25ccda440b1407317cffea802770d961748c145f541fad8b075aa9818b84d.scope: Consumed 1.475s CPU time
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-69f9cd415e4e9bca49ba6ec5503ea9fe66e652868e3a2bde2c26833f37e1b7c3.scope: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopped libcontainer container 69f9cd415e4e9bca49ba6ec5503ea9fe66e652868e3a2bde2c26833f37e1b7c3.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-69f9cd415e4e9bca49ba6ec5503ea9fe66e652868e3a2bde2c26833f37e1b7c3.scope: Consumed 6.754s CPU time
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-532060bd7464ded47ac6c220512d7ce160f4d1930ce612a7fec9bd1c797c50c4.scope: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopped libcontainer container 532060bd7464ded47ac6c220512d7ce160f4d1930ce612a7fec9bd1c797c50c4.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-532060bd7464ded47ac6c220512d7ce160f4d1930ce612a7fec9bd1c797c50c4.scope: Consumed 754ms CPU time
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-4820220c599fd37621891944ed9fe141cf1c5d64afe7fb4083404f96a1f62593.scope: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopped libcontainer container 4820220c599fd37621891944ed9fe141cf1c5d64afe7fb4083404f96a1f62593.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-4820220c599fd37621891944ed9fe141cf1c5d64afe7fb4083404f96a1f62593.scope: Consumed 621ms CPU time
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-conmon-7ed25ccda440b1407317cffea802770d961748c145f541fad8b075aa9818b84d.scope: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-conmon-7ed25ccda440b1407317cffea802770d961748c145f541fad8b075aa9818b84d.scope: Consumed 23ms CPU time
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-434bc76ecbe42021fc40f1c0739badd563ca7f187e1c106f228683f5b566bff3.scope: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopped libcontainer container 434bc76ecbe42021fc40f1c0739badd563ca7f187e1c106f228683f5b566bff3.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-434bc76ecbe42021fc40f1c0739badd563ca7f187e1c106f228683f5b566bff3.scope: Consumed 26.210s CPU time
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-conmon-434bc76ecbe42021fc40f1c0739badd563ca7f187e1c106f228683f5b566bff3.scope: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-conmon-434bc76ecbe42021fc40f1c0739badd563ca7f187e1c106f228683f5b566bff3.scope: Consumed 37ms CPU time
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Removed slice system-getty.slice.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: system-getty.slice: Consumed 3.860s CPU time
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopping Permit User Sessions...
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Removed slice system-serial\x2dgetty.slice.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: system-serial\x2dgetty.slice: Consumed 609ms CPU time
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopped target sshd-keygen.target.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopped target Host and Network Name Lookups.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-2b26cbfc007385f29997882b899af5fbb6cc332ce1ef7773d98623c961bfacf4.scope: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopped libcontainer container 2b26cbfc007385f29997882b899af5fbb6cc332ce1ef7773d98623c961bfacf4.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: crio-2b26cbfc007385f29997882b899af5fbb6cc332ce1ef7773d98623c961bfacf4.scope: Consumed 2.588s CPU time
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: systemd-user-sessions.service: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopped Permit User Sessions.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: systemd-user-sessions.service: Consumed 7ms CPU time
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: coreos-ignition-write-issues.service: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopped Create Ignition Status Issue Files.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: coreos-ignition-write-issues.service: Consumed 0 CPU time
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopped target Remote File Systems.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: systemd-logind.service: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopped Login Service.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: systemd-logind.service: Consumed 143ms CPU time
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopped target User and Group Name Lookups.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopping System Security Services Daemon...
Feb 23 16:31:57 ip-10-0-136-68 sssd_nss[1037]: Shutting down (status = 0)
Feb 23 16:31:57 ip-10-0-136-68 sssd_be[1025]: Shutting down (status = 0)
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: sssd.service: Succeeded.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: Stopped System Security Services Daemon.
Feb 23 16:31:57 ip-10-0-136-68 systemd[1]: sssd.service: Consumed 176ms CPU time
Feb 23 16:31:58 ip-10-0-136-68 systemd[1]: crio-6e7a97653d824ec5b01b77eaedebcfa9718e07078a3f1d75a5ed3092cfb1f83f.scope: Succeeded.
Feb 23 16:31:58 ip-10-0-136-68 systemd[1]: Stopped libcontainer container 6e7a97653d824ec5b01b77eaedebcfa9718e07078a3f1d75a5ed3092cfb1f83f.
Feb 23 16:31:58 ip-10-0-136-68 systemd[1]: crio-6e7a97653d824ec5b01b77eaedebcfa9718e07078a3f1d75a5ed3092cfb1f83f.scope: Consumed 363ms CPU time
Feb 23 16:31:58 ip-10-0-136-68 systemd[1]: crio-conmon-6e7a97653d824ec5b01b77eaedebcfa9718e07078a3f1d75a5ed3092cfb1f83f.scope: Succeeded.
Feb 23 16:31:58 ip-10-0-136-68 systemd[1]: crio-conmon-6e7a97653d824ec5b01b77eaedebcfa9718e07078a3f1d75a5ed3092cfb1f83f.scope: Consumed 21ms CPU time
Feb 23 16:31:58 ip-10-0-136-68 systemd[1]: crio-conmon-c5f7cc8140aee5330785bd4d197373455bff7c22bbd7f0840b2113148bb0f33b.scope: Succeeded.
Feb 23 16:31:58 ip-10-0-136-68 systemd[1]: crio-conmon-c5f7cc8140aee5330785bd4d197373455bff7c22bbd7f0840b2113148bb0f33b.scope: Consumed 17ms CPU time
Feb 23 16:31:58 ip-10-0-136-68 systemd[1]: crio-c5f7cc8140aee5330785bd4d197373455bff7c22bbd7f0840b2113148bb0f33b.scope: Succeeded.
Feb 23 16:31:58 ip-10-0-136-68 systemd[1]: Stopped libcontainer container c5f7cc8140aee5330785bd4d197373455bff7c22bbd7f0840b2113148bb0f33b.
Feb 23 16:31:58 ip-10-0-136-68 systemd[1]: crio-c5f7cc8140aee5330785bd4d197373455bff7c22bbd7f0840b2113148bb0f33b.scope: Consumed 349ms CPU time
Feb 23 16:31:58 ip-10-0-136-68 systemd[1]: crio-bfc013878334f77e66b5ee2a9bd1b1d89d15e37cbfe933e68e4481f20f438f64.scope: Succeeded.
Feb 23 16:31:58 ip-10-0-136-68 systemd[1]: Stopped libcontainer container bfc013878334f77e66b5ee2a9bd1b1d89d15e37cbfe933e68e4481f20f438f64.
Feb 23 16:31:58 ip-10-0-136-68 systemd[1]: crio-bfc013878334f77e66b5ee2a9bd1b1d89d15e37cbfe933e68e4481f20f438f64.scope: Consumed 318ms CPU time
Feb 23 16:31:58 ip-10-0-136-68 systemd[1]: crio-conmon-bfc013878334f77e66b5ee2a9bd1b1d89d15e37cbfe933e68e4481f20f438f64.scope: Succeeded.
Feb 23 16:31:58 ip-10-0-136-68 systemd[1]: crio-conmon-bfc013878334f77e66b5ee2a9bd1b1d89d15e37cbfe933e68e4481f20f438f64.scope: Consumed 17ms CPU time
Feb 23 16:31:58 ip-10-0-136-68 systemd[1]: crio-4a39f7e2bf3eb17fdd0d26fc1d6c124ca21c4a6c4547787c09eccfa411e099af.scope: Succeeded.
Feb 23 16:31:58 ip-10-0-136-68 systemd[1]: Stopped libcontainer container 4a39f7e2bf3eb17fdd0d26fc1d6c124ca21c4a6c4547787c09eccfa411e099af.
Feb 23 16:31:58 ip-10-0-136-68 systemd[1]: crio-4a39f7e2bf3eb17fdd0d26fc1d6c124ca21c4a6c4547787c09eccfa411e099af.scope: Consumed 530ms CPU time
Feb 23 16:31:58 ip-10-0-136-68 systemd[1]: crio-14c03d26750fe00987600f26e5fde9507871bca7fdd8d2fbe974f06debddb2bb.scope: Succeeded.
Feb 23 16:31:58 ip-10-0-136-68 systemd[1]: Stopped libcontainer container 14c03d26750fe00987600f26e5fde9507871bca7fdd8d2fbe974f06debddb2bb.
Feb 23 16:31:58 ip-10-0-136-68 systemd[1]: crio-14c03d26750fe00987600f26e5fde9507871bca7fdd8d2fbe974f06debddb2bb.scope: Consumed 968ms CPU time
Feb 23 16:31:58 ip-10-0-136-68 systemd[1]: crio-conmon-4a39f7e2bf3eb17fdd0d26fc1d6c124ca21c4a6c4547787c09eccfa411e099af.scope: Succeeded.
Feb 23 16:31:58 ip-10-0-136-68 systemd[1]: crio-conmon-4a39f7e2bf3eb17fdd0d26fc1d6c124ca21c4a6c4547787c09eccfa411e099af.scope: Consumed 19ms CPU time
Feb 23 16:31:58 ip-10-0-136-68 systemd[1]: crio-conmon-14c03d26750fe00987600f26e5fde9507871bca7fdd8d2fbe974f06debddb2bb.scope: Succeeded.
Feb 23 16:31:58 ip-10-0-136-68 systemd[1]: crio-conmon-14c03d26750fe00987600f26e5fde9507871bca7fdd8d2fbe974f06debddb2bb.scope: Consumed 17ms CPU time
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: crio-conmon-2ae211733b9d38f87c856cc5d2197f3933006872bbf3d858ad27eb7b5cc2415c.scope: Succeeded.
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: crio-conmon-2ae211733b9d38f87c856cc5d2197f3933006872bbf3d858ad27eb7b5cc2415c.scope: Consumed 19ms CPU time
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: crio-2ae211733b9d38f87c856cc5d2197f3933006872bbf3d858ad27eb7b5cc2415c.scope: Succeeded.
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: Stopped libcontainer container 2ae211733b9d38f87c856cc5d2197f3933006872bbf3d858ad27eb7b5cc2415c.
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: crio-2ae211733b9d38f87c856cc5d2197f3933006872bbf3d858ad27eb7b5cc2415c.scope: Consumed 3.612s CPU time
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: Stopping Container Runtime Interface for OCI (CRI-O)...
Feb 23 16:32:17 ip-10-0-136-68 crio[2086]: time="2023-02-23 16:32:17.528638144Z" level=error msg="Failed to update container state for 2ae211733b9d38f87c856cc5d2197f3933006872bbf3d858ad27eb7b5cc2415c: `/usr/bin/runc --root /run/runc --systemd-cgroup state 2ae211733b9d38f87c856cc5d2197f3933006872bbf3d858ad27eb7b5cc2415c` failed: : signal: terminated"
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: crio.service: Succeeded.
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: Stopped Container Runtime Interface for OCI (CRI-O).
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: crio.service: Consumed 1min 49.144s CPU time
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: kubelet-auto-node-size.service: Succeeded.
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: Stopped Dynamically sets the system reserved for the kubelet.
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: kubelet-auto-node-size.service: Consumed 0 CPU time
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: Stopped target Network is Online.
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: NetworkManager-wait-online.service: Succeeded.
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: Stopped Network Manager Wait Online.
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: NetworkManager-wait-online.service: Consumed 0 CPU time
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: Stopped target Network.
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: Stopping Network Manager...
Feb 23 16:32:17 ip-10-0-136-68 NetworkManager[1149]: [1677169937.5859] caught SIGTERM, shutting down normally.
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: node-valid-hostname.service: Succeeded.
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: Stopped Wait for a non-localhost hostname.
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: node-valid-hostname.service: Consumed 0 CPU time
Feb 23 16:32:17 ip-10-0-136-68 NetworkManager[1149]: [1677169937.5881] device (ens5): releasing ovs interface ens5
Feb 23 16:32:17 ip-10-0-136-68 NetworkManager[1149]: [1677169937.5884] device (br-ex): state change: activated -> deactivating (reason 'unmanaged', sys-iface-state: 'managed')
Feb 23 16:32:17 ip-10-0-136-68 dbus-daemon[958]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service' requested by ':1.6' (uid=0 pid=1149 comm="/usr/sbin/NetworkManager --no-daemon " label="system_u:system_r:NetworkManager_t:s0")
Feb 23 16:32:17 ip-10-0-136-68 dbus-daemon[958]: [system] Activation via systemd failed for unit 'dbus-org.freedesktop.nm-dispatcher.service': Refusing activation, D-Bus is shutting down.
Feb 23 16:32:17 ip-10-0-136-68 NetworkManager[1149]: [1677169937.5892] dispatcher: (28) failed: Refusing activation, D-Bus is shutting down.
Feb 23 16:32:17 ip-10-0-136-68 NetworkManager[1149]: [1677169937.5893] device (br-ex): state change: deactivating -> unmanaged (reason 'removed', sys-iface-state: 'managed')
Feb 23 16:32:17 ip-10-0-136-68 dbus-daemon[958]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service' requested by ':1.6' (uid=0 pid=1149 comm="/usr/sbin/NetworkManager --no-daemon " label="system_u:system_r:NetworkManager_t:s0")
Feb 23 16:32:17 ip-10-0-136-68 dbus-daemon[958]: [system] Activation via systemd failed for unit 'dbus-org.freedesktop.nm-dispatcher.service': Refusing activation, D-Bus is shutting down.
Feb 23 16:32:17 ip-10-0-136-68 NetworkManager[1149]: [1677169937.5903] device (ens5): state change: activated -> deactivating (reason 'unmanaged', sys-iface-state: 'managed')
Feb 23 16:32:17 ip-10-0-136-68 dbus-daemon[958]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service' requested by ':1.6' (uid=0 pid=1149 comm="/usr/sbin/NetworkManager --no-daemon " label="system_u:system_r:NetworkManager_t:s0")
Feb 23 16:32:17 ip-10-0-136-68 dbus-daemon[958]: [system] Activation via systemd failed for unit 'dbus-org.freedesktop.nm-dispatcher.service': Refusing activation, D-Bus is shutting down.
Feb 23 16:32:17 ip-10-0-136-68 NetworkManager[1149]: [1677169937.5909] dispatcher: (30) failed: Refusing activation, D-Bus is shutting down.
Feb 23 16:32:17 ip-10-0-136-68 NetworkManager[1149]: [1677169937.5909] device (ens5): state change: deactivating -> unmanaged (reason 'removed', sys-iface-state: 'managed')
Feb 23 16:32:17 ip-10-0-136-68 dbus-daemon[958]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service' requested by ':1.6' (uid=0 pid=1149 comm="/usr/sbin/NetworkManager --no-daemon " label="system_u:system_r:NetworkManager_t:s0")
Feb 23 16:32:17 ip-10-0-136-68 dbus-daemon[958]: [system] Activation via systemd failed for unit 'dbus-org.freedesktop.nm-dispatcher.service': Refusing activation, D-Bus is shutting down.
Feb 23 16:32:17 ip-10-0-136-68 NetworkManager[1149]: [1677169937.5916] device (br-ex): state change: activated -> deactivating (reason 'unmanaged', sys-iface-state: 'managed')
Feb 23 16:32:17 ip-10-0-136-68 dbus-daemon[958]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service' requested by ':1.6' (uid=0 pid=1149 comm="/usr/sbin/NetworkManager --no-daemon " label="system_u:system_r:NetworkManager_t:s0")
Feb 23 16:32:17 ip-10-0-136-68 dbus-daemon[958]: [system] Activation via systemd failed for unit 'dbus-org.freedesktop.nm-dispatcher.service': Refusing activation, D-Bus is shutting down.
Feb 23 16:32:17 ip-10-0-136-68 NetworkManager[1149]: [1677169937.5922] dispatcher: (32) failed: Refusing activation, D-Bus is shutting down.
Feb 23 16:32:17 ip-10-0-136-68 NetworkManager[1149]: [1677169937.5923] device (br-ex): state change: deactivating -> unmanaged (reason 'removed', sys-iface-state: 'managed')
Feb 23 16:32:17 ip-10-0-136-68 NetworkManager[1149]: [1677169937.5923] device (br-ex): releasing ovs interface br-ex
Feb 23 16:32:17 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00334|bridge|INFO|bridge br-ex: deleted interface br-ex on port 65534
Feb 23 16:32:17 ip-10-0-136-68 kernel: device br-ex left promiscuous mode
Feb 23 16:32:17 ip-10-0-136-68 NetworkManager[1149]: [1677169937.5940] dhcp4 (br-ex): canceled DHCP transaction
Feb 23 16:32:17 ip-10-0-136-68 NetworkManager[1149]: [1677169937.5941] dhcp4 (br-ex): activation: beginning transaction (timeout in 45 seconds)
Feb 23 16:32:17 ip-10-0-136-68 NetworkManager[1149]: [1677169937.5941] dhcp4 (br-ex): state changed no lease
Feb 23 16:32:17 ip-10-0-136-68 NetworkManager[1149]: [1677169937.6096] device (br-ex): state change: activated -> deactivating (reason 'unmanaged', sys-iface-state: 'managed')
Feb 23 16:32:17 ip-10-0-136-68 dbus-daemon[958]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service' requested by ':1.6' (uid=0 pid=1149 comm="/usr/sbin/NetworkManager --no-daemon " label="system_u:system_r:NetworkManager_t:s0")
Feb 23 16:32:17 ip-10-0-136-68 dbus-daemon[958]: [system] Activation via systemd failed for unit 'dbus-org.freedesktop.nm-dispatcher.service': Refusing activation, D-Bus is shutting down.
Feb 23 16:32:17 ip-10-0-136-68 dbus-daemon[958]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service' requested by ':1.6' (uid=0 pid=1149 comm="/usr/sbin/NetworkManager --no-daemon " label="system_u:system_r:NetworkManager_t:s0")
Feb 23 16:32:17 ip-10-0-136-68 dbus-daemon[958]: [system] Activation via systemd failed for unit 'dbus-org.freedesktop.nm-dispatcher.service': Refusing activation, D-Bus is shutting down.
Feb 23 16:32:17 ip-10-0-136-68 NetworkManager[1149]: [1677169937.6108] dispatcher: (34) failed: Refusing activation, D-Bus is shutting down.
Feb 23 16:32:17 ip-10-0-136-68 NetworkManager[1149]: [1677169937.6109] manager: NetworkManager state is now CONNECTED_LOCAL
Feb 23 16:32:17 ip-10-0-136-68 NetworkManager[1149]: [1677169937.6110] device (br-ex): state change: deactivating -> unmanaged (reason 'removed', sys-iface-state: 'managed')
Feb 23 16:32:17 ip-10-0-136-68 dbus-daemon[958]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service' requested by ':1.6' (uid=0 pid=1149 comm="/usr/sbin/NetworkManager --no-daemon " label="system_u:system_r:NetworkManager_t:s0")
Feb 23 16:32:17 ip-10-0-136-68 NetworkManager[1149]: [1677169937.6114] policy: set-hostname: set hostname to 'localhost.localdomain' (no hostname found)
Feb 23 16:32:17 ip-10-0-136-68 dbus-daemon[958]: [system] Activation via systemd failed for unit 'dbus-org.freedesktop.nm-dispatcher.service': Refusing activation, D-Bus is shutting down.
Feb 23 16:32:17 ip-10-0-136-68 dbus-daemon[958]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.6' (uid=0 pid=1149 comm="/usr/sbin/NetworkManager --no-daemon " label="system_u:system_r:NetworkManager_t:s0")
Feb 23 16:32:17 ip-10-0-136-68 dbus-daemon[958]: [system] Activation via systemd failed for unit 'dbus-org.freedesktop.hostname1.service': Refusing activation, D-Bus is shutting down.
Feb 23 16:32:17 ip-10-0-136-68 dbus-daemon[958]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service' requested by ':1.6' (uid=0 pid=1149 comm="/usr/sbin/NetworkManager --no-daemon " label="system_u:system_r:NetworkManager_t:s0")
Feb 23 16:32:17 ip-10-0-136-68 dbus-daemon[958]: [system] Activation via systemd failed for unit 'dbus-org.freedesktop.nm-dispatcher.service': Refusing activation, D-Bus is shutting down.
Feb 23 16:32:17 ip-10-0-136-68 dbus-daemon[958]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service' requested by ':1.6' (uid=0 pid=1149 comm="/usr/sbin/NetworkManager --no-daemon " label="system_u:system_r:NetworkManager_t:s0")
Feb 23 16:32:17 ip-10-0-136-68 dbus-daemon[958]: [system] Activation via systemd failed for unit 'dbus-org.freedesktop.nm-dispatcher.service': Refusing activation, D-Bus is shutting down.
Feb 23 16:32:17 ip-10-0-136-68 NetworkManager[1149]: [1677169937.6184] exiting (success)
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: NetworkManager.service: Succeeded.
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: Stopped Network Manager.
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: NetworkManager.service: Consumed 456ms CPU time
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: Stopping D-Bus System Message Bus...
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: Stopping Open vSwitch...
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: openvswitch.service: Succeeded.
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: Stopped Open vSwitch.
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: openvswitch.service: Consumed 744us CPU time
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: dbus.service: Succeeded.
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: Stopped D-Bus System Message Bus.
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: dbus.service: Consumed 868ms CPU time
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: Stopping Open vSwitch Forwarding Unit...
Feb 23 16:32:17 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00335|bridge|INFO|bridge br-ex: deleted interface ens5 on port 1
Feb 23 16:32:17 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00336|bridge|INFO|bridge br-ex: deleted interface patch-br-ex_ip-10-0-136-68.us-west-2.compute.internal-to-br-int on port 2
Feb 23 16:32:17 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00337|ofproto_dpif_rid|ERR|recirc_id 6 left allocated when ofproto (br-ex) is destructed
Feb 23 16:32:17 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00338|bridge|INFO|bridge br-int: deleted interface patch-br-int-to-br-ex_ip-10-0-136-68.us-west-2.compute.internal on port 6
Feb 23 16:32:17 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00339|bridge|INFO|bridge br-int: deleted interface ovn-5a9c4f-0 on port 2
Feb 23 16:32:17 ip-10-0-136-68 ovs-ctl[78849]: Exiting ovs-vswitchd (1135).
Feb 23 16:32:17 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00340|bridge|INFO|bridge br-int: deleted interface 0c751590d84e3dc on port 9
Feb 23 16:32:17 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00341|bridge|INFO|bridge br-int: deleted interface ovn-72cfee-0 on port 7
Feb 23 16:32:17 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00342|bridge|INFO|bridge br-int: deleted interface ovn-7dfb31-0 on port 1
Feb 23 16:32:17 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00343|bridge|INFO|bridge br-int: deleted interface ovn-k8s-mp0 on port 5
Feb 23 16:32:17 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00344|bridge|INFO|bridge br-int: deleted interface f879576786b0889 on port 8
Feb 23 16:32:17 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00345|bridge|INFO|bridge br-int: deleted interface ovn-b823f7-0 on port 4
Feb 23 16:32:17 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00346|bridge|INFO|bridge br-int: deleted interface ovn-061a07-0 on port 3
Feb 23 16:32:17 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00347|bridge|INFO|bridge br-int: deleted interface 35539c92883319b on port 10
Feb 23 16:32:17 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00348|bridge|INFO|bridge br-int: deleted interface ff0a102645f986a on port 11
Feb 23 16:32:17 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00349|bridge|INFO|bridge br-int: deleted interface br-int on port 65534
Feb 23 16:32:17 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00350|ofproto_dpif_rid|ERR|recirc_id 92 left allocated when ofproto (br-int) is destructed
Feb 23 16:32:17 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00351|ofproto_dpif_rid|ERR|recirc_id 5491 left allocated when ofproto (br-int) is destructed
Feb 23 16:32:17 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00352|ofproto_dpif_rid|ERR|recirc_id 5492 left allocated when ofproto (br-int) is destructed
Feb 23 16:32:17 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00353|ofproto_dpif_rid|ERR|recirc_id 90 left allocated when ofproto (br-int) is destructed
Feb 23 16:32:17 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00354|ofproto_dpif_rid|ERR|recirc_id 319 left allocated when ofproto (br-int) is destructed
Feb 23 16:32:17 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00355|ofproto_dpif_rid|ERR|recirc_id 83 left allocated when ofproto (br-int) is destructed
Feb 23 16:32:17 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00356|ofproto_dpif_rid|ERR|recirc_id 5490 left allocated when ofproto (br-int) is destructed
Feb 23 16:32:17 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00357|ofproto_dpif_rid|ERR|recirc_id 163 left allocated when ofproto (br-int) is destructed
Feb 23 16:32:17 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00358|ofproto_dpif_rid|ERR|recirc_id 5489 left allocated when ofproto (br-int) is destructed
Feb 23 16:32:17 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00359|ofproto_dpif_rid|ERR|recirc_id 5493 left allocated when ofproto (br-int) is destructed
Feb 23 16:32:17 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00360|ofproto_dpif_rid|ERR|recirc_id 5494 left allocated when ofproto (br-int) is destructed
Feb 23 16:32:17 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00361|ofproto_dpif_rid|ERR|recirc_id 5474 left allocated when ofproto (br-int) is destructed
Feb 23 16:32:17 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00362|ofproto_dpif_rid|ERR|recirc_id 5488 left allocated when ofproto (br-int) is destructed
Feb 23 16:32:17 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00363|ofproto_dpif_rid|ERR|recirc_id 5496 left allocated when ofproto (br-int) is destructed
Feb 23 16:32:17 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00364|ofproto_dpif_rid|ERR|recirc_id 5389 left allocated when ofproto (br-int) is destructed
Feb 23 16:32:17 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00365|ofproto_dpif_rid|ERR|recirc_id 84 left allocated when ofproto (br-int) is destructed
Feb 23 16:32:17 ip-10-0-136-68 ovs-vswitchd[1135]: ovs|00366|ofproto_dpif_rid|ERR|recirc_id 5495 left allocated when ofproto (br-int) is destructed
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: ovs-vswitchd.service: Succeeded.
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: Stopped Open vSwitch Forwarding Unit.
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: ovs-vswitchd.service: Consumed 18.593s CPU time
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: ovs-delete-transient-ports.service: Succeeded.
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: Stopped Open vSwitch Delete Transient Ports.
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: ovs-delete-transient-ports.service: Consumed 0 CPU time
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: Stopping Open vSwitch Database Unit...
Feb 23 16:32:17 ip-10-0-136-68 ovs-ctl[78869]: Exiting ovsdb-server (1062).
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: ovsdb-server.service: Succeeded.
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: Stopped Open vSwitch Database Unit.
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: ovsdb-server.service: Consumed 1.444s CPU time
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: Stopped target Basic System.
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: Stopping OSTree Finalize Staged Deployment...
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: Stopped target Paths.
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: console-login-helper-messages-issuegen.path: Succeeded.
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: Stopped Monitor console-login-helper-messages runtime issue snippets directory for changes.
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: Stopped target Sockets.
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: dbus.socket: Succeeded.
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: Closed D-Bus System Message Bus Socket.
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: dbus.socket: Consumed 0 CPU time
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: bootupd.socket: Succeeded.
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: Closed bootupd.socket.
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: bootupd.socket: Consumed 0 CPU time
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: Stopped target Slices.
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: Removed slice User and Session Slice.
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: user.slice: Consumed 0 CPU time
Feb 23 16:32:17 ip-10-0-136-68 systemd[1]: Stopped target Network (Pre).
Feb 23 16:32:18 ip-10-0-136-68 ostree[78889]: Finalizing staged deployment
Feb 23 16:32:18 ip-10-0-136-68 kernel: EXT4-fs (nvme0n1p3): re-mounted. Opts:
Feb 23 16:32:20 ip-10-0-136-68 ostree[78889]: Copying /etc changes: 14 modified, 0 removed, 200 added
Feb 23 16:32:20 ip-10-0-136-68 ostree[78889]: Copying /etc changes: 14 modified, 0 removed, 200 added
Feb 23 16:32:22 ip-10-0-136-68 ostree[78889]: Bootloader updated; bootconfig swap: yes; bootversion: boot.0.1, deployment count change: 1
Feb 23 16:32:22 ip-10-0-136-68 ostree[78889]: Bootloader updated; bootconfig swap: yes; bootversion: boot.0.1, deployment count change: 1
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: ostree-finalize-staged.service: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Stopped OSTree Finalize Staged Deployment.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: ostree-finalize-staged.service: Consumed 1.676s CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: ostree-finalize-staged.path: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Stopped OSTree Monitor Staged Deployment.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Stopped target System Initialization.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Stopping Load/Save Random Seed...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: systemd-update-done.service: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Stopped Update is Completed.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: systemd-update-done.service: Consumed 0 CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: systemd-hwdb-update.service: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Stopped Rebuild Hardware Database.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: systemd-hwdb-update.service: Consumed 0 CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Stopped target Local Encrypted Volumes.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: systemd-ask-password-console.path: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Stopped target Local Encrypted Volumes (Pre).
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: systemd-ask-password-wall.path: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Stopped Forward Password Requests to Wall Directory Watch.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Stopping Update UTMP about System Boot/Shutdown...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: systemd-sysctl.service: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Stopped Apply Kernel Variables.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: systemd-sysctl.service: Consumed 0 CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: systemd-modules-load.service: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Stopped Load Kernel Modules.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: systemd-modules-load.service: Consumed 0 CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: coreos-printk-quiet.service: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Stopped CoreOS: Set printk To Level 4 (warn).
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: coreos-printk-quiet.service: Consumed 0 CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: ldconfig.service: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Stopped Rebuild Dynamic Linker Cache.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: ldconfig.service: Consumed 0 CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: systemd-journal-catalog-update.service: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Stopped Rebuild Journal Catalog.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: systemd-journal-catalog-update.service: Consumed 0 CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: systemd-random-seed.service: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Stopped Load/Save Random Seed.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: systemd-random-seed.service: Consumed 0 CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: systemd-update-utmp.service: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Stopped Update UTMP about System Boot/Shutdown.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: systemd-update-utmp.service: Consumed 3ms CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Stopping Security Auditing Service...
Feb 23 16:32:22 ip-10-0-136-68 auditd[903]: The audit daemon is exiting.
Feb 23 16:32:22 ip-10-0-136-68 kernel: audit: type=1305 audit(1677169942.449:161): op=set audit_pid=0 old=903 auid=4294967295 ses=4294967295 subj=system_u:system_r:auditd_t:s0 res=1
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: auditd.service: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Stopped Security Auditing Service.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: auditd.service: Consumed 32ms CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: systemd-tmpfiles-setup.service: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Stopped Create Volatile Files and Directories.
Feb 23 16:32:22 ip-10-0-136-68 kernel: audit: type=1130 audit(1677169942.455:162): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=auditd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 23 16:32:22 ip-10-0-136-68 kernel: audit: type=1131 audit(1677169942.455:163): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=auditd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 23 16:32:22 ip-10-0-136-68 kernel: audit: type=1130 audit(1677169942.458:164): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 23 16:32:22 ip-10-0-136-68 kernel: audit: type=1131 audit(1677169942.458:165): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: systemd-tmpfiles-setup.service: Consumed 0 CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: systemd-journal-flush.service: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Stopped Flush Journal to Persistent Storage.
Feb 23 16:32:22 ip-10-0-136-68 kernel: audit: type=1130 audit(1677169942.460:166): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 23 16:32:22 ip-10-0-136-68 kernel: audit: type=1131 audit(1677169942.460:167): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: systemd-journal-flush.service: Consumed 0 CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Stopped target Local File Systems.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/kubelet/pods/409b8d00-553f-43cb-8805-64a5931be933/volumes/kubernetes.io~secret/ovn-node-metrics-cert...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/kubelet/pods/6d75c369-887c-42d2-94c1-40cd36f882c3/volumes/kubernetes.io~projected/kube-api-access-xhxvk...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /run/containers/storage/overlay-containers/35539c92883319ba9303dff4d62e621858c65895d17a031283ea570d6c0ffd51/userdata/shm...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /run/ipcns/354c29e9-705c-4f87-93ce-2b33c1ed2903...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: ostree-remount.service: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Stopped OSTree Remount OS/ Bind Mounts.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: ostree-remount.service: Consumed 0 CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/9e149a125370c0ad754b2613788b83fa6545bf7bf57711f7193a1b3e375af4dc/merged...
Feb 23 16:32:22 ip-10-0-136-68 kernel: audit: type=1130 audit(1677169942.473:168): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=ostree-remount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 23 16:32:22 ip-10-0-136-68 kernel: audit: type=1131 audit(1677169942.473:169): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=ostree-remount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /run/netns/47731151-d6c2-4983-ad0a-4b809b7855d3...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /run/netns/f36753b3-0496-4a07-9706-b1775a079ccf...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /run/utsns/01bf971f-641c-4c4c-8b63-110b0780e79c...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /run/netns/a17440cb-5d23-467a-b4af-09ce6ea96f63...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/kubelet/pods/b97e7fe5-fe52-4769-bb52-fc233e05c05e/volumes/kubernetes.io~projected/kube-api-access-m29j2...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/3a9bbd2e4a30ee925c19009671172e4b08a443f5a6e78f350c74b69a700a00a2/merged...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/7633633f89b599dee7515b689fdc5f26ca3883822a80a39ce0fd2c9e57c57ece/merged...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /run/ipcns/47731151-d6c2-4983-ad0a-4b809b7855d3...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/7c086642764b421fbd08c314a9efbfbb080e5b836433080e02d319743ea71043/merged...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/4d1130cef66f4edf487bcc90f69a7b09bfcce530c8486128a34bc727b397bf01/merged...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /run/containers/storage/overlay-containers/19bce822018f6e51c5cbfd0e2e3748745954602abb6a774120b1f9e6f36281ae/userdata/shm...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/kubelet/pods/409b8d00-553f-43cb-8805-64a5931be933/volumes/kubernetes.io~secret/ovn-cert...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /run/netns/a92af5d6-48f2-4cbc-ab67-5e7aee609bd3...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting Temporary Directory (/tmp)...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /run/utsns/a92af5d6-48f2-4cbc-ab67-5e7aee609bd3...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /run/containers/storage/overlay-containers/2d4ebea480481fe0194b5656ac47e06df6cd5d12530000111fd05928c843d523/userdata/shm...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/kubelet/pods/409b8d00-553f-43cb-8805-64a5931be933/volumes/kubernetes.io~projected/kube-api-access-k9xlt...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /run/utsns/464ecae5-d083-4bf5-84a3-af8c8873c68a...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/6e1ea68f079a62aa982468feafcd844920267b65402c83b7922464973ec223c7/merged...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/f595375d8fa5630fb66c832a0c6dc7df44c862c27aece75fdb79c68a850025f6/merged...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /run/ipcns/01bf971f-641c-4c4c-8b63-110b0780e79c...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/7b9c6b3d00b3819acdd248deb5ed2bd6f24e93461f0555a15396482c5c86cfe9/merged...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/20a20364b8c3e52de551f4398717962d6c821f770b10b2d6b423c857c94eb3c0/merged...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/23fd74edf1666b7f623c04fd2c507e505570f30a4df330c5f5d71886ee37fc08/merged...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/b2429a3f5b64deede9db8fad30493d0dafb368702a63d66509ff1ff2870e46aa/merged...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /run/utsns/9f568b48-2486-4809-94e3-236af56c4fde...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /run/ipcns/a92af5d6-48f2-4cbc-ab67-5e7aee609bd3...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /run/containers/storage/overlay-containers/7ef0a0e8143714f998828dd150fede6ba44e51b9926781a8714cc5fb0746afc8/userdata/shm...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/kubelet/pods/75f4efab-251e-4aa5-97d6-4a2a27025ae1/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /run/ipcns/f36753b3-0496-4a07-9706-b1775a079ccf...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /run/containers/storage/overlay-containers/01ac120e6f0fdd3040e8bdaa8e582520e75a16d62910ceec0a560196072d627a/userdata/shm...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/cea6dd1ecfb23dea48f49ae3dd4c4e250c27e96b23e56a98595b4fbcd96f0b3c/merged...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/kubelet/pods/a704838c-aeb5-4709-b91c-2460423203a4/volumes/kubernetes.io~projected/kube-api-access-nfmxf...
Feb 23 16:32:22 ip-10-0-136-68 kernel: device 0c751590d84e3dc left promiscuous mode
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/kubelet/pods/5acce570-9f3b-4dab-9fed-169a4c110f8c/volumes/kubernetes.io~projected/kube-api-access-7nhww...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /run/containers/storage/overlay-containers/ff0a102645f986a9a470ba4bb4d46e818f68b24e5af270b266a49a64d8462689/userdata/shm...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /run/netns/f43e09c9-4659-423a-8351-05c8907bbf9e...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/kubelet/pods/07267a40-e316-4a88-91a5-11bc06672f23/volumes/kubernetes.io~projected/kube-api-access-796v8...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/f512c993f2ca390627884fc724e385f2d543c13575feb9b66ffac2b8c8a86251/merged...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/5629c7be92597f7c15dc074dd999aaf1a403af48a317a9bd01511a952b00ce3a/merged...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /run/netns/dfc40c07-9fb1-4de9-81fb-90a7dd4c4c33...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/b391296ed20fae68e693dee18e0bc5cb4fb310384ef52d1b84452ba589739264/merged...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/kubelet/pods/c072a683-1031-40cb-a1bc-1dac71bca46b/volumes/kubernetes.io~projected/kube-api-access-w2zwz...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /run/utsns/354c29e9-705c-4f87-93ce-2b33c1ed2903...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/ab12c9622b2fba7190f39d7e8e2ae9e65f89f9cd77d3927c14c73c92d8e98666/merged...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /run/utsns/f36753b3-0496-4a07-9706-b1775a079ccf...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /run/utsns/47731151-d6c2-4983-ad0a-4b809b7855d3...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/f95af646667e1037739d96573aaa20c61f1141f9784b3fd3e6a6b2d5ea68cc5c/merged...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting CoreOS Dynamic Mount for /boot...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/kubelet/pods/ecd261a9-4d88-4e3d-aa47-803a685b6569/volumes/kubernetes.io~projected/kube-api-access-jqpfc...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /run/netns/01bf971f-641c-4c4c-8b63-110b0780e79c...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/568df1ebed3c04be71e1befd88dfbffef971e4ebb74a8e731bee0b249b95827a/merged...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/kubelet/pods/07267a40-e316-4a88-91a5-11bc06672f23/volume-subpaths/etc/tuned/2...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /run/containers/storage/overlay-containers/c4770f7d080720c7718c7bc24396b3024d3f2c4a814b8d4e347d5deb8d6959b6/userdata/shm...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/544f7e4f4a14b546895e6092bc24f36ac846b68f9b5431aefd8a515e23ecd225/merged...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /run/ipcns/bd432261-d919-463e-9ad8-453be2170666...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/kubelet/pods/b97e7fe5-fe52-4769-bb52-fc233e05c05e/volumes/kubernetes.io~secret/proxy-tls...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/kubelet/pods/c072a683-1031-40cb-a1bc-1dac71bca46b/volumes/kubernetes.io~secret/metrics-tls...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/69f64f4253dff47ba9de3dcd1c2aea3269c60d279d12aefdf50fb260cb8c552f/merged...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /run/utsns/dfc40c07-9fb1-4de9-81fb-90a7dd4c4c33...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/a23aeb1ce5c6c8d147796bd254d43813e242520cc87824e11e436c180aac15e4/merged...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/a25a2b70542ad5369992005ff9d6eb8feb0484c470d84115d29291052bea4056/merged...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /run/ipcns/9f568b48-2486-4809-94e3-236af56c4fde...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/b2a22bdbbbc90eea0272c2e1c2cf6e0fa74534dd90996c110ae5799f1aed728a/merged...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/5dbbfbda12f17fc2d42b003060269e1dd8760c9b5d9acdad39598a0a44b7bf7b/merged...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /run/ipcns/dfc40c07-9fb1-4de9-81fb-90a7dd4c4c33...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /run/utsns/fe9ac55c-60a6-4c99-8e53-9a8d9c2dc37f...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /run/ipcns/a17440cb-5d23-467a-b4af-09ce6ea96f63...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /run/containers/storage/overlay-containers/324987fc5946df3d7849b3d1f0580a276e1156c9514ce8a7c0f7f829492b3e32/userdata/shm...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /run/netns/9f568b48-2486-4809-94e3-236af56c4fde...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/kubelet/pods/ffd2cee3-1bae-4941-8015-2b3ade383d85/volumes/kubernetes.io~projected/kube-api-access-v4glw...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /run/containers/storage/overlay-containers/02a316932c00fb6bdfefc407d2c59b98b8c247a18e3653917b91a6127f62d7df/userdata/shm...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/kubelet/pods/75f4efab-251e-4aa5-97d6-4a2a27025ae1/volumes/kubernetes.io~projected/kube-api-access-vdk85...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/f22d712b81d8e6e0eb06eb91d11f5533189fdf2914faf7e9327a5e72babfada6/merged...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/26f9dc1311c3093439b273e7b58e89ee4c668124075662220b2589676b267f8a/merged...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/ce52c6d5ab7bed85dfe418f74795d0c24238cbe314313161f91a5ca58ca9d33c/merged...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /etc...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/249ca568c059c3c31a6547a67cd9545477b86ba84fdd963e017cec47ecd718b8/merged...
Feb 23 16:32:22 ip-10-0-136-68 umount[79068]: umount: /etc: target is busy.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/kubelet/pods/9cd26ba5-46e4-40b5-81e6-74079153d58d/volumes/kubernetes.io~secret/metrics-certs...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /run/netns/fe9ac55c-60a6-4c99-8e53-9a8d9c2dc37f...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/7b49c366c292abe2c7177314a0da44de273f2a0120e8e65dc089460e5631e142/merged...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /run/utsns/bd432261-d919-463e-9ad8-453be2170666...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /run/containers/storage/overlay-containers/0c751590d84e3dc481ea416fe23ca37243e592f82e01975b9e82ac6743f91e9c/userdata/shm...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /run/containers/storage/overlay-containers/f879576786b088968433f4f9b9154c2d899bf400bae1da485c2314f3fec845f3/userdata/shm...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/kubelet/pods/9cd26ba5-46e4-40b5-81e6-74079153d58d/volumes/kubernetes.io~projected/kube-api-access-2jwlz...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /run/netns/bd432261-d919-463e-9ad8-453be2170666...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/kubelet/pods/75f4efab-251e-4aa5-97d6-4a2a27025ae1/volumes/kubernetes.io~secret/node-exporter-tls...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/kubelet/pods/b97e7fe5-fe52-4769-bb52-fc233e05c05e/volumes/kubernetes.io~secret/cookie-secret...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/kubelet/pods/507b846f-eb8a-4ca3-9d5f-e4d9f18eca32/volumes/kubernetes.io~projected/kube-api-access-74mgq...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /run/netns/354c29e9-705c-4f87-93ce-2b33c1ed2903...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /run/ipcns/66097094-74f3-4cd1-b8ec-0513bfaa3c62...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/kubelet/pods/07267a40-e316-4a88-91a5-11bc06672f23/volume-subpaths/etc/tuned/5...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/27a8ac717890439a47abda2f3aa1147a8d9d6c0fb418f6e9649ef21bc6318be4/merged...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/e73587f66d8eed422646b2c62549d9be5763afe225951d91369ae12a6b303e03/merged...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /run/ipcns/464ecae5-d083-4bf5-84a3-af8c8873c68a...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/21fbb559e1292c7ac19cbffb6fb1623008f0176fdd3f6f07ef4d80ed3d838ef2/merged...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /run/ipcns/f43e09c9-4659-423a-8351-05c8907bbf9e...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/kubelet/pods/07267a40-e316-4a88-91a5-11bc06672f23/volume-subpaths/etc/tuned/1...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/kubelet/pods/2c47bc3e-0247-4d47-80e3-c168262e7976/volumes/kubernetes.io~projected/kube-api-access-hr2sj...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /run/netns/464ecae5-d083-4bf5-84a3-af8c8873c68a...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/d2d950a0f8b9ffde71d1e4cbda427ef1fed6fb468f7077f20ce123e642198c05/merged...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /run/ipcns/fe9ac55c-60a6-4c99-8e53-9a8d9c2dc37f...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/a3c2cdc1af54823b044da7b3ab7a29fd1485f64080f9468f0d77bd995100b22e/merged...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/87ba02363226f8f3b0bddd670fae380b9ce71e79d85572cf26d126bb9ebb2072/merged...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /run/utsns/f43e09c9-4659-423a-8351-05c8907bbf9e...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/1bc24f02835b1e93a13159fa95d185182e23683e12a18a5e6851c7822ce97bcb/merged...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/452e7667d68427217dcce488a05adc04ca1d27e12ab1d54d7fccf4072c0b05b0/merged...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/b50327aa9703ecc1c3d4814922ae98a68046e74b46d1b3978aa04ac9f0271e57/merged...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /run/utsns/66097094-74f3-4cd1-b8ec-0513bfaa3c62...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /run/utsns/a17440cb-5d23-467a-b4af-09ce6ea96f63...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/68a377c9d9a8dfbc931b66258cde02191ae1bc3efa5f31d8bb11b155341841c8/merged...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/2bb2cb8a00dccaff71de868c0f190080f99a4b0115232714827c8a70c95ddda6/merged...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/693af76d844dbc3dd375d090be3aa7ff7b7e87282c58e3ab90848ef3c83e84de/merged...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/kubelet/pods/07267a40-e316-4a88-91a5-11bc06672f23/volume-subpaths/etc/tuned/4...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/7dcfd4f833fc8daa144e936ac667c1de6ff8e2f8d65c22cc9570d5ff2e426158/merged...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /run/containers/storage/overlay-containers/47661104fee69cd1b9061426289cf385f5b6d7911621b551126dbbdb3ae0f1bb/userdata/shm...
Feb 23 16:32:22 ip-10-0-136-68 kernel: device ff0a102645f986a left promiscuous mode
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /run/netns/66097094-74f3-4cd1-b8ec-0513bfaa3c62...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/kubelet/pods/07267a40-e316-4a88-91a5-11bc06672f23/volume-subpaths/etc/tuned/3...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/d9ea34e75000a6ffebd196e553aef957d988ff9371f1654e56f49e17ece421ce/merged...
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-409b8d00\x2d553f\x2d43cb\x2d8805\x2d64a5931be933-volumes-kubernetes.io\x7esecret-ovn\x2dnode\x2dmetrics\x2dcert.mount: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/kubelet/pods/409b8d00-553f-43cb-8805-64a5931be933/volumes/kubernetes.io~secret/ovn-node-metrics-cert.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-409b8d00\x2d553f\x2d43cb\x2d8805\x2d64a5931be933-volumes-kubernetes.io\x7esecret-ovn\x2dnode\x2dmetrics\x2dcert.mount: Consumed 3ms CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-6d75c369\x2d887c\x2d42d2\x2d94c1\x2d40cd36f882c3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxhxvk.mount: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/kubelet/pods/6d75c369-887c-42d2-94c1-40cd36f882c3/volumes/kubernetes.io~projected/kube-api-access-xhxvk.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-6d75c369\x2d887c\x2d42d2\x2d94c1\x2d40cd36f882c3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxhxvk.mount: Consumed 3ms CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-35539c92883319ba9303dff4d62e621858c65895d17a031283ea570d6c0ffd51-userdata-shm.mount: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounted /run/containers/storage/overlay-containers/35539c92883319ba9303dff4d62e621858c65895d17a031283ea570d6c0ffd51/userdata/shm.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-35539c92883319ba9303dff4d62e621858c65895d17a031283ea570d6c0ffd51-userdata-shm.mount: Consumed 2ms CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: run-ipcns-354c29e9\x2d705c\x2d4f87\x2d93ce\x2d2b33c1ed2903.mount: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounted /run/ipcns/354c29e9-705c-4f87-93ce-2b33c1ed2903.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: run-ipcns-354c29e9\x2d705c\x2d4f87\x2d93ce\x2d2b33c1ed2903.mount: Consumed 0 CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-9e149a125370c0ad754b2613788b83fa6545bf7bf57711f7193a1b3e375af4dc-merged.mount: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/9e149a125370c0ad754b2613788b83fa6545bf7bf57711f7193a1b3e375af4dc/merged.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-9e149a125370c0ad754b2613788b83fa6545bf7bf57711f7193a1b3e375af4dc-merged.mount: Consumed 3ms CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: run-netns-47731151\x2dd6c2\x2d4983\x2dad0a\x2d4b809b7855d3.mount: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounted /run/netns/47731151-d6c2-4983-ad0a-4b809b7855d3.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: run-netns-47731151\x2dd6c2\x2d4983\x2dad0a\x2d4b809b7855d3.mount: Consumed 3ms CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: run-netns-f36753b3\x2d0496\x2d4a07\x2d9706\x2db1775a079ccf.mount: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounted /run/netns/f36753b3-0496-4a07-9706-b1775a079ccf.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: run-netns-f36753b3\x2d0496\x2d4a07\x2d9706\x2db1775a079ccf.mount: Consumed 2ms CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: run-utsns-01bf971f\x2d641c\x2d4c4c\x2d8b63\x2d110b0780e79c.mount: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounted /run/utsns/01bf971f-641c-4c4c-8b63-110b0780e79c.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: run-utsns-01bf971f\x2d641c\x2d4c4c\x2d8b63\x2d110b0780e79c.mount: Consumed 2ms CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: run-netns-a17440cb\x2d5d23\x2d467a\x2db4af\x2d09ce6ea96f63.mount: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounted /run/netns/a17440cb-5d23-467a-b4af-09ce6ea96f63.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: run-netns-a17440cb\x2d5d23\x2d467a\x2db4af\x2d09ce6ea96f63.mount: Consumed 0 CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-b97e7fe5\x2dfe52\x2d4769\x2dbb52\x2dfc233e05c05e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dm29j2.mount: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/kubelet/pods/b97e7fe5-fe52-4769-bb52-fc233e05c05e/volumes/kubernetes.io~projected/kube-api-access-m29j2.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-b97e7fe5\x2dfe52\x2d4769\x2dbb52\x2dfc233e05c05e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dm29j2.mount: Consumed 2ms CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-3a9bbd2e4a30ee925c19009671172e4b08a443f5a6e78f350c74b69a700a00a2-merged.mount: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/3a9bbd2e4a30ee925c19009671172e4b08a443f5a6e78f350c74b69a700a00a2/merged.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-3a9bbd2e4a30ee925c19009671172e4b08a443f5a6e78f350c74b69a700a00a2-merged.mount: Consumed 2ms CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-7633633f89b599dee7515b689fdc5f26ca3883822a80a39ce0fd2c9e57c57ece-merged.mount: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/7633633f89b599dee7515b689fdc5f26ca3883822a80a39ce0fd2c9e57c57ece/merged.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-7633633f89b599dee7515b689fdc5f26ca3883822a80a39ce0fd2c9e57c57ece-merged.mount: Consumed 4ms CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: run-ipcns-47731151\x2dd6c2\x2d4983\x2dad0a\x2d4b809b7855d3.mount: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounted /run/ipcns/47731151-d6c2-4983-ad0a-4b809b7855d3.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: run-ipcns-47731151\x2dd6c2\x2d4983\x2dad0a\x2d4b809b7855d3.mount: Consumed 2ms CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-7c086642764b421fbd08c314a9efbfbb080e5b836433080e02d319743ea71043-merged.mount: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/7c086642764b421fbd08c314a9efbfbb080e5b836433080e02d319743ea71043/merged.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-7c086642764b421fbd08c314a9efbfbb080e5b836433080e02d319743ea71043-merged.mount: Consumed 2ms CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-4d1130cef66f4edf487bcc90f69a7b09bfcce530c8486128a34bc727b397bf01-merged.mount: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/4d1130cef66f4edf487bcc90f69a7b09bfcce530c8486128a34bc727b397bf01/merged.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-4d1130cef66f4edf487bcc90f69a7b09bfcce530c8486128a34bc727b397bf01-merged.mount: Consumed 1ms CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-19bce822018f6e51c5cbfd0e2e3748745954602abb6a774120b1f9e6f36281ae-userdata-shm.mount: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounted /run/containers/storage/overlay-containers/19bce822018f6e51c5cbfd0e2e3748745954602abb6a774120b1f9e6f36281ae/userdata/shm.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-19bce822018f6e51c5cbfd0e2e3748745954602abb6a774120b1f9e6f36281ae-userdata-shm.mount: Consumed 2ms CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-409b8d00\x2d553f\x2d43cb\x2d8805\x2d64a5931be933-volumes-kubernetes.io\x7esecret-ovn\x2dcert.mount: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/kubelet/pods/409b8d00-553f-43cb-8805-64a5931be933/volumes/kubernetes.io~secret/ovn-cert.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-409b8d00\x2d553f\x2d43cb\x2d8805\x2d64a5931be933-volumes-kubernetes.io\x7esecret-ovn\x2dcert.mount: Consumed 2ms CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: run-netns-a92af5d6\x2d48f2\x2d4cbc\x2dab67\x2d5e7aee609bd3.mount: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounted /run/netns/a92af5d6-48f2-4cbc-ab67-5e7aee609bd3.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: run-netns-a92af5d6\x2d48f2\x2d4cbc\x2dab67\x2d5e7aee609bd3.mount: Consumed 2ms CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: tmp.mount: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounted Temporary Directory (/tmp).
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: tmp.mount: Consumed 4ms CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: run-utsns-a92af5d6\x2d48f2\x2d4cbc\x2dab67\x2d5e7aee609bd3.mount: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounted /run/utsns/a92af5d6-48f2-4cbc-ab67-5e7aee609bd3.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: run-utsns-a92af5d6\x2d48f2\x2d4cbc\x2dab67\x2d5e7aee609bd3.mount: Consumed 2ms CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-2d4ebea480481fe0194b5656ac47e06df6cd5d12530000111fd05928c843d523-userdata-shm.mount: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounted /run/containers/storage/overlay-containers/2d4ebea480481fe0194b5656ac47e06df6cd5d12530000111fd05928c843d523/userdata/shm.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-2d4ebea480481fe0194b5656ac47e06df6cd5d12530000111fd05928c843d523-userdata-shm.mount: Consumed 2ms CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-409b8d00\x2d553f\x2d43cb\x2d8805\x2d64a5931be933-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk9xlt.mount: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/kubelet/pods/409b8d00-553f-43cb-8805-64a5931be933/volumes/kubernetes.io~projected/kube-api-access-k9xlt.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-409b8d00\x2d553f\x2d43cb\x2d8805\x2d64a5931be933-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk9xlt.mount: Consumed 3ms CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: run-utsns-464ecae5\x2dd083\x2d4bf5\x2d84a3\x2daf8c8873c68a.mount: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounted /run/utsns/464ecae5-d083-4bf5-84a3-af8c8873c68a.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: run-utsns-464ecae5\x2dd083\x2d4bf5\x2d84a3\x2daf8c8873c68a.mount: Consumed 2ms CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-6e1ea68f079a62aa982468feafcd844920267b65402c83b7922464973ec223c7-merged.mount: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/6e1ea68f079a62aa982468feafcd844920267b65402c83b7922464973ec223c7/merged.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-6e1ea68f079a62aa982468feafcd844920267b65402c83b7922464973ec223c7-merged.mount: Consumed 4ms CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-f595375d8fa5630fb66c832a0c6dc7df44c862c27aece75fdb79c68a850025f6-merged.mount: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/f595375d8fa5630fb66c832a0c6dc7df44c862c27aece75fdb79c68a850025f6/merged.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-f595375d8fa5630fb66c832a0c6dc7df44c862c27aece75fdb79c68a850025f6-merged.mount: Consumed 3ms CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: run-ipcns-01bf971f\x2d641c\x2d4c4c\x2d8b63\x2d110b0780e79c.mount: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounted /run/ipcns/01bf971f-641c-4c4c-8b63-110b0780e79c.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: run-ipcns-01bf971f\x2d641c\x2d4c4c\x2d8b63\x2d110b0780e79c.mount: Consumed 325us CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-7b9c6b3d00b3819acdd248deb5ed2bd6f24e93461f0555a15396482c5c86cfe9-merged.mount: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/7b9c6b3d00b3819acdd248deb5ed2bd6f24e93461f0555a15396482c5c86cfe9/merged.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-7b9c6b3d00b3819acdd248deb5ed2bd6f24e93461f0555a15396482c5c86cfe9-merged.mount: Consumed 4ms CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-20a20364b8c3e52de551f4398717962d6c821f770b10b2d6b423c857c94eb3c0-merged.mount: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/20a20364b8c3e52de551f4398717962d6c821f770b10b2d6b423c857c94eb3c0/merged.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-20a20364b8c3e52de551f4398717962d6c821f770b10b2d6b423c857c94eb3c0-merged.mount: Consumed 3ms CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-23fd74edf1666b7f623c04fd2c507e505570f30a4df330c5f5d71886ee37fc08-merged.mount: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/23fd74edf1666b7f623c04fd2c507e505570f30a4df330c5f5d71886ee37fc08/merged.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-23fd74edf1666b7f623c04fd2c507e505570f30a4df330c5f5d71886ee37fc08-merged.mount: Consumed 3ms CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-b2429a3f5b64deede9db8fad30493d0dafb368702a63d66509ff1ff2870e46aa-merged.mount: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/b2429a3f5b64deede9db8fad30493d0dafb368702a63d66509ff1ff2870e46aa/merged.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-b2429a3f5b64deede9db8fad30493d0dafb368702a63d66509ff1ff2870e46aa-merged.mount: Consumed 2ms CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: run-utsns-9f568b48\x2d2486\x2d4809\x2d94e3\x2d236af56c4fde.mount: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounted /run/utsns/9f568b48-2486-4809-94e3-236af56c4fde.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: run-utsns-9f568b48\x2d2486\x2d4809\x2d94e3\x2d236af56c4fde.mount: Consumed 2ms CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: run-ipcns-a92af5d6\x2d48f2\x2d4cbc\x2dab67\x2d5e7aee609bd3.mount: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounted /run/ipcns/a92af5d6-48f2-4cbc-ab67-5e7aee609bd3.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: run-ipcns-a92af5d6\x2d48f2\x2d4cbc\x2dab67\x2d5e7aee609bd3.mount: Consumed 3ms CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-7ef0a0e8143714f998828dd150fede6ba44e51b9926781a8714cc5fb0746afc8-userdata-shm.mount: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounted /run/containers/storage/overlay-containers/7ef0a0e8143714f998828dd150fede6ba44e51b9926781a8714cc5fb0746afc8/userdata/shm.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-7ef0a0e8143714f998828dd150fede6ba44e51b9926781a8714cc5fb0746afc8-userdata-shm.mount: Consumed 2ms CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-75f4efab\x2d251e\x2d4aa5\x2d97d6\x2d4a2a27025ae1-volumes-kubernetes.io\x7esecret-node\x2dexporter\x2dkube\x2drbac\x2dproxy\x2dconfig.mount: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/kubelet/pods/75f4efab-251e-4aa5-97d6-4a2a27025ae1/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-75f4efab\x2d251e\x2d4aa5\x2d97d6\x2d4a2a27025ae1-volumes-kubernetes.io\x7esecret-node\x2dexporter\x2dkube\x2drbac\x2dproxy\x2dconfig.mount: Consumed 2ms CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: run-ipcns-f36753b3\x2d0496\x2d4a07\x2d9706\x2db1775a079ccf.mount: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounted /run/ipcns/f36753b3-0496-4a07-9706-b1775a079ccf.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: run-ipcns-f36753b3\x2d0496\x2d4a07\x2d9706\x2db1775a079ccf.mount: Consumed 2ms CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-01ac120e6f0fdd3040e8bdaa8e582520e75a16d62910ceec0a560196072d627a-userdata-shm.mount: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounted /run/containers/storage/overlay-containers/01ac120e6f0fdd3040e8bdaa8e582520e75a16d62910ceec0a560196072d627a/userdata/shm.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-01ac120e6f0fdd3040e8bdaa8e582520e75a16d62910ceec0a560196072d627a-userdata-shm.mount: Consumed 2ms CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-cea6dd1ecfb23dea48f49ae3dd4c4e250c27e96b23e56a98595b4fbcd96f0b3c-merged.mount: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/cea6dd1ecfb23dea48f49ae3dd4c4e250c27e96b23e56a98595b4fbcd96f0b3c/merged.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-cea6dd1ecfb23dea48f49ae3dd4c4e250c27e96b23e56a98595b4fbcd96f0b3c-merged.mount: Consumed 4ms CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-a704838c\x2daeb5\x2d4709\x2db91c\x2d2460423203a4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnfmxf.mount: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/kubelet/pods/a704838c-aeb5-4709-b91c-2460423203a4/volumes/kubernetes.io~projected/kube-api-access-nfmxf.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-a704838c\x2daeb5\x2d4709\x2db91c\x2d2460423203a4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnfmxf.mount: Consumed 2ms CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-5acce570\x2d9f3b\x2d4dab\x2d9fed\x2d169a4c110f8c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7nhww.mount: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/kubelet/pods/5acce570-9f3b-4dab-9fed-169a4c110f8c/volumes/kubernetes.io~projected/kube-api-access-7nhww.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-5acce570\x2d9f3b\x2d4dab\x2d9fed\x2d169a4c110f8c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7nhww.mount: Consumed 2ms CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-ff0a102645f986a9a470ba4bb4d46e818f68b24e5af270b266a49a64d8462689-userdata-shm.mount: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounted /run/containers/storage/overlay-containers/ff0a102645f986a9a470ba4bb4d46e818f68b24e5af270b266a49a64d8462689/userdata/shm.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-ff0a102645f986a9a470ba4bb4d46e818f68b24e5af270b266a49a64d8462689-userdata-shm.mount: Consumed 2ms CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: run-netns-f43e09c9\x2d4659\x2d423a\x2d8351\x2d05c8907bbf9e.mount: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounted /run/netns/f43e09c9-4659-423a-8351-05c8907bbf9e.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: run-netns-f43e09c9\x2d4659\x2d423a\x2d8351\x2d05c8907bbf9e.mount: Consumed 2ms CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-07267a40\x2de316\x2d4a88\x2d91a5\x2d11bc06672f23-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d796v8.mount: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/kubelet/pods/07267a40-e316-4a88-91a5-11bc06672f23/volumes/kubernetes.io~projected/kube-api-access-796v8.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-07267a40\x2de316\x2d4a88\x2d91a5\x2d11bc06672f23-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d796v8.mount: Consumed 2ms CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-f512c993f2ca390627884fc724e385f2d543c13575feb9b66ffac2b8c8a86251-merged.mount: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/f512c993f2ca390627884fc724e385f2d543c13575feb9b66ffac2b8c8a86251/merged.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-f512c993f2ca390627884fc724e385f2d543c13575feb9b66ffac2b8c8a86251-merged.mount: Consumed 2ms CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-5629c7be92597f7c15dc074dd999aaf1a403af48a317a9bd01511a952b00ce3a-merged.mount: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/5629c7be92597f7c15dc074dd999aaf1a403af48a317a9bd01511a952b00ce3a/merged.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-5629c7be92597f7c15dc074dd999aaf1a403af48a317a9bd01511a952b00ce3a-merged.mount: Consumed 2ms CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: run-netns-dfc40c07\x2d9fb1\x2d4de9\x2d81fb\x2d90a7dd4c4c33.mount: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounted /run/netns/dfc40c07-9fb1-4de9-81fb-90a7dd4c4c33.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: run-netns-dfc40c07\x2d9fb1\x2d4de9\x2d81fb\x2d90a7dd4c4c33.mount: Consumed 2ms CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-b391296ed20fae68e693dee18e0bc5cb4fb310384ef52d1b84452ba589739264-merged.mount: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/b391296ed20fae68e693dee18e0bc5cb4fb310384ef52d1b84452ba589739264/merged.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-b391296ed20fae68e693dee18e0bc5cb4fb310384ef52d1b84452ba589739264-merged.mount: Consumed 4ms CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-c072a683\x2d1031\x2d40cb\x2da1bc\x2d1dac71bca46b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw2zwz.mount: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/kubelet/pods/c072a683-1031-40cb-a1bc-1dac71bca46b/volumes/kubernetes.io~projected/kube-api-access-w2zwz.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-c072a683\x2d1031\x2d40cb\x2da1bc\x2d1dac71bca46b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw2zwz.mount: Consumed 2ms CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: run-utsns-354c29e9\x2d705c\x2d4f87\x2d93ce\x2d2b33c1ed2903.mount: Succeeded.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: Unmounted /run/utsns/354c29e9-705c-4f87-93ce-2b33c1ed2903.
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: run-utsns-354c29e9\x2d705c\x2d4f87\x2d93ce\x2d2b33c1ed2903.mount: Consumed 2ms CPU time
Feb 23 16:32:22 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-ab12c9622b2fba7190f39d7e8e2ae9e65f89f9cd77d3927c14c73c92d8e98666-merged.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/ab12c9622b2fba7190f39d7e8e2ae9e65f89f9cd77d3927c14c73c92d8e98666/merged.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-ab12c9622b2fba7190f39d7e8e2ae9e65f89f9cd77d3927c14c73c92d8e98666-merged.mount: Consumed 3ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-utsns-f36753b3\x2d0496\x2d4a07\x2d9706\x2db1775a079ccf.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /run/utsns/f36753b3-0496-4a07-9706-b1775a079ccf.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-utsns-f36753b3\x2d0496\x2d4a07\x2d9706\x2db1775a079ccf.mount: Consumed 2ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-utsns-47731151\x2dd6c2\x2d4983\x2dad0a\x2d4b809b7855d3.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /run/utsns/47731151-d6c2-4983-ad0a-4b809b7855d3.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-utsns-47731151\x2dd6c2\x2d4983\x2dad0a\x2d4b809b7855d3.mount: Consumed 2ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-f95af646667e1037739d96573aaa20c61f1141f9784b3fd3e6a6b2d5ea68cc5c-merged.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/f95af646667e1037739d96573aaa20c61f1141f9784b3fd3e6a6b2d5ea68cc5c/merged.
Feb 23 16:32:23 ip-10-0-136-68 kernel: device 35539c92883319b left promiscuous mode
Feb 23 16:32:23 ip-10-0-136-68 kernel: device f879576786b0889 left promiscuous mode
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-f95af646667e1037739d96573aaa20c61f1141f9784b3fd3e6a6b2d5ea68cc5c-merged.mount: Consumed 2ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: boot.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted CoreOS Dynamic Mount for /boot.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: boot.mount: Consumed 32ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-ecd261a9\x2d4d88\x2d4e3d\x2daa47\x2d803a685b6569-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djqpfc.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/kubelet/pods/ecd261a9-4d88-4e3d-aa47-803a685b6569/volumes/kubernetes.io~projected/kube-api-access-jqpfc.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-ecd261a9\x2d4d88\x2d4e3d\x2daa47\x2d803a685b6569-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djqpfc.mount: Consumed 0 CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-netns-01bf971f\x2d641c\x2d4c4c\x2d8b63\x2d110b0780e79c.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /run/netns/01bf971f-641c-4c4c-8b63-110b0780e79c.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-netns-01bf971f\x2d641c\x2d4c4c\x2d8b63\x2d110b0780e79c.mount: Consumed 2ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-568df1ebed3c04be71e1befd88dfbffef971e4ebb74a8e731bee0b249b95827a-merged.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/568df1ebed3c04be71e1befd88dfbffef971e4ebb74a8e731bee0b249b95827a/merged.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-568df1ebed3c04be71e1befd88dfbffef971e4ebb74a8e731bee0b249b95827a-merged.mount: Consumed 2ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-07267a40\x2de316\x2d4a88\x2d91a5\x2d11bc06672f23-volume\x2dsubpaths-etc-tuned-2.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/kubelet/pods/07267a40-e316-4a88-91a5-11bc06672f23/volume-subpaths/etc/tuned/2.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-07267a40\x2de316\x2d4a88\x2d91a5\x2d11bc06672f23-volume\x2dsubpaths-etc-tuned-2.mount: Consumed 2ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-c4770f7d080720c7718c7bc24396b3024d3f2c4a814b8d4e347d5deb8d6959b6-userdata-shm.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /run/containers/storage/overlay-containers/c4770f7d080720c7718c7bc24396b3024d3f2c4a814b8d4e347d5deb8d6959b6/userdata/shm.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-c4770f7d080720c7718c7bc24396b3024d3f2c4a814b8d4e347d5deb8d6959b6-userdata-shm.mount: Consumed 2ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-544f7e4f4a14b546895e6092bc24f36ac846b68f9b5431aefd8a515e23ecd225-merged.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/544f7e4f4a14b546895e6092bc24f36ac846b68f9b5431aefd8a515e23ecd225/merged.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-544f7e4f4a14b546895e6092bc24f36ac846b68f9b5431aefd8a515e23ecd225-merged.mount: Consumed 2ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-ipcns-bd432261\x2dd919\x2d463e\x2d9ad8\x2d453be2170666.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /run/ipcns/bd432261-d919-463e-9ad8-453be2170666.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-ipcns-bd432261\x2dd919\x2d463e\x2d9ad8\x2d453be2170666.mount: Consumed 2ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-b97e7fe5\x2dfe52\x2d4769\x2dbb52\x2dfc233e05c05e-volumes-kubernetes.io\x7esecret-proxy\x2dtls.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/kubelet/pods/b97e7fe5-fe52-4769-bb52-fc233e05c05e/volumes/kubernetes.io~secret/proxy-tls.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-b97e7fe5\x2dfe52\x2d4769\x2dbb52\x2dfc233e05c05e-volumes-kubernetes.io\x7esecret-proxy\x2dtls.mount: Consumed 2ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-c072a683\x2d1031\x2d40cb\x2da1bc\x2d1dac71bca46b-volumes-kubernetes.io\x7esecret-metrics\x2dtls.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/kubelet/pods/c072a683-1031-40cb-a1bc-1dac71bca46b/volumes/kubernetes.io~secret/metrics-tls.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-c072a683\x2d1031\x2d40cb\x2da1bc\x2d1dac71bca46b-volumes-kubernetes.io\x7esecret-metrics\x2dtls.mount: Consumed 2ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-69f64f4253dff47ba9de3dcd1c2aea3269c60d279d12aefdf50fb260cb8c552f-merged.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/69f64f4253dff47ba9de3dcd1c2aea3269c60d279d12aefdf50fb260cb8c552f/merged.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-69f64f4253dff47ba9de3dcd1c2aea3269c60d279d12aefdf50fb260cb8c552f-merged.mount: Consumed 2ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-utsns-dfc40c07\x2d9fb1\x2d4de9\x2d81fb\x2d90a7dd4c4c33.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /run/utsns/dfc40c07-9fb1-4de9-81fb-90a7dd4c4c33.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-utsns-dfc40c07\x2d9fb1\x2d4de9\x2d81fb\x2d90a7dd4c4c33.mount: Consumed 0 CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-a23aeb1ce5c6c8d147796bd254d43813e242520cc87824e11e436c180aac15e4-merged.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/a23aeb1ce5c6c8d147796bd254d43813e242520cc87824e11e436c180aac15e4/merged.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-a23aeb1ce5c6c8d147796bd254d43813e242520cc87824e11e436c180aac15e4-merged.mount: Consumed 4ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-a25a2b70542ad5369992005ff9d6eb8feb0484c470d84115d29291052bea4056-merged.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/a25a2b70542ad5369992005ff9d6eb8feb0484c470d84115d29291052bea4056/merged.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-a25a2b70542ad5369992005ff9d6eb8feb0484c470d84115d29291052bea4056-merged.mount: Consumed 4ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-ipcns-9f568b48\x2d2486\x2d4809\x2d94e3\x2d236af56c4fde.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /run/ipcns/9f568b48-2486-4809-94e3-236af56c4fde.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-ipcns-9f568b48\x2d2486\x2d4809\x2d94e3\x2d236af56c4fde.mount: Consumed 2ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-b2a22bdbbbc90eea0272c2e1c2cf6e0fa74534dd90996c110ae5799f1aed728a-merged.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/b2a22bdbbbc90eea0272c2e1c2cf6e0fa74534dd90996c110ae5799f1aed728a/merged.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-b2a22bdbbbc90eea0272c2e1c2cf6e0fa74534dd90996c110ae5799f1aed728a-merged.mount: Consumed 3ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-5dbbfbda12f17fc2d42b003060269e1dd8760c9b5d9acdad39598a0a44b7bf7b-merged.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/5dbbfbda12f17fc2d42b003060269e1dd8760c9b5d9acdad39598a0a44b7bf7b/merged.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-5dbbfbda12f17fc2d42b003060269e1dd8760c9b5d9acdad39598a0a44b7bf7b-merged.mount: Consumed 3ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-ipcns-dfc40c07\x2d9fb1\x2d4de9\x2d81fb\x2d90a7dd4c4c33.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /run/ipcns/dfc40c07-9fb1-4de9-81fb-90a7dd4c4c33.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-ipcns-dfc40c07\x2d9fb1\x2d4de9\x2d81fb\x2d90a7dd4c4c33.mount: Consumed 2ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-utsns-fe9ac55c\x2d60a6\x2d4c99\x2d8e53\x2d9a8d9c2dc37f.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /run/utsns/fe9ac55c-60a6-4c99-8e53-9a8d9c2dc37f.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-utsns-fe9ac55c\x2d60a6\x2d4c99\x2d8e53\x2d9a8d9c2dc37f.mount: Consumed 2ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-ipcns-a17440cb\x2d5d23\x2d467a\x2db4af\x2d09ce6ea96f63.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /run/ipcns/a17440cb-5d23-467a-b4af-09ce6ea96f63.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-ipcns-a17440cb\x2d5d23\x2d467a\x2db4af\x2d09ce6ea96f63.mount: Consumed 2ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-324987fc5946df3d7849b3d1f0580a276e1156c9514ce8a7c0f7f829492b3e32-userdata-shm.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /run/containers/storage/overlay-containers/324987fc5946df3d7849b3d1f0580a276e1156c9514ce8a7c0f7f829492b3e32/userdata/shm.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-324987fc5946df3d7849b3d1f0580a276e1156c9514ce8a7c0f7f829492b3e32-userdata-shm.mount: Consumed 2ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-netns-9f568b48\x2d2486\x2d4809\x2d94e3\x2d236af56c4fde.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /run/netns/9f568b48-2486-4809-94e3-236af56c4fde.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-netns-9f568b48\x2d2486\x2d4809\x2d94e3\x2d236af56c4fde.mount: Consumed 2ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-ffd2cee3\x2d1bae\x2d4941\x2d8015\x2d2b3ade383d85-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv4glw.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/kubelet/pods/ffd2cee3-1bae-4941-8015-2b3ade383d85/volumes/kubernetes.io~projected/kube-api-access-v4glw.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-ffd2cee3\x2d1bae\x2d4941\x2d8015\x2d2b3ade383d85-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv4glw.mount: Consumed 2ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-02a316932c00fb6bdfefc407d2c59b98b8c247a18e3653917b91a6127f62d7df-userdata-shm.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /run/containers/storage/overlay-containers/02a316932c00fb6bdfefc407d2c59b98b8c247a18e3653917b91a6127f62d7df/userdata/shm.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-02a316932c00fb6bdfefc407d2c59b98b8c247a18e3653917b91a6127f62d7df-userdata-shm.mount: Consumed 2ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-75f4efab\x2d251e\x2d4aa5\x2d97d6\x2d4a2a27025ae1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvdk85.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/kubelet/pods/75f4efab-251e-4aa5-97d6-4a2a27025ae1/volumes/kubernetes.io~projected/kube-api-access-vdk85.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-75f4efab\x2d251e\x2d4aa5\x2d97d6\x2d4a2a27025ae1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvdk85.mount: Consumed 2ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-f22d712b81d8e6e0eb06eb91d11f5533189fdf2914faf7e9327a5e72babfada6-merged.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/f22d712b81d8e6e0eb06eb91d11f5533189fdf2914faf7e9327a5e72babfada6/merged.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-f22d712b81d8e6e0eb06eb91d11f5533189fdf2914faf7e9327a5e72babfada6-merged.mount: Consumed 1ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-26f9dc1311c3093439b273e7b58e89ee4c668124075662220b2589676b267f8a-merged.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/26f9dc1311c3093439b273e7b58e89ee4c668124075662220b2589676b267f8a/merged.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-26f9dc1311c3093439b273e7b58e89ee4c668124075662220b2589676b267f8a-merged.mount: Consumed 2ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-ce52c6d5ab7bed85dfe418f74795d0c24238cbe314313161f91a5ca58ca9d33c-merged.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/ce52c6d5ab7bed85dfe418f74795d0c24238cbe314313161f91a5ca58ca9d33c/merged.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-ce52c6d5ab7bed85dfe418f74795d0c24238cbe314313161f91a5ca58ca9d33c-merged.mount: Consumed 2ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: etc.mount: Mount process exited, code=exited status=32
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Failed unmounting /etc.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-249ca568c059c3c31a6547a67cd9545477b86ba84fdd963e017cec47ecd718b8-merged.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/249ca568c059c3c31a6547a67cd9545477b86ba84fdd963e017cec47ecd718b8/merged.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-249ca568c059c3c31a6547a67cd9545477b86ba84fdd963e017cec47ecd718b8-merged.mount: Consumed 2ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-9cd26ba5\x2d46e4\x2d40b5\x2d81e6\x2d74079153d58d-volumes-kubernetes.io\x7esecret-metrics\x2dcerts.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/kubelet/pods/9cd26ba5-46e4-40b5-81e6-74079153d58d/volumes/kubernetes.io~secret/metrics-certs.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-9cd26ba5\x2d46e4\x2d40b5\x2d81e6\x2d74079153d58d-volumes-kubernetes.io\x7esecret-metrics\x2dcerts.mount: Consumed 1ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-netns-fe9ac55c\x2d60a6\x2d4c99\x2d8e53\x2d9a8d9c2dc37f.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /run/netns/fe9ac55c-60a6-4c99-8e53-9a8d9c2dc37f.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-netns-fe9ac55c\x2d60a6\x2d4c99\x2d8e53\x2d9a8d9c2dc37f.mount: Consumed 1ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-7b49c366c292abe2c7177314a0da44de273f2a0120e8e65dc089460e5631e142-merged.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/7b49c366c292abe2c7177314a0da44de273f2a0120e8e65dc089460e5631e142/merged.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-7b49c366c292abe2c7177314a0da44de273f2a0120e8e65dc089460e5631e142-merged.mount: Consumed 2ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-utsns-bd432261\x2dd919\x2d463e\x2d9ad8\x2d453be2170666.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /run/utsns/bd432261-d919-463e-9ad8-453be2170666.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-utsns-bd432261\x2dd919\x2d463e\x2d9ad8\x2d453be2170666.mount: Consumed 1ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-0c751590d84e3dc481ea416fe23ca37243e592f82e01975b9e82ac6743f91e9c-userdata-shm.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /run/containers/storage/overlay-containers/0c751590d84e3dc481ea416fe23ca37243e592f82e01975b9e82ac6743f91e9c/userdata/shm.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-0c751590d84e3dc481ea416fe23ca37243e592f82e01975b9e82ac6743f91e9c-userdata-shm.mount: Consumed 2ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-f879576786b088968433f4f9b9154c2d899bf400bae1da485c2314f3fec845f3-userdata-shm.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /run/containers/storage/overlay-containers/f879576786b088968433f4f9b9154c2d899bf400bae1da485c2314f3fec845f3/userdata/shm.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-f879576786b088968433f4f9b9154c2d899bf400bae1da485c2314f3fec845f3-userdata-shm.mount: Consumed 2ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-9cd26ba5\x2d46e4\x2d40b5\x2d81e6\x2d74079153d58d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2jwlz.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/kubelet/pods/9cd26ba5-46e4-40b5-81e6-74079153d58d/volumes/kubernetes.io~projected/kube-api-access-2jwlz.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-9cd26ba5\x2d46e4\x2d40b5\x2d81e6\x2d74079153d58d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2jwlz.mount: Consumed 1ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-netns-bd432261\x2dd919\x2d463e\x2d9ad8\x2d453be2170666.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /run/netns/bd432261-d919-463e-9ad8-453be2170666.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-netns-bd432261\x2dd919\x2d463e\x2d9ad8\x2d453be2170666.mount: Consumed 1ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-75f4efab\x2d251e\x2d4aa5\x2d97d6\x2d4a2a27025ae1-volumes-kubernetes.io\x7esecret-node\x2dexporter\x2dtls.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/kubelet/pods/75f4efab-251e-4aa5-97d6-4a2a27025ae1/volumes/kubernetes.io~secret/node-exporter-tls.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-75f4efab\x2d251e\x2d4aa5\x2d97d6\x2d4a2a27025ae1-volumes-kubernetes.io\x7esecret-node\x2dexporter\x2dtls.mount: Consumed 1ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-b97e7fe5\x2dfe52\x2d4769\x2dbb52\x2dfc233e05c05e-volumes-kubernetes.io\x7esecret-cookie\x2dsecret.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/kubelet/pods/b97e7fe5-fe52-4769-bb52-fc233e05c05e/volumes/kubernetes.io~secret/cookie-secret.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-b97e7fe5\x2dfe52\x2d4769\x2dbb52\x2dfc233e05c05e-volumes-kubernetes.io\x7esecret-cookie\x2dsecret.mount: Consumed 1ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-507b846f\x2deb8a\x2d4ca3\x2d9d5f\x2de4d9f18eca32-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d74mgq.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/kubelet/pods/507b846f-eb8a-4ca3-9d5f-e4d9f18eca32/volumes/kubernetes.io~projected/kube-api-access-74mgq.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-507b846f\x2deb8a\x2d4ca3\x2d9d5f\x2de4d9f18eca32-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d74mgq.mount: Consumed 2ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-netns-354c29e9\x2d705c\x2d4f87\x2d93ce\x2d2b33c1ed2903.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /run/netns/354c29e9-705c-4f87-93ce-2b33c1ed2903.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-netns-354c29e9\x2d705c\x2d4f87\x2d93ce\x2d2b33c1ed2903.mount: Consumed 2ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-ipcns-66097094\x2d74f3\x2d4cd1\x2db8ec\x2d0513bfaa3c62.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /run/ipcns/66097094-74f3-4cd1-b8ec-0513bfaa3c62.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-ipcns-66097094\x2d74f3\x2d4cd1\x2db8ec\x2d0513bfaa3c62.mount: Consumed 2ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-07267a40\x2de316\x2d4a88\x2d91a5\x2d11bc06672f23-volume\x2dsubpaths-etc-tuned-5.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/kubelet/pods/07267a40-e316-4a88-91a5-11bc06672f23/volume-subpaths/etc/tuned/5.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-07267a40\x2de316\x2d4a88\x2d91a5\x2d11bc06672f23-volume\x2dsubpaths-etc-tuned-5.mount: Consumed 2ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-27a8ac717890439a47abda2f3aa1147a8d9d6c0fb418f6e9649ef21bc6318be4-merged.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/27a8ac717890439a47abda2f3aa1147a8d9d6c0fb418f6e9649ef21bc6318be4/merged.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-27a8ac717890439a47abda2f3aa1147a8d9d6c0fb418f6e9649ef21bc6318be4-merged.mount: Consumed 3ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-e73587f66d8eed422646b2c62549d9be5763afe225951d91369ae12a6b303e03-merged.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/e73587f66d8eed422646b2c62549d9be5763afe225951d91369ae12a6b303e03/merged.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-e73587f66d8eed422646b2c62549d9be5763afe225951d91369ae12a6b303e03-merged.mount: Consumed 4ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-ipcns-464ecae5\x2dd083\x2d4bf5\x2d84a3\x2daf8c8873c68a.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /run/ipcns/464ecae5-d083-4bf5-84a3-af8c8873c68a.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-ipcns-464ecae5\x2dd083\x2d4bf5\x2d84a3\x2daf8c8873c68a.mount: Consumed 2ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-21fbb559e1292c7ac19cbffb6fb1623008f0176fdd3f6f07ef4d80ed3d838ef2-merged.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/21fbb559e1292c7ac19cbffb6fb1623008f0176fdd3f6f07ef4d80ed3d838ef2/merged.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-21fbb559e1292c7ac19cbffb6fb1623008f0176fdd3f6f07ef4d80ed3d838ef2-merged.mount: Consumed 2ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-ipcns-f43e09c9\x2d4659\x2d423a\x2d8351\x2d05c8907bbf9e.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /run/ipcns/f43e09c9-4659-423a-8351-05c8907bbf9e.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-ipcns-f43e09c9\x2d4659\x2d423a\x2d8351\x2d05c8907bbf9e.mount: Consumed 2ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-07267a40\x2de316\x2d4a88\x2d91a5\x2d11bc06672f23-volume\x2dsubpaths-etc-tuned-1.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/kubelet/pods/07267a40-e316-4a88-91a5-11bc06672f23/volume-subpaths/etc/tuned/1.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-07267a40\x2de316\x2d4a88\x2d91a5\x2d11bc06672f23-volume\x2dsubpaths-etc-tuned-1.mount: Consumed 2ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-2c47bc3e\x2d0247\x2d4d47\x2d80e3\x2dc168262e7976-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhr2sj.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/kubelet/pods/2c47bc3e-0247-4d47-80e3-c168262e7976/volumes/kubernetes.io~projected/kube-api-access-hr2sj.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-2c47bc3e\x2d0247\x2d4d47\x2d80e3\x2dc168262e7976-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhr2sj.mount: Consumed 2ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-netns-464ecae5\x2dd083\x2d4bf5\x2d84a3\x2daf8c8873c68a.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /run/netns/464ecae5-d083-4bf5-84a3-af8c8873c68a.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-netns-464ecae5\x2dd083\x2d4bf5\x2d84a3\x2daf8c8873c68a.mount: Consumed 2ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-d2d950a0f8b9ffde71d1e4cbda427ef1fed6fb468f7077f20ce123e642198c05-merged.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/d2d950a0f8b9ffde71d1e4cbda427ef1fed6fb468f7077f20ce123e642198c05/merged.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-d2d950a0f8b9ffde71d1e4cbda427ef1fed6fb468f7077f20ce123e642198c05-merged.mount: Consumed 2ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-ipcns-fe9ac55c\x2d60a6\x2d4c99\x2d8e53\x2d9a8d9c2dc37f.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /run/ipcns/fe9ac55c-60a6-4c99-8e53-9a8d9c2dc37f.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-ipcns-fe9ac55c\x2d60a6\x2d4c99\x2d8e53\x2d9a8d9c2dc37f.mount: Consumed 1ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-a3c2cdc1af54823b044da7b3ab7a29fd1485f64080f9468f0d77bd995100b22e-merged.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/a3c2cdc1af54823b044da7b3ab7a29fd1485f64080f9468f0d77bd995100b22e/merged.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-a3c2cdc1af54823b044da7b3ab7a29fd1485f64080f9468f0d77bd995100b22e-merged.mount: Consumed 1ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-87ba02363226f8f3b0bddd670fae380b9ce71e79d85572cf26d126bb9ebb2072-merged.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/87ba02363226f8f3b0bddd670fae380b9ce71e79d85572cf26d126bb9ebb2072/merged.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-87ba02363226f8f3b0bddd670fae380b9ce71e79d85572cf26d126bb9ebb2072-merged.mount: Consumed 3ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-utsns-f43e09c9\x2d4659\x2d423a\x2d8351\x2d05c8907bbf9e.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /run/utsns/f43e09c9-4659-423a-8351-05c8907bbf9e.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-utsns-f43e09c9\x2d4659\x2d423a\x2d8351\x2d05c8907bbf9e.mount: Consumed 1ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-1bc24f02835b1e93a13159fa95d185182e23683e12a18a5e6851c7822ce97bcb-merged.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/1bc24f02835b1e93a13159fa95d185182e23683e12a18a5e6851c7822ce97bcb/merged.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-1bc24f02835b1e93a13159fa95d185182e23683e12a18a5e6851c7822ce97bcb-merged.mount: Consumed 2ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-452e7667d68427217dcce488a05adc04ca1d27e12ab1d54d7fccf4072c0b05b0-merged.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/452e7667d68427217dcce488a05adc04ca1d27e12ab1d54d7fccf4072c0b05b0/merged.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-452e7667d68427217dcce488a05adc04ca1d27e12ab1d54d7fccf4072c0b05b0-merged.mount: Consumed 3ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-b50327aa9703ecc1c3d4814922ae98a68046e74b46d1b3978aa04ac9f0271e57-merged.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/b50327aa9703ecc1c3d4814922ae98a68046e74b46d1b3978aa04ac9f0271e57/merged.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-b50327aa9703ecc1c3d4814922ae98a68046e74b46d1b3978aa04ac9f0271e57-merged.mount: Consumed 2ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-utsns-66097094\x2d74f3\x2d4cd1\x2db8ec\x2d0513bfaa3c62.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /run/utsns/66097094-74f3-4cd1-b8ec-0513bfaa3c62.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-utsns-66097094\x2d74f3\x2d4cd1\x2db8ec\x2d0513bfaa3c62.mount: Consumed 2ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-utsns-a17440cb\x2d5d23\x2d467a\x2db4af\x2d09ce6ea96f63.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /run/utsns/a17440cb-5d23-467a-b4af-09ce6ea96f63.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-utsns-a17440cb\x2d5d23\x2d467a\x2db4af\x2d09ce6ea96f63.mount: Consumed 1ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-68a377c9d9a8dfbc931b66258cde02191ae1bc3efa5f31d8bb11b155341841c8-merged.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/68a377c9d9a8dfbc931b66258cde02191ae1bc3efa5f31d8bb11b155341841c8/merged.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-68a377c9d9a8dfbc931b66258cde02191ae1bc3efa5f31d8bb11b155341841c8-merged.mount: Consumed 1ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-2bb2cb8a00dccaff71de868c0f190080f99a4b0115232714827c8a70c95ddda6-merged.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/2bb2cb8a00dccaff71de868c0f190080f99a4b0115232714827c8a70c95ddda6/merged.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-2bb2cb8a00dccaff71de868c0f190080f99a4b0115232714827c8a70c95ddda6-merged.mount: Consumed 2ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-693af76d844dbc3dd375d090be3aa7ff7b7e87282c58e3ab90848ef3c83e84de-merged.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/693af76d844dbc3dd375d090be3aa7ff7b7e87282c58e3ab90848ef3c83e84de/merged.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-693af76d844dbc3dd375d090be3aa7ff7b7e87282c58e3ab90848ef3c83e84de-merged.mount: Consumed 2ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-07267a40\x2de316\x2d4a88\x2d91a5\x2d11bc06672f23-volume\x2dsubpaths-etc-tuned-4.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/kubelet/pods/07267a40-e316-4a88-91a5-11bc06672f23/volume-subpaths/etc/tuned/4.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-07267a40\x2de316\x2d4a88\x2d91a5\x2d11bc06672f23-volume\x2dsubpaths-etc-tuned-4.mount: Consumed 1ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-7dcfd4f833fc8daa144e936ac667c1de6ff8e2f8d65c22cc9570d5ff2e426158-merged.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/7dcfd4f833fc8daa144e936ac667c1de6ff8e2f8d65c22cc9570d5ff2e426158/merged.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-7dcfd4f833fc8daa144e936ac667c1de6ff8e2f8d65c22cc9570d5ff2e426158-merged.mount: Consumed 3ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-47661104fee69cd1b9061426289cf385f5b6d7911621b551126dbbdb3ae0f1bb-userdata-shm.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /run/containers/storage/overlay-containers/47661104fee69cd1b9061426289cf385f5b6d7911621b551126dbbdb3ae0f1bb/userdata/shm.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-47661104fee69cd1b9061426289cf385f5b6d7911621b551126dbbdb3ae0f1bb-userdata-shm.mount: Consumed 2ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-netns-66097094\x2d74f3\x2d4cd1\x2db8ec\x2d0513bfaa3c62.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /run/netns/66097094-74f3-4cd1-b8ec-0513bfaa3c62.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: run-netns-66097094\x2d74f3\x2d4cd1\x2db8ec\x2d0513bfaa3c62.mount: Consumed 2ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-07267a40\x2de316\x2d4a88\x2d91a5\x2d11bc06672f23-volume\x2dsubpaths-etc-tuned-3.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/kubelet/pods/07267a40-e316-4a88-91a5-11bc06672f23/volume-subpaths/etc/tuned/3.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-07267a40\x2de316\x2d4a88\x2d91a5\x2d11bc06672f23-volume\x2dsubpaths-etc-tuned-3.mount: Consumed 1ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-d9ea34e75000a6ffebd196e553aef957d988ff9371f1654e56f49e17ece421ce-merged.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/d9ea34e75000a6ffebd196e553aef957d988ff9371f1654e56f49e17ece421ce/merged.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-d9ea34e75000a6ffebd196e553aef957d988ff9371f1654e56f49e17ece421ce-merged.mount: Consumed 2ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: systemd-fsck@dev-disk-by\x2duuid-54e5ab65\x2dff73\x2d4a26\x2d8c44\x2d2a9765abf45f.service: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Stopped File System Check on /dev/disk/by-uuid/54e5ab65-ff73-4a26-8c44-2a9765abf45f.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: systemd-fsck@dev-disk-by\x2duuid-54e5ab65\x2dff73\x2d4a26\x2d8c44\x2d2a9765abf45f.service: Consumed 0 CPU time
Feb 23 16:32:23 ip-10-0-136-68 kernel: audit: type=1130 audit(1677169943.226:170): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2duuid-54e5ab65\x2dff73\x2d4a26\x2d8c44\x2d2a9765abf45f comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Removed slice system-systemd\x2dfsck.slice.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: system-systemd\x2dfsck.slice: Consumed 11ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay...
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Stopped target Swap.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay.mount: Consumed 1ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounting /var...
Feb 23 16:32:23 ip-10-0-136-68 umount[79173]: umount: /var: target is busy.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: var.mount: Mount process exited, code=exited status=32
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Failed unmounting /var.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounting sysroot.mount...
Feb 23 16:32:23 ip-10-0-136-68 umount[79175]: umount: /sysroot: target is busy.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounting sysroot-ostree-deploy-rhcos-var.mount...
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: sysroot.mount: Mount process exited, code=exited status=32
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Failed unmounting sysroot.mount.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: sysroot-ostree-deploy-rhcos-var.mount: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Unmounted sysroot-ostree-deploy-rhcos-var.mount.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: sysroot-ostree-deploy-rhcos-var.mount: Consumed 0 CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Reached target Unmount All Filesystems.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Stopped target Local File Systems (Pre).
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Stopping Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: systemd-tmpfiles-setup-dev.service: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Stopped Create Static Device Nodes in /dev.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: systemd-tmpfiles-setup-dev.service: Consumed 0 CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: systemd-sysusers.service: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Stopped Create System Users.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: systemd-sysusers.service: Consumed 0 CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: lvm2-monitor.service: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Stopped Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: lvm2-monitor.service: Consumed 11ms CPU time
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Reached target Shutdown.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Reached target Final Step.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: systemd-reboot.service: Succeeded.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Started Reboot.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Reached target Reboot.
Feb 23 16:32:23 ip-10-0-136-68 systemd[1]: Shutting down.
Feb 23 16:32:23 ip-10-0-136-68 systemd-shutdown[1]: Syncing filesystems and block devices.
Feb 23 16:32:23 ip-10-0-136-68 systemd-shutdown[1]: Sending SIGTERM to remaining processes...
Feb 23 16:32:23 ip-10-0-136-68 systemd-journald[793]: Journal stopped
-- Boot 90ff0a1b14a9469d904dd0496e06da13 --
Feb 23 16:32:33 localhost kernel: Linux version 4.18.0-372.43.1.rt7.200.el8_6.x86_64 (mockbuild@x86-vm-07.build.eng.bos.redhat.com) (gcc version 8.5.0 20210514 (Red Hat 8.5.0-10) (GCC)) #1 SMP PREEMPT_RT Fri Jan 27 20:35:35 EST 2023
Feb 23 16:32:33 localhost kernel: Command line: BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-10ea79ead463d85f33b5d8ccf85d20209afb59c92ac52a79bea2babd3816e310/vmlinuz-4.18.0-372.43.1.rt7.200.el8_6.x86_64 ostree=/ostree/boot.0/rhcos/10ea79ead463d85f33b5d8ccf85d20209afb59c92ac52a79bea2babd3816e310/0 ignition.platform.id=aws console=tty0 console=ttyS0,115200n8 root=UUID=c83680a9-dcc4-4413-a0a5-4681b35c650a rw rootflags=prjquota boot=UUID=54e5ab65-ff73-4a26-8c44-2a9765abf45f
Feb 23 16:32:33 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 23 16:32:33 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 23 16:32:33 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 23 16:32:33 localhost kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Feb 23 16:32:33 localhost kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Feb 23 16:32:33 localhost kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Feb 23 16:32:33 localhost kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Feb 23 16:32:33 localhost kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 23 16:32:33 localhost kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Feb 23 16:32:33 localhost kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Feb 23 16:32:33 localhost kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Feb 23 16:32:33 localhost kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8
Feb 23 16:32:33 localhost kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format.
Feb 23 16:32:33 localhost kernel: signal: max sigframe size: 3632
Feb 23 16:32:33 localhost kernel: BIOS-provided physical RAM map:
Feb 23 16:32:33 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 23 16:32:33 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 23 16:32:33 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 23 16:32:33 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffe8fff] usable
Feb 23 16:32:33 localhost kernel: BIOS-e820: [mem 0x00000000bffe9000-0x00000000bfffffff] reserved
Feb 23 16:32:33 localhost kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Feb 23 16:32:33 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 23 16:32:33 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000042effffff] usable
Feb 23 16:32:33 localhost kernel: BIOS-e820: [mem 0x000000042f000000-0x000000043fffffff] reserved
Feb 23 16:32:33 localhost kernel: NX (Execute Disable) protection: active
Feb 23 16:32:33 localhost kernel: SMBIOS 2.7 present.
Feb 23 16:32:33 localhost kernel: DMI: Amazon EC2 m6i.xlarge/, BIOS 1.0 10/16/2017
Feb 23 16:32:33 localhost kernel: Hypervisor detected: KVM
Feb 23 16:32:33 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 23 16:32:33 localhost kernel: kvm-clock: cpu 0, msr 17de01001, primary cpu clock
Feb 23 16:32:33 localhost kernel: kvm-clock: using sched offset of 7562404426 cycles
Feb 23 16:32:33 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 23 16:32:33 localhost kernel: tsc: Detected 2899.998 MHz processor
Feb 23 16:32:33 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 23 16:32:33 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 23 16:32:33 localhost kernel: last_pfn = 0x42f000 max_arch_pfn = 0x400000000
Feb 23 16:32:33 localhost kernel: MTRR default type: write-back
Feb 23 16:32:33 localhost kernel: MTRR fixed ranges enabled:
Feb 23 16:32:33 localhost kernel: 00000-9FFFF write-back
Feb 23 16:32:33 localhost kernel: A0000-BFFFF uncachable
Feb 23 16:32:33 localhost kernel: C0000-FFFFF write-protect
Feb 23 16:32:33 localhost kernel: MTRR variable ranges enabled:
Feb 23 16:32:33 localhost kernel: 0 base 0000C0000000 mask 3FFFC0000000 uncachable
Feb 23 16:32:33 localhost kernel: 1 disabled
Feb 23 16:32:33 localhost kernel: 2 disabled
Feb 23 16:32:33 localhost kernel: 3 disabled
Feb 23 16:32:33 localhost kernel: 4 disabled
Feb 23 16:32:33 localhost kernel: 5 disabled
Feb 23 16:32:33 localhost kernel: 6 disabled
Feb 23 16:32:33 localhost kernel: 7 disabled
Feb 23 16:32:33 localhost kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 23 16:32:33 localhost kernel: last_pfn = 0xbffe9 max_arch_pfn = 0x400000000
Feb 23 16:32:33 localhost kernel: Using GB pages for direct mapping
Feb 23 16:32:33 localhost kernel: BRK [0x17e001000, 0x17e001fff] PGTABLE
Feb 23 16:32:33 localhost kernel: BRK [0x17e002000, 0x17e002fff] PGTABLE
Feb 23 16:32:33 localhost kernel: BRK [0x17e003000, 0x17e003fff] PGTABLE
Feb 23 16:32:33 localhost kernel: BRK [0x17e004000, 0x17e004fff] PGTABLE
Feb 23 16:32:33 localhost kernel: BRK [0x17e005000, 0x17e005fff] PGTABLE
Feb 23 16:32:33 localhost kernel: BRK [0x17e006000, 0x17e006fff] PGTABLE
Feb 23 16:32:33 localhost kernel: BRK [0x17e007000, 0x17e007fff] PGTABLE
Feb 23 16:32:33 localhost kernel: RAMDISK: [mem 0x2d805000-0x32bfafff]
Feb 23 16:32:33 localhost kernel: ACPI: Early table checksum verification disabled
Feb 23 16:32:33 localhost kernel: ACPI: RSDP 0x00000000000F8F00 000014 (v00 AMAZON)
Feb 23 16:32:33 localhost kernel: ACPI: RSDT 0x00000000BFFEE180 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Feb 23 16:32:33 localhost kernel: ACPI: WAET 0x00000000BFFEFFC0 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Feb 23 16:32:33 localhost kernel: ACPI: SLIT 0x00000000BFFEFF40 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 23 16:32:33 localhost kernel: ACPI: APIC 0x00000000BFFEFE80 000086 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 23 16:32:33 localhost kernel: ACPI: SRAT 0x00000000BFFEFDC0 0000C0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Feb 23 16:32:33 localhost kernel: ACPI: FACP 0x00000000BFFEFC80 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 23 16:32:33 localhost kernel: ACPI: DSDT 0x00000000BFFEEAC0 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Feb 23 16:32:33 localhost kernel: ACPI: FACS 0x00000000000F8EC0 000040
Feb 23 16:32:33 localhost kernel: ACPI: HPET 0x00000000BFFEFC40 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Feb 23 16:32:33 localhost kernel: ACPI: SSDT 0x00000000BFFEE280 00081F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Feb 23 16:32:33 localhost kernel: ACPI: SSDT 0x00000000BFFEE200 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Feb 23 16:32:33 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffeffc0-0xbffeffe7]
Feb 23 16:32:33 localhost kernel: ACPI: Reserving SLIT table memory at [mem 0xbffeff40-0xbffeffab]
Feb 23 16:32:33 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffefe80-0xbffeff05]
Feb 23 16:32:33 localhost kernel: ACPI: Reserving SRAT table memory at [mem 0xbffefdc0-0xbffefe7f]
Feb 23 16:32:33 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffefc80-0xbffefd93]
Feb 23 16:32:33 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffeeac0-0xbffefc19]
Feb 23 16:32:33 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xf8ec0-0xf8eff]
Feb 23 16:32:33 localhost kernel: ACPI: Reserving HPET table memory at [mem 0xbffefc40-0xbffefc77]
Feb 23 16:32:33 localhost kernel: ACPI: Reserving SSDT table memory at [mem 0xbffee280-0xbffeea9e]
Feb 23 16:32:33 localhost kernel: ACPI: Reserving SSDT table memory at [mem 0xbffee200-0xbffee27e]
Feb 23 16:32:33 localhost kernel: ACPI: Local APIC address 0xfee00000
Feb 23 16:32:33 localhost kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 23 16:32:33 localhost kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 23 16:32:33 localhost kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Feb 23 16:32:33 localhost kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Feb 23 16:32:33 localhost kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0xbfffffff]
Feb 23 16:32:33 localhost kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x43fffffff]
Feb 23 16:32:33 localhost kernel: NUMA: Initialized distance table, cnt=1
Feb 23 16:32:33 localhost kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x42effffff] -> [mem 0x00000000-0x42effffff]
Feb 23 16:32:33 localhost kernel: NODE_DATA(0) allocated [mem 0x42efd4000-0x42effefff]
Feb 23 16:32:33 localhost kernel: Zone ranges:
Feb 23 16:32:33 localhost kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 23 16:32:33 localhost kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 23 16:32:33 localhost kernel: Normal [mem 0x0000000100000000-0x000000042effffff]
Feb 23 16:32:33 localhost kernel: Device empty
Feb 23 16:32:33 localhost kernel: Movable zone start for each node
Feb 23 16:32:33 localhost kernel: Early memory node ranges
Feb 23 16:32:33 localhost kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 23 16:32:33 localhost kernel: node 0: [mem 0x0000000000100000-0x00000000bffe8fff]
Feb 23 16:32:33 localhost kernel: node 0: [mem 0x0000000100000000-0x000000042effffff]
Feb 23 16:32:33 localhost kernel: Zeroed struct page in unavailable ranges: 4217 pages
Feb 23 16:32:33 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000042effffff]
Feb 23 16:32:33 localhost kernel: On node 0 totalpages: 4124551
Feb 23 16:32:33 localhost kernel: DMA zone: 64 pages used for memmap
Feb 23 16:32:33 localhost kernel: DMA zone: 158 pages reserved
Feb 23 16:32:33 localhost kernel: DMA zone: 3998 pages, LIFO batch:0
Feb 23 16:32:33 localhost kernel: DMA32 zone: 12224 pages used for memmap
Feb 23 16:32:33 localhost kernel: DMA32 zone: 782313 pages, LIFO batch:63
Feb 23 16:32:33 localhost kernel: Normal zone: 52160 pages used for memmap
Feb 23 16:32:33 localhost kernel: Normal zone: 3338240 pages, LIFO batch:63
Feb 23 16:32:33 localhost kernel: ACPI: PM-Timer IO Port: 0xb008
Feb 23 16:32:33 localhost kernel: ACPI: Local APIC address 0xfee00000
Feb 23 16:32:33 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 23 16:32:33 localhost kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Feb 23 16:32:33 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 23 16:32:33 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 23 16:32:33 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 23 16:32:33 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 23 16:32:33 localhost kernel: ACPI: IRQ5 used by override.
Feb 23 16:32:33 localhost kernel: ACPI: IRQ9 used by override.
Feb 23 16:32:33 localhost kernel: ACPI: IRQ10 used by override.
Feb 23 16:32:33 localhost kernel: ACPI: IRQ11 used by override.
Feb 23 16:32:33 localhost kernel: Using ACPI (MADT) for SMP configuration information
Feb 23 16:32:33 localhost kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 23 16:32:33 localhost kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Feb 23 16:32:33 localhost kernel: PM: Registered nosave memory: [mem 0x00000000-0x00000fff]
Feb 23 16:32:33 localhost kernel: PM: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Feb 23 16:32:33 localhost kernel: PM: Registered nosave memory: [mem 0x000a0000-0x000effff]
Feb 23 16:32:33 localhost kernel: PM: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Feb 23 16:32:33 localhost kernel: PM: Registered nosave memory: [mem 0xbffe9000-0xbfffffff]
Feb 23 16:32:33 localhost kernel: PM: Registered nosave memory: [mem 0xc0000000-0xdfffffff]
Feb 23 16:32:33 localhost kernel: PM: Registered nosave memory: [mem 0xe0000000-0xe03fffff]
Feb 23 16:32:33 localhost kernel: PM: Registered nosave memory: [mem 0xe0400000-0xfffbffff]
Feb 23 16:32:33 localhost kernel: PM: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Feb 23 16:32:33 localhost kernel: [mem 0xc0000000-0xdfffffff] available for PCI devices
Feb 23 16:32:33 localhost kernel: Booting paravirtualized kernel on KVM
Feb 23 16:32:33 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 23 16:32:33 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8192 nr_cpu_ids:4 nr_node_ids:1
Feb 23 16:32:33 localhost kernel: percpu: Embedded 54 pages/cpu s184320 r8192 d28672 u524288
Feb 23 16:32:33 localhost kernel: pcpu-alloc: s184320 r8192 d28672 u524288 alloc=1*2097152
Feb 23 16:32:33 localhost kernel: pcpu-alloc: [0] 0 1 2 3
Feb 23 16:32:33 localhost kernel: kvm-guest: stealtime: cpu 0, msr 41f02c080
Feb 23 16:32:33 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Feb 23 16:32:33 localhost kernel: Built 1 zonelists, mobility grouping on. Total pages: 4059945
Feb 23 16:32:33 localhost kernel: Policy zone: Normal
Feb 23 16:32:33 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-10ea79ead463d85f33b5d8ccf85d20209afb59c92ac52a79bea2babd3816e310/vmlinuz-4.18.0-372.43.1.rt7.200.el8_6.x86_64 ostree=/ostree/boot.0/rhcos/10ea79ead463d85f33b5d8ccf85d20209afb59c92ac52a79bea2babd3816e310/0 ignition.platform.id=aws console=tty0 console=ttyS0,115200n8 root=UUID=c83680a9-dcc4-4413-a0a5-4681b35c650a rw rootflags=prjquota boot=UUID=54e5ab65-ff73-4a26-8c44-2a9765abf45f
Feb 23 16:32:33 localhost kernel: Specific versions of hardware are certified with Red Hat Enterprise Linux 8. Please see the list of hardware certified with Red Hat Enterprise Linux 8 at https://catalog.redhat.com.
Feb 23 16:32:33 localhost kernel: Memory: 3124172K/16498204K available (12293K kernel code, 5969K rwdata, 8136K rodata, 2444K init, 16404K bss, 463324K reserved, 0K cma-reserved)
Feb 23 16:32:33 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 23 16:32:33 localhost kernel: ftrace: allocating 39220 entries in 154 pages
Feb 23 16:32:33 localhost kernel: ftrace: allocated 154 pages with 4 groups
Feb 23 16:32:33 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 23 16:32:33 localhost kernel: rcu: RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=4.
Feb 23 16:32:33 localhost kernel: rcu: RCU priority boosting: priority 1 delay 500 ms.
Feb 23 16:32:33 localhost kernel: rcu: RCU_SOFTIRQ processing moved to rcuc kthreads.
Feb 23 16:32:33 localhost kernel: No expedited grace period (rcu_normal_after_boot).
Feb 23 16:32:33 localhost kernel: Trampoline variant of Tasks RCU enabled.
Feb 23 16:32:33 localhost kernel: Rude variant of Tasks RCU enabled.
Feb 23 16:32:33 localhost kernel: Tracing variant of Tasks RCU enabled.
Feb 23 16:32:33 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 23 16:32:33 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 23 16:32:33 localhost kernel: NR_IRQS: 524544, nr_irqs: 456, preallocated irqs: 16
Feb 23 16:32:33 localhost kernel: rcu: Offload RCU callbacks from CPUs: (none).
Feb 23 16:32:33 localhost kernel: random: crng done (trusting CPU's manufacturer)
Feb 23 16:32:33 localhost kernel: Console: colour VGA+ 80x25
Feb 23 16:32:33 localhost kernel: printk: console [tty0] enabled
Feb 23 16:32:33 localhost kernel: printk: console [ttyS0] enabled
Feb 23 16:32:33 localhost kernel: ACPI: Core revision 20210604
Feb 23 16:32:33 localhost kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Feb 23 16:32:33 localhost kernel: APIC: Switch to symmetric I/O mode setup
Feb 23 16:32:33 localhost kernel: x2apic enabled
Feb 23 16:32:33 localhost kernel: Switched APIC routing to physical x2apic.
Feb 23 16:32:33 localhost kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x29cd4133323, max_idle_ns: 440795296220 ns
Feb 23 16:32:33 localhost kernel: Calibrating delay loop (skipped) preset value.. 5799.99 BogoMIPS (lpj=2899998)
Feb 23 16:32:33 localhost kernel: pid_max: default: 32768 minimum: 301
Feb 23 16:32:33 localhost kernel: LSM: Security Framework initializing
Feb 23 16:32:33 localhost kernel: Yama: becoming mindful.
Feb 23 16:32:33 localhost kernel: SELinux: Initializing.
Feb 23 16:32:33 localhost kernel: LSM support for eBPF active
Feb 23 16:32:33 localhost kernel: Dentry cache hash table entries: 2097152 (order: 12, 16777216 bytes, vmalloc)
Feb 23 16:32:33 localhost kernel: Inode-cache hash table entries: 1048576 (order: 11, 8388608 bytes, vmalloc)
Feb 23 16:32:33 localhost kernel: Mount-cache hash table entries: 32768 (order: 6, 262144 bytes, vmalloc)
Feb 23 16:32:33 localhost kernel: Mountpoint-cache hash table entries: 32768 (order: 6, 262144 bytes, vmalloc)
Feb 23 16:32:33 localhost kernel: x86/tme: enabled by BIOS
Feb 23 16:32:33 localhost kernel: x86/mktme: No known encryption algorithm is supported: 0x0
Feb 23 16:32:33 localhost kernel: x86/mktme: disabled by BIOS
Feb 23 16:32:33 localhost kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Feb 23 16:32:33 localhost kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Feb 23 16:32:33 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 23 16:32:33 localhost kernel: Spectre V2 : Mitigation: Enhanced IBRS
Feb 23 16:32:33 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 23 16:32:33 localhost kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Feb 23 16:32:33 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 23 16:32:33 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 23 16:32:33 localhost kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 23 16:32:33 localhost kernel: Freeing SMP alternatives memory: 32K
Feb 23 16:32:33 localhost kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1235
Feb 23 16:32:33 localhost kernel: TSC deadline timer enabled
Feb 23 16:32:33 localhost kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8375C CPU @ 2.90GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Feb 23 16:32:33 localhost kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Feb 23 16:32:33 localhost kernel: rcu: Hierarchical SRCU implementation.
Feb 23 16:32:33 localhost kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 23 16:32:33 localhost kernel: smp: Bringing up secondary CPUs ...
Feb 23 16:32:33 localhost kernel: x86: Booting SMP configuration:
Feb 23 16:32:33 localhost kernel: .... node #0, CPUs: #1
Feb 23 16:32:33 localhost kernel: kvm-clock: cpu 1, msr 17de01041, secondary cpu clock
Feb 23 16:32:33 localhost kernel: kvm-guest: stealtime: cpu 1, msr 41f0ac080
Feb 23 16:32:33 localhost kernel: #2
Feb 23 16:32:33 localhost kernel: kvm-clock: cpu 2, msr 17de01081, secondary cpu clock
Feb 23 16:32:33 localhost kernel: kvm-guest: stealtime: cpu 2, msr 41f12c080
Feb 23 16:32:33 localhost kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 23 16:32:33 localhost kernel: #3
Feb 23 16:32:33 localhost kernel: kvm-clock: cpu 3, msr 17de010c1, secondary cpu clock
Feb 23 16:32:33 localhost kernel: kvm-guest: stealtime: cpu 3, msr 41f1ac080
Feb 23 16:32:33 localhost kernel: smp: Brought up 1 node, 4 CPUs
Feb 23 16:32:33 localhost kernel: smpboot: Max logical packages: 1
Feb 23 16:32:33 localhost kernel: smpboot: Total of 4 processors activated (23199.98 BogoMIPS)
Feb 23 16:32:33 localhost kernel: node 0 deferred pages initialised in 22ms
Feb 23 16:32:33 localhost kernel: devtmpfs: initialized
Feb 23 16:32:33 localhost kernel: x86/mm: Memory block size: 128MB
Feb 23 16:32:33 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 23 16:32:33 localhost kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, vmalloc)
Feb 23 16:32:33 localhost kernel: pinctrl core: initialized pinctrl subsystem
Feb 23 16:32:33 localhost kernel: NET: Registered protocol family 16
Feb 23 16:32:33 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL pool for atomic allocations
Feb 23 16:32:33 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 23 16:32:33 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 23 16:32:33 localhost kernel: audit: initializing netlink subsys (disabled)
Feb 23 16:32:33 localhost kernel: audit: type=2000 audit(1677169952.354:1): state=initialized audit_enabled=0 res=1
Feb 23 16:32:33 localhost kernel: cpuidle: using governor menu
Feb 23 16:32:33 localhost kernel: ACPI: bus type PCI registered
Feb 23 16:32:33 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 23 16:32:33 localhost kernel: PCI: Using configuration type 1 for base access
Feb 23 16:32:33 localhost kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 23 16:32:33 localhost kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 23 16:32:33 localhost kernel: cryptd: max_cpu_qlen set to 1000
Feb 23 16:32:33 localhost kernel: ACPI: Added _OSI(Module Device)
Feb 23 16:32:33 localhost kernel: ACPI: Added _OSI(Processor Device)
Feb 23 16:32:33 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 23 16:32:33 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 23 16:32:33 localhost kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 23 16:32:33 localhost kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 23 16:32:33 localhost kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 23 16:32:33 localhost kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Feb 23 16:32:33 localhost kernel: ACPI: Interpreter enabled
Feb 23 16:32:33 localhost kernel: ACPI: PM: (supports S0 S4 S5)
Feb 23 16:32:33 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Feb 23 16:32:33 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 23 16:32:33 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 23 16:32:33 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 23 16:32:33 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI EDR HPX-Type3]
Feb 23 16:32:33 localhost kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Feb 23 16:32:33 localhost kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Feb 23 16:32:33 localhost kernel: acpiphp: Slot [3] registered
Feb 23 16:32:33 localhost kernel: acpiphp: Slot [4] registered
Feb 23 16:32:33 localhost kernel: acpiphp: Slot [5] registered
Feb 23 16:32:33 localhost kernel: acpiphp: Slot [6] registered
Feb 23 16:32:33 localhost kernel: acpiphp: Slot [7] registered
Feb 23 16:32:33 localhost kernel: acpiphp: Slot [8] registered
Feb 23 16:32:33 localhost kernel: acpiphp: Slot [9] registered
Feb 23 16:32:33 localhost kernel: acpiphp: Slot [10] registered
Feb 23 16:32:33 localhost kernel: acpiphp: Slot [11] registered
Feb 23 16:32:33 localhost kernel: acpiphp: Slot [12] registered
Feb 23 16:32:33 localhost kernel: acpiphp: Slot [13] registered
Feb 23 16:32:33 localhost kernel: acpiphp: Slot [14] registered
Feb 23 16:32:33 localhost kernel: acpiphp: Slot [15] registered
Feb 23 16:32:33 localhost kernel: acpiphp: Slot [16] registered
Feb 23 16:32:33 localhost kernel: acpiphp: Slot [17] registered
Feb 23 16:32:33 localhost kernel: acpiphp: Slot [18] registered
Feb 23 16:32:33 localhost kernel: acpiphp: Slot [19] registered
Feb 23 16:32:33 localhost kernel: acpiphp: Slot [20] registered
Feb 23 16:32:33 localhost kernel: acpiphp: Slot [21] registered
Feb 23 16:32:33 localhost kernel: acpiphp: Slot [22] registered
Feb 23 16:32:33 localhost kernel: acpiphp: Slot [23] registered
Feb 23 16:32:33 localhost kernel: acpiphp: Slot [24] registered
Feb 23 16:32:33 localhost kernel: acpiphp: Slot [25] registered
Feb 23 16:32:33 localhost kernel: acpiphp: Slot [26] registered
Feb 23 16:32:33 localhost kernel: acpiphp: Slot [27] registered
Feb 23 16:32:33 localhost kernel: acpiphp: Slot [28] registered
Feb 23 16:32:33 localhost kernel: acpiphp: Slot [29] registered
Feb 23 16:32:33 localhost kernel: acpiphp: Slot [30] registered
Feb 23 16:32:33 localhost kernel: acpiphp: Slot [31] registered
Feb 23 16:32:33 localhost kernel: PCI host bridge to bus 0000:00
Feb 23 16:32:33 localhost kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 23 16:32:33 localhost kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 23 16:32:33 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 23 16:32:33 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Feb 23 16:32:33 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x440000000-0x20043fffffff window]
Feb 23 16:32:33 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 23 16:32:33 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 23 16:32:33 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 23 16:32:33 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Feb 23 16:32:33 localhost kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Feb 23 16:32:33 localhost kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Feb 23 16:32:33 localhost kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Feb 23 16:32:33 localhost kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Feb 23 16:32:33 localhost kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Feb 23 16:32:33 localhost kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Feb 23 16:32:33 localhost kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Feb 23 16:32:33 localhost kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Feb 23 16:32:33 localhost kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Feb 23 16:32:33 localhost kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Feb 23 16:32:33 localhost kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Feb 23 16:32:33 localhost kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 23 16:32:33 localhost kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Feb 23 16:32:33 localhost kernel: pci 0000:00:04.0: enabling Extended Tags
Feb 23 16:32:33 localhost kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 23 16:32:33 localhost kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf5fff]
Feb 23 16:32:33 localhost kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf6000-0xfebf7fff]
Feb 23 16:32:33 localhost kernel: pci 0000:00:05.0: reg 0x18: [mem 0xfe800000-0xfe87ffff pref]
Feb 23 16:32:33 localhost kernel: pci 0000:00:05.0: enabling Extended Tags
Feb 23 16:32:33 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 23 16:32:33 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 23 16:32:33 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 23 16:32:33 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 23 16:32:33 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 23 16:32:33 localhost kernel: iommu: Default domain type: Passthrough
Feb 23 16:32:33 localhost kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Feb 23 16:32:33 localhost kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 23 16:32:33 localhost kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Feb 23 16:32:33 localhost kernel: vgaarb: loaded
Feb 23 16:32:33 localhost kernel: SCSI subsystem initialized
Feb 23 16:32:33 localhost kernel: ACPI: bus type USB registered
Feb 23 16:32:33 localhost kernel: usbcore: registered new interface driver usbfs
Feb 23 16:32:33 localhost kernel: usbcore: registered new interface driver hub
Feb 23 16:32:33 localhost kernel: usbcore: registered new device driver usb
Feb 23 16:32:33 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 23 16:32:33 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 23 16:32:33 localhost kernel: PTP clock support registered
Feb 23 16:32:33 localhost kernel: EDAC MC: Ver: 3.0.0
Feb 23 16:32:33 localhost kernel: PCI: Using ACPI for IRQ routing
Feb 23 16:32:33 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 23 16:32:33 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 23 16:32:33 localhost kernel: e820: reserve RAM buffer [mem 0xbffe9000-0xbfffffff]
Feb 23 16:32:33 localhost kernel: e820: reserve RAM buffer [mem 0x42f000000-0x42fffffff]
Feb 23 16:32:33 localhost kernel: NetLabel: Initializing
Feb 23 16:32:33 localhost kernel: NetLabel: domain hash size = 128
Feb 23 16:32:33 localhost kernel: NetLabel: protocols = UNLABELED CIPSOv4 CALIPSO
Feb 23 16:32:33 localhost kernel: NetLabel: unlabeled traffic allowed by default
Feb 23 16:32:33 localhost kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Feb 23 16:32:33 localhost kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Feb 23 16:32:33 localhost kernel: clocksource: Switched to clocksource kvm-clock
Feb 23 16:32:33 localhost kernel: VFS: Disk quotas dquot_6.6.0
Feb 23 16:32:33 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 23 16:32:33 localhost kernel: pnp: PnP ACPI init
Feb 23 16:32:33 localhost kernel: pnp 00:00: Plug and Play ACPI device, IDs PNP0b00 (active)
Feb 23 16:32:33 localhost kernel: pnp 00:01: Plug and Play ACPI device, IDs PNP0303 (active)
Feb 23 16:32:33 localhost kernel: pnp 00:02: Plug and Play ACPI device, IDs PNP0f13 (active)
Feb 23 16:32:33 localhost kernel: pnp 00:03: Plug and Play ACPI device, IDs PNP0400 (active)
Feb 23 16:32:33 localhost kernel: pnp 00:04: Plug and Play ACPI device, IDs PNP0501 (active)
Feb 23 16:32:33 localhost kernel: pnp: PnP ACPI: found 5 devices
Feb 23 16:32:33 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 23 16:32:33 localhost kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 23 16:32:33 localhost kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 23 16:32:33 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 23 16:32:33 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Feb 23 16:32:33 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x440000000-0x20043fffffff window]
Feb 23 16:32:33 localhost kernel: NET: Registered protocol family 2
Feb 23 16:32:33 localhost kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, vmalloc)
Feb 23 16:32:33 localhost kernel: tcp_listen_portaddr_hash hash table entries: 8192 (order: 6, 393216 bytes, vmalloc)
Feb 23 16:32:33 localhost kernel: TCP established hash table entries: 131072 (order: 8, 1048576 bytes, vmalloc)
Feb 23 16:32:33 localhost kernel: TCP bind hash table entries: 65536 (order: 9, 2621440 bytes, vmalloc)
Feb 23 16:32:33 localhost kernel: TCP: Hash tables configured (established 131072 bind 65536)
Feb 23 16:32:33 localhost kernel: MPTCP token hash table entries: 16384 (order: 7, 917504 bytes, vmalloc)
Feb 23 16:32:33 localhost kernel: UDP hash table entries: 8192 (order: 7, 786432 bytes, vmalloc)
Feb 23 16:32:33 localhost kernel: UDP-Lite hash table entries: 8192 (order: 7, 786432 bytes, vmalloc)
Feb 23 16:32:33 localhost kernel: NET: Registered protocol family 1
Feb 23 16:32:33 localhost kernel: NET: Registered protocol family 44
Feb 23 16:32:33 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 23 16:32:33 localhost kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Feb 23 16:32:33 localhost kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 23 16:32:33 localhost kernel: PCI: CLS 0 bytes, default 64
Feb 23 16:32:33 localhost kernel: Unpacking initramfs...
Feb 23 16:32:33 localhost kernel: Freeing initrd memory: 85976K
Feb 23 16:32:33 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 23 16:32:33 localhost kernel: software IO TLB: mapped [mem 0x00000000bbfe9000-0x00000000bffe9000] (64MB)
Feb 23 16:32:33 localhost kernel: ACPI: bus type thunderbolt registered
Feb 23 16:32:33 localhost kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x29cd4133323, max_idle_ns: 440795296220 ns
Feb 23 16:32:33 localhost kernel: clocksource: Switched to clocksource tsc
Feb 23 16:32:33 localhost kernel: Initialise system trusted keyrings
Feb 23 16:32:33 localhost kernel: Key type blacklist registered
Feb 23 16:32:33 localhost kernel: workingset: timestamp_bits=36 max_order=22 bucket_order=0
Feb 23 16:32:33 localhost kernel: zbud: loaded
Feb 23 16:32:33 localhost kernel: pstore: using deflate compression
Feb 23 16:32:33 localhost kernel: Platform Keyring initialized
Feb 23 16:32:33 localhost kernel: NET: Registered protocol family 38
Feb 23 16:32:33 localhost kernel: Key type asymmetric registered
Feb 23 16:32:33 localhost kernel: Asymmetric key parser 'x509' registered
Feb 23 16:32:33 localhost kernel: Running certificate verification selftests
Feb 23 16:32:33 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Feb 23 16:32:33 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 247)
Feb 23 16:32:33 localhost kernel: io scheduler mq-deadline registered
Feb 23 16:32:33 localhost kernel: io scheduler kyber registered
Feb 23 16:32:33 localhost kernel: io scheduler bfq registered
Feb 23 16:32:33 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Feb 23 16:32:33 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Feb 23 16:32:33 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Feb 23 16:32:33 localhost kernel: ACPI: Power Button [PWRF]
Feb 23 16:32:33 localhost kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input1
Feb 23 16:32:33 localhost kernel: ACPI: Sleep Button [SLPF]
Feb 23 16:32:33 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 23 16:32:33 localhost kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 23 16:32:33 localhost kernel: Non-volatile memory driver v1.3
Feb 23 16:32:33 localhost kernel: rdac: device handler registered
Feb 23 16:32:33 localhost kernel: hp_sw: device handler registered
Feb 23 16:32:33 localhost kernel: emc: device handler registered
Feb 23 16:32:33 localhost kernel: alua: device handler registered
Feb 23 16:32:33 localhost kernel: libphy: Fixed MDIO Bus: probed
Feb 23 16:32:33 localhost kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
Feb 23 16:32:33 localhost kernel: ehci-pci: EHCI PCI platform driver
Feb 23 16:32:33 localhost kernel: ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
Feb 23 16:32:33 localhost kernel: ohci-pci: OHCI PCI platform driver
Feb 23 16:32:33 localhost kernel: uhci_hcd: USB Universal Host Controller Interface driver
Feb 23 16:32:33 localhost kernel: usbcore: registered new interface driver usbserial_generic
Feb 23 16:32:33 localhost kernel: usbserial: USB Serial support registered for generic
Feb 23 16:32:33 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 23 16:32:33 localhost kernel: i8042: Warning: Keylock active
Feb 23 16:32:33 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 23 16:32:33 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 23 16:32:33 localhost kernel: mousedev: PS/2 mouse device common for all mice
Feb 23 16:32:33 localhost kernel: rtc_cmos 00:00: RTC can wake from S4
Feb 23 16:32:33 localhost kernel: rtc_cmos 00:00: registered as rtc0
Feb 23 16:32:33 localhost kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Feb 23 16:32:33 localhost kernel: intel_pstate: Intel P-state driver initializing
Feb 23 16:32:33 localhost kernel: unchecked MSR access error: WRMSR to 0x199 (tried to write 0x0000000000000800) at rIP: 0xffffffff8746a604 (native_write_msr+0x4/0x30)
Feb 23 16:32:33 localhost kernel: Call Trace:
Feb 23 16:32:33 localhost kernel:
Feb 23 16:32:33 localhost kernel: __wrmsr_on_cpu+0x33/0x40
Feb 23 16:32:33 localhost kernel: flush_smp_call_function_queue+0x3c/0xe0
Feb 23 16:32:33 localhost kernel: smp_call_function_single_interrupt+0x4b/0x180
Feb 23 16:32:33 localhost kernel: call_function_single_interrupt+0xf/0x20
Feb 23 16:32:33 localhost kernel:
Feb 23 16:32:33 localhost kernel: RIP: 0010:native_safe_halt+0x13/0x20
Feb 23 16:32:33 localhost kernel: Code: 00 00 04 00 0f 84 5e ff ff ff eb ae 90 90 90 90 90 90 90 90 90 90 90 8b 05 76 94 82 01 85 c0 7e 07 0f 00 2d ef e5 42 00 fb f4 d8 5a 22 00 0f 1f 84 00 00 00 00 00 8b 05 56 94 82 01 85 c0 7e
Feb 23 16:32:33 localhost kernel: RSP: 0000:ffffffff88a03e20 EFLAGS: 00000242 ORIG_RAX: ffffffffffffff04
Feb 23 16:32:33 localhost kernel: RAX: 0000000000000001 RBX: 0000000000000001 RCX: 000000003b11c67a
Feb 23 16:32:33 localhost kernel: RDX: 0000000000000000 RSI: ffff9dbdfe0d3000 RDI: ffff9dbdfe0d3064
Feb 23 16:32:33 localhost kernel: RBP: ffffffff88ef5560 R08: 0000000000000001 R09: 0000000000000aec
Feb 23 16:32:33 localhost kernel: R10: 0000000000001023 R11: ffff9dc09f028bc4 R12: ffff9dbdfe0d3064
Feb 23 16:32:33 localhost kernel: R13: ffff9dbd825e9800 R14: 0000000000000001 R15: 0000000000000000
Feb 23 16:32:33 localhost kernel: acpi_idle_do_entry+0x55/0x70
Feb 23 16:32:33 localhost kernel: acpi_idle_enter+0xa7/0xf0
Feb 23 16:32:33 localhost kernel: cpuidle_enter_state+0x8c/0x470
Feb 23 16:32:33 localhost kernel: cpuidle_enter+0x2c/0x40
Feb 23 16:32:33 localhost kernel: do_idle+0x2be/0x320
Feb 23 16:32:33 localhost kernel: cpu_startup_entry+0x46/0x50
Feb 23 16:32:33 localhost kernel: start_kernel+0x50c/0x530
Feb 23 16:32:33 localhost kernel: secondary_startup_64_no_verify+0xc2/0xcb
Feb 23 16:32:33 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 23 16:32:33 localhost kernel: usbcore: registered new interface driver usbhid
Feb 23 16:32:33 localhost kernel: usbhid: USB HID core driver
Feb 23 16:32:33 localhost kernel: drop_monitor: Initializing network drop monitor service
Feb 23 16:32:33 localhost kernel: Initializing XFRM netlink socket
Feb 23 16:32:33 localhost kernel: NET: Registered protocol family 10
Feb 23 16:32:33 localhost kernel: Segment Routing with IPv6
Feb 23 16:32:33 localhost kernel: NET: Registered protocol family 17
Feb 23 16:32:33 localhost kernel: mpls_gso: MPLS GSO support
Feb 23 16:32:33 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Feb 23 16:32:33 localhost kernel: AES CTR mode by8 optimization enabled
Feb 23 16:32:33 localhost kernel: sched_clock: Marking stable (1116922370, 0)->(1714257067, -597334697)
Feb 23 16:32:33 localhost kernel: printk: console [ttyS0]: printing thread started
Feb 23 16:32:33 localhost kernel: printk: console [tty0]: printing thread started
Feb 23 16:32:33 localhost kernel: registered taskstats version 1
Feb 23 16:32:33 localhost kernel: Loading compiled-in X.509 certificates
Feb 23 16:32:33 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kernel signing key: bb91ccb6376b94f885be82b93caeec6a7d9d1c37'
Feb 23 16:32:33 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Feb 23 16:32:33 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Feb 23 16:32:33 localhost kernel: zswap: loaded using pool lzo/zbud
Feb 23 16:32:33 localhost kernel: page_owner is disabled
Feb 23 16:32:33 localhost kernel: Key type big_key registered
Feb 23 16:32:33 localhost kernel: Key type encrypted registered
Feb 23 16:32:33 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 23 16:32:33 localhost kernel: ima: Allocated hash algorithm: sha256
Feb 23 16:32:33 localhost kernel: ima: No architecture policies found
Feb 23 16:32:33 localhost kernel: evm: Initialising EVM extended attributes:
Feb 23 16:32:33 localhost kernel: evm: security.selinux
Feb 23 16:32:33 localhost kernel: evm: security.ima
Feb 23 16:32:33 localhost kernel: evm: security.capability
Feb 23 16:32:33 localhost kernel: evm: HMAC attrs: 0x1
Feb 23 16:32:33 localhost kernel: rtc_cmos 00:00: setting system clock to 2023-02-23 16:32:33 UTC (1677169953)
Feb 23 16:32:33 localhost kernel: Freeing unused decrypted memory: 2036K
Feb 23 16:32:33 localhost kernel: Freeing unused kernel image (initmem) memory: 2444K
Feb 23 16:32:33 localhost kernel: Write protecting the kernel read-only data: 22528k
Feb 23 16:32:33 localhost kernel: Freeing unused kernel image (text/rodata gap) memory: 2016K
Feb 23 16:32:33 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 56K
Feb 23 16:32:33 localhost systemd-journald[299]: Missed 9 kernel messages
Feb 23 16:32:33 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input2
Feb 23 16:32:33 localhost systemd-journald[299]: Missed 1 kernel messages
Feb 23 16:32:33 localhost kernel: fuse: init (API version 7.33)
Feb 23 16:32:33 localhost kernel: IPMI message handler: version 39.2
Feb 23 16:32:33 localhost kernel: ipmi device interface
Feb 23 16:32:33 localhost systemd-journald[299]: Journal started
Feb 23 16:32:33 localhost systemd-journald[299]: Runtime journal (/run/log/journal/ec2d456b0a3e28d0eb2f198315e90643) is 8.0M, max 787.4M, 779.4M free.
Feb 23 16:32:33 localhost systemd-modules-load[301]: Inserted module 'fuse'
Feb 23 16:32:33 localhost systemd-modules-load[301]: Module 'msr' is builtin
Feb 23 16:32:33 localhost systemd-modules-load[301]: Inserted module 'ipmi_devintf'
Feb 23 16:32:34 localhost systemd[1]: Started Apply Kernel Variables.
Feb 23 16:32:34 localhost systemd[1]: Started Create Static Device Nodes in /dev.
Feb 23 16:32:34 localhost systemd[1]: systemd-vconsole-setup.service: Succeeded.
Feb 23 16:32:34 localhost systemd[1]: Started Setup Virtual Console.
Feb 23 16:32:34 localhost systemd[1]: Starting dracut ask for additional cmdline parameters...
Feb 23 16:32:34 localhost systemd[1]: Started dracut ask for additional cmdline parameters.
Feb 23 16:32:34 localhost systemd[1]: Starting dracut cmdline hook...
Feb 23 16:32:34 localhost dracut-cmdline[328]: dracut-412.86.202302170236-0 dracut-049-203.git20220511.el8_6
Feb 23 16:32:34 localhost dracut-cmdline[328]: Using kernel command line parameters: rd.driver.pre=dm_multipath BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-10ea79ead463d85f33b5d8ccf85d20209afb59c92ac52a79bea2babd3816e310/vmlinuz-4.18.0-372.43.1.rt7.200.el8_6.x86_64 ostree=/ostree/boot.0/rhcos/10ea79ead463d85f33b5d8ccf85d20209afb59c92ac52a79bea2babd3816e310/0 ignition.platform.id=aws console=tty0 console=ttyS0,115200n8 root=UUID=c83680a9-dcc4-4413-a0a5-4681b35c650a rw rootflags=prjquota boot=UUID=54e5ab65-ff73-4a26-8c44-2a9765abf45f
Feb 23 16:32:34 localhost systemd[1]: Started dracut cmdline hook.
Feb 23 16:32:34 localhost systemd[1]: Starting dracut pre-udev hook...
Feb 23 16:32:34 localhost systemd-journald[299]: Missed 14 kernel messages
Feb 23 16:32:34 localhost kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4
Feb 23 16:32:34 localhost kernel: device-mapper: uevent: version 1.0.3
Feb 23 16:32:34 localhost kernel: device-mapper: ioctl: 4.43.0-ioctl (2020-10-01) initialised: dm-devel@redhat.com
Feb 23 16:32:34 localhost systemd[1]: Started dracut pre-udev hook.
Feb 23 16:32:34 localhost systemd[1]: Starting udev Kernel Device Manager...
Feb 23 16:32:34 localhost systemd[1]: Started udev Kernel Device Manager.
Feb 23 16:32:34 localhost systemd[1]: Starting dracut pre-trigger hook...
Feb 23 16:32:34 localhost dracut-pre-trigger[441]: rd.md=0: removing MD RAID activation
Feb 23 16:32:34 localhost systemd[1]: Started dracut pre-trigger hook.
Feb 23 16:32:34 localhost systemd[1]: Starting udev Coldplug all Devices...
Feb 23 16:32:34 localhost systemd[1]: Mounting Kernel Configuration File System...
Feb 23 16:32:34 localhost systemd[1]: Mounted Kernel Configuration File System.
Feb 23 16:32:34 localhost systemd[1]: Started udev Coldplug all Devices.
Feb 23 16:32:34 localhost systemd[1]: Starting udev Wait for Complete Device Initialization...
Feb 23 16:32:35 localhost systemd-journald[299]: Missed 11 kernel messages
Feb 23 16:32:35 localhost kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 23 16:32:35 localhost kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 23 16:32:35 localhost kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 02:ea:92:f9:d3:f3
Feb 23 16:32:35 localhost kernel: nvme nvme0: pci function 0000:00:04.0
Feb 23 16:32:35 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 23 16:32:35 localhost systemd-udevd[498]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 23 16:32:35 localhost systemd-journald[299]: Missed 1 kernel messages
Feb 23 16:32:35 localhost kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 23 16:32:35 localhost kernel: nvme0n1: detected capacity change from 0 to 128849018880
Feb 23 16:32:35 localhost kernel: nvme0n1: p1 p2 p3 p4
Feb 23 16:32:35 localhost systemd-udevd[513]: Using default interface naming scheme 'rhel-8.0'.
Feb 23 16:32:35 localhost systemd-udevd[513]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 23 16:32:35 localhost systemd-journald[299]: Missed 2 kernel messages
Feb 23 16:32:35 localhost kernel: ena 0000:00:05.0 ens5: renamed from eth0
Feb 23 16:32:35 localhost systemd-udevd[503]: Using default interface naming scheme 'rhel-8.0'.
Feb 23 16:32:35 localhost systemd[1]: Found device Amazon Elastic Block Store root.
Feb 23 16:32:35 localhost systemd[1]: Found device Amazon Elastic Block Store root.
Feb 23 16:32:35 localhost systemd[1]: Reached target Initrd Root Device.
Feb 23 16:32:35 localhost systemd[1]: Started udev Wait for Complete Device Initialization.
Feb 23 16:32:35 localhost systemd[1]: Starting Device-Mapper Multipath Device Controller...
Feb 23 16:32:35 localhost systemd[1]: Started Device-Mapper Multipath Device Controller.
Feb 23 16:32:35 localhost systemd[1]: Reached target Local File Systems (Pre).
Feb 23 16:32:35 localhost systemd[1]: Reached target Local File Systems.
Feb 23 16:32:35 localhost systemd[1]: Starting Create Volatile Files and Directories...
Feb 23 16:32:35 localhost multipathd[539]: --------start up--------
Feb 23 16:32:35 localhost multipathd[539]: read /etc/multipath.conf
Feb 23 16:32:35 localhost multipathd[539]: /etc/multipath.conf does not exist, blacklisting all devices.
Feb 23 16:32:35 localhost multipathd[539]: You can run "/sbin/mpathconf --enable" to create
Feb 23 16:32:35 localhost multipathd[539]: /etc/multipath.conf. See man mpathconf(8) for more details
Feb 23 16:32:35 localhost multipathd[539]: path checkers start up
Feb 23 16:32:35 localhost multipathd[539]: /etc/multipath.conf does not exist, blacklisting all devices.
Feb 23 16:32:35 localhost multipathd[539]: You can run "/sbin/mpathconf --enable" to create
Feb 23 16:32:35 localhost multipathd[539]: /etc/multipath.conf. See man mpathconf(8) for more details
Feb 23 16:32:35 localhost systemd[1]: Started Create Volatile Files and Directories.
Feb 23 16:32:35 localhost systemd[1]: Reached target System Initialization.
Feb 23 16:32:35 localhost systemd[1]: Reached target Basic System.
Feb 23 16:32:35 localhost systemd[1]: Starting dracut initqueue hook...
Feb 23 16:32:35 localhost systemd[1]: Started dracut initqueue hook.
Feb 23 16:32:35 localhost systemd[1]: Reached target Remote File Systems (Pre).
Feb 23 16:32:35 localhost systemd[1]: Reached target Remote File Systems.
Feb 23 16:32:35 localhost systemd[1]: Starting dracut pre-mount hook...
Feb 23 16:32:35 localhost systemd[1]: Started dracut pre-mount hook.
Feb 23 16:32:35 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/c83680a9-dcc4-4413-a0a5-4681b35c650a...
Feb 23 16:32:35 localhost systemd-fsck[565]: /usr/sbin/fsck.xfs: XFS file system.
Feb 23 16:32:35 localhost systemd[1]: Started File System Check on /dev/disk/by-uuid/c83680a9-dcc4-4413-a0a5-4681b35c650a.
Feb 23 16:32:35 localhost systemd[1]: Mounting /sysroot...
Feb 23 16:32:35 localhost systemd-journald[299]: Missed 32 kernel messages
Feb 23 16:32:35 localhost kernel: SGI XFS with ACLs, security attributes, quota, no debug enabled
Feb 23 16:32:35 localhost kernel: XFS (nvme0n1p4): Mounting V5 Filesystem
Feb 23 16:32:36 localhost kernel: XFS (nvme0n1p4): Ending clean mount
Feb 23 16:32:36 localhost systemd[1]: Mounted /sysroot.
Feb 23 16:32:36 localhost systemd[1]: Starting OSTree Prepare OS/...
Feb 23 16:32:36 localhost ostree-prepare-root[582]: preparing sysroot at /sysroot
Feb 23 16:32:36 localhost ostree-prepare-root[582]: Resolved OSTree target to: /sysroot/ostree/deploy/rhcos/deploy/6e36750a3dfe11507ec8e0553290aae6c3652e7ed9983ae738ef6a78206752ea.0
Feb 23 16:32:36 localhost ostree-prepare-root[582]: filesystem at /sysroot currently writable: 1
Feb 23 16:32:36 localhost ostree-prepare-root[582]: sysroot.readonly configuration value: 1
Feb 23 16:32:36 localhost systemd[1]: sysroot-ostree-deploy-rhcos-deploy-6e36750a3dfe11507ec8e0553290aae6c3652e7ed9983ae738ef6a78206752ea.0-etc.mount: Succeeded.
Feb 23 16:32:36 localhost systemd[1]: sysroot-ostree-deploy-rhcos-deploy-6e36750a3dfe11507ec8e0553290aae6c3652e7ed9983ae738ef6a78206752ea.0.mount: Succeeded.
Feb 23 16:32:36 localhost systemd[1]: Started OSTree Prepare OS/.
Feb 23 16:32:36 localhost systemd[1]: Reached target Initrd Root File System.
Feb 23 16:32:36 localhost systemd[1]: Starting Reload Configuration from the Real Root...
Feb 23 16:32:36 localhost systemd[1]: Reloading.
Feb 23 16:32:37 localhost multipathd[539]: exit (signal)
Feb 23 16:32:37 localhost multipathd[539]: --------shut down-------
Feb 23 16:32:37 localhost systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Feb 23 16:32:37 localhost systemd[1]: initrd-parse-etc.service: Succeeded.
Feb 23 16:32:37 localhost systemd[1]: Started Reload Configuration from the Real Root.
Feb 23 16:32:37 localhost systemd[1]: multipathd.service: Succeeded.
Feb 23 16:32:37 localhost systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Feb 23 16:32:37 localhost systemd[1]: Reached target Initrd File Systems.
Feb 23 16:32:37 localhost systemd[1]: Reached target Initrd Default Target.
Feb 23 16:32:37 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Feb 23 16:32:37 localhost dracut-pre-pivot[687]: Feb 23 16:32:37 | /etc/multipath.conf does not exist, blacklisting all devices.
Feb 23 16:32:37 localhost dracut-pre-pivot[687]: Feb 23 16:32:37 | You can run "/sbin/mpathconf --enable" to create
Feb 23 16:32:37 localhost dracut-pre-pivot[687]: Feb 23 16:32:37 | /etc/multipath.conf. See man mpathconf(8) for more details
Feb 23 16:32:37 localhost systemd[1]: Started dracut pre-pivot and cleanup hook.
Feb 23 16:32:37 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Feb 23 16:32:37 localhost systemd[1]: Stopped target Timers.
Feb 23 16:32:37 localhost systemd[1]: clevis-luks-askpass.path: Succeeded.
Feb 23 16:32:37 localhost systemd[1]: Stopped Forward Password Requests to Clevis Directory Watch.
Feb 23 16:32:37 localhost systemd[1]: dracut-pre-pivot.service: Succeeded.
Feb 23 16:32:37 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Feb 23 16:32:37 localhost systemd[1]: Stopped target Remote File Systems.
Feb 23 16:32:37 localhost systemd[1]: Stopped target Initrd Default Target.
Feb 23 16:32:37 localhost systemd[1]: coreos-touch-run-agetty.service: Succeeded.
Feb 23 16:32:37 localhost systemd[1]: Stopped CoreOS: Touch /run/agetty.reload.
Feb 23 16:32:37 localhost systemd[1]: Stopped target Subsequent (Not Ignition) boot complete.
Feb 23 16:32:37 localhost systemd[1]: Stopped target Ignition Subsequent Boot Disk Setup.
Feb 23 16:32:37 localhost systemd[1]: Stopped target Initrd Root Device.
Feb 23 16:32:37 localhost systemd[1]: Stopped target Basic System.
Feb 23 16:32:37 localhost systemd[1]: Stopped target Sockets.
Feb 23 16:32:37 localhost systemd[1]: Stopped target Paths.
Feb 23 16:32:37 localhost systemd[1]: Stopped target Slices.
Feb 23 16:32:37 localhost systemd[1]: dracut-pre-mount.service: Succeeded.
Feb 23 16:32:37 localhost systemd[1]: Stopped dracut pre-mount hook.
Feb 23 16:32:37 localhost systemd[1]: Stopped target System Initialization.
Feb 23 16:32:37 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Feb 23 16:32:37 localhost systemd[1]: systemd-ask-password-console.path: Succeeded.
Feb 23 16:32:37 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Feb 23 16:32:37 localhost systemd[1]: systemd-udev-settle.service: Succeeded.
Feb 23 16:32:37 localhost systemd[1]: Stopped udev Wait for Complete Device Initialization.
Feb 23 16:32:37 localhost systemd[1]: systemd-sysctl.service: Succeeded.
Feb 23 16:32:37 localhost systemd[1]: Stopped Apply Kernel Variables.
Feb 23 16:32:37 localhost systemd[1]: systemd-modules-load.service: Succeeded.
Feb 23 16:32:37 localhost systemd[1]: Stopped Load Kernel Modules.
Feb 23 16:32:37 localhost systemd[1]: Stopped target Swap.
Feb 23 16:32:37 localhost systemd[1]: systemd-tmpfiles-setup.service: Succeeded.
Feb 23 16:32:37 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Feb 23 16:32:37 localhost systemd[1]: Stopped target Local File Systems.
Feb 23 16:32:37 localhost systemd[1]: Stopped target Local File Systems (Pre).
Feb 23 16:32:37 localhost systemd[1]: Stopped target Remote File Systems (Pre).
Feb 23 16:32:37 localhost systemd[1]: dracut-initqueue.service: Succeeded.
Feb 23 16:32:37 localhost systemd[1]: Stopped dracut initqueue hook.
Feb 23 16:32:37 localhost systemd[1]: systemd-udev-trigger.service: Succeeded.
Feb 23 16:32:37 localhost systemd[1]: Stopped udev Coldplug all Devices.
Feb 23 16:32:37 localhost systemd[1]: dracut-pre-trigger.service: Succeeded.
Feb 23 16:32:37 localhost systemd[1]: Stopped dracut pre-trigger hook.
Feb 23 16:32:37 localhost systemd[1]: Stopping udev Kernel Device Manager...
Feb 23 16:32:37 localhost systemd[1]: initrd-cleanup.service: Succeeded.
Feb 23 16:32:37 localhost systemd[1]: Started Cleaning Up and Shutting Down Daemons.
Feb 23 16:32:37 localhost systemd[1]: systemd-udevd.service: Succeeded.
Feb 23 16:32:37 localhost systemd[1]: Stopped udev Kernel Device Manager.
Feb 23 16:32:37 localhost systemd[1]: dracut-pre-udev.service: Succeeded.
Feb 23 16:32:37 localhost systemd[1]: Stopped dracut pre-udev hook.
Feb 23 16:32:37 localhost systemd[1]: dracut-cmdline.service: Succeeded.
Feb 23 16:32:37 localhost systemd[1]: Stopped dracut cmdline hook.
Feb 23 16:32:37 localhost systemd[1]: dracut-cmdline-ask.service: Succeeded.
Feb 23 16:32:37 localhost systemd[1]: Stopped dracut ask for additional cmdline parameters.
Feb 23 16:32:37 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Succeeded.
Feb 23 16:32:37 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Feb 23 16:32:37 localhost systemd[1]: kmod-static-nodes.service: Succeeded.
Feb 23 16:32:37 localhost systemd[1]: Stopped Create list of required static device nodes for the current kernel.
Feb 23 16:32:37 localhost systemd[1]: systemd-udevd-control.socket: Succeeded.
Feb 23 16:32:37 localhost systemd[1]: Closed udev Control Socket.
Feb 23 16:32:37 localhost systemd[1]: systemd-udevd-kernel.socket: Succeeded.
Feb 23 16:32:37 localhost systemd[1]: Closed udev Kernel Socket.
Feb 23 16:32:37 localhost systemd[1]: Starting Cleanup udevd DB...
Feb 23 16:32:37 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Succeeded.
Feb 23 16:32:37 localhost systemd[1]: Started Cleanup udevd DB.
Feb 23 16:32:37 localhost systemd[1]: Reached target Switch Root.
Feb 23 16:32:37 localhost systemd[1]: Starting Switch Root...
Feb 23 16:32:37 localhost systemd[1]: Switching root.
Feb 23 16:32:37 localhost systemd-journald[299]: Journal stopped
Feb 23 16:32:38 localhost systemd[1]: Mounted /sysroot.
Feb 23 16:32:38 localhost systemd[1]: Starting OSTree Prepare OS/...
Feb 23 16:32:38 localhost ostree-prepare-root[582]: preparing sysroot at /sysroot
Feb 23 16:32:38 localhost ostree-prepare-root[582]: Resolved OSTree target to: /sysroot/ostree/deploy/rhcos/deploy/6e36750a3dfe11507ec8e0553290aae6c3652e7ed9983ae738ef6a78206752ea.0
Feb 23 16:32:38 localhost ostree-prepare-root[582]: filesystem at /sysroot currently writable: 1
Feb 23 16:32:38 localhost ostree-prepare-root[582]: sysroot.readonly configuration value: 1
Feb 23 16:32:38 localhost systemd[1]: sysroot-ostree-deploy-rhcos-deploy-6e36750a3dfe11507ec8e0553290aae6c3652e7ed9983ae738ef6a78206752ea.0-etc.mount: Succeeded.
Feb 23 16:32:38 localhost systemd[1]: sysroot-ostree-deploy-rhcos-deploy-6e36750a3dfe11507ec8e0553290aae6c3652e7ed9983ae738ef6a78206752ea.0.mount: Succeeded.
Feb 23 16:32:38 localhost systemd[1]: Started OSTree Prepare OS/.
Feb 23 16:32:38 localhost systemd[1]: Reached target Initrd Root File System.
Feb 23 16:32:38 localhost systemd[1]: Starting Reload Configuration from the Real Root...
Feb 23 16:32:38 localhost systemd[1]: Reloading.
Feb 23 16:32:38 localhost multipathd[539]: exit (signal)
Feb 23 16:32:38 localhost multipathd[539]: --------shut down-------
Feb 23 16:32:38 localhost systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Feb 23 16:32:38 localhost systemd[1]: initrd-parse-etc.service: Succeeded.
Feb 23 16:32:38 localhost systemd[1]: Started Reload Configuration from the Real Root.
Feb 23 16:32:38 localhost systemd[1]: multipathd.service: Succeeded.
Feb 23 16:32:38 localhost systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Feb 23 16:32:38 localhost systemd[1]: Reached target Initrd File Systems.
Feb 23 16:32:38 localhost systemd[1]: Reached target Initrd Default Target.
Feb 23 16:32:38 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Feb 23 16:32:38 localhost dracut-pre-pivot[687]: Feb 23 16:32:37 | /etc/multipath.conf does not exist, blacklisting all devices.
Feb 23 16:32:38 localhost dracut-pre-pivot[687]: Feb 23 16:32:37 | You can run "/sbin/mpathconf --enable" to create
Feb 23 16:32:38 localhost dracut-pre-pivot[687]: Feb 23 16:32:37 | /etc/multipath.conf. See man mpathconf(8) for more details
Feb 23 16:32:38 localhost systemd[1]: Started dracut pre-pivot and cleanup hook.
Feb 23 16:32:38 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Feb 23 16:32:38 localhost systemd[1]: Stopped target Timers.
Feb 23 16:32:38 localhost systemd[1]: clevis-luks-askpass.path: Succeeded.
Feb 23 16:32:38 localhost systemd[1]: Stopped Forward Password Requests to Clevis Directory Watch.
Feb 23 16:32:38 localhost systemd[1]: dracut-pre-pivot.service: Succeeded.
Feb 23 16:32:38 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Feb 23 16:32:38 localhost systemd[1]: Stopped target Remote File Systems.
Feb 23 16:32:38 localhost systemd[1]: Stopped target Initrd Default Target.
Feb 23 16:32:38 localhost systemd[1]: coreos-touch-run-agetty.service: Succeeded.
Feb 23 16:32:38 localhost systemd[1]: Stopped CoreOS: Touch /run/agetty.reload.
Feb 23 16:32:38 localhost systemd[1]: Stopped target Subsequent (Not Ignition) boot complete.
Feb 23 16:32:38 localhost systemd[1]: Stopped target Ignition Subsequent Boot Disk Setup.
Feb 23 16:32:38 localhost systemd[1]: Stopped target Initrd Root Device.
Feb 23 16:32:38 localhost systemd[1]: Stopped target Basic System.
Feb 23 16:32:38 localhost systemd[1]: Stopped target Sockets.
Feb 23 16:32:38 localhost systemd[1]: Stopped target Paths.
Feb 23 16:32:38 localhost systemd[1]: Stopped target Slices.
Feb 23 16:32:38 localhost systemd[1]: dracut-pre-mount.service: Succeeded.
Feb 23 16:32:38 localhost systemd[1]: Stopped dracut pre-mount hook.
Feb 23 16:32:38 localhost systemd[1]: Stopped target System Initialization.
Feb 23 16:32:38 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Feb 23 16:32:38 localhost systemd[1]: systemd-ask-password-console.path: Succeeded.
Feb 23 16:32:38 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Feb 23 16:32:38 localhost systemd[1]: systemd-udev-settle.service: Succeeded.
Feb 23 16:32:38 localhost systemd[1]: Stopped udev Wait for Complete Device Initialization.
Feb 23 16:32:38 localhost systemd[1]: systemd-sysctl.service: Succeeded.
Feb 23 16:32:38 localhost systemd[1]: Stopped Apply Kernel Variables.
Feb 23 16:32:38 localhost systemd[1]: systemd-modules-load.service: Succeeded.
Feb 23 16:32:38 localhost systemd[1]: Stopped Load Kernel Modules.
Feb 23 16:32:38 localhost systemd[1]: Stopped target Swap.
Feb 23 16:32:38 localhost systemd[1]: systemd-tmpfiles-setup.service: Succeeded.
Feb 23 16:32:38 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Feb 23 16:32:38 localhost systemd[1]: Stopped target Local File Systems.
Feb 23 16:32:38 localhost systemd[1]: Stopped target Local File Systems (Pre).
Feb 23 16:32:38 localhost systemd[1]: Stopped target Remote File Systems (Pre).
Feb 23 16:32:38 localhost systemd[1]: dracut-initqueue.service: Succeeded.
Feb 23 16:32:38 localhost systemd[1]: Stopped dracut initqueue hook.
Feb 23 16:32:38 localhost systemd[1]: systemd-udev-trigger.service: Succeeded.
Feb 23 16:32:38 localhost systemd[1]: Stopped udev Coldplug all Devices.
Feb 23 16:32:38 localhost systemd[1]: dracut-pre-trigger.service: Succeeded.
Feb 23 16:32:38 localhost systemd[1]: Stopped dracut pre-trigger hook.
Feb 23 16:32:38 localhost systemd[1]: Stopping udev Kernel Device Manager...
Feb 23 16:32:38 localhost systemd[1]: initrd-cleanup.service: Succeeded.
Feb 23 16:32:38 localhost systemd[1]: Started Cleaning Up and Shutting Down Daemons.
Feb 23 16:32:38 localhost systemd[1]: systemd-udevd.service: Succeeded.
Feb 23 16:32:38 localhost systemd[1]: Stopped udev Kernel Device Manager.
Feb 23 16:32:38 localhost systemd[1]: dracut-pre-udev.service: Succeeded.
Feb 23 16:32:38 localhost systemd[1]: Stopped dracut pre-udev hook.
Feb 23 16:32:38 localhost systemd[1]: dracut-cmdline.service: Succeeded.
Feb 23 16:32:38 localhost systemd[1]: Stopped dracut cmdline hook.
Feb 23 16:32:38 localhost systemd[1]: dracut-cmdline-ask.service: Succeeded.
Feb 23 16:32:38 localhost systemd[1]: Stopped dracut ask for additional cmdline parameters.
Feb 23 16:32:38 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Succeeded.
Feb 23 16:32:38 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Feb 23 16:32:38 localhost systemd[1]: kmod-static-nodes.service: Succeeded.
Feb 23 16:32:38 localhost systemd[1]: Stopped Create list of required static device nodes for the current kernel.
Feb 23 16:32:38 localhost systemd[1]: systemd-udevd-control.socket: Succeeded.
Feb 23 16:32:38 localhost systemd[1]: Closed udev Control Socket.
Feb 23 16:32:38 localhost systemd[1]: systemd-udevd-kernel.socket: Succeeded.
Feb 23 16:32:38 localhost systemd[1]: Closed udev Kernel Socket.
Feb 23 16:32:38 localhost systemd[1]: Starting Cleanup udevd DB...
Feb 23 16:32:38 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Succeeded.
Feb 23 16:32:38 localhost systemd[1]: Started Cleanup udevd DB.
Feb 23 16:32:38 localhost systemd[1]: Reached target Switch Root.
Feb 23 16:32:38 localhost systemd[1]: Starting Switch Root...
Feb 23 16:32:38 localhost systemd[1]: Switching root.
Feb 23 16:32:38 localhost kernel: printk: systemd: 22 output lines suppressed due to ratelimiting
Feb 23 16:32:38 localhost kernel: audit: type=1404 audit(1677169957.937:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Feb 23 16:32:38 localhost kernel: SELinux: policy capability network_peer_controls=1
Feb 23 16:32:38 localhost kernel: SELinux: policy capability open_perms=1
Feb 23 16:32:38 localhost kernel: SELinux: policy capability extended_socket_class=1
Feb 23 16:32:38 localhost kernel: SELinux: policy capability always_check_network=0
Feb 23 16:32:38 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Feb 23 16:32:38 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 23 16:32:38 localhost kernel: audit: type=1403 audit(1677169958.114:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 23 16:32:38 localhost systemd[1]: Successfully loaded SELinux policy in 180.709ms.
Feb 23 16:32:38 localhost systemd[1]: Relabelled /dev, /run and /sys/fs/cgroup in 16.851ms.
Feb 23 16:32:38 localhost systemd[1]: systemd 239 (239-58.el8_6.9) running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=legacy)
Feb 23 16:32:38 localhost systemd[1]: Detected virtualization kvm.
Feb 23 16:32:38 localhost systemd[1]: Detected architecture x86-64.
Feb 23 16:32:38 localhost coreos-platform-chrony: Updated chrony to use aws configuration /run/coreos-platform-chrony.conf
Feb 23 16:32:38 localhost systemd[1]: /usr/lib/systemd/system/bootupd.service:22: Unknown lvalue 'ProtectHostname' in section 'Service'
Feb 23 16:32:38 localhost systemd[1]: systemd-journald.service: Succeeded.
Feb 23 16:32:38 localhost systemd[1]: systemd-journald.service: Consumed 0 CPU time
Feb 23 16:32:38 localhost systemd[1]: initrd-switch-root.service: Succeeded.
Feb 23 16:32:38 localhost systemd[1]: Stopped Switch Root.
Feb 23 16:32:38 localhost systemd[1]: initrd-switch-root.service: Consumed 0 CPU time
Feb 23 16:32:38 localhost systemd[1]: systemd-journald.service: Service has no hold-off time (RestartSec=0), scheduling restart.
Feb 23 16:32:38 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 23 16:32:38 localhost systemd[1]: Stopped Journal Service.
Feb 23 16:32:38 localhost systemd[1]: systemd-journald.service: Consumed 0 CPU time
Feb 23 16:32:38 localhost systemd[1]: Starting Journal Service...
Feb 23 16:32:38 localhost systemd[1]: Listening on LVM2 poll daemon socket.
Feb 23 16:32:38 localhost systemd[1]: Created slice system-sshd\x2dkeygen.slice.
Feb 23 16:32:38 localhost systemd[1]: Created slice system-getty.slice.
Feb 23 16:32:38 localhost systemd[1]: Listening on udev Kernel Socket.
Feb 23 16:32:38 localhost systemd-journald[755]: Journal started
Feb 23 16:32:38 localhost systemd-journald[755]: Runtime journal (/run/log/journal/ec2d456b0a3e28d0eb2f198315e90643) is 8.0M, max 787.4M, 779.4M free.
Feb 23 16:32:38 localhost systemd[1]: Listening on udev Control Socket.
Feb 23 16:32:38 localhost systemd[1]: Starting Create list of required static device nodes for the current kernel...
Feb 23 16:32:38 localhost systemd[1]: Stopped target Switch Root.
Feb 23 16:32:38 localhost systemd[1]: Stopped target Initrd File Systems.
Feb 23 16:32:38 localhost systemd[1]: Listening on Device-mapper event daemon FIFOs.
Feb 23 16:32:38 localhost systemd[1]: systemd-fsck-root.service: Succeeded.
Feb 23 16:32:38 localhost systemd[1]: Stopped File System Check on Root Device.
Feb 23 16:32:38 localhost systemd[1]: systemd-fsck-root.service: Consumed 0 CPU time
Feb 23 16:32:38 localhost systemd[1]: Reached target Synchronize afterburn-sshkeys@.service template instances.
Feb 23 16:32:38 localhost systemd[1]: Reached target Swap.
Feb 23 16:32:38 localhost systemd[1]: Mounting Temporary Directory (/tmp)...
Feb 23 16:32:38 localhost systemd[1]: Created slice User and Session Slice.
Feb 23 16:32:38 localhost systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Feb 23 16:32:38 localhost systemd[1]: Started Forward Password Requests to Clevis Directory Watch.
Feb 23 16:32:38 localhost systemd[1]: Starting udev Coldplug all Devices...
Feb 23 16:32:38 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Feb 23 16:32:38 localhost systemd[1]: Mounting POSIX Message Queue File System...
Feb 23 16:32:38 localhost systemd[1]: Stopped target Initrd Root File System.
Feb 23 16:32:38 localhost systemd[1]: Reached target Remote File Systems.
Feb 23 16:32:38 localhost systemd[1]: Mounting Huge Pages File System...
Feb 23 16:32:38 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Feb 23 16:32:38 localhost systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 23 16:32:38 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Feb 23 16:32:38 localhost systemd[1]: Starting CoreOS: Set printk To Level 4 (warn)...
Feb 23 16:32:38 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Feb 23 16:32:38 localhost systemd[1]: Mounting Kernel Debug File System...
Feb 23 16:32:38 localhost systemd[1]: Starting Rebuild Hardware Database...
Feb 23 16:32:38 localhost systemd[1]: Starting Create System Users...
Feb 23 16:32:38 localhost systemd[1]: Listening on Process Core Dump Socket.
Feb 23 16:32:38 localhost systemd[1]: Reached target Slices.
Feb 23 16:32:38 localhost systemd[1]: Reached target RPC Port Mapper.
Feb 23 16:32:38 localhost systemd[1]: ostree-prepare-root.service: Succeeded.
Feb 23 16:32:38 localhost systemd[1]: Stopped OSTree Prepare OS/.
Feb 23 16:32:38 localhost systemd[1]: ostree-prepare-root.service: Consumed 0 CPU time
Feb 23 16:32:38 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Feb 23 16:32:38 localhost systemd[1]: Reached target Host and Network Name Lookups.
Feb 23 16:32:38 localhost systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 23 16:32:38 localhost systemd[1]: Starting Load Kernel Modules...
Feb 23 16:32:38 localhost systemd[1]: Reached target Local Encrypted Volumes (Pre).
Feb 23 16:32:38 localhost systemd[1]: Reached target Local Encrypted Volumes.
Feb 23 16:32:38 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Feb 23 16:32:38 localhost systemd-modules-load[777]: Module 'msr' is builtin
Feb 23 16:32:38 localhost systemd[1]: Started Create list of required static device nodes for the current kernel.
Feb 23 16:32:38 localhost systemd-modules-load[777]: Inserted module 'ip_tables'
Feb 23 16:32:38 localhost systemd[1]: Mounted Temporary Directory (/tmp).
Feb 23 16:32:38 localhost systemd[1]: sysroot-sysroot.mount: Succeeded.
Feb 23 16:32:38 localhost systemd[1]: sysroot-sysroot.mount: Consumed 0 CPU time
Feb 23 16:32:38 localhost systemd[1]: sysroot-sysroot-ostree-deploy-rhcos-var.mount: Succeeded.
Feb 23 16:32:38 localhost systemd[1]: sysroot-sysroot-ostree-deploy-rhcos-var.mount: Consumed 0 CPU time
Feb 23 16:32:38 localhost systemd[1]: sysroot-usr.mount: Succeeded.
Feb 23 16:32:38 localhost systemd[1]: sysroot-usr.mount: Consumed 0 CPU time
Feb 23 16:32:38 localhost systemd[1]: sysroot-etc.mount: Succeeded.
Feb 23 16:32:38 localhost systemd[1]: sysroot-etc.mount: Consumed 0 CPU time
Feb 23 16:32:38 localhost systemd[1]: Started Journal Service.
Feb 23 16:32:38 localhost systemd[1]: Started Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Feb 23 16:32:38 localhost systemd[1]: Mounted POSIX Message Queue File System.
Feb 23 16:32:38 localhost systemd[1]: Mounted Huge Pages File System.
Feb 23 16:32:38 localhost systemd[1]: Started CoreOS: Set printk To Level 4 (warn).
Feb 23 16:32:38 localhost systemd[1]: Mounted Kernel Debug File System.
Feb 23 16:32:39 localhost systemd[1]: Started Create System Users.
Feb 23 16:32:39 localhost systemd[1]: Started Load Kernel Modules.
Feb 23 16:32:39 localhost systemd[1]: Mounting FUSE Control File System...
Feb 23 16:32:39 localhost systemd[1]: Starting Apply Kernel Variables...
Feb 23 16:32:39 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Feb 23 16:32:39 localhost systemd[1]: Started udev Coldplug all Devices.
Feb 23 16:32:39 localhost systemd[1]: Mounted FUSE Control File System.
Feb 23 16:32:39 localhost systemd[1]: Started Apply Kernel Variables.
Feb 23 16:32:39 localhost systemd[1]: Starting udev Wait for Complete Device Initialization...
Feb 23 16:32:39 localhost systemd[1]: Started Rebuild Hardware Database.
Feb 23 16:32:39 localhost systemd[1]: Started Create Static Device Nodes in /dev.
Feb 23 16:32:39 localhost systemd[1]: Starting udev Kernel Device Manager...
Feb 23 16:32:39 localhost systemd[1]: Started udev Kernel Device Manager.
Feb 23 16:32:39 localhost systemd-udevd[794]: Using default interface naming scheme 'rhel-8.0'.
Feb 23 16:32:39 localhost systemd-udevd[794]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 23 16:32:39 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input5
Feb 23 16:32:39 localhost kernel: parport_pc 00:03: reported by Plug and Play ACPI
Feb 23 16:32:39 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255
Feb 23 16:32:39 localhost systemd-udevd[796]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 23 16:32:39 localhost kernel: ppdev: user-space parallel port driver
Feb 23 16:32:39 localhost kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 23 16:32:39 localhost systemd[1]: Started udev Wait for Complete Device Initialization.
Feb 23 16:32:39 localhost systemd[1]: Reached target Local File Systems (Pre).
Feb 23 16:32:39 localhost systemd[1]: var.mount: Directory /var to mount over is not empty, mounting anyway.
Feb 23 16:32:39 localhost systemd[1]: Mounting /var...
Feb 23 16:32:39 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/54e5ab65-ff73-4a26-8c44-2a9765abf45f...
Feb 23 16:32:39 localhost systemd[1]: Mounted /var.
Feb 23 16:32:39 localhost systemd[1]: Starting OSTree Remount OS/ Bind Mounts...
Feb 23 16:32:39 localhost systemd[1]: Started OSTree Remount OS/ Bind Mounts.
Feb 23 16:32:39 localhost systemd[1]: Starting Load/Save Random Seed...
Feb 23 16:32:39 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Feb 23 16:32:39 localhost systemd[1]: Started Load/Save Random Seed.
Feb 23 16:32:39 localhost systemd-journald[755]: Time spent on flushing to /var is 135.011ms for 887 entries.
Feb 23 16:32:39 localhost systemd-journald[755]: System journal (/var/log/journal/ec2d456b0a3e28d0eb2f198315e90643) is 16.0M, max 4.0G, 3.9G free.
Feb 23 16:32:39 localhost kernel: EXT4-fs (nvme0n1p3): mounted filesystem with ordered data mode. Opts: (null)
Feb 23 16:32:39 localhost systemd[1]: Started File System Check on /dev/disk/by-uuid/54e5ab65-ff73-4a26-8c44-2a9765abf45f.
Feb 23 16:32:39 localhost systemd-fsck[833]: boot: clean, 329/98304 files, 236811/393216 blocks
Feb 23 16:32:39 localhost systemd[1]: Mounting CoreOS Dynamic Mount for /boot...
Feb 23 16:32:39 localhost systemd[1]: Mounted CoreOS Dynamic Mount for /boot.
Feb 23 16:32:39 localhost systemd[1]: Reached target Local File Systems.
Feb 23 16:32:39 localhost systemd[1]: Starting Run update-ca-trust...
Feb 23 16:32:39 localhost systemd[1]: Starting Rebuild Journal Catalog...
Feb 23 16:32:39 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Feb 23 16:32:39 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Feb 23 16:32:39 localhost systemd[1]: Started Restore /run/initramfs on shutdown.
Feb 23 16:32:39 localhost systemd[1]: Started Rebuild Journal Catalog.
Feb 23 16:32:39 localhost systemd[1]: Started Flush Journal to Persistent Storage.
Feb 23 16:32:39 localhost systemd[1]: Starting Create Volatile Files and Directories...
Feb 23 16:32:39 localhost systemd-tmpfiles[854]: [/usr/lib/tmpfiles.d/pkg-dbus-daemon.conf:1] Duplicate line for path "/var/lib/dbus", ignoring.
Feb 23 16:32:39 localhost systemd-tmpfiles[854]: [/usr/lib/tmpfiles.d/tmp.conf:12] Duplicate line for path "/var/tmp", ignoring.
Feb 23 16:32:39 localhost systemd-tmpfiles[854]: [/usr/lib/tmpfiles.d/var.conf:14] Duplicate line for path "/var/log", ignoring.
Feb 23 16:32:39 localhost systemd-tmpfiles[854]: [/usr/lib/tmpfiles.d/var.conf:19] Duplicate line for path "/var/cache", ignoring.
Feb 23 16:32:39 localhost systemd-tmpfiles[854]: [/usr/lib/tmpfiles.d/var.conf:21] Duplicate line for path "/var/lib", ignoring.
Feb 23 16:32:39 localhost systemd-tmpfiles[854]: [/usr/lib/tmpfiles.d/var.conf:23] Duplicate line for path "/var/spool", ignoring.
Feb 23 16:32:39 localhost systemd-tmpfiles[854]: "/home" already exists and is not a directory.
Feb 23 16:32:39 localhost systemd-tmpfiles[854]: "/srv" already exists and is not a directory.
Feb 23 16:32:39 localhost systemd[1]: Started Create Volatile Files and Directories.
Feb 23 16:32:39 localhost systemd[1]: Starting RHEL CoreOS Rebuild SELinux Policy If Necessary...
Feb 23 16:32:39 localhost rhcos-rebuild-selinux-policy[858]: RHEL_VERSION=8.6
Feb 23 16:32:39 localhost rhcos-rebuild-selinux-policy[858]: Checking for policy recompilation
Feb 23 16:32:39 localhost systemd[1]: Starting Security Auditing Service...
Feb 23 16:32:39 localhost systemd[1]: Starting RHCOS Fix SELinux Labeling For /usr/local/sbin...
Feb 23 16:32:39 localhost chcon[862]: changing security context of '/usr/local/sbin'
Feb 23 16:32:39 localhost rhcos-rebuild-selinux-policy[860]: -rw-r--r--. 1 root root 8914149 Feb 23 16:31 /etc/selinux/targeted/policy/policy.31
Feb 23 16:32:39 localhost rhcos-rebuild-selinux-policy[860]: -rw-r--r--. 3 root root 8914149 Jan 1 1970 /usr/etc/selinux/targeted/policy/policy.31
Feb 23 16:32:39 localhost systemd[1]: Started RHCOS Fix SELinux Labeling For /usr/local/sbin.
Feb 23 16:32:39 localhost auditd[867]: No plugins found, not dispatching events
Feb 23 16:32:39 localhost auditd[867]: Init complete, auditd 3.0.7 listening for events (startup state enable)
Feb 23 16:32:39 localhost augenrules[870]: /sbin/augenrules: No change
Feb 23 16:32:39 localhost augenrules[881]: No rules
Feb 23 16:32:39 localhost augenrules[881]: enabled 1
Feb 23 16:32:39 localhost augenrules[881]: failure 1
Feb 23 16:32:39 localhost augenrules[881]: pid 867
Feb 23 16:32:39 localhost augenrules[881]: rate_limit 0
Feb 23 16:32:39 localhost augenrules[881]: backlog_limit 8192
Feb 23 16:32:39 localhost augenrules[881]: lost 0
Feb 23 16:32:39 localhost augenrules[881]: backlog 0
Feb 23 16:32:39 localhost augenrules[881]: backlog_wait_time 60000
Feb 23 16:32:39 localhost augenrules[881]: backlog_wait_time_actual 0
Feb 23 16:32:39 localhost augenrules[881]: enabled 1
Feb 23 16:32:39 localhost augenrules[881]: failure 1
Feb 23 16:32:39 localhost augenrules[881]: pid 867
Feb 23 16:32:39 localhost augenrules[881]: rate_limit 0
Feb 23 16:32:39 localhost augenrules[881]: backlog_limit 8192
Feb 23 16:32:39 localhost augenrules[881]: lost 0
Feb 23 16:32:39 localhost augenrules[881]: backlog 0
Feb 23 16:32:39 localhost augenrules[881]: backlog_wait_time 60000
Feb 23 16:32:39 localhost augenrules[881]: backlog_wait_time_actual 0
Feb 23 16:32:39 localhost augenrules[881]: enabled 1
Feb 23 16:32:39 localhost augenrules[881]: failure 1
Feb 23 16:32:39 localhost augenrules[881]: pid 867
Feb 23 16:32:39 localhost augenrules[881]: rate_limit 0
Feb 23 16:32:39 localhost augenrules[881]: backlog_limit 8192
Feb 23 16:32:39 localhost augenrules[881]: lost 0
Feb 23 16:32:39 localhost augenrules[881]: backlog 0
Feb 23 16:32:39 localhost augenrules[881]: backlog_wait_time 60000
Feb 23 16:32:39 localhost augenrules[881]: backlog_wait_time_actual 0
Feb 23 16:32:39 localhost systemd[1]: Started RHEL CoreOS Rebuild SELinux Policy If Necessary.
Feb 23 16:32:39 localhost systemd[1]: Started Security Auditing Service.
Feb 23 16:32:39 localhost systemd[1]: Starting Update UTMP about System Boot/Shutdown...
Feb 23 16:32:39 localhost systemd[1]: Started Update UTMP about System Boot/Shutdown.
Feb 23 16:32:40 localhost systemd[1]: Started Run update-ca-trust.
Feb 23 16:32:40 localhost systemd[1]: Started Rebuild Dynamic Linker Cache.
Feb 23 16:32:40 localhost systemd[1]: Starting Update is Completed...
Feb 23 16:32:40 localhost systemd[1]: Started Update is Completed.
Feb 23 16:32:40 localhost systemd[1]: Reached target System Initialization.
Feb 23 16:32:40 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Feb 23 16:32:40 localhost systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Feb 23 16:32:40 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Feb 23 16:32:40 localhost systemd[1]: Listening on bootupd.socket.
Feb 23 16:32:40 localhost systemd[1]: Reached target Sockets.
Feb 23 16:32:40 localhost systemd[1]: Started OSTree Monitor Staged Deployment.
Feb 23 16:32:40 localhost systemd[1]: Started Daily rotation of log files.
Feb 23 16:32:40 localhost systemd[1]: Reached target Timers.
Feb 23 16:32:40 localhost systemd[1]: Started Monitor console-login-helper-messages runtime issue snippets directory for changes.
Feb 23 16:32:40 localhost systemd[1]: Reached target Paths.
Feb 23 16:32:40 localhost systemd[1]: Reached target Basic System.
Feb 23 16:32:40 localhost systemd[1]: Starting CRI-O Auto Update Script...
Feb 23 16:32:40 localhost systemd[1]: Starting Generation of shadow ID ranges for CRI-O...
Feb 23 16:32:40 localhost systemd[1]: Starting NTP client/server...
Feb 23 16:32:40 localhost systemd[1]: Started D-Bus System Message Bus.
Feb 23 16:32:40 localhost systemd[1]: Reached target Network (Pre).
Feb 23 16:32:40 localhost systemd[1]: Starting Open vSwitch Database Unit...
Feb 23 16:32:40 localhost systemd[1]: Starting System Security Services Daemon...
Feb 23 16:32:40 localhost systemd[1]: Reached target sshd-keygen.target.
Feb 23 16:32:40 localhost systemd[1]: Starting Generate SSH keys snippet for display via console-login-helper-messages...
Feb 23 16:32:40 localhost chronyd[912]: chronyd version 4.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Feb 23 16:32:40 localhost systemd[1]: Starting Create Ignition Status Issue Files...
Feb 23 16:32:40 localhost systemd[1]: Started irqbalance daemon.
Feb 23 16:32:40 localhost systemd[1]: Starting Generate console-login-helper-messages issue snippet...
Feb 23 16:32:40 localhost chronyd[912]: Frequency 0.607 +/- 0.192 ppm read from /var/lib/chrony/drift
Feb 23 16:32:40 localhost systemd[1]: Starting Afterburn (Metadata)...
Feb 23 16:32:40 localhost chown[923]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Feb 23 16:32:40 localhost systemd[1]: Started NTP client/server.
Feb 23 16:32:40 localhost systemd[1]: crio-subid.service: Succeeded.
Feb 23 16:32:40 localhost systemd[1]: Started Generation of shadow ID ranges for CRI-O.
Feb 23 16:32:40 localhost systemd[1]: crio-subid.service: Consumed 24ms CPU time
Feb 23 16:32:40 localhost systemd[1]: Started Generate SSH keys snippet for display via console-login-helper-messages.
Feb 23 16:32:40 localhost afterburn[921]: Feb 23 16:32:40.450 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Feb 23 16:32:40 localhost sssd[908]: Starting up
Feb 23 16:32:40 localhost sssd_be[976]: Starting up
Feb 23 16:32:40 localhost systemd[1]: Started Create Ignition Status Issue Files.
Feb 23 16:32:40 localhost sssd_nss[997]: Starting up
Feb 23 16:32:40 localhost systemd[1]: Started System Security Services Daemon.
Feb 23 16:32:40 localhost systemd[1]: Reached target User and Group Name Lookups.
Feb 23 16:32:40 localhost systemd[1]: Starting Login Service...
Feb 23 16:32:40 localhost systemd-logind[1014]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 23 16:32:40 localhost systemd-logind[1014]: Watching system buttons on /dev/input/event1 (Sleep Button)
Feb 23 16:32:40 localhost systemd-logind[1014]: Watching system buttons on /dev/input/event2 (AT Translated Set 2 keyboard)
Feb 23 16:32:40 localhost systemd-logind[1014]: New seat seat0.
Feb 23 16:32:40 localhost systemd[1]: Started Login Service.
Feb 23 16:32:40 localhost ovsdb-server[1025]: ovs|00002|stream_ssl|ERR|SSL_use_certificate_file: error:02001002:system library:fopen:No such file or directory
Feb 23 16:32:40 localhost ovsdb-server[1025]: ovs|00003|stream_ssl|ERR|SSL_use_PrivateKey_file: error:20074002:BIO routines:file_ctrl:system lib
Feb 23 16:32:40 localhost ovs-ctl[952]: Starting ovsdb-server.
Feb 23 16:32:40 localhost ovs-vsctl[1026]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.3.0
Feb 23 16:32:40 localhost ovs-vsctl[1031]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=2.17.6 "external-ids:system-id=\"4004906b-6ca5-4a32-b3c0-bdcf1c128aba\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"rhcos\"" "system-version=\"4.12\""
Feb 23 16:32:40 localhost ovs-ctl[952]: Configuring Open vSwitch system IDs.
Feb 23 16:32:40 localhost ovs-vsctl[1038]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=localhost
Feb 23 16:32:40 localhost ovs-ctl[952]: Enabling remote OVSDB managers.
Feb 23 16:32:40 localhost systemd[1]: Started Open vSwitch Database Unit.
Feb 23 16:32:40 localhost systemd[1]: Starting Open vSwitch Delete Transient Ports...
Feb 23 16:32:40 localhost ovs-vsctl[1046]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- del-port f879576786b0889
Feb 23 16:32:40 localhost ovs-vsctl[1047]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- del-port 0c751590d84e3dc
Feb 23 16:32:40 localhost ovs-vsctl[1048]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- del-port ff0a102645f986a
Feb 23 16:32:40 localhost ovs-vsctl[1049]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- del-port 35539c92883319b
Feb 23 16:32:40 localhost systemd[1]: Started Open vSwitch Delete Transient Ports.
Feb 23 16:32:40 localhost systemd[1]: Starting Open vSwitch Forwarding Unit...
Feb 23 16:32:40 localhost ovs-ctl[1092]: Inserting openvswitch module.
Feb 23 16:32:40 localhost kernel: openvswitch: Open vSwitch switching datapath
Feb 23 16:32:40 localhost crio[899]: time="2023-02-23 16:32:40.934483736Z" level=info msg="Starting CRI-O, version: 1.25.2-6.rhaos4.12.git3c4e50c.el8, git: unknown(clean)"
Feb 23 16:32:41 localhost systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck3080637715-merged.mount: Succeeded.
Feb 23 16:32:41 localhost ovs-vswitchd[1105]: ovs|00007|stream_ssl|ERR|SSL_use_certificate_file: error:02001002:system library:fopen:No such file or directory
Feb 23 16:32:41 localhost ovs-vswitchd[1105]: ovs|00008|stream_ssl|ERR|SSL_use_PrivateKey_file: error:20074002:BIO routines:file_ctrl:system lib
Feb 23 16:32:41 localhost ovs-vswitchd[1105]: ovs|00009|stream_ssl|ERR|failed to load client certificates from /ovn-ca/ca-bundle.crt: error:140AD002:SSL routines:SSL_CTX_use_certificate_file:system lib
Feb 23 16:32:41 localhost kernel: device ovs-system entered promiscuous mode
Feb 23 16:32:41 localhost kernel: Timeout policy base is empty
Feb 23 16:32:41 localhost kernel: Failed to associated timeout policy `ovs_test_tp'
Feb 23 16:32:41 localhost systemd-udevd[1109]: Using default interface naming scheme 'rhel-8.0'.
Feb 23 16:32:41 localhost systemd-udevd[1109]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 23 16:32:41 localhost systemd-udevd[1109]: Could not generate persistent MAC address for ovs-system: No such file or directory
Feb 23 16:32:41 localhost kernel: device ens5 entered promiscuous mode
Feb 23 16:32:41 localhost systemd-udevd[1127]: Using default interface naming scheme 'rhel-8.0'.
Feb 23 16:32:41 localhost systemd-udevd[1127]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 23 16:32:41 localhost systemd-udevd[1127]: Could not generate persistent MAC address for genev_sys_6081: No such file or directory
Feb 23 16:32:41 localhost kernel: device genev_sys_6081 entered promiscuous mode
Feb 23 16:32:41 localhost systemd-udevd[1125]: Using default interface naming scheme 'rhel-8.0'.
Feb 23 16:32:41 localhost systemd-udevd[1125]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 23 16:32:41 localhost systemd-udevd[1125]: Could not generate persistent MAC address for br-int: No such file or directory
Feb 23 16:32:41 localhost kernel: device br-int entered promiscuous mode
Feb 23 16:32:41 localhost systemd-udevd[1132]: Using default interface naming scheme 'rhel-8.0'.
Feb 23 16:32:41 localhost systemd-udevd[1132]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 23 16:32:41 localhost ovs-ctl[1063]: Starting ovs-vswitchd.
Feb 23 16:32:41 localhost kernel: device ovn-k8s-mp0 entered promiscuous mode
Feb 23 16:32:41 localhost ovs-ctl[1063]: Enabling remote OVSDB managers.
Feb 23 16:32:41 localhost ovs-vsctl[1143]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=localhost
Feb 23 16:32:41 localhost systemd[1]: Started Open vSwitch Forwarding Unit.
Feb 23 16:32:41 localhost systemd[1]: Starting Open vSwitch...
Feb 23 16:32:41 localhost systemd[1]: Started Open vSwitch.
Feb 23 16:32:41 localhost systemd[1]: Starting Network Manager...
Feb 23 16:32:41 localhost crio[899]: time="2023-02-23 16:32:41.188146447Z" level=info msg="Checking whether cri-o should wipe containers: open /var/run/crio/version: no such file or directory"
Feb 23 16:32:41 localhost systemd[1]: crio-wipe.service: Succeeded.
Feb 23 16:32:41 localhost systemd[1]: Started CRI-O Auto Update Script.
Feb 23 16:32:41 localhost systemd[1]: crio-wipe.service: Consumed 107ms CPU time
Feb 23 16:32:41 localhost NetworkManager[1147]: [1677169961.2142] NetworkManager (version 1.36.0-12.el8_6) is starting... (for the first time)
Feb 23 16:32:41 localhost NetworkManager[1147]: [1677169961.2145] Read config: /etc/NetworkManager/NetworkManager.conf (lib: 10-disable-default-plugins.conf, 20-client-id-from-mac.conf) (etc: 20-keyfiles.conf, sdn.conf)
Feb 23 16:32:41 localhost systemd[1]: Started Network Manager.
Feb 23 16:32:41 localhost NetworkManager[1147]: [1677169961.2191] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Feb 23 16:32:41 localhost systemd[1]: Reached target Network.
Feb 23 16:32:41 localhost systemd[1]: Starting OpenSSH server daemon...
Feb 23 16:32:41 localhost systemd[1]: Starting Network Manager Wait Online...
Feb 23 16:32:41 localhost NetworkManager[1147]: [1677169961.2323] manager[0x560863be4040]: monitoring kernel firmware directory '/lib/firmware'.
Feb 23 16:32:41 localhost dbus-daemon[903]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.6' (uid=0 pid=1147 comm="/usr/sbin/NetworkManager --no-daemon " label="system_u:system_r:NetworkManager_t:s0")
Feb 23 16:32:41 localhost systemd[1]: Starting Hostname Service...
Feb 23 16:32:41 localhost sshd[1151]: Server listening on 0.0.0.0 port 22.
Feb 23 16:32:41 localhost sshd[1151]: Server listening on :: port 22.
Feb 23 16:32:41 localhost systemd[1]: Started OpenSSH server daemon.
Feb 23 16:32:41 localhost dbus-daemon[903]: [system] Successfully activated service 'org.freedesktop.hostname1'
Feb 23 16:32:41 localhost systemd[1]: Started Hostname Service.
Feb 23 16:32:41 localhost NetworkManager[1147]: [1677169961.3350] hostname: hostname: using hostnamed
Feb 23 16:32:41 localhost NetworkManager[1147]: [1677169961.3354] dns-mgr[0x560863bbf850]: init: dns=default,systemd-resolved rc-manager=symlink
Feb 23 16:32:41 localhost NetworkManager[1147]: [1677169961.3354] policy: set-hostname: set hostname to 'localhost.localdomain' (no hostname found)
Feb 23 16:32:41 localhost.localdomain systemd-hostnamed[1156]: Changed host name to 'localhost.localdomain'
Feb 23 16:32:41 localhost.localdomain NetworkManager[1147]: [1677169961.3447] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.36.0-12.el8_6/libnm-device-plugin-ovs.so)
Feb 23 16:32:41 localhost.localdomain NetworkManager[1147]: [1677169961.3475] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.36.0-12.el8_6/libnm-device-plugin-team.so)
Feb 23 16:32:41 localhost.localdomain NetworkManager[1147]: [1677169961.3475] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Feb 23 16:32:41 localhost.localdomain NetworkManager[1147]: [1677169961.3476] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Feb 23 16:32:41 localhost.localdomain NetworkManager[1147]: [1677169961.3477] manager: Networking is enabled by state file
Feb 23 16:32:41 localhost.localdomain dbus-daemon[903]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service' requested by ':1.6' (uid=0 pid=1147 comm="/usr/sbin/NetworkManager --no-daemon " label="system_u:system_r:NetworkManager_t:s0")
Feb 23 16:32:41 localhost.localdomain NetworkManager[1147]: [1677169961.3482] settings: Loaded settings plugin: keyfile (internal)
Feb 23 16:32:41 localhost.localdomain systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 23 16:32:41 localhost.localdomain NetworkManager[1147]: [1677169961.3525] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.36.0-12.el8_6/libnm-settings-plugin-ifcfg-rh.so")
Feb 23 16:32:41 localhost.localdomain NetworkManager[1147]: [1677169961.3555] dhcp-init: Using DHCP client 'internal'
Feb 23 16:32:41 localhost.localdomain NetworkManager[1147]: [1677169961.3555] device (lo): carrier: link connected
Feb 23 16:32:41 localhost.localdomain NetworkManager[1147]: [1677169961.3558] manager: (lo): new Generic device (/org/freedesktop/NetworkManager/Devices/1)
Feb 23 16:32:41 localhost.localdomain NetworkManager[1147]: [1677169961.3565] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/2)
Feb 23 16:32:41 localhost.localdomain NetworkManager[1147]: [1677169961.3569] manager: (ovn-k8s-mp0): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/3)
Feb 23 16:32:41 localhost.localdomain NetworkManager[1147]: [1677169961.3574] manager: (ens5): new Ethernet device (/org/freedesktop/NetworkManager/Devices/4)
Feb 23 16:32:41 localhost.localdomain dbus-daemon[903]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher'
Feb 23 16:32:41 localhost.localdomain systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 23 16:32:41 localhost.localdomain NetworkManager[1147]: [1677169961.3630] settings: (ens5): created default wired connection 'Wired connection 1'
Feb 23 16:32:41 localhost.localdomain NetworkManager[1147]: [1677169961.3632] device (ens5): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
Feb 23 16:32:41 localhost.localdomain kernel: IPv6: ADDRCONF(NETDEV_UP): ens5: link is not ready
Feb 23 16:32:41 localhost.localdomain NetworkManager[1147]: [1677169961.3668] device (ens5): carrier: link connected
Feb 23 16:32:41 localhost.localdomain NetworkManager[1147]: [1677169961.3682] device (genev_sys_6081): carrier: link connected
Feb 23 16:32:41 localhost.localdomain NetworkManager[1147]: [1677169961.3684] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/5)
Feb 23 16:32:41 localhost.localdomain NetworkManager[1147]: [1677169961.3754] manager: (patch-br-ex_ip-10-0-136-68.us-west-2.compute.internal-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/6)
Feb 23 16:32:41 localhost.localdomain NetworkManager[1147]: [1677169961.3759] manager: (patch-br-int-to-br-ex_ip-10-0-136-68.us-west-2.compute.internal): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/7)
Feb 23 16:32:41 localhost.localdomain NetworkManager[1147]: [1677169961.3762] manager: (ovn-72cfee-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Feb 23 16:32:41 localhost.localdomain NetworkManager[1147]: [1677169961.3766] manager: (ovn-k8s-mp0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Feb 23 16:32:41 localhost.localdomain NetworkManager[1147]: [1677169961.3770] manager: (ens5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Feb 23 16:32:41 localhost.localdomain NetworkManager[1147]: [1677169961.3773] manager: (ovn-7dfb31-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/11)
Feb 23 16:32:41 localhost.localdomain NetworkManager[1147]: [1677169961.3776] manager: (ovn-b823f7-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/12)
Feb 23 16:32:41 localhost.localdomain NetworkManager[1147]: [1677169961.3780] manager: (patch-br-ex_ip-10-0-136-68.us-west-2.compute.internal-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/13)
Feb 23 16:32:41 localhost.localdomain NetworkManager[1147]: [1677169961.3783] manager: (ovn-061a07-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/14)
Feb 23 16:32:41 localhost.localdomain NetworkManager[1147]: [1677169961.3787] manager: (ovn-5a9c4f-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/15)
Feb 23 16:32:41 localhost.localdomain NetworkManager[1147]: [1677169961.3791] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/16)
Feb 23 16:32:41 localhost.localdomain NetworkManager[1147]: [1677169961.3795] manager: (patch-br-int-to-br-ex_ip-10-0-136-68.us-west-2.compute.internal): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Feb 23 16:32:41 localhost.localdomain NetworkManager[1147]: [1677169961.3798] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Feb 23 16:32:41 localhost.localdomain NetworkManager[1147]: [1677169961.3802] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/19)
Feb 23 16:32:41 localhost.localdomain ovs-vswitchd[1105]: ovs|00054|bridge|INFO|bridge br-ex: deleted interface ens5 on port 1
Feb 23 16:32:41 localhost.localdomain NetworkManager[1147]: [1677169961.3856] device (ens5): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'managed')
Feb 23 16:32:41 localhost.localdomain kernel: device ens5 left promiscuous mode
Feb 23 16:32:41 localhost.localdomain NetworkManager[1147]: [1677169961.3882] policy: auto-activating connection 'Wired connection 1' (eb99b8bd-8e1f-3f41-845b-962703e428f7)
Feb 23 16:32:41 localhost.localdomain NetworkManager[1147]: [1677169961.3885] device (ens5): Activation: starting connection 'Wired connection 1' (eb99b8bd-8e1f-3f41-845b-962703e428f7)
Feb 23 16:32:41 localhost.localdomain NetworkManager[1147]: [1677169961.3885] device (ens5): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Feb 23 16:32:41 localhost.localdomain NetworkManager[1147]: [1677169961.3888] manager: NetworkManager state is now CONNECTING
Feb 23 16:32:41 localhost.localdomain NetworkManager[1147]: [1677169961.3889] device (ens5): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Feb 23 16:32:41 localhost.localdomain NetworkManager[1147]: [1677169961.3894] device (ens5): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Feb 23 16:32:41 localhost.localdomain NetworkManager[1147]: [1677169961.3912] dhcp4 (ens5): activation: beginning transaction (timeout in 45 seconds)
Feb 23 16:32:41 localhost.localdomain NetworkManager[1147]: [1677169961.3939] dhcp4 (ens5): state changed new lease, address=10.0.136.68
Feb 23 16:32:41 localhost.localdomain NetworkManager[1147]: [1677169961.3941] policy: set 'Wired connection 1' (ens5) as default for IPv4 routing and DNS
Feb 23 16:32:41 localhost.localdomain NetworkManager[1147]: [1677169961.3943] policy: set-hostname: set hostname to 'ip-10-0-136-68' (from DHCPv4)
Feb 23 16:32:41 localhost.localdomain dbus-daemon[903]: [system] Activating via systemd: service name='org.freedesktop.resolve1' unit='dbus-org.freedesktop.resolve1.service' requested by ':1.6' (uid=0 pid=1147 comm="/usr/sbin/NetworkManager --no-daemon " label="system_u:system_r:NetworkManager_t:s0")
Feb 23 16:32:41 ip-10-0-136-68 systemd-hostnamed[1156]: Changed host name to 'ip-10-0-136-68'
Feb 23 16:32:41 ip-10-0-136-68 dbus-daemon[903]: [system] Activation via systemd failed for unit 'dbus-org.freedesktop.resolve1.service': Unit dbus-org.freedesktop.resolve1.service not found.
Feb 23 16:32:41 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00001|ofproto_dpif_xlate(handler3)|WARN|Invalid Geneve tunnel metadata on bridge br-int while processing tcp,in_port=2,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:06,nw_src=10.131.0.14,nw_dst=10.129.2.6,nw_tos=0,nw_ecn=0,nw_ttl=63,nw_frag=no,tp_src=40446,tp_dst=9154,tcp_flags=syn
Feb 23 16:32:41 ip-10-0-136-68 NetworkManager[1147]: [1677169961.4042] device (ens5): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed')
Feb 23 16:32:41 ip-10-0-136-68 systemd[1]: console-login-helper-messages-issuegen.service: Succeeded.
Feb 23 16:32:41 ip-10-0-136-68 systemd[1]: Started Generate console-login-helper-messages issue snippet.
Feb 23 16:32:41 ip-10-0-136-68 systemd[1]: console-login-helper-messages-issuegen.service: Consumed 19ms CPU time
Feb 23 16:32:41 ip-10-0-136-68 systemd[1]: Starting Permit User Sessions...
Feb 23 16:32:41 ip-10-0-136-68 nm-dispatcher[1187]: Error: Device '' not found.
Feb 23 16:32:41 ip-10-0-136-68 systemd[1]: Started Permit User Sessions.
Feb 23 16:32:41 ip-10-0-136-68 systemd[1]: Started Getty on tty1.
Feb 23 16:32:41 ip-10-0-136-68 systemd[1]: Started Serial Getty on ttyS0.
Feb 23 16:32:41 ip-10-0-136-68 systemd[1]: Reached target Login Prompts.
Feb 23 16:32:41 ip-10-0-136-68 nm-dispatcher[1208]: Error: Device '' not found.
Feb 23 16:32:41 ip-10-0-136-68 afterburn[921]: Feb 23 16:32:41.458 INFO Putting http://169.254.169.254/latest/api/token: Attempt #2
Feb 23 16:32:41 ip-10-0-136-68 afterburn[921]: Feb 23 16:32:41.460 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Feb 23 16:32:41 ip-10-0-136-68 afterburn[921]: Feb 23 16:32:41.461 INFO Fetch successful
Feb 23 16:32:41 ip-10-0-136-68 afterburn[921]: Feb 23 16:32:41.461 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Feb 23 16:32:41 ip-10-0-136-68 afterburn[921]: Feb 23 16:32:41.462 INFO Fetch successful
Feb 23 16:32:41 ip-10-0-136-68 afterburn[921]: Feb 23 16:32:41.462 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Feb 23 16:32:41 ip-10-0-136-68 afterburn[921]: Feb 23 16:32:41.462 INFO Fetch successful
Feb 23 16:32:41 ip-10-0-136-68 afterburn[921]: Feb 23 16:32:41.462 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Feb 23 16:32:41 ip-10-0-136-68 afterburn[921]: Feb 23 16:32:41.463 INFO Fetch failed with 404: resource not found
Feb 23 16:32:41 ip-10-0-136-68 afterburn[921]: Feb 23 16:32:41.463 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Feb 23 16:32:41 ip-10-0-136-68 afterburn[921]: Feb 23 16:32:41.464 INFO Fetch failed with 404: resource not found
Feb 23 16:32:41 ip-10-0-136-68 afterburn[921]: Feb 23 16:32:41.464 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Feb 23 16:32:41 ip-10-0-136-68 afterburn[921]: Feb 23 16:32:41.464 INFO Fetch successful
Feb 23 16:32:41 ip-10-0-136-68 afterburn[921]: Feb 23 16:32:41.464 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Feb 23 16:32:41 ip-10-0-136-68 afterburn[921]: Feb 23 16:32:41.465 INFO Fetch successful
Feb 23 16:32:41 ip-10-0-136-68 afterburn[921]: Feb 23 16:32:41.465 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Feb 23 16:32:41 ip-10-0-136-68 afterburn[921]: Feb 23 16:32:41.466 INFO Fetch failed with 404: resource not found
Feb 23 16:32:41 ip-10-0-136-68 afterburn[921]: Feb 23 16:32:41.466 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Feb 23 16:32:41 ip-10-0-136-68 afterburn[921]: Feb 23 16:32:41.466 INFO Fetch successful
Feb 23 16:32:41 ip-10-0-136-68 systemd[1]: Started Afterburn (Metadata).
Feb 23 16:32:41 ip-10-0-136-68 nm-dispatcher[1220]: Error: Device '' not found.
Feb 23 16:32:41 ip-10-0-136-68 nm-dispatcher[1245]: Error: Device '' not found.
Feb 23 16:32:41 ip-10-0-136-68 nm-dispatcher[1249]: + [[ OVNKubernetes != \O\V\N\K\u\b\e\r\n\e\t\e\s ]]
Feb 23 16:32:41 ip-10-0-136-68 nm-dispatcher[1249]: + INTERFACE_NAME=ens5
Feb 23 16:32:41 ip-10-0-136-68 nm-dispatcher[1249]: + OPERATION=pre-up
Feb 23 16:32:41 ip-10-0-136-68 nm-dispatcher[1249]: + '[' pre-up '!=' pre-up ']'
Feb 23 16:32:41 ip-10-0-136-68 nm-dispatcher[1251]: ++ nmcli -t -f device,type,uuid conn
Feb 23 16:32:41 ip-10-0-136-68 nm-dispatcher[1252]: ++ awk -F : '{if($1=="ens5" && $2!~/^ovs*/) print $NF}'
Feb 23 16:32:41 ip-10-0-136-68 nm-dispatcher[1249]: + INTERFACE_CONNECTION_UUID=eb99b8bd-8e1f-3f41-845b-962703e428f7
Feb 23 16:32:41 ip-10-0-136-68 nm-dispatcher[1249]: + '[' eb99b8bd-8e1f-3f41-845b-962703e428f7 == '' ']'
Feb 23 16:32:41 ip-10-0-136-68 nm-dispatcher[1257]: ++ nmcli -t -f connection.slave-type conn show eb99b8bd-8e1f-3f41-845b-962703e428f7
Feb 23 16:32:41 ip-10-0-136-68 nm-dispatcher[1258]: ++ awk -F : '{print $NF}'
Feb 23 16:32:41 ip-10-0-136-68 nm-dispatcher[1249]: + INTERFACE_OVS_SLAVE_TYPE=
Feb 23 16:32:41 ip-10-0-136-68 nm-dispatcher[1249]: + '[' '' '!=' ovs-port ']'
Feb 23 16:32:41 ip-10-0-136-68 nm-dispatcher[1249]: + exit 0
Feb 23 16:32:41 ip-10-0-136-68 NetworkManager[1147]: [1677169961.5499] device (ens5): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed')
Feb 23 16:32:41 ip-10-0-136-68 NetworkManager[1147]: [1677169961.5501] device (ens5): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed')
Feb 23 16:32:41 ip-10-0-136-68 NetworkManager[1147]: [1677169961.5503] manager: NetworkManager state is now CONNECTED_SITE
Feb 23 16:32:41 ip-10-0-136-68 NetworkManager[1147]: [1677169961.5505] device (ens5): Activation: successful, device activated.
Feb 23 16:32:41 ip-10-0-136-68 NetworkManager[1147]: [1677169961.5509] manager: NetworkManager state is now CONNECTED_GLOBAL
Feb 23 16:32:41 ip-10-0-136-68 NetworkManager[1147]: [1677169961.5513] manager: startup complete
Feb 23 16:32:41 ip-10-0-136-68 systemd[1]: Started Network Manager Wait Online.
Feb 23 16:32:41 ip-10-0-136-68 systemd[1]: Starting Fetch kubelet node name from AWS Metadata...
Feb 23 16:32:41 ip-10-0-136-68 aws-kubelet-nodename[1266]: Not replacing existing /etc/systemd/system/kubelet.service.d/20-aws-node-name.conf
Feb 23 16:32:41 ip-10-0-136-68 systemd[1]: Starting Fetch kubelet provider id from AWS Metadata...
Feb 23 16:32:41 ip-10-0-136-68 systemd[1]: Starting Configures OVS with proper host networking configuration...
Feb 23 16:32:41 ip-10-0-136-68 aws-kubelet-providerid[1268]: Not replacing existing /etc/systemd/system/kubelet.service.d/20-aws-providerid.conf
Feb 23 16:32:41 ip-10-0-136-68 systemd[1]: aws-kubelet-nodename.service: Succeeded.
Feb 23 16:32:41 ip-10-0-136-68 systemd[1]: Started Fetch kubelet node name from AWS Metadata.
Feb 23 16:32:41 ip-10-0-136-68 systemd[1]: aws-kubelet-nodename.service: Consumed 2ms CPU time
Feb 23 16:32:41 ip-10-0-136-68 systemd[1]: aws-kubelet-providerid.service: Succeeded.
Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1269]: + touch /var/run/ovs-config-executed
Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1269]: + NM_CONN_ETC_PATH=/etc/NetworkManager/system-connections
Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1269]: + NM_CONN_RUN_PATH=/run/NetworkManager/system-connections
Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1269]: + NM_CONN_CONF_PATH=/etc/NetworkManager/system-connections
Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1269]: + NM_CONN_SET_PATH=/run/NetworkManager/system-connections
Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1269]: + nm_config_changed=0
Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1269]: + MANAGED_NM_CONN_SUFFIX=-slave-ovs-clone
Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1269]: + BRIDGE_METRIC=48
Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1269]: + BRIDGE1_METRIC=49
Feb 23 16:32:41 ip-10-0-136-68 systemd[1]: Started Fetch kubelet provider id from AWS Metadata.
Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1269]: + trap handle_exit EXIT
Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' /run/NetworkManager/system-connections '!=' /etc/NetworkManager/system-connections ']'
Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' /run/NetworkManager/system-connections '!=' /run/NetworkManager/system-connections ']'
Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' '!' -f /etc/cno/mtu-migration/config ']'
Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1269]: + echo 'Cleaning up left over mtu migration configuration'
Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1269]: Cleaning up left over mtu migration configuration
Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1269]: + rm -rf /etc/cno/mtu-migration
Feb 23 16:32:41 ip-10-0-136-68 systemd[1]: aws-kubelet-providerid.service: Consumed 2ms CPU time
Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1274]: + grep -q openvswitch
Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1273]: + rpm -qa
Feb 23 16:32:41 ip-10-0-136-68 systemd[1]: Starting Generate console-login-helper-messages issue snippet...
Feb 23 16:32:41 ip-10-0-136-68 nm-dispatcher[1305]: Error: Device '' not found.
Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1269]: + print_state
Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1269]: + echo 'Current device, connection, interface and routing state:'
Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1269]: Current device, connection, interface and routing state:
Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1309]: + nmcli -g all device
Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1310]: + grep -v unmanaged
Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1310]: ens5:ethernet:connected:full:full:/org/freedesktop/NetworkManager/Devices/4:Wired connection 1:eb99b8bd-8e1f-3f41-845b-962703e428f7:/org/freedesktop/NetworkManager/ActiveConnection/1
Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1269]: + nmcli -g all connection
Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1314]: Wired connection 1:eb99b8bd-8e1f-3f41-845b-962703e428f7:802-3-ethernet:1677169961:Thu Feb 23 16\:32\:41 2023:yes:-999:no:/org/freedesktop/NetworkManager/Settings/1:yes:ens5:activated:/org/freedesktop/NetworkManager/ActiveConnection/1::/run/NetworkManager/system-connections/Wired connection 1.nmconnection
Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1269]: + ip -d address show
Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1318]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1318]: link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0 minmtu 0 maxmtu 0 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1318]: inet 127.0.0.1/8 scope host lo
Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1318]: valid_lft forever preferred_lft forever
Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1318]: inet6 ::1/128 scope host
Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1318]: valid_lft forever preferred_lft forever
Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1318]: 2: ens5: mtu 9001 qdisc mq state UP group default qlen 1000
Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1318]: link/ether 02:ea:92:f9:d3:f3 brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 128 maxmtu 9216 numtxqueues 4 numrxqueues 4 gso_max_size 65536 gso_max_segs 65535
Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1318]: inet 10.0.136.68/19 brd 10.0.159.255 scope global dynamic noprefixroute ens5
Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1318]: valid_lft 3600sec preferred_lft 3600sec
Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1318]: inet6 fe80::c8e8:d07:4fa0:2dbc/64 scope link tentative noprefixroute
Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1318]: valid_lft forever preferred_lft forever
Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1318]: 3: ovs-system: mtu 1500 qdisc noop state DOWN group default qlen 1000
Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1318]: link/ether b2:42:31:ac:59:9d brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65535
Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1318]: openvswitch numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1318]: 4: genev_sys_6081: mtu 65000 qdisc noqueue master ovs-system state UNKNOWN group default qlen 1000 Feb 
23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1318]: link/ether 42:2c:b6:47:64:0c brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65465 Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1318]: geneve external id 0 ttl auto dstport 6081 udp6zerocsumrx Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1318]: openvswitch_slave numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1318]: inet6 fe80::402c:b6ff:fe47:640c/64 scope link tentative Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1318]: valid_lft forever preferred_lft forever Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1318]: 5: br-int: mtu 8901 qdisc noop state DOWN group default qlen 1000 Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1318]: link/ether 1e:70:f2:fd:64:95 brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65535 Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1318]: openvswitch numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1318]: 6: ovn-k8s-mp0: mtu 8901 qdisc noop state DOWN group default qlen 1000 Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1318]: link/ether 2e:5d:2b:01:25:48 brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65535 Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1318]: openvswitch numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 Feb 23 16:32:41 ip-10-0-136-68 ovs-vsctl[1340]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 --if-exists del-br br-ex Feb 23 16:32:41 ip-10-0-136-68 NetworkManager[1147]: [1677169961.8240] ovs: ovs interface "patch-br-int-to-br-ex_ip-10-0-136-68.us-west-2.compute.internal" ((null)) failed: No usable peer 'patch-br-ex_ip-10-0-136-68.us-west-2.compute.internal-to-br-int' exists in 'system' datapath. 
Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1269]: + ip route show Feb 23 16:32:41 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00055|bridge|INFO|bridge br-ex: deleted interface patch-br-ex_ip-10-0-136-68.us-west-2.compute.internal-to-br-int on port 2 Feb 23 16:32:41 ip-10-0-136-68 NetworkManager[1147]: [1677169961.8241] device (patch-br-int-to-br-ex_ip-10-0-136-68.us-west-2.compute.internal): state change: unmanaged -> activated (reason 'connection-assumed', sys-iface-state: 'external') Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1319]: default via 10.0.128.1 dev ens5 proto dhcp src 10.0.136.68 metric 100 Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1319]: 10.0.128.0/19 dev ens5 proto kernel scope link src 10.0.136.68 metric 100 Feb 23 16:32:41 ip-10-0-136-68 ovs-vsctl[1349]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 --if-exists del-br br-ex1 Feb 23 16:32:41 ip-10-0-136-68 NetworkManager[1147]: [1677169961.8243] device (patch-br-int-to-br-ex_ip-10-0-136-68.us-west-2.compute.internal): Activation: successful, device activated. 
Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1269]: + ip -6 route show Feb 23 16:32:41 ip-10-0-136-68 NetworkManager[1147]: ((src/libnm-core-impl/nm-connection.c:342)): assertion '' failed Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1320]: ::1 dev lo proto kernel metric 256 pref medium Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1320]: fe80::/64 dev genev_sys_6081 proto kernel metric 256 pref medium Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1320]: fe80::/64 dev ens5 proto kernel metric 1024 pref medium Feb 23 16:32:41 ip-10-0-136-68 NetworkManager[1147]: ((src/libnm-core-impl/nm-connection.c:342)): assertion '' failed Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' OVNKubernetes == OVNKubernetes ']' Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1269]: + ovnk_config_dir=/etc/ovnk Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1269]: + ovnk_var_dir=/var/lib/ovnk Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1269]: + extra_bridge_file=/etc/ovnk/extra_bridge Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1269]: + iface_default_hint_file=/var/lib/ovnk/iface_default_hint Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1269]: + ip_hint_file=/run/nodeip-configuration/primary-ip Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1269]: + mkdir -p /etc/ovnk Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1269]: + mkdir -p /var/lib/ovnk Feb 23 16:32:41 ip-10-0-136-68 NetworkManager[1147]: ((src/libnm-core-impl/nm-connection.c:342)): assertion '' failed Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1323]: ++ get_iface_default_hint /var/lib/ovnk/iface_default_hint Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1323]: ++ local iface_default_hint_file=/var/lib/ovnk/iface_default_hint Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1323]: ++ '[' -f /var/lib/ovnk/iface_default_hint ']' Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1324]: +++ cat /var/lib/ovnk/iface_default_hint Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1323]: ++ local 
iface_default_hint=ens5 Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1323]: ++ '[' ens5 '!=' '' ']' Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1323]: ++ '[' ens5 '!=' br-ex ']' Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1323]: ++ '[' ens5 '!=' br-ex1 ']' Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1323]: ++ '[' -d /sys/class/net/ens5 ']' Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1323]: ++ echo ens5 Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1323]: ++ return Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1269]: + iface_default_hint=ens5 Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' ens5 == '' ']' Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /var/lib/ovnk/iface_default_hint ']' Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/ovnk/extra_bridge ']' Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' '!' -f /run/configure-ovs-boot-done ']' Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1269]: + echo 'Running on boot, restoring previous configuration before proceeding...' Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1269]: Running on boot, restoring previous configuration before proceeding... Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1269]: + rollback_nm Feb 23 16:32:41 ip-10-0-136-68 ovs-vsctl[1392]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 --if-exists del-br br-ex Feb 23 16:32:41 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-opaque\x2dbug\x2dcheck2017317032-merged.mount: Succeeded. 
Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1326]: ++ get_bridge_physical_interface ovs-if-phys0 Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1326]: ++ local bridge_interface=ovs-if-phys0 Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1326]: ++ local physical_interface= Feb 23 16:32:41 ip-10-0-136-68 ovs-vsctl[1395]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 --if-exists del-br br-ex1 Feb 23 16:32:41 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-opaque\x2dbug\x2dcheck2017317032-merged.mount: Consumed 0 CPU time Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1327]: +++ nmcli -g connection.interface-name conn show ovs-if-phys0 Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1327]: +++ echo '' Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1326]: ++ physical_interface= Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1326]: ++ echo '' Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1269]: + phys0= Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1332]: ++ get_bridge_physical_interface ovs-if-phys1 Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1332]: ++ local bridge_interface=ovs-if-phys1 Feb 23 16:32:41 ip-10-0-136-68 configure-ovs.sh[1332]: ++ local physical_interface= Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1333]: +++ nmcli -g connection.interface-name conn show ovs-if-phys1 Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1333]: +++ echo '' Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1332]: ++ physical_interface= Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1332]: ++ echo '' Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + phys1= Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + remove_all_ovn_bridges Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + echo 'Reverting any previous OVS configuration' Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: Reverting any previous OVS configuration Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + remove_ovn_bridges br-ex phys0 Feb 23 16:32:42 
ip-10-0-136-68 configure-ovs.sh[1269]: + bridge_name=br-ex Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + port_name=phys0 Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + update_nm_conn_conf_files br-ex phys0 Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + update_nm_conn_files_base /etc/NetworkManager/system-connections br-ex phys0 Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + base_path=/etc/NetworkManager/system-connections Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + bridge_name=br-ex Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + port_name=phys0 Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + ovs_port=ovs-port-br-ex Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + ovs_interface=ovs-if-br-ex Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + default_port_name=ovs-port-phys0 Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + bridge_interface_name=ovs-if-phys0 Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + MANAGED_NM_CONN_FILES=($(echo "${base_path}"/{"$bridge_name","$ovs_interface","$ovs_port","$bridge_interface_name","$default_port_name"}{,.nmconnection})) Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1338]: ++ echo /etc/NetworkManager/system-connections/br-ex /etc/NetworkManager/system-connections/br-ex.nmconnection /etc/NetworkManager/system-connections/ovs-if-br-ex /etc/NetworkManager/system-connections/ovs-if-br-ex.nmconnection /etc/NetworkManager/system-connections/ovs-port-br-ex /etc/NetworkManager/system-connections/ovs-port-br-ex.nmconnection /etc/NetworkManager/system-connections/ovs-if-phys0 /etc/NetworkManager/system-connections/ovs-if-phys0.nmconnection /etc/NetworkManager/system-connections/ovs-port-phys0 /etc/NetworkManager/system-connections/ovs-port-phys0.nmconnection Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.0407] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/20) Feb 23 16:32:42 
ip-10-0-136-68 ovs-vsctl[1422]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 --if-exists del-br br-ex Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + shopt -s nullglob Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + MANAGED_NM_CONN_FILES+=(${base_path}/*${MANAGED_NM_CONN_SUFFIX}.nmconnection ${base_path}/*${MANAGED_NM_CONN_SUFFIX}) Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + shopt -u nullglob Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + rm_nm_conn_files Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/br-ex ']' Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/br-ex.nmconnection ']' Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-br-ex ']' Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-br-ex.nmconnection ']' Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-br-ex ']' Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-br-ex.nmconnection ']' Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:42 ip-10-0-136-68 
configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-phys0 ']' Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-phys0.nmconnection ']' Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-phys0 ']' Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-phys0.nmconnection ']' Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + update_nm_conn_set_files br-ex phys0 Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + update_nm_conn_files_base /run/NetworkManager/system-connections br-ex phys0 Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + base_path=/run/NetworkManager/system-connections Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + bridge_name=br-ex Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + port_name=phys0 Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + ovs_port=ovs-port-br-ex Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + ovs_interface=ovs-if-br-ex Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + default_port_name=ovs-port-phys0 Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + bridge_interface_name=ovs-if-phys0 Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + MANAGED_NM_CONN_FILES=($(echo "${base_path}"/{"$bridge_name","$ovs_interface","$ovs_port","$bridge_interface_name","$default_port_name"}{,.nmconnection})) Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.0409] audit: op="connection-add" uuid="020c436d-b861-4028-95bc-c55069bb3929" name="br-ex" pid=1423 uid=0 
result="success" Feb 23 16:32:42 ip-10-0-136-68 ovs-vsctl[1431]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 --if-exists del-port br-ex ens5 Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1339]: ++ echo /run/NetworkManager/system-connections/br-ex /run/NetworkManager/system-connections/br-ex.nmconnection /run/NetworkManager/system-connections/ovs-if-br-ex /run/NetworkManager/system-connections/ovs-if-br-ex.nmconnection /run/NetworkManager/system-connections/ovs-port-br-ex /run/NetworkManager/system-connections/ovs-port-br-ex.nmconnection /run/NetworkManager/system-connections/ovs-if-phys0 /run/NetworkManager/system-connections/ovs-if-phys0.nmconnection /run/NetworkManager/system-connections/ovs-port-phys0 /run/NetworkManager/system-connections/ovs-port-phys0.nmconnection Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.0836] manager: (ens5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21) Feb 23 16:32:42 ip-10-0-136-68 ovs-vsctl[1440]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 --if-exists del-port br-ex br-ex Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + shopt -s nullglob Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + MANAGED_NM_CONN_FILES+=(${base_path}/*${MANAGED_NM_CONN_SUFFIX}.nmconnection ${base_path}/*${MANAGED_NM_CONN_SUFFIX}) Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + shopt -u nullglob Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + rm_nm_conn_files Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /run/NetworkManager/system-connections/br-ex ']' Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /run/NetworkManager/system-connections/br-ex.nmconnection ']' Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in 
"${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /run/NetworkManager/system-connections/ovs-if-br-ex ']' Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /run/NetworkManager/system-connections/ovs-if-br-ex.nmconnection ']' Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /run/NetworkManager/system-connections/ovs-port-br-ex ']' Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /run/NetworkManager/system-connections/ovs-port-br-ex.nmconnection ']' Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /run/NetworkManager/system-connections/ovs-if-phys0 ']' Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /run/NetworkManager/system-connections/ovs-if-phys0.nmconnection ']' Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /run/NetworkManager/system-connections/ovs-port-phys0 ']' Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /run/NetworkManager/system-connections/ovs-port-phys0.nmconnection ']' Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + ovs-vsctl --timeout=30 --if-exists del-br br-ex Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + remove_ovn_bridges br-ex1 phys1 Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + 
bridge_name=br-ex1 Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + port_name=phys1 Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + update_nm_conn_conf_files br-ex1 phys1 Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + update_nm_conn_files_base /etc/NetworkManager/system-connections br-ex1 phys1 Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + base_path=/etc/NetworkManager/system-connections Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + bridge_name=br-ex1 Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + port_name=phys1 Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + ovs_port=ovs-port-br-ex1 Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + ovs_interface=ovs-if-br-ex1 Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + default_port_name=ovs-port-phys1 Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + bridge_interface_name=ovs-if-phys1 Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + MANAGED_NM_CONN_FILES=($(echo "${base_path}"/{"$bridge_name","$ovs_interface","$ovs_port","$bridge_interface_name","$default_port_name"}{,.nmconnection})) Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.0838] audit: op="connection-add" uuid="58f1c73f-4bc8-4311-86a8-c37ad9528b56" name="ovs-port-phys0" pid=1432 uid=0 result="success" Feb 23 16:32:42 ip-10-0-136-68 ovs-vsctl[1461]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 --if-exists destroy interface ens5 Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1347]: ++ echo /etc/NetworkManager/system-connections/br-ex1 /etc/NetworkManager/system-connections/br-ex1.nmconnection /etc/NetworkManager/system-connections/ovs-if-br-ex1 /etc/NetworkManager/system-connections/ovs-if-br-ex1.nmconnection /etc/NetworkManager/system-connections/ovs-port-br-ex1 /etc/NetworkManager/system-connections/ovs-port-br-ex1.nmconnection /etc/NetworkManager/system-connections/ovs-if-phys1 /etc/NetworkManager/system-connections/ovs-if-phys1.nmconnection 
/etc/NetworkManager/system-connections/ovs-port-phys1 /etc/NetworkManager/system-connections/ovs-port-phys1.nmconnection Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.1198] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22) Feb 23 16:32:42 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00001|ofproto_dpif_xlate(handler11)|WARN|Invalid Geneve tunnel metadata on bridge br-int while processing tcp,in_port=2,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:03,nw_src=10.131.0.14,nw_dst=10.129.2.3,nw_tos=0,nw_ecn=0,nw_ttl=63,nw_frag=no,tp_src=51886,tp_dst=8443,tcp_flags=syn Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + shopt -s nullglob Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + MANAGED_NM_CONN_FILES+=(${base_path}/*${MANAGED_NM_CONN_SUFFIX}.nmconnection ${base_path}/*${MANAGED_NM_CONN_SUFFIX}) Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + shopt -u nullglob Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + rm_nm_conn_files Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/br-ex1 ']' Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/br-ex1.nmconnection ']' Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-br-ex1 ']' Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-br-ex1.nmconnection ']' Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in 
"${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-br-ex1 ']' Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-br-ex1.nmconnection ']' Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-phys1 ']' Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-phys1.nmconnection ']' Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-phys1 ']' Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-phys1.nmconnection ']' Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + update_nm_conn_set_files br-ex1 phys1 Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + update_nm_conn_files_base /run/NetworkManager/system-connections br-ex1 phys1 Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + base_path=/run/NetworkManager/system-connections Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + bridge_name=br-ex1 Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + port_name=phys1 Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + ovs_port=ovs-port-br-ex1 Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + ovs_interface=ovs-if-br-ex1 Feb 23 16:32:42 ip-10-0-136-68 
configure-ovs.sh[1269]: + default_port_name=ovs-port-phys1 Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + bridge_interface_name=ovs-if-phys1 Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + MANAGED_NM_CONN_FILES=($(echo "${base_path}"/{"$bridge_name","$ovs_interface","$ovs_port","$bridge_interface_name","$default_port_name"}{,.nmconnection})) Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.1199] audit: op="connection-add" uuid="14d1a5a7-ae47-46f4-912b-c05ba0de1b74" name="ovs-port-br-ex" pid=1441 uid=0 result="success" Feb 23 16:32:42 ip-10-0-136-68 ovs-vsctl[1526]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 --if-exists destroy interface br-ex Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1348]: ++ echo /run/NetworkManager/system-connections/br-ex1 /run/NetworkManager/system-connections/br-ex1.nmconnection /run/NetworkManager/system-connections/ovs-if-br-ex1 /run/NetworkManager/system-connections/ovs-if-br-ex1.nmconnection /run/NetworkManager/system-connections/ovs-port-br-ex1 /run/NetworkManager/system-connections/ovs-port-br-ex1.nmconnection /run/NetworkManager/system-connections/ovs-if-phys1 /run/NetworkManager/system-connections/ovs-if-phys1.nmconnection /run/NetworkManager/system-connections/ovs-port-phys1 /run/NetworkManager/system-connections/ovs-port-phys1.nmconnection Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.1952] audit: op="connection-add" uuid="4229caea-b1fd-4522-97e4-df156a22a48d" name="ovs-if-phys0" pid=1462 uid=0 result="success" Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + shopt -s nullglob Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + MANAGED_NM_CONN_FILES+=(${base_path}/*${MANAGED_NM_CONN_SUFFIX}.nmconnection ${base_path}/*${MANAGED_NM_CONN_SUFFIX}) Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + shopt -u nullglob Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + rm_nm_conn_files Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + for 
file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /run/NetworkManager/system-connections/br-ex1 ']'
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /run/NetworkManager/system-connections/br-ex1.nmconnection ']'
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /run/NetworkManager/system-connections/ovs-if-br-ex1 ']'
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /run/NetworkManager/system-connections/ovs-if-br-ex1.nmconnection ']'
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /run/NetworkManager/system-connections/ovs-port-br-ex1 ']'
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /run/NetworkManager/system-connections/ovs-port-br-ex1.nmconnection ']'
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /run/NetworkManager/system-connections/ovs-if-phys1 ']'
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /run/NetworkManager/system-connections/ovs-if-phys1.nmconnection ']'
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /run/NetworkManager/system-connections/ovs-port-phys1 ']'
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /run/NetworkManager/system-connections/ovs-port-phys1.nmconnection ']'
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + ovs-vsctl --timeout=30 --if-exists del-br br-ex1
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + echo 'OVS configuration successfully reverted'
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: OVS configuration successfully reverted
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + reload_profiles_nm '' ''
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' 0 -eq 0 ']'
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + return
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + print_state
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + echo 'Current device, connection, interface and routing state:'
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: Current device, connection, interface and routing state:
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1350]: + nmcli -g all device
Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.5653] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/23)
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1351]: + grep -v unmanaged
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1351]: ens5:ethernet:connected:full:full:/org/freedesktop/NetworkManager/Devices/4:Wired connection 1:eb99b8bd-8e1f-3f41-845b-962703e428f7:/org/freedesktop/NetworkManager/ActiveConnection/1
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1351]: patch-br-int-to-br-ex_ip-10-0-136-68.us-west-2.compute.internal:ovs-interface:connected:limited:limited:/org/freedesktop/NetworkManager/Devices/7:::
Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.5655] audit: op="connection-add" uuid="942f5a8b-5fc3-493a-b9bc-9ab71ef0924c" name="ovs-if-br-ex" pid=1550 uid=0 result="success"
Feb 23 16:32:42 ip-10-0-136-68 ovs-vsctl[1567]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 --if-exists del-br br0
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + nmcli -g all connection
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1355]: Wired connection 1:eb99b8bd-8e1f-3f41-845b-962703e428f7:802-3-ethernet:1677169961:Thu Feb 23 16\:32\:41 2023:yes:-999:no:/org/freedesktop/NetworkManager/Settings/1:yes:ens5:activated:/org/freedesktop/NetworkManager/ActiveConnection/1::/run/NetworkManager/system-connections/Wired connection 1.nmconnection
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + ip -d address show
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1359]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1359]: link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0 minmtu 0 maxmtu 0 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1359]: inet 127.0.0.1/8 scope host lo
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1359]: valid_lft forever preferred_lft forever
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1359]: inet6 ::1/128 scope host
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1359]: valid_lft forever preferred_lft forever
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1359]: 2: ens5: mtu 9001 qdisc mq state UP group default qlen 1000
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1359]: link/ether 02:ea:92:f9:d3:f3 brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 128 maxmtu 9216 numtxqueues 4 numrxqueues 4 gso_max_size 65536 gso_max_segs 65535
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1359]: inet 10.0.136.68/19 brd 10.0.159.255 scope global dynamic noprefixroute ens5
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1359]: valid_lft 3600sec preferred_lft 3600sec
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1359]: inet6 fe80::c8e8:d07:4fa0:2dbc/64 scope link tentative noprefixroute
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1359]: valid_lft forever preferred_lft forever
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1359]: 3: ovs-system: mtu 1500 qdisc noop state DOWN group default qlen 1000
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1359]: link/ether b2:42:31:ac:59:9d brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65535
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1359]: openvswitch numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1359]: 4: genev_sys_6081: mtu 65000 qdisc noqueue master ovs-system state UNKNOWN group default qlen 1000
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1359]: link/ether 42:2c:b6:47:64:0c brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65465
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1359]: geneve external id 0 ttl auto dstport 6081 udp6zerocsumrx
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1359]: openvswitch_slave numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1359]: inet6 fe80::402c:b6ff:fe47:640c/64 scope link tentative
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1359]: valid_lft forever preferred_lft forever
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1359]: 5: br-int: mtu 8901 qdisc noop state DOWN group default qlen 1000
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1359]: link/ether 1e:70:f2:fd:64:95 brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65535
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1359]: openvswitch numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1359]: 6: ovn-k8s-mp0: mtu 8901 qdisc noop state DOWN group default qlen 1000
Feb 23 16:32:42 ip-10-0-136-68 systemd[1]: console-login-helper-messages-issuegen.service: Succeeded.
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1359]: link/ether 2e:5d:2b:01:25:48 brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65535
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1359]: openvswitch numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
Feb 23 16:32:42 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00056|bridge|INFO|bridge br-ex: added interface ens5 on port 1
Feb 23 16:32:42 ip-10-0-136-68 nm-dispatcher[1616]: Error: Device '' not found.
Feb 23 16:32:42 ip-10-0-136-68 systemd[1]: Started Generate console-login-helper-messages issue snippet.
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + ip route show
Feb 23 16:32:42 ip-10-0-136-68 kernel: device ens5 entered promiscuous mode
Feb 23 16:32:42 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00057|bridge|INFO|bridge br-ex: using datapath ID 0000dae9315d9345
Feb 23 16:32:42 ip-10-0-136-68 nm-dispatcher[1622]: + [[ OVNKubernetes != \O\V\N\K\u\b\e\r\n\e\t\e\s ]]
Feb 23 16:32:42 ip-10-0-136-68 nm-dispatcher[1622]: + INTERFACE_NAME=br-ex
Feb 23 16:32:42 ip-10-0-136-68 nm-dispatcher[1622]: + OPERATION=pre-up
Feb 23 16:32:42 ip-10-0-136-68 nm-dispatcher[1622]: + '[' pre-up '!=' pre-up ']'
Feb 23 16:32:42 ip-10-0-136-68 systemd[1]: console-login-helper-messages-issuegen.service: Consumed 17ms CPU time
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1360]: default via 10.0.128.1 dev ens5 proto dhcp src 10.0.136.68 metric 100
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1360]: 10.0.128.0/19 dev ens5 proto kernel scope link src 10.0.136.68 metric 100
Feb 23 16:32:42 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00058|connmgr|INFO|br-ex: added service controller "punix:/var/run/openvswitch/br-ex.mgmt"
Feb 23 16:32:42 ip-10-0-136-68 nm-dispatcher[1625]: ++ nmcli -t -f device,type,uuid conn
Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.7193] agent-manager: agent[16fc5518a535818d,:1.71/nmcli-connect/0]: agent registered
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + ip -6 route show
Feb 23 16:32:42 ip-10-0-136-68 nm-dispatcher[1626]: ++ awk -F : '{if($1=="br-ex" && $2!~/^ovs*/) print $NF}'
Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.7200] device (br-ex): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1361]: ::1 dev lo proto kernel metric 256 pref medium
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1361]: fe80::/64 dev genev_sys_6081 proto kernel metric 256 pref medium
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1361]: fe80::/64 dev ens5 proto kernel metric 1024 pref medium
Feb 23 16:32:42 ip-10-0-136-68 nm-dispatcher[1622]: + INTERFACE_CONNECTION_UUID=
Feb 23 16:32:42 ip-10-0-136-68 nm-dispatcher[1622]: + '[' '' == '' ']'
Feb 23 16:32:42 ip-10-0-136-68 nm-dispatcher[1622]: + exit 0
Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.7205] device (br-ex): state change: unavailable -> disconnected (reason 'user-requested', sys-iface-state: 'managed')
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + touch /run/configure-ovs-boot-done
Feb 23 16:32:42 ip-10-0-136-68 nm-dispatcher[1630]: + [[ OVNKubernetes != \O\V\N\K\u\b\e\r\n\e\t\e\s ]]
Feb 23 16:32:42 ip-10-0-136-68 nm-dispatcher[1630]: + INTERFACE_NAME=ens5
Feb 23 16:32:42 ip-10-0-136-68 nm-dispatcher[1630]: + OPERATION=pre-up
Feb 23 16:32:42 ip-10-0-136-68 nm-dispatcher[1630]: + '[' pre-up '!=' pre-up ']'
Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.7208] device (br-ex): Activation: starting connection 'br-ex' (020c436d-b861-4028-95bc-c55069bb3929)
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1363]: ++ get_nodeip_interface /var/lib/ovnk/iface_default_hint /etc/ovnk/extra_bridge /run/nodeip-configuration/primary-ip
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1363]: ++ local iface=
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1363]: ++ local counter=0
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1363]: ++ local iface_default_hint_file=/var/lib/ovnk/iface_default_hint
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1363]: ++ local extra_bridge_file=/etc/ovnk/extra_bridge
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1363]: ++ local ip_hint_file=/run/nodeip-configuration/primary-ip
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1363]: ++ local extra_bridge=
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1363]: ++ '[' -f /etc/ovnk/extra_bridge ']'
Feb 23 16:32:42 ip-10-0-136-68 nm-dispatcher[1632]: ++ nmcli -t -f device,type,uuid conn
Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.7209] audit: op="connection-activate" uuid="020c436d-b861-4028-95bc-c55069bb3929" name="br-ex" pid=1603 uid=0 result="success"
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1364]: +++ get_nodeip_hint_interface /run/nodeip-configuration/primary-ip ''
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1364]: +++ local ip_hint=
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1364]: +++ local ip_hint_file=/run/nodeip-configuration/primary-ip
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1364]: +++ local extra_bridge=
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1364]: +++ local iface=
Feb 23 16:32:42 ip-10-0-136-68 nm-dispatcher[1634]: ++ awk -F : '{if($1=="ens5" && $2!~/^ovs*/) print $NF}'
Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.7211] device (br-ex): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1365]: ++++ get_ip_from_ip_hint_file /run/nodeip-configuration/primary-ip
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1365]: ++++ local ip_hint_file=/run/nodeip-configuration/primary-ip
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1365]: ++++ [[ ! -f /run/nodeip-configuration/primary-ip ]]
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1365]: ++++ return
Feb 23 16:32:42 ip-10-0-136-68 nm-dispatcher[1630]: + INTERFACE_CONNECTION_UUID=4229caea-b1fd-4522-97e4-df156a22a48d
Feb 23 16:32:42 ip-10-0-136-68 nm-dispatcher[1630]: + '[' 4229caea-b1fd-4522-97e4-df156a22a48d == '' ']'
Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.7213] device (br-ex): state change: unavailable -> disconnected (reason 'user-requested', sys-iface-state: 'managed')
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1364]: +++ ip_hint=
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1364]: +++ [[ -z '' ]]
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1364]: +++ return
Feb 23 16:32:42 ip-10-0-136-68 nm-dispatcher[1640]: ++ nmcli -t -f connection.slave-type conn show 4229caea-b1fd-4522-97e4-df156a22a48d
Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.7217] device (br-ex): Activation: starting connection 'ovs-port-br-ex' (14d1a5a7-ae47-46f4-912b-c05ba0de1b74)
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1363]: ++ iface=
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1363]: ++ [[ -n '' ]]
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1363]: ++ '[' 0 -lt 12 ']'
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1363]: ++ '[' '' '!=' '' ']'
Feb 23 16:32:42 ip-10-0-136-68 nm-dispatcher[1641]: ++ awk -F : '{print $NF}'
Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.7219] device (ens5): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1367]: +++ ip route show default
Feb 23 16:32:42 ip-10-0-136-68 nm-dispatcher[1630]: + INTERFACE_OVS_SLAVE_TYPE=ovs-port
Feb 23 16:32:42 ip-10-0-136-68 nm-dispatcher[1630]: + '[' ovs-port '!=' ovs-port ']'
Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.7223] device (ens5): state change: unavailable -> disconnected (reason 'user-requested', sys-iface-state: 'managed')
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1368]: +++ grep -v br-ex1
Feb 23 16:32:42 ip-10-0-136-68 nm-dispatcher[1646]: ++ nmcli -t -f connection.master conn show 4229caea-b1fd-4522-97e4-df156a22a48d
Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.7227] device (ens5): Activation: starting connection 'ovs-port-phys0' (58f1c73f-4bc8-4311-86a8-c37ad9528b56)
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1369]: +++ awk '{ if ($4 == "dev") { print $5; exit } }'
Feb 23 16:32:42 ip-10-0-136-68 nm-dispatcher[1647]: ++ awk -F : '{print $NF}'
Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.7227] device (br-ex): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1363]: ++ iface=ens5
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1363]: ++ [[ -n ens5 ]]
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1363]: ++ break
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1363]: ++ '[' ens5 '!=' br-ex ']'
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1363]: ++ '[' ens5 '!=' br-ex1 ']'
Feb 23 16:32:42 ip-10-0-136-68 nm-dispatcher[1630]: + PORT=58f1c73f-4bc8-4311-86a8-c37ad9528b56
Feb 23 16:32:42 ip-10-0-136-68 nm-dispatcher[1630]: + '[' 58f1c73f-4bc8-4311-86a8-c37ad9528b56 == '' ']'
Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.7229] device (br-ex): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1370]: +++ get_iface_default_hint /var/lib/ovnk/iface_default_hint
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1370]: +++ local iface_default_hint_file=/var/lib/ovnk/iface_default_hint
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1370]: +++ '[' -f /var/lib/ovnk/iface_default_hint ']'
Feb 23 16:32:42 ip-10-0-136-68 nm-dispatcher[1652]: ++ nmcli -t -f device,type,uuid conn
Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.7230] device (br-ex): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1371]: ++++ cat /var/lib/ovnk/iface_default_hint
Feb 23 16:32:42 ip-10-0-136-68 nm-dispatcher[1653]: ++ awk -F : '{if( ($1=="58f1c73f-4bc8-4311-86a8-c37ad9528b56" || $3=="58f1c73f-4bc8-4311-86a8-c37ad9528b56") && $2~/^ovs*/) print $NF}'
Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.7232] device (br-ex): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1370]: +++ local iface_default_hint=ens5
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1370]: +++ '[' ens5 '!=' '' ']'
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1370]: +++ '[' ens5 '!=' br-ex ']'
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1370]: +++ '[' ens5 '!=' br-ex1 ']'
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1370]: +++ '[' -d /sys/class/net/ens5 ']'
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1370]: +++ echo ens5
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1370]: +++ return
Feb 23 16:32:42 ip-10-0-136-68 nm-dispatcher[1630]: + PORT_CONNECTION_UUID=58f1c73f-4bc8-4311-86a8-c37ad9528b56
Feb 23 16:32:42 ip-10-0-136-68 nm-dispatcher[1630]: + '[' 58f1c73f-4bc8-4311-86a8-c37ad9528b56 == '' ']'
Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.7235] device (br-ex): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Feb 23 16:32:42 ip-10-0-136-68 ovs-vsctl[1673]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set Interface ens5 ofport_request=1
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1363]: ++ iface_default_hint=ens5
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1363]: ++ '[' ens5 '!=' '' ']'
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1363]: ++ '[' ens5 '!=' ens5 ']'
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1363]: ++ '[' ens5 '!=' '' ']'
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1363]: ++ write_iface_default_hint /var/lib/ovnk/iface_default_hint ens5
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1363]: ++ local iface_default_hint_file=/var/lib/ovnk/iface_default_hint
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1363]: ++ local iface=ens5
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1363]: ++ echo ens5
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1363]: ++ echo ens5
Feb 23 16:32:42 ip-10-0-136-68 nm-dispatcher[1658]: ++ nmcli -t -f connection.slave-type conn show 58f1c73f-4bc8-4311-86a8-c37ad9528b56
Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.7237] device (br-ex): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Feb 23 16:32:42 ip-10-0-136-68 chronyd[912]: Source 169.254.169.123 offline
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + iface=ens5
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' ens5 '!=' br-ex ']'
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/ovnk/extra_bridge ']'
Feb 23 16:32:42 ip-10-0-136-68 nm-dispatcher[1659]: ++ awk -F : '{print $NF}'
Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.7238] device (br-ex): Activation: connection 'ovs-port-br-ex' enslaved, continuing activation
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1373]: ++ nmcli connection show --active br-ex
Feb 23 16:32:42 ip-10-0-136-68 nm-dispatcher[1630]: + PORT_OVS_SLAVE_TYPE=ovs-bridge
Feb 23 16:32:42 ip-10-0-136-68 nm-dispatcher[1630]: + '[' ovs-bridge '!=' ovs-bridge ']'
Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.7241] device (ens5): disconnecting for new activation request.
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -z '' ']'
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + echo 'Bridge br-ex is not active, restoring previous configuration before proceeding...'
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: Bridge br-ex is not active, restoring previous configuration before proceeding...
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1269]: + rollback_nm
Feb 23 16:32:42 ip-10-0-136-68 nm-dispatcher[1664]: ++ nmcli -t -f connection.master conn show 58f1c73f-4bc8-4311-86a8-c37ad9528b56
Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.7241] device (ens5): state change: activated -> deactivating (reason 'new-activation', sys-iface-state: 'managed')
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1378]: ++ get_bridge_physical_interface ovs-if-phys0
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1378]: ++ local bridge_interface=ovs-if-phys0
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1378]: ++ local physical_interface=
Feb 23 16:32:42 ip-10-0-136-68 nm-dispatcher[1665]: ++ awk -F : '{print $NF}'
Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.7242] manager: NetworkManager state is now CONNECTING
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1379]: +++ nmcli -g connection.interface-name conn show ovs-if-phys0
Feb 23 16:32:42 ip-10-0-136-68 configure-ovs.sh[1379]: +++ echo ''
Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1630]: + BRIDGE_NAME=br-ex
Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1630]: + '[' br-ex '!=' br-ex ']'
Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1630]: + ovs-vsctl list interface ens5
Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1630]: + CONFIGURATION_FILE=/run/ofport_requests.br-ex
Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1630]: + declare -A INTERFACES
Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1630]: + '[' -f /run/ofport_requests.br-ex ']'
Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1630]: + '[' '' ']'
Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.7248] device (ens5): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1378]: ++ physical_interface=
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1378]: ++ echo ''
Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1670]: ++ get_interface_ofport_request
Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1670]: ++ declare -A ofport_requests
Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.7260] device (ens5): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + phys0=
Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1671]: +++ ovs-vsctl get Interface ens5 ofport
Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.7262] device (ens5): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1384]: ++ get_bridge_physical_interface ovs-if-phys1
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1384]: ++ local bridge_interface=ovs-if-phys1
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1384]: ++ local physical_interface=
Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1670]: ++ local current_ofport=1
Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1670]: ++ '[' '' ']'
Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1670]: ++ echo 1
Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1670]: ++ return
Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.7263] device (ens5): Activation: connection 'ovs-port-phys0' enslaved, continuing activation
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1385]: +++ nmcli -g connection.interface-name conn show ovs-if-phys1
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1385]: +++ echo ''
Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1630]: + INTERFACES[$INTERFACE_NAME]=1
Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1630]: + ovs-vsctl set Interface ens5 ofport_request=1
Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1630]: + declare -p INTERFACES
Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.7265] device (br-ex): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed')
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1384]: ++ physical_interface=
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1384]: ++ echo ''
Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1692]: + [[ OVNKubernetes != \O\V\N\K\u\b\e\r\n\e\t\e\s ]]
Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1692]: + INTERFACE_NAME=br-ex
Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1692]: + OPERATION=pre-up
Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1692]: + '[' pre-up '!=' pre-up ']'
Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.7268] device (ens5): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed')
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + phys1=
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + remove_all_ovn_bridges
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + echo 'Reverting any previous OVS configuration'
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: Reverting any previous OVS configuration
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + remove_ovn_bridges br-ex phys0
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + bridge_name=br-ex
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + port_name=phys0
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + update_nm_conn_conf_files br-ex phys0
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + update_nm_conn_files_base /etc/NetworkManager/system-connections br-ex phys0
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + base_path=/etc/NetworkManager/system-connections
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + bridge_name=br-ex
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + port_name=phys0
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + ovs_port=ovs-port-br-ex
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + ovs_interface=ovs-if-br-ex
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + default_port_name=ovs-port-phys0
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + bridge_interface_name=ovs-if-phys0
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + MANAGED_NM_CONN_FILES=($(echo "${base_path}"/{"$bridge_name","$ovs_interface","$ovs_port","$bridge_interface_name","$default_port_name"}{,.nmconnection}))
Feb 23 16:32:43 ip-10-0-136-68 kernel: device ens5 left promiscuous mode
Feb 23 16:32:43 ip-10-0-136-68 kernel: device ens5 entered promiscuous mode
Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1694]: ++ nmcli -t -f device,type,uuid conn
Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.7270] device (ens5): state change: deactivating -> disconnected (reason 'new-activation', sys-iface-state: 'managed')
Feb 23 16:32:43 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00059|bridge|INFO|bridge br-ex: deleted interface ens5 on port 1
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1390]: ++ echo /etc/NetworkManager/system-connections/br-ex /etc/NetworkManager/system-connections/br-ex.nmconnection /etc/NetworkManager/system-connections/ovs-if-br-ex /etc/NetworkManager/system-connections/ovs-if-br-ex.nmconnection /etc/NetworkManager/system-connections/ovs-port-br-ex /etc/NetworkManager/system-connections/ovs-port-br-ex.nmconnection /etc/NetworkManager/system-connections/ovs-if-phys0 /etc/NetworkManager/system-connections/ovs-if-phys0.nmconnection /etc/NetworkManager/system-connections/ovs-port-phys0 /etc/NetworkManager/system-connections/ovs-port-phys0.nmconnection
Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1695]: ++ awk -F : '{if($1=="br-ex" && $2!~/^ovs*/) print $NF}'
Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.7347] dhcp4 (ens5): canceled DHCP transaction
Feb 23 16:32:43 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00060|bridge|INFO|bridge br-ex: added interface ens5 on port 1
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + shopt -s nullglob
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + MANAGED_NM_CONN_FILES+=(${base_path}/*${MANAGED_NM_CONN_SUFFIX}.nmconnection ${base_path}/*${MANAGED_NM_CONN_SUFFIX})
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + shopt -u nullglob
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + rm_nm_conn_files
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/br-ex ']'
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/br-ex.nmconnection ']'
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-br-ex ']'
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-br-ex.nmconnection ']'
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-br-ex ']'
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-br-ex.nmconnection ']'
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-phys0 ']'
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-phys0.nmconnection ']'
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-phys0 ']'
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-phys0.nmconnection ']'
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + update_nm_conn_set_files br-ex phys0
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + update_nm_conn_files_base /run/NetworkManager/system-connections br-ex phys0
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + base_path=/run/NetworkManager/system-connections
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + bridge_name=br-ex
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + port_name=phys0
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + ovs_port=ovs-port-br-ex
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + ovs_interface=ovs-if-br-ex
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + default_port_name=ovs-port-phys0
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + bridge_interface_name=ovs-if-phys0
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + MANAGED_NM_CONN_FILES=($(echo "${base_path}"/{"$bridge_name","$ovs_interface","$ovs_port","$bridge_interface_name","$default_port_name"}{,.nmconnection}))
Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1692]: + INTERFACE_CONNECTION_UUID=
Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1692]: + '[' '' == '' ']'
Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1692]: + exit 0
Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.7347] dhcp4 (ens5): activation: beginning transaction (timeout in 45 seconds)
Feb 23 16:32:43 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00061|bridge|INFO|bridge br-ex: using datapath ID 0000a25875862a44
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1391]: ++ echo /run/NetworkManager/system-connections/br-ex /run/NetworkManager/system-connections/br-ex.nmconnection /run/NetworkManager/system-connections/ovs-if-br-ex /run/NetworkManager/system-connections/ovs-if-br-ex.nmconnection /run/NetworkManager/system-connections/ovs-port-br-ex /run/NetworkManager/system-connections/ovs-port-br-ex.nmconnection /run/NetworkManager/system-connections/ovs-if-phys0 /run/NetworkManager/system-connections/ovs-if-phys0.nmconnection /run/NetworkManager/system-connections/ovs-port-phys0 /run/NetworkManager/system-connections/ovs-port-phys0.nmconnection
Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1699]: + [[ OVNKubernetes != \O\V\N\K\u\b\e\r\n\e\t\e\s ]]
Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1699]: + INTERFACE_NAME=ens5
Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1699]: + OPERATION=pre-up
Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1699]: + '[' pre-up '!=' pre-up ']'
Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.7347] dhcp4 (ens5): state changed no lease
Feb 23 16:32:43 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00062|connmgr|INFO|br-ex: added service controller "punix:/var/run/openvswitch/br-ex.mgmt"
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + shopt -s nullglob
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + MANAGED_NM_CONN_FILES+=(${base_path}/*${MANAGED_NM_CONN_SUFFIX}.nmconnection ${base_path}/*${MANAGED_NM_CONN_SUFFIX})
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + shopt -u nullglob
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + rm_nm_conn_files
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /run/NetworkManager/system-connections/br-ex ']'
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /run/NetworkManager/system-connections/br-ex.nmconnection ']'
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /run/NetworkManager/system-connections/ovs-if-br-ex ']'
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /run/NetworkManager/system-connections/ovs-if-br-ex.nmconnection ']'
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /run/NetworkManager/system-connections/ovs-port-br-ex ']'
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /run/NetworkManager/system-connections/ovs-port-br-ex.nmconnection ']'
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /run/NetworkManager/system-connections/ovs-if-phys0 ']'
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /run/NetworkManager/system-connections/ovs-if-phys0.nmconnection ']'
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /run/NetworkManager/system-connections/ovs-port-phys0 ']'
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /run/NetworkManager/system-connections/ovs-port-phys0.nmconnection ']'
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + ovs-vsctl --timeout=30 --if-exists del-br br-ex
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + remove_ovn_bridges br-ex1 phys1
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + bridge_name=br-ex1
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + port_name=phys1
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + update_nm_conn_conf_files br-ex1 phys1
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + update_nm_conn_files_base /etc/NetworkManager/system-connections br-ex1 phys1
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + base_path=/etc/NetworkManager/system-connections
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + bridge_name=br-ex1
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + port_name=phys1
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + ovs_port=ovs-port-br-ex1
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + ovs_interface=ovs-if-br-ex1
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + default_port_name=ovs-port-phys1
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + bridge_interface_name=ovs-if-phys1
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + MANAGED_NM_CONN_FILES=($(echo "${base_path}"/{"$bridge_name","$ovs_interface","$ovs_port","$bridge_interface_name","$default_port_name"}{,.nmconnection}))
Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1701]: ++ nmcli -t -f device,type,uuid conn
Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.7479] device (ens5): Activation: starting connection 'ovs-if-phys0' (4229caea-b1fd-4522-97e4-df156a22a48d)
Feb 23 16:32:43 ip-10-0-136-68 ovs-vsctl[1765]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set Interface ens5 ofport_request=1
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1393]: ++ echo /etc/NetworkManager/system-connections/br-ex1 /etc/NetworkManager/system-connections/br-ex1.nmconnection /etc/NetworkManager/system-connections/ovs-if-br-ex1 /etc/NetworkManager/system-connections/ovs-if-br-ex1.nmconnection /etc/NetworkManager/system-connections/ovs-port-br-ex1 /etc/NetworkManager/system-connections/ovs-port-br-ex1.nmconnection /etc/NetworkManager/system-connections/ovs-if-phys1 /etc/NetworkManager/system-connections/ovs-if-phys1.nmconnection /etc/NetworkManager/system-connections/ovs-port-phys1 /etc/NetworkManager/system-connections/ovs-port-phys1.nmconnection
Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1703]: ++ awk -F : '{if($1=="ens5" && $2!~/^ovs*/) print $NF}'
Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.7507] ovs: ovs interface "patch-br-int-to-br-ex_ip-10-0-136-68.us-west-2.compute.internal" ((null)) failed: No usable peer 'patch-br-ex_ip-10-0-136-68.us-west-2.compute.internal-to-br-int' exists in 'system' datapath.
Feb 23 16:32:43 ip-10-0-136-68 ovs-vsctl[1877]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set Interface ens5 ofport_request=1
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + shopt -s nullglob
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + MANAGED_NM_CONN_FILES+=(${base_path}/*${MANAGED_NM_CONN_SUFFIX}.nmconnection ${base_path}/*${MANAGED_NM_CONN_SUFFIX})
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + shopt -u nullglob
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + rm_nm_conn_files
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/br-ex1 ']'
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/br-ex1.nmconnection ']'
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-br-ex1 ']'
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f
/etc/NetworkManager/system-connections/ovs-if-br-ex1.nmconnection ']' Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-br-ex1 ']' Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-br-ex1.nmconnection ']' Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-phys1 ']' Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-phys1.nmconnection ']' Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-phys1 ']' Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-phys1.nmconnection ']' Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + update_nm_conn_set_files br-ex1 phys1 Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + update_nm_conn_files_base /run/NetworkManager/system-connections br-ex1 phys1 Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + base_path=/run/NetworkManager/system-connections Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + bridge_name=br-ex1 Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + port_name=phys1 Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + ovs_port=ovs-port-br-ex1 
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + ovs_interface=ovs-if-br-ex1 Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + default_port_name=ovs-port-phys1 Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + bridge_interface_name=ovs-if-phys1 Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + MANAGED_NM_CONN_FILES=($(echo "${base_path}"/{"$bridge_name","$ovs_interface","$ovs_port","$bridge_interface_name","$default_port_name"}{,.nmconnection})) Feb 23 16:32:43 ip-10-0-136-68 kernel: device br-ex entered promiscuous mode Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1699]: + INTERFACE_CONNECTION_UUID=4229caea-b1fd-4522-97e4-df156a22a48d Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1699]: + '[' 4229caea-b1fd-4522-97e4-df156a22a48d == '' ']' Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.7508] device (br-ex): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed') Feb 23 16:32:43 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00063|netdev|WARN|failed to set MTU for network device br-ex: No such device Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1394]: ++ echo /run/NetworkManager/system-connections/br-ex1 /run/NetworkManager/system-connections/br-ex1.nmconnection /run/NetworkManager/system-connections/ovs-if-br-ex1 /run/NetworkManager/system-connections/ovs-if-br-ex1.nmconnection /run/NetworkManager/system-connections/ovs-port-br-ex1 /run/NetworkManager/system-connections/ovs-port-br-ex1.nmconnection /run/NetworkManager/system-connections/ovs-if-phys1 /run/NetworkManager/system-connections/ovs-if-phys1.nmconnection /run/NetworkManager/system-connections/ovs-port-phys1 /run/NetworkManager/system-connections/ovs-port-phys1.nmconnection Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1713]: ++ nmcli -t -f connection.slave-type conn show 4229caea-b1fd-4522-97e4-df156a22a48d Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.7517] device (ens5): state change: disconnected -> prepare (reason 
'none', sys-iface-state: 'managed') Feb 23 16:32:43 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00064|bridge|INFO|bridge br-ex: added interface br-ex on port 65534 Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + shopt -s nullglob Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + MANAGED_NM_CONN_FILES+=(${base_path}/*${MANAGED_NM_CONN_SUFFIX}.nmconnection ${base_path}/*${MANAGED_NM_CONN_SUFFIX}) Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + shopt -u nullglob Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + rm_nm_conn_files Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /run/NetworkManager/system-connections/br-ex1 ']' Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /run/NetworkManager/system-connections/br-ex1.nmconnection ']' Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /run/NetworkManager/system-connections/ovs-if-br-ex1 ']' Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /run/NetworkManager/system-connections/ovs-if-br-ex1.nmconnection ']' Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /run/NetworkManager/system-connections/ovs-port-br-ex1 ']' Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /run/NetworkManager/system-connections/ovs-port-br-ex1.nmconnection ']' Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in 
"${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /run/NetworkManager/system-connections/ovs-if-phys1 ']' Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /run/NetworkManager/system-connections/ovs-if-phys1.nmconnection ']' Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /run/NetworkManager/system-connections/ovs-port-phys1 ']' Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /run/NetworkManager/system-connections/ovs-port-phys1.nmconnection ']' Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + ovs-vsctl --timeout=30 --if-exists del-br br-ex1 Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + echo 'OVS configuration successfully reverted' Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: OVS configuration successfully reverted Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + reload_profiles_nm '' '' Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' 0 -eq 0 ']' Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + return Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + print_state Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + echo 'Current device, connection, interface and routing state:' Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: Current device, connection, interface and routing state: Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1714]: ++ awk -F : '{print $NF}' Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.7519] device (ens5): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Feb 23 16:32:43 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00065|bridge|INFO|bridge br-ex: using 
datapath ID 000002ea92f9d3f3 Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1396]: + nmcli -g all device Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1699]: + INTERFACE_OVS_SLAVE_TYPE=ovs-port Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1699]: + '[' ovs-port '!=' ovs-port ']' Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.7587] device (ens5): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Feb 23 16:32:43 ip-10-0-136-68 chronyd[912]: Source 169.254.169.123 online Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1397]: + grep -v unmanaged Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1397]: ens5:ethernet:connected:full:full:/org/freedesktop/NetworkManager/Devices/4:Wired connection 1:eb99b8bd-8e1f-3f41-845b-962703e428f7:/org/freedesktop/NetworkManager/ActiveConnection/1 Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1397]: patch-br-int-to-br-ex_ip-10-0-136-68.us-west-2.compute.internal:ovs-interface:connected:limited:limited:/org/freedesktop/NetworkManager/Devices/7::: Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1720]: ++ nmcli -t -f connection.master conn show 4229caea-b1fd-4522-97e4-df156a22a48d Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.7591] device (ens5): Activation: connection 'ovs-if-phys0' enslaved, continuing activation Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + nmcli -g all connection Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1721]: ++ awk -F : '{print $NF}' Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.7594] device (ens5): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed') Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1401]: Wired connection 1:eb99b8bd-8e1f-3f41-845b-962703e428f7:802-3-ethernet:1677169961:Thu Feb 23 16\:32\:41 2023:yes:-999:no:/org/freedesktop/NetworkManager/Settings/1:yes:ens5:activated:/org/freedesktop/NetworkManager/ActiveConnection/1::/run/NetworkManager/system-connections/Wired connection 
1.nmconnection Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1699]: + PORT=58f1c73f-4bc8-4311-86a8-c37ad9528b56 Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1699]: + '[' 58f1c73f-4bc8-4311-86a8-c37ad9528b56 == '' ']' Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.7645] ovs: ovs interface "patch-br-int-to-br-ex_ip-10-0-136-68.us-west-2.compute.internal" ((null)) failed: No usable peer 'patch-br-ex_ip-10-0-136-68.us-west-2.compute.internal-to-br-int' exists in 'system' datapath. Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + ip -d address show Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1731]: ++ nmcli -t -f device,type,uuid conn Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.7676] ovs: ovs interface "patch-br-int-to-br-ex_ip-10-0-136-68.us-west-2.compute.internal" ((null)) failed: No usable peer 'patch-br-ex_ip-10-0-136-68.us-west-2.compute.internal-to-br-int' exists in 'system' datapath. Feb 23 16:32:43 ip-10-0-136-68 mco-hostname[2052]: waiting for non-localhost hostname to be assigned Feb 23 16:32:43 ip-10-0-136-68 mco-hostname[2052]: node identified as ip-10-0-136-68 Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1405]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1405]: link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0 minmtu 0 maxmtu 0 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1405]: inet 127.0.0.1/8 scope host lo Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1405]: valid_lft forever preferred_lft forever Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1405]: inet6 ::1/128 scope host Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1405]: valid_lft forever preferred_lft forever Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1405]: 2: ens5: mtu 9001 qdisc mq state UP group default qlen 1000 Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1405]: 
link/ether 02:ea:92:f9:d3:f3 brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 128 maxmtu 9216 numtxqueues 4 numrxqueues 4 gso_max_size 65536 gso_max_segs 65535 Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1405]: inet 10.0.136.68/19 brd 10.0.159.255 scope global dynamic noprefixroute ens5 Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1405]: valid_lft 3600sec preferred_lft 3600sec Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1405]: inet6 fe80::c8e8:d07:4fa0:2dbc/64 scope link tentative noprefixroute Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1405]: valid_lft forever preferred_lft forever Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1405]: 3: ovs-system: mtu 1500 qdisc noop state DOWN group default qlen 1000 Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1405]: link/ether b2:42:31:ac:59:9d brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65535 Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1405]: openvswitch numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1405]: 4: genev_sys_6081: mtu 65000 qdisc noqueue master ovs-system state UNKNOWN group default qlen 1000 Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1405]: link/ether 42:2c:b6:47:64:0c brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65465 Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1405]: geneve external id 0 ttl auto dstport 6081 udp6zerocsumrx Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1405]: openvswitch_slave numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1405]: inet6 fe80::402c:b6ff:fe47:640c/64 scope link tentative Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1405]: valid_lft forever preferred_lft forever Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1405]: 5: br-int: mtu 8901 qdisc noop state DOWN group default qlen 1000 Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1405]: link/ether 1e:70:f2:fd:64:95 brd ff:ff:ff:ff:ff:ff promiscuity 1 
minmtu 68 maxmtu 65535 Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1405]: openvswitch numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1405]: 6: ovn-k8s-mp0: mtu 8901 qdisc noop state DOWN group default qlen 1000 Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1732]: ++ awk -F : '{if( ($1=="58f1c73f-4bc8-4311-86a8-c37ad9528b56" || $3=="58f1c73f-4bc8-4311-86a8-c37ad9528b56") && $2~/^ovs*/) print $NF}' Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.7816] device (br-ex): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed') Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1405]: link/ether 2e:5d:2b:01:25:48 brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65535 Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1405]: openvswitch numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 Feb 23 16:32:43 ip-10-0-136-68 rpc.statd[2063]: Version 2.3.3 starting Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1699]: + PORT_CONNECTION_UUID=58f1c73f-4bc8-4311-86a8-c37ad9528b56 Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1699]: + '[' 58f1c73f-4bc8-4311-86a8-c37ad9528b56 == '' ']' Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.7817] device (br-ex): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed') Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + ip route show Feb 23 16:32:43 ip-10-0-136-68 rpc.statd[2063]: Flags: TI-RPC Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1752]: ++ nmcli -t -f connection.slave-type conn show 58f1c73f-4bc8-4311-86a8-c37ad9528b56 Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.7823] device (br-ex): Activation: successful, device activated. 
Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1406]: default via 10.0.128.1 dev ens5 proto dhcp src 10.0.136.68 metric 100 Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1406]: 10.0.128.0/19 dev ens5 proto kernel scope link src 10.0.136.68 metric 100 Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1753]: ++ awk -F : '{print $NF}' Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.9160] device (ens5): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed') Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + ip -6 route show Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1699]: + PORT_OVS_SLAVE_TYPE=ovs-bridge Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1699]: + '[' ovs-bridge '!=' ovs-bridge ']' Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.9162] device (ens5): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed') Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1407]: ::1 dev lo proto kernel metric 256 pref medium Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1407]: fe80::/64 dev genev_sys_6081 proto kernel metric 256 pref medium Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1407]: fe80::/64 dev ens5 proto kernel metric 1024 pref medium Feb 23 16:32:43 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:43.861057012Z" level=info msg="Starting CRI-O, version: 1.25.2-6.rhaos4.12.git3c4e50c.el8, git: unknown(clean)" Feb 23 16:32:43 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:43.861174528Z" level=info msg="Node configuration value for hugetlb cgroup is true" Feb 23 16:32:43 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:43.861182193Z" level=info msg="Node configuration value for pid cgroup is true" Feb 23 16:32:43 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:43.861223879Z" level=info msg="Node configuration value for memoryswap cgroup is true" Feb 23 16:32:43 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:43.861229551Z" level=info msg="Node configuration value 
for cgroup v2 is false" Feb 23 16:32:43 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:43.870060746Z" level=info msg="Node configuration value for systemd CollectMode is true" Feb 23 16:32:43 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:43.875089427Z" level=info msg="Node configuration value for systemd AllowedCPUs is true" Feb 23 16:32:43 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:43.879645363Z" level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL" Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1758]: ++ nmcli -t -f connection.master conn show 58f1c73f-4bc8-4311-86a8-c37ad9528b56 Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.9166] device (ens5): Activation: successful, device activated. Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + convert_to_bridge ens5 br-ex phys0 48 Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + local iface=ens5 Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + local bridge_name=br-ex Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + local port_name=phys0 Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + local bridge_metric=48 Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + local ovs_port=ovs-port-br-ex Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + local ovs_interface=ovs-if-br-ex Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + local default_port_name=ovs-port-phys0 Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + local bridge_interface_name=ovs-if-phys0 Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' ens5 = br-ex ']' Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + nm_config_changed=1 Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -z ens5 ']' Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + iface_mac=02:ea:92:f9:d3:f3 Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + echo 
'MAC address found for iface: ens5: 02:ea:92:f9:d3:f3' Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: MAC address found for iface: ens5: 02:ea:92:f9:d3:f3 Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1759]: ++ awk -F : '{print $NF}' Feb 23 16:32:42 ip-10-0-136-68 systemd[1]: Starting Generate console-login-helper-messages issue snippet... Feb 23 16:32:43 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:43.957465287Z" level=info msg="Checkpoint/restore support disabled" Feb 23 16:32:43 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:43.957487898Z" level=info msg="Using seccomp default profile when unspecified: true" Feb 23 16:32:43 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:43.957494159Z" level=info msg="Using the internal default seccomp profile" Feb 23 16:32:43 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:43.957499267Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time" Feb 23 16:32:43 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:43.957504273Z" level=info msg="No blockio config file specified, blockio not configured" Feb 23 16:32:43 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:43.957508699Z" level=info msg="RDT not available in the host system" Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1411]: ++ awk '{print $5; exit}' Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1699]: + BRIDGE_NAME=br-ex Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1699]: + '[' br-ex '!=' br-ex ']' Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1699]: + ovs-vsctl list interface ens5 Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1699]: + CONFIGURATION_FILE=/run/ofport_requests.br-ex Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1699]: + declare -A INTERFACES Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1699]: + '[' -f /run/ofport_requests.br-ex ']' Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1699]: + echo 'Sourcing configuration file '\''/run/ofport_requests.br-ex'\'' with contents:' Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1699]: 
Sourcing configuration file '/run/ofport_requests.br-ex' with contents: Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1699]: + cat /run/ofport_requests.br-ex Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.9752] device (br-ex): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed') Feb 23 16:32:43 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:43.964001041Z" level=info msg="Conmon does support the --sync option" Feb 23 16:32:43 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:43.964014990Z" level=info msg="Conmon does support the --log-global-size-max option" Feb 23 16:32:43 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:43.966960642Z" level=info msg="Conmon does support the --sync option" Feb 23 16:32:43 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:43.967098427Z" level=info msg="Conmon does support the --log-global-size-max option" Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1410]: ++ ip link show ens5 Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1764]: declare -A INTERFACES=([ens5]="1" ) Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.9754] device (br-ex): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed') Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + iface_mtu=9001 Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + [[ -z 9001 ]] Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + echo 'MTU found for iface: ens5: 9001' Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: MTU found for iface: ens5: 9001 Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1699]: + source /run/ofport_requests.br-ex Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1699]: ++ INTERFACES=([ens5]="1") Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1699]: ++ declare -A INTERFACES Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1699]: + '[' a ']' Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1699]: + ovs-vsctl set Interface ens5 ofport_request=1 Feb 23 16:32:43 ip-10-0-136-68 
nm-dispatcher[1699]: + declare -p INTERFACES Feb 23 16:32:42 ip-10-0-136-68 NetworkManager[1147]: [1677169962.9758] device (br-ex): Activation: successful, device activated. Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1413]: ++ nmcli --fields UUID,DEVICE conn show --active Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1838]: + [[ OVNKubernetes != \O\V\N\K\u\b\e\r\n\e\t\e\s ]] Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1838]: + INTERFACE_NAME=ens5 Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1838]: + OPERATION=pre-up Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1838]: + '[' pre-up '!=' pre-up ']' Feb 23 16:32:43 ip-10-0-136-68 NetworkManager[1147]: [1677169963.0098] audit: op="connection-update" uuid="020c436d-b861-4028-95bc-c55069bb3929" name="br-ex" args="connection.autoconnect,connection.timestamp" pid=1704 uid=0 result="success" Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1414]: ++ awk '/\sens5\s*$/ {print $1}' Feb 23 16:32:43 ip-10-0-136-68 nm-dispatcher[1840]: ++ nmcli -t -f device,type,uuid conn Feb 23 16:32:43 ip-10-0-136-68 NetworkManager[1147]: [1677169963.0607] agent-manager: agent[d56d00e027809ab7,:1.88/nmcli-connect/0]: agent registered Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + old_conn=eb99b8bd-8e1f-3f41-845b-962703e428f7 Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + [[ -z eb99b8bd-8e1f-3f41-845b-962703e428f7 ]] Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + nmcli connection show br-ex Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + ovs-vsctl --timeout=30 --if-exists del-br br-ex Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + add_nm_conn type ovs-bridge con-name br-ex conn.interface br-ex 802-3-ethernet.mtu 9001 connection.autoconnect-slaves 1 Feb 23 16:32:43 ip-10-0-136-68 configure-ovs.sh[1269]: + nmcli c add type ovs-bridge con-name br-ex conn.interface br-ex 802-3-ethernet.mtu 9001 connection.autoconnect-slaves 1 connection.autoconnect no Feb 23 16:32:44 ip-10-0-136-68 
nm-dispatcher[1841]: ++ awk -F : '{if($1=="ens5" && $2!~/^ovs*/) print $NF}'
Feb 23 16:32:43 ip-10-0-136-68 NetworkManager[1147]: [1677169963.0612] device (ens5): state change: ip-check -> deactivating (reason 'new-activation', sys-iface-state: 'managed')
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1423]: Connection 'br-ex' (020c436d-b861-4028-95bc-c55069bb3929) successfully added.
Feb 23 16:32:44 ip-10-0-136-68 nm-dispatcher[1838]: + INTERFACE_CONNECTION_UUID=4229caea-b1fd-4522-97e4-df156a22a48d
Feb 23 16:32:44 ip-10-0-136-68 nm-dispatcher[1838]: + '[' 4229caea-b1fd-4522-97e4-df156a22a48d == '' ']'
Feb 23 16:32:43 ip-10-0-136-68 NetworkManager[1147]: [1677169963.0614] manager: NetworkManager state is now CONNECTED_LOCAL
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + nmcli connection show ovs-port-phys0
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + ovs-vsctl --timeout=30 --if-exists del-port br-ex ens5
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + add_nm_conn type ovs-port conn.interface ens5 master br-ex con-name ovs-port-phys0 connection.autoconnect-slaves 1
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + nmcli c add type ovs-port conn.interface ens5 master br-ex con-name ovs-port-phys0 connection.autoconnect-slaves 1 connection.autoconnect no
Feb 23 16:32:44 ip-10-0-136-68 nm-dispatcher[1846]: ++ nmcli -t -f connection.slave-type conn show 4229caea-b1fd-4522-97e4-df156a22a48d
Feb 23 16:32:43 ip-10-0-136-68 NetworkManager[1147]: [1677169963.0614] device (ens5): releasing ovs interface ens5
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1432]: Connection 'ovs-port-phys0' (58f1c73f-4bc8-4311-86a8-c37ad9528b56) successfully added.
Feb 23 16:32:44 ip-10-0-136-68 nm-dispatcher[1847]: ++ awk -F : '{print $NF}'
Feb 23 16:32:43 ip-10-0-136-68 NetworkManager[1147]: [1677169963.0615] device (ens5): released from master device ens5
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + nmcli connection show ovs-port-br-ex
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + ovs-vsctl --timeout=30 --if-exists del-port br-ex br-ex
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + add_nm_conn type ovs-port conn.interface br-ex master br-ex con-name ovs-port-br-ex
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + nmcli c add type ovs-port conn.interface br-ex master br-ex con-name ovs-port-br-ex connection.autoconnect no
Feb 23 16:32:44 ip-10-0-136-68 nm-dispatcher[1838]: + INTERFACE_OVS_SLAVE_TYPE=ovs-port
Feb 23 16:32:44 ip-10-0-136-68 nm-dispatcher[1838]: + '[' ovs-port '!=' ovs-port ']'
Feb 23 16:32:43 ip-10-0-136-68 NetworkManager[1147]: [1677169963.0619] device (ens5): disconnecting for new activation request.
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1441]: Connection 'ovs-port-br-ex' (14d1a5a7-ae47-46f4-912b-c05ba0de1b74) successfully added.
Feb 23 16:32:44 ip-10-0-136-68 nm-dispatcher[1852]: ++ nmcli -t -f connection.master conn show 4229caea-b1fd-4522-97e4-df156a22a48d
Feb 23 16:32:43 ip-10-0-136-68 NetworkManager[1147]: [1677169963.0620] audit: op="connection-activate" uuid="4229caea-b1fd-4522-97e4-df156a22a48d" name="ovs-if-phys0" pid=1737 uid=0 result="success"
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + extra_phys_args=()
Feb 23 16:32:44 ip-10-0-136-68 nm-dispatcher[1853]: ++ awk -F : '{print $NF}'
Feb 23 16:32:43 ip-10-0-136-68 NetworkManager[1147]: [1677169963.0632] device (ens5): state change: deactivating -> disconnected (reason 'new-activation', sys-iface-state: 'managed')
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1445]: ++ nmcli --get-values connection.type conn show eb99b8bd-8e1f-3f41-845b-962703e428f7
Feb 23 16:32:44 ip-10-0-136-68 nm-dispatcher[1838]: + PORT=58f1c73f-4bc8-4311-86a8-c37ad9528b56
Feb 23 16:32:44 ip-10-0-136-68 nm-dispatcher[1838]: + '[' 58f1c73f-4bc8-4311-86a8-c37ad9528b56 == '' ']'
Feb 23 16:32:43 ip-10-0-136-68 NetworkManager[1147]: [1677169963.0696] device (ens5): Activation: starting connection 'ovs-if-phys0' (4229caea-b1fd-4522-97e4-df156a22a48d)
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' 802-3-ethernet == vlan ']'
Feb 23 16:32:44 ip-10-0-136-68 nm-dispatcher[1858]: ++ nmcli -t -f device,type,uuid conn
Feb 23 16:32:43 ip-10-0-136-68 NetworkManager[1147]: [1677169963.0707] ovs: ovs interface "patch-br-int-to-br-ex_ip-10-0-136-68.us-west-2.compute.internal" ((null)) failed: No usable peer 'patch-br-ex_ip-10-0-136-68.us-west-2.compute.internal-to-br-int' exists in 'system' datapath.
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1449]: ++ nmcli --get-values connection.type conn show eb99b8bd-8e1f-3f41-845b-962703e428f7
Feb 23 16:32:44 ip-10-0-136-68 nm-dispatcher[1859]: ++ awk -F : '{if( ($1=="58f1c73f-4bc8-4311-86a8-c37ad9528b56" || $3=="58f1c73f-4bc8-4311-86a8-c37ad9528b56") && $2~/^ovs*/) print $NF}'
Feb 23 16:32:43 ip-10-0-136-68 NetworkManager[1147]: [1677169963.0734] ovs: ovs interface "patch-br-int-to-br-ex_ip-10-0-136-68.us-west-2.compute.internal" ((null)) failed: No usable peer 'patch-br-ex_ip-10-0-136-68.us-west-2.compute.internal-to-br-int' exists in 'system' datapath.
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' 802-3-ethernet == bond ']'
Feb 23 16:32:44 ip-10-0-136-68 nm-dispatcher[1838]: + PORT_CONNECTION_UUID=58f1c73f-4bc8-4311-86a8-c37ad9528b56
Feb 23 16:32:44 ip-10-0-136-68 nm-dispatcher[1838]: + '[' 58f1c73f-4bc8-4311-86a8-c37ad9528b56 == '' ']'
Feb 23 16:32:43 ip-10-0-136-68 NetworkManager[1147]: [1677169963.0738] device (ens5): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1453]: ++ nmcli --get-values connection.type conn show eb99b8bd-8e1f-3f41-845b-962703e428f7
Feb 23 16:32:44 ip-10-0-136-68 nm-dispatcher[1864]: ++ nmcli -t -f connection.slave-type conn show 58f1c73f-4bc8-4311-86a8-c37ad9528b56
Feb 23 16:32:43 ip-10-0-136-68 NetworkManager[1147]: [1677169963.0740] manager: NetworkManager state is now CONNECTING
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' 802-3-ethernet == team ']'
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + iface_type=802-3-ethernet
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' '!' '' = 0 ']'
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + extra_phys_args+=(802-3-ethernet.cloned-mac-address "${iface_mac}")
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + nmcli connection show ovs-if-phys0
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + ovs-vsctl --timeout=30 --if-exists destroy interface ens5
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + add_nm_conn type 802-3-ethernet conn.interface ens5 master ovs-port-phys0 con-name ovs-if-phys0 connection.autoconnect-priority 100 connection.autoconnect-slaves 1 802-3-ethernet.mtu 9001 802-3-ethernet.cloned-mac-address 02:ea:92:f9:d3:f3
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + nmcli c add type 802-3-ethernet conn.interface ens5 master ovs-port-phys0 con-name ovs-if-phys0 connection.autoconnect-priority 100 connection.autoconnect-slaves 1 802-3-ethernet.mtu 9001 802-3-ethernet.cloned-mac-address 02:ea:92:f9:d3:f3 connection.autoconnect no
Feb 23 16:32:44 ip-10-0-136-68 nm-dispatcher[1865]: ++ awk -F : '{print $NF}'
Feb 23 16:32:43 ip-10-0-136-68 NetworkManager[1147]: [1677169963.0742] device (ens5): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1462]: Connection 'ovs-if-phys0' (4229caea-b1fd-4522-97e4-df156a22a48d) successfully added.
Feb 23 16:32:44 ip-10-0-136-68 nm-dispatcher[1838]: + PORT_OVS_SLAVE_TYPE=ovs-bridge
Feb 23 16:32:44 ip-10-0-136-68 nm-dispatcher[1838]: + '[' ovs-bridge '!=' ovs-bridge ']'
Feb 23 16:32:43 ip-10-0-136-68 NetworkManager[1147]: [1677169963.0758] device (ens5): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1466]: ++ nmcli -g connection.uuid conn show ovs-if-phys0
Feb 23 16:32:44 ip-10-0-136-68 nm-dispatcher[1870]: ++ nmcli -t -f connection.master conn show 58f1c73f-4bc8-4311-86a8-c37ad9528b56
Feb 23 16:32:43 ip-10-0-136-68 NetworkManager[1147]: [1677169963.0770] device (ens5): Activation: connection 'ovs-if-phys0' enslaved, continuing activation
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + new_conn=4229caea-b1fd-4522-97e4-df156a22a48d
Feb 23 16:32:44 ip-10-0-136-68 nm-dispatcher[1871]: ++ awk -F : '{print $NF}'
Feb 23 16:32:43 ip-10-0-136-68 NetworkManager[1147]: [1677169963.0774] device (ens5): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed')
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1470]: ++ nmcli -g connection.uuid conn show ovs-port-br-ex
Feb 23 16:32:44 ip-10-0-136-68 nm-dispatcher[1838]: + BRIDGE_NAME=br-ex
Feb 23 16:32:44 ip-10-0-136-68 nm-dispatcher[1838]: + '[' br-ex '!=' br-ex ']'
Feb 23 16:32:44 ip-10-0-136-68 nm-dispatcher[1838]: + ovs-vsctl list interface ens5
Feb 23 16:32:44 ip-10-0-136-68 nm-dispatcher[1838]: + CONFIGURATION_FILE=/run/ofport_requests.br-ex
Feb 23 16:32:44 ip-10-0-136-68 nm-dispatcher[1838]: + declare -A INTERFACES
Feb 23 16:32:44 ip-10-0-136-68 nm-dispatcher[1838]: + '[' -f /run/ofport_requests.br-ex ']'
Feb 23 16:32:44 ip-10-0-136-68 nm-dispatcher[1838]: + echo 'Sourcing configuration file '\''/run/ofport_requests.br-ex'\'' with contents:'
Feb 23 16:32:44 ip-10-0-136-68 nm-dispatcher[1838]: Sourcing configuration file '/run/ofport_requests.br-ex' with contents:
Feb 23 16:32:44 ip-10-0-136-68 nm-dispatcher[1838]: + cat /run/ofport_requests.br-ex
Feb 23 16:32:43 ip-10-0-136-68 NetworkManager[1147]: [1677169963.0830] ovs: ovs interface "patch-br-int-to-br-ex_ip-10-0-136-68.us-west-2.compute.internal" ((null)) failed: No usable peer 'patch-br-ex_ip-10-0-136-68.us-west-2.compute.internal-to-br-int' exists in 'system' datapath.
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + ovs_port_conn=14d1a5a7-ae47-46f4-912b-c05ba0de1b74
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + replace_connection_master eb99b8bd-8e1f-3f41-845b-962703e428f7 4229caea-b1fd-4522-97e4-df156a22a48d
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + local old=eb99b8bd-8e1f-3f41-845b-962703e428f7
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + local new=4229caea-b1fd-4522-97e4-df156a22a48d
Feb 23 16:32:44 ip-10-0-136-68 nm-dispatcher[1876]: declare -A INTERFACES=([ens5]="1" )
Feb 23 16:32:43 ip-10-0-136-68 NetworkManager[1147]: [1677169963.0858] ovs: ovs interface "patch-br-int-to-br-ex_ip-10-0-136-68.us-west-2.compute.internal" ((null)) failed: No usable peer 'patch-br-ex_ip-10-0-136-68.us-west-2.compute.internal-to-br-int' exists in 'system' datapath.
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1474]: ++ nmcli -g UUID connection show
Feb 23 16:32:44 ip-10-0-136-68 nm-dispatcher[1838]: + source /run/ofport_requests.br-ex
Feb 23 16:32:44 ip-10-0-136-68 nm-dispatcher[1838]: ++ INTERFACES=([ens5]="1")
Feb 23 16:32:44 ip-10-0-136-68 nm-dispatcher[1838]: ++ declare -A INTERFACES
Feb 23 16:32:44 ip-10-0-136-68 nm-dispatcher[1838]: + '[' a ']'
Feb 23 16:32:44 ip-10-0-136-68 nm-dispatcher[1838]: + ovs-vsctl set Interface ens5 ofport_request=1
Feb 23 16:32:44 ip-10-0-136-68 nm-dispatcher[1838]: + declare -p INTERFACES
Feb 23 16:32:43 ip-10-0-136-68 NetworkManager[1147]: [1677169963.4199] device (ens5): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed')
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + for conn_uuid in $(nmcli -g UUID connection show)
Feb 23 16:32:44 ip-10-0-136-68 nm-dispatcher[1938]: + [[ OVNKubernetes != \O\V\N\K\u\b\e\r\n\e\t\e\s ]]
Feb 23 16:32:44 ip-10-0-136-68 nm-dispatcher[1938]: + INTERFACE_NAME=br-ex
Feb 23 16:32:44 ip-10-0-136-68 nm-dispatcher[1938]: + OPERATION=pre-up
Feb 23 16:32:44 ip-10-0-136-68 nm-dispatcher[1938]: + '[' pre-up '!=' pre-up ']'
Feb 23 16:32:43 ip-10-0-136-68 NetworkManager[1147]: [1677169963.4200] device (ens5): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed')
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1478]: ++ nmcli -g connection.master connection show uuid eb99b8bd-8e1f-3f41-845b-962703e428f7
Feb 23 16:32:44 ip-10-0-136-68 nm-dispatcher[1940]: ++ nmcli -t -f device,type,uuid conn
Feb 23 16:32:43 ip-10-0-136-68 NetworkManager[1147]: [1677169963.4203] manager: NetworkManager state is now CONNECTED_LOCAL
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' '' '!=' eb99b8bd-8e1f-3f41-845b-962703e428f7 ']'
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + continue
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + for conn_uuid in $(nmcli -g UUID connection show)
Feb 23 16:32:44 ip-10-0-136-68 nm-dispatcher[1941]: ++ awk -F : '{if($1=="br-ex" && $2!~/^ovs*/) print $NF}'
Feb 23 16:32:43 ip-10-0-136-68 NetworkManager[1147]: [1677169963.4206] device (ens5): Activation: successful, device activated.
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1482]: ++ nmcli -g connection.master connection show uuid 020c436d-b861-4028-95bc-c55069bb3929
Feb 23 16:32:44 ip-10-0-136-68 nm-dispatcher[1938]: + INTERFACE_CONNECTION_UUID=
Feb 23 16:32:44 ip-10-0-136-68 nm-dispatcher[1938]: + '[' '' == '' ']'
Feb 23 16:32:44 ip-10-0-136-68 nm-dispatcher[1938]: + exit 0
Feb 23 16:32:43 ip-10-0-136-68 NetworkManager[1147]: [1677169963.4381] audit: op="connection-update" uuid="4229caea-b1fd-4522-97e4-df156a22a48d" name="ovs-if-phys0" args="connection.autoconnect,connection.timestamp" pid=1878 uid=0 result="success"
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' '' '!=' eb99b8bd-8e1f-3f41-845b-962703e428f7 ']'
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + continue
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + for conn_uuid in $(nmcli -g UUID connection show)
Feb 23 16:32:44 ip-10-0-136-68 nm-dispatcher[2007]: Error: Device '' not found.
Feb 23 16:32:43 ip-10-0-136-68 NetworkManager[1147]: [1677169963.4977] agent-manager: agent[e08bbcb8e7cab869,:1.105/nmcli-connect/0]: agent registered
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1486]: ++ nmcli -g connection.master connection show uuid 4229caea-b1fd-4522-97e4-df156a22a48d
Feb 23 16:32:43 ip-10-0-136-68 NetworkManager[1147]: [1677169963.4985] device (br-ex): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' 58f1c73f-4bc8-4311-86a8-c37ad9528b56 '!=' eb99b8bd-8e1f-3f41-845b-962703e428f7 ']'
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + continue
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + for conn_uuid in $(nmcli -g UUID connection show)
Feb 23 16:32:43 ip-10-0-136-68 NetworkManager[1147]: [1677169963.4988] device (br-ex): state change: unavailable -> disconnected (reason 'user-requested', sys-iface-state: 'managed')
Feb 23 16:32:44 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:44.193730316Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 16:32:44 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:44.193761709Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1490]: ++ nmcli -g connection.master connection show uuid 14d1a5a7-ae47-46f4-912b-c05ba0de1b74
Feb 23 16:32:43 ip-10-0-136-68 NetworkManager[1147]: [1677169963.4992] device (br-ex): Activation: starting connection 'ovs-if-br-ex' (942f5a8b-5fc3-493a-b9bc-9ab71ef0924c)
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' br-ex '!=' eb99b8bd-8e1f-3f41-845b-962703e428f7 ']'
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + continue
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + for conn_uuid in $(nmcli -g UUID connection show)
Feb 23 16:32:43 ip-10-0-136-68 NetworkManager[1147]: [1677169963.4992] audit: op="connection-activate" uuid="942f5a8b-5fc3-493a-b9bc-9ab71ef0924c" name="ovs-if-br-ex" pid=1903 uid=0 result="success"
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1494]: ++ nmcli -g connection.master connection show uuid 58f1c73f-4bc8-4311-86a8-c37ad9528b56
Feb 23 16:32:43 ip-10-0-136-68 NetworkManager[1147]: [1677169963.4993] device (br-ex): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' br-ex '!=' eb99b8bd-8e1f-3f41-845b-962703e428f7 ']'
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + continue
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + replace_connection_master ens5 4229caea-b1fd-4522-97e4-df156a22a48d
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + local old=ens5
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + local new=4229caea-b1fd-4522-97e4-df156a22a48d
Feb 23 16:32:43 ip-10-0-136-68 NetworkManager[1147]: [1677169963.4995] manager: NetworkManager state is now CONNECTING
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1498]: ++ nmcli -g UUID connection show
Feb 23 16:32:43 ip-10-0-136-68 NetworkManager[1147]: [1677169963.4997] device (br-ex): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + for conn_uuid in $(nmcli -g UUID connection show)
Feb 23 16:32:43 ip-10-0-136-68 NetworkManager[1147]: [1677169963.4999] device (br-ex): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Feb 23 16:32:44 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:44.249930473Z" level=warning msg="Could not restore sandbox 01ac120e6f0fdd3040e8bdaa8e582520e75a16d62910ceec0a560196072d627a: failed to Statfs \"/var/run/netns/a92af5d6-48f2-4cbc-ab67-5e7aee609bd3\": no such file or directory"
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1502]: ++ nmcli -g connection.master connection show uuid eb99b8bd-8e1f-3f41-845b-962703e428f7
Feb 23 16:32:43 ip-10-0-136-68 NetworkManager[1147]: [1677169963.5009] device (br-ex): Activation: connection 'ovs-if-br-ex' enslaved, continuing activation
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' '' '!=' ens5 ']'
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + continue
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + for conn_uuid in $(nmcli -g UUID connection show)
Feb 23 16:32:43 ip-10-0-136-68 NetworkManager[1147]: [1677169963.5095] device (br-ex): set-hw-addr: set-cloned MAC address to 02:EA:92:F9:D3:F3 (02:EA:92:F9:D3:F3)
Feb 23 16:32:44 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:44.270080472Z" level=warning msg="Deleting all containers under sandbox 01ac120e6f0fdd3040e8bdaa8e582520e75a16d62910ceec0a560196072d627a since it could not be restored"
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1506]: ++ nmcli -g connection.master connection show uuid 020c436d-b861-4028-95bc-c55069bb3929
Feb 23 16:32:43 ip-10-0-136-68 NetworkManager[1147]: [1677169963.5105] device (br-ex): carrier: link connected
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' '' '!=' ens5 ']'
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + continue
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + for conn_uuid in $(nmcli -g UUID connection show)
Feb 23 16:32:43 ip-10-0-136-68 NetworkManager[1147]: [1677169963.5113] ovs: ovs interface "patch-br-int-to-br-ex_ip-10-0-136-68.us-west-2.compute.internal" ((null)) failed: No usable peer 'patch-br-ex_ip-10-0-136-68.us-west-2.compute.internal-to-br-int' exists in 'system' datapath.
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1510]: ++ nmcli -g connection.master connection show uuid 4229caea-b1fd-4522-97e4-df156a22a48d
Feb 23 16:32:43 ip-10-0-136-68 systemd-udevd[1920]: Using default interface naming scheme 'rhel-8.0'.
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' 58f1c73f-4bc8-4311-86a8-c37ad9528b56 '!=' ens5 ']'
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + continue
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + for conn_uuid in $(nmcli -g UUID connection show)
Feb 23 16:32:43 ip-10-0-136-68 systemd-udevd[1920]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 23 16:32:44 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:44.290210404Z" level=warning msg="Could not restore sandbox f879576786b088968433f4f9b9154c2d899bf400bae1da485c2314f3fec845f3: failed to Statfs \"/var/run/netns/bd432261-d919-463e-9ad8-453be2170666\": no such file or directory"
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1514]: ++ nmcli -g connection.master connection show uuid 14d1a5a7-ae47-46f4-912b-c05ba0de1b74
Feb 23 16:32:43 ip-10-0-136-68 NetworkManager[1147]: [1677169963.5145] dhcp4 (br-ex): activation: beginning transaction (timeout in 45 seconds)
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' br-ex '!=' ens5 ']'
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + continue
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + for conn_uuid in $(nmcli -g UUID connection show)
Feb 23 16:32:43 ip-10-0-136-68 NetworkManager[1147]: [1677169963.5165] dhcp4 (br-ex): state changed new lease, address=10.0.136.68
Feb 23 16:32:44 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:44.303233230Z" level=warning msg="Deleting all containers under sandbox f879576786b088968433f4f9b9154c2d899bf400bae1da485c2314f3fec845f3 since it could not be restored"
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1518]: ++ nmcli -g connection.master connection show uuid 58f1c73f-4bc8-4311-86a8-c37ad9528b56
Feb 23 16:32:43 ip-10-0-136-68 NetworkManager[1147]: [1677169963.5168] ovs: ovs interface "patch-br-int-to-br-ex_ip-10-0-136-68.us-west-2.compute.internal" ((null)) failed: No usable peer 'patch-br-ex_ip-10-0-136-68.us-west-2.compute.internal-to-br-int' exists in 'system' datapath.
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' br-ex '!=' ens5 ']'
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + continue
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + nmcli connection show ovs-if-br-ex
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + ovs-vsctl --timeout=30 --if-exists destroy interface br-ex
Feb 23 16:32:43 ip-10-0-136-68 NetworkManager[1147]: [1677169963.5170] policy: set 'ovs-if-br-ex' (br-ex) as default for IPv4 routing and DNS
Feb 23 16:32:44 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:44.333769001Z" level=warning msg="Could not restore sandbox 0c751590d84e3dc481ea416fe23ca37243e592f82e01975b9e82ac6743f91e9c: failed to Statfs \"/var/run/netns/f36753b3-0496-4a07-9706-b1775a079ccf\": no such file or directory"
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1527]: + nmcli --fields ipv4.method,ipv6.method conn show eb99b8bd-8e1f-3f41-845b-962703e428f7
Feb 23 16:32:43 ip-10-0-136-68 NetworkManager[1147]: [1677169963.5207] device (br-ex): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed')
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1528]: + grep manual
Feb 23 16:32:43 ip-10-0-136-68 NetworkManager[1147]: [1677169963.5825] device (br-ex): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed')
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + extra_if_brex_args=
Feb 23 16:32:43 ip-10-0-136-68 NetworkManager[1147]: [1677169963.5827] device (br-ex): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed')
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1533]: ++ ip -j a show dev ens5
Feb 23 16:32:43 ip-10-0-136-68 NetworkManager[1147]: [1677169963.5829] manager: NetworkManager state is now CONNECTED_SITE
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1534]: ++ jq '.[0].addr_info | map(. | select(.family == "inet")) | length'
Feb 23 16:32:44 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:44.351835646Z" level=warning msg="Deleting all containers under sandbox 0c751590d84e3dc481ea416fe23ca37243e592f82e01975b9e82ac6743f91e9c since it could not be restored"
Feb 23 16:32:43 ip-10-0-136-68 NetworkManager[1147]: [1677169963.5832] device (br-ex): Activation: successful, device activated.
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + num_ipv4_addrs=1
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' 1 -gt 0 ']'
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + extra_if_brex_args+='ipv4.may-fail no '
Feb 23 16:32:43 ip-10-0-136-68 NetworkManager[1147]: [1677169963.5835] manager: NetworkManager state is now CONNECTED_GLOBAL
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1536]: ++ ip -j a show dev ens5
Feb 23 16:32:43 ip-10-0-136-68 NetworkManager[1147]: [1677169963.6060] audit: op="connection-update" uuid="942f5a8b-5fc3-493a-b9bc-9ab71ef0924c" name="ovs-if-br-ex" args="connection.autoconnect,connection.timestamp" pid=1946 uid=0 result="success"
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1537]: ++ jq '.[0].addr_info | map(. | select(.family == "inet6" and .scope != "link")) | length'
Feb 23 16:32:43 ip-10-0-136-68 NetworkManager[1147]: [1677169963.6964] audit: op="connections-reload" pid=2035 uid=0 result="success"
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + num_ip6_addrs=0
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' 0 -gt 0 ']'
Feb 23 16:32:43 ip-10-0-136-68 systemd[1]: ovs-configuration.service: Succeeded.
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1538]: ++ nmcli --get-values ipv4.dhcp-client-id conn show eb99b8bd-8e1f-3f41-845b-962703e428f7
Feb 23 16:32:43 ip-10-0-136-68 systemd[1]: Started Configures OVS with proper host networking configuration.
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + dhcp_client_id=
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -n '' ']'
Feb 23 16:32:43 ip-10-0-136-68 systemd[1]: ovs-configuration.service: Consumed 1.117s CPU time
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1542]: ++ nmcli --get-values ipv6.dhcp-duid conn show eb99b8bd-8e1f-3f41-845b-962703e428f7
Feb 23 16:32:43 ip-10-0-136-68 systemd[1]: Starting Wait for a non-localhost hostname...
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + dhcp6_client_id=
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -n '' ']'
Feb 23 16:32:43 ip-10-0-136-68 systemd[1]: Started Wait for a non-localhost hostname.
Feb 23 16:32:44 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:44.370309133Z" level=warning msg="Could not restore sandbox 35539c92883319ba9303dff4d62e621858c65895d17a031283ea570d6c0ffd51: failed to Statfs \"/var/run/netns/66097094-74f3-4cd1-b8ec-0513bfaa3c62\": no such file or directory"
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1546]: ++ nmcli --get-values ipv6.addr-gen-mode conn show eb99b8bd-8e1f-3f41-845b-962703e428f7
Feb 23 16:32:43 ip-10-0-136-68 systemd[1]: Reached target Network is Online.
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + ipv6_addr_gen_mode=stable-privacy
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -n stable-privacy ']'
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + extra_if_brex_args+='ipv6.addr-gen-mode stable-privacy '
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + add_nm_conn type ovs-interface slave-type ovs-port conn.interface br-ex master 14d1a5a7-ae47-46f4-912b-c05ba0de1b74 con-name ovs-if-br-ex 802-3-ethernet.mtu 9001 802-3-ethernet.cloned-mac-address 02:ea:92:f9:d3:f3 ipv4.route-metric 48 ipv6.route-metric 48 ipv4.may-fail no ipv6.addr-gen-mode stable-privacy
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + nmcli c add type ovs-interface slave-type ovs-port conn.interface br-ex master 14d1a5a7-ae47-46f4-912b-c05ba0de1b74 con-name ovs-if-br-ex 802-3-ethernet.mtu 9001 802-3-ethernet.cloned-mac-address 02:ea:92:f9:d3:f3 ipv4.route-metric 48 ipv6.route-metric 48 ipv4.may-fail no ipv6.addr-gen-mode stable-privacy connection.autoconnect no
Feb 23 16:32:43 ip-10-0-136-68 systemd[1]: Starting Dynamically sets the system reserved for the kubelet...
Feb 23 16:32:44 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:44.388764899Z" level=warning msg="Deleting all containers under sandbox 35539c92883319ba9303dff4d62e621858c65895d17a031283ea570d6c0ffd51 since it could not be restored"
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1550]: Connection 'ovs-if-br-ex' (942f5a8b-5fc3-493a-b9bc-9ab71ef0924c) successfully added.
Feb 23 16:32:43 ip-10-0-136-68 systemd[1]: iscsi.service: Unit cannot be reloaded because it is inactive.
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + configure_driver_options ens5
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + intf=ens5
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' '!' -f /sys/class/net/ens5/device/uevent ']'
Feb 23 16:32:43 ip-10-0-136-68 systemd[1]: Starting NFS status monitor for NFSv2/3 locking....
Feb 23 16:32:44 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:44.399338827Z" level=warning msg="Could not restore sandbox ff0a102645f986a9a470ba4bb4d46e818f68b24e5af270b266a49a64d8462689: failed to Statfs \"/var/run/netns/fe9ac55c-60a6-4c99-8e53-9a8d9c2dc37f\": no such file or directory"
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1555]: ++ cat /sys/class/net/ens5/device/uevent
Feb 23 16:32:43 ip-10-0-136-68 systemd[1]: Started Dynamically sets the system reserved for the kubelet.
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1556]: ++ grep DRIVER
Feb 23 16:32:43 ip-10-0-136-68 systemd[1]: Starting RPC Bind...
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1557]: ++ awk -F = '{print $2}'
Feb 23 16:32:43 ip-10-0-136-68 systemd[1]: Starting Container Runtime Interface for OCI (CRI-O)...
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + driver=ena
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + echo 'Driver name is' ena
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: Driver name is ena
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' ena = vmxnet3 ']'
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/ovnk/extra_bridge ']'
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' '!' -f /etc/ovnk/extra_bridge ']'
Feb 23 16:32:43 ip-10-0-136-68 systemd[1]: Started RPC Bind.
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1558]: + nmcli connection show br-ex1
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1558]: + nmcli connection show ovs-if-phys1
Feb 23 16:32:43 ip-10-0-136-68 systemd[1]: Started NFS status monitor for NFSv2/3 locking..
Feb 23 16:32:44 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:44.412921227Z" level=warning msg="Deleting all containers under sandbox ff0a102645f986a9a470ba4bb4d46e818f68b24e5af270b266a49a64d8462689 since it could not be restored"
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + ovs-vsctl --timeout=30 --if-exists del-br br0
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + connections=(br-ex ovs-if-phys0)
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/ovnk/extra_bridge ']'
Feb 23 16:32:43 ip-10-0-136-68 systemd[1]: console-login-helper-messages-issuegen.service: Succeeded.
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1568]: ++ nmcli -g NAME c
Feb 23 16:32:43 ip-10-0-136-68 systemd[1]: Started Generate console-login-helper-messages issue snippet.
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + IFS=
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + read -r connection
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + [[ Wired connection 1 == *\-\s\l\a\v\e\-\o\v\s\-\c\l\o\n\e ]]
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + IFS=
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + read -r connection
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + [[ br-ex == *\-\s\l\a\v\e\-\o\v\s\-\c\l\o\n\e ]]
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + IFS=
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + read -r connection
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + [[ ovs-if-br-ex == *\-\s\l\a\v\e\-\o\v\s\-\c\l\o\n\e ]]
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + IFS=
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + read -r connection
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + [[ ovs-if-phys0 == *\-\s\l\a\v\e\-\o\v\s\-\c\l\o\n\e ]]
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + IFS=
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + read -r connection
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + [[ ovs-port-br-ex == *\-\s\l\a\v\e\-\o\v\s\-\c\l\o\n\e ]]
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + IFS=
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + read -r connection
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + [[ ovs-port-phys0 == *\-\s\l\a\v\e\-\o\v\s\-\c\l\o\n\e ]]
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + IFS=
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + read -r connection
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + connections+=(ovs-if-br-ex)
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/ovnk/extra_bridge ']'
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + activate_nm_connections br-ex ovs-if-phys0 ovs-if-br-ex
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + connections=("$@")
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + local connections
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + for conn in "${connections[@]}"
Feb 23 16:32:43 ip-10-0-136-68 systemd[1]: console-login-helper-messages-issuegen.service: Consumed 16ms CPU time
Feb 23 16:32:44 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:44.440739884Z" level=warning msg="Could not restore sandbox 19bce822018f6e51c5cbfd0e2e3748745954602abb6a774120b1f9e6f36281ae: failed to Statfs \"/var/run/netns/a17440cb-5d23-467a-b4af-09ce6ea96f63\": no such file or directory"
Feb 23 16:32:44 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:44.475081294Z" level=warning msg="Deleting all containers under sandbox 19bce822018f6e51c5cbfd0e2e3748745954602abb6a774120b1f9e6f36281ae since it could not be restored"
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1576]: ++ nmcli -g connection.slave-type connection show br-ex
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + local slave_type=
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' '' = team ']'
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' '' = bond ']'
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + for conn in "${connections[@]}"
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1587]: ++ nmcli -g connection.slave-type connection show ovs-if-phys0
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + local slave_type=ovs-port
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' ovs-port = team ']'
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' ovs-port = bond ']'
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + for conn in "${connections[@]}"
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1591]: ++ nmcli -g connection.slave-type connection show ovs-if-br-ex
Feb 23 16:32:44 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:44.521474696Z" level=warning msg="Could not restore sandbox 7ef0a0e8143714f998828dd150fede6ba44e51b9926781a8714cc5fb0746afc8: failed to Statfs \"/var/run/netns/01bf971f-641c-4c4c-8b63-110b0780e79c\": no such file or directory"
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + local slave_type=ovs-port
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' ovs-port = team ']'
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' ovs-port = bond ']'
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + declare -A master_interfaces
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + for conn in "${connections[@]}"
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1595]: ++ nmcli -g connection.slave-type connection show br-ex
Feb 23 16:32:44 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:44.534344374Z" level=warning msg="Deleting all containers under sandbox 7ef0a0e8143714f998828dd150fede6ba44e51b9926781a8714cc5fb0746afc8 since it could not be restored"
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + local slave_type=
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + local is_slave=false
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' '' = team ']'
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' '' = bond ']'
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + local master_interface
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + false
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1599]: ++ nmcli -g GENERAL.STATE conn show br-ex
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + local active_state=
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' '' == activated ']'
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + for i in {1..10}
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + echo 'Attempt 1 to bring up connection br-ex'
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: Attempt 1 to bring up connection br-ex
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + nmcli conn up br-ex
Feb 23 16:32:44 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:44.571445521Z" level=warning msg="Could not restore sandbox 47661104fee69cd1b9061426289cf385f5b6d7911621b551126dbbdb3ae0f1bb: failed to Statfs \"/var/run/netns/47731151-d6c2-4983-ad0a-4b809b7855d3\": no such file or directory"
Feb 23 16:32:44 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:44.590020076Z" level=warning msg="Deleting all containers under sandbox 47661104fee69cd1b9061426289cf385f5b6d7911621b551126dbbdb3ae0f1bb since it could not be restored"
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1603]: Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/2)
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + s=0
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + break
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' 0 -eq 0 ']'
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + echo 'Brought up connection br-ex successfully'
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: Brought up connection br-ex successfully
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + false
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + nmcli c mod br-ex connection.autoconnect yes
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + for conn in "${connections[@]}"
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1718]: ++ nmcli -g connection.slave-type connection show ovs-if-phys0
Feb 23 16:32:44 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:44.607077193Z" level=warning msg="Could not restore sandbox 324987fc5946df3d7849b3d1f0580a276e1156c9514ce8a7c0f7f829492b3e32: failed to Statfs \"/var/run/netns/f43e09c9-4659-423a-8351-05c8907bbf9e\": no such file or directory"
Feb 23 16:32:44 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:44.618619000Z" level=warning msg="Deleting all containers under sandbox 324987fc5946df3d7849b3d1f0580a276e1156c9514ce8a7c0f7f829492b3e32 since it could not be restored"
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + local slave_type=ovs-port
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + local is_slave=false
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' ovs-port = team ']'
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' ovs-port = bond ']'
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + local master_interface
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + false
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1728]: ++ nmcli -g GENERAL.STATE conn show ovs-if-phys0
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + local active_state=activating
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' activating == activated ']'
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + for i in {1..10}
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + echo 'Attempt 1 to bring up connection ovs-if-phys0'
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: Attempt 1 to bring up connection ovs-if-phys0
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + nmcli conn up ovs-if-phys0
Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1737]: Connection successfully activated (D-Bus active path: 
/org/freedesktop/NetworkManager/ActiveConnection/6) Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + s=0 Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + break Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' 0 -eq 0 ']' Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + echo 'Brought up connection ovs-if-phys0 successfully' Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: Brought up connection ovs-if-phys0 successfully Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + false Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + nmcli c mod ovs-if-phys0 connection.autoconnect yes Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + for conn in "${connections[@]}" Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1883]: ++ nmcli -g connection.slave-type connection show ovs-if-br-ex Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + local slave_type=ovs-port Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + local is_slave=false Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' ovs-port = team ']' Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' ovs-port = bond ']' Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + local master_interface Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + false Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1890]: ++ nmcli -g GENERAL.STATE conn show ovs-if-br-ex Feb 23 16:32:44 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:44.675720064Z" level=warning msg="Could not restore sandbox c180d7555eeaadb7b53631213d4f92f29e9df605b9662939a8ad7cac193a73bd: failed to Statfs \"/var/run/netns/354c29e9-705c-4f87-93ce-2b33c1ed2903\": no such file or directory" Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + local active_state= Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' '' == activated ']' Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + for i in {1..10} Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + echo 
'Attempt 1 to bring up connection ovs-if-br-ex' Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: Attempt 1 to bring up connection ovs-if-br-ex Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + nmcli conn up ovs-if-br-ex Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1903]: Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/7) Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + s=0 Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + break Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' 0 -eq 0 ']' Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + echo 'Brought up connection ovs-if-br-ex successfully' Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: Brought up connection ovs-if-br-ex successfully Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + false Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + nmcli c mod ovs-if-br-ex connection.autoconnect yes Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + try_to_bind_ipv6_address Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + retries=60 Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + [[ 60 -eq 0 ]] Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1961]: ++ jq -r 'first(.[] | select(.ifname=="br-ex") | .addr_info[] | select(.scope=="global") | .local)' Feb 23 16:32:44 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:44.688979700Z" level=warning msg="Deleting all containers under sandbox c180d7555eeaadb7b53631213d4f92f29e9df605b9662939a8ad7cac193a73bd since it could not be restored" Feb 23 16:32:44 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:44.707381830Z" level=warning msg="Could not restore sandbox 02a316932c00fb6bdfefc407d2c59b98b8c247a18e3653917b91a6127f62d7df: failed to Statfs \"/var/run/netns/464ecae5-d083-4bf5-84a3-af8c8873c68a\": no such file or directory" Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1960]: ++ ip -6 -j addr Feb 23 16:32:44 ip-10-0-136-68 
configure-ovs.sh[1269]: + ip= Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + [[ '' == '' ]] Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + echo 'No ipv6 ip to bind was found' Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: No ipv6 ip to bind was found Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + break Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + [[ 60 -eq 0 ]] Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + set_nm_conn_files Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' /etc/NetworkManager/system-connections '!=' /run/NetworkManager/system-connections ']' Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + update_nm_conn_conf_files br-ex phys0 Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + update_nm_conn_files_base /etc/NetworkManager/system-connections br-ex phys0 Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + base_path=/etc/NetworkManager/system-connections Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + bridge_name=br-ex Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + port_name=phys0 Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + ovs_port=ovs-port-br-ex Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + ovs_interface=ovs-if-br-ex Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + default_port_name=ovs-port-phys0 Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + bridge_interface_name=ovs-if-phys0 Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + MANAGED_NM_CONN_FILES=($(echo "${base_path}"/{"$bridge_name","$ovs_interface","$ovs_port","$bridge_interface_name","$default_port_name"}{,.nmconnection})) Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1971]: ++ echo /etc/NetworkManager/system-connections/br-ex /etc/NetworkManager/system-connections/br-ex.nmconnection /etc/NetworkManager/system-connections/ovs-if-br-ex /etc/NetworkManager/system-connections/ovs-if-br-ex.nmconnection 
/etc/NetworkManager/system-connections/ovs-port-br-ex /etc/NetworkManager/system-connections/ovs-port-br-ex.nmconnection /etc/NetworkManager/system-connections/ovs-if-phys0 /etc/NetworkManager/system-connections/ovs-if-phys0.nmconnection /etc/NetworkManager/system-connections/ovs-port-phys0 /etc/NetworkManager/system-connections/ovs-port-phys0.nmconnection Feb 23 16:32:44 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:44.717103260Z" level=warning msg="Deleting all containers under sandbox 02a316932c00fb6bdfefc407d2c59b98b8c247a18e3653917b91a6127f62d7df since it could not be restored" Feb 23 16:32:44 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:44.756300948Z" level=warning msg="Could not restore sandbox c4770f7d080720c7718c7bc24396b3024d3f2c4a814b8d4e347d5deb8d6959b6: failed to Statfs \"/var/run/netns/dfc40c07-9fb1-4de9-81fb-90a7dd4c4c33\": no such file or directory" Feb 23 16:32:44 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:44.770102582Z" level=warning msg="Deleting all containers under sandbox c4770f7d080720c7718c7bc24396b3024d3f2c4a814b8d4e347d5deb8d6959b6 since it could not be restored" Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + shopt -s nullglob Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + MANAGED_NM_CONN_FILES+=(${base_path}/*${MANAGED_NM_CONN_SUFFIX}.nmconnection ${base_path}/*${MANAGED_NM_CONN_SUFFIX}) Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + shopt -u nullglob Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + copy_nm_conn_files /run/NetworkManager/system-connections Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + local dst_path=/run/NetworkManager/system-connections Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + for src in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1973]: ++ dirname /etc/NetworkManager/system-connections/br-ex Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + src_path=/etc/NetworkManager/system-connections Feb 23 
16:32:44 ip-10-0-136-68 configure-ovs.sh[1974]: ++ basename /etc/NetworkManager/system-connections/br-ex Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + file=br-ex Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/br-ex ']' Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + echo 'Skipping br-ex since it does not exist at source' Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: Skipping br-ex since it does not exist at source Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + for src in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1975]: ++ dirname /etc/NetworkManager/system-connections/br-ex.nmconnection Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + src_path=/etc/NetworkManager/system-connections Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1976]: ++ basename /etc/NetworkManager/system-connections/br-ex.nmconnection Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + file=br-ex.nmconnection Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/br-ex.nmconnection ']' Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' '!' 
-f /run/NetworkManager/system-connections/br-ex.nmconnection ']' Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + echo 'Copying configuration br-ex.nmconnection' Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: Copying configuration br-ex.nmconnection Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + cp /etc/NetworkManager/system-connections/br-ex.nmconnection /run/NetworkManager/system-connections/br-ex.nmconnection Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + for src in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1978]: ++ dirname /etc/NetworkManager/system-connections/ovs-if-br-ex Feb 23 16:32:44 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:44.834204253Z" level=warning msg="Could not restore sandbox 2d4ebea480481fe0194b5656ac47e06df6cd5d12530000111fd05928c843d523: failed to Statfs \"/var/run/netns/9f568b48-2486-4809-94e3-236af56c4fde\": no such file or directory" Feb 23 16:32:44 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:44.846220302Z" level=warning msg="Deleting all containers under sandbox 2d4ebea480481fe0194b5656ac47e06df6cd5d12530000111fd05928c843d523 since it could not be restored" Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + src_path=/etc/NetworkManager/system-connections Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1979]: ++ basename /etc/NetworkManager/system-connections/ovs-if-br-ex Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + file=ovs-if-br-ex Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-br-ex ']' Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + echo 'Skipping ovs-if-br-ex since it does not exist at source' Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: Skipping ovs-if-br-ex since it does not exist at source Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + for src in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1980]: ++ dirname 
/etc/NetworkManager/system-connections/ovs-if-br-ex.nmconnection Feb 23 16:32:44 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:44.859734429Z" level=info msg="cleanup sandbox network" Feb 23 16:32:44 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:44.859772501Z" level=warning msg="Error encountered when checking whether cri-o should wipe containers: open /var/run/crio/version: no such file or directory" Feb 23 16:32:44 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:44.860588349Z" level=info msg="Successfully cleaned up network for pod c180d7555eeaadb7b53631213d4f92f29e9df605b9662939a8ad7cac193a73bd" Feb 23 16:32:44 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:44.860604925Z" level=info msg="cleanup sandbox network" Feb 23 16:32:44 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:44.860612514Z" level=info msg="Successfully cleaned up network for pod 324987fc5946df3d7849b3d1f0580a276e1156c9514ce8a7c0f7f829492b3e32" Feb 23 16:32:44 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:44.860621242Z" level=info msg="cleanup sandbox network" Feb 23 16:32:44 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:44.860629159Z" level=info msg="Successfully cleaned up network for pod 47661104fee69cd1b9061426289cf385f5b6d7911621b551126dbbdb3ae0f1bb" Feb 23 16:32:44 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:44.860638278Z" level=info msg="cleanup sandbox network" Feb 23 16:32:44 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:44.860652352Z" level=info msg="Successfully cleaned up network for pod 7ef0a0e8143714f998828dd150fede6ba44e51b9926781a8714cc5fb0746afc8" Feb 23 16:32:44 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:44.860678929Z" level=info msg="cleanup sandbox network" Feb 23 16:32:44 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:44.860639262Z" level=info msg="Serving metrics on :9537 via HTTP" Feb 23 16:32:44 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:44.861877943Z" level=info msg="Got pod network &{Name:dns-default-h4ftg 
Namespace:openshift-dns ID:ff0a102645f986a9a470ba4bb4d46e818f68b24e5af270b266a49a64d8462689 UID:c072a683-1031-40cb-a1bc-1dac71bca46b NetNS: Networks:[{Name:multus-cni-network Ifname:eth0}] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 16:32:44 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:44.861999991Z" level=info msg="Deleting pod openshift-dns_dns-default-h4ftg from CNI network \"multus-cni-network\" (type=multus)" Feb 23 16:32:44 ip-10-0-136-68 systemd[1]: Started Container Runtime Interface for OCI (CRI-O). Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + src_path=/etc/NetworkManager/system-connections Feb 23 16:32:44 ip-10-0-136-68 systemd[1]: Starting Kubernetes Kubelet... Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1982]: ++ basename /etc/NetworkManager/system-connections/ovs-if-br-ex.nmconnection Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + file=ovs-if-br-ex.nmconnection Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-br-ex.nmconnection ']' Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' '!' 
-f /run/NetworkManager/system-connections/ovs-if-br-ex.nmconnection ']' Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + echo 'Copying configuration ovs-if-br-ex.nmconnection' Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: Copying configuration ovs-if-br-ex.nmconnection Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + cp /etc/NetworkManager/system-connections/ovs-if-br-ex.nmconnection /run/NetworkManager/system-connections/ovs-if-br-ex.nmconnection Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + for src in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1985]: ++ dirname /etc/NetworkManager/system-connections/ovs-port-br-ex Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + src_path=/etc/NetworkManager/system-connections Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1986]: ++ basename /etc/NetworkManager/system-connections/ovs-port-br-ex Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + file=ovs-port-br-ex Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-br-ex ']' Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + echo 'Skipping ovs-port-br-ex since it does not exist at source' Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: Skipping ovs-port-br-ex since it does not exist at source Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + for src in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1987]: ++ dirname /etc/NetworkManager/system-connections/ovs-port-br-ex.nmconnection Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + src_path=/etc/NetworkManager/system-connections Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1989]: ++ basename /etc/NetworkManager/system-connections/ovs-port-br-ex.nmconnection Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + file=ovs-port-br-ex.nmconnection Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f 
/etc/NetworkManager/system-connections/ovs-port-br-ex.nmconnection ']' Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' '!' -f /run/NetworkManager/system-connections/ovs-port-br-ex.nmconnection ']' Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + echo 'Copying configuration ovs-port-br-ex.nmconnection' Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: Copying configuration ovs-port-br-ex.nmconnection Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + cp /etc/NetworkManager/system-connections/ovs-port-br-ex.nmconnection /run/NetworkManager/system-connections/ovs-port-br-ex.nmconnection Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1269]: + for src in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:44 ip-10-0-136-68 configure-ovs.sh[1992]: ++ dirname /etc/NetworkManager/system-connections/ovs-if-phys0 Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + src_path=/etc/NetworkManager/system-connections Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1993]: ++ basename /etc/NetworkManager/system-connections/ovs-if-phys0 Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + file=ovs-if-phys0 Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-phys0 ']' Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + echo 'Skipping ovs-if-phys0 since it does not exist at source' Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: Skipping ovs-if-phys0 since it does not exist at source Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + for src in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1995]: ++ dirname /etc/NetworkManager/system-connections/ovs-if-phys0.nmconnection Feb 23 16:32:45 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00001|ofproto_dpif_xlate(handler19)|WARN|Invalid Geneve tunnel metadata on bridge br-int while processing 
tcp,in_port=7,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:03,nw_src=10.128.2.21,nw_dst=10.129.2.3,nw_tos=0,nw_ecn=0,nw_ttl=63,nw_frag=no,tp_src=40876,tp_dst=8443,tcp_flags=syn Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + src_path=/etc/NetworkManager/system-connections Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1997]: ++ basename /etc/NetworkManager/system-connections/ovs-if-phys0.nmconnection Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + file=ovs-if-phys0.nmconnection Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-phys0.nmconnection ']' Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' '!' -f /run/NetworkManager/system-connections/ovs-if-phys0.nmconnection ']' Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + echo 'Copying configuration ovs-if-phys0.nmconnection' Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: Copying configuration ovs-if-phys0.nmconnection Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + cp /etc/NetworkManager/system-connections/ovs-if-phys0.nmconnection /run/NetworkManager/system-connections/ovs-if-phys0.nmconnection Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + for src in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1999]: ++ dirname /etc/NetworkManager/system-connections/ovs-port-phys0 Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + src_path=/etc/NetworkManager/system-connections Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2001]: ++ basename /etc/NetworkManager/system-connections/ovs-port-phys0 Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + file=ovs-port-phys0 Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-phys0 ']' Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + echo 'Skipping ovs-port-phys0 since it does not exist at source' Feb 23 16:32:45 ip-10-0-136-68 
configure-ovs.sh[1269]: Skipping ovs-port-phys0 since it does not exist at source Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + for src in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2002]: ++ dirname /etc/NetworkManager/system-connections/ovs-port-phys0.nmconnection Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + src_path=/etc/NetworkManager/system-connections Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2003]: ++ basename /etc/NetworkManager/system-connections/ovs-port-phys0.nmconnection Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + file=ovs-port-phys0.nmconnection Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-phys0.nmconnection ']' Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' '!' -f /run/NetworkManager/system-connections/ovs-port-phys0.nmconnection ']' Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + echo 'Copying configuration ovs-port-phys0.nmconnection' Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: Copying configuration ovs-port-phys0.nmconnection Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + cp /etc/NetworkManager/system-connections/ovs-port-phys0.nmconnection /run/NetworkManager/system-connections/ovs-port-phys0.nmconnection Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + rm_nm_conn_files Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/br-ex ']' Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/br-ex.nmconnection ']' Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + rm -f /etc/NetworkManager/system-connections/br-ex.nmconnection Feb 23 16:32:45 ip-10-0-136-68 
configure-ovs.sh[1269]: + echo 'Removed nmconnection file /etc/NetworkManager/system-connections/br-ex.nmconnection' Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: Removed nmconnection file /etc/NetworkManager/system-connections/br-ex.nmconnection Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + nm_config_changed=1 Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-br-ex ']' Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-br-ex.nmconnection ']' Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + rm -f /etc/NetworkManager/system-connections/ovs-if-br-ex.nmconnection Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + echo 'Removed nmconnection file /etc/NetworkManager/system-connections/ovs-if-br-ex.nmconnection' Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: Removed nmconnection file /etc/NetworkManager/system-connections/ovs-if-br-ex.nmconnection Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + nm_config_changed=1 Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-br-ex ']' Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-br-ex.nmconnection ']' Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + rm -f /etc/NetworkManager/system-connections/ovs-port-br-ex.nmconnection Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + echo 'Removed nmconnection file 
/etc/NetworkManager/system-connections/ovs-port-br-ex.nmconnection' Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: Removed nmconnection file /etc/NetworkManager/system-connections/ovs-port-br-ex.nmconnection Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + nm_config_changed=1 Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-phys0 ']' Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-phys0.nmconnection ']' Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + rm -f /etc/NetworkManager/system-connections/ovs-if-phys0.nmconnection Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + echo 'Removed nmconnection file /etc/NetworkManager/system-connections/ovs-if-phys0.nmconnection' Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: Removed nmconnection file /etc/NetworkManager/system-connections/ovs-if-phys0.nmconnection Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + nm_config_changed=1 Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-phys0 ']' Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-phys0.nmconnection ']' Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + rm -f /etc/NetworkManager/system-connections/ovs-port-phys0.nmconnection Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + echo 'Removed nmconnection file /etc/NetworkManager/system-connections/ovs-port-phys0.nmconnection' Feb 23 
16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: Removed nmconnection file /etc/NetworkManager/system-connections/ovs-port-phys0.nmconnection Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + nm_config_changed=1 Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + update_nm_conn_conf_files br-ex1 phys1 Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + update_nm_conn_files_base /etc/NetworkManager/system-connections br-ex1 phys1 Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + base_path=/etc/NetworkManager/system-connections Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + bridge_name=br-ex1 Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + port_name=phys1 Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + ovs_port=ovs-port-br-ex1 Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + ovs_interface=ovs-if-br-ex1 Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + default_port_name=ovs-port-phys1 Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + bridge_interface_name=ovs-if-phys1 Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + MANAGED_NM_CONN_FILES=($(echo "${base_path}"/{"$bridge_name","$ovs_interface","$ovs_port","$bridge_interface_name","$default_port_name"}{,.nmconnection})) Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2011]: ++ echo /etc/NetworkManager/system-connections/br-ex1 /etc/NetworkManager/system-connections/br-ex1.nmconnection /etc/NetworkManager/system-connections/ovs-if-br-ex1 /etc/NetworkManager/system-connections/ovs-if-br-ex1.nmconnection /etc/NetworkManager/system-connections/ovs-port-br-ex1 /etc/NetworkManager/system-connections/ovs-port-br-ex1.nmconnection /etc/NetworkManager/system-connections/ovs-if-phys1 /etc/NetworkManager/system-connections/ovs-if-phys1.nmconnection /etc/NetworkManager/system-connections/ovs-port-phys1 /etc/NetworkManager/system-connections/ovs-port-phys1.nmconnection Feb 23 16:32:45 ip-10-0-136-68 ovs-vswitchd[1105]: 
ovs|00001|ofproto_dpif_xlate(handler16)|WARN|Invalid Geneve tunnel metadata on bridge br-int while processing tcp,in_port=3,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:06,nw_src=100.64.0.3,nw_dst=10.129.2.6,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=49741,tp_dst=5353,tcp_flags=syn Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + shopt -s nullglob Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + MANAGED_NM_CONN_FILES+=(${base_path}/*${MANAGED_NM_CONN_SUFFIX}.nmconnection ${base_path}/*${MANAGED_NM_CONN_SUFFIX}) Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + shopt -u nullglob Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + copy_nm_conn_files /run/NetworkManager/system-connections Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + local dst_path=/run/NetworkManager/system-connections Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + for src in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:45 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00001|ofproto_dpif_xlate(handler15)|WARN|Invalid Geneve tunnel metadata on bridge br-int while processing tcp,in_port=2,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:03,nw_src=10.131.0.14,nw_dst=10.129.2.3,nw_tos=0,nw_ecn=0,nw_ttl=63,nw_frag=no,tp_src=40114,tp_dst=8443,tcp_flags=syn Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2012]: ++ dirname /etc/NetworkManager/system-connections/br-ex1 Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + src_path=/etc/NetworkManager/system-connections Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2016]: ++ basename /etc/NetworkManager/system-connections/br-ex1 Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + file=br-ex1 Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/br-ex1 ']' Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + echo 'Skipping br-ex1 since it does not exist at source' Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: Skipping 
br-ex1 since it does not exist at source Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + for src in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2017]: ++ dirname /etc/NetworkManager/system-connections/br-ex1.nmconnection Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + src_path=/etc/NetworkManager/system-connections Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2018]: ++ basename /etc/NetworkManager/system-connections/br-ex1.nmconnection Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + file=br-ex1.nmconnection Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/br-ex1.nmconnection ']' Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + echo 'Skipping br-ex1.nmconnection since it does not exist at source' Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: Skipping br-ex1.nmconnection since it does not exist at source Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + for src in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2019]: ++ dirname /etc/NetworkManager/system-connections/ovs-if-br-ex1 Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + src_path=/etc/NetworkManager/system-connections Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2020]: ++ basename /etc/NetworkManager/system-connections/ovs-if-br-ex1 Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + file=ovs-if-br-ex1 Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-br-ex1 ']' Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + echo 'Skipping ovs-if-br-ex1 since it does not exist at source' Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: Skipping ovs-if-br-ex1 since it does not exist at source Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + for src in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2021]: ++ dirname 
/etc/NetworkManager/system-connections/ovs-if-br-ex1.nmconnection Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + src_path=/etc/NetworkManager/system-connections Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2022]: ++ basename /etc/NetworkManager/system-connections/ovs-if-br-ex1.nmconnection Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + file=ovs-if-br-ex1.nmconnection Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-br-ex1.nmconnection ']' Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + echo 'Skipping ovs-if-br-ex1.nmconnection since it does not exist at source' Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: Skipping ovs-if-br-ex1.nmconnection since it does not exist at source Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + for src in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2023]: ++ dirname /etc/NetworkManager/system-connections/ovs-port-br-ex1 Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + src_path=/etc/NetworkManager/system-connections Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2024]: ++ basename /etc/NetworkManager/system-connections/ovs-port-br-ex1 Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + file=ovs-port-br-ex1 Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-br-ex1 ']' Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + echo 'Skipping ovs-port-br-ex1 since it does not exist at source' Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: Skipping ovs-port-br-ex1 since it does not exist at source Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + for src in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2025]: ++ dirname /etc/NetworkManager/system-connections/ovs-port-br-ex1.nmconnection Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + 
src_path=/etc/NetworkManager/system-connections Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2026]: ++ basename /etc/NetworkManager/system-connections/ovs-port-br-ex1.nmconnection Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + file=ovs-port-br-ex1.nmconnection Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-br-ex1.nmconnection ']' Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + echo 'Skipping ovs-port-br-ex1.nmconnection since it does not exist at source' Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: Skipping ovs-port-br-ex1.nmconnection since it does not exist at source Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + for src in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2027]: ++ dirname /etc/NetworkManager/system-connections/ovs-if-phys1 Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + src_path=/etc/NetworkManager/system-connections Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2028]: ++ basename /etc/NetworkManager/system-connections/ovs-if-phys1 Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + file=ovs-if-phys1 Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-phys1 ']' Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + echo 'Skipping ovs-if-phys1 since it does not exist at source' Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: Skipping ovs-if-phys1 since it does not exist at source Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + for src in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2029]: ++ dirname /etc/NetworkManager/system-connections/ovs-if-phys1.nmconnection Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + src_path=/etc/NetworkManager/system-connections Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2030]: ++ basename 
/etc/NetworkManager/system-connections/ovs-if-phys1.nmconnection Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + file=ovs-if-phys1.nmconnection Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-phys1.nmconnection ']' Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + echo 'Skipping ovs-if-phys1.nmconnection since it does not exist at source' Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: Skipping ovs-if-phys1.nmconnection since it does not exist at source Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + for src in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2031]: ++ dirname /etc/NetworkManager/system-connections/ovs-port-phys1 Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + src_path=/etc/NetworkManager/system-connections Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2032]: ++ basename /etc/NetworkManager/system-connections/ovs-port-phys1 Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + file=ovs-port-phys1 Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-phys1 ']' Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + echo 'Skipping ovs-port-phys1 since it does not exist at source' Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: Skipping ovs-port-phys1 since it does not exist at source Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + for src in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2033]: ++ dirname /etc/NetworkManager/system-connections/ovs-port-phys1.nmconnection Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote' Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. 
Will be removed in a future version. Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: Flag --cloud-provider has been deprecated, will be removed in 1.25 or later, in favor of removing cloud provider code from Kubelet. Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: Flag --provider-id has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.586730 2112 server.go:200] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote' Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. 
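The configure-ovs.sh trace earlier in this log builds `MANAGED_NM_CONN_FILES` with a single bash brace expansion that pairs every connection name with both a bare filename and a `.nmconnection` filename. A minimal standalone sketch of that same pattern (variable names copied from the trace; the paths are illustrative, and bash is required since POSIX sh does not perform brace expansion):

```shell
# Recreate the MANAGED_NM_CONN_FILES brace expansion seen in the
# configure-ovs.sh trace above. Requires bash (arrays + brace expansion).
base_path=/etc/NetworkManager/system-connections
bridge_name=br-ex1
ovs_interface=ovs-if-br-ex1
ovs_port=ovs-port-br-ex1
bridge_interface_name=ovs-if-phys1
default_port_name=ovs-port-phys1

# The outer brace list iterates first, so each name is emitted twice:
# once bare, once with the .nmconnection suffix appended.
MANAGED_NM_CONN_FILES=($(echo "${base_path}"/{"$bridge_name","$ovs_interface","$ovs_port","$bridge_interface_name","$default_port_name"}{,.nmconnection}))

printf '%s\n' "${MANAGED_NM_CONN_FILES[@]}"
```

This matches the ten-path list echoed by process 2011 in the trace: names expand left to right, suffixes innermost, so `br-ex1` is immediately followed by `br-ex1.nmconnection`, and so on.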
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: Flag --cloud-provider has been deprecated, will be removed in 1.25 or later, in favor of removing cloud provider code from Kubelet. Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + src_path=/etc/NetworkManager/system-connections Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: Flag --provider-id has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
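The kubelet lines above repeatedly warn that command-line flags such as `--cloud-provider` and `--system-reserved` are deprecated and should move into the file named by `--config`. As a rough triage sketch (the sample strings are trimmed copies of messages from this log, and the `sed` pattern is my own assumption, not an official tool), the affected flag names can be pulled out of such warnings like this:

```shell
# Sample kubelet deprecation warnings, trimmed from the journal excerpt above.
warnings='Flag --container-runtime has been deprecated, will be removed in 1.27
Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead.
Flag --cloud-provider has been deprecated, will be removed in 1.25 or later'

# Extract just the flag names from the deprecation messages.
flags=$(printf '%s\n' "$warnings" \
  | sed -n 's/^Flag \(--[a-z-]*\) has been deprecated.*/\1/p')

printf '%s\n' "$flags"
```

On a live node the same filter could be fed from `journalctl -u kubelet` instead of the embedded sample; each extracted flag is a candidate for migration into the KubeletConfiguration file.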
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.590953 2112 flags.go:64] FLAG: --add-dir-header="false" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.590960 2112 flags.go:64] FLAG: --address="0.0.0.0" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.590965 2112 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.590971 2112 flags.go:64] FLAG: --alsologtostderr="false" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.590974 2112 flags.go:64] FLAG: --anonymous-auth="true" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.590979 2112 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.590983 2112 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.590987 2112 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.590994 2112 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591000 2112 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591003 2112 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591006 2112 flags.go:64] FLAG: --azure-container-registry-config="" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591009 2112 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591013 2112 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591016 2112 flags.go:64] FLAG: 
--cert-dir="/var/lib/kubelet/pki" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591020 2112 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2034]: ++ basename /etc/NetworkManager/system-connections/ovs-port-phys1.nmconnection Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591023 2112 flags.go:64] FLAG: --cgroup-root="" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591025 2112 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591030 2112 flags.go:64] FLAG: --client-ca-file="" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591033 2112 flags.go:64] FLAG: --cloud-config="" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591035 2112 flags.go:64] FLAG: --cloud-provider="aws" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591037 2112 flags.go:64] FLAG: --cluster-dns="[]" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591042 2112 flags.go:64] FLAG: --cluster-domain="" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591045 2112 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591049 2112 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591053 2112 flags.go:64] FLAG: --container-log-max-files="5" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591057 2112 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591060 2112 flags.go:64] FLAG: --container-runtime="remote" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591063 2112 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 23 16:32:45 ip-10-0-136-68 
kubenswrapper[2112]: I0223 16:32:45.591066 2112 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591069 2112 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591073 2112 flags.go:64] FLAG: --contention-profiling="false" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591076 2112 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591079 2112 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591082 2112 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591084 2112 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591088 2112 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591092 2112 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591096 2112 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591099 2112 flags.go:64] FLAG: --enable-load-reader="false" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591101 2112 flags.go:64] FLAG: --enable-server="true" Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + file=ovs-port-phys1.nmconnection Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-phys1.nmconnection ']' Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + echo 'Skipping ovs-port-phys1.nmconnection since it does not exist at source' Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: Skipping ovs-port-phys1.nmconnection since it does 
not exist at source Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + rm_nm_conn_files Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/br-ex1 ']' Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/br-ex1.nmconnection ']' Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-br-ex1 ']' Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-br-ex1.nmconnection ']' Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-br-ex1 ']' Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-br-ex1.nmconnection ']' Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-phys1 ']' Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-phys1.nmconnection ']' Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 
16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-phys1 ']' Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-phys1.nmconnection ']' Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + nmcli connection reload Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + handle_exit Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + e=0 Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + '[' 0 -eq 0 ']' Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + print_state Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + echo 'Current device, connection, interface and routing state:' Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: Current device, connection, interface and routing state: Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591105 2112 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591110 2112 flags.go:64] FLAG: --event-burst="10" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591113 2112 flags.go:64] FLAG: --event-qps="5" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591117 2112 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591120 2112 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591122 2112 flags.go:64] FLAG: --eviction-hard="imagefs.available<15%,memory.available<100Mi,nodefs.available<10%,nodefs.inodesFree<5%" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591131 2112 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591134 2112 
flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591138 2112 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591140 2112 flags.go:64] FLAG: --eviction-soft="" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591143 2112 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591146 2112 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591149 2112 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591152 2112 flags.go:64] FLAG: --experimental-mounter-path="" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591154 2112 flags.go:64] FLAG: --fail-swap-on="true" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591159 2112 flags.go:64] FLAG: --feature-gates="" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591166 2112 flags.go:64] FLAG: --file-check-frequency="20s" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591169 2112 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591172 2112 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591175 2112 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591178 2112 flags.go:64] FLAG: --healthz-port="10248" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591180 2112 flags.go:64] FLAG: --help="false" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591183 2112 flags.go:64] FLAG: 
--hostname-override="ip-10-0-136-68.us-west-2.compute.internal" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591186 2112 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2039]: + nmcli -g all device Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591189 2112 flags.go:64] FLAG: --http-check-frequency="20s" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591195 2112 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591197 2112 flags.go:64] FLAG: --image-credential-provider-config="" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591200 2112 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591203 2112 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591206 2112 flags.go:64] FLAG: --image-service-endpoint="" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591209 2112 flags.go:64] FLAG: --iptables-drop-bit="15" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591212 2112 flags.go:64] FLAG: --iptables-masquerade-bit="14" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591215 2112 flags.go:64] FLAG: --keep-terminated-pod-volumes="false" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591218 2112 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591223 2112 flags.go:64] FLAG: --kube-api-burst="10" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591226 2112 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591229 2112 flags.go:64] FLAG: --kube-api-qps="5" Feb 23 16:32:45 
ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591231 2112 flags.go:64] FLAG: --kube-reserved="" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591234 2112 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591237 2112 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591240 2112 flags.go:64] FLAG: --kubelet-cgroups="" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591244 2112 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591247 2112 flags.go:64] FLAG: --lock-file="" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591250 2112 flags.go:64] FLAG: --log-backtrace-at=":0" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591253 2112 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591256 2112 flags.go:64] FLAG: --log-dir="" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591259 2112 flags.go:64] FLAG: --log-file="" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591262 2112 flags.go:64] FLAG: --log-file-max-size="1800" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591264 2112 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591267 2112 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2040]: + grep -v unmanaged Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2040]: br-ex:ovs-interface:connected:full:full:/org/freedesktop/NetworkManager/Devices/23:ovs-if-br-ex:942f5a8b-5fc3-493a-b9bc-9ab71ef0924c:/org/freedesktop/NetworkManager/ActiveConnection/7 Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2040]: 
ens5:ethernet:connected:limited:limited:/org/freedesktop/NetworkManager/Devices/4:ovs-if-phys0:4229caea-b1fd-4522-97e4-df156a22a48d:/org/freedesktop/NetworkManager/ActiveConnection/6
Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2040]: br-ex:ovs-bridge:connected:limited:limited:/org/freedesktop/NetworkManager/Devices/20:br-ex:020c436d-b861-4028-95bc-c55069bb3929:/org/freedesktop/NetworkManager/ActiveConnection/2
Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2040]: br-ex:ovs-port:connected:limited:limited:/org/freedesktop/NetworkManager/Devices/22:ovs-port-br-ex:14d1a5a7-ae47-46f4-912b-c05ba0de1b74:/org/freedesktop/NetworkManager/ActiveConnection/3
Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2040]: ens5:ovs-port:connected:limited:limited:/org/freedesktop/NetworkManager/Devices/21:ovs-port-phys0:58f1c73f-4bc8-4311-86a8-c37ad9528b56:/org/freedesktop/NetworkManager/ActiveConnection/4
Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2040]: patch-br-int-to-br-ex_ip-10-0-136-68.us-west-2.compute.internal:ovs-interface:connected:limited:limited:/org/freedesktop/NetworkManager/Devices/7:::
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591272    2112 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591275    2112 flags.go:64] FLAG: --logging-format="text"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591278    2112 flags.go:64] FLAG: --logtostderr="true"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591280    2112 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591286    2112 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591289    2112 flags.go:64] FLAG: --manifest-url=""
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591292    2112 flags.go:64] FLAG: --manifest-url-header=""
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591297    2112 flags.go:64] FLAG: --master-service-namespace="default"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591299    2112 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591302    2112 flags.go:64] FLAG: --max-open-files="1000000"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591306    2112 flags.go:64] FLAG: --max-pods="110"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591309    2112 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591312    2112 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591315    2112 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591317    2112 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591320    2112 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591323    2112 flags.go:64] FLAG: --node-ip=""
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591326    2112 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhcos"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591331    2112 flags.go:64] FLAG: --node-status-max-images="50"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591334    2112 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591337    2112 flags.go:64] FLAG: --one-output="false"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591340    2112 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591343    2112 flags.go:64] FLAG: --pod-cidr=""
Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + nmcli -g all connection
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591350    2112 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be64c744b92d1b1df463d84d8277c38069f3ce4e8e95ce84d4a6ffac6dc53710"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591355    2112 flags.go:64] FLAG: --pod-manifest-path=""
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591357    2112 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591360    2112 flags.go:64] FLAG: --pods-per-core="0"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591362    2112 flags.go:64] FLAG: --port="10250"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591365    2112 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591368    2112 flags.go:64] FLAG: --provider-id="aws:///us-west-2a/i-09b04ed55ff55b4f7"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591371    2112 flags.go:64] FLAG: --qos-reserved=""
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591374    2112 flags.go:64] FLAG: --read-only-port="10255"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591376    2112 flags.go:64] FLAG: --register-node="true"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591379    2112 flags.go:64] FLAG: --register-schedulable="true"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591382    2112 flags.go:64] FLAG: --register-with-taints=""
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591385    2112 flags.go:64] FLAG: --registry-burst="10"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591389    2112 flags.go:64] FLAG: --registry-qps="5"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591392    2112 flags.go:64] FLAG: --reserved-cpus=""
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591418    2112 flags.go:64] FLAG: --reserved-memory=""
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591422    2112 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591426    2112 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591430    2112 flags.go:64] FLAG: --rotate-certificates="false"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591432    2112 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591436    2112 flags.go:64] FLAG: --runonce="false"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591439    2112 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591442    2112 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591445    2112 flags.go:64] FLAG: --seccomp-default="false"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591448    2112 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2044]: ovs-if-br-ex:942f5a8b-5fc3-493a-b9bc-9ab71ef0924c:ovs-interface:1677169963:Thu Feb 23 16\:32\:43 2023:yes:0:no:/org/freedesktop/NetworkManager/Settings/6:yes:br-ex:activated:/org/freedesktop/NetworkManager/ActiveConnection/7:ovs-port:/run/NetworkManager/system-connections/ovs-if-br-ex.nmconnection
Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2044]: br-ex:020c436d-b861-4028-95bc-c55069bb3929:ovs-bridge:1677169962:Thu Feb 23 16\:32\:42 2023:yes:0:no:/org/freedesktop/NetworkManager/Settings/2:yes:br-ex:activated:/org/freedesktop/NetworkManager/ActiveConnection/2::/run/NetworkManager/system-connections/br-ex.nmconnection
Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2044]: ovs-if-phys0:4229caea-b1fd-4522-97e4-df156a22a48d:802-3-ethernet:1677169963:Thu Feb 23 16\:32\:43 2023:yes:100:no:/org/freedesktop/NetworkManager/Settings/5:yes:ens5:activated:/org/freedesktop/NetworkManager/ActiveConnection/6:ovs-port:/run/NetworkManager/system-connections/ovs-if-phys0.nmconnection
Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2044]: ovs-port-br-ex:14d1a5a7-ae47-46f4-912b-c05ba0de1b74:ovs-port:1677169962:Thu Feb 23 16\:32\:42 2023:no:0:no:/org/freedesktop/NetworkManager/Settings/4:yes:br-ex:activated:/org/freedesktop/NetworkManager/ActiveConnection/3:ovs-bridge:/run/NetworkManager/system-connections/ovs-port-br-ex.nmconnection
Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2044]: ovs-port-phys0:58f1c73f-4bc8-4311-86a8-c37ad9528b56:ovs-port:1677169962:Thu Feb 23 16\:32\:42 2023:no:0:no:/org/freedesktop/NetworkManager/Settings/3:yes:ens5:activated:/org/freedesktop/NetworkManager/ActiveConnection/4:ovs-bridge:/run/NetworkManager/system-connections/ovs-port-phys0.nmconnection
Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2044]: Wired connection 1:eb99b8bd-8e1f-3f41-845b-962703e428f7:802-3-ethernet:1677169962:Thu Feb 23 16\:32\:42 2023:yes:-999:no:/org/freedesktop/NetworkManager/Settings/1:no:::::/run/NetworkManager/system-connections/Wired connection 1.nmconnection
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591450    2112 flags.go:64] FLAG: --skip-headers="false"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591453    2112 flags.go:64] FLAG: --skip-log-headers="false"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591455    2112 flags.go:64] FLAG: --stderrthreshold="2"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591458    2112 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591463    2112 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591466    2112 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591469    2112 flags.go:64] FLAG: --storage-driver-password="root"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591474    2112 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591478    2112 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591480    2112 flags.go:64] FLAG: --storage-driver-user="root"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591483    2112 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591486    2112 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591489    2112 flags.go:64] FLAG: --system-cgroups=""
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591491    2112 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591499    2112 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591501    2112 flags.go:64] FLAG: --tls-cert-file=""
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591504    2112 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591509    2112 flags.go:64] FLAG: --tls-min-version=""
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591511    2112 flags.go:64] FLAG: --tls-private-key-file=""
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591514    2112 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591518    2112 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591521    2112 flags.go:64] FLAG: --v="2"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591524    2112 flags.go:64] FLAG: --version="false"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591530    2112 flags.go:64] FLAG: --vmodule=""
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591534    2112 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + ip -d address show
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591538    2112 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.591611    2112 feature_gate.go:246] feature gates: &{map[APIPriorityAndFairness:true CSIMigrationAzureFile:false CSIMigrationvSphere:false DownwardAPIHugePages:true RotateKubeletServerCertificate:true]}
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.595674    2112 server.go:413] "Kubelet version" kubeletVersion="v1.25.4+a34b9e9"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.595698    2112 server.go:415] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.595800    2112 feature_gate.go:246] feature gates: &{map[APIPriorityAndFairness:true CSIMigrationAzureFile:false CSIMigrationvSphere:false DownwardAPIHugePages:true RotateKubeletServerCertificate:true]}
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.595887    2112 feature_gate.go:246] feature gates: &{map[APIPriorityAndFairness:true CSIMigrationAzureFile:false CSIMigrationvSphere:false DownwardAPIHugePages:true RotateKubeletServerCertificate:true]}
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: W0223 16:32:45.595982    2112 plugins.go:132] WARNING: aws built-in cloud provider is now deprecated. The AWS provider is deprecated and will be removed in a future release. Please use https://github.com/kubernetes/cloud-provider-aws
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.598750    2112 aws.go:1279] Building AWS cloudprovider
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.598831    2112 aws.go:1239] Zone not specified in configuration file; querying AWS metadata service
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.760917    2112 tags.go:80] AWS cloud filtering on ClusterID: mnguyen-rt-wnslw
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.760938    2112 server.go:555] "Successfully initialized cloud provider" cloudProvider="aws" cloudConfigFile=""
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.760949    2112 server.go:993] "Cloud provider determined current node" nodeName="ip-10-0-136-68.us-west-2.compute.internal"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.760955    2112 server.go:825] "Client rotation is on, will bootstrap in background"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.765639    2112 bootstrap.go:84] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2048]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2048]: link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0 minmtu 0 maxmtu 0 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2048]: inet 127.0.0.1/8 scope host lo
Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2048]: valid_lft forever preferred_lft forever
Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2048]: inet6 ::1/128 scope host
Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2048]: valid_lft forever preferred_lft forever
Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2048]: 2: ens5: mtu 9001 qdisc mq master ovs-system state UP group default qlen 1000
Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2048]: link/ether 02:ea:92:f9:d3:f3 brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 128 maxmtu 9216
Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2048]: openvswitch_slave numtxqueues 4 numrxqueues 4 gso_max_size 65536 gso_max_segs 65535
Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2048]: 3: ovs-system: mtu 1500 qdisc noop state DOWN group default qlen 1000
Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2048]: link/ether b2:42:31:ac:59:9d brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65535
Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2048]: openvswitch numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2048]: 4: genev_sys_6081: mtu 65000 qdisc noqueue master ovs-system state UNKNOWN group default qlen 1000
Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2048]: link/ether 42:2c:b6:47:64:0c brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65465
Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2048]: geneve external id 0 ttl auto dstport 6081 udp6zerocsumrx
Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2048]: openvswitch_slave numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2048]: inet6 fe80::402c:b6ff:fe47:640c/64 scope link
Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2048]: valid_lft forever preferred_lft forever
Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2048]: 5: br-int: mtu 8901 qdisc noop state DOWN group default qlen 1000
Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2048]: link/ether 1e:70:f2:fd:64:95 brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65535
Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2048]: openvswitch numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2048]: 6: ovn-k8s-mp0: mtu 8901 qdisc noop state DOWN group default qlen 1000
Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2048]: link/ether 2e:5d:2b:01:25:48 brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65535
Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2048]: openvswitch numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.765720    2112 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.765913    2112 server.go:882] "Starting client certificate rotation"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.765946    2112 certificate_manager.go:270] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.767773    2112 certificate_manager.go:270] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2023-02-24 15:23:10 +0000 UTC, rotation deadline is 2023-02-24 11:21:25.86997725 +0000 UTC
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.767792    2112 certificate_manager.go:270] kubernetes.io/kube-apiserver-client-kubelet: Waiting 18h48m40.102188682s for next certificate rotation
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.787032    2112 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.787156    2112 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.790734    2112 manager.go:163] cAdvisor running in container: "/system.slice/kubelet.service"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.792735    2112 fs.go:133] Filesystem UUIDs: map[54e5ab65-ff73-4a26-8c44-2a9765abf45f:/dev/nvme0n1p3 A94B-67F7:/dev/nvme0n1p2 c83680a9-dcc4-4413-a0a5-4681b35c650a:/dev/nvme0n1p4]
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.792748    2112 fs.go:134] Filesystem partitions: map[/dev/nvme0n1p3:{mountpoint:/boot major:259 minor:3 fsType:ext4 blockSize:0} /dev/nvme0n1p4:{mountpoint:/var major:259 minor:4 fsType:xfs blockSize:0} /dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /sys/fs/cgroup:{mountpoint:/sys/fs/cgroup major:0 minor:25 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:41 fsType:tmpfs blockSize:0}]
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.792779    2112 nvidia.go:54] NVIDIA GPU metrics disabled
Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2048]: 7: br-ex: mtu 9001 qdisc noqueue state UNKNOWN group default qlen 1000
Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2048]: link/ether 02:ea:92:f9:d3:f3 brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65535
Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2048]: openvswitch numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2048]: inet 10.0.136.68/19 brd 10.0.159.255 scope global dynamic noprefixroute br-ex
Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2048]: valid_lft 3600sec preferred_lft 3600sec
Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2048]: inet6 fe80::5ac9:d06:d71:ea0a/64 scope link tentative noprefixroute
Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2048]: valid_lft forever preferred_lft forever
Feb 23 16:32:45 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:45.886987740Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be64c744b92d1b1df463d84d8277c38069f3ce4e8e95ce84d4a6ffac6dc53710" id=2e3a1b5d-4954-44c6-925b-782fdf5c34c8 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:45 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:45.890436372Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:884319081b9650108dd2a638921522ef3df145e924d566325c975ca28709af4c,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be64c744b92d1b1df463d84d8277c38069f3ce4e8e95ce84d4a6ffac6dc53710],Size_:350038335,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=2e3a1b5d-4954-44c6-925b-782fdf5c34c8 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + ip route show
Feb 23 16:32:45 ip-10-0-136-68 systemd[1]: Started Kubernetes Kubelet.
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.807710    2112 manager.go:212] Machine: {Timestamp:2023-02-23 16:32:45.807464793 +0000 UTC m=+0.764629273 CPUVendorID:GenuineIntel NumCores:4 NumPhysicalCores:2 NumSockets:1 CpuFrequency:3500000 MemoryCapacity:16514498560 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:ec2d456b0a3e28d0eb2f198315e90643 SystemUUID:ec2d456b-0a3e-28d0-eb2f-198315e90643 BootID:90ff0a1b-14a9-469d-904d-d0496e06da13 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:8257249280 Type:vfs Inodes:2015930 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:8257249280 Type:vfs Inodes:2015930 HasInodes:true} {Device:/sys/fs/cgroup DeviceMajor:0 DeviceMinor:25 Capacity:8257249280 Type:vfs Inodes:2015930 HasInodes:true} {Device:/dev/nvme0n1p4 DeviceMajor:259 DeviceMinor:4 Capacity:128300593152 Type:vfs Inodes:62651840 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:41 Capacity:8257249280 Type:vfs Inodes:2015930 HasInodes:true} {Device:/dev/nvme0n1p3 DeviceMajor:259 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[259:0:{Name:nvme0n1 Major:259 Minor:0 Size:128849018880 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:02:ea:92:f9:d3:f3 Speed:0 Mtu:9001} {Name:br-int MacAddress:1e:70:f2:fd:64:95 Speed:0 Mtu:8901} {Name:ens5 MacAddress:02:ea:92:f9:d3:f3 Speed:0 Mtu:9001} {Name:genev_sys_6081 MacAddress:42:2c:b6:47:64:0c Speed:0 Mtu:65000} {Name:ovn-k8s-mp0 MacAddress:2e:5d:2b:01:25:48 Speed:0 Mtu:8901} {Name:ovs-system MacAddress:b2:42:31:ac:59:9d Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:16514498560 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0 2] Caches:[{Id:0 Size:49152 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:0} {Id:1 Threads:[1 3] Caches:[{Id:1 Size:49152 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:0}] Caches:[{Id:0 Size:56623104 Type:Unified Level:3}]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.807816    2112 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.807962    2112 manager.go:228] Version: {KernelVersion:4.18.0-372.43.1.rt7.200.el8_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 412.86.202302170236-0 (Ootpa) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.809508    2112 container_manager_linux.go:262] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.809566    2112 container_manager_linux.go:267] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName:/system.slice/crio.service SystemCgroupsName:/system.slice KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[cpu:{i:{value:500 scale:-3} d:{Dec:} s:500m Format:DecimalSI} ephemeral-storage:{i:{value:1073741824 scale:0} d:{Dec:} s:1Gi Format:BinarySI} memory:{i:{value:1073741824 scale:0} d:{Dec:} s:1Gi Format:BinarySI}] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:4096 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.809580    2112 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.809588    2112 container_manager_linux.go:302] "Creating device plugin manager" devicePluginEnabled=true
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.810859    2112 manager.go:127] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.811300    2112 server.go:64] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.812098    2112 state_mem.go:36] "Initialized new in-memory state store"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.812146    2112 util_unix.go:104] "Using this endpoint is deprecated, please consider using full URL format" endpoint="/var/run/crio/crio.sock" URL="unix:///var/run/crio/crio.sock"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.833950    2112 remote_runtime.go:139] "Using CRI v1 runtime API"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.833970    2112 util_unix.go:104] "Using this endpoint is deprecated, please consider using full URL format" endpoint="/var/run/crio/crio.sock" URL="unix:///var/run/crio/crio.sock"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.863002    2112 remote_image.go:95] "Using CRI v1 image API"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.863021    2112 server.go:993] "Cloud provider determined current node" nodeName="ip-10-0-136-68.us-west-2.compute.internal"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.863033    2112 server.go:1136] "Using root directory" path="/var/lib/kubelet"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.866113    2112 kubelet.go:393] "Attempting to sync node with API server"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.866129    2112 kubelet.go:282] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2049]: default via 10.0.128.1 dev br-ex proto dhcp src 10.0.136.68 metric 48
Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2049]: 10.0.128.0/19 dev br-ex proto kernel scope link src 10.0.136.68 metric 48
Feb 23 16:32:45 ip-10-0-136-68 systemd[1]: Reached target Multi-User System.
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.866155    2112 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.866164    2112 kubelet.go:293] "Adding apiserver pod source"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.866182    2112 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.869735    2112 kuberuntime_manager.go:240] "Container runtime initialized" containerRuntime="cri-o" version="1.25.2-6.rhaos4.12.git3c4e50c.el8" apiVersion="v1"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.872175    2112 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.875933    2112 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.875944    2112 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/rbd"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.875951    2112 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/aws-ebs"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.875957    2112 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/gce-pd"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.875963    2112 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/cinder"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.875971    2112 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/azure-disk"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.875976    2112 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/azure-file"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.875987    2112 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/vsphere-volume"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.876824    2112 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.876838    2112 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.876850    2112 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.876859    2112 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.876869    2112 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.876885    2112 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.876896    2112 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/glusterfs"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.876905    2112 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/cephfs"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.876914    2112 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.876925    2112 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.876935    2112 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.876945    2112 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.876956    2112 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.876984    2112 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.877134    2112 server.go:1175] "Started kubelet"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.879089    2112 server.go:155] "Starting to listen" address="0.0.0.0" port=10250
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.880682    2112 server.go:438] "Adding debug handlers to kubelet server"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: E0223 16:32:45.882056    2112 kubelet.go:1333] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.885309    2112 certificate_manager.go:270] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.885334    2112 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.885613    2112 certificate_manager.go:270] kubernetes.io/kubelet-serving: Certificate expiration is 2023-02-24 15:23:10 +0000 UTC, rotation deadline is 2023-02-24 09:26:53.188975612 +0000 UTC
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.885624    2112 certificate_manager.go:270] kubernetes.io/kubelet-serving: Waiting 16h54m7.303352757s for next certificate rotation
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.885716    2112 volume_manager.go:291] "The desired_state_of_world populator starts"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.885722    2112 volume_manager.go:293] "Starting Kubelet Volume Manager"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.885759    2112 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.887429    2112 factory.go:153] Registering CRI-O factory
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.887440    2112 factory.go:55] Registering systemd factory
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.887556    2112 factory.go:103] Registering Raw factory
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.889849    2112 manager.go:1201] Started watching for new ooms in manager
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.890402    2112 manager.go:302] Starting recovery of all containers
Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + ip -6 route show
Feb 23 16:32:45 ip-10-0-136-68 systemd[1]: Reached target Graphical Interface.
Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2050]: ::1 dev lo proto kernel metric 256 pref medium
Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2050]: fe80::/64 dev genev_sys_6081 proto kernel metric 256 pref medium
Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[2050]: fe80::/64 dev br-ex proto kernel metric 1024 pref medium
Feb 23 16:32:45 ip-10-0-136-68 systemd[1]: Starting Update UTMP about System Runlevel Changes...
Feb 23 16:32:45 ip-10-0-136-68 configure-ovs.sh[1269]: + exit 0
Feb 23 16:32:45 ip-10-0-136-68 systemd[1]: systemd-update-utmp-runlevel.service: Succeeded.
Feb 23 16:32:45 ip-10-0-136-68 systemd[1]: Started Update UTMP about System Runlevel Changes.
Feb 23 16:32:45 ip-10-0-136-68 systemd[1]: Startup finished in 1.242s (kernel) + 4.375s (initrd) + 8.000s (userspace) = 13.618s.
Feb 23 16:32:45 ip-10-0-136-68 systemd[1]: systemd-update-utmp-runlevel.service: Consumed 5ms CPU time
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.989127    2112 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.989285    2112 kubelet_node_status.go:424] "Adding label from cloud provider" labelKey="beta.kubernetes.io/instance-type" labelValue="m6i.xlarge"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.989369    2112 kubelet_node_status.go:426] "Adding node label from cloud provider" labelKey="node.kubernetes.io/instance-type" labelValue="m6i.xlarge"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.989416    2112 kubelet_node_status.go:437] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/zone" labelValue="us-west-2a"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.989429    2112 kubelet_node_status.go:439] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/zone" labelValue="us-west-2a"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.989442    2112 kubelet_node_status.go:443] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/region" labelValue="us-west-2"
Feb 23 16:32:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:45.989482    2112 kubelet_node_status.go:445] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/region" labelValue="us-west-2"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.014344    2112 manager.go:307] Recovery completed
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.062176    2112 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.079700    2112 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.079843    2112 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.079865    2112 state_mem.go:36] "Initialized new in-memory state store"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.082042    2112 policy_none.go:49] "None policy: Start"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.082739    2112 memory_manager.go:168] "Starting memorymanager" policy="None"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.082842    2112 state_mem.go:35] "Initializing new in-memory state store"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.084790    2112 container_manager_linux.go:427] "Updating kernel flag" flag="vm/overcommit_memory" expectedValue=1 actualValue=0
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.084871    2112 container_manager_linux.go:427] "Updating kernel flag" flag="kernel/panic" expectedValue=10 actualValue=0
Feb 23 16:32:46 ip-10-0-136-68 systemd[1]:
Created slice libcontainer container kubepods.slice. Feb 23 16:32:46 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable.slice. Feb 23 16:32:46 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-besteffort.slice. Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.112606 2112 kubelet_node_status.go:590] "Recording event message for node" node="ip-10-0-136-68.us-west-2.compute.internal" event="NodeHasSufficientMemory" Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.112857 2112 kubelet_node_status.go:590] "Recording event message for node" node="ip-10-0-136-68.us-west-2.compute.internal" event="NodeHasNoDiskPressure" Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.112928 2112 kubelet_node_status.go:590] "Recording event message for node" node="ip-10-0-136-68.us-west-2.compute.internal" event="NodeHasSufficientPID" Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.113011 2112 kubelet_node_status.go:72] "Attempting to register node" node="ip-10-0-136-68.us-west-2.compute.internal" Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.113713 2112 manager.go:273] "Starting Device Plugin manager" Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.114188 2112 manager.go:447] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.114257 2112 server.go:77] "Starting device plugin registration server" Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.115773 2112 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.115792 2112 status_manager.go:161] "Starting to sync pod status with apiserver" Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.115837 2112 kubelet.go:2033] "Starting kubelet main sync loop" Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: E0223 16:32:46.115875 2112 kubelet.go:2057] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.116281 2112 plugin_watcher.go:52] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.116414 2112 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.116487 2112 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.130862 2112 kubelet_node_status.go:110] "Node was previously registered" node="ip-10-0-136-68.us-west-2.compute.internal" Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.130927 2112 kubelet_node_status.go:75] "Successfully registered node" node="ip-10-0-136-68.us-west-2.compute.internal" Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.132637 2112 kubelet_node_status.go:590] "Recording event message for node" node="ip-10-0-136-68.us-west-2.compute.internal" event="NodeHasSufficientMemory" Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.132681 2112 kubelet_node_status.go:590] "Recording event message for node" node="ip-10-0-136-68.us-west-2.compute.internal" event="NodeHasNoDiskPressure" Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.132693 2112 kubelet_node_status.go:590] "Recording event message for node" 
node="ip-10-0-136-68.us-west-2.compute.internal" event="NodeHasSufficientPID" Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.132711 2112 kubelet_node_status.go:590] "Recording event message for node" node="ip-10-0-136-68.us-west-2.compute.internal" event="NodeReady" Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.132722 2112 kubelet_node_status.go:590] "Recording event message for node" node="ip-10-0-136-68.us-west-2.compute.internal" event="NodeNotSchedulable" Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.216231 2112 kubelet.go:2119] "SyncLoop ADD" source="file" pods=[] Feb 23 16:32:46 ip-10-0-136-68 chronyd[912]: Selected source 169.254.169.123 Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.871795 2112 apiserver.go:52] "Watching apiserver" Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.877755 2112 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-dns/node-resolver-pgc9j openshift-image-registry/node-ca-wdtzq openshift-multus/multus-gr76d openshift-multus/multus-additional-cni-plugins-p9nj2 openshift-monitoring/node-exporter-hw8fk openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4 openshift-cluster-node-tuning-operator/tuned-bjpgx openshift-ingress-canary/ingress-canary-p47qk openshift-ovn-kubernetes/ovnkube-node-qc5bl openshift-network-diagnostics/network-check-target-b2mxx openshift-machine-config-operator/machine-config-daemon-d5wlc openshift-multus/network-metrics-daemon-5hc5d openshift-dns/dns-default-h4ftg] Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.877791 2112 topology_manager.go:205] "Topology Admit Handler" Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.877854 2112 topology_manager.go:205] "Topology Admit Handler" Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.877898 2112 topology_manager.go:205] "Topology Admit Handler" Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: 
I0223 16:32:46.877934 2112 topology_manager.go:205] "Topology Admit Handler" Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.877975 2112 topology_manager.go:205] "Topology Admit Handler" Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.878337 2112 topology_manager.go:205] "Topology Admit Handler" Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.881712 2112 topology_manager.go:205] "Topology Admit Handler" Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.882084 2112 topology_manager.go:205] "Topology Admit Handler" Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.882521 2112 topology_manager.go:205] "Topology Admit Handler" Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.883916 2112 topology_manager.go:205] "Topology Admit Handler" Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.884897 2112 topology_manager.go:205] "Topology Admit Handler" Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.885064 2112 topology_manager.go:205] "Topology Admit Handler" Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.885212 2112 topology_manager.go:205] "Topology Admit Handler" Feb 23 16:32:46 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-pod507b846f_eb8a_4ca3_9d5f_e4d9f18eca32.slice. 
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.893923 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/507b846f-eb8a-4ca3-9d5f-e4d9f18eca32-hosts-file\") pod \"node-resolver-pgc9j\" (UID: \"507b846f-eb8a-4ca3-9d5f-e4d9f18eca32\") " pod="openshift-dns/node-resolver-pgc9j"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.893964 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-host-run-ovn-kubernetes\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.893994 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2c47bc3e-0247-4d47-80e3-c168262e7976-os-release\") pod \"multus-additional-cni-plugins-p9nj2\" (UID: \"2c47bc3e-0247-4d47-80e3-c168262e7976\") " pod="openshift-multus/multus-additional-cni-plugins-p9nj2"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.894020 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhxvk\" (UniqueName: \"kubernetes.io/projected/6d75c369-887c-42d2-94c1-40cd36f882c3-kube-api-access-xhxvk\") pod \"aws-ebs-csi-driver-node-5hqp4\" (UID: \"6d75c369-887c-42d2-94c1-40cd36f882c3\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.894049 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2zwz\" (UniqueName: \"kubernetes.io/projected/c072a683-1031-40cb-a1bc-1dac71bca46b-kube-api-access-w2zwz\") pod \"dns-default-h4ftg\" (UID: \"c072a683-1031-40cb-a1bc-1dac71bca46b\") " pod="openshift-dns/dns-default-h4ftg"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.894080 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-log-socket\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.894110 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9cd26ba5-46e4-40b5-81e6-74079153d58d-metrics-certs\") pod \"network-metrics-daemon-5hc5d\" (UID: \"9cd26ba5-46e4-40b5-81e6-74079153d58d\") " pod="openshift-multus/network-metrics-daemon-5hc5d"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.894134 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfmxf\" (UniqueName: \"kubernetes.io/projected/a704838c-aeb5-4709-b91c-2460423203a4-kube-api-access-nfmxf\") pod \"ingress-canary-p47qk\" (UID: \"a704838c-aeb5-4709-b91c-2460423203a4\") " pod="openshift-ingress-canary/ingress-canary-p47qk"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.894152 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-var-lib-openvswitch\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.894170 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-run-openvswitch\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.894186 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-cert\" (UniqueName: \"kubernetes.io/secret/409b8d00-553f-43cb-8805-64a5931be933-ovn-cert\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.894202 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9xlt\" (UniqueName: \"kubernetes.io/projected/409b8d00-553f-43cb-8805-64a5931be933-kube-api-access-k9xlt\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.894220 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/07267a40-e316-4a88-91a5-11bc06672f23-sys\") pod \"tuned-bjpgx\" (UID: \"07267a40-e316-4a88-91a5-11bc06672f23\") " pod="openshift-cluster-node-tuning-operator/tuned-bjpgx"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.894242 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd-system\" (UniqueName: \"kubernetes.io/host-path/07267a40-e316-4a88-91a5-11bc06672f23-run-systemd-system\") pod \"tuned-bjpgx\" (UID: \"07267a40-e316-4a88-91a5-11bc06672f23\") " pod="openshift-cluster-node-tuning-operator/tuned-bjpgx"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.894260 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/75f4efab-251e-4aa5-97d6-4a2a27025ae1-node-exporter-textfile\") pod \"node-exporter-hw8fk\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") " pod="openshift-monitoring/node-exporter-hw8fk"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.894281 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c072a683-1031-40cb-a1bc-1dac71bca46b-config-volume\") pod \"dns-default-h4ftg\" (UID: \"c072a683-1031-40cb-a1bc-1dac71bca46b\") " pod="openshift-dns/dns-default-h4ftg"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.894298 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-systemd-units\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.894318 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jwlz\" (UniqueName: \"kubernetes.io/projected/9cd26ba5-46e4-40b5-81e6-74079153d58d-kube-api-access-2jwlz\") pod \"network-metrics-daemon-5hc5d\" (UID: \"9cd26ba5-46e4-40b5-81e6-74079153d58d\") " pod="openshift-multus/network-metrics-daemon-5hc5d"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.894335 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-host-cni-bin\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.894360 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/07267a40-e316-4a88-91a5-11bc06672f23-lib-modules\") pod \"tuned-bjpgx\" (UID: \"07267a40-e316-4a88-91a5-11bc06672f23\") " pod="openshift-cluster-node-tuning-operator/tuned-bjpgx"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.894379 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2c47bc3e-0247-4d47-80e3-c168262e7976-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-p9nj2\" (UID: \"2c47bc3e-0247-4d47-80e3-c168262e7976\") " pod="openshift-multus/multus-additional-cni-plugins-p9nj2"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.894395 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/75f4efab-251e-4aa5-97d6-4a2a27025ae1-sys\") pod \"node-exporter-hw8fk\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") " pod="openshift-monitoring/node-exporter-hw8fk"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.894412 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6d75c369-887c-42d2-94c1-40cd36f882c3-registration-dir\") pod \"aws-ebs-csi-driver-node-5hqp4\" (UID: \"6d75c369-887c-42d2-94c1-40cd36f882c3\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.894431 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-host-slash\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.894448 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b97e7fe5-fe52-4769-bb52-fc233e05c05e-rootfs\") pod \"machine-config-daemon-d5wlc\" (UID: \"b97e7fe5-fe52-4769-bb52-fc233e05c05e\") " pod="openshift-machine-config-operator/machine-config-daemon-d5wlc"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.894465 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-796v8\" (UniqueName: \"kubernetes.io/projected/07267a40-e316-4a88-91a5-11bc06672f23-kube-api-access-796v8\") pod \"tuned-bjpgx\" (UID: \"07267a40-e316-4a88-91a5-11bc06672f23\") " pod="openshift-cluster-node-tuning-operator/tuned-bjpgx"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.894494 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/75f4efab-251e-4aa5-97d6-4a2a27025ae1-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-hw8fk\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") " pod="openshift-monitoring/node-exporter-hw8fk"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.894511 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2c47bc3e-0247-4d47-80e3-c168262e7976-system-cni-dir\") pod \"multus-additional-cni-plugins-p9nj2\" (UID: \"2c47bc3e-0247-4d47-80e3-c168262e7976\") " pod="openshift-multus/multus-additional-cni-plugins-p9nj2"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.894529 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ffd2cee3-1bae-4941-8015-2b3ade383d85-cnibin\") pod \"multus-gr76d\" (UID: \"ffd2cee3-1bae-4941-8015-2b3ade383d85\") " pod="openshift-multus/multus-gr76d"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.894544 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/75f4efab-251e-4aa5-97d6-4a2a27025ae1-root\") pod \"node-exporter-hw8fk\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") " pod="openshift-monitoring/node-exporter-hw8fk"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.894560 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c072a683-1031-40cb-a1bc-1dac71bca46b-metrics-tls\") pod \"dns-default-h4ftg\" (UID: \"c072a683-1031-40cb-a1bc-1dac71bca46b\") " pod="openshift-dns/dns-default-h4ftg"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.894575 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ecd261a9-4d88-4e3d-aa47-803a685b6569-serviceca\") pod \"node-ca-wdtzq\" (UID: \"ecd261a9-4d88-4e3d-aa47-803a685b6569\") " pod="openshift-image-registry/node-ca-wdtzq"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.894591 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-host-run-netns\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.894606 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ffd2cee3-1bae-4941-8015-2b3ade383d85-system-cni-dir\") pod \"multus-gr76d\" (UID: \"ffd2cee3-1bae-4941-8015-2b3ade383d85\") " pod="openshift-multus/multus-gr76d"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.894623 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/75f4efab-251e-4aa5-97d6-4a2a27025ae1-node-exporter-wtmp\") pod \"node-exporter-hw8fk\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") " pod="openshift-monitoring/node-exporter-hw8fk"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.894638 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b97e7fe5-fe52-4769-bb52-fc233e05c05e-proxy-tls\") pod \"machine-config-daemon-d5wlc\" (UID: \"b97e7fe5-fe52-4769-bb52-fc233e05c05e\") " pod="openshift-machine-config-operator/machine-config-daemon-d5wlc"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.894652 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cookie-secret\" (UniqueName: \"kubernetes.io/secret/b97e7fe5-fe52-4769-bb52-fc233e05c05e-cookie-secret\") pod \"machine-config-daemon-d5wlc\" (UID: \"b97e7fe5-fe52-4769-bb52-fc233e05c05e\") " pod="openshift-machine-config-operator/machine-config-daemon-d5wlc"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.894668 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m29j2\" (UniqueName: \"kubernetes.io/projected/b97e7fe5-fe52-4769-bb52-fc233e05c05e-kube-api-access-m29j2\") pod \"machine-config-daemon-d5wlc\" (UID: \"b97e7fe5-fe52-4769-bb52-fc233e05c05e\") " pod="openshift-machine-config-operator/machine-config-daemon-d5wlc"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.894694 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-tuned-profiles-data\" (UniqueName: \"kubernetes.io/configmap/07267a40-e316-4a88-91a5-11bc06672f23-var-lib-tuned-profiles-data\") pod \"tuned-bjpgx\" (UID: \"07267a40-e316-4a88-91a5-11bc06672f23\") " pod="openshift-cluster-node-tuning-operator/tuned-bjpgx"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.894710 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2c47bc3e-0247-4d47-80e3-c168262e7976-tuning-conf-dir\") pod \"multus-additional-cni-plugins-p9nj2\" (UID: \"2c47bc3e-0247-4d47-80e3-c168262e7976\") " pod="openshift-multus/multus-additional-cni-plugins-p9nj2"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.894726 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr2sj\" (UniqueName: \"kubernetes.io/projected/2c47bc3e-0247-4d47-80e3-c168262e7976-kube-api-access-hr2sj\") pod \"multus-additional-cni-plugins-p9nj2\" (UID: \"2c47bc3e-0247-4d47-80e3-c168262e7976\") " pod="openshift-multus/multus-additional-cni-plugins-p9nj2"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.894768 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/409b8d00-553f-43cb-8805-64a5931be933-env-overrides\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.894786 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-node-log\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.894801 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-host-cni-netd\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.894820 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-ca\" (UniqueName: \"kubernetes.io/configmap/409b8d00-553f-43cb-8805-64a5931be933-ovn-ca\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.894835 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-etc-openvswitch\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.894853 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.894868 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-dbus\" (UniqueName: \"kubernetes.io/host-path/07267a40-e316-4a88-91a5-11bc06672f23-var-run-dbus\") pod \"tuned-bjpgx\" (UID: \"07267a40-e316-4a88-91a5-11bc06672f23\") " pod="openshift-cluster-node-tuning-operator/tuned-bjpgx"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.894885 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2c47bc3e-0247-4d47-80e3-c168262e7976-cni-binary-copy\") pod \"multus-additional-cni-plugins-p9nj2\" (UID: \"2c47bc3e-0247-4d47-80e3-c168262e7976\") " pod="openshift-multus/multus-additional-cni-plugins-p9nj2"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.894920 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ffd2cee3-1bae-4941-8015-2b3ade383d85-os-release\") pod \"multus-gr76d\" (UID: \"ffd2cee3-1bae-4941-8015-2b3ade383d85\") " pod="openshift-multus/multus-gr76d"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.894947 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6d75c369-887c-42d2-94c1-40cd36f882c3-kubelet-dir\") pod \"aws-ebs-csi-driver-node-5hqp4\" (UID: \"6d75c369-887c-42d2-94c1-40cd36f882c3\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.894976 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-dir\" (UniqueName: \"kubernetes.io/host-path/6d75c369-887c-42d2-94c1-40cd36f882c3-plugin-dir\") pod \"aws-ebs-csi-driver-node-5hqp4\" (UID: \"6d75c369-887c-42d2-94c1-40cd36f882c3\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.895011 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"non-standard-root-system-trust-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d75c369-887c-42d2-94c1-40cd36f882c3-non-standard-root-system-trust-ca-bundle\") pod \"aws-ebs-csi-driver-node-5hqp4\" (UID: \"6d75c369-887c-42d2-94c1-40cd36f882c3\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.895044 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nhww\" (UniqueName: \"kubernetes.io/projected/5acce570-9f3b-4dab-9fed-169a4c110f8c-kube-api-access-7nhww\") pod \"network-check-target-b2mxx\" (UID: \"5acce570-9f3b-4dab-9fed-169a4c110f8c\") " pod="openshift-network-diagnostics/network-check-target-b2mxx"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.895072 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ffd2cee3-1bae-4941-8015-2b3ade383d85-multus-cni-dir\") pod \"multus-gr76d\" (UID: \"ffd2cee3-1bae-4941-8015-2b3ade383d85\") " pod="openshift-multus/multus-gr76d"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.895101 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/6d75c369-887c-42d2-94c1-40cd36f882c3-device-dir\") pod \"aws-ebs-csi-driver-node-5hqp4\" (UID: \"6d75c369-887c-42d2-94c1-40cd36f882c3\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.895130 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/409b8d00-553f-43cb-8805-64a5931be933-ovnkube-config\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl"
Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.895155 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc\" (UniqueName:
\"kubernetes.io/host-path/07267a40-e316-4a88-91a5-11bc06672f23-etc\") pod \"tuned-bjpgx\" (UID: \"07267a40-e316-4a88-91a5-11bc06672f23\") " pod="openshift-cluster-node-tuning-operator/tuned-bjpgx" Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.895178 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/07267a40-e316-4a88-91a5-11bc06672f23-host\") pod \"tuned-bjpgx\" (UID: \"07267a40-e316-4a88-91a5-11bc06672f23\") " pod="openshift-cluster-node-tuning-operator/tuned-bjpgx" Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.895206 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ffd2cee3-1bae-4941-8015-2b3ade383d85-cni-binary-copy\") pod \"multus-gr76d\" (UID: \"ffd2cee3-1bae-4941-8015-2b3ade383d85\") " pod="openshift-multus/multus-gr76d" Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.895235 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/75f4efab-251e-4aa5-97d6-4a2a27025ae1-node-exporter-tls\") pod \"node-exporter-hw8fk\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") " pod="openshift-monitoring/node-exporter-hw8fk" Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.895262 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqpfc\" (UniqueName: \"kubernetes.io/projected/ecd261a9-4d88-4e3d-aa47-803a685b6569-kube-api-access-jqpfc\") pod \"node-ca-wdtzq\" (UID: \"ecd261a9-4d88-4e3d-aa47-803a685b6569\") " pod="openshift-image-registry/node-ca-wdtzq" Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.895292 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74mgq\" (UniqueName: 
\"kubernetes.io/projected/507b846f-eb8a-4ca3-9d5f-e4d9f18eca32-kube-api-access-74mgq\") pod \"node-resolver-pgc9j\" (UID: \"507b846f-eb8a-4ca3-9d5f-e4d9f18eca32\") " pod="openshift-dns/node-resolver-pgc9j" Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.895320 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ecd261a9-4d88-4e3d-aa47-803a685b6569-host\") pod \"node-ca-wdtzq\" (UID: \"ecd261a9-4d88-4e3d-aa47-803a685b6569\") " pod="openshift-image-registry/node-ca-wdtzq" Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.895348 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/75f4efab-251e-4aa5-97d6-4a2a27025ae1-metrics-client-ca\") pod \"node-exporter-hw8fk\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") " pod="openshift-monitoring/node-exporter-hw8fk" Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.895378 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-run-ovn\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.895409 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2c47bc3e-0247-4d47-80e3-c168262e7976-cnibin\") pod \"multus-additional-cni-plugins-p9nj2\" (UID: \"2c47bc3e-0247-4d47-80e3-c168262e7976\") " pod="openshift-multus/multus-additional-cni-plugins-p9nj2" Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.895440 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4glw\" 
(UniqueName: \"kubernetes.io/projected/ffd2cee3-1bae-4941-8015-2b3ade383d85-kube-api-access-v4glw\") pod \"multus-gr76d\" (UID: \"ffd2cee3-1bae-4941-8015-2b3ade383d85\") " pod="openshift-multus/multus-gr76d" Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.895474 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdk85\" (UniqueName: \"kubernetes.io/projected/75f4efab-251e-4aa5-97d6-4a2a27025ae1-kube-api-access-vdk85\") pod \"node-exporter-hw8fk\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") " pod="openshift-monitoring/node-exporter-hw8fk" Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.895504 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/409b8d00-553f-43cb-8805-64a5931be933-ovn-node-metrics-cert\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:46.895516 2112 reconciler.go:169] "Reconciler: start to sync state" Feb 23 16:32:46 ip-10-0-136-68 kubenswrapper[2112]: W0223 16:32:46.899461 2112 watcher.go:93] Error while processing event ("/sys/fs/cgroup/devices/kubepods.slice/kubepods-burstable.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/kubepods.slice/kubepods-burstable.slice: no such file or directory Feb 23 16:32:46 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-podecd261a9_4d88_4e3d_aa47_803a685b6569.slice. Feb 23 16:32:46 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-pod409b8d00_553f_43cb_8805_64a5931be933.slice. Feb 23 16:32:46 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-pod9cd26ba5_46e4_40b5_81e6_74079153d58d.slice. 
Feb 23 16:32:46 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-pod2c47bc3e_0247_4d47_80e3_c168262e7976.slice. Feb 23 16:32:46 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-pod07267a40_e316_4a88_91a5_11bc06672f23.slice. Feb 23 16:32:46 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-poda704838c_aeb5_4709_b91c_2460423203a4.slice. Feb 23 16:32:46 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-pod6d75c369_887c_42d2_94c1_40cd36f882c3.slice. Feb 23 16:32:46 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-pod75f4efab_251e_4aa5_97d6_4a2a27025ae1.slice. Feb 23 16:32:46 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-podc072a683_1031_40cb_a1bc_1dac71bca46b.slice. Feb 23 16:32:47 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-podffd2cee3_1bae_4941_8015_2b3ade383d85.slice. Feb 23 16:32:47 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-podb97e7fe5_fe52_4769_bb52_fc233e05c05e.slice. 
Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: W0223 16:32:47.011206 2112 watcher.go:93] Error while processing event ("/sys/fs/cgroup/devices/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb97e7fe5_fe52_4769_bb52_fc233e05c05e.slice": 0x40000100 == IN_CREATE|IN_ISDIR): readdirent /sys/fs/cgroup/devices/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb97e7fe5_fe52_4769_bb52_fc233e05c05e.slice: no such file or directory Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.012936 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2c47bc3e-0247-4d47-80e3-c168262e7976-system-cni-dir\") pod \"multus-additional-cni-plugins-p9nj2\" (UID: \"2c47bc3e-0247-4d47-80e3-c168262e7976\") " pod="openshift-multus/multus-additional-cni-plugins-p9nj2" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.012988 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ffd2cee3-1bae-4941-8015-2b3ade383d85-cnibin\") pod \"multus-gr76d\" (UID: \"ffd2cee3-1bae-4941-8015-2b3ade383d85\") " pod="openshift-multus/multus-gr76d" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.013033 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/75f4efab-251e-4aa5-97d6-4a2a27025ae1-root\") pod \"node-exporter-hw8fk\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") " pod="openshift-monitoring/node-exporter-hw8fk" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.013062 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c072a683-1031-40cb-a1bc-1dac71bca46b-metrics-tls\") pod \"dns-default-h4ftg\" (UID: \"c072a683-1031-40cb-a1bc-1dac71bca46b\") " pod="openshift-dns/dns-default-h4ftg" Feb 23 16:32:47 ip-10-0-136-68 
kubenswrapper[2112]: I0223 16:32:47.013089 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ecd261a9-4d88-4e3d-aa47-803a685b6569-serviceca\") pod \"node-ca-wdtzq\" (UID: \"ecd261a9-4d88-4e3d-aa47-803a685b6569\") " pod="openshift-image-registry/node-ca-wdtzq" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.013132 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-host-run-netns\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.013162 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b97e7fe5-fe52-4769-bb52-fc233e05c05e-proxy-tls\") pod \"machine-config-daemon-d5wlc\" (UID: \"b97e7fe5-fe52-4769-bb52-fc233e05c05e\") " pod="openshift-machine-config-operator/machine-config-daemon-d5wlc" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.013197 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"cookie-secret\" (UniqueName: \"kubernetes.io/secret/b97e7fe5-fe52-4769-bb52-fc233e05c05e-cookie-secret\") pod \"machine-config-daemon-d5wlc\" (UID: \"b97e7fe5-fe52-4769-bb52-fc233e05c05e\") " pod="openshift-machine-config-operator/machine-config-daemon-d5wlc" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.013232 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-m29j2\" (UniqueName: \"kubernetes.io/projected/b97e7fe5-fe52-4769-bb52-fc233e05c05e-kube-api-access-m29j2\") pod \"machine-config-daemon-d5wlc\" (UID: \"b97e7fe5-fe52-4769-bb52-fc233e05c05e\") " pod="openshift-machine-config-operator/machine-config-daemon-d5wlc" Feb 23 16:32:47 ip-10-0-136-68 
kubenswrapper[2112]: I0223 16:32:47.013266 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"var-lib-tuned-profiles-data\" (UniqueName: \"kubernetes.io/configmap/07267a40-e316-4a88-91a5-11bc06672f23-var-lib-tuned-profiles-data\") pod \"tuned-bjpgx\" (UID: \"07267a40-e316-4a88-91a5-11bc06672f23\") " pod="openshift-cluster-node-tuning-operator/tuned-bjpgx" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.013316 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2c47bc3e-0247-4d47-80e3-c168262e7976-tuning-conf-dir\") pod \"multus-additional-cni-plugins-p9nj2\" (UID: \"2c47bc3e-0247-4d47-80e3-c168262e7976\") " pod="openshift-multus/multus-additional-cni-plugins-p9nj2" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.013347 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-hr2sj\" (UniqueName: \"kubernetes.io/projected/2c47bc3e-0247-4d47-80e3-c168262e7976-kube-api-access-hr2sj\") pod \"multus-additional-cni-plugins-p9nj2\" (UID: \"2c47bc3e-0247-4d47-80e3-c168262e7976\") " pod="openshift-multus/multus-additional-cni-plugins-p9nj2" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.013379 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ffd2cee3-1bae-4941-8015-2b3ade383d85-system-cni-dir\") pod \"multus-gr76d\" (UID: \"ffd2cee3-1bae-4941-8015-2b3ade383d85\") " pod="openshift-multus/multus-gr76d" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.013410 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/75f4efab-251e-4aa5-97d6-4a2a27025ae1-node-exporter-wtmp\") pod \"node-exporter-hw8fk\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") " pod="openshift-monitoring/node-exporter-hw8fk" Feb 23 
16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.013441 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/409b8d00-553f-43cb-8805-64a5931be933-env-overrides\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.013476 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-node-log\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.013512 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-host-cni-netd\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.013546 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"ovn-ca\" (UniqueName: \"kubernetes.io/configmap/409b8d00-553f-43cb-8805-64a5931be933-ovn-ca\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.013619 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"var-run-dbus\" (UniqueName: \"kubernetes.io/host-path/07267a40-e316-4a88-91a5-11bc06672f23-var-run-dbus\") pod \"tuned-bjpgx\" (UID: \"07267a40-e316-4a88-91a5-11bc06672f23\") " pod="openshift-cluster-node-tuning-operator/tuned-bjpgx" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.013661 2112 reconciler.go:269] 
"operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2c47bc3e-0247-4d47-80e3-c168262e7976-cni-binary-copy\") pod \"multus-additional-cni-plugins-p9nj2\" (UID: \"2c47bc3e-0247-4d47-80e3-c168262e7976\") " pod="openshift-multus/multus-additional-cni-plugins-p9nj2" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.013687 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ffd2cee3-1bae-4941-8015-2b3ade383d85-os-release\") pod \"multus-gr76d\" (UID: \"ffd2cee3-1bae-4941-8015-2b3ade383d85\") " pod="openshift-multus/multus-gr76d" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.013730 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6d75c369-887c-42d2-94c1-40cd36f882c3-kubelet-dir\") pod \"aws-ebs-csi-driver-node-5hqp4\" (UID: \"6d75c369-887c-42d2-94c1-40cd36f882c3\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.013815 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"plugin-dir\" (UniqueName: \"kubernetes.io/host-path/6d75c369-887c-42d2-94c1-40cd36f882c3-plugin-dir\") pod \"aws-ebs-csi-driver-node-5hqp4\" (UID: \"6d75c369-887c-42d2-94c1-40cd36f882c3\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.013863 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"non-standard-root-system-trust-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d75c369-887c-42d2-94c1-40cd36f882c3-non-standard-root-system-trust-ca-bundle\") pod \"aws-ebs-csi-driver-node-5hqp4\" (UID: \"6d75c369-887c-42d2-94c1-40cd36f882c3\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4" Feb 23 16:32:47 ip-10-0-136-68 
kubenswrapper[2112]: I0223 16:32:47.013902 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-etc-openvswitch\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.013917 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2c47bc3e-0247-4d47-80e3-c168262e7976-system-cni-dir\") pod \"multus-additional-cni-plugins-p9nj2\" (UID: \"2c47bc3e-0247-4d47-80e3-c168262e7976\") " pod="openshift-multus/multus-additional-cni-plugins-p9nj2" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.013934 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.013971 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.013987 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-7nhww\" (UniqueName: \"kubernetes.io/projected/5acce570-9f3b-4dab-9fed-169a4c110f8c-kube-api-access-7nhww\") pod \"network-check-target-b2mxx\" (UID: 
\"5acce570-9f3b-4dab-9fed-169a4c110f8c\") " pod="openshift-network-diagnostics/network-check-target-b2mxx" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.014022 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ffd2cee3-1bae-4941-8015-2b3ade383d85-multus-cni-dir\") pod \"multus-gr76d\" (UID: \"ffd2cee3-1bae-4941-8015-2b3ade383d85\") " pod="openshift-multus/multus-gr76d" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.014052 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/6d75c369-887c-42d2-94c1-40cd36f882c3-device-dir\") pod \"aws-ebs-csi-driver-node-5hqp4\" (UID: \"6d75c369-887c-42d2-94c1-40cd36f882c3\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.014080 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/409b8d00-553f-43cb-8805-64a5931be933-ovnkube-config\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.014107 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"etc\" (UniqueName: \"kubernetes.io/host-path/07267a40-e316-4a88-91a5-11bc06672f23-etc\") pod \"tuned-bjpgx\" (UID: \"07267a40-e316-4a88-91a5-11bc06672f23\") " pod="openshift-cluster-node-tuning-operator/tuned-bjpgx" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.014134 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/07267a40-e316-4a88-91a5-11bc06672f23-host\") pod \"tuned-bjpgx\" (UID: \"07267a40-e316-4a88-91a5-11bc06672f23\") " 
pod="openshift-cluster-node-tuning-operator/tuned-bjpgx" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.014161 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ffd2cee3-1bae-4941-8015-2b3ade383d85-cni-binary-copy\") pod \"multus-gr76d\" (UID: \"ffd2cee3-1bae-4941-8015-2b3ade383d85\") " pod="openshift-multus/multus-gr76d" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.014201 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/75f4efab-251e-4aa5-97d6-4a2a27025ae1-node-exporter-tls\") pod \"node-exporter-hw8fk\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") " pod="openshift-monitoring/node-exporter-hw8fk" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.014225 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-jqpfc\" (UniqueName: \"kubernetes.io/projected/ecd261a9-4d88-4e3d-aa47-803a685b6569-kube-api-access-jqpfc\") pod \"node-ca-wdtzq\" (UID: \"ecd261a9-4d88-4e3d-aa47-803a685b6569\") " pod="openshift-image-registry/node-ca-wdtzq" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.014248 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-74mgq\" (UniqueName: \"kubernetes.io/projected/507b846f-eb8a-4ca3-9d5f-e4d9f18eca32-kube-api-access-74mgq\") pod \"node-resolver-pgc9j\" (UID: \"507b846f-eb8a-4ca3-9d5f-e4d9f18eca32\") " pod="openshift-dns/node-resolver-pgc9j" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.014280 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ecd261a9-4d88-4e3d-aa47-803a685b6569-host\") pod \"node-ca-wdtzq\" (UID: \"ecd261a9-4d88-4e3d-aa47-803a685b6569\") " pod="openshift-image-registry/node-ca-wdtzq" Feb 23 16:32:47 ip-10-0-136-68 
kubenswrapper[2112]: I0223 16:32:47.014311 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/75f4efab-251e-4aa5-97d6-4a2a27025ae1-metrics-client-ca\") pod \"node-exporter-hw8fk\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") " pod="openshift-monitoring/node-exporter-hw8fk" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.014345 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-run-ovn\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.014365 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"var-lib-tuned-profiles-data\" (UniqueName: \"kubernetes.io/configmap/07267a40-e316-4a88-91a5-11bc06672f23-var-lib-tuned-profiles-data\") pod \"tuned-bjpgx\" (UID: \"07267a40-e316-4a88-91a5-11bc06672f23\") " pod="openshift-cluster-node-tuning-operator/tuned-bjpgx" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.014381 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2c47bc3e-0247-4d47-80e3-c168262e7976-cnibin\") pod \"multus-additional-cni-plugins-p9nj2\" (UID: \"2c47bc3e-0247-4d47-80e3-c168262e7976\") " pod="openshift-multus/multus-additional-cni-plugins-p9nj2" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.014409 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-v4glw\" (UniqueName: \"kubernetes.io/projected/ffd2cee3-1bae-4941-8015-2b3ade383d85-kube-api-access-v4glw\") pod \"multus-gr76d\" (UID: \"ffd2cee3-1bae-4941-8015-2b3ade383d85\") " pod="openshift-multus/multus-gr76d" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 
16:32:47.014444 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-vdk85\" (UniqueName: \"kubernetes.io/projected/75f4efab-251e-4aa5-97d6-4a2a27025ae1-kube-api-access-vdk85\") pod \"node-exporter-hw8fk\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") " pod="openshift-monitoring/node-exporter-hw8fk" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.014472 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/409b8d00-553f-43cb-8805-64a5931be933-ovn-node-metrics-cert\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.014495 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/507b846f-eb8a-4ca3-9d5f-e4d9f18eca32-hosts-file\") pod \"node-resolver-pgc9j\" (UID: \"507b846f-eb8a-4ca3-9d5f-e4d9f18eca32\") " pod="openshift-dns/node-resolver-pgc9j" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.014521 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-host-run-ovn-kubernetes\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.014542 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2c47bc3e-0247-4d47-80e3-c168262e7976-os-release\") pod \"multus-additional-cni-plugins-p9nj2\" (UID: \"2c47bc3e-0247-4d47-80e3-c168262e7976\") " pod="openshift-multus/multus-additional-cni-plugins-p9nj2" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.014563 
2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-xhxvk\" (UniqueName: \"kubernetes.io/projected/6d75c369-887c-42d2-94c1-40cd36f882c3-kube-api-access-xhxvk\") pod \"aws-ebs-csi-driver-node-5hqp4\" (UID: \"6d75c369-887c-42d2-94c1-40cd36f882c3\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.014585 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-w2zwz\" (UniqueName: \"kubernetes.io/projected/c072a683-1031-40cb-a1bc-1dac71bca46b-kube-api-access-w2zwz\") pod \"dns-default-h4ftg\" (UID: \"c072a683-1031-40cb-a1bc-1dac71bca46b\") " pod="openshift-dns/dns-default-h4ftg" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.014610 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-log-socket\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.014633 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9cd26ba5-46e4-40b5-81e6-74079153d58d-metrics-certs\") pod \"network-metrics-daemon-5hc5d\" (UID: \"9cd26ba5-46e4-40b5-81e6-74079153d58d\") " pod="openshift-multus/network-metrics-daemon-5hc5d" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.014650 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-nfmxf\" (UniqueName: \"kubernetes.io/projected/a704838c-aeb5-4709-b91c-2460423203a4-kube-api-access-nfmxf\") pod \"ingress-canary-p47qk\" (UID: \"a704838c-aeb5-4709-b91c-2460423203a4\") " pod="openshift-ingress-canary/ingress-canary-p47qk" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.014668 
2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-var-lib-openvswitch\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.014687 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-run-openvswitch\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.014704 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"ovn-cert\" (UniqueName: \"kubernetes.io/secret/409b8d00-553f-43cb-8805-64a5931be933-ovn-cert\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.014722 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-k9xlt\" (UniqueName: \"kubernetes.io/projected/409b8d00-553f-43cb-8805-64a5931be933-kube-api-access-k9xlt\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.014817 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/07267a40-e316-4a88-91a5-11bc06672f23-sys\") pod \"tuned-bjpgx\" (UID: \"07267a40-e316-4a88-91a5-11bc06672f23\") " pod="openshift-cluster-node-tuning-operator/tuned-bjpgx" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.014837 2112 reconciler.go:269] "operationExecutor.MountVolume started for 
volume \"run-systemd-system\" (UniqueName: \"kubernetes.io/host-path/07267a40-e316-4a88-91a5-11bc06672f23-run-systemd-system\") pod \"tuned-bjpgx\" (UID: \"07267a40-e316-4a88-91a5-11bc06672f23\") " pod="openshift-cluster-node-tuning-operator/tuned-bjpgx" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.014854 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/75f4efab-251e-4aa5-97d6-4a2a27025ae1-node-exporter-textfile\") pod \"node-exporter-hw8fk\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") " pod="openshift-monitoring/node-exporter-hw8fk" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.014887 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c072a683-1031-40cb-a1bc-1dac71bca46b-config-volume\") pod \"dns-default-h4ftg\" (UID: \"c072a683-1031-40cb-a1bc-1dac71bca46b\") " pod="openshift-dns/dns-default-h4ftg" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.014912 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-systemd-units\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.014943 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-2jwlz\" (UniqueName: \"kubernetes.io/projected/9cd26ba5-46e4-40b5-81e6-74079153d58d-kube-api-access-2jwlz\") pod \"network-metrics-daemon-5hc5d\" (UID: \"9cd26ba5-46e4-40b5-81e6-74079153d58d\") " pod="openshift-multus/network-metrics-daemon-5hc5d" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.014959 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"host-cni-bin\" 
(UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-host-cni-bin\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.014991 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/07267a40-e316-4a88-91a5-11bc06672f23-lib-modules\") pod \"tuned-bjpgx\" (UID: \"07267a40-e316-4a88-91a5-11bc06672f23\") " pod="openshift-cluster-node-tuning-operator/tuned-bjpgx" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.015029 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2c47bc3e-0247-4d47-80e3-c168262e7976-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-p9nj2\" (UID: \"2c47bc3e-0247-4d47-80e3-c168262e7976\") " pod="openshift-multus/multus-additional-cni-plugins-p9nj2" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.015059 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/75f4efab-251e-4aa5-97d6-4a2a27025ae1-sys\") pod \"node-exporter-hw8fk\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") " pod="openshift-monitoring/node-exporter-hw8fk" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.015090 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6d75c369-887c-42d2-94c1-40cd36f882c3-registration-dir\") pod \"aws-ebs-csi-driver-node-5hqp4\" (UID: \"6d75c369-887c-42d2-94c1-40cd36f882c3\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.015111 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-host-slash\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.015161 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b97e7fe5-fe52-4769-bb52-fc233e05c05e-rootfs\") pod \"machine-config-daemon-d5wlc\" (UID: \"b97e7fe5-fe52-4769-bb52-fc233e05c05e\") " pod="openshift-machine-config-operator/machine-config-daemon-d5wlc" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.015197 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-796v8\" (UniqueName: \"kubernetes.io/projected/07267a40-e316-4a88-91a5-11bc06672f23-kube-api-access-796v8\") pod \"tuned-bjpgx\" (UID: \"07267a40-e316-4a88-91a5-11bc06672f23\") " pod="openshift-cluster-node-tuning-operator/tuned-bjpgx" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.015227 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/75f4efab-251e-4aa5-97d6-4a2a27025ae1-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-hw8fk\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") " pod="openshift-monitoring/node-exporter-hw8fk" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.015623 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/507b846f-eb8a-4ca3-9d5f-e4d9f18eca32-hosts-file\") pod \"node-resolver-pgc9j\" (UID: \"507b846f-eb8a-4ca3-9d5f-e4d9f18eca32\") " pod="openshift-dns/node-resolver-pgc9j" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.015697 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/ffd2cee3-1bae-4941-8015-2b3ade383d85-cnibin\") pod \"multus-gr76d\" (UID: \"ffd2cee3-1bae-4941-8015-2b3ade383d85\") " pod="openshift-multus/multus-gr76d" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.015772 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/75f4efab-251e-4aa5-97d6-4a2a27025ae1-root\") pod \"node-exporter-hw8fk\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") " pod="openshift-monitoring/node-exporter-hw8fk" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.017333 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2c47bc3e-0247-4d47-80e3-c168262e7976-tuning-conf-dir\") pod \"multus-additional-cni-plugins-p9nj2\" (UID: \"2c47bc3e-0247-4d47-80e3-c168262e7976\") " pod="openshift-multus/multus-additional-cni-plugins-p9nj2" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.017593 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ffd2cee3-1bae-4941-8015-2b3ade383d85-system-cni-dir\") pod \"multus-gr76d\" (UID: \"ffd2cee3-1bae-4941-8015-2b3ade383d85\") " pod="openshift-multus/multus-gr76d" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.017722 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/75f4efab-251e-4aa5-97d6-4a2a27025ae1-node-exporter-wtmp\") pod \"node-exporter-hw8fk\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") " pod="openshift-monitoring/node-exporter-hw8fk" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.017949 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/409b8d00-553f-43cb-8805-64a5931be933-env-overrides\") pod 
\"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.017948 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/75f4efab-251e-4aa5-97d6-4a2a27025ae1-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-hw8fk\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") " pod="openshift-monitoring/node-exporter-hw8fk" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.018004 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-host-run-ovn-kubernetes\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.018070 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2c47bc3e-0247-4d47-80e3-c168262e7976-os-release\") pod \"multus-additional-cni-plugins-p9nj2\" (UID: \"2c47bc3e-0247-4d47-80e3-c168262e7976\") " pod="openshift-multus/multus-additional-cni-plugins-p9nj2" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.018076 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-node-log\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.018117 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-host-cni-netd\") pod 
\"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.018501 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-log-socket\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.019517 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ecd261a9-4d88-4e3d-aa47-803a685b6569-serviceca\") pod \"node-ca-wdtzq\" (UID: \"ecd261a9-4d88-4e3d-aa47-803a685b6569\") " pod="openshift-image-registry/node-ca-wdtzq" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.019699 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-host-run-netns\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.020184 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"ovn-ca\" (UniqueName: \"kubernetes.io/configmap/409b8d00-553f-43cb-8805-64a5931be933-ovn-ca\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.020268 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"var-run-dbus\" (UniqueName: \"kubernetes.io/host-path/07267a40-e316-4a88-91a5-11bc06672f23-var-run-dbus\") pod \"tuned-bjpgx\" (UID: \"07267a40-e316-4a88-91a5-11bc06672f23\") " 
pod="openshift-cluster-node-tuning-operator/tuned-bjpgx" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.020997 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9cd26ba5-46e4-40b5-81e6-74079153d58d-metrics-certs\") pod \"network-metrics-daemon-5hc5d\" (UID: \"9cd26ba5-46e4-40b5-81e6-74079153d58d\") " pod="openshift-multus/network-metrics-daemon-5hc5d" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.021250 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-var-lib-openvswitch\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.021304 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-run-openvswitch\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.022019 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2c47bc3e-0247-4d47-80e3-c168262e7976-cni-binary-copy\") pod \"multus-additional-cni-plugins-p9nj2\" (UID: \"2c47bc3e-0247-4d47-80e3-c168262e7976\") " pod="openshift-multus/multus-additional-cni-plugins-p9nj2" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.022107 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ffd2cee3-1bae-4941-8015-2b3ade383d85-os-release\") pod \"multus-gr76d\" (UID: \"ffd2cee3-1bae-4941-8015-2b3ade383d85\") " 
pod="openshift-multus/multus-gr76d" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.022166 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6d75c369-887c-42d2-94c1-40cd36f882c3-kubelet-dir\") pod \"aws-ebs-csi-driver-node-5hqp4\" (UID: \"6d75c369-887c-42d2-94c1-40cd36f882c3\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.022236 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"plugin-dir\" (UniqueName: \"kubernetes.io/host-path/6d75c369-887c-42d2-94c1-40cd36f882c3-plugin-dir\") pod \"aws-ebs-csi-driver-node-5hqp4\" (UID: \"6d75c369-887c-42d2-94c1-40cd36f882c3\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.023199 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b97e7fe5-fe52-4769-bb52-fc233e05c05e-proxy-tls\") pod \"machine-config-daemon-d5wlc\" (UID: \"b97e7fe5-fe52-4769-bb52-fc233e05c05e\") " pod="openshift-machine-config-operator/machine-config-daemon-d5wlc" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.024299 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"ovn-cert\" (UniqueName: \"kubernetes.io/secret/409b8d00-553f-43cb-8805-64a5931be933-ovn-cert\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.024592 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/07267a40-e316-4a88-91a5-11bc06672f23-sys\") pod \"tuned-bjpgx\" (UID: \"07267a40-e316-4a88-91a5-11bc06672f23\") " pod="openshift-cluster-node-tuning-operator/tuned-bjpgx" Feb 23 
16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.024657 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"run-systemd-system\" (UniqueName: \"kubernetes.io/host-path/07267a40-e316-4a88-91a5-11bc06672f23-run-systemd-system\") pod \"tuned-bjpgx\" (UID: \"07267a40-e316-4a88-91a5-11bc06672f23\") " pod="openshift-cluster-node-tuning-operator/tuned-bjpgx" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.024819 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/75f4efab-251e-4aa5-97d6-4a2a27025ae1-node-exporter-textfile\") pod \"node-exporter-hw8fk\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") " pod="openshift-monitoring/node-exporter-hw8fk" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.025813 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c072a683-1031-40cb-a1bc-1dac71bca46b-config-volume\") pod \"dns-default-h4ftg\" (UID: \"c072a683-1031-40cb-a1bc-1dac71bca46b\") " pod="openshift-dns/dns-default-h4ftg" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.025883 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-systemd-units\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.026088 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"non-standard-root-system-trust-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d75c369-887c-42d2-94c1-40cd36f882c3-non-standard-root-system-trust-ca-bundle\") pod \"aws-ebs-csi-driver-node-5hqp4\" (UID: \"6d75c369-887c-42d2-94c1-40cd36f882c3\") " 
pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.026151 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-host-cni-bin\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.026237 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/07267a40-e316-4a88-91a5-11bc06672f23-lib-modules\") pod \"tuned-bjpgx\" (UID: \"07267a40-e316-4a88-91a5-11bc06672f23\") " pod="openshift-cluster-node-tuning-operator/tuned-bjpgx" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.026337 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"cookie-secret\" (UniqueName: \"kubernetes.io/secret/b97e7fe5-fe52-4769-bb52-fc233e05c05e-cookie-secret\") pod \"machine-config-daemon-d5wlc\" (UID: \"b97e7fe5-fe52-4769-bb52-fc233e05c05e\") " pod="openshift-machine-config-operator/machine-config-daemon-d5wlc" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.026433 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-etc-openvswitch\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.026586 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-host-slash\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.026606 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ffd2cee3-1bae-4941-8015-2b3ade383d85-multus-cni-dir\") pod \"multus-gr76d\" (UID: \"ffd2cee3-1bae-4941-8015-2b3ade383d85\") " pod="openshift-multus/multus-gr76d" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.026643 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/75f4efab-251e-4aa5-97d6-4a2a27025ae1-sys\") pod \"node-exporter-hw8fk\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") " pod="openshift-monitoring/node-exporter-hw8fk" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.026663 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/6d75c369-887c-42d2-94c1-40cd36f882c3-device-dir\") pod \"aws-ebs-csi-driver-node-5hqp4\" (UID: \"6d75c369-887c-42d2-94c1-40cd36f882c3\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.026694 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6d75c369-887c-42d2-94c1-40cd36f882c3-registration-dir\") pod \"aws-ebs-csi-driver-node-5hqp4\" (UID: \"6d75c369-887c-42d2-94c1-40cd36f882c3\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.026759 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ecd261a9-4d88-4e3d-aa47-803a685b6569-host\") pod \"node-ca-wdtzq\" (UID: \"ecd261a9-4d88-4e3d-aa47-803a685b6569\") " pod="openshift-image-registry/node-ca-wdtzq" Feb 23 16:32:47 ip-10-0-136-68 
kubenswrapper[2112]: I0223 16:32:47.026804 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"etc\" (UniqueName: \"kubernetes.io/host-path/07267a40-e316-4a88-91a5-11bc06672f23-etc\") pod \"tuned-bjpgx\" (UID: \"07267a40-e316-4a88-91a5-11bc06672f23\") " pod="openshift-cluster-node-tuning-operator/tuned-bjpgx" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.026852 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/07267a40-e316-4a88-91a5-11bc06672f23-host\") pod \"tuned-bjpgx\" (UID: \"07267a40-e316-4a88-91a5-11bc06672f23\") " pod="openshift-cluster-node-tuning-operator/tuned-bjpgx" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.027445 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/409b8d00-553f-43cb-8805-64a5931be933-ovnkube-config\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.027787 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2c47bc3e-0247-4d47-80e3-c168262e7976-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-p9nj2\" (UID: \"2c47bc3e-0247-4d47-80e3-c168262e7976\") " pod="openshift-multus/multus-additional-cni-plugins-p9nj2" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.027861 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b97e7fe5-fe52-4769-bb52-fc233e05c05e-rootfs\") pod \"machine-config-daemon-d5wlc\" (UID: \"b97e7fe5-fe52-4769-bb52-fc233e05c05e\") " pod="openshift-machine-config-operator/machine-config-daemon-d5wlc" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.028559 2112 
operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/75f4efab-251e-4aa5-97d6-4a2a27025ae1-metrics-client-ca\") pod \"node-exporter-hw8fk\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") " pod="openshift-monitoring/node-exporter-hw8fk" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.028622 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-run-ovn\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.028672 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2c47bc3e-0247-4d47-80e3-c168262e7976-cnibin\") pod \"multus-additional-cni-plugins-p9nj2\" (UID: \"2c47bc3e-0247-4d47-80e3-c168262e7976\") " pod="openshift-multus/multus-additional-cni-plugins-p9nj2" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.029793 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ffd2cee3-1bae-4941-8015-2b3ade383d85-cni-binary-copy\") pod \"multus-gr76d\" (UID: \"ffd2cee3-1bae-4941-8015-2b3ade383d85\") " pod="openshift-multus/multus-gr76d" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.030283 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c072a683-1031-40cb-a1bc-1dac71bca46b-metrics-tls\") pod \"dns-default-h4ftg\" (UID: \"c072a683-1031-40cb-a1bc-1dac71bca46b\") " pod="openshift-dns/dns-default-h4ftg" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.031271 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" 
(UniqueName: \"kubernetes.io/secret/409b8d00-553f-43cb-8805-64a5931be933-ovn-node-metrics-cert\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.031870 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/75f4efab-251e-4aa5-97d6-4a2a27025ae1-node-exporter-tls\") pod \"node-exporter-hw8fk\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") " pod="openshift-monitoring/node-exporter-hw8fk" Feb 23 16:32:47 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-pod5acce570_9f3b_4dab_9fed_169a4c110f8c.slice. Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.063086 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhxvk\" (UniqueName: \"kubernetes.io/projected/6d75c369-887c-42d2-94c1-40cd36f882c3-kube-api-access-xhxvk\") pod \"aws-ebs-csi-driver-node-5hqp4\" (UID: \"6d75c369-887c-42d2-94c1-40cd36f882c3\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.063579 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nhww\" (UniqueName: \"kubernetes.io/projected/5acce570-9f3b-4dab-9fed-169a4c110f8c-kube-api-access-7nhww\") pod \"network-check-target-b2mxx\" (UID: \"5acce570-9f3b-4dab-9fed-169a4c110f8c\") " pod="openshift-network-diagnostics/network-check-target-b2mxx" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.065087 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9xlt\" (UniqueName: \"kubernetes.io/projected/409b8d00-553f-43cb-8805-64a5931be933-kube-api-access-k9xlt\") pod \"ovnkube-node-qc5bl\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.065238 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfmxf\" (UniqueName: \"kubernetes.io/projected/a704838c-aeb5-4709-b91c-2460423203a4-kube-api-access-nfmxf\") pod \"ingress-canary-p47qk\" (UID: \"a704838c-aeb5-4709-b91c-2460423203a4\") " pod="openshift-ingress-canary/ingress-canary-p47qk" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.065819 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2zwz\" (UniqueName: \"kubernetes.io/projected/c072a683-1031-40cb-a1bc-1dac71bca46b-kube-api-access-w2zwz\") pod \"dns-default-h4ftg\" (UID: \"c072a683-1031-40cb-a1bc-1dac71bca46b\") " pod="openshift-dns/dns-default-h4ftg" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.066763 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-m29j2\" (UniqueName: \"kubernetes.io/projected/b97e7fe5-fe52-4769-bb52-fc233e05c05e-kube-api-access-m29j2\") pod \"machine-config-daemon-d5wlc\" (UID: \"b97e7fe5-fe52-4769-bb52-fc233e05c05e\") " pod="openshift-machine-config-operator/machine-config-daemon-d5wlc" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.068005 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-796v8\" (UniqueName: \"kubernetes.io/projected/07267a40-e316-4a88-91a5-11bc06672f23-kube-api-access-796v8\") pod \"tuned-bjpgx\" (UID: \"07267a40-e316-4a88-91a5-11bc06672f23\") " pod="openshift-cluster-node-tuning-operator/tuned-bjpgx" Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.071554 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr2sj\" (UniqueName: \"kubernetes.io/projected/2c47bc3e-0247-4d47-80e3-c168262e7976-kube-api-access-hr2sj\") pod \"multus-additional-cni-plugins-p9nj2\" (UID: 
\"2c47bc3e-0247-4d47-80e3-c168262e7976\") " pod="openshift-multus/multus-additional-cni-plugins-p9nj2"
Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.080658 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jwlz\" (UniqueName: \"kubernetes.io/projected/9cd26ba5-46e4-40b5-81e6-74079153d58d-kube-api-access-2jwlz\") pod \"network-metrics-daemon-5hc5d\" (UID: \"9cd26ba5-46e4-40b5-81e6-74079153d58d\") " pod="openshift-multus/network-metrics-daemon-5hc5d"
Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.091830 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4glw\" (UniqueName: \"kubernetes.io/projected/ffd2cee3-1bae-4941-8015-2b3ade383d85-kube-api-access-v4glw\") pod \"multus-gr76d\" (UID: \"ffd2cee3-1bae-4941-8015-2b3ade383d85\") " pod="openshift-multus/multus-gr76d"
Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.093998 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqpfc\" (UniqueName: \"kubernetes.io/projected/ecd261a9-4d88-4e3d-aa47-803a685b6569-kube-api-access-jqpfc\") pod \"node-ca-wdtzq\" (UID: \"ecd261a9-4d88-4e3d-aa47-803a685b6569\") " pod="openshift-image-registry/node-ca-wdtzq"
Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.095048 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-74mgq\" (UniqueName: \"kubernetes.io/projected/507b846f-eb8a-4ca3-9d5f-e4d9f18eca32-kube-api-access-74mgq\") pod \"node-resolver-pgc9j\" (UID: \"507b846f-eb8a-4ca3-9d5f-e4d9f18eca32\") " pod="openshift-dns/node-resolver-pgc9j"
Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.099835 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdk85\" (UniqueName: \"kubernetes.io/projected/75f4efab-251e-4aa5-97d6-4a2a27025ae1-kube-api-access-vdk85\") pod \"node-exporter-hw8fk\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") " pod="openshift-monitoring/node-exporter-hw8fk"
Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.200867 2112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-pgc9j"
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.201548565Z" level=info msg="Running pod sandbox: openshift-dns/node-resolver-pgc9j/POD" id=e0de3eee-f553-4bdd-81a9-c031e8c51a45 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.201849889Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.212901 2112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-wdtzq"
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.213567089Z" level=info msg="Running pod sandbox: openshift-image-registry/node-ca-wdtzq/POD" id=64e89843-5bd0-48dc-ab92-93bd2cf99047 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.213763613Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.222081 2112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl"
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.225355526Z" level=info msg="Running pod sandbox: openshift-ovn-kubernetes/ovnkube-node-qc5bl/POD" id=83c8fa69-7c1a-4f1e-936f-d51a2c2e6b69 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.225390413Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.234794 2112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hc5d"
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.235068104Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-5hc5d/POD" id=29fa4548-fcb2-4fa5-a662-55744709066a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.235185768Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.246004456Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=e0de3eee-f553-4bdd-81a9-c031e8c51a45 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.248233219Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=83c8fa69-7c1a-4f1e-936f-d51a2c2e6b69 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.250440 2112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-p9nj2"
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.250687681Z" level=info msg="Running pod sandbox: openshift-multus/multus-additional-cni-plugins-p9nj2/POD" id=85208ac7-e37a-4d3d-a791-cba73d9c9325 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.250718735Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.264056 2112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-bjpgx"
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.264323028Z" level=info msg="Running pod sandbox: openshift-cluster-node-tuning-operator/tuned-bjpgx/POD" id=20b4593d-8269-42d1-be08-4488d1b50a62 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.264360167Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.271773 2112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-p47qk"
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.272054945Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-p47qk/POD" id=05f160f8-9003-4ed9-8cdd-3b38e3c1e346 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.272091061Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.280213 2112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4"
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.280453803Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4/POD" id=d574f61c-c945-47c9-a8a3-be2c8a1fc96a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.280493917Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.289987 2112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-hw8fk"
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.290220204Z" level=info msg="Running pod sandbox: openshift-monitoring/node-exporter-hw8fk/POD" id=f92c40f9-70d7-4c37-bfca-147ebd0a8b02 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.290276347Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: W0223 16:32:47.291868 2112 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod507b846f_eb8a_4ca3_9d5f_e4d9f18eca32.slice/crio-e312341dbb64adc897b934a3c1de671a584600ca6cafe84039623c05362bfd68.scope WatchSource:0}: Error finding container e312341dbb64adc897b934a3c1de671a584600ca6cafe84039623c05362bfd68: Status 404 returned error can't find the container with id e312341dbb64adc897b934a3c1de671a584600ca6cafe84039623c05362bfd68
Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: W0223 16:32:47.293821 2112 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod409b8d00_553f_43cb_8805_64a5931be933.slice/crio-15c8174dfae0ee71062752687c2084097268b2d58484a203c43cd1452f6b8586.scope WatchSource:0}: Error finding container 15c8174dfae0ee71062752687c2084097268b2d58484a203c43cd1452f6b8586: Status 404 returned error can't find the container with id 15c8174dfae0ee71062752687c2084097268b2d58484a203c43cd1452f6b8586
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.295596951Z" level=info msg="Ran pod sandbox e312341dbb64adc897b934a3c1de671a584600ca6cafe84039623c05362bfd68 with infra container: openshift-dns/node-resolver-pgc9j/POD" id=e0de3eee-f553-4bdd-81a9-c031e8c51a45 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.295663392Z" level=info msg="Ran pod sandbox 15c8174dfae0ee71062752687c2084097268b2d58484a203c43cd1452f6b8586 with infra container: openshift-ovn-kubernetes/ovnkube-node-qc5bl/POD" id=83c8fa69-7c1a-4f1e-936f-d51a2c2e6b69 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.297282 2112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-h4ftg"
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.297315945Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d9feb297a0007232cd124c9d0c94360b6ad35b81350b2b55469efc14fda48c72" id=3ce4f34f-210e-439a-93ac-04232eb6c4f9 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.297318963Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:22ae12dfd7dcfb1b7a929bd4eba3405464b9e8439f62719569c136b6f3388521" id=dbbdb933-30af-4ed8-9830-f892299185f3 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.297540251Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-h4ftg/POD" id=65c4060c-5105-4dff-a1cb-3be5138805f4 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.297569304Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.299102109Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=64e89843-5bd0-48dc-ab92-93bd2cf99047 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.299239111Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:51a3c087e00a8d3916cceaab8f2064078ba13c2bdd41a167107c7318b2bff862,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d9feb297a0007232cd124c9d0c94360b6ad35b81350b2b55469efc14fda48c72],Size_:480914545,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=3ce4f34f-210e-439a-93ac-04232eb6c4f9 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.299130698Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9b22b7f24f1449861f254aea709cfcb21aecd8231d265d09aee8f99af215aa53,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:22ae12dfd7dcfb1b7a929bd4eba3405464b9e8439f62719569c136b6f3388521],Size_:1123099489,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=dbbdb933-30af-4ed8-9830-f892299185f3 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.300002572Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d9feb297a0007232cd124c9d0c94360b6ad35b81350b2b55469efc14fda48c72" id=92a695a2-3b28-4206-90af-dfc2df05049a name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.300166145Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:51a3c087e00a8d3916cceaab8f2064078ba13c2bdd41a167107c7318b2bff862,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d9feb297a0007232cd124c9d0c94360b6ad35b81350b2b55469efc14fda48c72],Size_:480914545,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=92a695a2-3b28-4206-90af-dfc2df05049a name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.301269041Z" level=info msg="Creating container: openshift-dns/node-resolver-pgc9j/dns-node-resolver" id=659ad71e-4e0e-4db3-8e36-1d53d73dacbe name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.301343875Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.301607488Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:22ae12dfd7dcfb1b7a929bd4eba3405464b9e8439f62719569c136b6f3388521" id=e0def034-b3cc-4190-a5bc-aaa3a63b4a90 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.301779903Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9b22b7f24f1449861f254aea709cfcb21aecd8231d265d09aee8f99af215aa53,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:22ae12dfd7dcfb1b7a929bd4eba3405464b9e8439f62719569c136b6f3388521],Size_:1123099489,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=e0def034-b3cc-4190-a5bc-aaa3a63b4a90 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.302393131Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-node-qc5bl/ovn-controller" id=673fc1b0-e530-4e29-8c6a-ac79024db355 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.302571654Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.305653 2112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-gr76d"
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.306059560Z" level=info msg="Running pod sandbox: openshift-multus/multus-gr76d/POD" id=f7d45e91-4cad-42ec-9dd2-856fd193d724 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.306099040Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.314942 2112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-d5wlc"
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.315160190Z" level=info msg="Running pod sandbox: openshift-machine-config-operator/machine-config-daemon-d5wlc/POD" id=c5fa858d-f194-4e3a-a2b1-96bb979fe45f name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.315203209Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: W0223 16:32:47.324951 2112 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podecd261a9_4d88_4e3d_aa47_803a685b6569.slice/crio-ea37c47cff30008daba4dae0a9953293135bfac80990a6502b845d6c4a82eddc.scope WatchSource:0}: Error finding container ea37c47cff30008daba4dae0a9953293135bfac80990a6502b845d6c4a82eddc: Status 404 returned error can't find the container with id ea37c47cff30008daba4dae0a9953293135bfac80990a6502b845d6c4a82eddc
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.326610916Z" level=info msg="Ran pod sandbox ea37c47cff30008daba4dae0a9953293135bfac80990a6502b845d6c4a82eddc with infra container: openshift-image-registry/node-ca-wdtzq/POD" id=64e89843-5bd0-48dc-ab92-93bd2cf99047 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.327564423Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:462953440366660026537c18defeaf1f0e85dde5d1231aa35ab26e7e996959ae" id=797beb38-101d-4437-9fd2-a9a00afe5d33 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.337590885Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:b8111819f25b8194478d55593ca125a634ee92d9d5e61866f09e80f1b59af18b,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:462953440366660026537c18defeaf1f0e85dde5d1231aa35ab26e7e996959ae],Size_:428240621,Uid:&Int64Value{Value:1001,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=797beb38-101d-4437-9fd2-a9a00afe5d33 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.338138448Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:462953440366660026537c18defeaf1f0e85dde5d1231aa35ab26e7e996959ae" id=cc76b3d7-ac52-4518-b014-e9379a116301 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.349521928Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:b8111819f25b8194478d55593ca125a634ee92d9d5e61866f09e80f1b59af18b,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:462953440366660026537c18defeaf1f0e85dde5d1231aa35ab26e7e996959ae],Size_:428240621,Uid:&Int64Value{Value:1001,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=cc76b3d7-ac52-4518-b014-e9379a116301 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:47.350116 2112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-b2mxx"
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.350192016Z" level=info msg="Creating container: openshift-image-registry/node-ca-wdtzq/node-ca" id=53e1661b-6024-444b-9f9f-4aa79fdb6d38 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.350279325Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.350359879Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-b2mxx/POD" id=e4337cf1-d02f-4fbb-9df4-d305063238c9 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.350395697Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.359058511Z" level=info msg="Got pod network &{Name:network-metrics-daemon-5hc5d Namespace:openshift-multus ID:5fbd19a020a56df6e9b45ea33c752205e9859f69f19803347be97dbf88e8a4f4 UID:9cd26ba5-46e4-40b5-81e6-74079153d58d NetNS:/var/run/netns/199a84df-cf71-443f-81b3-b9ff5e18be9d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.359083178Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-5hc5d to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.437002628Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=20b4593d-8269-42d1-be08-4488d1b50a62 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.440420282Z" level=info msg="Got pod network &{Name:ingress-canary-p47qk Namespace:openshift-ingress-canary ID:4072e0d3663d1947c88bd362e51de1a466d380546ca12a245e113666b5f4f1b4 UID:a704838c-aeb5-4709-b91c-2460423203a4 NetNS:/var/run/netns/7c821687-d150-4ab8-9614-66d1ccd9f281 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.440448704Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-p47qk to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: W0223 16:32:47.466704 2112 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07267a40_e316_4a88_91a5_11bc06672f23.slice/crio-528a7b04bf0d754b13168dfd2c703b64d50739ec96809ffbed568e329bf1df05.scope WatchSource:0}: Error finding container 528a7b04bf0d754b13168dfd2c703b64d50739ec96809ffbed568e329bf1df05: Status 404 returned error can't find the container with id 528a7b04bf0d754b13168dfd2c703b64d50739ec96809ffbed568e329bf1df05
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.469351739Z" level=info msg="Ran pod sandbox 528a7b04bf0d754b13168dfd2c703b64d50739ec96809ffbed568e329bf1df05 with infra container: openshift-cluster-node-tuning-operator/tuned-bjpgx/POD" id=20b4593d-8269-42d1-be08-4488d1b50a62 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.473130329Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d69f987d8bd10c16f608e7b0f3e6bb52d5d62147af51f716a3d982c7eae0b83f" id=28a742d8-3852-49cf-aab4-c48b0443381f name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.473447767Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:da914cc3ef13e76e0445e95dcaf766ba4641f9f983cbc16823ff667af167973f,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d69f987d8bd10c16f608e7b0f3e6bb52d5d62147af51f716a3d982c7eae0b83f],Size_:602733635,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=28a742d8-3852-49cf-aab4-c48b0443381f name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:47 ip-10-0-136-68 systemd[1]: Started crio-conmon-893866b40cc3c17e71befbe72d3e1a19f46dadde86166f57cad08fdb7de61a6b.scope.
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.494500365Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d69f987d8bd10c16f608e7b0f3e6bb52d5d62147af51f716a3d982c7eae0b83f" id=a3c9f053-dcdc-4dcf-8684-f79ed84e62e7 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.494913228Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:da914cc3ef13e76e0445e95dcaf766ba4641f9f983cbc16823ff667af167973f,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d69f987d8bd10c16f608e7b0f3e6bb52d5d62147af51f716a3d982c7eae0b83f],Size_:602733635,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=a3c9f053-dcdc-4dcf-8684-f79ed84e62e7 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.494925703Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=d574f61c-c945-47c9-a8a3-be2c8a1fc96a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.495784993Z" level=info msg="Creating container: openshift-cluster-node-tuning-operator/tuned-bjpgx/tuned" id=df1c6fd2-1205-4c63-b053-c56217a2ce16 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.495904030Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.499208111Z" level=info msg="Got pod network &{Name:network-check-target-b2mxx Namespace:openshift-network-diagnostics ID:5244fda541f5dc96d55eef69f3d234dc8c9aca31b74ca58c7fc4d50105b9467b UID:5acce570-9f3b-4dab-9fed-169a4c110f8c NetNS:/var/run/netns/755b1556-e233-443e-89fd-f6504a4e73db Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.499236826Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-b2mxx to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.502164227Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=f92c40f9-70d7-4c37-bfca-147ebd0a8b02 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.504301743Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=85208ac7-e37a-4d3d-a791-cba73d9c9325 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.506780007Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=f7d45e91-4cad-42ec-9dd2-856fd193d724 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.509420811Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=c5fa858d-f194-4e3a-a2b1-96bb979fe45f name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: W0223 16:32:47.511858 2112 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d75c369_887c_42d2_94c1_40cd36f882c3.slice/crio-7c2a5826b36e46e65e450ff07d3bed04375fc5b7b42476ff07959d461d58ab16.scope WatchSource:0}: Error finding container 7c2a5826b36e46e65e450ff07d3bed04375fc5b7b42476ff07959d461d58ab16: Status 404 returned error can't find the container with id 7c2a5826b36e46e65e450ff07d3bed04375fc5b7b42476ff07959d461d58ab16
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.513306357Z" level=info msg="Ran pod sandbox cf61acb2b8c00d20091b3cda5994fcea67d4dba436161f7e798fd749cbb686f6 with infra container: openshift-multus/multus-gr76d/POD" id=f7d45e91-4cad-42ec-9dd2-856fd193d724 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.514104757Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af74b3b3f154a6e6124da5499d2d5b52c0d3cfd8f092df598e1d9ca1ada07b94" id=ae9bd771-01bd-4d5a-8d11-17ddc8860639 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.520072186Z" level=info msg="Ran pod sandbox 7c2a5826b36e46e65e450ff07d3bed04375fc5b7b42476ff07959d461d58ab16 with infra container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4/POD" id=d574f61c-c945-47c9-a8a3-be2c8a1fc96a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.520767317Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f8c316907a417429b45d4dbd9871b7f65b3629737ac88305860b53d964cc1211" id=a05d6a05-e110-475c-b61f-e71bf2ec4310 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.522028351Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:3040fba25f1de00fc7180165bb6fe53ee7a27a50b0d5da5af3a7e0d26700e224,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af74b3b3f154a6e6124da5499d2d5b52c0d3cfd8f092df598e1d9ca1ada07b94],Size_:487631698,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=ae9bd771-01bd-4d5a-8d11-17ddc8860639 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.522172567Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:b5b4a5c846650de70c23db1f0578a6656eada15483b87f39bace9bab24bf86dd,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f8c316907a417429b45d4dbd9871b7f65b3629737ac88305860b53d964cc1211],Size_:433653219,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=a05d6a05-e110-475c-b61f-e71bf2ec4310 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.522835941Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af74b3b3f154a6e6124da5499d2d5b52c0d3cfd8f092df598e1d9ca1ada07b94" id=61af7718-e5cf-4863-ae97-1bec33ca75e3 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.522982579Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:3040fba25f1de00fc7180165bb6fe53ee7a27a50b0d5da5af3a7e0d26700e224,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af74b3b3f154a6e6124da5499d2d5b52c0d3cfd8f092df598e1d9ca1ada07b94],Size_:487631698,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=61af7718-e5cf-4863-ae97-1bec33ca75e3 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.523170515Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f8c316907a417429b45d4dbd9871b7f65b3629737ac88305860b53d964cc1211" id=b9f3d7ce-3619-456f-8d8f-ccd060b5b1a0 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.523347556Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:b5b4a5c846650de70c23db1f0578a6656eada15483b87f39bace9bab24bf86dd,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f8c316907a417429b45d4dbd9871b7f65b3629737ac88305860b53d964cc1211],Size_:433653219,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=b9f3d7ce-3619-456f-8d8f-ccd060b5b1a0 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.524046902Z" level=info msg="Creating container: openshift-multus/multus-gr76d/kube-multus" id=249cb0a2-d074-4757-8f1a-6d2da180a022 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.524148287Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.524184861Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4/csi-driver" id=a543b596-e68a-4313-8f77-adbad496d7a7 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.524259242Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: W0223 16:32:47.524318 2112 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c47bc3e_0247_4d47_80e3_c168262e7976.slice/crio-9b5e394fefb22e13e7f5e1168da0a4549b32227605538c6a083eba02c051f962.scope WatchSource:0}: Error finding container 9b5e394fefb22e13e7f5e1168da0a4549b32227605538c6a083eba02c051f962: Status 404 returned error can't find the container with id 9b5e394fefb22e13e7f5e1168da0a4549b32227605538c6a083eba02c051f962
Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: W0223 16:32:47.527519 2112 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75f4efab_251e_4aa5_97d6_4a2a27025ae1.slice/crio-fbdafac49bb2dff4d3e8095f9b5b04869d1bd15bc4f7fdc96a96e2c9d058d97f.scope WatchSource:0}: Error finding container fbdafac49bb2dff4d3e8095f9b5b04869d1bd15bc4f7fdc96a96e2c9d058d97f: Status 404 returned error can't find the container with id fbdafac49bb2dff4d3e8095f9b5b04869d1bd15bc4f7fdc96a96e2c9d058d97f
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.528241495Z" level=info msg="Ran pod sandbox 9b5e394fefb22e13e7f5e1168da0a4549b32227605538c6a083eba02c051f962 with infra container: openshift-multus/multus-additional-cni-plugins-p9nj2/POD" id=85208ac7-e37a-4d3d-a791-cba73d9c9325 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.528919133Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c09f9b6acade83cd0cf46f184be48f3fe4a7aa4a4e1b3030d71f36ae6845677" id=9553e86d-1a84-4ac2-8c32-348443f53ae7 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.529162854Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9ba712ec683a435fa3ef8304fb00385fae95fbc045a82b8d2a9dc39ecd09e344,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c09f9b6acade83cd0cf46f184be48f3fe4a7aa4a4e1b3030d71f36ae6845677],Size_:438806970,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=9553e86d-1a84-4ac2-8c32-348443f53ae7 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.529881883Z" level=info msg="Ran pod sandbox fbdafac49bb2dff4d3e8095f9b5b04869d1bd15bc4f7fdc96a96e2c9d058d97f with infra container: openshift-monitoring/node-exporter-hw8fk/POD" id=f92c40f9-70d7-4c37-bfca-147ebd0a8b02 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 16:32:47 ip-10-0-136-68 kubenswrapper[2112]: W0223 16:32:47.530512 2112 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb97e7fe5_fe52_4769_bb52_fc233e05c05e.slice/crio-948cce2c73b0d657a2a521a1c008eb170053d4110ed7e8bd0838b09026647809.scope WatchSource:0}: Error finding container 948cce2c73b0d657a2a521a1c008eb170053d4110ed7e8bd0838b09026647809: Status 404 returned error can't find the container with id 948cce2c73b0d657a2a521a1c008eb170053d4110ed7e8bd0838b09026647809
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.531057353Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:470c222e4c921794e900cd92995b11e0ac46448568333bd61346580285d28613" id=bb36db09-455f-4494-bd57-150ea82214ff name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.531219829Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:05b7e8a1fbf3debab1b6ffc89b3540da9556cf7f25a65af04bd4766ad373fac6,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:470c222e4c921794e900cd92995b11e0ac46448568333bd61346580285d28613],Size_:332676464,Uid:nil,Username:nobody,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=bb36db09-455f-4494-bd57-150ea82214ff name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.531292782Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c09f9b6acade83cd0cf46f184be48f3fe4a7aa4a4e1b3030d71f36ae6845677" id=5f544602-a9db-4410-a6c5-d744bf5c5d2d name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.531427723Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9ba712ec683a435fa3ef8304fb00385fae95fbc045a82b8d2a9dc39ecd09e344,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c09f9b6acade83cd0cf46f184be48f3fe4a7aa4a4e1b3030d71f36ae6845677],Size_:438806970,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=5f544602-a9db-4410-a6c5-d744bf5c5d2d name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.532765209Z" level=info msg="Ran pod sandbox 948cce2c73b0d657a2a521a1c008eb170053d4110ed7e8bd0838b09026647809 with infra container: openshift-machine-config-operator/machine-config-daemon-d5wlc/POD" id=c5fa858d-f194-4e3a-a2b1-96bb979fe45f name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.532921234Z" level=info msg="Creating container: openshift-multus/multus-additional-cni-plugins-p9nj2/egress-router-binary-copy" id=be94a14e-0d49-498a-b1cf-d5b8ebbbf219 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.533019403Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.533242890Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33b9e1b6e5c77f3e083119aa70ed79556540eb896e3b1f4f07792f213e06286a" id=f822279d-ca46-40b7-bd35-bf08b97da44e name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.533478288Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:470c222e4c921794e900cd92995b11e0ac46448568333bd61346580285d28613" id=e084e527-3b3c-47ac-b84b-9f6b35e5f390 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.542308054Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:b6b4f5d89be886f7fe1b314e271801bcae46a3912b44c41a3565ca13b6db4e66,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33b9e1b6e5c77f3e083119aa70ed79556540eb896e3b1f4f07792f213e06286a],Size_:537394443,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=f822279d-ca46-40b7-bd35-bf08b97da44e name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.542687245Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:05b7e8a1fbf3debab1b6ffc89b3540da9556cf7f25a65af04bd4766ad373fac6,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:470c222e4c921794e900cd92995b11e0ac46448568333bd61346580285d28613],Size_:332676464,Uid:nil,Username:nobody,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=e084e527-3b3c-47ac-b84b-9f6b35e5f390 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.543339179Z" level=info msg="Checking image status: 
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33b9e1b6e5c77f3e083119aa70ed79556540eb896e3b1f4f07792f213e06286a" id=7c3d24d4-1588-479e-8ff2-d61431992b4f name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.543410818Z" level=info msg="Creating container: openshift-monitoring/node-exporter-hw8fk/init-textfile" id=7766ef6a-56e7-4ff5-8bf4-2c20174f312f name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.543490821Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.543500399Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:b6b4f5d89be886f7fe1b314e271801bcae46a3912b44c41a3565ca13b6db4e66,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33b9e1b6e5c77f3e083119aa70ed79556540eb896e3b1f4f07792f213e06286a],Size_:537394443,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=7c3d24d4-1588-479e-8ff2-d61431992b4f name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.544199466Z" level=info msg="Creating container: openshift-machine-config-operator/machine-config-daemon-d5wlc/machine-config-daemon" id=8e45db9b-1ae4-4050-b995-e68b6d566f96 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.544286988Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 16:32:47 ip-10-0-136-68 systemd[1]: Started crio-conmon-d25ff7e177b079f5368f13976bda6b318b800426c4ec53a4783b0f0a406d5bb6.scope. Feb 23 16:32:47 ip-10-0-136-68 systemd[1]: Started libcontainer container 893866b40cc3c17e71befbe72d3e1a19f46dadde86166f57cad08fdb7de61a6b. 
Feb 23 16:32:47 ip-10-0-136-68 kernel: cgroup: cgroup: disabling cgroup2 socket matching due to net_prio or net_cls activation
Feb 23 16:32:47 ip-10-0-136-68 systemd[1]: Started libcontainer container d25ff7e177b079f5368f13976bda6b318b800426c4ec53a4783b0f0a406d5bb6.
Feb 23 16:32:47 ip-10-0-136-68 systemd[1]: Started crio-conmon-9a2340ebf2c2cbf322c44158beaccb45f3150a0f8d09561dce49e94b6b7d2677.scope.
Feb 23 16:32:47 ip-10-0-136-68 systemd[1]: Started crio-conmon-66d294f1bd164afef67a512f0c741a2a0fdca6f95539ae28a8abb059b9814065.scope.
Feb 23 16:32:47 ip-10-0-136-68 systemd[1]: Started libcontainer container 9a2340ebf2c2cbf322c44158beaccb45f3150a0f8d09561dce49e94b6b7d2677.
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.676218067Z" level=info msg="Created container d25ff7e177b079f5368f13976bda6b318b800426c4ec53a4783b0f0a406d5bb6: openshift-cluster-node-tuning-operator/tuned-bjpgx/tuned" id=df1c6fd2-1205-4c63-b053-c56217a2ce16 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.677006038Z" level=info msg="Starting container: d25ff7e177b079f5368f13976bda6b318b800426c4ec53a4783b0f0a406d5bb6" id=551dcc2d-12c5-4f89-9ea7-941f4bfa9585 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 16:32:47 ip-10-0-136-68 systemd[1]: Started libcontainer container 66d294f1bd164afef67a512f0c741a2a0fdca6f95539ae28a8abb059b9814065.
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.684400537Z" level=info msg="Created container 893866b40cc3c17e71befbe72d3e1a19f46dadde86166f57cad08fdb7de61a6b: openshift-ovn-kubernetes/ovnkube-node-qc5bl/ovn-controller" id=673fc1b0-e530-4e29-8c6a-ac79024db355 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.685643881Z" level=info msg="Starting container: 893866b40cc3c17e71befbe72d3e1a19f46dadde86166f57cad08fdb7de61a6b" id=b1c7402b-6717-40a3-b577-444a9c740812 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.714058071Z" level=info msg="Started container" PID=2301 containerID=893866b40cc3c17e71befbe72d3e1a19f46dadde86166f57cad08fdb7de61a6b description=openshift-ovn-kubernetes/ovnkube-node-qc5bl/ovn-controller id=b1c7402b-6717-40a3-b577-444a9c740812 name=/runtime.v1.RuntimeService/StartContainer sandboxID=15c8174dfae0ee71062752687c2084097268b2d58484a203c43cd1452f6b8586
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.714750299Z" level=info msg="Started container" PID=2307 containerID=d25ff7e177b079f5368f13976bda6b318b800426c4ec53a4783b0f0a406d5bb6 description=openshift-cluster-node-tuning-operator/tuned-bjpgx/tuned id=551dcc2d-12c5-4f89-9ea7-941f4bfa9585 name=/runtime.v1.RuntimeService/StartContainer sandboxID=528a7b04bf0d754b13168dfd2c703b64d50739ec96809ffbed568e329bf1df05
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.727848879Z" level=info msg="Created container 9a2340ebf2c2cbf322c44158beaccb45f3150a0f8d09561dce49e94b6b7d2677: openshift-monitoring/node-exporter-hw8fk/init-textfile" id=7766ef6a-56e7-4ff5-8bf4-2c20174f312f name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.728521749Z" level=info msg="Starting container: 9a2340ebf2c2cbf322c44158beaccb45f3150a0f8d09561dce49e94b6b7d2677" id=ae78aaa0-9193-4610-b78b-cac2e98c4b60 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.730135378Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:22ae12dfd7dcfb1b7a929bd4eba3405464b9e8439f62719569c136b6f3388521" id=7926b1bf-3085-4601-bce1-25898587062e name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.730449937Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9b22b7f24f1449861f254aea709cfcb21aecd8231d265d09aee8f99af215aa53,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:22ae12dfd7dcfb1b7a929bd4eba3405464b9e8439f62719569c136b6f3388521],Size_:1123099489,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=7926b1bf-3085-4601-bce1-25898587062e name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.731441944Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:22ae12dfd7dcfb1b7a929bd4eba3405464b9e8439f62719569c136b6f3388521" id=e0e110ee-5291-40d5-b2e5-536b545100cc name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.731692284Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9b22b7f24f1449861f254aea709cfcb21aecd8231d265d09aee8f99af215aa53,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:22ae12dfd7dcfb1b7a929bd4eba3405464b9e8439f62719569c136b6f3388521],Size_:1123099489,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=e0e110ee-5291-40d5-b2e5-536b545100cc name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.732538658Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-node-qc5bl/ovn-acl-logging" id=9d2caad1-c7a1-45ee-80fb-036d52a8b3c0 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.732644692Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.740829577Z" level=info msg="Created container 66d294f1bd164afef67a512f0c741a2a0fdca6f95539ae28a8abb059b9814065: openshift-machine-config-operator/machine-config-daemon-d5wlc/machine-config-daemon" id=8e45db9b-1ae4-4050-b995-e68b6d566f96 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.741239694Z" level=info msg="Starting container: 66d294f1bd164afef67a512f0c741a2a0fdca6f95539ae28a8abb059b9814065" id=a829e30f-0010-43af-9ce0-441a568561b0 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.743146673Z" level=info msg="Started container" PID=2336 containerID=9a2340ebf2c2cbf322c44158beaccb45f3150a0f8d09561dce49e94b6b7d2677 description=openshift-monitoring/node-exporter-hw8fk/init-textfile id=ae78aaa0-9193-4610-b78b-cac2e98c4b60 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fbdafac49bb2dff4d3e8095f9b5b04869d1bd15bc4f7fdc96a96e2c9d058d97f
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.757174617Z" level=info msg="Started container" PID=2377 containerID=66d294f1bd164afef67a512f0c741a2a0fdca6f95539ae28a8abb059b9814065 description=openshift-machine-config-operator/machine-config-daemon-d5wlc/machine-config-daemon id=a829e30f-0010-43af-9ce0-441a568561b0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=948cce2c73b0d657a2a521a1c008eb170053d4110ed7e8bd0838b09026647809
Feb 23 16:32:47 ip-10-0-136-68 systemd[1]: Started crio-conmon-99bba49d493087c09db57c6a3fb9a2977323d726bf65ce6fa2eaaa146def64be.scope.
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.771847875Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce1de860b639d4831bb74b0271aed6882b45febb37f60bcf8a31157b87ce0495" id=792f1dca-68f0-4127-92d5-9787f24d327a name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.772151755Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:d62409a7cb62709d4c960d445d571fc81522092e7114db20e1ad84bf57c96f31,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce1de860b639d4831bb74b0271aed6882b45febb37f60bcf8a31157b87ce0495],Size_:353087996,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=792f1dca-68f0-4127-92d5-9787f24d327a name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.773778278Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce1de860b639d4831bb74b0271aed6882b45febb37f60bcf8a31157b87ce0495" id=9634bce8-8500-4efb-a27d-bc3ba23eace3 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.773945559Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:d62409a7cb62709d4c960d445d571fc81522092e7114db20e1ad84bf57c96f31,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce1de860b639d4831bb74b0271aed6882b45febb37f60bcf8a31157b87ce0495],Size_:353087996,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=9634bce8-8500-4efb-a27d-bc3ba23eace3 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.775101661Z" level=info msg="Creating container: openshift-machine-config-operator/machine-config-daemon-d5wlc/oauth-proxy" id=6537999a-2a38-4196-8072-1b8cd6d1f6cd name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.775206430Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 16:32:47 ip-10-0-136-68 systemd[1]: Started libcontainer container 99bba49d493087c09db57c6a3fb9a2977323d726bf65ce6fa2eaaa146def64be.
Feb 23 16:32:47 ip-10-0-136-68 systemd[1]: Started crio-conmon-b0d136d5989001adcb824f34cd619701913fd6b05efeeab47452aa99551fd887.scope.
Feb 23 16:32:47 ip-10-0-136-68 systemd[1]: Started libcontainer container b0d136d5989001adcb824f34cd619701913fd6b05efeeab47452aa99551fd887.
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.856843562Z" level=info msg="Created container 99bba49d493087c09db57c6a3fb9a2977323d726bf65ce6fa2eaaa146def64be: openshift-ovn-kubernetes/ovnkube-node-qc5bl/ovn-acl-logging" id=9d2caad1-c7a1-45ee-80fb-036d52a8b3c0 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.857737507Z" level=info msg="Starting container: 99bba49d493087c09db57c6a3fb9a2977323d726bf65ce6fa2eaaa146def64be" id=4be8cffe-7d3f-4518-a848-b9c05bcc6a0d name=/runtime.v1.RuntimeService/StartContainer
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.870945055Z" level=info msg="Started container" PID=2450 containerID=99bba49d493087c09db57c6a3fb9a2977323d726bf65ce6fa2eaaa146def64be description=openshift-ovn-kubernetes/ovnkube-node-qc5bl/ovn-acl-logging id=4be8cffe-7d3f-4518-a848-b9c05bcc6a0d name=/runtime.v1.RuntimeService/StartContainer sandboxID=15c8174dfae0ee71062752687c2084097268b2d58484a203c43cd1452f6b8586
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.901095765Z" level=info msg="Created container b0d136d5989001adcb824f34cd619701913fd6b05efeeab47452aa99551fd887: openshift-machine-config-operator/machine-config-daemon-d5wlc/oauth-proxy" id=6537999a-2a38-4196-8072-1b8cd6d1f6cd name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.901346695Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=5a373d10-7c7a-4335-81ec-4ecb9e697cad name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.901610215Z" level=info msg="Starting container: b0d136d5989001adcb824f34cd619701913fd6b05efeeab47452aa99551fd887" id=be79a19c-c53c-4650-a5be-4f0182c8c348 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.901740693Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=5a373d10-7c7a-4335-81ec-4ecb9e697cad name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.902568433Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=01f352cb-b294-4dc1-ad49-6725ffb6ee29 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.902902073Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=01f352cb-b294-4dc1-ad49-6725ffb6ee29 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.903970845Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-node-qc5bl/kube-rbac-proxy" id=457314c8-8908-4202-b24e-427a65bdacf4 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.904132823Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 16:32:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:47.925002106Z" level=info msg="Started container" PID=2488 containerID=b0d136d5989001adcb824f34cd619701913fd6b05efeeab47452aa99551fd887 description=openshift-machine-config-operator/machine-config-daemon-d5wlc/oauth-proxy id=be79a19c-c53c-4650-a5be-4f0182c8c348 name=/runtime.v1.RuntimeService/StartContainer sandboxID=948cce2c73b0d657a2a521a1c008eb170053d4110ed7e8bd0838b09026647809
Feb 23 16:32:47 ip-10-0-136-68 systemd[1]: crio-9a2340ebf2c2cbf322c44158beaccb45f3150a0f8d09561dce49e94b6b7d2677.scope: Succeeded.
Feb 23 16:32:47 ip-10-0-136-68 systemd[1]: crio-9a2340ebf2c2cbf322c44158beaccb45f3150a0f8d09561dce49e94b6b7d2677.scope: Consumed 107ms CPU time
Feb 23 16:32:47 ip-10-0-136-68 systemd[1]: crio-conmon-9a2340ebf2c2cbf322c44158beaccb45f3150a0f8d09561dce49e94b6b7d2677.scope: Succeeded.
Feb 23 16:32:47 ip-10-0-136-68 systemd[1]: crio-conmon-9a2340ebf2c2cbf322c44158beaccb45f3150a0f8d09561dce49e94b6b7d2677.scope: Consumed 26ms CPU time
Feb 23 16:32:47 ip-10-0-136-68 systemd[1]: Started crio-conmon-db2e36c169d560d488d04a379443d5c86c4c536baf0cebdd3a36451256ca8542.scope.
Feb 23 16:32:47 ip-10-0-136-68 systemd[1]: Started libcontainer container db2e36c169d560d488d04a379443d5c86c4c536baf0cebdd3a36451256ca8542.
Feb 23 16:32:48 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00066|bridge|INFO|bridge br-ex: added interface patch-br-ex_ip-10-0-136-68.us-west-2.compute.internal-to-br-int on port 2
Feb 23 16:32:48 ip-10-0-136-68 NetworkManager[1147]: [1677169968.0329] manager: (patch-br-ex_ip-10-0-136-68.us-west-2.compute.internal-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/24)
Feb 23 16:32:48 ip-10-0-136-68 NetworkManager[1147]: [1677169968.0337] manager: (patch-br-ex_ip-10-0-136-68.us-west-2.compute.internal-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.102653034Z" level=info msg="Created container db2e36c169d560d488d04a379443d5c86c4c536baf0cebdd3a36451256ca8542: openshift-ovn-kubernetes/ovnkube-node-qc5bl/kube-rbac-proxy" id=457314c8-8908-4202-b24e-427a65bdacf4 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.105397712Z" level=info msg="Starting container: db2e36c169d560d488d04a379443d5c86c4c536baf0cebdd3a36451256ca8542" id=d2b16d5f-6630-4b3e-a4a4-fc891f4466aa name=/runtime.v1.RuntimeService/StartContainer
Feb 23 16:32:48 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:48.133138 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-wdtzq" event=&{ID:ecd261a9-4d88-4e3d-aa47-803a685b6569 Type:ContainerStarted Data:ea37c47cff30008daba4dae0a9953293135bfac80990a6502b845d6c4a82eddc}
Feb 23 16:32:48 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:48.133377 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4" event=&{ID:6d75c369-887c-42d2-94c1-40cd36f882c3 Type:ContainerStarted Data:7c2a5826b36e46e65e450ff07d3bed04375fc5b7b42476ff07959d461d58ab16}
Feb 23 16:32:48 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:48.138906 2112 generic.go:296] "Generic (PLEG): container finished" podID=75f4efab-251e-4aa5-97d6-4a2a27025ae1 containerID="9a2340ebf2c2cbf322c44158beaccb45f3150a0f8d09561dce49e94b6b7d2677" exitCode=0
Feb 23 16:32:48 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:48.138964 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-hw8fk" event=&{ID:75f4efab-251e-4aa5-97d6-4a2a27025ae1 Type:ContainerDied Data:9a2340ebf2c2cbf322c44158beaccb45f3150a0f8d09561dce49e94b6b7d2677}
Feb 23 16:32:48 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:48.138988 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-hw8fk" event=&{ID:75f4efab-251e-4aa5-97d6-4a2a27025ae1 Type:ContainerStarted Data:fbdafac49bb2dff4d3e8095f9b5b04869d1bd15bc4f7fdc96a96e2c9d058d97f}
Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.140034697Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:470c222e4c921794e900cd92995b11e0ac46448568333bd61346580285d28613" id=541cb2e0-66ef-464d-9f0f-a554e4cde06a name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.140364668Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:05b7e8a1fbf3debab1b6ffc89b3540da9556cf7f25a65af04bd4766ad373fac6,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:470c222e4c921794e900cd92995b11e0ac46448568333bd61346580285d28613],Size_:332676464,Uid:nil,Username:nobody,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=541cb2e0-66ef-464d-9f0f-a554e4cde06a name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.142048887Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:470c222e4c921794e900cd92995b11e0ac46448568333bd61346580285d28613" id=41d54209-024a-4055-ae38-5638e8ee1549 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:48 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:48.142436 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" event=&{ID:409b8d00-553f-43cb-8805-64a5931be933 Type:ContainerStarted Data:99bba49d493087c09db57c6a3fb9a2977323d726bf65ce6fa2eaaa146def64be}
Feb 23 16:32:48 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:48.142558 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" event=&{ID:409b8d00-553f-43cb-8805-64a5931be933 Type:ContainerStarted Data:893866b40cc3c17e71befbe72d3e1a19f46dadde86166f57cad08fdb7de61a6b}
Feb 23 16:32:48 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:48.142581 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" event=&{ID:409b8d00-553f-43cb-8805-64a5931be933 Type:ContainerStarted Data:15c8174dfae0ee71062752687c2084097268b2d58484a203c43cd1452f6b8586}
Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.142546564Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:05b7e8a1fbf3debab1b6ffc89b3540da9556cf7f25a65af04bd4766ad373fac6,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:470c222e4c921794e900cd92995b11e0ac46448568333bd61346580285d28613],Size_:332676464,Uid:nil,Username:nobody,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=41d54209-024a-4055-ae38-5638e8ee1549 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.144863464Z" level=info msg="Creating container: openshift-monitoring/node-exporter-hw8fk/node-exporter" id=004e11d4-a3cf-43a4-af25-9b22d5558727 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.146082362Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 16:32:48 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:48.153689 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-pgc9j" event=&{ID:507b846f-eb8a-4ca3-9d5f-e4d9f18eca32 Type:ContainerStarted Data:e312341dbb64adc897b934a3c1de671a584600ca6cafe84039623c05362bfd68}
Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.160970169Z" level=info msg="Started container" PID=2631 containerID=db2e36c169d560d488d04a379443d5c86c4c536baf0cebdd3a36451256ca8542 description=openshift-ovn-kubernetes/ovnkube-node-qc5bl/kube-rbac-proxy id=d2b16d5f-6630-4b3e-a4a4-fc891f4466aa name=/runtime.v1.RuntimeService/StartContainer sandboxID=15c8174dfae0ee71062752687c2084097268b2d58484a203c43cd1452f6b8586
Feb 23 16:32:48 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:48.163670 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p9nj2" event=&{ID:2c47bc3e-0247-4d47-80e3-c168262e7976 Type:ContainerStarted Data:9b5e394fefb22e13e7f5e1168da0a4549b32227605538c6a083eba02c051f962}
Feb 23 16:32:48 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:48.164540 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-bjpgx" event=&{ID:07267a40-e316-4a88-91a5-11bc06672f23 Type:ContainerStarted Data:d25ff7e177b079f5368f13976bda6b318b800426c4ec53a4783b0f0a406d5bb6}
Feb 23 16:32:48 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:48.164570 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-bjpgx" event=&{ID:07267a40-e316-4a88-91a5-11bc06672f23 Type:ContainerStarted Data:528a7b04bf0d754b13168dfd2c703b64d50739ec96809ffbed568e329bf1df05}
Feb 23 16:32:48 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:48.168626 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d5wlc" event=&{ID:b97e7fe5-fe52-4769-bb52-fc233e05c05e Type:ContainerStarted Data:b0d136d5989001adcb824f34cd619701913fd6b05efeeab47452aa99551fd887}
Feb 23 16:32:48 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:48.168662 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d5wlc" event=&{ID:b97e7fe5-fe52-4769-bb52-fc233e05c05e Type:ContainerStarted Data:66d294f1bd164afef67a512f0c741a2a0fdca6f95539ae28a8abb059b9814065}
Feb 23 16:32:48 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:48.168679 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d5wlc" event=&{ID:b97e7fe5-fe52-4769-bb52-fc233e05c05e Type:ContainerStarted Data:948cce2c73b0d657a2a521a1c008eb170053d4110ed7e8bd0838b09026647809}
Feb 23 16:32:48 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:48.174856 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gr76d" event=&{ID:ffd2cee3-1bae-4941-8015-2b3ade383d85 Type:ContainerStarted Data:cf61acb2b8c00d20091b3cda5994fcea67d4dba436161f7e798fd749cbb686f6}
Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.197343244Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=8f5b9538-89df-4e0d-867a-248251afed9d name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.197533818Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=8f5b9538-89df-4e0d-867a-248251afed9d name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.199022809Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=32715441-9beb-4138-94e7-64fa9f6911d6 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.199292001Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=32715441-9beb-4138-94e7-64fa9f6911d6 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.201572807Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-node-qc5bl/kube-rbac-proxy-ovn-metrics" id=76514496-4fc2-4cb1-ba17-72e2d5bacd47 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.201744292Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 16:32:48 ip-10-0-136-68 systemd[1]: Started crio-conmon-39c966c2b9901dfa66a71f09c318c0e2b4aac4dea64a53d5174a833226e9f264.scope.
Feb 23 16:32:48 ip-10-0-136-68 systemd[1]: Reloading.
Feb 23 16:32:48 ip-10-0-136-68 coreos-platform-chrony: /run/coreos-platform-chrony.conf already exists; skipping
Feb 23 16:32:48 ip-10-0-136-68 systemd[1]: /usr/lib/systemd/system/bootupd.service:22: Unknown lvalue 'ProtectHostname' in section 'Service'
Feb 23 16:32:48 ip-10-0-136-68 systemd[1]: Started crio-conmon-1275ee2f983feb3b9931f8c7b65f8c67088dab1463b0fff60c5e6879d2da7018.scope.
Feb 23 16:32:48 ip-10-0-136-68 systemd[1]: Started crio-conmon-24b38c855efe345ede926eabafa17cb35b1055503b9551a485b22cca590c5cc1.scope.
Feb 23 16:32:48 ip-10-0-136-68 systemd[1]: Started crio-conmon-4ad960e2f8c7b03ac82c2beac81caa9736f05de6b29400c654f753cd3b0f65f5.scope. Feb 23 16:32:48 ip-10-0-136-68 systemd[1]: Started libcontainer container 39c966c2b9901dfa66a71f09c318c0e2b4aac4dea64a53d5174a833226e9f264. Feb 23 16:32:48 ip-10-0-136-68 systemd[1]: Started crio-conmon-c3188b0a36fa50f052a0149071dc5c4b600fc54ba5d23bbc6cbcb7dd39c833a2.scope. Feb 23 16:32:48 ip-10-0-136-68 systemd[1]: Started crio-conmon-0872c6395518f918bc296d83c99450bc8bb562af7971e28d53bff64a2d598510.scope. Feb 23 16:32:48 ip-10-0-136-68 systemd[1]: Started crio-conmon-feb17ee26beeef149a61cde12c7ab2755e8ec793f9aadc49fcb709f950299ce4.scope. Feb 23 16:32:48 ip-10-0-136-68 systemd[1]: run-runc-24b38c855efe345ede926eabafa17cb35b1055503b9551a485b22cca590c5cc1-runc.xnbWzb.mount: Succeeded. Feb 23 16:32:48 ip-10-0-136-68 systemd[1]: Started libcontainer container 1275ee2f983feb3b9931f8c7b65f8c67088dab1463b0fff60c5e6879d2da7018. Feb 23 16:32:48 ip-10-0-136-68 systemd[1]: Started libcontainer container feb17ee26beeef149a61cde12c7ab2755e8ec793f9aadc49fcb709f950299ce4. Feb 23 16:32:48 ip-10-0-136-68 systemd[1]: Started libcontainer container c3188b0a36fa50f052a0149071dc5c4b600fc54ba5d23bbc6cbcb7dd39c833a2. Feb 23 16:32:48 ip-10-0-136-68 systemd[1]: Started libcontainer container 0872c6395518f918bc296d83c99450bc8bb562af7971e28d53bff64a2d598510. Feb 23 16:32:48 ip-10-0-136-68 systemd[1]: Started libcontainer container 4ad960e2f8c7b03ac82c2beac81caa9736f05de6b29400c654f753cd3b0f65f5. Feb 23 16:32:48 ip-10-0-136-68 systemd[1]: Started libcontainer container 24b38c855efe345ede926eabafa17cb35b1055503b9551a485b22cca590c5cc1. Feb 23 16:32:48 ip-10-0-136-68 systemd[1]: Starting rpm-ostree System Management Daemon... 
Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.614214064Z" level=info msg="Created container 39c966c2b9901dfa66a71f09c318c0e2b4aac4dea64a53d5174a833226e9f264: openshift-monitoring/node-exporter-hw8fk/node-exporter" id=004e11d4-a3cf-43a4-af25-9b22d5558727 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.615395136Z" level=info msg="Starting container: 39c966c2b9901dfa66a71f09c318c0e2b4aac4dea64a53d5174a833226e9f264" id=07bb8ab2-2a05-4788-8670-ba247a7ed684 name=/runtime.v1.RuntimeService/StartContainer Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.629805641Z" level=info msg="Started container" PID=2773 containerID=39c966c2b9901dfa66a71f09c318c0e2b4aac4dea64a53d5174a833226e9f264 description=openshift-monitoring/node-exporter-hw8fk/node-exporter id=07bb8ab2-2a05-4788-8670-ba247a7ed684 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fbdafac49bb2dff4d3e8095f9b5b04869d1bd15bc4f7fdc96a96e2c9d058d97f Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.650451114Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=4d896ebd-611f-487b-bf92-6b3973319356 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.650644399Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=4d896ebd-611f-487b-bf92-6b3973319356 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.651409513Z" level=info 
msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=236602ed-43fc-40d1-81d9-b1f5822d7258 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.651553404Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=236602ed-43fc-40d1-81d9-b1f5822d7258 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.652584873Z" level=info msg="Creating container: openshift-monitoring/node-exporter-hw8fk/kube-rbac-proxy" id=9bd41f73-7911-4c1c-9306-c18db6cc93bc name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.652723954Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 16:32:48 ip-10-0-136-68 rpm-ostree[2896]: Reading config file '/etc/rpm-ostreed.conf' Feb 23 16:32:48 ip-10-0-136-68 dbus-daemon[903]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.147' (uid=0 pid=2896 comm="/usr/bin/rpm-ostree start-daemon " label="system_u:system_r:install_t:s0") Feb 23 16:32:48 ip-10-0-136-68 systemd[1]: Starting Authorization Manager... Feb 23 16:32:48 ip-10-0-136-68 systemd[1]: Started crio-conmon-1b1fd2d7c980913cf34678762e75665a0298ae69ef49307892b1493411a99cb9.scope. 
Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.727786896Z" level=info msg="Created container c3188b0a36fa50f052a0149071dc5c4b600fc54ba5d23bbc6cbcb7dd39c833a2: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4/csi-driver" id=a543b596-e68a-4313-8f77-adbad496d7a7 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.728312285Z" level=info msg="Starting container: c3188b0a36fa50f052a0149071dc5c4b600fc54ba5d23bbc6cbcb7dd39c833a2" id=85d965a8-749b-4d8c-8673-d8101c5b40bc name=/runtime.v1.RuntimeService/StartContainer Feb 23 16:32:48 ip-10-0-136-68 systemd[1]: Started libcontainer container 1b1fd2d7c980913cf34678762e75665a0298ae69ef49307892b1493411a99cb9. Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.743072113Z" level=info msg="Created container 4ad960e2f8c7b03ac82c2beac81caa9736f05de6b29400c654f753cd3b0f65f5: openshift-multus/multus-gr76d/kube-multus" id=249cb0a2-d074-4757-8f1a-6d2da180a022 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.744814511Z" level=info msg="Starting container: 4ad960e2f8c7b03ac82c2beac81caa9736f05de6b29400c654f753cd3b0f65f5" id=00fa9d07-2181-4eba-ba22-0958127613e1 name=/runtime.v1.RuntimeService/StartContainer Feb 23 16:32:48 ip-10-0-136-68 polkitd[2931]: Started polkitd version 0.115 Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.750000011Z" level=info msg="Created container 1275ee2f983feb3b9931f8c7b65f8c67088dab1463b0fff60c5e6879d2da7018: openshift-dns/node-resolver-pgc9j/dns-node-resolver" id=659ad71e-4e0e-4db3-8e36-1d53d73dacbe name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.750916743Z" level=info msg="Starting container: 1275ee2f983feb3b9931f8c7b65f8c67088dab1463b0fff60c5e6879d2da7018" id=30c031cc-6a63-46c1-8e0e-f886848fabe0 
name=/runtime.v1.RuntimeService/StartContainer Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.762618158Z" level=info msg="Created container feb17ee26beeef149a61cde12c7ab2755e8ec793f9aadc49fcb709f950299ce4: openshift-image-registry/node-ca-wdtzq/node-ca" id=53e1661b-6024-444b-9f9f-4aa79fdb6d38 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.763088630Z" level=info msg="Starting container: feb17ee26beeef149a61cde12c7ab2755e8ec793f9aadc49fcb709f950299ce4" id=742f9207-20e9-47c0-b72f-a239d959b2d9 name=/runtime.v1.RuntimeService/StartContainer Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.769638741Z" level=info msg="Started container" PID=2890 containerID=4ad960e2f8c7b03ac82c2beac81caa9736f05de6b29400c654f753cd3b0f65f5 description=openshift-multus/multus-gr76d/kube-multus id=00fa9d07-2181-4eba-ba22-0958127613e1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=cf61acb2b8c00d20091b3cda5994fcea67d4dba436161f7e798fd749cbb686f6 Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.770742355Z" level=info msg="Created container 24b38c855efe345ede926eabafa17cb35b1055503b9551a485b22cca590c5cc1: openshift-ovn-kubernetes/ovnkube-node-qc5bl/kube-rbac-proxy-ovn-metrics" id=76514496-4fc2-4cb1-ba17-72e2d5bacd47 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.771083505Z" level=info msg="Starting container: 24b38c855efe345ede926eabafa17cb35b1055503b9551a485b22cca590c5cc1" id=b427b950-1ca0-4957-8ebb-6b4c4682d1b3 name=/runtime.v1.RuntimeService/StartContainer Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.773539159Z" level=info msg="Created container 0872c6395518f918bc296d83c99450bc8bb562af7971e28d53bff64a2d598510: openshift-multus/multus-additional-cni-plugins-p9nj2/egress-router-binary-copy" id=be94a14e-0d49-498a-b1cf-d5b8ebbbf219 
name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.774227236Z" level=info msg="Starting container: 0872c6395518f918bc296d83c99450bc8bb562af7971e28d53bff64a2d598510" id=36f243b3-4acf-4e90-ba9c-ad56c0b747fc name=/runtime.v1.RuntimeService/StartContainer Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.777210699Z" level=info msg="Started container" PID=2855 containerID=1275ee2f983feb3b9931f8c7b65f8c67088dab1463b0fff60c5e6879d2da7018 description=openshift-dns/node-resolver-pgc9j/dns-node-resolver id=30c031cc-6a63-46c1-8e0e-f886848fabe0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e312341dbb64adc897b934a3c1de671a584600ca6cafe84039623c05362bfd68 Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.778217741Z" level=info msg="Started container" PID=2865 containerID=c3188b0a36fa50f052a0149071dc5c4b600fc54ba5d23bbc6cbcb7dd39c833a2 description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4/csi-driver id=85d965a8-749b-4d8c-8673-d8101c5b40bc name=/runtime.v1.RuntimeService/StartContainer sandboxID=7c2a5826b36e46e65e450ff07d3bed04375fc5b7b42476ff07959d461d58ab16 Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.780654873Z" level=info msg="Started container" PID=2869 containerID=feb17ee26beeef149a61cde12c7ab2755e8ec793f9aadc49fcb709f950299ce4 description=openshift-image-registry/node-ca-wdtzq/node-ca id=742f9207-20e9-47c0-b72f-a239d959b2d9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ea37c47cff30008daba4dae0a9953293135bfac80990a6502b845d6c4a82eddc Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.783957152Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/upgrade_185c6343-a0f0-4d01-80e2-2b421c65537f\"" Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.796876997Z" level=info msg="Started container" PID=2878 
containerID=24b38c855efe345ede926eabafa17cb35b1055503b9551a485b22cca590c5cc1 description=openshift-ovn-kubernetes/ovnkube-node-qc5bl/kube-rbac-proxy-ovn-metrics id=b427b950-1ca0-4957-8ebb-6b4c4682d1b3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=15c8174dfae0ee71062752687c2084097268b2d58484a203c43cd1452f6b8586 Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.804076365Z" level=info msg="Started container" PID=2870 containerID=0872c6395518f918bc296d83c99450bc8bb562af7971e28d53bff64a2d598510 description=openshift-multus/multus-additional-cni-plugins-p9nj2/egress-router-binary-copy id=36f243b3-4acf-4e90-ba9c-ad56c0b747fc name=/runtime.v1.RuntimeService/StartContainer sandboxID=9b5e394fefb22e13e7f5e1168da0a4549b32227605538c6a083eba02c051f962 Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.810078062Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:211d1573308c92367f9e7e0e6ffb06257748dc81666b0629e8c210a172f13629" id=865445be-8d98-420b-8a30-f8ce139f6f69 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.810309614Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2eecc69f1e9928cfda977963566305773afcd02e4e8704a5b84734739604a8ea,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:211d1573308c92367f9e7e0e6ffb06257748dc81666b0629e8c210a172f13629],Size_:366234876,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=865445be-8d98-420b-8a30-f8ce139f6f69 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.816391774Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:211d1573308c92367f9e7e0e6ffb06257748dc81666b0629e8c210a172f13629" id=ef8e0af9-e373-46cb-b90e-c0355f243497 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: 
time="2023-02-23 16:32:48.816829090Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2eecc69f1e9928cfda977963566305773afcd02e4e8704a5b84734739604a8ea,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:211d1573308c92367f9e7e0e6ffb06257748dc81666b0629e8c210a172f13629],Size_:366234876,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=ef8e0af9-e373-46cb-b90e-c0355f243497 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.818608891Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4/csi-node-driver-registrar" id=7857f122-48bb-4227-8eee-83d2d274f65b name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.818786277Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.830257282Z" level=info msg="Created container 1b1fd2d7c980913cf34678762e75665a0298ae69ef49307892b1493411a99cb9: openshift-monitoring/node-exporter-hw8fk/kube-rbac-proxy" id=9bd41f73-7911-4c1c-9306-c18db6cc93bc name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.830740879Z" level=info msg="Starting container: 1b1fd2d7c980913cf34678762e75665a0298ae69ef49307892b1493411a99cb9" id=f139a9a7-18bf-43be-8f58-42b9c5bb6d5f name=/runtime.v1.RuntimeService/StartContainer Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.836301698Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:22ae12dfd7dcfb1b7a929bd4eba3405464b9e8439f62719569c136b6f3388521" id=afdd8417-6d0d-4f0a-8986-b389935a5618 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.838605161Z" level=info msg="Image status: 
&ImageStatusResponse{Image:&Image{Id:9b22b7f24f1449861f254aea709cfcb21aecd8231d265d09aee8f99af215aa53,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:22ae12dfd7dcfb1b7a929bd4eba3405464b9e8439f62719569c136b6f3388521],Size_:1123099489,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=afdd8417-6d0d-4f0a-8986-b389935a5618 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.838843074Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.838918902Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.838996132Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/upgrade_61127c30-7aea-4ae8-9ff5-23b20e631968\"" Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.839487286Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:22ae12dfd7dcfb1b7a929bd4eba3405464b9e8439f62719569c136b6f3388521" id=2ba1c1e1-ab50-46ed-a5fd-5019ff557dc5 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.839897231Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9b22b7f24f1449861f254aea709cfcb21aecd8231d265d09aee8f99af215aa53,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:22ae12dfd7dcfb1b7a929bd4eba3405464b9e8439f62719569c136b6f3388521],Size_:1123099489,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=2ba1c1e1-ab50-46ed-a5fd-5019ff557dc5 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.843431178Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-node-qc5bl/ovnkube-node" 
id=10b17e33-53b5-49bb-b909-79a2257ac407 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.843535229Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.856540744Z" level=info msg="Started container" PID=2969 containerID=1b1fd2d7c980913cf34678762e75665a0298ae69ef49307892b1493411a99cb9 description=openshift-monitoring/node-exporter-hw8fk/kube-rbac-proxy id=f139a9a7-18bf-43be-8f58-42b9c5bb6d5f name=/runtime.v1.RuntimeService/StartContainer sandboxID=fbdafac49bb2dff4d3e8095f9b5b04869d1bd15bc4f7fdc96a96e2c9d058d97f Feb 23 16:32:48 ip-10-0-136-68 polkitd[2931]: Loading rules from directory /etc/polkit-1/rules.d Feb 23 16:32:48 ip-10-0-136-68 polkitd[2931]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 23 16:32:48 ip-10-0-136-68 polkitd[2931]: Finished loading, compiling and executing 3 rules Feb 23 16:32:48 ip-10-0-136-68 dbus-daemon[903]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 23 16:32:48 ip-10-0-136-68 systemd[1]: Started Authorization Manager. Feb 23 16:32:48 ip-10-0-136-68 polkitd[2931]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.865977981Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.866001118Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:48 ip-10-0-136-68 systemd[1]: Started crio-conmon-97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923.scope. 
Feb 23 16:32:48 ip-10-0-136-68 rpm-ostree[2896]: In idle state; will auto-exit in 63 seconds Feb 23 16:32:48 ip-10-0-136-68 systemd[1]: Started libcontainer container 97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923. Feb 23 16:32:48 ip-10-0-136-68 systemd[1]: Started rpm-ostree System Management Daemon. Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.939781235Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/egress-router\"" Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.954215911Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.954499966Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.954529634Z" level=info msg="CNI monitoring event REMOVE \"/var/lib/cni/bin/upgrade_61127c30-7aea-4ae8-9ff5-23b20e631968\"" Feb 23 16:32:48 ip-10-0-136-68 systemd[1]: crio-conmon-0872c6395518f918bc296d83c99450bc8bb562af7971e28d53bff64a2d598510.scope: Succeeded. Feb 23 16:32:48 ip-10-0-136-68 systemd[1]: crio-conmon-0872c6395518f918bc296d83c99450bc8bb562af7971e28d53bff64a2d598510.scope: Consumed 26ms CPU time Feb 23 16:32:48 ip-10-0-136-68 systemd[1]: crio-0872c6395518f918bc296d83c99450bc8bb562af7971e28d53bff64a2d598510.scope: Succeeded. 
Feb 23 16:32:48 ip-10-0-136-68 systemd[1]: crio-0872c6395518f918bc296d83c99450bc8bb562af7971e28d53bff64a2d598510.scope: Consumed 70ms CPU time Feb 23 16:32:48 ip-10-0-136-68 rpm-ostree[2896]: client(id:cli dbus:1.154 unit:crio-66d294f1bd164afef67a512f0c741a2a0fdca6f95539ae28a8abb059b9814065.scope uid:0) added; new total=1 Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.972266346Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/multus\"" Feb 23 16:32:48 ip-10-0-136-68 rpm-ostree[2896]: client(id:cli dbus:1.154 unit:crio-66d294f1bd164afef67a512f0c741a2a0fdca6f95539ae28a8abb059b9814065.scope uid:0) vanished; remaining=0 Feb 23 16:32:48 ip-10-0-136-68 rpm-ostree[2896]: In idle state; will auto-exit in 63 seconds Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.992513336Z" level=info msg="Created container 97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923: openshift-ovn-kubernetes/ovnkube-node-qc5bl/ovnkube-node" id=10b17e33-53b5-49bb-b909-79a2257ac407 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.993805266Z" level=info msg="Starting container: 97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923" id=421820b3-c66d-4659-9aea-863e4ed49ad1 name=/runtime.v1.RuntimeService/StartContainer Feb 23 16:32:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:48.999905908Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.000016341Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.000051350Z" level=info msg="CNI monitoring event REMOVE \"/var/lib/cni/bin/upgrade_185c6343-a0f0-4d01-80e2-2b421c65537f\"" Feb 23 16:32:49 ip-10-0-136-68 systemd[1]: Started 
crio-conmon-7a19d4cdc406d3bfeb311cefe8957681687f6be4397687ff4be7ba32e31ad511.scope. Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.014616958Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.019012828Z" level=info msg="Started container" PID=3165 containerID=97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923 description=openshift-ovn-kubernetes/ovnkube-node-qc5bl/ovnkube-node id=421820b3-c66d-4659-9aea-863e4ed49ad1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=15c8174dfae0ee71062752687c2084097268b2d58484a203c43cd1452f6b8586 Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.040634476Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.041370502Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.041393462Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 16:32:49 ip-10-0-136-68 systemd[1]: Started libcontainer container 7a19d4cdc406d3bfeb311cefe8957681687f6be4397687ff4be7ba32e31ad511. 
Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.059957752Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.060003493Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.060019498Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.072068528Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.072094716Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.072109855Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.081729250Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.081754722Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.081784397Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.096132072Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.096152927Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.096168371Z" level=info msg="CNI monitoring event 
WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.107106147Z" level=info msg="Created container 7a19d4cdc406d3bfeb311cefe8957681687f6be4397687ff4be7ba32e31ad511: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4/csi-node-driver-registrar" id=7857f122-48bb-4227-8eee-83d2d274f65b name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.107563541Z" level=info msg="Starting container: 7a19d4cdc406d3bfeb311cefe8957681687f6be4397687ff4be7ba32e31ad511" id=50f9eeaa-edcb-41e8-93e7-0a13eda8e052 name=/runtime.v1.RuntimeService/StartContainer Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.108639836Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.109123937Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.109146772Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.126337310Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.126511896Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.128127209Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.134388506Z" level=info msg="Started container" PID=3285 containerID=7a19d4cdc406d3bfeb311cefe8957681687f6be4397687ff4be7ba32e31ad511 description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4/csi-node-driver-registrar 
id=50f9eeaa-edcb-41e8-93e7-0a13eda8e052 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7c2a5826b36e46e65e450ff07d3bed04375fc5b7b42476ff07959d461d58ab16 Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.140016084Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.140040221Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.140055950Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.150490150Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.150515526Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.150531019Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.160475677Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.160501335Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.160521271Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.162573106Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21bbb5586c93ddc2b847c2a4971c6e0264ab6ea641b4d4079c863ce4f87b3b3d" id=417a7a91-124f-45b8-913e-36da4d006112 
name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.162808834Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:40bae28f97f8f229b5a02594c733e50dcbce35d0113ede4c94c66a0320c493a8,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21bbb5586c93ddc2b847c2a4971c6e0264ab6ea641b4d4079c863ce4f87b3b3d],Size_:364222717,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=417a7a91-124f-45b8-913e-36da4d006112 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.164600107Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21bbb5586c93ddc2b847c2a4971c6e0264ab6ea641b4d4079c863ce4f87b3b3d" id=810b1c6c-b55c-4dd0-82db-b55619767850 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.164947616Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:40bae28f97f8f229b5a02594c733e50dcbce35d0113ede4c94c66a0320c493a8,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21bbb5586c93ddc2b847c2a4971c6e0264ab6ea641b4d4079c863ce4f87b3b3d],Size_:364222717,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=810b1c6c-b55c-4dd0-82db-b55619767850 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.166965056Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4/csi-liveness-probe" id=86fd7a0f-8bad-404d-a7d2-78b8b06a6ce4 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.167072259Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.172969889Z" level=info msg="Found CNI network 
multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.172995444Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.173009797Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 16:32:49 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:49.177803 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-pgc9j" event=&{ID:507b846f-eb8a-4ca3-9d5f-e4d9f18eca32 Type:ContainerStarted Data:1275ee2f983feb3b9931f8c7b65f8c67088dab1463b0fff60c5e6879d2da7018} Feb 23 16:32:49 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:49.182212 2112 generic.go:296] "Generic (PLEG): container finished" podID=2c47bc3e-0247-4d47-80e3-c168262e7976 containerID="0872c6395518f918bc296d83c99450bc8bb562af7971e28d53bff64a2d598510" exitCode=0 Feb 23 16:32:49 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:49.182632 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p9nj2" event=&{ID:2c47bc3e-0247-4d47-80e3-c168262e7976 Type:ContainerDied Data:0872c6395518f918bc296d83c99450bc8bb562af7971e28d53bff64a2d598510} Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.183198573Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c4eebf870d0fd0d64a7a9238dc6121ad2810e000fbfd3effa05b637800405f63" id=6a7c9e22-fb60-42e7-b9d7-74e01632f540 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:49 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:49.184635 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4" event=&{ID:6d75c369-887c-42d2-94c1-40cd36f882c3 Type:ContainerStarted Data:7a19d4cdc406d3bfeb311cefe8957681687f6be4397687ff4be7ba32e31ad511} Feb 23 16:32:49 ip-10-0-136-68 
kubenswrapper[2112]: I0223 16:32:49.184696 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4" event=&{ID:6d75c369-887c-42d2-94c1-40cd36f882c3 Type:ContainerStarted Data:c3188b0a36fa50f052a0149071dc5c4b600fc54ba5d23bbc6cbcb7dd39c833a2} Feb 23 16:32:49 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:49.185523 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-wdtzq" event=&{ID:ecd261a9-4d88-4e3d-aa47-803a685b6569 Type:ContainerStarted Data:feb17ee26beeef149a61cde12c7ab2755e8ec793f9aadc49fcb709f950299ce4} Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.185733378Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.185750581Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.185760016Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 16:32:49 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:49.190190 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" event=&{ID:409b8d00-553f-43cb-8805-64a5931be933 Type:ContainerStarted Data:db2e36c169d560d488d04a379443d5c86c4c536baf0cebdd3a36451256ca8542} Feb 23 16:32:49 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:49.190219 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" event=&{ID:409b8d00-553f-43cb-8805-64a5931be933 Type:ContainerStarted Data:97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923} Feb 23 16:32:49 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:49.190233 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" event=&{ID:409b8d00-553f-43cb-8805-64a5931be933 
Type:ContainerStarted Data:24b38c855efe345ede926eabafa17cb35b1055503b9551a485b22cca590c5cc1} Feb 23 16:32:49 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:49.192596 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gr76d" event=&{ID:ffd2cee3-1bae-4941-8015-2b3ade383d85 Type:ContainerStarted Data:4ad960e2f8c7b03ac82c2beac81caa9736f05de6b29400c654f753cd3b0f65f5} Feb 23 16:32:49 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:49.196416 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-hw8fk" event=&{ID:75f4efab-251e-4aa5-97d6-4a2a27025ae1 Type:ContainerStarted Data:1b1fd2d7c980913cf34678762e75665a0298ae69ef49307892b1493411a99cb9} Feb 23 16:32:49 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:49.196521 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-hw8fk" event=&{ID:75f4efab-251e-4aa5-97d6-4a2a27025ae1 Type:ContainerStarted Data:39c966c2b9901dfa66a71f09c318c0e2b4aac4dea64a53d5174a833226e9f264} Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.197631532Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.197649950Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.197689514Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.206622288Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.206641238Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.206650816Z" level=info msg="CNI 
monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.215225726Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.215244616Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.215254376Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.224339723Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.224363252Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.224373306Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.230774122Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:329f0052933f8d4a512b68f715fe001d1d60ee1ef6897dd333ea86e4fd331fc7,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c4eebf870d0fd0d64a7a9238dc6121ad2810e000fbfd3effa05b637800405f63],Size_:574266870,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=6a7c9e22-fb60-42e7-b9d7-74e01632f540 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.231478187Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c4eebf870d0fd0d64a7a9238dc6121ad2810e000fbfd3effa05b637800405f63" id=a310dc60-be85-40bd-95cb-4fd713d86a4b name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 
16:32:49.231622152Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:329f0052933f8d4a512b68f715fe001d1d60ee1ef6897dd333ea86e4fd331fc7,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c4eebf870d0fd0d64a7a9238dc6121ad2810e000fbfd3effa05b637800405f63],Size_:574266870,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=a310dc60-be85-40bd-95cb-4fd713d86a4b name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.234505600Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.234525066Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.234539749Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.234834718Z" level=info msg="Creating container: openshift-multus/multus-additional-cni-plugins-p9nj2/cni-plugins" id=81e9eb5a-c5a9-44ca-8a4e-b64d4bbac7e0 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.234938753Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.243031487Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.243070457Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.243084950Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 
16:32:49.252436677Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.252454920Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.252467414Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.274169481Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.274201363Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.274216336Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.288562977Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.288593714Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.288606295Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.300238138Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.300266429Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.300282061Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 
16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.315070806Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.315095659Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.315109029Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.329446100Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.329470351Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.329483469Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 16:32:49 ip-10-0-136-68 systemd[1]: Started crio-conmon-37a5cebc68b5368861f3f1a0d0160b179ec7a163adae4630de7d03c4ae70a883.scope. 
Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.350640826Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.350826452Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.350856044Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.362088728Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.362210224Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.362235690Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 16:32:49 ip-10-0-136-68 systemd[1]: Started libcontainer container 37a5cebc68b5368861f3f1a0d0160b179ec7a163adae4630de7d03c4ae70a883. 
Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.377322569Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.377350619Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.377368908Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.397496591Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.397525654Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.397542812Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.414553005Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.414701312Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.414775661Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.428289921Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.428328644Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.428348778Z" level=info msg="CNI monitoring event 
WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.441927930Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.442117397Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.442160690Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 16:32:49 ip-10-0-136-68 systemd[1]: Started crio-conmon-8bd18f920acedc27a0352dcf5bb627f747f8a8ed9f42d3d2e91c9a2ee722bccd.scope. Feb 23 16:32:49 ip-10-0-136-68 systemd[1]: Started libcontainer container 8bd18f920acedc27a0352dcf5bb627f747f8a8ed9f42d3d2e91c9a2ee722bccd. Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.465789854Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.465813840Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.465830893Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.470626905Z" level=info msg="Created container 37a5cebc68b5368861f3f1a0d0160b179ec7a163adae4630de7d03c4ae70a883: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4/csi-liveness-probe" id=86fd7a0f-8bad-404d-a7d2-78b8b06a6ce4 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.471060416Z" level=info msg="Starting container: 37a5cebc68b5368861f3f1a0d0160b179ec7a163adae4630de7d03c4ae70a883" id=c58ae6dc-4673-41da-baf3-49318677aa33 name=/runtime.v1.RuntimeService/StartContainer Feb 23 16:32:49 ip-10-0-136-68 
crio[2062]: time="2023-02-23 16:32:49.476917914Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.477032292Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.477060949Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.487203788Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.487223927Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.487234227Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.496191457Z" level=info msg="Started container" PID=3559 containerID=37a5cebc68b5368861f3f1a0d0160b179ec7a163adae4630de7d03c4ae70a883 description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4/csi-liveness-probe id=c58ae6dc-4673-41da-baf3-49318677aa33 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7c2a5826b36e46e65e450ff07d3bed04375fc5b7b42476ff07959d461d58ab16 Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.497382494Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.497406554Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.497420781Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 16:32:49 ip-10-0-136-68 
crio[2062]: time="2023-02-23 16:32:49.513245016Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.513267623Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.513280463Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.526471977Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.526524649Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.526542332Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 16:32:49 ip-10-0-136-68 root[3665]: machine-config-daemon[2377]: Starting to manage node: ip-10-0-136-68.us-west-2.compute.internal Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.538570043Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.538601221Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.538621590Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.547712638Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.547740410Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 
16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.547756297Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.548307850Z" level=info msg="Created container 8bd18f920acedc27a0352dcf5bb627f747f8a8ed9f42d3d2e91c9a2ee722bccd: openshift-multus/multus-additional-cni-plugins-p9nj2/cni-plugins" id=81e9eb5a-c5a9-44ca-8a4e-b64d4bbac7e0 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.548828842Z" level=info msg="Starting container: 8bd18f920acedc27a0352dcf5bb627f747f8a8ed9f42d3d2e91c9a2ee722bccd" id=60488d4f-3a9a-49fd-a7d9-c1aa857ac85c name=/runtime.v1.RuntimeService/StartContainer Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.558052238Z" level=info msg="Started container" PID=3619 containerID=8bd18f920acedc27a0352dcf5bb627f747f8a8ed9f42d3d2e91c9a2ee722bccd description=openshift-multus/multus-additional-cni-plugins-p9nj2/cni-plugins id=60488d4f-3a9a-49fd-a7d9-c1aa857ac85c name=/runtime.v1.RuntimeService/StartContainer sandboxID=9b5e394fefb22e13e7f5e1168da0a4549b32227605538c6a083eba02c051f962 Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.559612092Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.559634787Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.559651295Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.571578826Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.571711850Z" 
level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.571787690Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 16:32:49 ip-10-0-136-68 rpm-ostree[2896]: client(id:machine-config-operator dbus:1.161 unit:crio-66d294f1bd164afef67a512f0c741a2a0fdca6f95539ae28a8abb059b9814065.scope uid:0) added; new total=1 Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.591820142Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.591840163Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.591851196Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.599693080Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.599716630Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.599735727Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/upgrade_51ab1e3a-f282-4875-a71f-fd16460e2836\"" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.607367672Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.607390413Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.607401854Z" level=info msg="CNI monitoring event WRITE 
\"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 16:32:49 ip-10-0-136-68 rpm-ostree[2896]: client(id:machine-config-operator dbus:1.161 unit:crio-66d294f1bd164afef67a512f0c741a2a0fdca6f95539ae28a8abb059b9814065.scope uid:0) vanished; remaining=0 Feb 23 16:32:49 ip-10-0-136-68 rpm-ostree[2896]: In idle state; will auto-exit in 64 seconds Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.616245243Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.616277869Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.616290626Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.624262721Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.624281084Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.624311372Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.635273636Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.635293606Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.635303458Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.643153599Z" level=info msg="Found CNI network 
multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.643173808Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.643186799Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.651062487Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.651081150Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.651090160Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\"" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.658947018Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:49.658965259Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:49 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:49.726012 2112 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 16:32:50 ip-10-0-136-68 systemd[1]: run-runc-37a5cebc68b5368861f3f1a0d0160b179ec7a163adae4630de7d03c4ae70a883-runc.pIaaaE.mount: Succeeded. 
Feb 23 16:32:50 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:50.199367 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p9nj2" event=&{ID:2c47bc3e-0247-4d47-80e3-c168262e7976 Type:ContainerStarted Data:8bd18f920acedc27a0352dcf5bb627f747f8a8ed9f42d3d2e91c9a2ee722bccd} Feb 23 16:32:50 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:50.200914 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4" event=&{ID:6d75c369-887c-42d2-94c1-40cd36f882c3 Type:ContainerStarted Data:37a5cebc68b5368861f3f1a0d0160b179ec7a163adae4630de7d03c4ae70a883} Feb 23 16:32:50 ip-10-0-136-68 rpm-ostree[2896]: client(id:machine-config-operator dbus:1.162 unit:crio-66d294f1bd164afef67a512f0c741a2a0fdca6f95539ae28a8abb059b9814065.scope uid:0) added; new total=1 Feb 23 16:32:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:50.690422037Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/bandwidth\"" Feb 23 16:32:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:50.705169482Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:50.705230679Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:50.705251555Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/bridge\"" Feb 23 16:32:50 ip-10-0-136-68 systemd[1]: crio-8bd18f920acedc27a0352dcf5bb627f747f8a8ed9f42d3d2e91c9a2ee722bccd.scope: Succeeded. Feb 23 16:32:50 ip-10-0-136-68 systemd[1]: crio-8bd18f920acedc27a0352dcf5bb627f747f8a8ed9f42d3d2e91c9a2ee722bccd.scope: Consumed 85ms CPU time Feb 23 16:32:50 ip-10-0-136-68 systemd[1]: crio-conmon-8bd18f920acedc27a0352dcf5bb627f747f8a8ed9f42d3d2e91c9a2ee722bccd.scope: Succeeded. 
Feb 23 16:32:50 ip-10-0-136-68 systemd[1]: crio-conmon-8bd18f920acedc27a0352dcf5bb627f747f8a8ed9f42d3d2e91c9a2ee722bccd.scope: Consumed 24ms CPU time Feb 23 16:32:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:50.718216535Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:50.718435706Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:50.718457688Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/dhcp\"" Feb 23 16:32:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:50.727952057Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:50.727979171Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:50.727995408Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/firewall\"" Feb 23 16:32:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:50.736827368Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:50.736846745Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:50.736857462Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/host-device\"" Feb 23 16:32:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:50.745176270Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:50.745196348Z" level=info msg="Updated default CNI network name to 
multus-cni-network" Feb 23 16:32:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:50.745205403Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/host-local\"" Feb 23 16:32:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:50.753224393Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:50.753244079Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:50.753253590Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/ipvlan\"" Feb 23 16:32:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:50.761930089Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:50.761950433Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:50.761960600Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/loopback\"" Feb 23 16:32:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:50.769965122Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:50.769989563Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:50.770004072Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/macvlan\"" Feb 23 16:32:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:50.778166464Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:50.778185447Z" level=info msg="Updated default CNI network 
name to multus-cni-network" Feb 23 16:32:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:50.778194932Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/portmap\"" Feb 23 16:32:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:50.786231794Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:50.786253226Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:50.786262529Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/ptp\"" Feb 23 16:32:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:50.794960524Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:50.794981011Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:50.794991487Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/sbr\"" Feb 23 16:32:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:50.803061270Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:50.803082884Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:50.803110467Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/static\"" Feb 23 16:32:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:50.811231799Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:50.811251466Z" level=info msg="Updated default CNI network name 
to multus-cni-network" Feb 23 16:32:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:50.811260830Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/tuning\"" Feb 23 16:32:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:50.819893640Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:50.819932506Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:50.819948694Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/vlan\"" Feb 23 16:32:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:50.827476129Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:50.827495322Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:50.827503975Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/vrf\"" Feb 23 16:32:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:50.834864735Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:50.834885306Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:50.834895495Z" level=info msg="CNI monitoring event REMOVE \"/var/lib/cni/bin/upgrade_51ab1e3a-f282-4875-a71f-fd16460e2836\"" Feb 23 16:32:50 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:50.958853 2112 plugin_watcher.go:203] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock" Feb 23 16:32:51 ip-10-0-136-68 ovs-vswitchd[1105]: 
ovs|00067|memory|INFO|189344 kB peak resident set size after 10.1 seconds Feb 23 16:32:51 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00068|memory|INFO|handlers:4 idl-cells:701 ofconns:3 ports:11 revalidators:2 rules:1868 udpif keys:7 Feb 23 16:32:51 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:51.122208 2112 reconciler.go:164] "OperationExecutor.RegisterPlugin started" plugin={SocketPath:/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock Timestamp:2023-02-23 16:32:50.958874752 +0000 UTC m=+5.916039213 Handler: Name:} Feb 23 16:32:51 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:51.123886 2112 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: ebs.csi.aws.com endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock versions: 1.0.0 Feb 23 16:32:51 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:51.123914 2112 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: ebs.csi.aws.com at endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock Feb 23 16:32:51 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:51.203392 2112 generic.go:296] "Generic (PLEG): container finished" podID=2c47bc3e-0247-4d47-80e3-c168262e7976 containerID="8bd18f920acedc27a0352dcf5bb627f747f8a8ed9f42d3d2e91c9a2ee722bccd" exitCode=0 Feb 23 16:32:51 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:51.203422 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p9nj2" event=&{ID:2c47bc3e-0247-4d47-80e3-c168262e7976 Type:ContainerDied Data:8bd18f920acedc27a0352dcf5bb627f747f8a8ed9f42d3d2e91c9a2ee722bccd} Feb 23 16:32:51 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:51.204041364Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc563779400e9013d40b1e6762a57e5177d0e62e28ecc372d5265e30a70f64c9" id=ba6cd22c-e104-4e38-a7d6-8453ed67af8d name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:51 ip-10-0-136-68 crio[2062]: time="2023-02-23 
16:32:51.204265287Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:4863f207d59fce067b864451f5c7b0dca685f5a63af45f9e51cbee61b04172bd,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc563779400e9013d40b1e6762a57e5177d0e62e28ecc372d5265e30a70f64c9],Size_:352688251,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=ba6cd22c-e104-4e38-a7d6-8453ed67af8d name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:51 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:51.209811039Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc563779400e9013d40b1e6762a57e5177d0e62e28ecc372d5265e30a70f64c9" id=a4b6b8c6-5d12-4e82-bb34-70370615287f name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:51 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:51.210058591Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:4863f207d59fce067b864451f5c7b0dca685f5a63af45f9e51cbee61b04172bd,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc563779400e9013d40b1e6762a57e5177d0e62e28ecc372d5265e30a70f64c9],Size_:352688251,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=a4b6b8c6-5d12-4e82-bb34-70370615287f name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:51 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:51.210588541Z" level=info msg="Creating container: openshift-multus/multus-additional-cni-plugins-p9nj2/bond-cni-plugin" id=68280af2-c39a-4d20-bbc5-085ad32957e7 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:32:51 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:51.210699262Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 16:32:52 ip-10-0-136-68 rpm-ostree[2896]: Locked sysroot Feb 23 16:32:52 ip-10-0-136-68 rpm-ostree[2896]: Initiated txn Cleanup for client(id:machine-config-operator dbus:1.162 
unit:crio-66d294f1bd164afef67a512f0c741a2a0fdca6f95539ae28a8abb059b9814065.scope uid:0): /org/projectatomic/rpmostree1/rhcos Feb 23 16:32:52 ip-10-0-136-68 rpm-ostree[2896]: Process [pid: 3796 uid: 0 unit: crio-66d294f1bd164afef67a512f0c741a2a0fdca6f95539ae28a8abb059b9814065.scope] connected to transaction progress Feb 23 16:32:52 ip-10-0-136-68 kernel: EXT4-fs (nvme0n1p3): re-mounted. Opts: Feb 23 16:32:52 ip-10-0-136-68 systemd[1]: Started crio-conmon-71fc3ba23434903f12c0286d43d433518eebf6cfc55b5fcf190554dcffb955c2.scope. Feb 23 16:32:52 ip-10-0-136-68 systemd[1]: Started libcontainer container 71fc3ba23434903f12c0286d43d433518eebf6cfc55b5fcf190554dcffb955c2. Feb 23 16:32:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:52.303478599Z" level=info msg="Created container 71fc3ba23434903f12c0286d43d433518eebf6cfc55b5fcf190554dcffb955c2: openshift-multus/multus-additional-cni-plugins-p9nj2/bond-cni-plugin" id=68280af2-c39a-4d20-bbc5-085ad32957e7 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:32:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:52.304079051Z" level=info msg="Starting container: 71fc3ba23434903f12c0286d43d433518eebf6cfc55b5fcf190554dcffb955c2" id=6f6286c0-3882-42f1-a45a-5f7609270e70 name=/runtime.v1.RuntimeService/StartContainer Feb 23 16:32:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:52.312608401Z" level=info msg="Started container" PID=3961 containerID=71fc3ba23434903f12c0286d43d433518eebf6cfc55b5fcf190554dcffb955c2 description=openshift-multus/multus-additional-cni-plugins-p9nj2/bond-cni-plugin id=6f6286c0-3882-42f1-a45a-5f7609270e70 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9b5e394fefb22e13e7f5e1168da0a4549b32227605538c6a083eba02c051f962 Feb 23 16:32:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:52.319098244Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/upgrade_f8efd164-53ed-4d2d-b6a6-f9c4d6d86e0a\"" Feb 23 16:32:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:52.330559330Z" 
level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:52.330581142Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:52.373834799Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/bond\"" Feb 23 16:32:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:52.385014974Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:52.385040508Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:52.385057439Z" level=info msg="CNI monitoring event REMOVE \"/var/lib/cni/bin/upgrade_f8efd164-53ed-4d2d-b6a6-f9c4d6d86e0a\"" Feb 23 16:32:52 ip-10-0-136-68 systemd[1]: crio-71fc3ba23434903f12c0286d43d433518eebf6cfc55b5fcf190554dcffb955c2.scope: Succeeded. Feb 23 16:32:52 ip-10-0-136-68 systemd[1]: crio-71fc3ba23434903f12c0286d43d433518eebf6cfc55b5fcf190554dcffb955c2.scope: Consumed 38ms CPU time Feb 23 16:32:52 ip-10-0-136-68 systemd[1]: crio-conmon-71fc3ba23434903f12c0286d43d433518eebf6cfc55b5fcf190554dcffb955c2.scope: Succeeded. 
Feb 23 16:32:52 ip-10-0-136-68 systemd[1]: crio-conmon-71fc3ba23434903f12c0286d43d433518eebf6cfc55b5fcf190554dcffb955c2.scope: Consumed 25ms CPU time Feb 23 16:32:52 ip-10-0-136-68 NetworkManager[1147]: [1677169972.8897] device (ovn-k8s-mp0): carrier: link connected Feb 23 16:32:53 ip-10-0-136-68 rpm-ostree[2896]: Bootloader updated; bootconfig swap: yes; bootversion: boot.1.1, deployment count change: -1 Feb 23 16:32:53 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:53.211281 2112 generic.go:296] "Generic (PLEG): container finished" podID=2c47bc3e-0247-4d47-80e3-c168262e7976 containerID="71fc3ba23434903f12c0286d43d433518eebf6cfc55b5fcf190554dcffb955c2" exitCode=0 Feb 23 16:32:53 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:53.211374 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p9nj2" event=&{ID:2c47bc3e-0247-4d47-80e3-c168262e7976 Type:ContainerDied Data:71fc3ba23434903f12c0286d43d433518eebf6cfc55b5fcf190554dcffb955c2} Feb 23 16:32:53 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:53.212035044Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67fc5e92be6392fc47e7343940b712ef5e85ec5edb5a7d64d0fbf30ebe4b3903" id=9fd7f079-eaee-44f2-aaba-fd2e9b2e9c74 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:53 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:53.212304540Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8e340e90f6e3a45f51b38ed888230331ab048c37137d84bb37e5141844371f76,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67fc5e92be6392fc47e7343940b712ef5e85ec5edb5a7d64d0fbf30ebe4b3903],Size_:317193941,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=9fd7f079-eaee-44f2-aaba-fd2e9b2e9c74 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:53 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:53.212889525Z" level=info msg="Checking image status: 
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67fc5e92be6392fc47e7343940b712ef5e85ec5edb5a7d64d0fbf30ebe4b3903" id=b8fc5afe-7393-4eb3-afe9-d8008dc39efc name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:53 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:53.213013115Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8e340e90f6e3a45f51b38ed888230331ab048c37137d84bb37e5141844371f76,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67fc5e92be6392fc47e7343940b712ef5e85ec5edb5a7d64d0fbf30ebe4b3903],Size_:317193941,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=b8fc5afe-7393-4eb3-afe9-d8008dc39efc name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:53 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:53.213691118Z" level=info msg="Creating container: openshift-multus/multus-additional-cni-plugins-p9nj2/routeoverride-cni" id=2565f38e-efe8-44fa-b1b8-1ecf224f7951 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:32:53 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:53.213802269Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 16:32:53 ip-10-0-136-68 systemd[1]: Started crio-conmon-8602029ef780cd32039d3805efe88e1356b52419b59776806ace591393868122.scope. Feb 23 16:32:53 ip-10-0-136-68 systemd[1]: run-runc-8602029ef780cd32039d3805efe88e1356b52419b59776806ace591393868122-runc.F2oiy4.mount: Succeeded. Feb 23 16:32:53 ip-10-0-136-68 systemd[1]: Started libcontainer container 8602029ef780cd32039d3805efe88e1356b52419b59776806ace591393868122. 
Feb 23 16:32:53 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:53.417770738Z" level=info msg="Created container 8602029ef780cd32039d3805efe88e1356b52419b59776806ace591393868122: openshift-multus/multus-additional-cni-plugins-p9nj2/routeoverride-cni" id=2565f38e-efe8-44fa-b1b8-1ecf224f7951 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:32:53 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:53.419368911Z" level=info msg="Starting container: 8602029ef780cd32039d3805efe88e1356b52419b59776806ace591393868122" id=769c070d-de63-4245-8bda-86d210e9a41e name=/runtime.v1.RuntimeService/StartContainer Feb 23 16:32:53 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:53.429394474Z" level=info msg="Started container" PID=4119 containerID=8602029ef780cd32039d3805efe88e1356b52419b59776806ace591393868122 description=openshift-multus/multus-additional-cni-plugins-p9nj2/routeoverride-cni id=769c070d-de63-4245-8bda-86d210e9a41e name=/runtime.v1.RuntimeService/StartContainer sandboxID=9b5e394fefb22e13e7f5e1168da0a4549b32227605538c6a083eba02c051f962 Feb 23 16:32:53 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:53.435356090Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/upgrade_0407e5f5-0609-453b-a4ee-2cbc9dfaa6d1\"" Feb 23 16:32:53 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:53.450446467Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:53 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:53.450479590Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:53 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:53.455532360Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/route-override\"" Feb 23 16:32:53 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:53.466606334Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:53 ip-10-0-136-68 crio[2062]: 
time="2023-02-23 16:32:53.466633663Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:53 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:53.466650744Z" level=info msg="CNI monitoring event REMOVE \"/var/lib/cni/bin/upgrade_0407e5f5-0609-453b-a4ee-2cbc9dfaa6d1\"" Feb 23 16:32:53 ip-10-0-136-68 systemd[1]: crio-8602029ef780cd32039d3805efe88e1356b52419b59776806ace591393868122.scope: Succeeded. Feb 23 16:32:53 ip-10-0-136-68 systemd[1]: crio-8602029ef780cd32039d3805efe88e1356b52419b59776806ace591393868122.scope: Consumed 42ms CPU time Feb 23 16:32:53 ip-10-0-136-68 systemd[1]: crio-conmon-8602029ef780cd32039d3805efe88e1356b52419b59776806ace591393868122.scope: Succeeded. Feb 23 16:32:53 ip-10-0-136-68 systemd[1]: crio-conmon-8602029ef780cd32039d3805efe88e1356b52419b59776806ace591393868122.scope: Consumed 22ms CPU time Feb 23 16:32:53 ip-10-0-136-68 systemd[1]: NetworkManager-dispatcher.service: Succeeded. Feb 23 16:32:53 ip-10-0-136-68 systemd[1]: NetworkManager-dispatcher.service: Consumed 1.048s CPU time Feb 23 16:32:53 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00069|connmgr|INFO|br-ex<->unix#9: 28 flow_mods in the last 0 s (28 adds) Feb 23 16:32:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:54.214857 2112 generic.go:296] "Generic (PLEG): container finished" podID=2c47bc3e-0247-4d47-80e3-c168262e7976 containerID="8602029ef780cd32039d3805efe88e1356b52419b59776806ace591393868122" exitCode=0 Feb 23 16:32:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:54.214944 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p9nj2" event=&{ID:2c47bc3e-0247-4d47-80e3-c168262e7976 Type:ContainerDied Data:8602029ef780cd32039d3805efe88e1356b52419b59776806ace591393868122} Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:54.215645237Z" level=info msg="Checking image status: 
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5edef2b057f0fb062044d99c8bd3906443525e1bfe91574f66f1479ff5e26f66" id=1a71fc8a-ee16-49cc-9540-baefbd4fff9e name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:54.215917006Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a051c2cfc108e960dd12d60bc4ee074be58ba53de890a6f33ab7bada80d30890,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5edef2b057f0fb062044d99c8bd3906443525e1bfe91574f66f1479ff5e26f66],Size_:476595411,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=1a71fc8a-ee16-49cc-9540-baefbd4fff9e name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:54.216421179Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5edef2b057f0fb062044d99c8bd3906443525e1bfe91574f66f1479ff5e26f66" id=c750f40f-0421-45c4-835b-d19e04881e0d name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:54.216563587Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a051c2cfc108e960dd12d60bc4ee074be58ba53de890a6f33ab7bada80d30890,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5edef2b057f0fb062044d99c8bd3906443525e1bfe91574f66f1479ff5e26f66],Size_:476595411,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=c750f40f-0421-45c4-835b-d19e04881e0d name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:54.219212893Z" level=info msg="Creating container: openshift-multus/multus-additional-cni-plugins-p9nj2/whereabouts-cni-bincopy" id=ec6e7fff-993e-4823-a465-4acd510b3e95 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:54.219321454Z" level=warning msg="Allowed annotations are specified for 
workload [io.containers.trace-syscall]" Feb 23 16:32:54 ip-10-0-136-68 systemd[1]: Started crio-conmon-535c93e83c702180e094716412b833a54504f2862ef061dfa60cb17eceda8038.scope. Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:54.339828233Z" level=info msg="CNI monitoring event REMOVE \"/etc/kubernetes/cni/net.d/00-multus.conf\"" Feb 23 16:32:54 ip-10-0-136-68 systemd[1]: run-runc-535c93e83c702180e094716412b833a54504f2862ef061dfa60cb17eceda8038-runc.fwG24e.mount: Succeeded. Feb 23 16:32:54 ip-10-0-136-68 systemd[1]: Started libcontainer container 535c93e83c702180e094716412b833a54504f2862ef061dfa60cb17eceda8038. Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:54.361235224Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:54.361510886Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:54.361536644Z" level=info msg="CNI monitoring event CREATE \"/etc/kubernetes/cni/net.d/00-multus.conf\"" Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:54.379722987Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:54.379755504Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:54.379778946Z" level=info msg="CNI monitoring event WRITE \"/etc/kubernetes/cni/net.d/00-multus.conf\"" Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:54.392516459Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:54.392542318Z" level=info msg="Updated default CNI network name to 
multus-cni-network" Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:54.392556918Z" level=info msg="CNI monitoring event CHMOD \"/etc/kubernetes/cni/net.d/00-multus.conf\"" Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:54.439535913Z" level=info msg="Created container 535c93e83c702180e094716412b833a54504f2862ef061dfa60cb17eceda8038: openshift-multus/multus-additional-cni-plugins-p9nj2/whereabouts-cni-bincopy" id=ec6e7fff-993e-4823-a465-4acd510b3e95 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:54.440399433Z" level=info msg="Starting container: 535c93e83c702180e094716412b833a54504f2862ef061dfa60cb17eceda8038" id=126125a0-b815-40b9-bbdd-73c99f5b407f name=/runtime.v1.RuntimeService/StartContainer Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:54.447790583Z" level=info msg="Started container" PID=4404 containerID=535c93e83c702180e094716412b833a54504f2862ef061dfa60cb17eceda8038 description=openshift-multus/multus-additional-cni-plugins-p9nj2/whereabouts-cni-bincopy id=126125a0-b815-40b9-bbdd-73c99f5b407f name=/runtime.v1.RuntimeService/StartContainer sandboxID=9b5e394fefb22e13e7f5e1168da0a4549b32227605538c6a083eba02c051f962 Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:54.453951589Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/upgrade_d34e3e08-a27e-4207-b6dd-7df05c27ba3e\"" Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:54.482797718Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:54.482835400Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:54.600834614Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/whereabouts\"" Feb 23 16:32:54 ip-10-0-136-68 
systemd-udevd[4483]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. Feb 23 16:32:54 ip-10-0-136-68 NetworkManager[1147]: [1677169974.6127] manager: (5fbd19a020a56df): new Veth device (/org/freedesktop/NetworkManager/Devices/26) Feb 23 16:32:54 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): 5fbd19a020a56df: link is not ready Feb 23 16:32:54 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready Feb 23 16:32:54 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 23 16:32:54 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): 5fbd19a020a56df: link becomes ready Feb 23 16:32:54 ip-10-0-136-68 NetworkManager[1147]: [1677169974.6153] device (5fbd19a020a56df): carrier: link connected Feb 23 16:32:54 ip-10-0-136-68 systemd-udevd[4483]: Could not generate persistent MAC address for 5fbd19a020a56df: No such file or directory Feb 23 16:32:54 ip-10-0-136-68 systemd[1]: crio-535c93e83c702180e094716412b833a54504f2862ef061dfa60cb17eceda8038.scope: Succeeded. 
Feb 23 16:32:54 ip-10-0-136-68 systemd[1]: crio-535c93e83c702180e094716412b833a54504f2862ef061dfa60cb17eceda8038.scope: Consumed 75ms CPU time Feb 23 16:32:54 ip-10-0-136-68 NetworkManager[1147]: [1677169974.6517] manager: (5fbd19a020a56df): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27) Feb 23 16:32:54 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00070|bridge|INFO|bridge br-int: added interface 5fbd19a020a56df on port 8 Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:54.653229298Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:54.653396500Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:54.653419443Z" level=info msg="CNI monitoring event REMOVE \"/var/lib/cni/bin/upgrade_d34e3e08-a27e-4207-b6dd-7df05c27ba3e\"" Feb 23 16:32:54 ip-10-0-136-68 kernel: device 5fbd19a020a56df entered promiscuous mode Feb 23 16:32:54 ip-10-0-136-68 systemd[1]: crio-conmon-535c93e83c702180e094716412b833a54504f2862ef061dfa60cb17eceda8038.scope: Succeeded. 
Feb 23 16:32:54 ip-10-0-136-68 systemd[1]: crio-conmon-535c93e83c702180e094716412b833a54504f2862ef061dfa60cb17eceda8038.scope: Consumed 27ms CPU time Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: I0223 16:32:54.580612 4422 ovs.go:90] Maximum command line arguments set to: 191102 Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: 2023-02-23T16:32:54Z [verbose] Add: openshift-multus:network-metrics-daemon-5hc5d:9cd26ba5-46e4-40b5-81e6-74079153d58d:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"5fbd19a020a56df","mac":"16:85:ac:b6:68:85"},{"name":"eth0","mac":"0a:58:0a:81:02:03","sandbox":"/var/run/netns/199a84df-cf71-443f-81b3-b9ff5e18be9d"}],"ips":[{"version":"4","interface":1,"address":"10.129.2.3/23","gateway":"10.129.2.1"}],"dns":{}} Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: I0223 16:32:54.718203 2229 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-multus", Name:"network-metrics-daemon-5hc5d", UID:"9cd26ba5-46e4-40b5-81e6-74079153d58d", APIVersion:"v1", ResourceVersion:"44948", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.129.2.3/23] from ovn-kubernetes Feb 23 16:32:54 ip-10-0-136-68 systemd-udevd[4522]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. 
Feb 23 16:32:54 ip-10-0-136-68 systemd-udevd[4522]: Could not generate persistent MAC address for 4072e0d3663d194: No such file or directory Feb 23 16:32:54 ip-10-0-136-68 NetworkManager[1147]: [1677169974.7489] manager: (4072e0d3663d194): new Veth device (/org/freedesktop/NetworkManager/Devices/28) Feb 23 16:32:54 ip-10-0-136-68 NetworkManager[1147]: [1677169974.7494] device (4072e0d3663d194): carrier: link connected Feb 23 16:32:54 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): 4072e0d3663d194: link is not ready Feb 23 16:32:54 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): 4072e0d3663d194: link becomes ready Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:54.755316753Z" level=info msg="Got pod network &{Name:network-metrics-daemon-5hc5d Namespace:openshift-multus ID:5fbd19a020a56df6e9b45ea33c752205e9859f69f19803347be97dbf88e8a4f4 UID:9cd26ba5-46e4-40b5-81e6-74079153d58d NetNS:/var/run/netns/199a84df-cf71-443f-81b3-b9ff5e18be9d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:54.755526119Z" level=info msg="Checking pod openshift-multus_network-metrics-daemon-5hc5d for CNI network multus-cni-network (type=multus)" Feb 23 16:32:54 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00071|bridge|INFO|bridge br-int: added interface 4072e0d3663d194 on port 9 Feb 23 16:32:54 ip-10-0-136-68 kubenswrapper[2112]: W0223 16:32:54.780737 2112 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9cd26ba5_46e4_40b5_81e6_74079153d58d.slice/crio-5fbd19a020a56df6e9b45ea33c752205e9859f69f19803347be97dbf88e8a4f4.scope WatchSource:0}: Error finding container 5fbd19a020a56df6e9b45ea33c752205e9859f69f19803347be97dbf88e8a4f4: Status 404 returned error can't find the container with id 5fbd19a020a56df6e9b45ea33c752205e9859f69f19803347be97dbf88e8a4f4 Feb 23 16:32:54 
ip-10-0-136-68 NetworkManager[1147]: [1677169974.7847] manager: (4072e0d3663d194): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29) Feb 23 16:32:54 ip-10-0-136-68 kernel: device 4072e0d3663d194 entered promiscuous mode Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:54.814414552Z" level=info msg="Ran pod sandbox 5fbd19a020a56df6e9b45ea33c752205e9859f69f19803347be97dbf88e8a4f4 with infra container: openshift-multus/network-metrics-daemon-5hc5d/POD" id=29fa4548-fcb2-4fa5-a662-55744709066a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:54.821986983Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf28fab2b37250a8897dd2b133fdd73a1782045aec3bbc36cc87bc8c6ef0c7c1" id=2b50e88d-2527-4f70-8db8-21eaa8d8345f name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:54.822476531Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:cf970f9f630b6d1f93b0d1fe248cb85574ebbdcdf0eb41f96f3b817528af45c4,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf28fab2b37250a8897dd2b133fdd73a1782045aec3bbc36cc87bc8c6ef0c7c1],Size_:385370431,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=2b50e88d-2527-4f70-8db8-21eaa8d8345f name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:54.824761173Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf28fab2b37250a8897dd2b133fdd73a1782045aec3bbc36cc87bc8c6ef0c7c1" id=09fc4a49-02c8-44d1-bad6-e3d3255099dc name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:54.825002173Z" level=info msg="Image status: 
&ImageStatusResponse{Image:&Image{Id:cf970f9f630b6d1f93b0d1fe248cb85574ebbdcdf0eb41f96f3b817528af45c4,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf28fab2b37250a8897dd2b133fdd73a1782045aec3bbc36cc87bc8c6ef0c7c1],Size_:385370431,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=09fc4a49-02c8-44d1-bad6-e3d3255099dc name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:54.826470873Z" level=info msg="Creating container: openshift-multus/network-metrics-daemon-5hc5d/network-metrics-daemon" id=83d1be60-05d4-4b11-82ad-7dcda517686a name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:54.826583281Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 16:32:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:54.833120 2112 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" Feb 23 16:32:54 ip-10-0-136-68 systemd-udevd[4562]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. 
Feb 23 16:32:54 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): 5244fda541f5dc9: link is not ready Feb 23 16:32:54 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): 5244fda541f5dc9: link becomes ready Feb 23 16:32:54 ip-10-0-136-68 systemd-udevd[4562]: Could not generate persistent MAC address for 5244fda541f5dc9: No such file or directory Feb 23 16:32:54 ip-10-0-136-68 NetworkManager[1147]: [1677169974.8366] manager: (5244fda541f5dc9): new Veth device (/org/freedesktop/NetworkManager/Devices/30) Feb 23 16:32:54 ip-10-0-136-68 NetworkManager[1147]: [1677169974.8374] device (5244fda541f5dc9): carrier: link connected Feb 23 16:32:54 ip-10-0-136-68 NetworkManager[1147]: [1677169974.8691] manager: (5244fda541f5dc9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31) Feb 23 16:32:54 ip-10-0-136-68 kernel: device 5244fda541f5dc9 entered promiscuous mode Feb 23 16:32:54 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00072|bridge|INFO|bridge br-int: added interface 5244fda541f5dc9 on port 10 Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: I0223 16:32:54.731449 4457 ovs.go:90] Maximum command line arguments set to: 191102 Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: 2023-02-23T16:32:54Z [verbose] Add: openshift-ingress-canary:ingress-canary-p47qk:a704838c-aeb5-4709-b91c-2460423203a4:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"4072e0d3663d194","mac":"e2:d9:e3:71:0a:20"},{"name":"eth0","mac":"0a:58:0a:81:02:05","sandbox":"/var/run/netns/7c821687-d150-4ab8-9614-66d1ccd9f281"}],"ips":[{"version":"4","interface":1,"address":"10.129.2.5/23","gateway":"10.129.2.1"}],"dns":{}} Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: I0223 16:32:54.902552 2237 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-ingress-canary", Name:"ingress-canary-p47qk", UID:"a704838c-aeb5-4709-b91c-2460423203a4", APIVersion:"v1", ResourceVersion:"44953", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.129.2.5/23] from 
ovn-kubernetes Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:54.944970215Z" level=info msg="Got pod network &{Name:ingress-canary-p47qk Namespace:openshift-ingress-canary ID:4072e0d3663d1947c88bd362e51de1a466d380546ca12a245e113666b5f4f1b4 UID:a704838c-aeb5-4709-b91c-2460423203a4 NetNS:/var/run/netns/7c821687-d150-4ab8-9614-66d1ccd9f281 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:54.945133419Z" level=info msg="Checking pod openshift-ingress-canary_ingress-canary-p47qk for CNI network multus-cni-network (type=multus)" Feb 23 16:32:54 ip-10-0-136-68 kubenswrapper[2112]: W0223 16:32:54.949838 2112 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda704838c_aeb5_4709_b91c_2460423203a4.slice/crio-4072e0d3663d1947c88bd362e51de1a466d380546ca12a245e113666b5f4f1b4.scope WatchSource:0}: Error finding container 4072e0d3663d1947c88bd362e51de1a466d380546ca12a245e113666b5f4f1b4: Status 404 returned error can't find the container with id 4072e0d3663d1947c88bd362e51de1a466d380546ca12a245e113666b5f4f1b4 Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: I0223 16:32:54.821819 4470 ovs.go:90] Maximum command line arguments set to: 191102 Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: 2023-02-23T16:32:54Z [verbose] Add: openshift-network-diagnostics:network-check-target-b2mxx:5acce570-9f3b-4dab-9fed-169a4c110f8c:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"5244fda541f5dc9","mac":"5a:b3:1f:3e:f3:44"},{"name":"eth0","mac":"0a:58:0a:81:02:04","sandbox":"/var/run/netns/755b1556-e233-443e-89fd-f6504a4e73db"}],"ips":[{"version":"4","interface":1,"address":"10.129.2.4/23","gateway":"10.129.2.1"}],"dns":{}} Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: I0223 16:32:54.932960 2264 event.go:282] Event(v1.ObjectReference{Kind:"Pod", 
Namespace:"openshift-network-diagnostics", Name:"network-check-target-b2mxx", UID:"5acce570-9f3b-4dab-9fed-169a4c110f8c", APIVersion:"v1", ResourceVersion:"44968", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.129.2.4/23] from ovn-kubernetes Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:54.953687846Z" level=info msg="Got pod network &{Name:network-check-target-b2mxx Namespace:openshift-network-diagnostics ID:5244fda541f5dc96d55eef69f3d234dc8c9aca31b74ca58c7fc4d50105b9467b UID:5acce570-9f3b-4dab-9fed-169a4c110f8c NetNS:/var/run/netns/755b1556-e233-443e-89fd-f6504a4e73db Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:54.953831893Z" level=info msg="Checking pod openshift-network-diagnostics_network-check-target-b2mxx for CNI network multus-cni-network (type=multus)" Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:54.959919432Z" level=info msg="Ran pod sandbox 4072e0d3663d1947c88bd362e51de1a466d380546ca12a245e113666b5f4f1b4 with infra container: openshift-ingress-canary/ingress-canary-p47qk/POD" id=05f160f8-9003-4ed9-8cdd-3b38e3c1e346 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 16:32:54 ip-10-0-136-68 kubenswrapper[2112]: W0223 16:32:54.961557 2112 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5acce570_9f3b_4dab_9fed_169a4c110f8c.slice/crio-5244fda541f5dc96d55eef69f3d234dc8c9aca31b74ca58c7fc4d50105b9467b.scope WatchSource:0}: Error finding container 5244fda541f5dc96d55eef69f3d234dc8c9aca31b74ca58c7fc4d50105b9467b: Status 404 returned error can't find the container with id 5244fda541f5dc96d55eef69f3d234dc8c9aca31b74ca58c7fc4d50105b9467b Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:54.962617772Z" level=info msg="Checking image status: 
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0a30a6f76b38acd66311d81d93514ebc034c2177a187e3a5381bbce2c30775ea" id=4fc619a7-19d6-43c9-8721-0e55c92323ea name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:54.963311965Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9fc5d3aadae42f5e9abc5ec66e804749d31c450fba1d3668b87deba226f99d0b,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0a30a6f76b38acd66311d81d93514ebc034c2177a187e3a5381bbce2c30775ea],Size_:431318980,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=4fc619a7-19d6-43c9-8721-0e55c92323ea name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:54.967105139Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0a30a6f76b38acd66311d81d93514ebc034c2177a187e3a5381bbce2c30775ea" id=8bf70338-d4b6-4aa8-ac72-4f5dcd51840d name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:54.968194236Z" level=info msg="Ran pod sandbox 5244fda541f5dc96d55eef69f3d234dc8c9aca31b74ca58c7fc4d50105b9467b with infra container: openshift-network-diagnostics/network-check-target-b2mxx/POD" id=e4337cf1-d02f-4fbb-9df4-d305063238c9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:54.969434755Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9fc5d3aadae42f5e9abc5ec66e804749d31c450fba1d3668b87deba226f99d0b,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0a30a6f76b38acd66311d81d93514ebc034c2177a187e3a5381bbce2c30775ea],Size_:431318980,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=8bf70338-d4b6-4aa8-ac72-4f5dcd51840d name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 
16:32:54.972503990Z" level=info msg="Creating container: openshift-ingress-canary/ingress-canary-p47qk/serve-healthcheck-canary" id=fa1c5f55-7281-40c0-b81e-8c326c5bc0e7 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:54.972709004Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:54.973001932Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:12689c58126296eadc7e46ef53bd571e445459a42516711155470fe35c1ccd60" id=9c4d58e6-5437-4552-8de5-797fa4179618 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:54.973356799Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:fbfabc25c264657111b70d2537c63f40bd1221c9fa96f133a4ea4c49f2c732ee,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:12689c58126296eadc7e46ef53bd571e445459a42516711155470fe35c1ccd60],Size_:512530138,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=9c4d58e6-5437-4552-8de5-797fa4179618 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:54.976620860Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:12689c58126296eadc7e46ef53bd571e445459a42516711155470fe35c1ccd60" id=fb22dac8-0af7-4600-a427-5816a993b7a7 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:54.976870116Z" level=info msg="Image status: 
&ImageStatusResponse{Image:&Image{Id:fbfabc25c264657111b70d2537c63f40bd1221c9fa96f133a4ea4c49f2c732ee,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:12689c58126296eadc7e46ef53bd571e445459a42516711155470fe35c1ccd60],Size_:512530138,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=fb22dac8-0af7-4600-a427-5816a993b7a7 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:54.978370765Z" level=info msg="Creating container: openshift-network-diagnostics/network-check-target-b2mxx/network-check-target-container" id=6e6a4db4-5e20-431e-a172-1a85535c89ed name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:32:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:54.978457334Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 16:32:55 ip-10-0-136-68 systemd[1]: Started crio-conmon-fe70a374bcae05fe8150c9d4a15c3801bb27ec29a6b1a6cbc3dff282255d4fd5.scope. Feb 23 16:32:55 ip-10-0-136-68 systemd[1]: Started libcontainer container fe70a374bcae05fe8150c9d4a15c3801bb27ec29a6b1a6cbc3dff282255d4fd5. Feb 23 16:32:55 ip-10-0-136-68 systemd[1]: Started crio-conmon-204a046b518e2f02bc69f9a46fbff30b4be58969f38e2794165cb52982c844ef.scope. Feb 23 16:32:55 ip-10-0-136-68 systemd[1]: Started crio-conmon-04fe0d7cc7bdec48d56fb54f5485c0f26a6d06783320171b8f23144e242095d4.scope. Feb 23 16:32:55 ip-10-0-136-68 systemd[1]: Started libcontainer container 204a046b518e2f02bc69f9a46fbff30b4be58969f38e2794165cb52982c844ef. Feb 23 16:32:55 ip-10-0-136-68 systemd[1]: Started libcontainer container 04fe0d7cc7bdec48d56fb54f5485c0f26a6d06783320171b8f23144e242095d4. 
Feb 23 16:32:55 ip-10-0-136-68 rpm-ostree[2896]: Librepo version: 1.14.2 with CURL_GLOBAL_ACK_EINTR support (libcurl/7.61.1 OpenSSL/1.1.1k zlib/1.2.11 brotli/1.0.6 libidn2/2.2.0 libpsl/0.20.2 (+libidn2/2.2.0) libssh/0.9.6/openssl/zlib nghttp2/1.33.0) Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: 2023-02-23T16:32:54Z [verbose] Del: openshift-dns:dns-default-h4ftg:c072a683-1031-40cb-a1bc-1dac71bca46b:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","dns":{},"ipam":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxage":5,"logfile-maxbackups":5,"logfile-maxsize":100,"name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay"} Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: I0223 16:32:55.137597 4580 ovs.go:90] Maximum command line arguments set to: 191102 Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.191332921Z" level=info msg="Successfully cleaned up network for pod ff0a102645f986a9a470ba4bb4d46e818f68b24e5af270b266a49a64d8462689" Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.191363548Z" level=info msg="cleanup sandbox network" Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.192271025Z" level=info msg="Got pod network &{Name:dns-default-h4ftg Namespace:openshift-dns ID:cb072e675296b6a74623a0a4aa40ec554a26d6b1b7c32048297353c0abb8cb6e UID:c072a683-1031-40cb-a1bc-1dac71bca46b NetNS:/var/run/netns/9dd40d86-4a83-40d4-b955-0eaf699cad8c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.192298017Z" level=info msg="Adding pod openshift-dns_dns-default-h4ftg to CNI network \"multus-cni-network\" (type=multus)" Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.192355308Z" level=info msg="Got pod network &{Name:network-check-target-b2mxx Namespace:openshift-network-diagnostics ID:0c751590d84e3dc481ea416fe23ca37243e592f82e01975b9e82ac6743f91e9c 
UID:5acce570-9f3b-4dab-9fed-169a4c110f8c NetNS: Networks:[{Name:multus-cni-network Ifname:eth0}] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.192489568Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-b2mxx from CNI network \"multus-cni-network\" (type=multus)" Feb 23 16:32:55 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:55.221008 2112 generic.go:296] "Generic (PLEG): container finished" podID=2c47bc3e-0247-4d47-80e3-c168262e7976 containerID="535c93e83c702180e094716412b833a54504f2862ef061dfa60cb17eceda8038" exitCode=0 Feb 23 16:32:55 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:55.221253 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p9nj2" event=&{ID:2c47bc3e-0247-4d47-80e3-c168262e7976 Type:ContainerDied Data:535c93e83c702180e094716412b833a54504f2862ef061dfa60cb17eceda8038} Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.222591894Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5edef2b057f0fb062044d99c8bd3906443525e1bfe91574f66f1479ff5e26f66" id=76e2071d-99e6-4b2b-b9f3-186e84ce6edf name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:55 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:55.222991 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-b2mxx" event=&{ID:5acce570-9f3b-4dab-9fed-169a4c110f8c Type:ContainerStarted Data:5244fda541f5dc96d55eef69f3d234dc8c9aca31b74ca58c7fc4d50105b9467b} Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.223166737Z" level=info msg="Image status: 
&ImageStatusResponse{Image:&Image{Id:a051c2cfc108e960dd12d60bc4ee074be58ba53de890a6f33ab7bada80d30890,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5edef2b057f0fb062044d99c8bd3906443525e1bfe91574f66f1479ff5e26f66],Size_:476595411,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=76e2071d-99e6-4b2b-b9f3-186e84ce6edf name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:55 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:55.226404 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5hc5d" event=&{ID:9cd26ba5-46e4-40b5-81e6-74079153d58d Type:ContainerStarted Data:5fbd19a020a56df6e9b45ea33c752205e9859f69f19803347be97dbf88e8a4f4} Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.226776480Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5edef2b057f0fb062044d99c8bd3906443525e1bfe91574f66f1479ff5e26f66" id=b9288fde-4f4b-4dca-bf0e-f648f2a5cca9 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.227169098Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a051c2cfc108e960dd12d60bc4ee074be58ba53de890a6f33ab7bada80d30890,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5edef2b057f0fb062044d99c8bd3906443525e1bfe91574f66f1479ff5e26f66],Size_:476595411,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=b9288fde-4f4b-4dca-bf0e-f648f2a5cca9 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.228312894Z" level=info msg="Creating container: openshift-multus/multus-additional-cni-plugins-p9nj2/whereabouts-cni" id=0278f1a9-aaf9-475c-b023-bbcbaf3a5fa4 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.228415805Z" level=warning msg="Allowed annotations are specified for 
workload [io.containers.trace-syscall]" Feb 23 16:32:55 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:55.229068 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-p47qk" event=&{ID:a704838c-aeb5-4709-b91c-2460423203a4 Type:ContainerStarted Data:4072e0d3663d1947c88bd362e51de1a466d380546ca12a245e113666b5f4f1b4} Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.246064955Z" level=info msg="Created container 04fe0d7cc7bdec48d56fb54f5485c0f26a6d06783320171b8f23144e242095d4: openshift-multus/network-metrics-daemon-5hc5d/network-metrics-daemon" id=83d1be60-05d4-4b11-82ad-7dcda517686a name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.246468444Z" level=info msg="Starting container: 04fe0d7cc7bdec48d56fb54f5485c0f26a6d06783320171b8f23144e242095d4" id=2efb21b7-4373-492b-9791-934cb6b71b57 name=/runtime.v1.RuntimeService/StartContainer Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.275791608Z" level=info msg="Started container" PID=4637 containerID=04fe0d7cc7bdec48d56fb54f5485c0f26a6d06783320171b8f23144e242095d4 description=openshift-multus/network-metrics-daemon-5hc5d/network-metrics-daemon id=2efb21b7-4373-492b-9791-934cb6b71b57 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5fbd19a020a56df6e9b45ea33c752205e9859f69f19803347be97dbf88e8a4f4 Feb 23 16:32:55 ip-10-0-136-68 systemd[1]: run-runc-97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923-runc.XpQ7z3.mount: Succeeded. 
Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.278027684Z" level=info msg="Created container fe70a374bcae05fe8150c9d4a15c3801bb27ec29a6b1a6cbc3dff282255d4fd5: openshift-ingress-canary/ingress-canary-p47qk/serve-healthcheck-canary" id=fa1c5f55-7281-40c0-b81e-8c326c5bc0e7 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.280885873Z" level=info msg="Starting container: fe70a374bcae05fe8150c9d4a15c3801bb27ec29a6b1a6cbc3dff282255d4fd5" id=ab72a4a1-2873-4391-b7b5-674b452271b2 name=/runtime.v1.RuntimeService/StartContainer Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.309913723Z" level=info msg="Created container 204a046b518e2f02bc69f9a46fbff30b4be58969f38e2794165cb52982c844ef: openshift-network-diagnostics/network-check-target-b2mxx/network-check-target-container" id=6e6a4db4-5e20-431e-a172-1a85535c89ed name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.313773498Z" level=info msg="Starting container: 204a046b518e2f02bc69f9a46fbff30b4be58969f38e2794165cb52982c844ef" id=0f0a0f63-9286-4a61-849f-c487f5250126 name=/runtime.v1.RuntimeService/StartContainer Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.344153870Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=a4e59630-b8fd-482f-9dda-df349712d36d name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.344857150Z" level=info msg="Image status: 
&ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=a4e59630-b8fd-482f-9dda-df349712d36d name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.346221716Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=393b193d-fd5e-490c-8362-98b40563be60 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.346448521Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=393b193d-fd5e-490c-8362-98b40563be60 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.347778672Z" level=info msg="Creating container: openshift-multus/network-metrics-daemon-5hc5d/kube-rbac-proxy" id=a126c990-eca9-4224-aba9-419dffa4c0cd name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.347895314Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.351131878Z" level=info msg="Started container" PID=4635 containerID=fe70a374bcae05fe8150c9d4a15c3801bb27ec29a6b1a6cbc3dff282255d4fd5 
description=openshift-ingress-canary/ingress-canary-p47qk/serve-healthcheck-canary id=ab72a4a1-2873-4391-b7b5-674b452271b2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4072e0d3663d1947c88bd362e51de1a466d380546ca12a245e113666b5f4f1b4 Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.387228285Z" level=info msg="Started container" PID=4636 containerID=204a046b518e2f02bc69f9a46fbff30b4be58969f38e2794165cb52982c844ef description=openshift-network-diagnostics/network-check-target-b2mxx/network-check-target-container id=0f0a0f63-9286-4a61-849f-c487f5250126 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5244fda541f5dc96d55eef69f3d234dc8c9aca31b74ca58c7fc4d50105b9467b Feb 23 16:32:55 ip-10-0-136-68 systemd[1]: Started crio-conmon-70f2876e293edd1469e1953225a9c5599e6f1cf5214954150ed61bf2835faa16.scope. Feb 23 16:32:55 ip-10-0-136-68 systemd[1]: run-runc-70f2876e293edd1469e1953225a9c5599e6f1cf5214954150ed61bf2835faa16-runc.qymoFc.mount: Succeeded. Feb 23 16:32:55 ip-10-0-136-68 systemd[1]: Started libcontainer container 70f2876e293edd1469e1953225a9c5599e6f1cf5214954150ed61bf2835faa16. Feb 23 16:32:55 ip-10-0-136-68 systemd[1]: Started crio-conmon-264e58a1f4919a17479220f33c03324c89f2a710dbd87a3b3a01852696d2d53a.scope. 
Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.566995364Z" level=info msg="Created container 70f2876e293edd1469e1953225a9c5599e6f1cf5214954150ed61bf2835faa16: openshift-multus/multus-additional-cni-plugins-p9nj2/whereabouts-cni" id=0278f1a9-aaf9-475c-b023-bbcbaf3a5fa4 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.567932843Z" level=info msg="Starting container: 70f2876e293edd1469e1953225a9c5599e6f1cf5214954150ed61bf2835faa16" id=1f434160-7262-4b43-8886-4b7d0fa66cdd name=/runtime.v1.RuntimeService/StartContainer Feb 23 16:32:55 ip-10-0-136-68 systemd-udevd[4800]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. Feb 23 16:32:55 ip-10-0-136-68 systemd-udevd[4800]: Could not generate persistent MAC address for cb072e675296b6a: No such file or directory Feb 23 16:32:55 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): cb072e675296b6a: link is not ready Feb 23 16:32:55 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cb072e675296b6a: link becomes ready Feb 23 16:32:55 ip-10-0-136-68 NetworkManager[1147]: [1677169975.5796] manager: (cb072e675296b6a): new Veth device (/org/freedesktop/NetworkManager/Devices/32) Feb 23 16:32:55 ip-10-0-136-68 NetworkManager[1147]: [1677169975.5804] device (cb072e675296b6a): carrier: link connected Feb 23 16:32:55 ip-10-0-136-68 systemd[1]: Started libcontainer container 264e58a1f4919a17479220f33c03324c89f2a710dbd87a3b3a01852696d2d53a. 
Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.607408662Z" level=info msg="Started container" PID=4780 containerID=70f2876e293edd1469e1953225a9c5599e6f1cf5214954150ed61bf2835faa16 description=openshift-multus/multus-additional-cni-plugins-p9nj2/whereabouts-cni id=1f434160-7262-4b43-8886-4b7d0fa66cdd name=/runtime.v1.RuntimeService/StartContainer sandboxID=9b5e394fefb22e13e7f5e1168da0a4549b32227605538c6a083eba02c051f962 Feb 23 16:32:55 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00073|bridge|INFO|bridge br-int: added interface cb072e675296b6a on port 11 Feb 23 16:32:55 ip-10-0-136-68 kernel: device cb072e675296b6a entered promiscuous mode Feb 23 16:32:55 ip-10-0-136-68 NetworkManager[1147]: [1677169975.6293] manager: (cb072e675296b6a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33) Feb 23 16:32:55 ip-10-0-136-68 systemd[1]: crio-70f2876e293edd1469e1953225a9c5599e6f1cf5214954150ed61bf2835faa16.scope: Succeeded. Feb 23 16:32:55 ip-10-0-136-68 systemd[1]: crio-70f2876e293edd1469e1953225a9c5599e6f1cf5214954150ed61bf2835faa16.scope: Consumed 43ms CPU time Feb 23 16:32:55 ip-10-0-136-68 systemd[1]: crio-conmon-70f2876e293edd1469e1953225a9c5599e6f1cf5214954150ed61bf2835faa16.scope: Succeeded. 
Feb 23 16:32:55 ip-10-0-136-68 systemd[1]: crio-conmon-70f2876e293edd1469e1953225a9c5599e6f1cf5214954150ed61bf2835faa16.scope: Consumed 30ms CPU time Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.675748735Z" level=info msg="Created container 264e58a1f4919a17479220f33c03324c89f2a710dbd87a3b3a01852696d2d53a: openshift-multus/network-metrics-daemon-5hc5d/kube-rbac-proxy" id=a126c990-eca9-4224-aba9-419dffa4c0cd name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.676396199Z" level=info msg="Starting container: 264e58a1f4919a17479220f33c03324c89f2a710dbd87a3b3a01852696d2d53a" id=015874a2-5759-44b0-a65c-758ebdc4c737 name=/runtime.v1.RuntimeService/StartContainer Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: 2023-02-23T16:32:55Z [verbose] Del: openshift-network-diagnostics:network-check-target-b2mxx:5acce570-9f3b-4dab-9fed-169a4c110f8c:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","dns":{},"ipam":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxage":5,"logfile-maxbackups":5,"logfile-maxsize":100,"name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay"} Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: I0223 16:32:55.605990 4677 ovs.go:90] Maximum command line arguments set to: 191102 Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.695002600Z" level=info msg="Successfully cleaned up network for pod 0c751590d84e3dc481ea416fe23ca37243e592f82e01975b9e82ac6743f91e9c" Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.695034720Z" level=info msg="cleanup sandbox network" Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.695044665Z" level=info msg="Successfully cleaned up network for pod c4770f7d080720c7718c7bc24396b3024d3f2c4a814b8d4e347d5deb8d6959b6" Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.695054265Z" level=info msg="cleanup sandbox network" Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: 
time="2023-02-23 16:32:55.695062704Z" level=info msg="Successfully cleaned up network for pod 02a316932c00fb6bdfefc407d2c59b98b8c247a18e3653917b91a6127f62d7df" Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.695073016Z" level=info msg="cleanup sandbox network" Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.695081620Z" level=info msg="Successfully cleaned up network for pod 2d4ebea480481fe0194b5656ac47e06df6cd5d12530000111fd05928c843d523" Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.695090797Z" level=info msg="cleanup sandbox network" Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.695098114Z" level=info msg="Successfully cleaned up network for pod 19bce822018f6e51c5cbfd0e2e3748745954602abb6a774120b1f9e6f36281ae" Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.695107860Z" level=info msg="cleanup sandbox network" Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.695872226Z" level=info msg="Got pod network &{Name:ingress-canary-p47qk Namespace:openshift-ingress-canary ID:35539c92883319ba9303dff4d62e621858c65895d17a031283ea570d6c0ffd51 UID:a704838c-aeb5-4709-b91c-2460423203a4 NetNS: Networks:[{Name:multus-cni-network Ifname:eth0}] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.695965520Z" level=info msg="Started container" PID=4826 containerID=264e58a1f4919a17479220f33c03324c89f2a710dbd87a3b3a01852696d2d53a description=openshift-multus/network-metrics-daemon-5hc5d/kube-rbac-proxy id=015874a2-5759-44b0-a65c-758ebdc4c737 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5fbd19a020a56df6e9b45ea33c752205e9859f69f19803347be97dbf88e8a4f4 Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.696041247Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-p47qk from CNI network 
\"multus-cni-network\" (type=multus)" Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: I0223 16:32:55.558747 4676 ovs.go:90] Maximum command line arguments set to: 191102 Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: 2023-02-23T16:32:55Z [verbose] Add: openshift-dns:dns-default-h4ftg:c072a683-1031-40cb-a1bc-1dac71bca46b:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"cb072e675296b6a","mac":"fa:e1:92:b7:de:eb"},{"name":"eth0","mac":"0a:58:0a:81:02:06","sandbox":"/var/run/netns/9dd40d86-4a83-40d4-b955-0eaf699cad8c"}],"ips":[{"version":"4","interface":1,"address":"10.129.2.6/23","gateway":"10.129.2.1"}],"dns":{}} Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: I0223 16:32:55.720695 4657 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-dns", Name:"dns-default-h4ftg", UID:"c072a683-1031-40cb-a1bc-1dac71bca46b", APIVersion:"v1", ResourceVersion:"44963", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.129.2.6/23] from ovn-kubernetes Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.740933382Z" level=info msg="Got pod network &{Name:dns-default-h4ftg Namespace:openshift-dns ID:cb072e675296b6a74623a0a4aa40ec554a26d6b1b7c32048297353c0abb8cb6e UID:c072a683-1031-40cb-a1bc-1dac71bca46b NetNS:/var/run/netns/9dd40d86-4a83-40d4-b955-0eaf699cad8c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.741091664Z" level=info msg="Checking pod openshift-dns_dns-default-h4ftg for CNI network multus-cni-network (type=multus)" Feb 23 16:32:55 ip-10-0-136-68 kubenswrapper[2112]: W0223 16:32:55.744879 2112 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc072a683_1031_40cb_a1bc_1dac71bca46b.slice/crio-cb072e675296b6a74623a0a4aa40ec554a26d6b1b7c32048297353c0abb8cb6e.scope WatchSource:0}: Error finding 
container cb072e675296b6a74623a0a4aa40ec554a26d6b1b7c32048297353c0abb8cb6e: Status 404 returned error can't find the container with id cb072e675296b6a74623a0a4aa40ec554a26d6b1b7c32048297353c0abb8cb6e Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.747229869Z" level=info msg="Ran pod sandbox cb072e675296b6a74623a0a4aa40ec554a26d6b1b7c32048297353c0abb8cb6e with infra container: openshift-dns/dns-default-h4ftg/POD" id=65c4060c-5105-4dff-a1cb-3be5138805f4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.748255757Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e8ad4781031b2bada8fe04aba3ba732530afa9b048c6e874934bc4bfefac8be" id=ee015183-fec0-493a-89e6-967481f5a7d3 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.748450369Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:cfff9cbcc1f35a742dfed618d177db6bcfa2a1dc53d3f92391463dfd25565a0c,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e8ad4781031b2bada8fe04aba3ba732530afa9b048c6e874934bc4bfefac8be],Size_:417970927,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=ee015183-fec0-493a-89e6-967481f5a7d3 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.749418563Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e8ad4781031b2bada8fe04aba3ba732530afa9b048c6e874934bc4bfefac8be" id=bf386f71-3727-405f-aa6e-facfa7739905 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.749576634Z" level=info msg="Image status: 
&ImageStatusResponse{Image:&Image{Id:cfff9cbcc1f35a742dfed618d177db6bcfa2a1dc53d3f92391463dfd25565a0c,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e8ad4781031b2bada8fe04aba3ba732530afa9b048c6e874934bc4bfefac8be],Size_:417970927,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=bf386f71-3727-405f-aa6e-facfa7739905 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.750317647Z" level=info msg="Creating container: openshift-dns/dns-default-h4ftg/dns" id=3d94013f-da06-45d8-9784-12501f7c2155 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.750422067Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 16:32:55 ip-10-0-136-68 systemd[1]: Started crio-conmon-2badf436a3d373d56e3424a33e20c905dc6d2e06be3ab3526672b28202f77dc8.scope. Feb 23 16:32:55 ip-10-0-136-68 systemd[1]: Started libcontainer container 2badf436a3d373d56e3424a33e20c905dc6d2e06be3ab3526672b28202f77dc8. 
Feb 23 16:32:55 ip-10-0-136-68 rpm-ostree[2896]: Pruned container image layers: 0 Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: 2023-02-23T16:32:55Z [verbose] Del: openshift-ingress-canary:ingress-canary-p47qk:a704838c-aeb5-4709-b91c-2460423203a4:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","dns":{},"ipam":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxage":5,"logfile-maxbackups":5,"logfile-maxsize":100,"name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay"} Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: I0223 16:32:55.904568 4907 ovs.go:90] Maximum command line arguments set to: 191102 Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.950386611Z" level=info msg="Successfully cleaned up network for pod 35539c92883319ba9303dff4d62e621858c65895d17a031283ea570d6c0ffd51" Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.950419190Z" level=info msg="cleanup sandbox network" Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.951151398Z" level=info msg="Got pod network &{Name:network-metrics-daemon-5hc5d Namespace:openshift-multus ID:f879576786b088968433f4f9b9154c2d899bf400bae1da485c2314f3fec845f3 UID:9cd26ba5-46e4-40b5-81e6-74079153d58d NetNS: Networks:[{Name:multus-cni-network Ifname:eth0}] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.951327369Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-5hc5d from CNI network \"multus-cni-network\" (type=multus)" Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.971279546Z" level=info msg="Created container 2badf436a3d373d56e3424a33e20c905dc6d2e06be3ab3526672b28202f77dc8: openshift-dns/dns-default-h4ftg/dns" id=3d94013f-da06-45d8-9784-12501f7c2155 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.971765869Z" 
level=info msg="Starting container: 2badf436a3d373d56e3424a33e20c905dc6d2e06be3ab3526672b28202f77dc8" id=f59dc5ee-e5c6-4934-8f2f-2224b9a3977e name=/runtime.v1.RuntimeService/StartContainer Feb 23 16:32:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:55.983154311Z" level=info msg="Started container" PID=4978 containerID=2badf436a3d373d56e3424a33e20c905dc6d2e06be3ab3526672b28202f77dc8 description=openshift-dns/dns-default-h4ftg/dns id=f59dc5ee-e5c6-4934-8f2f-2224b9a3977e name=/runtime.v1.RuntimeService/StartContainer sandboxID=cb072e675296b6a74623a0a4aa40ec554a26d6b1b7c32048297353c0abb8cb6e Feb 23 16:32:56 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:56.037439535Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=b6f27063-0476-400f-9fee-68b817165d06 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:56 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:56.037746081Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=b6f27063-0476-400f-9fee-68b817165d06 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:56 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:56.038617839Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=67cb3ec1-f556-4d38-be45-ef04a916753c name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:56 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:56.038851084Z" level=info msg="Image status: 
&ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=67cb3ec1-f556-4d38-be45-ef04a916753c name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:56 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:56.039921847Z" level=info msg="Creating container: openshift-dns/dns-default-h4ftg/kube-rbac-proxy" id=5a3edfbe-1de5-47bb-940a-46751e4e5fab name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:32:56 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:56.040118913Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 16:32:56 ip-10-0-136-68 systemd[1]: Started crio-conmon-0d048668d6ed4fec51f73d95330a6df3b517ae95afe1dc32e0a7e5966a3badf7.scope. Feb 23 16:32:56 ip-10-0-136-68 systemd[1]: Started libcontainer container 0d048668d6ed4fec51f73d95330a6df3b517ae95afe1dc32e0a7e5966a3badf7. 
Feb 23 16:32:56 ip-10-0-136-68 crio[2062]: 2023-02-23T16:32:55Z [verbose] Del: openshift-multus:network-metrics-daemon-5hc5d:9cd26ba5-46e4-40b5-81e6-74079153d58d:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","dns":{},"ipam":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxage":5,"logfile-maxbackups":5,"logfile-maxsize":100,"name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay"} Feb 23 16:32:56 ip-10-0-136-68 crio[2062]: I0223 16:32:56.143018 5010 ovs.go:90] Maximum command line arguments set to: 191102 Feb 23 16:32:56 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:56.218910235Z" level=info msg="Successfully cleaned up network for pod f879576786b088968433f4f9b9154c2d899bf400bae1da485c2314f3fec845f3" Feb 23 16:32:56 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:56.218942816Z" level=info msg="cleanup sandbox network" Feb 23 16:32:56 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:56.218950956Z" level=info msg="Successfully cleaned up network for pod 01ac120e6f0fdd3040e8bdaa8e582520e75a16d62910ceec0a560196072d627a" Feb 23 16:32:56 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:56.231624 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5hc5d" event=&{ID:9cd26ba5-46e4-40b5-81e6-74079153d58d Type:ContainerStarted Data:264e58a1f4919a17479220f33c03324c89f2a710dbd87a3b3a01852696d2d53a} Feb 23 16:32:56 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:56.231690 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5hc5d" event=&{ID:9cd26ba5-46e4-40b5-81e6-74079153d58d Type:ContainerStarted Data:04fe0d7cc7bdec48d56fb54f5485c0f26a6d06783320171b8f23144e242095d4} Feb 23 16:32:56 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:56.233376 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-p47qk" event=&{ID:a704838c-aeb5-4709-b91c-2460423203a4 Type:ContainerStarted 
Data:fe70a374bcae05fe8150c9d4a15c3801bb27ec29a6b1a6cbc3dff282255d4fd5} Feb 23 16:32:56 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:56.235485 2112 generic.go:296] "Generic (PLEG): container finished" podID=2c47bc3e-0247-4d47-80e3-c168262e7976 containerID="70f2876e293edd1469e1953225a9c5599e6f1cf5214954150ed61bf2835faa16" exitCode=0 Feb 23 16:32:56 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:56.235534 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p9nj2" event=&{ID:2c47bc3e-0247-4d47-80e3-c168262e7976 Type:ContainerDied Data:70f2876e293edd1469e1953225a9c5599e6f1cf5214954150ed61bf2835faa16} Feb 23 16:32:56 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:56.236171957Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af74b3b3f154a6e6124da5499d2d5b52c0d3cfd8f092df598e1d9ca1ada07b94" id=4aa7c243-18ec-4635-8e22-e4d76a29f484 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:56 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:56.236336364Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:3040fba25f1de00fc7180165bb6fe53ee7a27a50b0d5da5af3a7e0d26700e224,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af74b3b3f154a6e6124da5499d2d5b52c0d3cfd8f092df598e1d9ca1ada07b94],Size_:487631698,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=4aa7c243-18ec-4635-8e22-e4d76a29f484 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:56 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:56.236816524Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af74b3b3f154a6e6124da5499d2d5b52c0d3cfd8f092df598e1d9ca1ada07b94" id=bee49aef-7142-4a76-9595-60430777b956 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:56 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:56.236930331Z" level=info msg="Image status: 
&ImageStatusResponse{Image:&Image{Id:3040fba25f1de00fc7180165bb6fe53ee7a27a50b0d5da5af3a7e0d26700e224,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af74b3b3f154a6e6124da5499d2d5b52c0d3cfd8f092df598e1d9ca1ada07b94],Size_:487631698,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=bee49aef-7142-4a76-9595-60430777b956 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:32:56 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:56.237018 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-h4ftg" event=&{ID:c072a683-1031-40cb-a1bc-1dac71bca46b Type:ContainerStarted Data:2badf436a3d373d56e3424a33e20c905dc6d2e06be3ab3526672b28202f77dc8} Feb 23 16:32:56 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:56.237045 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-h4ftg" event=&{ID:c072a683-1031-40cb-a1bc-1dac71bca46b Type:ContainerStarted Data:cb072e675296b6a74623a0a4aa40ec554a26d6b1b7c32048297353c0abb8cb6e} Feb 23 16:32:56 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:56.237460426Z" level=info msg="Creating container: openshift-multus/multus-additional-cni-plugins-p9nj2/kube-multus-additional-cni-plugins" id=3220c3f7-1fad-4459-bd5e-ada46fab3c37 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:32:56 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:56.237566418Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 16:32:56 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:56.239211 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-b2mxx" event=&{ID:5acce570-9f3b-4dab-9fed-169a4c110f8c Type:ContainerStarted Data:204a046b518e2f02bc69f9a46fbff30b4be58969f38e2794165cb52982c844ef} Feb 23 16:32:56 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:56.239302 2112 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-network-diagnostics/network-check-target-b2mxx" Feb 23 16:32:56 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:56.251627567Z" level=info msg="Created container 0d048668d6ed4fec51f73d95330a6df3b517ae95afe1dc32e0a7e5966a3badf7: openshift-dns/dns-default-h4ftg/kube-rbac-proxy" id=5a3edfbe-1de5-47bb-940a-46751e4e5fab name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:32:56 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:56.251926717Z" level=info msg="Starting container: 0d048668d6ed4fec51f73d95330a6df3b517ae95afe1dc32e0a7e5966a3badf7" id=5b52e95d-651f-497a-a838-9d662573d170 name=/runtime.v1.RuntimeService/StartContainer Feb 23 16:32:56 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:56.278202177Z" level=info msg="Started container" PID=5049 containerID=0d048668d6ed4fec51f73d95330a6df3b517ae95afe1dc32e0a7e5966a3badf7 description=openshift-dns/dns-default-h4ftg/kube-rbac-proxy id=5b52e95d-651f-497a-a838-9d662573d170 name=/runtime.v1.RuntimeService/StartContainer sandboxID=cb072e675296b6a74623a0a4aa40ec554a26d6b1b7c32048297353c0abb8cb6e Feb 23 16:32:56 ip-10-0-136-68 systemd[1]: Started crio-conmon-638ba010408e61c882c6347eee020efdaa9faa54cd24013e798cc9389080ecc5.scope. Feb 23 16:32:56 ip-10-0-136-68 systemd[1]: Started libcontainer container 638ba010408e61c882c6347eee020efdaa9faa54cd24013e798cc9389080ecc5. 
Feb 23 16:32:56 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:56.419443457Z" level=info msg="Created container 638ba010408e61c882c6347eee020efdaa9faa54cd24013e798cc9389080ecc5: openshift-multus/multus-additional-cni-plugins-p9nj2/kube-multus-additional-cni-plugins" id=3220c3f7-1fad-4459-bd5e-ada46fab3c37 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:32:56 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:56.419905359Z" level=info msg="Starting container: 638ba010408e61c882c6347eee020efdaa9faa54cd24013e798cc9389080ecc5" id=0d54feaf-e55d-4270-90d5-2a2c9b119c5d name=/runtime.v1.RuntimeService/StartContainer Feb 23 16:32:56 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:32:56.430747734Z" level=info msg="Started container" PID=5102 containerID=638ba010408e61c882c6347eee020efdaa9faa54cd24013e798cc9389080ecc5 description=openshift-multus/multus-additional-cni-plugins-p9nj2/kube-multus-additional-cni-plugins id=0d54feaf-e55d-4270-90d5-2a2c9b119c5d name=/runtime.v1.RuntimeService/StartContainer sandboxID=9b5e394fefb22e13e7f5e1168da0a4549b32227605538c6a083eba02c051f962 Feb 23 16:32:57 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:57.242056 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-h4ftg" event=&{ID:c072a683-1031-40cb-a1bc-1dac71bca46b Type:ContainerStarted Data:0d048668d6ed4fec51f73d95330a6df3b517ae95afe1dc32e0a7e5966a3badf7} Feb 23 16:32:57 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:57.242178 2112 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-h4ftg" Feb 23 16:32:57 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:32:57.244746 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p9nj2" event=&{ID:2c47bc3e-0247-4d47-80e3-c168262e7976 Type:ContainerStarted Data:638ba010408e61c882c6347eee020efdaa9faa54cd24013e798cc9389080ecc5} Feb 23 16:32:58 ip-10-0-136-68 ovs-vswitchd[1105]: 
ovs|00074|connmgr|INFO|br-int<->unix#2: 2002 flow_mods in the 8 s starting 10 s ago (1981 adds, 21 deletes) Feb 23 16:32:59 ip-10-0-136-68 rpm-ostree[2896]: Txn Cleanup on /org/projectatomic/rpmostree1/rhcos successful Feb 23 16:32:59 ip-10-0-136-68 rpm-ostree[2896]: Unlocked sysroot Feb 23 16:32:59 ip-10-0-136-68 rpm-ostree[2896]: Process [pid: 3796 uid: 0 unit: crio-66d294f1bd164afef67a512f0c741a2a0fdca6f95539ae28a8abb059b9814065.scope] disconnected from transaction progress Feb 23 16:32:59 ip-10-0-136-68 rpm-ostree[2896]: client(id:machine-config-operator dbus:1.162 unit:crio-66d294f1bd164afef67a512f0c741a2a0fdca6f95539ae28a8abb059b9814065.scope uid:0) vanished; remaining=0 Feb 23 16:32:59 ip-10-0-136-68 rpm-ostree[2896]: In idle state; will auto-exit in 63 seconds Feb 23 16:32:59 ip-10-0-136-68 root[5152]: machine-config-daemon[2377]: Disk currentConfig rendered-worker-897f2f3c67d20d57713bd47f68251b36 overrides node's currentConfig annotation rendered-worker-bf4ec4b50e39e8c4372076d80a2ff138 Feb 23 16:32:59 ip-10-0-136-68 systemd[1]: run-runc-97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923-runc.m4QQ8O.mount: Succeeded. 
Feb 23 16:33:00 ip-10-0-136-68 rpm-ostree[2896]: client(id:machine-config-operator dbus:1.186 unit:crio-66d294f1bd164afef67a512f0c741a2a0fdca6f95539ae28a8abb059b9814065.scope uid:0) added; new total=1 Feb 23 16:33:00 ip-10-0-136-68 rpm-ostree[2896]: client(id:machine-config-operator dbus:1.186 unit:crio-66d294f1bd164afef67a512f0c741a2a0fdca6f95539ae28a8abb059b9814065.scope uid:0) vanished; remaining=0 Feb 23 16:33:00 ip-10-0-136-68 rpm-ostree[2896]: In idle state; will auto-exit in 60 seconds Feb 23 16:33:00 ip-10-0-136-68 root[5182]: machine-config-daemon[2377]: Validated on-disk state Feb 23 16:33:06 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:06.552904 2112 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-h4ftg" Feb 23 16:33:06 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:06.792282 2112 kubelet_node_status.go:590] "Recording event message for node" node="ip-10-0-136-68.us-west-2.compute.internal" event="NodeSchedulable" Feb 23 16:33:09 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00075|connmgr|INFO|br-ex<->unix#13: 2 flow_mods in the last 0 s (2 adds) Feb 23 16:33:09 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:09.749906435Z" level=warning msg="Found defunct process with PID 4066 (pool)" Feb 23 16:33:09 ip-10-0-136-68 systemd[1]: run-runc-97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923-runc.rOGiup.mount: Succeeded. Feb 23 16:33:11 ip-10-0-136-68 root[5301]: machine-config-daemon[2377]: Update completed for config rendered-worker-897f2f3c67d20d57713bd47f68251b36 and node has been successfully uncordoned Feb 23 16:33:11 ip-10-0-136-68 logger[5302]: rendered-worker-897f2f3c67d20d57713bd47f68251b36 Feb 23 16:33:11 ip-10-0-136-68 systemd[1]: systemd-hostnamed.service: Succeeded. 
Feb 23 16:33:11 ip-10-0-136-68 systemd[1]: systemd-hostnamed.service: Consumed 44ms CPU time Feb 23 16:33:14 ip-10-0-136-68 systemd[1]: run-runc-97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923-runc.dDnmSZ.mount: Succeeded. Feb 23 16:33:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:16.892095 2112 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-network-diagnostics/network-check-source-6d479699bc-cppvx] Feb 23 16:33:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:16.892137 2112 topology_manager.go:205] "Topology Admit Handler" Feb 23 16:33:16 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-pod1c1afc56_9a13_4fc0_ac79_ec4ea0ebccb6.slice. Feb 23 16:33:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:16.907148 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl9v2\" (UniqueName: \"kubernetes.io/projected/1c1afc56-9a13-4fc0-ac79-ec4ea0ebccb6-kube-api-access-jl9v2\") pod \"network-check-source-6d479699bc-cppvx\" (UID: \"1c1afc56-9a13-4fc0-ac79-ec4ea0ebccb6\") " pod="openshift-network-diagnostics/network-check-source-6d479699bc-cppvx" Feb 23 16:33:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:16.939391 2112 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-monitoring/prometheus-operator-admission-webhook-6854f48657-f548j] Feb 23 16:33:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:16.939436 2112 topology_manager.go:205] "Topology Admit Handler" Feb 23 16:33:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:16.942703 2112 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-monitoring/prometheus-adapter-849c9bc779-55gw7] Feb 23 16:33:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:16.942737 2112 topology_manager.go:205] "Topology Admit Handler" Feb 23 16:33:16 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-pod3ff1ba18_ee4b_4151_95d3_ad4742635d6b.slice. 
Feb 23 16:33:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:16.953771 2112 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-ingress/router-default-77f788594f-j5twb] Feb 23 16:33:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:16.953801 2112 topology_manager.go:205] "Topology Admit Handler" Feb 23 16:33:16 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-podbac56f54_5b00_421f_b735_a8a998208173.slice. Feb 23 16:33:16 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-pode7ec9547_ee4c_4966_997f_719d78dcc31b.slice. Feb 23 16:33:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:16.977895 2112 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-monitoring/thanos-querier-8654d9f96d-8g56r] Feb 23 16:33:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:16.977928 2112 topology_manager.go:205] "Topology Admit Handler" Feb 23 16:33:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:16.979156 2112 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-image-registry/image-registry-5f79c9c848-8klpv] Feb 23 16:33:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:16.979184 2112 topology_manager.go:205] "Topology Admit Handler" Feb 23 16:33:16 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-poda762f29d_1a7e_4d73_9c04_8d5fbbe65b32.slice. Feb 23 16:33:16 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-pod84367d42_9f7a_49fb_9aab_aa7bc958829f.slice. 
Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.007504 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-querier-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32-thanos-querier-trusted-ca-bundle\") pod \"thanos-querier-8654d9f96d-8g56r\" (UID: \"a762f29d-1a7e-4d73-9c04-8d5fbbe65b32\") " pod="openshift-monitoring/thanos-querier-8654d9f96d-8g56r" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.007535 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/84367d42-9f7a-49fb-9aab-aa7bc958829f-bound-sa-token\") pod \"image-registry-5f79c9c848-8klpv\" (UID: \"84367d42-9f7a-49fb-9aab-aa7bc958829f\") " pod="openshift-image-registry/image-registry-5f79c9c848-8klpv" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.007557 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-8654d9f96d-8g56r\" (UID: \"a762f29d-1a7e-4d73-9c04-8d5fbbe65b32\") " pod="openshift-monitoring/thanos-querier-8654d9f96d-8g56r" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.007587 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls\" (UniqueName: \"kubernetes.io/secret/bac56f54-5b00-421f-b735-a8a998208173-tls\") pod \"prometheus-adapter-849c9bc779-55gw7\" (UID: \"bac56f54-5b00-421f-b735-a8a998208173\") " pod="openshift-monitoring/prometheus-adapter-849c9bc779-55gw7" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.007620 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/84367d42-9f7a-49fb-9aab-aa7bc958829f-registry-tls\") pod \"image-registry-5f79c9c848-8klpv\" (UID: \"84367d42-9f7a-49fb-9aab-aa7bc958829f\") " pod="openshift-image-registry/image-registry-5f79c9c848-8klpv" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.007690 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e7ec9547-ee4c-4966-997f-719d78dcc31b-metrics-certs\") pod \"router-default-77f788594f-j5twb\" (UID: \"e7ec9547-ee4c-4966-997f-719d78dcc31b\") " pod="openshift-ingress/router-default-77f788594f-j5twb" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.007751 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32-secret-thanos-querier-tls\") pod \"thanos-querier-8654d9f96d-8g56r\" (UID: \"a762f29d-1a7e-4d73-9c04-8d5fbbe65b32\") " pod="openshift-monitoring/thanos-querier-8654d9f96d-8g56r" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.007831 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-8654d9f96d-8g56r\" (UID: \"a762f29d-1a7e-4d73-9c04-8d5fbbe65b32\") " pod="openshift-monitoring/thanos-querier-8654d9f96d-8g56r" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.007870 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-8654d9f96d-8g56r\" (UID: \"a762f29d-1a7e-4d73-9c04-8d5fbbe65b32\") 
" pod="openshift-monitoring/thanos-querier-8654d9f96d-8g56r" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.007905 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/bac56f54-5b00-421f-b735-a8a998208173-audit-log\") pod \"prometheus-adapter-849c9bc779-55gw7\" (UID: \"bac56f54-5b00-421f-b735-a8a998208173\") " pod="openshift-monitoring/prometheus-adapter-849c9bc779-55gw7" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.007936 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/84367d42-9f7a-49fb-9aab-aa7bc958829f-ca-trust-extracted\") pod \"image-registry-5f79c9c848-8klpv\" (UID: \"84367d42-9f7a-49fb-9aab-aa7bc958829f\") " pod="openshift-image-registry/image-registry-5f79c9c848-8klpv" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.007968 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/e7ec9547-ee4c-4966-997f-719d78dcc31b-stats-auth\") pod \"router-default-77f788594f-j5twb\" (UID: \"e7ec9547-ee4c-4966-997f-719d78dcc31b\") " pod="openshift-ingress/router-default-77f788594f-j5twb" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.008012 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-jl9v2\" (UniqueName: \"kubernetes.io/projected/1c1afc56-9a13-4fc0-ac79-ec4ea0ebccb6-kube-api-access-jl9v2\") pod \"network-check-source-6d479699bc-cppvx\" (UID: \"1c1afc56-9a13-4fc0-ac79-ec4ea0ebccb6\") " pod="openshift-network-diagnostics/network-check-source-6d479699bc-cppvx" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.008050 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: 
\"kubernetes.io/configmap/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32-metrics-client-ca\") pod \"thanos-querier-8654d9f96d-8g56r\" (UID: \"a762f29d-1a7e-4d73-9c04-8d5fbbe65b32\") " pod="openshift-monitoring/thanos-querier-8654d9f96d-8g56r" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.008084 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fskbt\" (UniqueName: \"kubernetes.io/projected/bac56f54-5b00-421f-b735-a8a998208173-kube-api-access-fskbt\") pod \"prometheus-adapter-849c9bc779-55gw7\" (UID: \"bac56f54-5b00-421f-b735-a8a998208173\") " pod="openshift-monitoring/prometheus-adapter-849c9bc779-55gw7" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.008128 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-adapter-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/bac56f54-5b00-421f-b735-a8a998208173-prometheus-adapter-audit-profiles\") pod \"prometheus-adapter-849c9bc779-55gw7\" (UID: \"bac56f54-5b00-421f-b735-a8a998208173\") " pod="openshift-monitoring/prometheus-adapter-849c9bc779-55gw7" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.008164 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/84367d42-9f7a-49fb-9aab-aa7bc958829f-image-registry-private-configuration\") pod \"image-registry-5f79c9c848-8klpv\" (UID: \"84367d42-9f7a-49fb-9aab-aa7bc958829f\") " pod="openshift-image-registry/image-registry-5f79c9c848-8klpv" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.008238 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-oauth-cookie\" (UniqueName: \"kubernetes.io/secret/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32-secret-thanos-querier-oauth-cookie\") pod \"thanos-querier-8654d9f96d-8g56r\" (UID: 
\"a762f29d-1a7e-4d73-9c04-8d5fbbe65b32\") " pod="openshift-monitoring/thanos-querier-8654d9f96d-8g56r" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.008301 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/bac56f54-5b00-421f-b735-a8a998208173-tmpfs\") pod \"prometheus-adapter-849c9bc779-55gw7\" (UID: \"bac56f54-5b00-421f-b735-a8a998208173\") " pod="openshift-monitoring/prometheus-adapter-849c9bc779-55gw7" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.008323 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-adapter-prometheus-config\" (UniqueName: \"kubernetes.io/configmap/bac56f54-5b00-421f-b735-a8a998208173-prometheus-adapter-prometheus-config\") pod \"prometheus-adapter-849c9bc779-55gw7\" (UID: \"bac56f54-5b00-421f-b735-a8a998208173\") " pod="openshift-monitoring/prometheus-adapter-849c9bc779-55gw7" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.008340 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/84367d42-9f7a-49fb-9aab-aa7bc958829f-installation-pull-secrets\") pod \"image-registry-5f79c9c848-8klpv\" (UID: \"84367d42-9f7a-49fb-9aab-aa7bc958829f\") " pod="openshift-image-registry/image-registry-5f79c9c848-8klpv" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.008368 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjhwd\" (UniqueName: \"kubernetes.io/projected/84367d42-9f7a-49fb-9aab-aa7bc958829f-kube-api-access-sjhwd\") pod \"image-registry-5f79c9c848-8klpv\" (UID: \"84367d42-9f7a-49fb-9aab-aa7bc958829f\") " pod="openshift-image-registry/image-registry-5f79c9c848-8klpv" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.008437 2112 reconciler.go:357] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lhh6\" (UniqueName: \"kubernetes.io/projected/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32-kube-api-access-6lhh6\") pod \"thanos-querier-8654d9f96d-8g56r\" (UID: \"a762f29d-1a7e-4d73-9c04-8d5fbbe65b32\") " pod="openshift-monitoring/thanos-querier-8654d9f96d-8g56r" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.008484 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bac56f54-5b00-421f-b735-a8a998208173-serving-certs-ca-bundle\") pod \"prometheus-adapter-849c9bc779-55gw7\" (UID: \"bac56f54-5b00-421f-b735-a8a998208173\") " pod="openshift-monitoring/prometheus-adapter-849c9bc779-55gw7" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.008532 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/3ff1ba18-ee4b-4151-95d3-ad4742635d6b-tls-certificates\") pod \"prometheus-operator-admission-webhook-6854f48657-f548j\" (UID: \"3ff1ba18-ee4b-4151-95d3-ad4742635d6b\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-6854f48657-f548j" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.008574 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/84367d42-9f7a-49fb-9aab-aa7bc958829f-trusted-ca\") pod \"image-registry-5f79c9c848-8klpv\" (UID: \"84367d42-9f7a-49fb-9aab-aa7bc958829f\") " pod="openshift-image-registry/image-registry-5f79c9c848-8klpv" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.008601 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfgv8\" (UniqueName: \"kubernetes.io/projected/e7ec9547-ee4c-4966-997f-719d78dcc31b-kube-api-access-jfgv8\") pod 
\"router-default-77f788594f-j5twb\" (UID: \"e7ec9547-ee4c-4966-997f-719d78dcc31b\") " pod="openshift-ingress/router-default-77f788594f-j5twb" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.008623 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/84367d42-9f7a-49fb-9aab-aa7bc958829f-registry-certificates\") pod \"image-registry-5f79c9c848-8klpv\" (UID: \"84367d42-9f7a-49fb-9aab-aa7bc958829f\") " pod="openshift-image-registry/image-registry-5f79c9c848-8klpv" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.008742 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32-secret-grpc-tls\") pod \"thanos-querier-8654d9f96d-8g56r\" (UID: \"a762f29d-1a7e-4d73-9c04-8d5fbbe65b32\") " pod="openshift-monitoring/thanos-querier-8654d9f96d-8g56r" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.008782 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bac56f54-5b00-421f-b735-a8a998208173-config\") pod \"prometheus-adapter-849c9bc779-55gw7\" (UID: \"bac56f54-5b00-421f-b735-a8a998208173\") " pod="openshift-monitoring/prometheus-adapter-849c9bc779-55gw7" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.008815 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/e7ec9547-ee4c-4966-997f-719d78dcc31b-default-certificate\") pod \"router-default-77f788594f-j5twb\" (UID: \"e7ec9547-ee4c-4966-997f-719d78dcc31b\") " pod="openshift-ingress/router-default-77f788594f-j5twb" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.008837 2112 reconciler.go:357] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7ec9547-ee4c-4966-997f-719d78dcc31b-service-ca-bundle\") pod \"router-default-77f788594f-j5twb\" (UID: \"e7ec9547-ee4c-4966-997f-719d78dcc31b\") " pod="openshift-ingress/router-default-77f788594f-j5twb" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.011349 2112 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-monitoring/prometheus-adapter-849c9bc779-55gw7] Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.054967 2112 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-ingress/router-default-77f788594f-j5twb] Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.055410 2112 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-network-diagnostics/network-check-source-6d479699bc-cppvx] Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.066925 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-jl9v2\" (UniqueName: \"kubernetes.io/projected/1c1afc56-9a13-4fc0-ac79-ec4ea0ebccb6-kube-api-access-jl9v2\") pod \"network-check-source-6d479699bc-cppvx\" (UID: \"1c1afc56-9a13-4fc0-ac79-ec4ea0ebccb6\") " pod="openshift-network-diagnostics/network-check-source-6d479699bc-cppvx" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.076035 2112 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-monitoring/thanos-querier-8654d9f96d-8g56r] Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.095223 2112 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-monitoring/prometheus-operator-admission-webhook-6854f48657-f548j] Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.109957 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-oauth-cookie\" (UniqueName: 
\"kubernetes.io/secret/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32-secret-thanos-querier-oauth-cookie\") pod \"thanos-querier-8654d9f96d-8g56r\" (UID: \"a762f29d-1a7e-4d73-9c04-8d5fbbe65b32\") " pod="openshift-monitoring/thanos-querier-8654d9f96d-8g56r" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.110000 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"prometheus-adapter-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/bac56f54-5b00-421f-b735-a8a998208173-prometheus-adapter-audit-profiles\") pod \"prometheus-adapter-849c9bc779-55gw7\" (UID: \"bac56f54-5b00-421f-b735-a8a998208173\") " pod="openshift-monitoring/prometheus-adapter-849c9bc779-55gw7" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.110030 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/84367d42-9f7a-49fb-9aab-aa7bc958829f-image-registry-private-configuration\") pod \"image-registry-5f79c9c848-8klpv\" (UID: \"84367d42-9f7a-49fb-9aab-aa7bc958829f\") " pod="openshift-image-registry/image-registry-5f79c9c848-8klpv" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.110059 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/bac56f54-5b00-421f-b735-a8a998208173-tmpfs\") pod \"prometheus-adapter-849c9bc779-55gw7\" (UID: \"bac56f54-5b00-421f-b735-a8a998208173\") " pod="openshift-monitoring/prometheus-adapter-849c9bc779-55gw7" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.110090 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"prometheus-adapter-prometheus-config\" (UniqueName: \"kubernetes.io/configmap/bac56f54-5b00-421f-b735-a8a998208173-prometheus-adapter-prometheus-config\") pod \"prometheus-adapter-849c9bc779-55gw7\" (UID: \"bac56f54-5b00-421f-b735-a8a998208173\") " 
pod="openshift-monitoring/prometheus-adapter-849c9bc779-55gw7" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.110121 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/84367d42-9f7a-49fb-9aab-aa7bc958829f-installation-pull-secrets\") pod \"image-registry-5f79c9c848-8klpv\" (UID: \"84367d42-9f7a-49fb-9aab-aa7bc958829f\") " pod="openshift-image-registry/image-registry-5f79c9c848-8klpv" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.110153 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-sjhwd\" (UniqueName: \"kubernetes.io/projected/84367d42-9f7a-49fb-9aab-aa7bc958829f-kube-api-access-sjhwd\") pod \"image-registry-5f79c9c848-8klpv\" (UID: \"84367d42-9f7a-49fb-9aab-aa7bc958829f\") " pod="openshift-image-registry/image-registry-5f79c9c848-8klpv" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.110187 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-6lhh6\" (UniqueName: \"kubernetes.io/projected/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32-kube-api-access-6lhh6\") pod \"thanos-querier-8654d9f96d-8g56r\" (UID: \"a762f29d-1a7e-4d73-9c04-8d5fbbe65b32\") " pod="openshift-monitoring/thanos-querier-8654d9f96d-8g56r" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.110222 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bac56f54-5b00-421f-b735-a8a998208173-serving-certs-ca-bundle\") pod \"prometheus-adapter-849c9bc779-55gw7\" (UID: \"bac56f54-5b00-421f-b735-a8a998208173\") " pod="openshift-monitoring/prometheus-adapter-849c9bc779-55gw7" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.110255 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: 
\"kubernetes.io/secret/3ff1ba18-ee4b-4151-95d3-ad4742635d6b-tls-certificates\") pod \"prometheus-operator-admission-webhook-6854f48657-f548j\" (UID: \"3ff1ba18-ee4b-4151-95d3-ad4742635d6b\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-6854f48657-f548j" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.110290 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-jfgv8\" (UniqueName: \"kubernetes.io/projected/e7ec9547-ee4c-4966-997f-719d78dcc31b-kube-api-access-jfgv8\") pod \"router-default-77f788594f-j5twb\" (UID: \"e7ec9547-ee4c-4966-997f-719d78dcc31b\") " pod="openshift-ingress/router-default-77f788594f-j5twb" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.110323 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/84367d42-9f7a-49fb-9aab-aa7bc958829f-registry-certificates\") pod \"image-registry-5f79c9c848-8klpv\" (UID: \"84367d42-9f7a-49fb-9aab-aa7bc958829f\") " pod="openshift-image-registry/image-registry-5f79c9c848-8klpv" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.110354 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/84367d42-9f7a-49fb-9aab-aa7bc958829f-trusted-ca\") pod \"image-registry-5f79c9c848-8klpv\" (UID: \"84367d42-9f7a-49fb-9aab-aa7bc958829f\") " pod="openshift-image-registry/image-registry-5f79c9c848-8klpv" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.110385 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/e7ec9547-ee4c-4966-997f-719d78dcc31b-default-certificate\") pod \"router-default-77f788594f-j5twb\" (UID: \"e7ec9547-ee4c-4966-997f-719d78dcc31b\") " pod="openshift-ingress/router-default-77f788594f-j5twb" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: 
I0223 16:33:17.110413 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7ec9547-ee4c-4966-997f-719d78dcc31b-service-ca-bundle\") pod \"router-default-77f788594f-j5twb\" (UID: \"e7ec9547-ee4c-4966-997f-719d78dcc31b\") " pod="openshift-ingress/router-default-77f788594f-j5twb" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.110444 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32-secret-grpc-tls\") pod \"thanos-querier-8654d9f96d-8g56r\" (UID: \"a762f29d-1a7e-4d73-9c04-8d5fbbe65b32\") " pod="openshift-monitoring/thanos-querier-8654d9f96d-8g56r" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.110474 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bac56f54-5b00-421f-b735-a8a998208173-config\") pod \"prometheus-adapter-849c9bc779-55gw7\" (UID: \"bac56f54-5b00-421f-b735-a8a998208173\") " pod="openshift-monitoring/prometheus-adapter-849c9bc779-55gw7" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.110509 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"thanos-querier-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32-thanos-querier-trusted-ca-bundle\") pod \"thanos-querier-8654d9f96d-8g56r\" (UID: \"a762f29d-1a7e-4d73-9c04-8d5fbbe65b32\") " pod="openshift-monitoring/thanos-querier-8654d9f96d-8g56r" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.110542 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/84367d42-9f7a-49fb-9aab-aa7bc958829f-bound-sa-token\") pod \"image-registry-5f79c9c848-8klpv\" (UID: \"84367d42-9f7a-49fb-9aab-aa7bc958829f\") " 
pod="openshift-image-registry/image-registry-5f79c9c848-8klpv" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.110579 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-8654d9f96d-8g56r\" (UID: \"a762f29d-1a7e-4d73-9c04-8d5fbbe65b32\") " pod="openshift-monitoring/thanos-querier-8654d9f96d-8g56r" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.110614 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/84367d42-9f7a-49fb-9aab-aa7bc958829f-registry-tls\") pod \"image-registry-5f79c9c848-8klpv\" (UID: \"84367d42-9f7a-49fb-9aab-aa7bc958829f\") " pod="openshift-image-registry/image-registry-5f79c9c848-8klpv" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.110645 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"tls\" (UniqueName: \"kubernetes.io/secret/bac56f54-5b00-421f-b735-a8a998208173-tls\") pod \"prometheus-adapter-849c9bc779-55gw7\" (UID: \"bac56f54-5b00-421f-b735-a8a998208173\") " pod="openshift-monitoring/prometheus-adapter-849c9bc779-55gw7" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.110703 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32-secret-thanos-querier-tls\") pod \"thanos-querier-8654d9f96d-8g56r\" (UID: \"a762f29d-1a7e-4d73-9c04-8d5fbbe65b32\") " pod="openshift-monitoring/thanos-querier-8654d9f96d-8g56r" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.110738 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: 
\"kubernetes.io/secret/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-8654d9f96d-8g56r\" (UID: \"a762f29d-1a7e-4d73-9c04-8d5fbbe65b32\") " pod="openshift-monitoring/thanos-querier-8654d9f96d-8g56r" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.110793 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e7ec9547-ee4c-4966-997f-719d78dcc31b-metrics-certs\") pod \"router-default-77f788594f-j5twb\" (UID: \"e7ec9547-ee4c-4966-997f-719d78dcc31b\") " pod="openshift-ingress/router-default-77f788594f-j5twb" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.110825 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/bac56f54-5b00-421f-b735-a8a998208173-audit-log\") pod \"prometheus-adapter-849c9bc779-55gw7\" (UID: \"bac56f54-5b00-421f-b735-a8a998208173\") " pod="openshift-monitoring/prometheus-adapter-849c9bc779-55gw7" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.110855 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/84367d42-9f7a-49fb-9aab-aa7bc958829f-ca-trust-extracted\") pod \"image-registry-5f79c9c848-8klpv\" (UID: \"84367d42-9f7a-49fb-9aab-aa7bc958829f\") " pod="openshift-image-registry/image-registry-5f79c9c848-8klpv" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.110892 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-8654d9f96d-8g56r\" (UID: \"a762f29d-1a7e-4d73-9c04-8d5fbbe65b32\") " pod="openshift-monitoring/thanos-querier-8654d9f96d-8g56r" Feb 23 16:33:17 ip-10-0-136-68 
kubenswrapper[2112]: I0223 16:33:17.110924 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/e7ec9547-ee4c-4966-997f-719d78dcc31b-stats-auth\") pod \"router-default-77f788594f-j5twb\" (UID: \"e7ec9547-ee4c-4966-997f-719d78dcc31b\") " pod="openshift-ingress/router-default-77f788594f-j5twb" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.110960 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32-metrics-client-ca\") pod \"thanos-querier-8654d9f96d-8g56r\" (UID: \"a762f29d-1a7e-4d73-9c04-8d5fbbe65b32\") " pod="openshift-monitoring/thanos-querier-8654d9f96d-8g56r" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.110992 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-fskbt\" (UniqueName: \"kubernetes.io/projected/bac56f54-5b00-421f-b735-a8a998208173-kube-api-access-fskbt\") pod \"prometheus-adapter-849c9bc779-55gw7\" (UID: \"bac56f54-5b00-421f-b735-a8a998208173\") " pod="openshift-monitoring/prometheus-adapter-849c9bc779-55gw7" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.115956 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"prometheus-adapter-prometheus-config\" (UniqueName: \"kubernetes.io/configmap/bac56f54-5b00-421f-b735-a8a998208173-prometheus-adapter-prometheus-config\") pod \"prometheus-adapter-849c9bc779-55gw7\" (UID: \"bac56f54-5b00-421f-b735-a8a998208173\") " pod="openshift-monitoring/prometheus-adapter-849c9bc779-55gw7" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.116068 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"prometheus-adapter-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/bac56f54-5b00-421f-b735-a8a998208173-prometheus-adapter-audit-profiles\") pod 
\"prometheus-adapter-849c9bc779-55gw7\" (UID: \"bac56f54-5b00-421f-b735-a8a998208173\") " pod="openshift-monitoring/prometheus-adapter-849c9bc779-55gw7" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.116156 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/84367d42-9f7a-49fb-9aab-aa7bc958829f-trusted-ca\") pod \"image-registry-5f79c9c848-8klpv\" (UID: \"84367d42-9f7a-49fb-9aab-aa7bc958829f\") " pod="openshift-image-registry/image-registry-5f79c9c848-8klpv" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.116994 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/bac56f54-5b00-421f-b735-a8a998208173-tmpfs\") pod \"prometheus-adapter-849c9bc779-55gw7\" (UID: \"bac56f54-5b00-421f-b735-a8a998208173\") " pod="openshift-monitoring/prometheus-adapter-849c9bc779-55gw7" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.117146 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bac56f54-5b00-421f-b735-a8a998208173-serving-certs-ca-bundle\") pod \"prometheus-adapter-849c9bc779-55gw7\" (UID: \"bac56f54-5b00-421f-b735-a8a998208173\") " pod="openshift-monitoring/prometheus-adapter-849c9bc779-55gw7" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.117220 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/84367d42-9f7a-49fb-9aab-aa7bc958829f-registry-certificates\") pod \"image-registry-5f79c9c848-8klpv\" (UID: \"84367d42-9f7a-49fb-9aab-aa7bc958829f\") " pod="openshift-image-registry/image-registry-5f79c9c848-8klpv" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.117395 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume 
\"secret-thanos-querier-oauth-cookie\" (UniqueName: \"kubernetes.io/secret/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32-secret-thanos-querier-oauth-cookie\") pod \"thanos-querier-8654d9f96d-8g56r\" (UID: \"a762f29d-1a7e-4d73-9c04-8d5fbbe65b32\") " pod="openshift-monitoring/thanos-querier-8654d9f96d-8g56r" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.118360 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/84367d42-9f7a-49fb-9aab-aa7bc958829f-image-registry-private-configuration\") pod \"image-registry-5f79c9c848-8klpv\" (UID: \"84367d42-9f7a-49fb-9aab-aa7bc958829f\") " pod="openshift-image-registry/image-registry-5f79c9c848-8klpv" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.126037 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/84367d42-9f7a-49fb-9aab-aa7bc958829f-installation-pull-secrets\") pod \"image-registry-5f79c9c848-8klpv\" (UID: \"84367d42-9f7a-49fb-9aab-aa7bc958829f\") " pod="openshift-image-registry/image-registry-5f79c9c848-8klpv" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.137601 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/bac56f54-5b00-421f-b735-a8a998208173-audit-log\") pod \"prometheus-adapter-849c9bc779-55gw7\" (UID: \"bac56f54-5b00-421f-b735-a8a998208173\") " pod="openshift-monitoring/prometheus-adapter-849c9bc779-55gw7" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.138278 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-8654d9f96d-8g56r\" (UID: \"a762f29d-1a7e-4d73-9c04-8d5fbbe65b32\") " 
pod="openshift-monitoring/thanos-querier-8654d9f96d-8g56r" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.138588 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/e7ec9547-ee4c-4966-997f-719d78dcc31b-default-certificate\") pod \"router-default-77f788594f-j5twb\" (UID: \"e7ec9547-ee4c-4966-997f-719d78dcc31b\") " pod="openshift-ingress/router-default-77f788594f-j5twb" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.139437 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/3ff1ba18-ee4b-4151-95d3-ad4742635d6b-tls-certificates\") pod \"prometheus-operator-admission-webhook-6854f48657-f548j\" (UID: \"3ff1ba18-ee4b-4151-95d3-ad4742635d6b\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-6854f48657-f548j" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.141968 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32-secret-grpc-tls\") pod \"thanos-querier-8654d9f96d-8g56r\" (UID: \"a762f29d-1a7e-4d73-9c04-8d5fbbe65b32\") " pod="openshift-monitoring/thanos-querier-8654d9f96d-8g56r" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.143032 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e7ec9547-ee4c-4966-997f-719d78dcc31b-metrics-certs\") pod \"router-default-77f788594f-j5twb\" (UID: \"e7ec9547-ee4c-4966-997f-719d78dcc31b\") " pod="openshift-ingress/router-default-77f788594f-j5twb" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.143568 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/84367d42-9f7a-49fb-9aab-aa7bc958829f-ca-trust-extracted\") pod \"image-registry-5f79c9c848-8klpv\" (UID: \"84367d42-9f7a-49fb-9aab-aa7bc958829f\") " pod="openshift-image-registry/image-registry-5f79c9c848-8klpv" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.144628 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"tls\" (UniqueName: \"kubernetes.io/secret/bac56f54-5b00-421f-b735-a8a998208173-tls\") pod \"prometheus-adapter-849c9bc779-55gw7\" (UID: \"bac56f54-5b00-421f-b735-a8a998208173\") " pod="openshift-monitoring/prometheus-adapter-849c9bc779-55gw7" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.145025 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/e7ec9547-ee4c-4966-997f-719d78dcc31b-stats-auth\") pod \"router-default-77f788594f-j5twb\" (UID: \"e7ec9547-ee4c-4966-997f-719d78dcc31b\") " pod="openshift-ingress/router-default-77f788594f-j5twb" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.145505 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-8654d9f96d-8g56r\" (UID: \"a762f29d-1a7e-4d73-9c04-8d5fbbe65b32\") " pod="openshift-monitoring/thanos-querier-8654d9f96d-8g56r" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.145982 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7ec9547-ee4c-4966-997f-719d78dcc31b-service-ca-bundle\") pod \"router-default-77f788594f-j5twb\" (UID: \"e7ec9547-ee4c-4966-997f-719d78dcc31b\") " pod="openshift-ingress/router-default-77f788594f-j5twb" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.146095 2112 
operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/84367d42-9f7a-49fb-9aab-aa7bc958829f-registry-tls\") pod \"image-registry-5f79c9c848-8klpv\" (UID: \"84367d42-9f7a-49fb-9aab-aa7bc958829f\") " pod="openshift-image-registry/image-registry-5f79c9c848-8klpv" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.146412 2112 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-image-registry/image-registry-5f79c9c848-8klpv] Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.146997 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bac56f54-5b00-421f-b735-a8a998208173-config\") pod \"prometheus-adapter-849c9bc779-55gw7\" (UID: \"bac56f54-5b00-421f-b735-a8a998208173\") " pod="openshift-monitoring/prometheus-adapter-849c9bc779-55gw7" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.147127 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32-secret-thanos-querier-tls\") pod \"thanos-querier-8654d9f96d-8g56r\" (UID: \"a762f29d-1a7e-4d73-9c04-8d5fbbe65b32\") " pod="openshift-monitoring/thanos-querier-8654d9f96d-8g56r" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.147545 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32-metrics-client-ca\") pod \"thanos-querier-8654d9f96d-8g56r\" (UID: \"a762f29d-1a7e-4d73-9c04-8d5fbbe65b32\") " pod="openshift-monitoring/thanos-querier-8654d9f96d-8g56r" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.147610 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: 
\"kubernetes.io/secret/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-8654d9f96d-8g56r\" (UID: \"a762f29d-1a7e-4d73-9c04-8d5fbbe65b32\") " pod="openshift-monitoring/thanos-querier-8654d9f96d-8g56r" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.147812 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"thanos-querier-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32-thanos-querier-trusted-ca-bundle\") pod \"thanos-querier-8654d9f96d-8g56r\" (UID: \"a762f29d-1a7e-4d73-9c04-8d5fbbe65b32\") " pod="openshift-monitoring/thanos-querier-8654d9f96d-8g56r" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.155716 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfgv8\" (UniqueName: \"kubernetes.io/projected/e7ec9547-ee4c-4966-997f-719d78dcc31b-kube-api-access-jfgv8\") pod \"router-default-77f788594f-j5twb\" (UID: \"e7ec9547-ee4c-4966-997f-719d78dcc31b\") " pod="openshift-ingress/router-default-77f788594f-j5twb" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.173099 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lhh6\" (UniqueName: \"kubernetes.io/projected/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32-kube-api-access-6lhh6\") pod \"thanos-querier-8654d9f96d-8g56r\" (UID: \"a762f29d-1a7e-4d73-9c04-8d5fbbe65b32\") " pod="openshift-monitoring/thanos-querier-8654d9f96d-8g56r" Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.173200 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-fskbt\" (UniqueName: \"kubernetes.io/projected/bac56f54-5b00-421f-b735-a8a998208173-kube-api-access-fskbt\") pod \"prometheus-adapter-849c9bc779-55gw7\" (UID: \"bac56f54-5b00-421f-b735-a8a998208173\") " pod="openshift-monitoring/prometheus-adapter-849c9bc779-55gw7" Feb 23 16:33:17 
ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.183583 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjhwd\" (UniqueName: \"kubernetes.io/projected/84367d42-9f7a-49fb-9aab-aa7bc958829f-kube-api-access-sjhwd\") pod \"image-registry-5f79c9c848-8klpv\" (UID: \"84367d42-9f7a-49fb-9aab-aa7bc958829f\") " pod="openshift-image-registry/image-registry-5f79c9c848-8klpv"
Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.185105 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/84367d42-9f7a-49fb-9aab-aa7bc958829f-bound-sa-token\") pod \"image-registry-5f79c9c848-8klpv\" (UID: \"84367d42-9f7a-49fb-9aab-aa7bc958829f\") " pod="openshift-image-registry/image-registry-5f79c9c848-8klpv"
Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.204985 2112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-6d479699bc-cppvx"
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:17.205426893Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-source-6d479699bc-cppvx/POD" id=621d2390-9cf1-4327-b9f3-c7457f62f98f name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:17.205499693Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:17.224065975Z" level=info msg="Got pod network &{Name:network-check-source-6d479699bc-cppvx Namespace:openshift-network-diagnostics ID:a86108175c50cc44162b88676ec8359013bb7b7016a59913656811bcd3c06bf7 UID:1c1afc56-9a13-4fc0-ac79-ec4ea0ebccb6 NetNS:/var/run/netns/97c36b59-09a3-45a1-8282-495805886245 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:17.224090451Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-source-6d479699bc-cppvx to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.253888 2112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-6854f48657-f548j"
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:17.254354859Z" level=info msg="Running pod sandbox: openshift-monitoring/prometheus-operator-admission-webhook-6854f48657-f548j/POD" id=1eea0579-c3cb-4bc7-b29c-ed4bec59a81e name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:17.254409973Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.265790 2112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-adapter-849c9bc779-55gw7"
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:17.266258087Z" level=info msg="Running pod sandbox: openshift-monitoring/prometheus-adapter-849c9bc779-55gw7/POD" id=fb89529b-6073-4713-861e-af425b53782e name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:17.266352584Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.273601 2112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-77f788594f-j5twb"
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:17.274112364Z" level=info msg="Running pod sandbox: openshift-ingress/router-default-77f788594f-j5twb/POD" id=162b1efb-2876-4b0e-8e05-df4c4fab83c8 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:17.274162638Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:17.279649078Z" level=info msg="Got pod network &{Name:prometheus-operator-admission-webhook-6854f48657-f548j Namespace:openshift-monitoring ID:d57b42d0ba3aca44e3beb581dd5ff05ba03a846c20891f67b79ec7e09ad8fb66 UID:3ff1ba18-ee4b-4151-95d3-ad4742635d6b NetNS:/var/run/netns/4b34e3bf-9614-43bb-b8a4-15840ea1212a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:17.279724407Z" level=info msg="Adding pod openshift-monitoring_prometheus-operator-admission-webhook-6854f48657-f548j to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:17.287018688Z" level=info msg="Got pod network &{Name:prometheus-adapter-849c9bc779-55gw7 Namespace:openshift-monitoring ID:94ff293ae607edbdb70c5e4b25cc7cf3f7e6fe35e3aa183bda9cd3e8a5654a24 UID:bac56f54-5b00-421f-b735-a8a998208173 NetNS:/var/run/netns/fa5f7fe0-b997-46ef-b3b4-8c0a981fd91c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:17.287047376Z" level=info msg="Adding pod openshift-monitoring_prometheus-adapter-849c9bc779-55gw7 to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.298107 2112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-8654d9f96d-8g56r"
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:17.298521279Z" level=info msg="Running pod sandbox: openshift-monitoring/thanos-querier-8654d9f96d-8g56r/POD" id=69417d88-7b53-43ad-ac6f-dc1e98ad27f0 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:17.298568181Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:17.301994824Z" level=info msg="Got pod network &{Name:router-default-77f788594f-j5twb Namespace:openshift-ingress ID:259ac42a2037be7ffdb933e1dab0fbcfd7804c32d742695f5c6e1e4f9e2767c0 UID:e7ec9547-ee4c-4966-997f-719d78dcc31b NetNS:/var/run/netns/8f6edfdf-3077-4118-8c21-14cdffcf5d50 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:17.302028454Z" level=info msg="Adding pod openshift-ingress_router-default-77f788594f-j5twb to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.302331 2112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5f79c9c848-8klpv"
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:17.302705424Z" level=info msg="Running pod sandbox: openshift-image-registry/image-registry-5f79c9c848-8klpv/POD" id=4f30fb1b-f986-417f-8be0-bf13fb2482a6 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:17.302741038Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:17.331538541Z" level=info msg="Got pod network &{Name:thanos-querier-8654d9f96d-8g56r Namespace:openshift-monitoring ID:7dbdc33b9d16d63ae4640580522adb732200c4a8f3f79a0430a98b7d876633f6 UID:a762f29d-1a7e-4d73-9c04-8d5fbbe65b32 NetNS:/var/run/netns/f061bf1e-6ed2-4ea2-9db4-e04b14df9997 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:17.331582046Z" level=info msg="Adding pod openshift-monitoring_thanos-querier-8654d9f96d-8g56r to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:17.348286624Z" level=info msg="Got pod network &{Name:image-registry-5f79c9c848-8klpv Namespace:openshift-image-registry ID:d4785ceb1d1a7387c9154f23fc0e0c34390aa6cca85448aef2cbe0911d40d552 UID:84367d42-9f7a-49fb-9aab-aa7bc958829f NetNS:/var/run/netns/17a4187f-d070-4d4c-b6a9-e7b96c85befa Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:17.348327344Z" level=info msg="Adding pod openshift-image-registry_image-registry-5f79c9c848-8klpv to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 16:33:17 ip-10-0-136-68 systemd-udevd[5483]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 23 16:33:17 ip-10-0-136-68 systemd-udevd[5483]: Could not generate persistent MAC address for a86108175c50cc4: No such file or directory
Feb 23 16:33:17 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): a86108175c50cc4: link is not ready
Feb 23 16:33:17 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
Feb 23 16:33:17 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 23 16:33:17 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): a86108175c50cc4: link becomes ready
Feb 23 16:33:17 ip-10-0-136-68 NetworkManager[1147]: [1677169997.4294] manager: (a86108175c50cc4): new Veth device (/org/freedesktop/NetworkManager/Devices/34)
Feb 23 16:33:17 ip-10-0-136-68 NetworkManager[1147]: [1677169997.4372] device (a86108175c50cc4): carrier: link connected
Feb 23 16:33:17 ip-10-0-136-68 NetworkManager[1147]: [1677169997.4639] manager: (a86108175c50cc4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Feb 23 16:33:17 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00076|bridge|INFO|bridge br-int: added interface a86108175c50cc4 on port 12
Feb 23 16:33:17 ip-10-0-136-68 kernel: device a86108175c50cc4 entered promiscuous mode
Feb 23 16:33:17 ip-10-0-136-68 systemd-udevd[5517]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 23 16:33:17 ip-10-0-136-68 NetworkManager[1147]: [1677169997.5534] manager: (d57b42d0ba3aca4): new Veth device (/org/freedesktop/NetworkManager/Devices/36)
Feb 23 16:33:17 ip-10-0-136-68 NetworkManager[1147]: [1677169997.5543] device (d57b42d0ba3aca4): carrier: link connected
Feb 23 16:33:17 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): d57b42d0ba3aca4: link is not ready
Feb 23 16:33:17 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): d57b42d0ba3aca4: link becomes ready
Feb 23 16:33:17 ip-10-0-136-68 systemd-udevd[5517]: Could not generate persistent MAC address for d57b42d0ba3aca4: No such file or directory
Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.586903 2112 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-network-diagnostics/network-check-source-6d479699bc-cppvx]
Feb 23 16:33:17 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00077|bridge|INFO|bridge br-int: added interface d57b42d0ba3aca4 on port 13
Feb 23 16:33:17 ip-10-0-136-68 NetworkManager[1147]: [1677169997.5924] manager: (d57b42d0ba3aca4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: I0223 16:33:17.406171 5407 ovs.go:90] Maximum command line arguments set to: 191102
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: 2023-02-23T16:33:17Z [verbose] Add: openshift-network-diagnostics:network-check-source-6d479699bc-cppvx:1c1afc56-9a13-4fc0-ac79-ec4ea0ebccb6:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"a86108175c50cc4","mac":"4a:c4:f5:6f:ba:32"},{"name":"eth0","mac":"0a:58:0a:81:02:0e","sandbox":"/var/run/netns/97c36b59-09a3-45a1-8282-495805886245"}],"ips":[{"version":"4","interface":1,"address":"10.129.2.14/23","gateway":"10.129.2.1"}],"dns":{}}
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: I0223 16:33:17.560756 5400 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-network-diagnostics", Name:"network-check-source-6d479699bc-cppvx", UID:"1c1afc56-9a13-4fc0-ac79-ec4ea0ebccb6", APIVersion:"v1", ResourceVersion:"45467", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.129.2.14/23] from ovn-kubernetes
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:17.593736960Z" level=info msg="Got pod network &{Name:network-check-source-6d479699bc-cppvx Namespace:openshift-network-diagnostics ID:a86108175c50cc44162b88676ec8359013bb7b7016a59913656811bcd3c06bf7 UID:1c1afc56-9a13-4fc0-ac79-ec4ea0ebccb6 NetNS:/var/run/netns/97c36b59-09a3-45a1-8282-495805886245 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:17.594053371Z" level=info msg="Checking pod openshift-network-diagnostics_network-check-source-6d479699bc-cppvx for CNI network multus-cni-network (type=multus)"
Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: W0223 16:33:17.597482 2112 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c1afc56_9a13_4fc0_ac79_ec4ea0ebccb6.slice/crio-a86108175c50cc44162b88676ec8359013bb7b7016a59913656811bcd3c06bf7.scope WatchSource:0}: Error finding container a86108175c50cc44162b88676ec8359013bb7b7016a59913656811bcd3c06bf7: Status 404 returned error can't find the container with id a86108175c50cc44162b88676ec8359013bb7b7016a59913656811bcd3c06bf7
Feb 23 16:33:17 ip-10-0-136-68 kernel: device d57b42d0ba3aca4 entered promiscuous mode
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:17.632564498Z" level=info msg="Ran pod sandbox a86108175c50cc44162b88676ec8359013bb7b7016a59913656811bcd3c06bf7 with infra container: openshift-network-diagnostics/network-check-source-6d479699bc-cppvx/POD" id=621d2390-9cf1-4327-b9f3-c7457f62f98f name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:17.634717609Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:12689c58126296eadc7e46ef53bd571e445459a42516711155470fe35c1ccd60" id=d2862ae0-d7e9-4828-a2e6-3c892251bd03 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:17.635108835Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:fbfabc25c264657111b70d2537c63f40bd1221c9fa96f133a4ea4c49f2c732ee,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:12689c58126296eadc7e46ef53bd571e445459a42516711155470fe35c1ccd60],Size_:512530138,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=d2862ae0-d7e9-4828-a2e6-3c892251bd03 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:17.636270615Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:12689c58126296eadc7e46ef53bd571e445459a42516711155470fe35c1ccd60" id=b37108ca-0c82-4580-8bb0-a48f88034ac1 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:17.636636321Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:fbfabc25c264657111b70d2537c63f40bd1221c9fa96f133a4ea4c49f2c732ee,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:12689c58126296eadc7e46ef53bd571e445459a42516711155470fe35c1ccd60],Size_:512530138,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=b37108ca-0c82-4580-8bb0-a48f88034ac1 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:17.639428123Z" level=info msg="Creating container: openshift-network-diagnostics/network-check-source-6d479699bc-cppvx/check-endpoints" id=e7fff996-a735-4d8a-8fc4-0ff55c873fab name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:17.639568525Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 16:33:17 ip-10-0-136-68 systemd[1]: Started crio-conmon-7bd5af2900c8981205156351511fd4648b43a293f16534625faca106d6ae5524.scope.
Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.694532 2112 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-monitoring/prometheus-operator-admission-webhook-6854f48657-f548j]
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: I0223 16:33:17.540988 5432 ovs.go:90] Maximum command line arguments set to: 191102
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: 2023-02-23T16:33:17Z [verbose] Add: openshift-monitoring:prometheus-operator-admission-webhook-6854f48657-f548j:3ff1ba18-ee4b-4151-95d3-ad4742635d6b:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"d57b42d0ba3aca4","mac":"46:21:29:ae:2c:f9"},{"name":"eth0","mac":"0a:58:0a:81:02:0f","sandbox":"/var/run/netns/4b34e3bf-9614-43bb-b8a4-15840ea1212a"}],"ips":[{"version":"4","interface":1,"address":"10.129.2.15/23","gateway":"10.129.2.1"}],"dns":{}}
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: I0223 16:33:17.667898 5415 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-monitoring", Name:"prometheus-operator-admission-webhook-6854f48657-f548j", UID:"3ff1ba18-ee4b-4151-95d3-ad4742635d6b", APIVersion:"v1", ResourceVersion:"45477", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.129.2.15/23] from ovn-kubernetes
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:17.705402288Z" level=info msg="Got pod network &{Name:prometheus-operator-admission-webhook-6854f48657-f548j Namespace:openshift-monitoring ID:d57b42d0ba3aca44e3beb581dd5ff05ba03a846c20891f67b79ec7e09ad8fb66 UID:3ff1ba18-ee4b-4151-95d3-ad4742635d6b NetNS:/var/run/netns/4b34e3bf-9614-43bb-b8a4-15840ea1212a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 16:33:17 ip-10-0-136-68 systemd-udevd[5549]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:17.706204275Z" level=info msg="Checking pod openshift-monitoring_prometheus-operator-admission-webhook-6854f48657-f548j for CNI network multus-cni-network (type=multus)"
Feb 23 16:33:17 ip-10-0-136-68 NetworkManager[1147]: [1677169997.7092] manager: (94ff293ae607edb): new Veth device (/org/freedesktop/NetworkManager/Devices/38)
Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: W0223 16:33:17.709573 2112 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ff1ba18_ee4b_4151_95d3_ad4742635d6b.slice/crio-d57b42d0ba3aca44e3beb581dd5ff05ba03a846c20891f67b79ec7e09ad8fb66.scope WatchSource:0}: Error finding container d57b42d0ba3aca44e3beb581dd5ff05ba03a846c20891f67b79ec7e09ad8fb66: Status 404 returned error can't find the container with id d57b42d0ba3aca44e3beb581dd5ff05ba03a846c20891f67b79ec7e09ad8fb66
Feb 23 16:33:17 ip-10-0-136-68 systemd-udevd[5549]: Could not generate persistent MAC address for 94ff293ae607edb: No such file or directory
Feb 23 16:33:17 ip-10-0-136-68 NetworkManager[1147]: [1677169997.7120] device (94ff293ae607edb): carrier: link connected
Feb 23 16:33:17 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): 94ff293ae607edb: link is not ready
Feb 23 16:33:17 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): 94ff293ae607edb: link becomes ready
Feb 23 16:33:17 ip-10-0-136-68 systemd[1]: Started libcontainer container 7bd5af2900c8981205156351511fd4648b43a293f16534625faca106d6ae5524.
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:17.742308797Z" level=info msg="Ran pod sandbox d57b42d0ba3aca44e3beb581dd5ff05ba03a846c20891f67b79ec7e09ad8fb66 with infra container: openshift-monitoring/prometheus-operator-admission-webhook-6854f48657-f548j/POD" id=1eea0579-c3cb-4bc7-b29c-ed4bec59a81e name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:17.746067561Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d39203b28bfd776f78b1a186dd1085d8558816616a7df2f65f0d7140b4867e83" id=365e2c06-df13-4b6f-aa60-c87851a6c20d name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:17.746267278Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:7349fb94605b9a588404c2db5677b270dcc908f8f25eb5d9372a2dfca6163d88,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d39203b28bfd776f78b1a186dd1085d8558816616a7df2f65f0d7140b4867e83],Size_:388514099,Uid:&Int64Value{Value:1001,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=365e2c06-df13-4b6f-aa60-c87851a6c20d name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:17.747585266Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d39203b28bfd776f78b1a186dd1085d8558816616a7df2f65f0d7140b4867e83" id=ac4e8dd7-843b-42c8-b51d-c0461b875580 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:17.747896686Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:7349fb94605b9a588404c2db5677b270dcc908f8f25eb5d9372a2dfca6163d88,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d39203b28bfd776f78b1a186dd1085d8558816616a7df2f65f0d7140b4867e83],Size_:388514099,Uid:&Int64Value{Value:1001,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=ac4e8dd7-843b-42c8-b51d-c0461b875580 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:17.749623406Z" level=info msg="Creating container: openshift-monitoring/prometheus-operator-admission-webhook-6854f48657-f548j/prometheus-operator-admission-webhook" id=fb122c2d-fca2-4979-992d-aeb8dd9ed0cb name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:17.749856410Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 16:33:17 ip-10-0-136-68 NetworkManager[1147]: [1677169997.7604] manager: (94ff293ae607edb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/39)
Feb 23 16:33:17 ip-10-0-136-68 systemd[1]: Started crio-conmon-f80f57db9253ff43c8cecdea2d1d1becccb3ce809fa997d7d557fc2364d0133e.scope.
Feb 23 16:33:17 ip-10-0-136-68 systemd-udevd[5578]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 23 16:33:17 ip-10-0-136-68 systemd-udevd[5580]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 23 16:33:17 ip-10-0-136-68 NetworkManager[1147]: [1677169997.7894] manager: (259ac42a2037be7): new Veth device (/org/freedesktop/NetworkManager/Devices/40)
Feb 23 16:33:17 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00078|bridge|INFO|bridge br-int: added interface 94ff293ae607edb on port 14
Feb 23 16:33:17 ip-10-0-136-68 systemd-udevd[5578]: Could not generate persistent MAC address for 259ac42a2037be7: No such file or directory
Feb 23 16:33:17 ip-10-0-136-68 systemd-udevd[5580]: Could not generate persistent MAC address for d4785ceb1d1a738: No such file or directory
Feb 23 16:33:17 ip-10-0-136-68 kernel: device 94ff293ae607edb entered promiscuous mode
Feb 23 16:33:17 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): 259ac42a2037be7: link is not ready
Feb 23 16:33:17 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): d4785ceb1d1a738: link is not ready
Feb 23 16:33:17 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): 259ac42a2037be7: link becomes ready
Feb 23 16:33:17 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): d4785ceb1d1a738: link becomes ready
Feb 23 16:33:17 ip-10-0-136-68 systemd[1]: Started libcontainer container f80f57db9253ff43c8cecdea2d1d1becccb3ce809fa997d7d557fc2364d0133e.
Feb 23 16:33:17 ip-10-0-136-68 NetworkManager[1147]: [1677169997.8124] manager: (d4785ceb1d1a738): new Veth device (/org/freedesktop/NetworkManager/Devices/41)
Feb 23 16:33:17 ip-10-0-136-68 NetworkManager[1147]: [1677169997.8162] device (259ac42a2037be7): carrier: link connected
Feb 23 16:33:17 ip-10-0-136-68 systemd-udevd[5595]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 23 16:33:17 ip-10-0-136-68 NetworkManager[1147]: [1677169997.8173] device (d4785ceb1d1a738): carrier: link connected
Feb 23 16:33:17 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): 7dbdc33b9d16d63: link is not ready
Feb 23 16:33:17 ip-10-0-136-68 systemd-udevd[5595]: Could not generate persistent MAC address for 7dbdc33b9d16d63: No such file or directory
Feb 23 16:33:17 ip-10-0-136-68 NetworkManager[1147]: [1677169997.8251] manager: (7dbdc33b9d16d63): new Veth device (/org/freedesktop/NetworkManager/Devices/42)
Feb 23 16:33:17 ip-10-0-136-68 NetworkManager[1147]: [1677169997.8320] device (7dbdc33b9d16d63): carrier: link connected
Feb 23 16:33:17 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): 7dbdc33b9d16d63: link becomes ready
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:17.860864874Z" level=info msg="Created container 7bd5af2900c8981205156351511fd4648b43a293f16534625faca106d6ae5524: openshift-network-diagnostics/network-check-source-6d479699bc-cppvx/check-endpoints" id=e7fff996-a735-4d8a-8fc4-0ff55c873fab name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:17.881676727Z" level=info msg="Starting container: 7bd5af2900c8981205156351511fd4648b43a293f16534625faca106d6ae5524" id=db4aef5d-2463-40b9-be8d-216c9ab61670 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 16:33:17 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00079|bridge|INFO|bridge br-int: added interface 7dbdc33b9d16d63 on port 15
Feb 23 16:33:17 ip-10-0-136-68 NetworkManager[1147]: [1677169997.9088] manager: (7dbdc33b9d16d63): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Feb 23 16:33:17 ip-10-0-136-68 kernel: device 7dbdc33b9d16d63 entered promiscuous mode
Feb 23 16:33:17 ip-10-0-136-68 NetworkManager[1147]: [1677169997.9297] manager: (259ac42a2037be7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Feb 23 16:33:17 ip-10-0-136-68 NetworkManager[1147]: [1677169997.9371] manager: (d4785ceb1d1a738): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Feb 23 16:33:17 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00080|bridge|INFO|bridge br-int: added interface 259ac42a2037be7 on port 16
Feb 23 16:33:17 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00081|bridge|INFO|bridge br-int: added interface d4785ceb1d1a738 on port 17
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:17.953780309Z" level=info msg="Started container" PID=5564 containerID=7bd5af2900c8981205156351511fd4648b43a293f16534625faca106d6ae5524 description=openshift-network-diagnostics/network-check-source-6d479699bc-cppvx/check-endpoints id=db4aef5d-2463-40b9-be8d-216c9ab61670 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a86108175c50cc44162b88676ec8359013bb7b7016a59913656811bcd3c06bf7
Feb 23 16:33:17 ip-10-0-136-68 kernel: device 259ac42a2037be7 entered promiscuous mode
Feb 23 16:33:17 ip-10-0-136-68 kernel: device d4785ceb1d1a738 entered promiscuous mode
Feb 23 16:33:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:17.982064 2112 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-monitoring/prometheus-adapter-849c9bc779-55gw7]
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: I0223 16:33:17.689785 5441 ovs.go:90] Maximum command line arguments set to: 191102
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: 2023-02-23T16:33:17Z [verbose] Add: openshift-monitoring:prometheus-adapter-849c9bc779-55gw7:bac56f54-5b00-421f-b735-a8a998208173:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"94ff293ae607edb","mac":"a6:6d:89:c6:43:5e"},{"name":"eth0","mac":"0a:58:0a:81:02:10","sandbox":"/var/run/netns/fa5f7fe0-b997-46ef-b3b4-8c0a981fd91c"}],"ips":[{"version":"4","interface":1,"address":"10.129.2.16/23","gateway":"10.129.2.1"}],"dns":{}}
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: I0223 16:33:17.959610 5421 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-monitoring", Name:"prometheus-adapter-849c9bc779-55gw7", UID:"bac56f54-5b00-421f-b735-a8a998208173", APIVersion:"v1", ResourceVersion:"45457", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.129.2.16/23] from ovn-kubernetes
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:17.990764282Z" level=info msg="Got pod network &{Name:prometheus-adapter-849c9bc779-55gw7 Namespace:openshift-monitoring ID:94ff293ae607edbdb70c5e4b25cc7cf3f7e6fe35e3aa183bda9cd3e8a5654a24 UID:bac56f54-5b00-421f-b735-a8a998208173 NetNS:/var/run/netns/fa5f7fe0-b997-46ef-b3b4-8c0a981fd91c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 16:33:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:17.990914487Z" level=info msg="Checking pod openshift-monitoring_prometheus-adapter-849c9bc779-55gw7 for CNI network multus-cni-network (type=multus)"
Feb 23 16:33:18 ip-10-0-136-68 kubenswrapper[2112]: W0223 16:33:18.000147 2112 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbac56f54_5b00_421f_b735_a8a998208173.slice/crio-94ff293ae607edbdb70c5e4b25cc7cf3f7e6fe35e3aa183bda9cd3e8a5654a24.scope WatchSource:0}: Error finding container 94ff293ae607edbdb70c5e4b25cc7cf3f7e6fe35e3aa183bda9cd3e8a5654a24: Status 404 returned error can't find the container with id 94ff293ae607edbdb70c5e4b25cc7cf3f7e6fe35e3aa183bda9cd3e8a5654a24
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.001194821Z" level=info msg="Ran pod sandbox 94ff293ae607edbdb70c5e4b25cc7cf3f7e6fe35e3aa183bda9cd3e8a5654a24 with infra container: openshift-monitoring/prometheus-adapter-849c9bc779-55gw7/POD" id=fb89529b-6073-4713-861e-af425b53782e name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.002018693Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7485774b6731351b7b283e72efb0a0c07d69623ac09616ba89f52466b6fec053" id=026d50dc-1ccf-4a7a-a2a4-2d5efa6eb94f name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.002203421Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7485774b6731351b7b283e72efb0a0c07d69623ac09616ba89f52466b6fec053 not found" id=026d50dc-1ccf-4a7a-a2a4-2d5efa6eb94f name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:18.005020 2112 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.005985300Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7485774b6731351b7b283e72efb0a0c07d69623ac09616ba89f52466b6fec053" id=1cd94c1b-9552-4dc6-8d33-630ce28b168b name=/runtime.v1.ImageService/PullImage
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.009440129Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7485774b6731351b7b283e72efb0a0c07d69623ac09616ba89f52466b6fec053\""
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.041754779Z" level=info msg="Created container f80f57db9253ff43c8cecdea2d1d1becccb3ce809fa997d7d557fc2364d0133e: openshift-monitoring/prometheus-operator-admission-webhook-6854f48657-f548j/prometheus-operator-admission-webhook" id=fb122c2d-fca2-4979-992d-aeb8dd9ed0cb name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.043792530Z" level=info msg="Starting container: f80f57db9253ff43c8cecdea2d1d1becccb3ce809fa997d7d557fc2364d0133e" id=87e0eaf6-2f01-4ab4-8b15-f1a895cfdef6 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.064831569Z" level=info msg="Started container" PID=5610 containerID=f80f57db9253ff43c8cecdea2d1d1becccb3ce809fa997d7d557fc2364d0133e description=openshift-monitoring/prometheus-operator-admission-webhook-6854f48657-f548j/prometheus-operator-admission-webhook id=87e0eaf6-2f01-4ab4-8b15-f1a895cfdef6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d57b42d0ba3aca44e3beb581dd5ff05ba03a846c20891f67b79ec7e09ad8fb66
Feb 23 16:33:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:18.083559 2112 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-monitoring/thanos-querier-8654d9f96d-8g56r]
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: I0223 16:33:17.792712 5472 ovs.go:90] Maximum command line arguments set to: 191102
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: 2023-02-23T16:33:18Z [verbose] Add: openshift-monitoring:thanos-querier-8654d9f96d-8g56r:a762f29d-1a7e-4d73-9c04-8d5fbbe65b32:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"7dbdc33b9d16d63","mac":"c2:b6:cc:01:49:c1"},{"name":"eth0","mac":"0a:58:0a:81:02:12","sandbox":"/var/run/netns/f061bf1e-6ed2-4ea2-9db4-e04b14df9997"}],"ips":[{"version":"4","interface":1,"address":"10.129.2.18/23","gateway":"10.129.2.1"}],"dns":{}}
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: I0223 16:33:18.046485 5449 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-monitoring", Name:"thanos-querier-8654d9f96d-8g56r", UID:"a762f29d-1a7e-4d73-9c04-8d5fbbe65b32", APIVersion:"v1", ResourceVersion:"45481", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.129.2.18/23] from ovn-kubernetes
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.086506749Z" level=info msg="Got pod network &{Name:thanos-querier-8654d9f96d-8g56r Namespace:openshift-monitoring ID:7dbdc33b9d16d63ae4640580522adb732200c4a8f3f79a0430a98b7d876633f6 UID:a762f29d-1a7e-4d73-9c04-8d5fbbe65b32 NetNS:/var/run/netns/f061bf1e-6ed2-4ea2-9db4-e04b14df9997 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.086711033Z" level=info msg="Checking pod openshift-monitoring_thanos-querier-8654d9f96d-8g56r for CNI network multus-cni-network (type=multus)"
Feb 23 16:33:18 ip-10-0-136-68 kubenswrapper[2112]: W0223 16:33:18.090974 2112 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda762f29d_1a7e_4d73_9c04_8d5fbbe65b32.slice/crio-7dbdc33b9d16d63ae4640580522adb732200c4a8f3f79a0430a98b7d876633f6.scope WatchSource:0}: Error finding container 7dbdc33b9d16d63ae4640580522adb732200c4a8f3f79a0430a98b7d876633f6: Status 404 returned error can't find the container with id 7dbdc33b9d16d63ae4640580522adb732200c4a8f3f79a0430a98b7d876633f6
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.094529002Z" level=info msg="Ran pod sandbox 7dbdc33b9d16d63ae4640580522adb732200c4a8f3f79a0430a98b7d876633f6 with infra container: openshift-monitoring/thanos-querier-8654d9f96d-8g56r/POD" id=69417d88-7b53-43ad-ac6f-dc1e98ad27f0 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.095634742Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:927fcfe37d62457bfd0932c16a51342e303d9f92d19244b389bb08c30b1b5bec" id=2f10bcc7-9259-43d6-9c28-093c6ff246b3 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.096008829Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:7e0949c572f36eadc2058a4a75e85ef222e1a401c4ecc7fd34e193cad494cab5,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:927fcfe37d62457bfd0932c16a51342e303d9f92d19244b389bb08c30b1b5bec],Size_:426731013,Uid:nil,Username:nobody,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=2f10bcc7-9259-43d6-9c28-093c6ff246b3 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.096760598Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:927fcfe37d62457bfd0932c16a51342e303d9f92d19244b389bb08c30b1b5bec" id=f2356ac6-8933-4e68-9681-fa2f4f5b0d4c name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.097027652Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:7e0949c572f36eadc2058a4a75e85ef222e1a401c4ecc7fd34e193cad494cab5,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:927fcfe37d62457bfd0932c16a51342e303d9f92d19244b389bb08c30b1b5bec],Size_:426731013,Uid:nil,Username:nobody,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=f2356ac6-8933-4e68-9681-fa2f4f5b0d4c name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.097781597Z" level=info msg="Creating container: openshift-monitoring/thanos-querier-8654d9f96d-8g56r/thanos-query" id=cdee005e-5948-40a3-965c-e7a57ca380ab name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.097875822Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 16:33:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:18.114734 2112 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-ingress/router-default-77f788594f-j5twb]
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: I0223 16:33:17.759540 5464 ovs.go:90] Maximum command line arguments set to: 191102
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: 2023-02-23T16:33:18Z [verbose] Add: openshift-ingress:router-default-77f788594f-j5twb:e7ec9547-ee4c-4966-997f-719d78dcc31b:ovn-kubernetes(ovn-kubernetes):eth0
{"cniVersion":"0.4.0","interfaces":[{"name":"259ac42a2037be7","mac":"de:76:2a:2b:c4:cd"},{"name":"eth0","mac":"0a:58:0a:81:02:11","sandbox":"/var/run/netns/8f6edfdf-3077-4118-8c21-14cdffcf5d50"}],"ips":[{"version":"4","interface":1,"address":"10.129.2.17/23","gateway":"10.129.2.1"}],"dns":{}} Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: I0223 16:33:18.079723 5430 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-ingress", Name:"router-default-77f788594f-j5twb", UID:"e7ec9547-ee4c-4966-997f-719d78dcc31b", APIVersion:"v1", ResourceVersion:"45476", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.129.2.17/23] from ovn-kubernetes Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.115477403Z" level=info msg="Got pod network &{Name:router-default-77f788594f-j5twb Namespace:openshift-ingress ID:259ac42a2037be7ffdb933e1dab0fbcfd7804c32d742695f5c6e1e4f9e2767c0 UID:e7ec9547-ee4c-4966-997f-719d78dcc31b NetNS:/var/run/netns/8f6edfdf-3077-4118-8c21-14cdffcf5d50 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.115563783Z" level=info msg="Checking pod openshift-ingress_router-default-77f788594f-j5twb for CNI network multus-cni-network (type=multus)" Feb 23 16:33:18 ip-10-0-136-68 kubenswrapper[2112]: W0223 16:33:18.122752 2112 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7ec9547_ee4c_4966_997f_719d78dcc31b.slice/crio-259ac42a2037be7ffdb933e1dab0fbcfd7804c32d742695f5c6e1e4f9e2767c0.scope WatchSource:0}: Error finding container 259ac42a2037be7ffdb933e1dab0fbcfd7804c32d742695f5c6e1e4f9e2767c0: Status 404 returned error can't find the container with id 259ac42a2037be7ffdb933e1dab0fbcfd7804c32d742695f5c6e1e4f9e2767c0 Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.126719853Z" level=info 
msg="Ran pod sandbox 259ac42a2037be7ffdb933e1dab0fbcfd7804c32d742695f5c6e1e4f9e2767c0 with infra container: openshift-ingress/router-default-77f788594f-j5twb/POD" id=162b1efb-2876-4b0e-8e05-df4c4fab83c8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.127641115Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9d617ec9a2e82f2b2bf2dcab9695d49426db17674bf970d4b1dc146d66db863b" id=f7aef8ff-a3ed-477f-93d3-70dee2cfc087 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.129257488Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9d617ec9a2e82f2b2bf2dcab9695d49426db17674bf970d4b1dc146d66db863b not found" id=f7aef8ff-a3ed-477f-93d3-70dee2cfc087 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:33:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:18.129035 2112 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-image-registry/image-registry-5f79c9c848-8klpv] Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.131266762Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9d617ec9a2e82f2b2bf2dcab9695d49426db17674bf970d4b1dc146d66db863b" id=99f307d3-06bf-4fa4-b94f-ef5d77411bb2 name=/runtime.v1.ImageService/PullImage Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.132451151Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9d617ec9a2e82f2b2bf2dcab9695d49426db17674bf970d4b1dc146d66db863b\"" Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: I0223 16:33:17.771912 5491 ovs.go:90] Maximum command line arguments set to: 191102 Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: 2023-02-23T16:33:18Z [verbose] Add: openshift-image-registry:image-registry-5f79c9c848-8klpv:84367d42-9f7a-49fb-9aab-aa7bc958829f:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"d4785ceb1d1a738","mac":"c2:93:3f:89:32:2a"},{"name":"eth0","mac":"0a:58:0a:81:02:13","sandbox":"/var/run/netns/17a4187f-d070-4d4c-b6a9-e7b96c85befa"}],"ips":[{"version":"4","interface":1,"address":"10.129.2.19/23","gateway":"10.129.2.1"}],"dns":{}} Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: I0223 16:33:18.104383 5458 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-image-registry", Name:"image-registry-5f79c9c848-8klpv", UID:"84367d42-9f7a-49fb-9aab-aa7bc958829f", APIVersion:"v1", ResourceVersion:"45483", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.129.2.19/23] from ovn-kubernetes Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.133735182Z" level=info msg="Got pod network &{Name:image-registry-5f79c9c848-8klpv Namespace:openshift-image-registry ID:d4785ceb1d1a7387c9154f23fc0e0c34390aa6cca85448aef2cbe0911d40d552 UID:84367d42-9f7a-49fb-9aab-aa7bc958829f NetNS:/var/run/netns/17a4187f-d070-4d4c-b6a9-e7b96c85befa Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.133875504Z" level=info msg="Checking pod openshift-image-registry_image-registry-5f79c9c848-8klpv for CNI network multus-cni-network (type=multus)" Feb 23 16:33:18 ip-10-0-136-68 kubenswrapper[2112]: W0223 16:33:18.135608 2112 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84367d42_9f7a_49fb_9aab_aa7bc958829f.slice/crio-d4785ceb1d1a7387c9154f23fc0e0c34390aa6cca85448aef2cbe0911d40d552.scope WatchSource:0}: Error finding container d4785ceb1d1a7387c9154f23fc0e0c34390aa6cca85448aef2cbe0911d40d552: Status 404 returned error can't find the container with id d4785ceb1d1a7387c9154f23fc0e0c34390aa6cca85448aef2cbe0911d40d552 Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 
16:33:18.137596548Z" level=info msg="Ran pod sandbox d4785ceb1d1a7387c9154f23fc0e0c34390aa6cca85448aef2cbe0911d40d552 with infra container: openshift-image-registry/image-registry-5f79c9c848-8klpv/POD" id=4f30fb1b-f986-417f-8be0-bf13fb2482a6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.138133452Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:462953440366660026537c18defeaf1f0e85dde5d1231aa35ab26e7e996959ae" id=f3caa346-657c-4f67-905a-db622432e588 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.138274362Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:b8111819f25b8194478d55593ca125a634ee92d9d5e61866f09e80f1b59af18b,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:462953440366660026537c18defeaf1f0e85dde5d1231aa35ab26e7e996959ae],Size_:428240621,Uid:&Int64Value{Value:1001,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=f3caa346-657c-4f67-905a-db622432e588 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.138777560Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:462953440366660026537c18defeaf1f0e85dde5d1231aa35ab26e7e996959ae" id=8749ffe7-1600-4c11-8b84-a52e4d6d0deb name=/runtime.v1.ImageService/ImageStatus Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.138920623Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:b8111819f25b8194478d55593ca125a634ee92d9d5e61866f09e80f1b59af18b,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:462953440366660026537c18defeaf1f0e85dde5d1231aa35ab26e7e996959ae],Size_:428240621,Uid:&Int64Value{Value:1001,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=8749ffe7-1600-4c11-8b84-a52e4d6d0deb 
name=/runtime.v1.ImageService/ImageStatus Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.139729437Z" level=info msg="Creating container: openshift-image-registry/image-registry-5f79c9c848-8klpv/registry" id=39a1a8c2-5b95-469e-bfde-863962d0635d name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.139835006Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 16:33:18 ip-10-0-136-68 systemd[1]: Started crio-conmon-ae4a8aeab6b939a04a77d3aca670cc5fd4377ee87669a63493fc130f44723b08.scope. Feb 23 16:33:18 ip-10-0-136-68 systemd[1]: Started crio-conmon-d5fe21f86e0e9fb8b9c0153e9789163d315360a1ed79f9bcfb0e4a1e07a79efd.scope. Feb 23 16:33:18 ip-10-0-136-68 systemd[1]: Started libcontainer container ae4a8aeab6b939a04a77d3aca670cc5fd4377ee87669a63493fc130f44723b08. Feb 23 16:33:18 ip-10-0-136-68 systemd[1]: Started libcontainer container d5fe21f86e0e9fb8b9c0153e9789163d315360a1ed79f9bcfb0e4a1e07a79efd. 
Feb 23 16:33:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:18.279969 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-77f788594f-j5twb" event=&{ID:e7ec9547-ee4c-4966-997f-719d78dcc31b Type:ContainerStarted Data:259ac42a2037be7ffdb933e1dab0fbcfd7804c32d742695f5c6e1e4f9e2767c0}
Feb 23 16:33:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:18.280765 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-6854f48657-f548j" event=&{ID:3ff1ba18-ee4b-4151-95d3-ad4742635d6b Type:ContainerStarted Data:f80f57db9253ff43c8cecdea2d1d1becccb3ce809fa997d7d557fc2364d0133e}
Feb 23 16:33:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:18.280792 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-6854f48657-f548j" event=&{ID:3ff1ba18-ee4b-4151-95d3-ad4742635d6b Type:ContainerStarted Data:d57b42d0ba3aca44e3beb581dd5ff05ba03a846c20891f67b79ec7e09ad8fb66}
Feb 23 16:33:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:18.281523 2112 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-6854f48657-f548j"
Feb 23 16:33:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:18.282153 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5f79c9c848-8klpv" event=&{ID:84367d42-9f7a-49fb-9aab-aa7bc958829f Type:ContainerStarted Data:d4785ceb1d1a7387c9154f23fc0e0c34390aa6cca85448aef2cbe0911d40d552}
Feb 23 16:33:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:18.282764 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8654d9f96d-8g56r" event=&{ID:a762f29d-1a7e-4d73-9c04-8d5fbbe65b32 Type:ContainerStarted Data:7dbdc33b9d16d63ae4640580522adb732200c4a8f3f79a0430a98b7d876633f6}
Feb 23 16:33:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:18.283640 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-6d479699bc-cppvx" event=&{ID:1c1afc56-9a13-4fc0-ac79-ec4ea0ebccb6 Type:ContainerStarted Data:7bd5af2900c8981205156351511fd4648b43a293f16534625faca106d6ae5524}
Feb 23 16:33:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:18.283789 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-6d479699bc-cppvx" event=&{ID:1c1afc56-9a13-4fc0-ac79-ec4ea0ebccb6 Type:ContainerStarted Data:a86108175c50cc44162b88676ec8359013bb7b7016a59913656811bcd3c06bf7}
Feb 23 16:33:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:18.284782 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-adapter-849c9bc779-55gw7" event=&{ID:bac56f54-5b00-421f-b735-a8a998208173 Type:ContainerStarted Data:94ff293ae607edbdb70c5e4b25cc7cf3f7e6fe35e3aa183bda9cd3e8a5654a24}
Feb 23 16:33:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:18.292870 2112 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-6854f48657-f548j"
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.299640840Z" level=info msg="Created container ae4a8aeab6b939a04a77d3aca670cc5fd4377ee87669a63493fc130f44723b08: openshift-monitoring/thanos-querier-8654d9f96d-8g56r/thanos-query" id=cdee005e-5948-40a3-965c-e7a57ca380ab name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.301922141Z" level=info msg="Starting container: ae4a8aeab6b939a04a77d3aca670cc5fd4377ee87669a63493fc130f44723b08" id=8f77bc5a-a532-4fa1-bd2c-6cceb12f6d58 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.314415414Z" level=info msg="Created container d5fe21f86e0e9fb8b9c0153e9789163d315360a1ed79f9bcfb0e4a1e07a79efd: openshift-image-registry/image-registry-5f79c9c848-8klpv/registry" id=39a1a8c2-5b95-469e-bfde-863962d0635d name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.314822645Z" level=info msg="Starting container: d5fe21f86e0e9fb8b9c0153e9789163d315360a1ed79f9bcfb0e4a1e07a79efd" id=ecacdfe0-d098-4f15-b02d-6c956f6ce938 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.315925794Z" level=info msg="Started container" PID=5726 containerID=ae4a8aeab6b939a04a77d3aca670cc5fd4377ee87669a63493fc130f44723b08 description=openshift-monitoring/thanos-querier-8654d9f96d-8g56r/thanos-query id=8f77bc5a-a532-4fa1-bd2c-6cceb12f6d58 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7dbdc33b9d16d63ae4640580522adb732200c4a8f3f79a0430a98b7d876633f6
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.328288865Z" level=info msg="Started container" PID=5732 containerID=d5fe21f86e0e9fb8b9c0153e9789163d315360a1ed79f9bcfb0e4a1e07a79efd description=openshift-image-registry/image-registry-5f79c9c848-8klpv/registry id=ecacdfe0-d098-4f15-b02d-6c956f6ce938 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d4785ceb1d1a7387c9154f23fc0e0c34390aa6cca85448aef2cbe0911d40d552
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.337364421Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce1de860b639d4831bb74b0271aed6882b45febb37f60bcf8a31157b87ce0495" id=8c8a5d9a-ade5-4900-8145-c7d4e55526e2 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.337558851Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:d62409a7cb62709d4c960d445d571fc81522092e7114db20e1ad84bf57c96f31,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce1de860b639d4831bb74b0271aed6882b45febb37f60bcf8a31157b87ce0495],Size_:353087996,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=8c8a5d9a-ade5-4900-8145-c7d4e55526e2 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.342245579Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce1de860b639d4831bb74b0271aed6882b45febb37f60bcf8a31157b87ce0495" id=1972ca9d-a6de-4175-882d-a63f70e6622a name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.342425299Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:d62409a7cb62709d4c960d445d571fc81522092e7114db20e1ad84bf57c96f31,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce1de860b639d4831bb74b0271aed6882b45febb37f60bcf8a31157b87ce0495],Size_:353087996,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=1972ca9d-a6de-4175-882d-a63f70e6622a name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.343515118Z" level=info msg="Creating container: openshift-monitoring/thanos-querier-8654d9f96d-8g56r/oauth-proxy" id=735fb15b-29d6-4812-a569-f64da060ff53 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.343619074Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 16:33:18 ip-10-0-136-68 systemd[1]: Started crio-conmon-e67243ab218ed23f074f07b4b58bdc6cef13e3d5a051f61ad963652bef2d0a5b.scope.
Feb 23 16:33:18 ip-10-0-136-68 systemd[1]: Started libcontainer container e67243ab218ed23f074f07b4b58bdc6cef13e3d5a051f61ad963652bef2d0a5b.
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.493196530Z" level=info msg="Created container e67243ab218ed23f074f07b4b58bdc6cef13e3d5a051f61ad963652bef2d0a5b: openshift-monitoring/thanos-querier-8654d9f96d-8g56r/oauth-proxy" id=735fb15b-29d6-4812-a569-f64da060ff53 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.493631517Z" level=info msg="Starting container: e67243ab218ed23f074f07b4b58bdc6cef13e3d5a051f61ad963652bef2d0a5b" id=d449a711-c04f-42ef-9b6f-5e386c6e0643 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.505590466Z" level=info msg="Started container" PID=5797 containerID=e67243ab218ed23f074f07b4b58bdc6cef13e3d5a051f61ad963652bef2d0a5b description=openshift-monitoring/thanos-querier-8654d9f96d-8g56r/oauth-proxy id=d449a711-c04f-42ef-9b6f-5e386c6e0643 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7dbdc33b9d16d63ae4640580522adb732200c4a8f3f79a0430a98b7d876633f6
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.524774015Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=beb51ce8-3901-46cd-b26e-222c4232f8dd name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.525194732Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=beb51ce8-3901-46cd-b26e-222c4232f8dd name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.526128324Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=4ad47e8d-f7f9-49cc-9329-d711fe38f149 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.526283155Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=4ad47e8d-f7f9-49cc-9329-d711fe38f149 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.527442676Z" level=info msg="Creating container: openshift-monitoring/thanos-querier-8654d9f96d-8g56r/kube-rbac-proxy" id=c22cdded-f51c-4f0f-8042-e5e59f7de842 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.527564671Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 16:33:18 ip-10-0-136-68 systemd[1]: Started crio-conmon-7f0db4e5bbe607f5c8187e68c52009906ae8126f66c6ac604a83767da5e4a76d.scope.
Feb 23 16:33:18 ip-10-0-136-68 systemd[1]: Started libcontainer container 7f0db4e5bbe607f5c8187e68c52009906ae8126f66c6ac604a83767da5e4a76d.
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.676074772Z" level=info msg="Created container 7f0db4e5bbe607f5c8187e68c52009906ae8126f66c6ac604a83767da5e4a76d: openshift-monitoring/thanos-querier-8654d9f96d-8g56r/kube-rbac-proxy" id=c22cdded-f51c-4f0f-8042-e5e59f7de842 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.676731307Z" level=info msg="Starting container: 7f0db4e5bbe607f5c8187e68c52009906ae8126f66c6ac604a83767da5e4a76d" id=67b20de8-cd27-4d88-bcad-ab6897caed96 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.691294692Z" level=info msg="Started container" PID=5856 containerID=7f0db4e5bbe607f5c8187e68c52009906ae8126f66c6ac604a83767da5e4a76d description=openshift-monitoring/thanos-querier-8654d9f96d-8g56r/kube-rbac-proxy id=67b20de8-cd27-4d88-bcad-ab6897caed96 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7dbdc33b9d16d63ae4640580522adb732200c4a8f3f79a0430a98b7d876633f6
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.708168875Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1ac41d1f9f68b478647df3bed79ae2cd3ca1e8fb93e31de0fb82c2c9e6ff6fed" id=81c572f6-d5b2-435a-b66c-7a022263c2c9 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.708368741Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:4b5544f2b1fb54d82b04ad030305d937195d3556ba12e42d312ef4784079861b,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1ac41d1f9f68b478647df3bed79ae2cd3ca1e8fb93e31de0fb82c2c9e6ff6fed],Size_:325560759,Uid:&Int64Value{Value:1001,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=81c572f6-d5b2-435a-b66c-7a022263c2c9 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.709197764Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1ac41d1f9f68b478647df3bed79ae2cd3ca1e8fb93e31de0fb82c2c9e6ff6fed" id=2eb0f174-f7aa-4fbc-aad4-d0a0bebed790 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.709357298Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:4b5544f2b1fb54d82b04ad030305d937195d3556ba12e42d312ef4784079861b,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1ac41d1f9f68b478647df3bed79ae2cd3ca1e8fb93e31de0fb82c2c9e6ff6fed],Size_:325560759,Uid:&Int64Value{Value:1001,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=2eb0f174-f7aa-4fbc-aad4-d0a0bebed790 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.710122726Z" level=info msg="Creating container: openshift-monitoring/thanos-querier-8654d9f96d-8g56r/prom-label-proxy" id=cbd50601-a509-49df-8943-0898ecd1adda name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.710243671Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 16:33:18 ip-10-0-136-68 systemd[1]: Started crio-conmon-31b1e1b6e270b361795df897e4bae03083c71b568cb3a2ef0f00a99c2d07e79d.scope.
Feb 23 16:33:18 ip-10-0-136-68 systemd[1]: Started libcontainer container 31b1e1b6e270b361795df897e4bae03083c71b568cb3a2ef0f00a99c2d07e79d.
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.843078225Z" level=info msg="Created container 31b1e1b6e270b361795df897e4bae03083c71b568cb3a2ef0f00a99c2d07e79d: openshift-monitoring/thanos-querier-8654d9f96d-8g56r/prom-label-proxy" id=cbd50601-a509-49df-8943-0898ecd1adda name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.843430218Z" level=info msg="Starting container: 31b1e1b6e270b361795df897e4bae03083c71b568cb3a2ef0f00a99c2d07e79d" id=228fa4f9-0398-4234-9032-31faf5d3351e name=/runtime.v1.RuntimeService/StartContainer
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.855130856Z" level=info msg="Started container" PID=5906 containerID=31b1e1b6e270b361795df897e4bae03083c71b568cb3a2ef0f00a99c2d07e79d description=openshift-monitoring/thanos-querier-8654d9f96d-8g56r/prom-label-proxy id=228fa4f9-0398-4234-9032-31faf5d3351e name=/runtime.v1.RuntimeService/StartContainer sandboxID=7dbdc33b9d16d63ae4640580522adb732200c4a8f3f79a0430a98b7d876633f6
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.863793578Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=c180ae9b-77b6-45a8-aa36-c3b257397e2a name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.863984403Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=c180ae9b-77b6-45a8-aa36-c3b257397e2a name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.864563992Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=e1925f35-99ad-404a-b78b-151428382b9b name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.864764164Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=e1925f35-99ad-404a-b78b-151428382b9b name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.865421105Z" level=info msg="Creating container: openshift-monitoring/thanos-querier-8654d9f96d-8g56r/kube-rbac-proxy-rules" id=307d1ea1-4cbf-4d65-bc69-2da08ae13eef name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.865532242Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 16:33:18 ip-10-0-136-68 systemd[1]: Started crio-conmon-8fd1fd4af28c0cd056921011f6dde57e49a0e90ab488f6715d25ff3f7023241c.scope.
Feb 23 16:33:18 ip-10-0-136-68 systemd[1]: Started libcontainer container 8fd1fd4af28c0cd056921011f6dde57e49a0e90ab488f6715d25ff3f7023241c.
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.931845692Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7485774b6731351b7b283e72efb0a0c07d69623ac09616ba89f52466b6fec053\""
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.988156137Z" level=info msg="Created container 8fd1fd4af28c0cd056921011f6dde57e49a0e90ab488f6715d25ff3f7023241c: openshift-monitoring/thanos-querier-8654d9f96d-8g56r/kube-rbac-proxy-rules" id=307d1ea1-4cbf-4d65-bc69-2da08ae13eef name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.988635662Z" level=info msg="Starting container: 8fd1fd4af28c0cd056921011f6dde57e49a0e90ab488f6715d25ff3f7023241c" id=7b6a6483-1741-428a-8feb-e5909cb7b024 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 16:33:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:18.997486147Z" level=info msg="Started container" PID=5952 containerID=8fd1fd4af28c0cd056921011f6dde57e49a0e90ab488f6715d25ff3f7023241c description=openshift-monitoring/thanos-querier-8654d9f96d-8g56r/kube-rbac-proxy-rules id=7b6a6483-1741-428a-8feb-e5909cb7b024 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7dbdc33b9d16d63ae4640580522adb732200c4a8f3f79a0430a98b7d876633f6
Feb 23 16:33:19 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:19.007880126Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=1a24fcbe-9d5d-44ca-8556-50793597ab26 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:19 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:19.008062879Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=1a24fcbe-9d5d-44ca-8556-50793597ab26 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:19 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:19.008775534Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=9ae0d0cb-6561-440c-a9de-d91e5df50aaf name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:19 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:19.008923723Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=9ae0d0cb-6561-440c-a9de-d91e5df50aaf name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:19 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:19.009760466Z" level=info msg="Creating container: openshift-monitoring/thanos-querier-8654d9f96d-8g56r/kube-rbac-proxy-metrics" id=5dd8ed9b-7abd-42f8-9744-c361315bd0f4 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 16:33:19 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:19.009860889Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 16:33:19 ip-10-0-136-68 systemd[1]: Started crio-conmon-771648ecb2bf501e026416582c42bb47961cbfece2bf483bcbdac980be42c9ba.scope.
Feb 23 16:33:19 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:19.041867409Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9d617ec9a2e82f2b2bf2dcab9695d49426db17674bf970d4b1dc146d66db863b\"" Feb 23 16:33:19 ip-10-0-136-68 systemd[1]: Started libcontainer container 771648ecb2bf501e026416582c42bb47961cbfece2bf483bcbdac980be42c9ba. Feb 23 16:33:19 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:19.157102381Z" level=info msg="Created container 771648ecb2bf501e026416582c42bb47961cbfece2bf483bcbdac980be42c9ba: openshift-monitoring/thanos-querier-8654d9f96d-8g56r/kube-rbac-proxy-metrics" id=5dd8ed9b-7abd-42f8-9744-c361315bd0f4 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:33:19 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:19.158047525Z" level=info msg="Starting container: 771648ecb2bf501e026416582c42bb47961cbfece2bf483bcbdac980be42c9ba" id=986e85e8-26a6-44dc-beb2-03ce7dbbdda1 name=/runtime.v1.RuntimeService/StartContainer Feb 23 16:33:19 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:19.165704113Z" level=info msg="Started container" PID=5996 containerID=771648ecb2bf501e026416582c42bb47961cbfece2bf483bcbdac980be42c9ba description=openshift-monitoring/thanos-querier-8654d9f96d-8g56r/kube-rbac-proxy-metrics id=986e85e8-26a6-44dc-beb2-03ce7dbbdda1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7dbdc33b9d16d63ae4640580522adb732200c4a8f3f79a0430a98b7d876633f6 Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.287879 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5f79c9c848-8klpv" event=&{ID:84367d42-9f7a-49fb-9aab-aa7bc958829f Type:ContainerStarted Data:d5fe21f86e0e9fb8b9c0153e9789163d315360a1ed79f9bcfb0e4a1e07a79efd} Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.288004 2112 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-5f79c9c848-8klpv" Feb 23 
16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.290139 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8654d9f96d-8g56r" event=&{ID:a762f29d-1a7e-4d73-9c04-8d5fbbe65b32 Type:ContainerStarted Data:771648ecb2bf501e026416582c42bb47961cbfece2bf483bcbdac980be42c9ba} Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.290164 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8654d9f96d-8g56r" event=&{ID:a762f29d-1a7e-4d73-9c04-8d5fbbe65b32 Type:ContainerStarted Data:8fd1fd4af28c0cd056921011f6dde57e49a0e90ab488f6715d25ff3f7023241c} Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.290178 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8654d9f96d-8g56r" event=&{ID:a762f29d-1a7e-4d73-9c04-8d5fbbe65b32 Type:ContainerStarted Data:31b1e1b6e270b361795df897e4bae03083c71b568cb3a2ef0f00a99c2d07e79d} Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.290192 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8654d9f96d-8g56r" event=&{ID:a762f29d-1a7e-4d73-9c04-8d5fbbe65b32 Type:ContainerStarted Data:7f0db4e5bbe607f5c8187e68c52009906ae8126f66c6ac604a83767da5e4a76d} Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.290206 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8654d9f96d-8g56r" event=&{ID:a762f29d-1a7e-4d73-9c04-8d5fbbe65b32 Type:ContainerStarted Data:e67243ab218ed23f074f07b4b58bdc6cef13e3d5a051f61ad963652bef2d0a5b} Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.290220 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8654d9f96d-8g56r" event=&{ID:a762f29d-1a7e-4d73-9c04-8d5fbbe65b32 Type:ContainerStarted Data:ae4a8aeab6b939a04a77d3aca670cc5fd4377ee87669a63493fc130f44723b08} Feb 23 16:33:19 ip-10-0-136-68 
kubenswrapper[2112]: I0223 16:33:19.398897 2112 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-monitoring/alertmanager-main-1] Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.398946 2112 topology_manager.go:205] "Topology Admit Handler" Feb 23 16:33:19 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-pod457a2ca9_5414_414b_8731_42d2430a3275.slice. Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.440090 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/457a2ca9-5414-414b-8731-42d2430a3275-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-1\" (UID: \"457a2ca9-5414-414b-8731-42d2430a3275\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.440147 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/457a2ca9-5414-414b-8731-42d2430a3275-config-out\") pod \"alertmanager-main-1\" (UID: \"457a2ca9-5414-414b-8731-42d2430a3275\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.440183 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/457a2ca9-5414-414b-8731-42d2430a3275-web-config\") pod \"alertmanager-main-1\" (UID: \"457a2ca9-5414-414b-8731-42d2430a3275\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.440214 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/457a2ca9-5414-414b-8731-42d2430a3275-config-volume\") pod \"alertmanager-main-1\" (UID: \"457a2ca9-5414-414b-8731-42d2430a3275\") " 
pod="openshift-monitoring/alertmanager-main-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.440242 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-proxy\" (UniqueName: \"kubernetes.io/secret/457a2ca9-5414-414b-8731-42d2430a3275-secret-alertmanager-main-proxy\") pod \"alertmanager-main-1\" (UID: \"457a2ca9-5414-414b-8731-42d2430a3275\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.440275 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/457a2ca9-5414-414b-8731-42d2430a3275-metrics-client-ca\") pod \"alertmanager-main-1\" (UID: \"457a2ca9-5414-414b-8731-42d2430a3275\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.440305 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcr8w\" (UniqueName: \"kubernetes.io/projected/457a2ca9-5414-414b-8731-42d2430a3275-kube-api-access-mcr8w\") pod \"alertmanager-main-1\" (UID: \"457a2ca9-5414-414b-8731-42d2430a3275\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.440336 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/457a2ca9-5414-414b-8731-42d2430a3275-secret-alertmanager-main-tls\") pod \"alertmanager-main-1\" (UID: \"457a2ca9-5414-414b-8731-42d2430a3275\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.440371 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: 
\"kubernetes.io/secret/457a2ca9-5414-414b-8731-42d2430a3275-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-1\" (UID: \"457a2ca9-5414-414b-8731-42d2430a3275\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.440403 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/457a2ca9-5414-414b-8731-42d2430a3275-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-1\" (UID: \"457a2ca9-5414-414b-8731-42d2430a3275\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.440437 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/457a2ca9-5414-414b-8731-42d2430a3275-alertmanager-main-db\") pod \"alertmanager-main-1\" (UID: \"457a2ca9-5414-414b-8731-42d2430a3275\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.440468 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/457a2ca9-5414-414b-8731-42d2430a3275-tls-assets\") pod \"alertmanager-main-1\" (UID: \"457a2ca9-5414-414b-8731-42d2430a3275\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.460294 2112 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-monitoring/alertmanager-main-1] Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.480282 2112 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-monitoring/prometheus-k8s-1] Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.480316 2112 topology_manager.go:205] "Topology Admit Handler" Feb 23 16:33:19 ip-10-0-136-68 
systemd[1]: Created slice libcontainer container kubepods-burstable-podde160b09_a82e_4c1c_855b_4dfb3b3cbd7c.slice. Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.540864 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/457a2ca9-5414-414b-8731-42d2430a3275-tls-assets\") pod \"alertmanager-main-1\" (UID: \"457a2ca9-5414-414b-8731-42d2430a3275\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.540919 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-etcd-client-certs\" (UniqueName: \"kubernetes.io/secret/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-secret-kube-etcd-client-certs\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.540948 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.540975 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/457a2ca9-5414-414b-8731-42d2430a3275-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-1\" (UID: \"457a2ca9-5414-414b-8731-42d2430a3275\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.541005 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.541029 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-web-config\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.541207 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-proxy\" (UniqueName: \"kubernetes.io/secret/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-secret-prometheus-k8s-proxy\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.541261 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/457a2ca9-5414-414b-8731-42d2430a3275-config-out\") pod \"alertmanager-main-1\" (UID: \"457a2ca9-5414-414b-8731-42d2430a3275\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.541371 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-secret-metrics-client-certs\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.541404 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tls-assets\" (UniqueName: \"kubernetes.io/projected/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-tls-assets\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.541431 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-metrics-client-ca\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.541462 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/457a2ca9-5414-414b-8731-42d2430a3275-web-config\") pod \"alertmanager-main-1\" (UID: \"457a2ca9-5414-414b-8731-42d2430a3275\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.541491 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.541522 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5r25\" (UniqueName: \"kubernetes.io/projected/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-kube-api-access-s5r25\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.541554 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.541585 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.541613 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-prometheus-k8s-db\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.541640 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/457a2ca9-5414-414b-8731-42d2430a3275-config-volume\") pod \"alertmanager-main-1\" (UID: \"457a2ca9-5414-414b-8731-42d2430a3275\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.541996 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/457a2ca9-5414-414b-8731-42d2430a3275-config-out\") pod \"alertmanager-main-1\" (UID: \"457a2ca9-5414-414b-8731-42d2430a3275\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.542435 2112 reconciler.go:269] 
"operationExecutor.MountVolume started for volume \"secret-alertmanager-main-proxy\" (UniqueName: \"kubernetes.io/secret/457a2ca9-5414-414b-8731-42d2430a3275-secret-alertmanager-main-proxy\") pod \"alertmanager-main-1\" (UID: \"457a2ca9-5414-414b-8731-42d2430a3275\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.542490 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/457a2ca9-5414-414b-8731-42d2430a3275-metrics-client-ca\") pod \"alertmanager-main-1\" (UID: \"457a2ca9-5414-414b-8731-42d2430a3275\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.542517 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-secret-kube-rbac-proxy\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.542544 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-configmap-metrics-client-ca\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.542572 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-mcr8w\" (UniqueName: \"kubernetes.io/projected/457a2ca9-5414-414b-8731-42d2430a3275-kube-api-access-mcr8w\") pod \"alertmanager-main-1\" (UID: \"457a2ca9-5414-414b-8731-42d2430a3275\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: 
I0223 16:33:19.542597 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-config\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.542626 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/457a2ca9-5414-414b-8731-42d2430a3275-secret-alertmanager-main-tls\") pod \"alertmanager-main-1\" (UID: \"457a2ca9-5414-414b-8731-42d2430a3275\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.543049 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/457a2ca9-5414-414b-8731-42d2430a3275-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-1\" (UID: \"457a2ca9-5414-414b-8731-42d2430a3275\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.543171 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/457a2ca9-5414-414b-8731-42d2430a3275-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-1\" (UID: \"457a2ca9-5414-414b-8731-42d2430a3275\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.543212 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/457a2ca9-5414-414b-8731-42d2430a3275-alertmanager-main-db\") pod \"alertmanager-main-1\" (UID: \"457a2ca9-5414-414b-8731-42d2430a3275\") " 
pod="openshift-monitoring/alertmanager-main-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.543329 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-secret-grpc-tls\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.543364 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.543456 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-config-out\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.543629 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/457a2ca9-5414-414b-8731-42d2430a3275-alertmanager-main-db\") pod \"alertmanager-main-1\" (UID: \"457a2ca9-5414-414b-8731-42d2430a3275\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.544410 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/457a2ca9-5414-414b-8731-42d2430a3275-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-1\" (UID: 
\"457a2ca9-5414-414b-8731-42d2430a3275\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.545031 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/457a2ca9-5414-414b-8731-42d2430a3275-metrics-client-ca\") pod \"alertmanager-main-1\" (UID: \"457a2ca9-5414-414b-8731-42d2430a3275\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.545795 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/457a2ca9-5414-414b-8731-42d2430a3275-tls-assets\") pod \"alertmanager-main-1\" (UID: \"457a2ca9-5414-414b-8731-42d2430a3275\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.546026 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/457a2ca9-5414-414b-8731-42d2430a3275-web-config\") pod \"alertmanager-main-1\" (UID: \"457a2ca9-5414-414b-8731-42d2430a3275\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.546414 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/457a2ca9-5414-414b-8731-42d2430a3275-config-volume\") pod \"alertmanager-main-1\" (UID: \"457a2ca9-5414-414b-8731-42d2430a3275\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.548298 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/457a2ca9-5414-414b-8731-42d2430a3275-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-1\" (UID: 
\"457a2ca9-5414-414b-8731-42d2430a3275\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.549083 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-proxy\" (UniqueName: \"kubernetes.io/secret/457a2ca9-5414-414b-8731-42d2430a3275-secret-alertmanager-main-proxy\") pod \"alertmanager-main-1\" (UID: \"457a2ca9-5414-414b-8731-42d2430a3275\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.549749 2112 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-monitoring/prometheus-k8s-1] Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.557245 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/457a2ca9-5414-414b-8731-42d2430a3275-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-1\" (UID: \"457a2ca9-5414-414b-8731-42d2430a3275\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.559214 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/457a2ca9-5414-414b-8731-42d2430a3275-secret-alertmanager-main-tls\") pod \"alertmanager-main-1\" (UID: \"457a2ca9-5414-414b-8731-42d2430a3275\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.570098 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcr8w\" (UniqueName: \"kubernetes.io/projected/457a2ca9-5414-414b-8731-42d2430a3275-kube-api-access-mcr8w\") pod \"alertmanager-main-1\" (UID: \"457a2ca9-5414-414b-8731-42d2430a3275\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.644516 2112 
reconciler.go:269] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.644560 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-web-config\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.644588 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-proxy\" (UniqueName: \"kubernetes.io/secret/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-secret-prometheus-k8s-proxy\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.644617 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-secret-metrics-client-certs\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.644645 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-tls-assets\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.644698 2112 reconciler.go:269] 
"operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-metrics-client-ca\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.644727 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.644758 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.644785 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.644810 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-prometheus-k8s-db\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 
16:33:19.644834 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-s5r25\" (UniqueName: \"kubernetes.io/projected/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-kube-api-access-s5r25\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.644861 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-secret-kube-rbac-proxy\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.644887 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-configmap-metrics-client-ca\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.644912 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-config\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.644941 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-secret-grpc-tls\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.644967 2112 reconciler.go:269] 
"operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.644991 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-config-out\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.645020 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-kube-etcd-client-certs\" (UniqueName: \"kubernetes.io/secret/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-secret-kube-etcd-client-certs\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.645045 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.645725 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.649462 2112 
operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-metrics-client-ca\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.649989 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.650060 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-prometheus-k8s-db\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.651211 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-config-out\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.656879 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-secret-kube-rbac-proxy\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.657542 2112 operation_generator.go:730] "MountVolume.SetUp succeeded 
for volume \"secret-prometheus-k8s-proxy\" (UniqueName: \"kubernetes.io/secret/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-secret-prometheus-k8s-proxy\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.658175 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-web-config\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.658493 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-secret-grpc-tls\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.658955 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-configmap-metrics-client-ca\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.659547 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-tls-assets\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.659896 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: 
\"kubernetes.io/secret/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.660253 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-config\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.660614 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.660710 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-secret-metrics-client-certs\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.661186 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-kube-etcd-client-certs\" (UniqueName: \"kubernetes.io/secret/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-secret-kube-etcd-client-certs\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.661277 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" 
(UniqueName: \"kubernetes.io/secret/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.665512 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.667616 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5r25\" (UniqueName: \"kubernetes.io/projected/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-kube-api-access-s5r25\") pod \"prometheus-k8s-1\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") " pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.717461 2112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-1" Feb 23 16:33:19 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:19.717975157Z" level=info msg="Running pod sandbox: openshift-monitoring/alertmanager-main-1/POD" id=d035511f-5895-4db0-8a0d-75e526423937 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 16:33:19 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:19.718033278Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 16:33:19 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:19.738440337Z" level=info msg="Got pod network &{Name:alertmanager-main-1 Namespace:openshift-monitoring ID:2dd14dc79891cf23e7c39348cd7da2169eeea951880734c51df6372fcee24608 UID:457a2ca9-5414-414b-8731-42d2430a3275 NetNS:/var/run/netns/13a1e099-392b-4597-85ad-d1b6663e05ea Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 16:33:19 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:19.738474728Z" level=info msg="Adding pod openshift-monitoring_alertmanager-main-1 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 16:33:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:19.795848 2112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:19 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:19.796425191Z" level=info msg="Running pod sandbox: openshift-monitoring/prometheus-k8s-1/POD" id=ddbefc96-8fab-4f92-86c9-7f41e772f584 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 16:33:19 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:19.796481618Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 16:33:19 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:19.812747998Z" level=info msg="Got pod network &{Name:prometheus-k8s-1 Namespace:openshift-monitoring ID:d1ad2e51d68aca03cf74cbf08b34debc7171c5e2526858978cf073bb895b423f UID:de160b09-a82e-4c1c-855b-4dfb3b3cbd7c NetNS:/var/run/netns/a6d016c1-2d14-4c02-896b-9581b100a834 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 16:33:19 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:19.812779683Z" level=info msg="Adding pod openshift-monitoring_prometheus-k8s-1 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 16:33:19 ip-10-0-136-68 systemd-udevd[6105]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. 
Feb 23 16:33:19 ip-10-0-136-68 systemd-udevd[6105]: Could not generate persistent MAC address for 2dd14dc79891cf2: No such file or directory Feb 23 16:33:19 ip-10-0-136-68 NetworkManager[1147]: [1677169999.9093] manager: (2dd14dc79891cf2): new Veth device (/org/freedesktop/NetworkManager/Devices/46) Feb 23 16:33:19 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): 2dd14dc79891cf2: link is not ready Feb 23 16:33:19 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready Feb 23 16:33:19 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 23 16:33:19 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): 2dd14dc79891cf2: link becomes ready Feb 23 16:33:19 ip-10-0-136-68 NetworkManager[1147]: [1677169999.9261] device (2dd14dc79891cf2): carrier: link connected Feb 23 16:33:19 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00082|bridge|INFO|bridge br-int: added interface 2dd14dc79891cf2 on port 18 Feb 23 16:33:19 ip-10-0-136-68 NetworkManager[1147]: [1677169999.9432] manager: (2dd14dc79891cf2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47) Feb 23 16:33:19 ip-10-0-136-68 kernel: device 2dd14dc79891cf2 entered promiscuous mode Feb 23 16:33:19 ip-10-0-136-68 systemd-udevd[6124]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. 
Feb 23 16:33:19 ip-10-0-136-68 systemd-udevd[6124]: Could not generate persistent MAC address for d1ad2e51d68aca0: No such file or directory Feb 23 16:33:19 ip-10-0-136-68 NetworkManager[1147]: [1677169999.9948] manager: (d1ad2e51d68aca0): new Veth device (/org/freedesktop/NetworkManager/Devices/48) Feb 23 16:33:19 ip-10-0-136-68 NetworkManager[1147]: [1677169999.9960] device (d1ad2e51d68aca0): carrier: link connected Feb 23 16:33:19 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): d1ad2e51d68aca0: link is not ready Feb 23 16:33:19 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): d1ad2e51d68aca0: link becomes ready Feb 23 16:33:20 ip-10-0-136-68 NetworkManager[1147]: [1677170000.0366] manager: (d1ad2e51d68aca0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49) Feb 23 16:33:20 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00083|bridge|INFO|bridge br-int: added interface d1ad2e51d68aca0 on port 19 Feb 23 16:33:20 ip-10-0-136-68 kernel: device d1ad2e51d68aca0 entered promiscuous mode Feb 23 16:33:20 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:20.058942 2112 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-monitoring/alertmanager-main-1] Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: I0223 16:33:19.894282 6080 ovs.go:90] Maximum command line arguments set to: 191102 Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: 2023-02-23T16:33:20Z [verbose] Add: openshift-monitoring:alertmanager-main-1:457a2ca9-5414-414b-8731-42d2430a3275:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"2dd14dc79891cf2","mac":"e2:a4:b6:f9:ef:8f"},{"name":"eth0","mac":"0a:58:0a:81:02:14","sandbox":"/var/run/netns/13a1e099-392b-4597-85ad-d1b6663e05ea"}],"ips":[{"version":"4","interface":1,"address":"10.129.2.20/23","gateway":"10.129.2.1"}],"dns":{}} Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: I0223 16:33:20.036718 6064 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-monitoring", Name:"alertmanager-main-1", 
UID:"457a2ca9-5414-414b-8731-42d2430a3275", APIVersion:"v1", ResourceVersion:"45626", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.129.2.20/23] from ovn-kubernetes Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.061735052Z" level=info msg="Got pod network &{Name:alertmanager-main-1 Namespace:openshift-monitoring ID:2dd14dc79891cf23e7c39348cd7da2169eeea951880734c51df6372fcee24608 UID:457a2ca9-5414-414b-8731-42d2430a3275 NetNS:/var/run/netns/13a1e099-392b-4597-85ad-d1b6663e05ea Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.061855279Z" level=info msg="Checking pod openshift-monitoring_alertmanager-main-1 for CNI network multus-cni-network (type=multus)" Feb 23 16:33:20 ip-10-0-136-68 kubenswrapper[2112]: W0223 16:33:20.064265 2112 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod457a2ca9_5414_414b_8731_42d2430a3275.slice/crio-2dd14dc79891cf23e7c39348cd7da2169eeea951880734c51df6372fcee24608.scope WatchSource:0}: Error finding container 2dd14dc79891cf23e7c39348cd7da2169eeea951880734c51df6372fcee24608: Status 404 returned error can't find the container with id 2dd14dc79891cf23e7c39348cd7da2169eeea951880734c51df6372fcee24608 Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.065230877Z" level=info msg="Ran pod sandbox 2dd14dc79891cf23e7c39348cd7da2169eeea951880734c51df6372fcee24608 with infra container: openshift-monitoring/alertmanager-main-1/POD" id=d035511f-5895-4db0-8a0d-75e526423937 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.066081712Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fb9be9744b6ce1f692325241b066184282ea7ee06dbd191a0f19b55b0ad8736" 
id=076923b9-e88f-407a-b638-3c0a8888eeef name=/runtime.v1.ImageService/ImageStatus Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.066256404Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ee2df3b6c5f959807b3fab8b0b30c981e2f43ef273dfbbbf5bb9a469aeeb3d8d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fb9be9744b6ce1f692325241b066184282ea7ee06dbd191a0f19b55b0ad8736],Size_:367066685,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=076923b9-e88f-407a-b638-3c0a8888eeef name=/runtime.v1.ImageService/ImageStatus Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.082971429Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fb9be9744b6ce1f692325241b066184282ea7ee06dbd191a0f19b55b0ad8736" id=1a2f919c-e409-4886-9bc0-8a9218da2293 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.083262989Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ee2df3b6c5f959807b3fab8b0b30c981e2f43ef273dfbbbf5bb9a469aeeb3d8d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fb9be9744b6ce1f692325241b066184282ea7ee06dbd191a0f19b55b0ad8736],Size_:367066685,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=1a2f919c-e409-4886-9bc0-8a9218da2293 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.084445626Z" level=info msg="Creating container: openshift-monitoring/alertmanager-main-1/alertmanager" id=4ec5cc03-e5d5-44a7-a96e-61435b5f6d82 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.084554610Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: I0223 16:33:19.983138 6097 ovs.go:90] Maximum command line 
arguments set to: 191102 Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: 2023-02-23T16:33:20Z [verbose] Add: openshift-monitoring:prometheus-k8s-1:de160b09-a82e-4c1c-855b-4dfb3b3cbd7c:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"d1ad2e51d68aca0","mac":"aa:f1:d8:7b:20:7e"},{"name":"eth0","mac":"0a:58:0a:81:02:15","sandbox":"/var/run/netns/a6d016c1-2d14-4c02-896b-9581b100a834"}],"ips":[{"version":"4","interface":1,"address":"10.129.2.21/23","gateway":"10.129.2.1"}],"dns":{}} Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: I0223 16:33:20.106445 6089 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-monitoring", Name:"prometheus-k8s-1", UID:"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c", APIVersion:"v1", ResourceVersion:"45636", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.129.2.21/23] from ovn-kubernetes Feb 23 16:33:20 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:20.130093 2112 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-monitoring/prometheus-k8s-1] Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.130296108Z" level=info msg="Got pod network &{Name:prometheus-k8s-1 Namespace:openshift-monitoring ID:d1ad2e51d68aca03cf74cbf08b34debc7171c5e2526858978cf073bb895b423f UID:de160b09-a82e-4c1c-855b-4dfb3b3cbd7c NetNS:/var/run/netns/a6d016c1-2d14-4c02-896b-9581b100a834 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.130425437Z" level=info msg="Checking pod openshift-monitoring_prometheus-k8s-1 for CNI network multus-cni-network (type=multus)" Feb 23 16:33:20 ip-10-0-136-68 systemd[1]: Started crio-conmon-5326dfbd58af8cae61b1d5891c2250d8848c9526679bdc562b03ccc7a2ef3f98.scope. 
Feb 23 16:33:20 ip-10-0-136-68 kubenswrapper[2112]: W0223 16:33:20.133188 2112 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde160b09_a82e_4c1c_855b_4dfb3b3cbd7c.slice/crio-d1ad2e51d68aca03cf74cbf08b34debc7171c5e2526858978cf073bb895b423f.scope WatchSource:0}: Error finding container d1ad2e51d68aca03cf74cbf08b34debc7171c5e2526858978cf073bb895b423f: Status 404 returned error can't find the container with id d1ad2e51d68aca03cf74cbf08b34debc7171c5e2526858978cf073bb895b423f Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.135394004Z" level=info msg="Ran pod sandbox d1ad2e51d68aca03cf74cbf08b34debc7171c5e2526858978cf073bb895b423f with infra container: openshift-monitoring/prometheus-k8s-1/POD" id=ddbefc96-8fab-4f92-86c9-7f41e772f584 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.136164540Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:54d3aea3fb46824c7af49ff90c30c44c5409b64745f649c1e19f43c58dab6008" id=fc86d3e0-dfd7-4d7b-aea0-09d22a33a0d0 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.136459167Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ba9942c6b8556723b2ced172e5fb9c827a5aacee27aa66e7952a42d2b636240d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:54d3aea3fb46824c7af49ff90c30c44c5409b64745f649c1e19f43c58dab6008],Size_:359319908,Uid:&Int64Value{Value:1001,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=fc86d3e0-dfd7-4d7b-aea0-09d22a33a0d0 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.137196614Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:54d3aea3fb46824c7af49ff90c30c44c5409b64745f649c1e19f43c58dab6008" 
id=77b4ed3d-8c78-4d8b-89e2-96e9ccf9698e name=/runtime.v1.ImageService/ImageStatus Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.137361942Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ba9942c6b8556723b2ced172e5fb9c827a5aacee27aa66e7952a42d2b636240d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:54d3aea3fb46824c7af49ff90c30c44c5409b64745f649c1e19f43c58dab6008],Size_:359319908,Uid:&Int64Value{Value:1001,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=77b4ed3d-8c78-4d8b-89e2-96e9ccf9698e name=/runtime.v1.ImageService/ImageStatus Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.138236446Z" level=info msg="Creating container: openshift-monitoring/prometheus-k8s-1/init-config-reloader" id=528e4d2a-dd91-4f19-a6c0-0ecef3ed35b3 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.138333919Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 16:33:20 ip-10-0-136-68 systemd[1]: run-runc-5326dfbd58af8cae61b1d5891c2250d8848c9526679bdc562b03ccc7a2ef3f98-runc.A1h6SZ.mount: Succeeded. Feb 23 16:33:20 ip-10-0-136-68 systemd[1]: Started libcontainer container 5326dfbd58af8cae61b1d5891c2250d8848c9526679bdc562b03ccc7a2ef3f98. Feb 23 16:33:20 ip-10-0-136-68 systemd[1]: Started crio-conmon-5fe9d0d23be0cdb30613e0cffa98c335053b71da05d455b11c0525f7f262d8d2.scope. Feb 23 16:33:20 ip-10-0-136-68 systemd[1]: Started libcontainer container 5fe9d0d23be0cdb30613e0cffa98c335053b71da05d455b11c0525f7f262d8d2. 
Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.222186813Z" level=info msg="Created container 5326dfbd58af8cae61b1d5891c2250d8848c9526679bdc562b03ccc7a2ef3f98: openshift-monitoring/alertmanager-main-1/alertmanager" id=4ec5cc03-e5d5-44a7-a96e-61435b5f6d82 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.222616092Z" level=info msg="Starting container: 5326dfbd58af8cae61b1d5891c2250d8848c9526679bdc562b03ccc7a2ef3f98" id=d1e0be28-6fa0-45ea-8e68-bce1a971a07e name=/runtime.v1.RuntimeService/StartContainer Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.233779537Z" level=info msg="Started container" PID=6157 containerID=5326dfbd58af8cae61b1d5891c2250d8848c9526679bdc562b03ccc7a2ef3f98 description=openshift-monitoring/alertmanager-main-1/alertmanager id=d1e0be28-6fa0-45ea-8e68-bce1a971a07e name=/runtime.v1.RuntimeService/StartContainer sandboxID=2dd14dc79891cf23e7c39348cd7da2169eeea951880734c51df6372fcee24608 Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.242610812Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:54d3aea3fb46824c7af49ff90c30c44c5409b64745f649c1e19f43c58dab6008" id=030fa64d-2666-4f39-991f-4f8bcc8d5f9e name=/runtime.v1.ImageService/ImageStatus Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.242899191Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ba9942c6b8556723b2ced172e5fb9c827a5aacee27aa66e7952a42d2b636240d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:54d3aea3fb46824c7af49ff90c30c44c5409b64745f649c1e19f43c58dab6008],Size_:359319908,Uid:&Int64Value{Value:1001,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=030fa64d-2666-4f39-991f-4f8bcc8d5f9e name=/runtime.v1.ImageService/ImageStatus Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.245282171Z" level=info msg="Checking 
image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:54d3aea3fb46824c7af49ff90c30c44c5409b64745f649c1e19f43c58dab6008" id=fa8f41b6-c788-48cb-9a0a-69058c38203d name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.245505861Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ba9942c6b8556723b2ced172e5fb9c827a5aacee27aa66e7952a42d2b636240d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:54d3aea3fb46824c7af49ff90c30c44c5409b64745f649c1e19f43c58dab6008],Size_:359319908,Uid:&Int64Value{Value:1001,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=fa8f41b6-c788-48cb-9a0a-69058c38203d name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.246405365Z" level=info msg="Creating container: openshift-monitoring/alertmanager-main-1/config-reloader" id=849a6f90-7845-4d67-ba10-62cc28d59052 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.246480474Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.246823939Z" level=info msg="Created container 5fe9d0d23be0cdb30613e0cffa98c335053b71da05d455b11c0525f7f262d8d2: openshift-monitoring/prometheus-k8s-1/init-config-reloader" id=528e4d2a-dd91-4f19-a6c0-0ecef3ed35b3 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.247097812Z" level=info msg="Starting container: 5fe9d0d23be0cdb30613e0cffa98c335053b71da05d455b11c0525f7f262d8d2" id=f0bc8da9-8caa-497e-b46a-b4a413d28f69 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.258174070Z" level=info msg="Started container" PID=6175 containerID=5fe9d0d23be0cdb30613e0cffa98c335053b71da05d455b11c0525f7f262d8d2 description=openshift-monitoring/prometheus-k8s-1/init-config-reloader id=f0bc8da9-8caa-497e-b46a-b4a413d28f69 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d1ad2e51d68aca03cf74cbf08b34debc7171c5e2526858978cf073bb895b423f
Feb 23 16:33:20 ip-10-0-136-68 systemd[1]: Started crio-conmon-c356477d6c838eeac943481c9d122a0d516b24021e5421594692f64779dd9e73.scope.
Feb 23 16:33:20 ip-10-0-136-68 systemd[1]: Started libcontainer container c356477d6c838eeac943481c9d122a0d516b24021e5421594692f64779dd9e73.
Feb 23 16:33:20 ip-10-0-136-68 systemd[1]: crio-5fe9d0d23be0cdb30613e0cffa98c335053b71da05d455b11c0525f7f262d8d2.scope: Succeeded.
Feb 23 16:33:20 ip-10-0-136-68 systemd[1]: crio-5fe9d0d23be0cdb30613e0cffa98c335053b71da05d455b11c0525f7f262d8d2.scope: Consumed 32ms CPU time
Feb 23 16:33:20 ip-10-0-136-68 systemd[1]: crio-conmon-5fe9d0d23be0cdb30613e0cffa98c335053b71da05d455b11c0525f7f262d8d2.scope: Succeeded.
Feb 23 16:33:20 ip-10-0-136-68 systemd[1]: crio-conmon-5fe9d0d23be0cdb30613e0cffa98c335053b71da05d455b11c0525f7f262d8d2.scope: Consumed 24ms CPU time
Feb 23 16:33:20 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:20.297449 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:457a2ca9-5414-414b-8731-42d2430a3275 Type:ContainerStarted Data:5326dfbd58af8cae61b1d5891c2250d8848c9526679bdc562b03ccc7a2ef3f98}
Feb 23 16:33:20 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:20.297486 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:457a2ca9-5414-414b-8731-42d2430a3275 Type:ContainerStarted Data:2dd14dc79891cf23e7c39348cd7da2169eeea951880734c51df6372fcee24608}
Feb 23 16:33:20 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:20.298497 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-1" event=&{ID:de160b09-a82e-4c1c-855b-4dfb3b3cbd7c Type:ContainerStarted Data:5fe9d0d23be0cdb30613e0cffa98c335053b71da05d455b11c0525f7f262d8d2}
Feb 23 16:33:20 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:20.298523 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-1" event=&{ID:de160b09-a82e-4c1c-855b-4dfb3b3cbd7c Type:ContainerStarted Data:d1ad2e51d68aca03cf74cbf08b34debc7171c5e2526858978cf073bb895b423f}
Feb 23 16:33:20 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:20.300813 2112 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-8654d9f96d-8g56r"
Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.360507808Z" level=info msg="Created container c356477d6c838eeac943481c9d122a0d516b24021e5421594692f64779dd9e73: openshift-monitoring/alertmanager-main-1/config-reloader" id=849a6f90-7845-4d67-ba10-62cc28d59052 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.360896661Z" level=info msg="Starting container: c356477d6c838eeac943481c9d122a0d516b24021e5421594692f64779dd9e73" id=5cd72aaa-2c10-48e2-ad5f-191619ebf6b1 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.367979025Z" level=info msg="Started container" PID=6251 containerID=c356477d6c838eeac943481c9d122a0d516b24021e5421594692f64779dd9e73 description=openshift-monitoring/alertmanager-main-1/config-reloader id=5cd72aaa-2c10-48e2-ad5f-191619ebf6b1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2dd14dc79891cf23e7c39348cd7da2169eeea951880734c51df6372fcee24608
Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.375807625Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce1de860b639d4831bb74b0271aed6882b45febb37f60bcf8a31157b87ce0495" id=8c64c2dc-6adb-47d4-aaaf-ed0ab0e14421 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.375971636Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:d62409a7cb62709d4c960d445d571fc81522092e7114db20e1ad84bf57c96f31,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce1de860b639d4831bb74b0271aed6882b45febb37f60bcf8a31157b87ce0495],Size_:353087996,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=8c64c2dc-6adb-47d4-aaaf-ed0ab0e14421 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.376518978Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce1de860b639d4831bb74b0271aed6882b45febb37f60bcf8a31157b87ce0495" id=149cd1f9-11db-4885-b628-8f55d751a18d name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.376650586Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:d62409a7cb62709d4c960d445d571fc81522092e7114db20e1ad84bf57c96f31,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce1de860b639d4831bb74b0271aed6882b45febb37f60bcf8a31157b87ce0495],Size_:353087996,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=149cd1f9-11db-4885-b628-8f55d751a18d name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.377355924Z" level=info msg="Creating container: openshift-monitoring/alertmanager-main-1/alertmanager-proxy" id=1e7084cf-5278-4f3c-b4ce-179d14b8f691 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.377451533Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 16:33:20 ip-10-0-136-68 systemd[1]: Started crio-conmon-b01a99b1da48a49deccbb743ef250c07b23d71886bfefb512f6b56e0ddbd7428.scope.
Feb 23 16:33:20 ip-10-0-136-68 systemd[1]: Started libcontainer container b01a99b1da48a49deccbb743ef250c07b23d71886bfefb512f6b56e0ddbd7428.
Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.467283998Z" level=info msg="Created container b01a99b1da48a49deccbb743ef250c07b23d71886bfefb512f6b56e0ddbd7428: openshift-monitoring/alertmanager-main-1/alertmanager-proxy" id=1e7084cf-5278-4f3c-b4ce-179d14b8f691 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.467755103Z" level=info msg="Starting container: b01a99b1da48a49deccbb743ef250c07b23d71886bfefb512f6b56e0ddbd7428" id=48e42081-5275-487f-8811-abedaef59109 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.476834327Z" level=info msg="Started container" PID=6299 containerID=b01a99b1da48a49deccbb743ef250c07b23d71886bfefb512f6b56e0ddbd7428 description=openshift-monitoring/alertmanager-main-1/alertmanager-proxy id=48e42081-5275-487f-8811-abedaef59109 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2dd14dc79891cf23e7c39348cd7da2169eeea951880734c51df6372fcee24608
Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.491407969Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=07f7b05c-f931-48ff-a1c4-428460103be9 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.491595987Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=07f7b05c-f931-48ff-a1c4-428460103be9 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.492254916Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=9eccdd32-54d1-4241-90a3-260e5fdf907f name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.492408946Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=9eccdd32-54d1-4241-90a3-260e5fdf907f name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.493625476Z" level=info msg="Creating container: openshift-monitoring/alertmanager-main-1/kube-rbac-proxy" id=08e5b183-4751-435c-95b2-72a65ba76210 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.493925933Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 16:33:20 ip-10-0-136-68 systemd[1]: Started crio-conmon-9aa60954acc454a8385d2204d9658fbee6e16bd996e0453fc2ed8fcbed420fe9.scope.
Feb 23 16:33:20 ip-10-0-136-68 systemd[1]: Started libcontainer container 9aa60954acc454a8385d2204d9658fbee6e16bd996e0453fc2ed8fcbed420fe9.
Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.685503588Z" level=info msg="Created container 9aa60954acc454a8385d2204d9658fbee6e16bd996e0453fc2ed8fcbed420fe9: openshift-monitoring/alertmanager-main-1/kube-rbac-proxy" id=08e5b183-4751-435c-95b2-72a65ba76210 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.686136986Z" level=info msg="Starting container: 9aa60954acc454a8385d2204d9658fbee6e16bd996e0453fc2ed8fcbed420fe9" id=e8cb2e67-5873-4dbd-9888-7d72f9916006 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.764463661Z" level=info msg="Started container" PID=6350 containerID=9aa60954acc454a8385d2204d9658fbee6e16bd996e0453fc2ed8fcbed420fe9 description=openshift-monitoring/alertmanager-main-1/kube-rbac-proxy id=e8cb2e67-5873-4dbd-9888-7d72f9916006 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2dd14dc79891cf23e7c39348cd7da2169eeea951880734c51df6372fcee24608
Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.798763653Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=118045e4-27ac-4360-9ed8-6e62d3a873e3 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.798996787Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=118045e4-27ac-4360-9ed8-6e62d3a873e3 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.807393908Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=e6c07138-a7f5-4fb7-b7e9-72174ea45ecd name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.807597265Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=e6c07138-a7f5-4fb7-b7e9-72174ea45ecd name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.809094223Z" level=info msg="Creating container: openshift-monitoring/alertmanager-main-1/kube-rbac-proxy-metric" id=8efc759d-cbc7-4e73-8e3a-9ee4e7d51d11 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.809212708Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 16:33:20 ip-10-0-136-68 systemd[1]: Started crio-conmon-453dbcdcd062f0a5ca4b764fa2cd6efb52f9f36011e7143ba6d3083c773a1e02.scope.
Feb 23 16:33:20 ip-10-0-136-68 systemd[1]: Started libcontainer container 453dbcdcd062f0a5ca4b764fa2cd6efb52f9f36011e7143ba6d3083c773a1e02.
Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.983974108Z" level=info msg="Created container 453dbcdcd062f0a5ca4b764fa2cd6efb52f9f36011e7143ba6d3083c773a1e02: openshift-monitoring/alertmanager-main-1/kube-rbac-proxy-metric" id=8efc759d-cbc7-4e73-8e3a-9ee4e7d51d11 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 16:33:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:20.993408179Z" level=info msg="Starting container: 453dbcdcd062f0a5ca4b764fa2cd6efb52f9f36011e7143ba6d3083c773a1e02" id=fcaddacb-d640-4efa-b0d3-b4f4fdc3e6c3 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 16:33:21 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:21.016756994Z" level=info msg="Started container" PID=6392 containerID=453dbcdcd062f0a5ca4b764fa2cd6efb52f9f36011e7143ba6d3083c773a1e02 description=openshift-monitoring/alertmanager-main-1/kube-rbac-proxy-metric id=fcaddacb-d640-4efa-b0d3-b4f4fdc3e6c3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2dd14dc79891cf23e7c39348cd7da2169eeea951880734c51df6372fcee24608
Feb 23 16:33:21 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:21.060580489Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1ac41d1f9f68b478647df3bed79ae2cd3ca1e8fb93e31de0fb82c2c9e6ff6fed" id=e52c00b5-ca06-4fbf-89f0-ff8de26b1c1e name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:21 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:21.061926295Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:4b5544f2b1fb54d82b04ad030305d937195d3556ba12e42d312ef4784079861b,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1ac41d1f9f68b478647df3bed79ae2cd3ca1e8fb93e31de0fb82c2c9e6ff6fed],Size_:325560759,Uid:&Int64Value{Value:1001,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=e52c00b5-ca06-4fbf-89f0-ff8de26b1c1e name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:21 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:21.084298692Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1ac41d1f9f68b478647df3bed79ae2cd3ca1e8fb93e31de0fb82c2c9e6ff6fed" id=54456f2f-1df2-4e92-b720-c13a7b6d956e name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:21 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:21.084957723Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:4b5544f2b1fb54d82b04ad030305d937195d3556ba12e42d312ef4784079861b,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1ac41d1f9f68b478647df3bed79ae2cd3ca1e8fb93e31de0fb82c2c9e6ff6fed],Size_:325560759,Uid:&Int64Value{Value:1001,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=54456f2f-1df2-4e92-b720-c13a7b6d956e name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:21 ip-10-0-136-68 conmon[6144]: conmon 5326dfbd58af8cae61b1 : container 6157 exited with status 1
Feb 23 16:33:21 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:21.115873849Z" level=info msg="Creating container: openshift-monitoring/alertmanager-main-1/prom-label-proxy" id=28500017-afc9-4652-97db-c10773e1839e name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 16:33:21 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:21.116012155Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 16:33:21 ip-10-0-136-68 systemd[1]: crio-5326dfbd58af8cae61b1d5891c2250d8848c9526679bdc562b03ccc7a2ef3f98.scope: Succeeded.
Feb 23 16:33:21 ip-10-0-136-68 systemd[1]: crio-5326dfbd58af8cae61b1d5891c2250d8848c9526679bdc562b03ccc7a2ef3f98.scope: Consumed 96ms CPU time
Feb 23 16:33:21 ip-10-0-136-68 systemd[1]: crio-conmon-5326dfbd58af8cae61b1d5891c2250d8848c9526679bdc562b03ccc7a2ef3f98.scope: Succeeded.
Feb 23 16:33:21 ip-10-0-136-68 systemd[1]: crio-conmon-5326dfbd58af8cae61b1d5891c2250d8848c9526679bdc562b03ccc7a2ef3f98.scope: Consumed 27ms CPU time
Feb 23 16:33:21 ip-10-0-136-68 systemd[1]: Started crio-conmon-f3cc58399f58d034a5ea692d08b921fd2849975f11374292d872563df6c87173.scope.
Feb 23 16:33:21 ip-10-0-136-68 systemd[1]: run-runc-f3cc58399f58d034a5ea692d08b921fd2849975f11374292d872563df6c87173-runc.ERC3FB.mount: Succeeded.
Feb 23 16:33:21 ip-10-0-136-68 systemd[1]: Started libcontainer container f3cc58399f58d034a5ea692d08b921fd2849975f11374292d872563df6c87173.
Feb 23 16:33:21 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:21.276786966Z" level=info msg="Created container f3cc58399f58d034a5ea692d08b921fd2849975f11374292d872563df6c87173: openshift-monitoring/alertmanager-main-1/prom-label-proxy" id=28500017-afc9-4652-97db-c10773e1839e name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 16:33:21 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:21.277227966Z" level=info msg="Starting container: f3cc58399f58d034a5ea692d08b921fd2849975f11374292d872563df6c87173" id=8abbc32d-7cd1-46c3-90a2-92f6c1f49e2b name=/runtime.v1.RuntimeService/StartContainer
Feb 23 16:33:21 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:21.286547512Z" level=info msg="Started container" PID=6455 containerID=f3cc58399f58d034a5ea692d08b921fd2849975f11374292d872563df6c87173 description=openshift-monitoring/alertmanager-main-1/prom-label-proxy id=8abbc32d-7cd1-46c3-90a2-92f6c1f49e2b name=/runtime.v1.RuntimeService/StartContainer sandboxID=2dd14dc79891cf23e7c39348cd7da2169eeea951880734c51df6372fcee24608
Feb 23 16:33:21 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:21.306863 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-1_457a2ca9-5414-414b-8731-42d2430a3275/alertmanager/0.log"
Feb 23 16:33:21 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:21.306912 2112 generic.go:296] "Generic (PLEG): container finished" podID=457a2ca9-5414-414b-8731-42d2430a3275 containerID="5326dfbd58af8cae61b1d5891c2250d8848c9526679bdc562b03ccc7a2ef3f98" exitCode=1
Feb 23 16:33:21 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:21.307469 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:457a2ca9-5414-414b-8731-42d2430a3275 Type:ContainerDied Data:5326dfbd58af8cae61b1d5891c2250d8848c9526679bdc562b03ccc7a2ef3f98}
Feb 23 16:33:21 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:21.307500 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:457a2ca9-5414-414b-8731-42d2430a3275 Type:ContainerStarted Data:453dbcdcd062f0a5ca4b764fa2cd6efb52f9f36011e7143ba6d3083c773a1e02}
Feb 23 16:33:21 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:21.307602 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:457a2ca9-5414-414b-8731-42d2430a3275 Type:ContainerStarted Data:9aa60954acc454a8385d2204d9658fbee6e16bd996e0453fc2ed8fcbed420fe9}
Feb 23 16:33:21 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:21.307617 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:457a2ca9-5414-414b-8731-42d2430a3275 Type:ContainerStarted Data:b01a99b1da48a49deccbb743ef250c07b23d71886bfefb512f6b56e0ddbd7428}
Feb 23 16:33:21 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:21.307631 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:457a2ca9-5414-414b-8731-42d2430a3275 Type:ContainerStarted Data:c356477d6c838eeac943481c9d122a0d516b24021e5421594692f64779dd9e73}
Feb 23 16:33:21 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:21.308342 2112 generic.go:296] "Generic (PLEG): container finished" podID=de160b09-a82e-4c1c-855b-4dfb3b3cbd7c containerID="5fe9d0d23be0cdb30613e0cffa98c335053b71da05d455b11c0525f7f262d8d2" exitCode=0
Feb 23 16:33:21 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:21.308428 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-1" event=&{ID:de160b09-a82e-4c1c-855b-4dfb3b3cbd7c Type:ContainerDied Data:5fe9d0d23be0cdb30613e0cffa98c335053b71da05d455b11c0525f7f262d8d2}
Feb 23 16:33:21 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:21.311002319Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec4febdf3d180251e9d97c5039560c32d0513739dc469fa684af2fa5fe4de434" id=5863db85-cd1b-475b-89c3-c74ccb1e4a66 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:21 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:21.311193562Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:7d6a7a794d1c53f9801c5c0cd31acc0bbeac302f72326d692b09c25b56dec99d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec4febdf3d180251e9d97c5039560c32d0513739dc469fa684af2fa5fe4de434],Size_:466962930,Uid:nil,Username:nobody,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=5863db85-cd1b-475b-89c3-c74ccb1e4a66 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:21 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:21.324125054Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec4febdf3d180251e9d97c5039560c32d0513739dc469fa684af2fa5fe4de434" id=5f3fda31-03de-4c3b-8a88-e8b9927ef38c name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:21 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:21.324309216Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:7d6a7a794d1c53f9801c5c0cd31acc0bbeac302f72326d692b09c25b56dec99d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec4febdf3d180251e9d97c5039560c32d0513739dc469fa684af2fa5fe4de434],Size_:466962930,Uid:nil,Username:nobody,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=5f3fda31-03de-4c3b-8a88-e8b9927ef38c name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:21 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:21.327065110Z" level=info msg="Creating container: openshift-monitoring/prometheus-k8s-1/prometheus" id=94f11b72-67ba-4368-9f3a-810231cc574e name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 16:33:21 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:21.327168532Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 16:33:22 ip-10-0-136-68 systemd[1]: Started crio-conmon-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82.scope.
Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.164515797Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7485774b6731351b7b283e72efb0a0c07d69623ac09616ba89f52466b6fec053" id=1cd94c1b-9552-4dc6-8d33-630ce28b168b name=/runtime.v1.ImageService/PullImage
Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.167159635Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9d617ec9a2e82f2b2bf2dcab9695d49426db17674bf970d4b1dc146d66db863b" id=99f307d3-06bf-4fa4-b94f-ef5d77411bb2 name=/runtime.v1.ImageService/PullImage
Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.168140695Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7485774b6731351b7b283e72efb0a0c07d69623ac09616ba89f52466b6fec053" id=85928cc6-0e87-482e-90ce-c24f77503763 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.168384141Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9d617ec9a2e82f2b2bf2dcab9695d49426db17674bf970d4b1dc146d66db863b" id=b8ced1f4-c1f8-41ae-8a32-6ff3cd6225be name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.170231057Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ae0ac61388dd919e04200b7f993bdf92c2f7f26d96e217b07028dfe605f27b70,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9d617ec9a2e82f2b2bf2dcab9695d49426db17674bf970d4b1dc146d66db863b],Size_:431882490,Uid:&Int64Value{Value:1001,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=b8ced1f4-c1f8-41ae-8a32-6ff3cd6225be name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.170509847Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:3a34a2a3407d0fdec462cde934ba7e2c5ec0f13ddaf8fdf39423deb4012766d0,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7485774b6731351b7b283e72efb0a0c07d69623ac09616ba89f52466b6fec053],Size_:415774243,Uid:&Int64Value{Value:1001,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=85928cc6-0e87-482e-90ce-c24f77503763 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.171757477Z" level=info msg="Creating container: openshift-monitoring/prometheus-adapter-849c9bc779-55gw7/prometheus-adapter" id=97d928d6-05df-4699-83cc-3e2142a7cd4b name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.171875552Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.174787794Z" level=info msg="Creating container: openshift-ingress/router-default-77f788594f-j5twb/router" id=a8626a25-0f09-44bc-ab8d-7011eddb2ad5 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.174878432Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 16:33:22 ip-10-0-136-68 systemd[1]: Started libcontainer container ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82.
Feb 23 16:33:22 ip-10-0-136-68 systemd[1]: Started crio-conmon-6450b56b1d86f9c7d7d8379edcd27bea349ea883d1dec47ef862b973eee94da2.scope.
Feb 23 16:33:22 ip-10-0-136-68 systemd[1]: Started crio-conmon-aa5e7c96b987acbdee2223e102d68db343b44e03e5112322e17f7cd3f959eed4.scope.
Feb 23 16:33:22 ip-10-0-136-68 systemd[1]: Started libcontainer container 6450b56b1d86f9c7d7d8379edcd27bea349ea883d1dec47ef862b973eee94da2.
Feb 23 16:33:22 ip-10-0-136-68 systemd[1]: Started libcontainer container aa5e7c96b987acbdee2223e102d68db343b44e03e5112322e17f7cd3f959eed4.
Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.257064974Z" level=info msg="Created container ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82: openshift-monitoring/prometheus-k8s-1/prometheus" id=94f11b72-67ba-4368-9f3a-810231cc574e name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.257460931Z" level=info msg="Starting container: ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82" id=a753cb6d-df45-4fa0-929c-992bb1dad077 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.266557428Z" level=info msg="Started container" PID=6532 containerID=ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82 description=openshift-monitoring/prometheus-k8s-1/prometheus id=a753cb6d-df45-4fa0-929c-992bb1dad077 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d1ad2e51d68aca03cf74cbf08b34debc7171c5e2526858978cf073bb895b423f
Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.275553367Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:54d3aea3fb46824c7af49ff90c30c44c5409b64745f649c1e19f43c58dab6008" id=3b02314a-0233-48ce-a084-60691f177f8d name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.275778303Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ba9942c6b8556723b2ced172e5fb9c827a5aacee27aa66e7952a42d2b636240d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:54d3aea3fb46824c7af49ff90c30c44c5409b64745f649c1e19f43c58dab6008],Size_:359319908,Uid:&Int64Value{Value:1001,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=3b02314a-0233-48ce-a084-60691f177f8d name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.276355699Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:54d3aea3fb46824c7af49ff90c30c44c5409b64745f649c1e19f43c58dab6008" id=e40e0c67-7b73-41f2-968f-f6839d867dbb name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.276493532Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ba9942c6b8556723b2ced172e5fb9c827a5aacee27aa66e7952a42d2b636240d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:54d3aea3fb46824c7af49ff90c30c44c5409b64745f649c1e19f43c58dab6008],Size_:359319908,Uid:&Int64Value{Value:1001,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=e40e0c67-7b73-41f2-968f-f6839d867dbb name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.277211846Z" level=info msg="Creating container: openshift-monitoring/prometheus-k8s-1/config-reloader" id=72da135f-3bf4-4283-9a3d-75b151c7dc48 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.277295892Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.297134535Z" level=info msg="Created container aa5e7c96b987acbdee2223e102d68db343b44e03e5112322e17f7cd3f959eed4: openshift-ingress/router-default-77f788594f-j5twb/router" id=a8626a25-0f09-44bc-ab8d-7011eddb2ad5 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.297481310Z" level=info msg="Starting container: aa5e7c96b987acbdee2223e102d68db343b44e03e5112322e17f7cd3f959eed4" id=0adb6fef-fc46-41b7-a41f-4adc2f6fba56 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 16:33:22 ip-10-0-136-68 systemd[1]: Started crio-conmon-5dd7cb88e4cf279c2edd098539ee513699de5578bc2e80ac3743ecaa4e60ed98.scope.
Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.314288638Z" level=info msg="Started container" PID=6569 containerID=aa5e7c96b987acbdee2223e102d68db343b44e03e5112322e17f7cd3f959eed4 description=openshift-ingress/router-default-77f788594f-j5twb/router id=0adb6fef-fc46-41b7-a41f-4adc2f6fba56 name=/runtime.v1.RuntimeService/StartContainer sandboxID=259ac42a2037be7ffdb933e1dab0fbcfd7804c32d742695f5c6e1e4f9e2767c0
Feb 23 16:33:22 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:22.325832 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-1_457a2ca9-5414-414b-8731-42d2430a3275/alertmanager/0.log"
Feb 23 16:33:22 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:22.325913 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:457a2ca9-5414-414b-8731-42d2430a3275 Type:ContainerStarted Data:f3cc58399f58d034a5ea692d08b921fd2849975f11374292d872563df6c87173}
Feb 23 16:33:22 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:22.326278 2112 scope.go:115] "RemoveContainer" containerID="5326dfbd58af8cae61b1d5891c2250d8848c9526679bdc562b03ccc7a2ef3f98"
Feb 23 16:33:22 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:22.327882 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-1" event=&{ID:de160b09-a82e-4c1c-855b-4dfb3b3cbd7c Type:ContainerStarted Data:ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82}
Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.328620643Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fb9be9744b6ce1f692325241b066184282ea7ee06dbd191a0f19b55b0ad8736" id=78e481c9-31a8-41ce-a098-a0f4799d1ec6 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.330272518Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ee2df3b6c5f959807b3fab8b0b30c981e2f43ef273dfbbbf5bb9a469aeeb3d8d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fb9be9744b6ce1f692325241b066184282ea7ee06dbd191a0f19b55b0ad8736],Size_:367066685,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=78e481c9-31a8-41ce-a098-a0f4799d1ec6 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.331516661Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fb9be9744b6ce1f692325241b066184282ea7ee06dbd191a0f19b55b0ad8736" id=e7a8b963-082e-4262-95bc-d445586ff379 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.331818739Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ee2df3b6c5f959807b3fab8b0b30c981e2f43ef273dfbbbf5bb9a469aeeb3d8d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fb9be9744b6ce1f692325241b066184282ea7ee06dbd191a0f19b55b0ad8736],Size_:367066685,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=e7a8b963-082e-4262-95bc-d445586ff379 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.332355797Z" level=info msg="Created container 6450b56b1d86f9c7d7d8379edcd27bea349ea883d1dec47ef862b973eee94da2: openshift-monitoring/prometheus-adapter-849c9bc779-55gw7/prometheus-adapter" id=97d928d6-05df-4699-83cc-3e2142a7cd4b name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.332802066Z" level=info msg="Starting container: 6450b56b1d86f9c7d7d8379edcd27bea349ea883d1dec47ef862b973eee94da2" id=c7d882dd-f549-4fc3-a78b-4825502f6d36 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.333171657Z" level=info msg="Creating container: openshift-monitoring/alertmanager-main-1/alertmanager" id=7ddff90a-497f-40a5-ae10-c5ebcc567340 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.333266068Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 16:33:22 ip-10-0-136-68 systemd[1]: Started libcontainer container 5dd7cb88e4cf279c2edd098539ee513699de5578bc2e80ac3743ecaa4e60ed98.
Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.347979073Z" level=info msg="Started container" PID=6560 containerID=6450b56b1d86f9c7d7d8379edcd27bea349ea883d1dec47ef862b973eee94da2 description=openshift-monitoring/prometheus-adapter-849c9bc779-55gw7/prometheus-adapter id=c7d882dd-f549-4fc3-a78b-4825502f6d36 name=/runtime.v1.RuntimeService/StartContainer sandboxID=94ff293ae607edbdb70c5e4b25cc7cf3f7e6fe35e3aa183bda9cd3e8a5654a24
Feb 23 16:33:22 ip-10-0-136-68 systemd[1]: Started crio-conmon-3e983220a0cecf9db7762e49baed79ac51235ba05947896fdf9e817c5a079955.scope.
Feb 23 16:33:22 ip-10-0-136-68 systemd[1]: Started libcontainer container 3e983220a0cecf9db7762e49baed79ac51235ba05947896fdf9e817c5a079955.
Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.464067898Z" level=info msg="Created container 5dd7cb88e4cf279c2edd098539ee513699de5578bc2e80ac3743ecaa4e60ed98: openshift-monitoring/prometheus-k8s-1/config-reloader" id=72da135f-3bf4-4283-9a3d-75b151c7dc48 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.464527910Z" level=info msg="Starting container: 5dd7cb88e4cf279c2edd098539ee513699de5578bc2e80ac3743ecaa4e60ed98" id=683a86ac-333a-4bce-a498-a6985de5f382 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.476052157Z" level=info msg="Started container" PID=6651 containerID=5dd7cb88e4cf279c2edd098539ee513699de5578bc2e80ac3743ecaa4e60ed98 description=openshift-monitoring/prometheus-k8s-1/config-reloader id=683a86ac-333a-4bce-a498-a6985de5f382 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d1ad2e51d68aca03cf74cbf08b34debc7171c5e2526858978cf073bb895b423f
Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.491757982Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:927fcfe37d62457bfd0932c16a51342e303d9f92d19244b389bb08c30b1b5bec" id=da683e3a-f06b-446a-b862-33e154657a3b name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.491990433Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:7e0949c572f36eadc2058a4a75e85ef222e1a401c4ecc7fd34e193cad494cab5,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:927fcfe37d62457bfd0932c16a51342e303d9f92d19244b389bb08c30b1b5bec],Size_:426731013,Uid:nil,Username:nobody,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=da683e3a-f06b-446a-b862-33e154657a3b name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.493383451Z" level=info msg="Checking image status:
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:927fcfe37d62457bfd0932c16a51342e303d9f92d19244b389bb08c30b1b5bec" id=49a71ac0-c77e-4067-9e2f-d21cb8ad84bf name=/runtime.v1.ImageService/ImageStatus Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.493562817Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:7e0949c572f36eadc2058a4a75e85ef222e1a401c4ecc7fd34e193cad494cab5,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:927fcfe37d62457bfd0932c16a51342e303d9f92d19244b389bb08c30b1b5bec],Size_:426731013,Uid:nil,Username:nobody,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=49a71ac0-c77e-4067-9e2f-d21cb8ad84bf name=/runtime.v1.ImageService/ImageStatus Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.494477125Z" level=info msg="Creating container: openshift-monitoring/prometheus-k8s-1/thanos-sidecar" id=152fcc03-9d42-4f05-a9db-9ae1cda871fa name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.494578149Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 16:33:22 ip-10-0-136-68 systemd[1]: Started crio-conmon-6d2828a4d4dddccdf8c1cf9d27dbcc6e2be54b9071831894f861c06273b57998.scope. 
Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.532356350Z" level=info msg="Created container 3e983220a0cecf9db7762e49baed79ac51235ba05947896fdf9e817c5a079955: openshift-monitoring/alertmanager-main-1/alertmanager" id=7ddff90a-497f-40a5-ae10-c5ebcc567340 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.533134099Z" level=info msg="Starting container: 3e983220a0cecf9db7762e49baed79ac51235ba05947896fdf9e817c5a079955" id=701bd471-4da0-4aef-99bb-fbba3266dc9b name=/runtime.v1.RuntimeService/StartContainer Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.544504215Z" level=info msg="Started container" PID=6683 containerID=3e983220a0cecf9db7762e49baed79ac51235ba05947896fdf9e817c5a079955 description=openshift-monitoring/alertmanager-main-1/alertmanager id=701bd471-4da0-4aef-99bb-fbba3266dc9b name=/runtime.v1.RuntimeService/StartContainer sandboxID=2dd14dc79891cf23e7c39348cd7da2169eeea951880734c51df6372fcee24608 Feb 23 16:33:22 ip-10-0-136-68 systemd[1]: Started libcontainer container 6d2828a4d4dddccdf8c1cf9d27dbcc6e2be54b9071831894f861c06273b57998. 
Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.673780228Z" level=info msg="Created container 6d2828a4d4dddccdf8c1cf9d27dbcc6e2be54b9071831894f861c06273b57998: openshift-monitoring/prometheus-k8s-1/thanos-sidecar" id=152fcc03-9d42-4f05-a9db-9ae1cda871fa name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.674327101Z" level=info msg="Starting container: 6d2828a4d4dddccdf8c1cf9d27dbcc6e2be54b9071831894f861c06273b57998" id=368ca2b6-2393-4db9-9872-af93bf178b05 name=/runtime.v1.RuntimeService/StartContainer Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.691438628Z" level=info msg="Started container" PID=6765 containerID=6d2828a4d4dddccdf8c1cf9d27dbcc6e2be54b9071831894f861c06273b57998 description=openshift-monitoring/prometheus-k8s-1/thanos-sidecar id=368ca2b6-2393-4db9-9872-af93bf178b05 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d1ad2e51d68aca03cf74cbf08b34debc7171c5e2526858978cf073bb895b423f Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.717322514Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce1de860b639d4831bb74b0271aed6882b45febb37f60bcf8a31157b87ce0495" id=84e7105f-0a67-4207-97d7-d6d57e7c9bd7 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.717523031Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:d62409a7cb62709d4c960d445d571fc81522092e7114db20e1ad84bf57c96f31,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce1de860b639d4831bb74b0271aed6882b45febb37f60bcf8a31157b87ce0495],Size_:353087996,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=84e7105f-0a67-4207-97d7-d6d57e7c9bd7 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.718450158Z" level=info msg="Checking image status: 
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce1de860b639d4831bb74b0271aed6882b45febb37f60bcf8a31157b87ce0495" id=1829fc2e-06e2-4ed4-ae17-0280f55cbccf name=/runtime.v1.ImageService/ImageStatus Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.718609531Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:d62409a7cb62709d4c960d445d571fc81522092e7114db20e1ad84bf57c96f31,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce1de860b639d4831bb74b0271aed6882b45febb37f60bcf8a31157b87ce0495],Size_:353087996,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=1829fc2e-06e2-4ed4-ae17-0280f55cbccf name=/runtime.v1.ImageService/ImageStatus Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.719640624Z" level=info msg="Creating container: openshift-monitoring/prometheus-k8s-1/prometheus-proxy" id=ee53bc26-5e13-484f-a938-db1782579307 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.719807769Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 16:33:22 ip-10-0-136-68 systemd[1]: Started crio-conmon-ec1befd05c73d0a470f84208b4404cce7ffdc0e514af452f83f997d162763de2.scope. Feb 23 16:33:22 ip-10-0-136-68 systemd[1]: Started libcontainer container ec1befd05c73d0a470f84208b4404cce7ffdc0e514af452f83f997d162763de2. 
Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.879770453Z" level=info msg="Created container ec1befd05c73d0a470f84208b4404cce7ffdc0e514af452f83f997d162763de2: openshift-monitoring/prometheus-k8s-1/prometheus-proxy" id=ee53bc26-5e13-484f-a938-db1782579307 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.880796148Z" level=info msg="Starting container: ec1befd05c73d0a470f84208b4404cce7ffdc0e514af452f83f997d162763de2" id=06226e99-e923-4201-93a3-51491a3ff625 name=/runtime.v1.RuntimeService/StartContainer Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.896505536Z" level=info msg="Started container" PID=6817 containerID=ec1befd05c73d0a470f84208b4404cce7ffdc0e514af452f83f997d162763de2 description=openshift-monitoring/prometheus-k8s-1/prometheus-proxy id=06226e99-e923-4201-93a3-51491a3ff625 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d1ad2e51d68aca03cf74cbf08b34debc7171c5e2526858978cf073bb895b423f Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.909972955Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=3bf8c9bd-a3ef-4e37-8d4a-c7a54471d859 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.910179400Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=3bf8c9bd-a3ef-4e37-8d4a-c7a54471d859 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.910944533Z" level=info 
msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=e9624f60-20ce-4e9c-9383-11fcd9418070 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.911166843Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=e9624f60-20ce-4e9c-9383-11fcd9418070 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.912104872Z" level=info msg="Creating container: openshift-monitoring/prometheus-k8s-1/kube-rbac-proxy" id=93026276-ffa0-41a5-8578-cec1d84ac4cc name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:33:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:22.912220668Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 16:33:22 ip-10-0-136-68 systemd[1]: Started crio-conmon-542b507370698f9fd5344b11dd7d8d66b76926529af1de20af0e11394a0afb19.scope. Feb 23 16:33:22 ip-10-0-136-68 systemd[1]: Started libcontainer container 542b507370698f9fd5344b11dd7d8d66b76926529af1de20af0e11394a0afb19. 
Feb 23 16:33:23 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:23.044493734Z" level=info msg="Created container 542b507370698f9fd5344b11dd7d8d66b76926529af1de20af0e11394a0afb19: openshift-monitoring/prometheus-k8s-1/kube-rbac-proxy" id=93026276-ffa0-41a5-8578-cec1d84ac4cc name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:33:23 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:23.045795085Z" level=info msg="Starting container: 542b507370698f9fd5344b11dd7d8d66b76926529af1de20af0e11394a0afb19" id=dfd2ccc4-636a-4a9a-8fd6-4305b9660b35 name=/runtime.v1.RuntimeService/StartContainer Feb 23 16:33:23 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:23.055236007Z" level=info msg="Started container" PID=6863 containerID=542b507370698f9fd5344b11dd7d8d66b76926529af1de20af0e11394a0afb19 description=openshift-monitoring/prometheus-k8s-1/kube-rbac-proxy id=dfd2ccc4-636a-4a9a-8fd6-4305b9660b35 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d1ad2e51d68aca03cf74cbf08b34debc7171c5e2526858978cf073bb895b423f Feb 23 16:33:23 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:23.066002736Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=bbf2b356-9c62-4de2-a9c3-755f73d97624 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:33:23 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:23.066190889Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=bbf2b356-9c62-4de2-a9c3-755f73d97624 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:33:23 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:23.066932037Z" level=info msg="Checking 
image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=56c22318-e860-477b-838b-d493d511500b name=/runtime.v1.ImageService/ImageStatus Feb 23 16:33:23 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:23.067099668Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=56c22318-e860-477b-838b-d493d511500b name=/runtime.v1.ImageService/ImageStatus Feb 23 16:33:23 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:23.068057409Z" level=info msg="Creating container: openshift-monitoring/prometheus-k8s-1/kube-rbac-proxy-thanos" id=aafba2b1-5a67-442a-a636-cd9c911b0080 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:33:23 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:23.068166264Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 16:33:23 ip-10-0-136-68 systemd[1]: Started crio-conmon-b52f1634a1d1b8fb5a1f361706dc144fde2d738d90f34b380a6acd872d27b915.scope. Feb 23 16:33:23 ip-10-0-136-68 systemd[1]: Started libcontainer container b52f1634a1d1b8fb5a1f361706dc144fde2d738d90f34b380a6acd872d27b915. 
Feb 23 16:33:23 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:23.219896902Z" level=info msg="Created container b52f1634a1d1b8fb5a1f361706dc144fde2d738d90f34b380a6acd872d27b915: openshift-monitoring/prometheus-k8s-1/kube-rbac-proxy-thanos" id=aafba2b1-5a67-442a-a636-cd9c911b0080 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:33:23 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:23.220365834Z" level=info msg="Starting container: b52f1634a1d1b8fb5a1f361706dc144fde2d738d90f34b380a6acd872d27b915" id=b2eb553b-490b-4181-8a90-b77da862bd90 name=/runtime.v1.RuntimeService/StartContainer Feb 23 16:33:23 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:23.227116337Z" level=info msg="Started container" PID=6908 containerID=b52f1634a1d1b8fb5a1f361706dc144fde2d738d90f34b380a6acd872d27b915 description=openshift-monitoring/prometheus-k8s-1/kube-rbac-proxy-thanos id=b2eb553b-490b-4181-8a90-b77da862bd90 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d1ad2e51d68aca03cf74cbf08b34debc7171c5e2526858978cf073bb895b423f Feb 23 16:33:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:23.330468 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-adapter-849c9bc779-55gw7" event=&{ID:bac56f54-5b00-421f-b735-a8a998208173 Type:ContainerStarted Data:6450b56b1d86f9c7d7d8379edcd27bea349ea883d1dec47ef862b973eee94da2} Feb 23 16:33:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:23.330594 2112 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-adapter-849c9bc779-55gw7" Feb 23 16:33:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:23.331529 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-77f788594f-j5twb" event=&{ID:e7ec9547-ee4c-4966-997f-719d78dcc31b Type:ContainerStarted Data:aa5e7c96b987acbdee2223e102d68db343b44e03e5112322e17f7cd3f959eed4} Feb 23 16:33:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:23.333600 2112 logs.go:323] 
"Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-1_457a2ca9-5414-414b-8731-42d2430a3275/alertmanager/0.log" Feb 23 16:33:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:23.333684 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:457a2ca9-5414-414b-8731-42d2430a3275 Type:ContainerStarted Data:3e983220a0cecf9db7762e49baed79ac51235ba05947896fdf9e817c5a079955} Feb 23 16:33:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:23.335823 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-1" event=&{ID:de160b09-a82e-4c1c-855b-4dfb3b3cbd7c Type:ContainerStarted Data:b52f1634a1d1b8fb5a1f361706dc144fde2d738d90f34b380a6acd872d27b915} Feb 23 16:33:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:23.335849 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-1" event=&{ID:de160b09-a82e-4c1c-855b-4dfb3b3cbd7c Type:ContainerStarted Data:542b507370698f9fd5344b11dd7d8d66b76926529af1de20af0e11394a0afb19} Feb 23 16:33:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:23.335863 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-1" event=&{ID:de160b09-a82e-4c1c-855b-4dfb3b3cbd7c Type:ContainerStarted Data:ec1befd05c73d0a470f84208b4404cce7ffdc0e514af452f83f997d162763de2} Feb 23 16:33:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:23.335876 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-1" event=&{ID:de160b09-a82e-4c1c-855b-4dfb3b3cbd7c Type:ContainerStarted Data:6d2828a4d4dddccdf8c1cf9d27dbcc6e2be54b9071831894f861c06273b57998} Feb 23 16:33:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:23.335889 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-1" event=&{ID:de160b09-a82e-4c1c-855b-4dfb3b3cbd7c Type:ContainerStarted 
Data:5dd7cb88e4cf279c2edd098539ee513699de5578bc2e80ac3743ecaa4e60ed98} Feb 23 16:33:24 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00084|connmgr|INFO|br-ex<->unix#22: 2 flow_mods in the last 0 s (2 adds) Feb 23 16:33:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:24.274073 2112 kubelet.go:2229] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-77f788594f-j5twb" Feb 23 16:33:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:24.276612 2112 kubelet.go:2229] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-77f788594f-j5twb" Feb 23 16:33:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:24.337445 2112 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-77f788594f-j5twb" Feb 23 16:33:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:24.340118 2112 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-77f788594f-j5twb" Feb 23 16:33:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:24.796455 2112 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:27.303965 2112 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-8654d9f96d-8g56r" Feb 23 16:33:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:27.612363 2112 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-b2mxx" Feb 23 16:33:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:29.718599 2112 kubelet.go:2229] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/alertmanager-main-1" Feb 23 16:33:29 ip-10-0-136-68 systemd[1]: run-runc-97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923-runc.T5hfsV.mount: Succeeded. 
Feb 23 16:33:34 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:34.797000 2112 kubelet.go:2229] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:34 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:34.834398 2112 kubelet.go:2229] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:35 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:35.404315 2112 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-1" Feb 23 16:33:37 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:37.308120 2112 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5f79c9c848-8klpv" Feb 23 16:33:39 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00085|connmgr|INFO|br-ex<->unix#26: 2 flow_mods in the last 0 s (2 adds) Feb 23 16:33:39 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:39.745559779Z" level=warning msg="Found defunct process with PID 4066 (pool)" Feb 23 16:33:39 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:39.746423097Z" level=warning msg="Found defunct process with PID 6726 (haproxy)" Feb 23 16:33:39 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:39.746562348Z" level=warning msg="Found defunct process with PID 7009 (haproxy)" Feb 23 16:33:39 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.O8Sy5P.mount: Succeeded. Feb 23 16:33:44 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.voDMWA.mount: Succeeded. Feb 23 16:33:44 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.ALHdRU.mount: Succeeded. 
Feb 23 16:33:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:33:47.734589988Z" level=warning msg="Found defunct process with PID 4066 (pool)" Feb 23 16:33:49 ip-10-0-136-68 systemd[1]: run-runc-97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923-runc.ACbLN2.mount: Succeeded. Feb 23 16:33:49 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:49.757950 2112 kubelet.go:2229] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/alertmanager-main-1" Feb 23 16:33:50 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.XJwG7e.mount: Succeeded. Feb 23 16:33:52 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:33:52.274619 2112 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-adapter-849c9bc779-55gw7" Feb 23 16:33:54 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00086|connmgr|INFO|br-ex<->unix#33: 2 flow_mods in the last 0 s (2 adds) Feb 23 16:33:55 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.8ZvFyK.mount: Succeeded. Feb 23 16:33:58 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00087|connmgr|INFO|br-int<->unix#2: 554 flow_mods in the 45 s starting 51 s ago (390 adds, 164 deletes) Feb 23 16:33:59 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.EWPFfY.mount: Succeeded. Feb 23 16:33:59 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.fKDDQU.mount: Succeeded. Feb 23 16:34:01 ip-10-0-136-68 rpm-ostree[2896]: In idle state; will auto-exit in 64 seconds Feb 23 16:34:01 ip-10-0-136-68 systemd[1]: rpm-ostreed.service: Succeeded. 
Feb 23 16:34:01 ip-10-0-136-68 systemd[1]: rpm-ostreed.service: Consumed 1.834s CPU time Feb 23 16:34:09 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00088|connmgr|INFO|br-ex<->unix#39: 2 flow_mods in the last 0 s (2 adds) Feb 23 16:34:09 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.U5VcAD.mount: Succeeded. Feb 23 16:34:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:34:17.736212006Z" level=warning msg="Found defunct process with PID 7151 (haproxy)" Feb 23 16:34:19 ip-10-0-136-68 systemd[1]: run-runc-97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923-runc.IMkFBc.mount: Succeeded. Feb 23 16:34:20 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.563oHJ.mount: Succeeded. Feb 23 16:34:24 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00089|connmgr|INFO|br-ex<->unix#46: 2 flow_mods in the last 0 s (2 adds) Feb 23 16:34:24 ip-10-0-136-68 systemd[1]: run-runc-97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923-runc.BtY6GC.mount: Succeeded. Feb 23 16:34:39 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00090|connmgr|INFO|br-ex<->unix#52: 2 flow_mods in the last 0 s (2 adds) Feb 23 16:34:45 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.m4l16r.mount: Succeeded. Feb 23 16:34:54 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00091|connmgr|INFO|br-ex<->unix#61: 2 flow_mods in the last 0 s (2 adds) Feb 23 16:34:54 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.kihxp4.mount: Succeeded. Feb 23 16:34:55 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.tOfY5w.mount: Succeeded. 
Feb 23 16:34:58 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00092|connmgr|INFO|br-int<->unix#2: 3 flow_mods in the 13 s starting 36 s ago (1 adds, 2 deletes) Feb 23 16:34:59 ip-10-0-136-68 systemd[1]: run-runc-97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923-runc.Ys0Nfx.mount: Succeeded. Feb 23 16:34:59 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.DTGsIQ.mount: Succeeded. Feb 23 16:35:04 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.6oeaKJ.mount: Succeeded. Feb 23 16:35:04 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.91gwMU.mount: Succeeded. Feb 23 16:35:09 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00093|connmgr|INFO|br-ex<->unix#65: 2 flow_mods in the last 0 s (2 adds) Feb 23 16:35:09 ip-10-0-136-68 systemd[1]: run-runc-97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923-runc.GIEWUT.mount: Succeeded. Feb 23 16:35:09 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.76iOpD.mount: Succeeded. Feb 23 16:35:09 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.v8efFj.mount: Succeeded. Feb 23 16:35:14 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.IRrAZr.mount: Succeeded. Feb 23 16:35:14 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.4IseiX.mount: Succeeded. Feb 23 16:35:20 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.SpUVoM.mount: Succeeded. 
Feb 23 16:35:24 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00094|connmgr|INFO|br-ex<->unix#73: 2 flow_mods in the last 0 s (2 adds) Feb 23 16:35:24 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.ObHEGW.mount: Succeeded. Feb 23 16:35:25 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.MjcVVj.mount: Succeeded. Feb 23 16:35:39 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00095|connmgr|INFO|br-ex<->unix#78: 2 flow_mods in the last 0 s (2 adds) Feb 23 16:35:39 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.dvfE7I.mount: Succeeded. Feb 23 16:35:40 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.bORGrt.mount: Succeeded. Feb 23 16:35:44 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.dqpJ1O.mount: Succeeded. Feb 23 16:35:44 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.SOM4wU.mount: Succeeded. Feb 23 16:35:49 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.zd1bBi.mount: Succeeded. Feb 23 16:35:54 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00096|connmgr|INFO|br-ex<->unix#86: 2 flow_mods in the last 0 s (2 adds) Feb 23 16:35:55 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.kMJHjm.mount: Succeeded. Feb 23 16:35:59 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.yS9ikx.mount: Succeeded. Feb 23 16:36:09 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00097|connmgr|INFO|br-ex<->unix#91: 2 flow_mods in the last 0 s (2 adds) Feb 23 16:36:09 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.JB3fdC.mount: Succeeded. 
Feb 23 16:36:19 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.ilt964.mount: Succeeded. Feb 23 16:36:24 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00098|connmgr|INFO|br-ex<->unix#100: 2 flow_mods in the last 0 s (2 adds) Feb 23 16:36:24 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.Z95t3f.mount: Succeeded. Feb 23 16:36:39 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00099|connmgr|INFO|br-ex<->unix#104: 2 flow_mods in the last 0 s (2 adds) Feb 23 16:36:39 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00100|connmgr|INFO|br-int<->unix#2: 31 flow_mods in the 9 s starting 10 s ago (14 adds, 17 deletes) Feb 23 16:36:39 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:36:39.747174318Z" level=warning msg="Found defunct process with PID 7492 (haproxy)" Feb 23 16:36:39 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.04wbbi.mount: Succeeded. Feb 23 16:36:49 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.c3IG5V.mount: Succeeded. Feb 23 16:36:49 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.vOsdvK.mount: Succeeded. Feb 23 16:36:54 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00101|connmgr|INFO|br-ex<->unix#112: 2 flow_mods in the last 0 s (2 adds) Feb 23 16:37:04 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.DrgtHt.mount: Succeeded. Feb 23 16:37:04 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.4BQ21s.mount: Succeeded. Feb 23 16:37:09 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00102|connmgr|INFO|br-ex<->unix#117: 2 flow_mods in the last 0 s (2 adds) Feb 23 16:37:09 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.JskPfS.mount: Succeeded. 
Feb 23 16:37:10 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.pmY3Qe.mount: Succeeded. Feb 23 16:37:14 ip-10-0-136-68 systemd[1]: run-runc-97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923-runc.RDhkKX.mount: Succeeded. Feb 23 16:37:14 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.ENiivA.mount: Succeeded. Feb 23 16:37:19 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.uB2Rtl.mount: Succeeded. Feb 23 16:37:24 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00103|connmgr|INFO|br-ex<->unix#124: 2 flow_mods in the last 0 s (2 adds) Feb 23 16:37:24 ip-10-0-136-68 systemd[1]: run-runc-97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923-runc.ezyQ72.mount: Succeeded. Feb 23 16:37:24 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.6RIucr.mount: Succeeded. Feb 23 16:37:39 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00104|connmgr|INFO|br-ex<->unix#130: 2 flow_mods in the last 0 s (2 adds) Feb 23 16:37:39 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00105|connmgr|INFO|br-int<->unix#2: 240 flow_mods in the 48 s starting 53 s ago (115 adds, 125 deletes) Feb 23 16:37:39 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.oedPPT.mount: Succeeded. 
Feb 23 16:37:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:37:45.883605 2112 kubelet.go:1343] "Image garbage collection succeeded" Feb 23 16:37:45 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:37:45.990819663Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be64c744b92d1b1df463d84d8277c38069f3ce4e8e95ce84d4a6ffac6dc53710" id=0ee9ff4c-5b05-4d3c-9043-b99a5a8d1e73 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:37:45 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:37:45.990977643Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:884319081b9650108dd2a638921522ef3df145e924d566325c975ca28709af4c,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be64c744b92d1b1df463d84d8277c38069f3ce4e8e95ce84d4a6ffac6dc53710],Size_:350038335,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=0ee9ff4c-5b05-4d3c-9043-b99a5a8d1e73 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:37:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:37:47.737983183Z" level=warning msg="Found defunct process with PID 12170 (haproxy)" Feb 23 16:37:49 ip-10-0-136-68 systemd[1]: run-runc-97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923-runc.cOTMpI.mount: Succeeded. Feb 23 16:37:54 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00106|connmgr|INFO|br-ex<->unix#139: 2 flow_mods in the last 0 s (2 adds) Feb 23 16:38:09 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00107|connmgr|INFO|br-ex<->unix#144: 2 flow_mods in the last 0 s (2 adds) Feb 23 16:38:09 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:38:09.746986347Z" level=warning msg="Found defunct process with PID 11872 (haproxy)" Feb 23 16:38:09 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.zrK40T.mount: Succeeded. 
Feb 23 16:38:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:38:17.735832075Z" level=warning msg="Found defunct process with PID 11567 (haproxy)" Feb 23 16:38:24 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00108|connmgr|INFO|br-ex<->unix#152: 2 flow_mods in the last 0 s (2 adds) Feb 23 16:38:39 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00109|connmgr|INFO|br-ex<->unix#157: 2 flow_mods in the last 0 s (2 adds) Feb 23 16:38:39 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00110|connmgr|INFO|br-int<->unix#2: 33 flow_mods in the 43 s starting 58 s ago (24 adds, 9 deletes) Feb 23 16:38:54 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00111|connmgr|INFO|br-ex<->unix#165: 2 flow_mods in the last 0 s (2 adds) Feb 23 16:38:59 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.ZKIVQ3.mount: Succeeded. Feb 23 16:39:04 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.z7ftmd.mount: Succeeded. Feb 23 16:39:04 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.3JV4qM.mount: Succeeded. Feb 23 16:39:09 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00112|connmgr|INFO|br-ex<->unix#170: 2 flow_mods in the last 0 s (2 adds) Feb 23 16:39:09 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.fnLWsa.mount: Succeeded. Feb 23 16:39:19 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.0I7pSi.mount: Succeeded. Feb 23 16:39:24 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00113|connmgr|INFO|br-ex<->unix#177: 2 flow_mods in the last 0 s (2 adds) Feb 23 16:39:24 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.uM2XBZ.mount: Succeeded. Feb 23 16:39:25 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.qX1MPz.mount: Succeeded. 
Feb 23 16:39:34 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.xiLvBE.mount: Succeeded. Feb 23 16:39:39 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00114|connmgr|INFO|br-ex<->unix#183: 2 flow_mods in the last 0 s (2 adds) Feb 23 16:39:44 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.JEpAw9.mount: Succeeded. Feb 23 16:39:54 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00115|connmgr|INFO|br-ex<->unix#190: 2 flow_mods in the last 0 s (2 adds) Feb 23 16:39:59 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.pV0RBT.mount: Succeeded. Feb 23 16:40:04 ip-10-0-136-68 systemd[1]: run-runc-97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923-runc.WYunr0.mount: Succeeded. Feb 23 16:40:04 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.mzvxJM.mount: Succeeded. Feb 23 16:40:05 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.hoUi4z.mount: Succeeded. Feb 23 16:40:09 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00116|connmgr|INFO|br-ex<->unix#196: 2 flow_mods in the last 0 s (2 adds) Feb 23 16:40:09 ip-10-0-136-68 systemd[1]: run-runc-97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923-runc.qORzem.mount: Succeeded. Feb 23 16:40:09 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.2P6hi0.mount: Succeeded. Feb 23 16:40:14 ip-10-0-136-68 systemd[1]: run-runc-97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923-runc.8MfxhE.mount: Succeeded. Feb 23 16:40:19 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.B2A24y.mount: Succeeded. 
Feb 23 16:40:24 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00117|connmgr|INFO|br-ex<->unix#204: 2 flow_mods in the last 0 s (2 adds) Feb 23 16:40:24 ip-10-0-136-68 systemd[1]: run-runc-97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923-runc.i979co.mount: Succeeded. Feb 23 16:40:34 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00118|connmgr|INFO|br-int<->unix#2: 26 flow_mods 10 s ago (11 adds, 15 deletes) Feb 23 16:40:34 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.4rvKkc.mount: Succeeded. Feb 23 16:40:39 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00119|connmgr|INFO|br-ex<->unix#209: 2 flow_mods in the last 0 s (2 adds) Feb 23 16:40:39 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.rUqH1D.mount: Succeeded. Feb 23 16:40:40 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.2EBpSF.mount: Succeeded. Feb 23 16:40:44 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.lTysci.mount: Succeeded. Feb 23 16:40:54 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00120|connmgr|INFO|br-ex<->unix#215: 2 flow_mods in the last 0 s (2 adds) Feb 23 16:40:55 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.vun4DF.mount: Succeeded. Feb 23 16:40:59 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.ihuRwk.mount: Succeeded. Feb 23 16:41:09 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00121|connmgr|INFO|br-ex<->unix#222: 2 flow_mods in the last 0 s (2 adds) Feb 23 16:41:10 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.YKD0jH.mount: Succeeded. Feb 23 16:41:19 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.vifdm3.mount: Succeeded. 
Feb 23 16:41:24 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00122|connmgr|INFO|br-ex<->unix#230: 2 flow_mods in the last 0 s (2 adds) Feb 23 16:41:29 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.Tex6lU.mount: Succeeded. Feb 23 16:41:29 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.9Km08m.mount: Succeeded. Feb 23 16:41:34 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00123|connmgr|INFO|br-int<->unix#2: 27 flow_mods in the 46 s starting 56 s ago (16 adds, 11 deletes) Feb 23 16:41:34 ip-10-0-136-68 systemd[1]: run-runc-97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923-runc.yL7x26.mount: Succeeded. Feb 23 16:41:34 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.LK268j.mount: Succeeded. Feb 23 16:41:39 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00124|connmgr|INFO|br-ex<->unix#235: 2 flow_mods in the last 0 s (2 adds) Feb 23 16:41:40 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.WbpI2r.mount: Succeeded. Feb 23 16:41:54 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00125|connmgr|INFO|br-ex<->unix#243: 2 flow_mods in the last 0 s (2 adds) Feb 23 16:42:04 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.p7yZ3i.mount: Succeeded. Feb 23 16:42:09 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00126|connmgr|INFO|br-ex<->unix#248: 2 flow_mods in the last 0 s (2 adds) Feb 23 16:42:14 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.cHKEj2.mount: Succeeded. Feb 23 16:42:14 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.FItNA3.mount: Succeeded. 
Feb 23 16:42:24 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00127|connmgr|INFO|br-ex<->unix#255: 2 flow_mods in the last 0 s (2 adds) Feb 23 16:42:25 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.FAxWnZ.mount: Succeeded. Feb 23 16:42:34 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00128|connmgr|INFO|br-int<->unix#2: 1 flow_mods 56 s ago (1 deletes) Feb 23 16:42:39 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00129|connmgr|INFO|br-ex<->unix#261: 2 flow_mods in the last 0 s (2 adds) Feb 23 16:42:39 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.fJREEv.mount: Succeeded. Feb 23 16:42:40 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.tAw4GI.mount: Succeeded. Feb 23 16:42:45 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:42:45.993832434Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be64c744b92d1b1df463d84d8277c38069f3ce4e8e95ce84d4a6ffac6dc53710" id=f7af7eca-7064-4e75-9559-096fb9eaa606 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:42:45 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:42:45.994041914Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:884319081b9650108dd2a638921522ef3df145e924d566325c975ca28709af4c,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be64c744b92d1b1df463d84d8277c38069f3ce4e8e95ce84d4a6ffac6dc53710],Size_:350038335,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=f7af7eca-7064-4e75-9559-096fb9eaa606 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:42:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:42:47.574826 2112 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-debug-8tq7m/ip-10-0-136-68us-west-2computeinternal-debug] Feb 23 16:42:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:42:47.574865 2112 topology_manager.go:205] "Topology Admit Handler" 
Feb 23 16:42:47 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-besteffort-pod7db4b92a_156c_43bb_8301_4987d2527f68.slice. Feb 23 16:42:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:42:47.707238 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcwd7\" (UniqueName: \"kubernetes.io/projected/7db4b92a-156c-43bb-8301-4987d2527f68-kube-api-access-rcwd7\") pod \"ip-10-0-136-68us-west-2computeinternal-debug\" (UID: \"7db4b92a-156c-43bb-8301-4987d2527f68\") " pod="openshift-debug-8tq7m/ip-10-0-136-68us-west-2computeinternal-debug" Feb 23 16:42:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:42:47.707286 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7db4b92a-156c-43bb-8301-4987d2527f68-host\") pod \"ip-10-0-136-68us-west-2computeinternal-debug\" (UID: \"7db4b92a-156c-43bb-8301-4987d2527f68\") " pod="openshift-debug-8tq7m/ip-10-0-136-68us-west-2computeinternal-debug" Feb 23 16:42:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:42:47.807691 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-rcwd7\" (UniqueName: \"kubernetes.io/projected/7db4b92a-156c-43bb-8301-4987d2527f68-kube-api-access-rcwd7\") pod \"ip-10-0-136-68us-west-2computeinternal-debug\" (UID: \"7db4b92a-156c-43bb-8301-4987d2527f68\") " pod="openshift-debug-8tq7m/ip-10-0-136-68us-west-2computeinternal-debug" Feb 23 16:42:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:42:47.807758 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7db4b92a-156c-43bb-8301-4987d2527f68-host\") pod \"ip-10-0-136-68us-west-2computeinternal-debug\" (UID: \"7db4b92a-156c-43bb-8301-4987d2527f68\") " pod="openshift-debug-8tq7m/ip-10-0-136-68us-west-2computeinternal-debug" Feb 23 16:42:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:42:47.807838 2112 
operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7db4b92a-156c-43bb-8301-4987d2527f68-host\") pod \"ip-10-0-136-68us-west-2computeinternal-debug\" (UID: \"7db4b92a-156c-43bb-8301-4987d2527f68\") " pod="openshift-debug-8tq7m/ip-10-0-136-68us-west-2computeinternal-debug" Feb 23 16:42:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:42:47.827223 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcwd7\" (UniqueName: \"kubernetes.io/projected/7db4b92a-156c-43bb-8301-4987d2527f68-kube-api-access-rcwd7\") pod \"ip-10-0-136-68us-west-2computeinternal-debug\" (UID: \"7db4b92a-156c-43bb-8301-4987d2527f68\") " pod="openshift-debug-8tq7m/ip-10-0-136-68us-west-2computeinternal-debug" Feb 23 16:42:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:42:47.889081 2112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-debug-8tq7m/ip-10-0-136-68us-west-2computeinternal-debug" Feb 23 16:42:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:42:47.889492420Z" level=info msg="Running pod sandbox: openshift-debug-8tq7m/ip-10-0-136-68us-west-2computeinternal-debug/POD" id=5b8b51dd-a909-48f2-88b5-e38d4360c716 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 16:42:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:42:47.889547511Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 16:42:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:42:47.904624668Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=5b8b51dd-a909-48f2-88b5-e38d4360c716 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 16:42:47 ip-10-0-136-68 kubenswrapper[2112]: W0223 16:42:47.909038 2112 manager.go:1174] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7db4b92a_156c_43bb_8301_4987d2527f68.slice/crio-7193567ee469bad55338ad96c8454ddd91fea579c18d7fb965295087e933ced1.scope WatchSource:0}: Error finding container 7193567ee469bad55338ad96c8454ddd91fea579c18d7fb965295087e933ced1: Status 404 returned error can't find the container with id 7193567ee469bad55338ad96c8454ddd91fea579c18d7fb965295087e933ced1 Feb 23 16:42:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:42:47.916443470Z" level=info msg="Ran pod sandbox 7193567ee469bad55338ad96c8454ddd91fea579c18d7fb965295087e933ced1 with infra container: openshift-debug-8tq7m/ip-10-0-136-68us-west-2computeinternal-debug/POD" id=5b8b51dd-a909-48f2-88b5-e38d4360c716 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 16:42:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:42:47.917144132Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:afa73a12a1ffd31f77b10a25c43a4d02b0fd62f927f6209c26983bd8aee021bf" id=3552a4ec-c3a4-4fcf-a529-bb14b309b8e7 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:42:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:42:47.917326863Z" level=info msg="Image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:afa73a12a1ffd31f77b10a25c43a4d02b0fd62f927f6209c26983bd8aee021bf not found" id=3552a4ec-c3a4-4fcf-a529-bb14b309b8e7 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:42:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:42:47.917608 2112 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 16:42:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:42:47.918088507Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:afa73a12a1ffd31f77b10a25c43a4d02b0fd62f927f6209c26983bd8aee021bf" id=97b06cbe-e4b1-4b56-94ec-e9b9c37f5414 name=/runtime.v1.ImageService/PullImage Feb 23 16:42:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:42:47.920226596Z" level=info 
msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:afa73a12a1ffd31f77b10a25c43a4d02b0fd62f927f6209c26983bd8aee021bf\"" Feb 23 16:42:48 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:42:48.465799 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-debug-8tq7m/ip-10-0-136-68us-west-2computeinternal-debug" event=&{ID:7db4b92a-156c-43bb-8301-4987d2527f68 Type:ContainerStarted Data:7193567ee469bad55338ad96c8454ddd91fea579c18d7fb965295087e933ced1} Feb 23 16:42:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:42:48.794526667Z" level=info msg="Trying to access \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:afa73a12a1ffd31f77b10a25c43a4d02b0fd62f927f6209c26983bd8aee021bf\"" Feb 23 16:42:54 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00130|connmgr|INFO|br-ex<->unix#270: 2 flow_mods in the last 0 s (2 adds) Feb 23 16:42:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:42:54.744924033Z" level=info msg="Pulled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:afa73a12a1ffd31f77b10a25c43a4d02b0fd62f927f6209c26983bd8aee021bf" id=97b06cbe-e4b1-4b56-94ec-e9b9c37f5414 name=/runtime.v1.ImageService/PullImage Feb 23 16:42:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:42:54.745601514Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:afa73a12a1ffd31f77b10a25c43a4d02b0fd62f927f6209c26983bd8aee021bf" id=d9f60055-024a-4f80-ad10-2b225bea70e8 name=/runtime.v1.ImageService/ImageStatus Feb 23 16:42:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:42:54.747488570Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:826ae22fe8cca5dd1453773809623cd7c615015fcd2ea29338edb238488930b7,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:afa73a12a1ffd31f77b10a25c43a4d02b0fd62f927f6209c26983bd8aee021bf],Size_:780572829,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=d9f60055-024a-4f80-ad10-2b225bea70e8 
name=/runtime.v1.ImageService/ImageStatus Feb 23 16:42:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:42:54.748131918Z" level=info msg="Creating container: openshift-debug-8tq7m/ip-10-0-136-68us-west-2computeinternal-debug/container-00" id=182899e0-63d0-4559-bb60-31dd5fadd2ce name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:42:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:42:54.748223817Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 16:42:54 ip-10-0-136-68 systemd[1]: Started crio-conmon-7a8df06e43bbca45023c3977cdd8e4b9cd9bb1a0081b1372336aa85558881060.scope. Feb 23 16:42:54 ip-10-0-136-68 systemd[1]: Started libcontainer container 7a8df06e43bbca45023c3977cdd8e4b9cd9bb1a0081b1372336aa85558881060. Feb 23 16:42:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:42:54.840613943Z" level=info msg="Created container 7a8df06e43bbca45023c3977cdd8e4b9cd9bb1a0081b1372336aa85558881060: openshift-debug-8tq7m/ip-10-0-136-68us-west-2computeinternal-debug/container-00" id=182899e0-63d0-4559-bb60-31dd5fadd2ce name=/runtime.v1.RuntimeService/CreateContainer Feb 23 16:42:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:42:54.840972567Z" level=info msg="Starting container: 7a8df06e43bbca45023c3977cdd8e4b9cd9bb1a0081b1372336aa85558881060" id=f40d3962-9b03-4a63-8731-b475a1d1b417 name=/runtime.v1.RuntimeService/StartContainer Feb 23 16:42:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:42:54.848704247Z" level=info msg="Started container" PID=18514 containerID=7a8df06e43bbca45023c3977cdd8e4b9cd9bb1a0081b1372336aa85558881060 description=openshift-debug-8tq7m/ip-10-0-136-68us-west-2computeinternal-debug/container-00 id=f40d3962-9b03-4a63-8731-b475a1d1b417 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7193567ee469bad55338ad96c8454ddd91fea579c18d7fb965295087e933ced1 Feb 23 16:42:54 ip-10-0-136-68 systemd[1]: Starting rpm-ostree System Management Daemon... 
Feb 23 16:42:54 ip-10-0-136-68 rpm-ostree[18577]: Reading config file '/etc/rpm-ostreed.conf' Feb 23 16:42:54 ip-10-0-136-68 rpm-ostree[18577]: In idle state; will auto-exit in 61 seconds Feb 23 16:42:54 ip-10-0-136-68 systemd[1]: Started rpm-ostree System Management Daemon. Feb 23 16:42:54 ip-10-0-136-68 rpm-ostree[18577]: client(id:cli dbus:1.244 unit:crio-7a8df06e43bbca45023c3977cdd8e4b9cd9bb1a0081b1372336aa85558881060.scope uid:0) added; new total=1 Feb 23 16:42:54 ip-10-0-136-68 rpm-ostree[18577]: client(id:cli dbus:1.244 unit:crio-7a8df06e43bbca45023c3977cdd8e4b9cd9bb1a0081b1372336aa85558881060.scope uid:0) vanished; remaining=0 Feb 23 16:42:54 ip-10-0-136-68 rpm-ostree[18577]: In idle state; will auto-exit in 60 seconds Feb 23 16:42:55 ip-10-0-136-68 systemd[1]: crio-7a8df06e43bbca45023c3977cdd8e4b9cd9bb1a0081b1372336aa85558881060.scope: Succeeded. Feb 23 16:42:55 ip-10-0-136-68 systemd[1]: crio-7a8df06e43bbca45023c3977cdd8e4b9cd9bb1a0081b1372336aa85558881060.scope: Consumed 48ms CPU time Feb 23 16:42:55 ip-10-0-136-68 systemd[1]: crio-conmon-7a8df06e43bbca45023c3977cdd8e4b9cd9bb1a0081b1372336aa85558881060.scope: Succeeded. 
Feb 23 16:42:55 ip-10-0-136-68 systemd[1]: crio-conmon-7a8df06e43bbca45023c3977cdd8e4b9cd9bb1a0081b1372336aa85558881060.scope: Consumed 22ms CPU time Feb 23 16:42:55 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:42:55.482866 2112 generic.go:296] "Generic (PLEG): container finished" podID=7db4b92a-156c-43bb-8301-4987d2527f68 containerID="7a8df06e43bbca45023c3977cdd8e4b9cd9bb1a0081b1372336aa85558881060" exitCode=0 Feb 23 16:42:55 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:42:55.482907 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-debug-8tq7m/ip-10-0-136-68us-west-2computeinternal-debug" event=&{ID:7db4b92a-156c-43bb-8301-4987d2527f68 Type:ContainerDied Data:7a8df06e43bbca45023c3977cdd8e4b9cd9bb1a0081b1372336aa85558881060} Feb 23 16:42:55 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:42:55.771447 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-debug-8tq7m/ip-10-0-136-68us-west-2computeinternal-debug] Feb 23 16:42:55 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:42:55.776492 2112 kubelet.go:2129] "SyncLoop REMOVE" source="api" pods=[openshift-debug-8tq7m/ip-10-0-136-68us-west-2computeinternal-debug] Feb 23 16:42:56 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:42:56.484467 2112 status_manager.go:652] "Status for pod is up-to-date; skipping" podUID=7db4b92a-156c-43bb-8301-4987d2527f68 Feb 23 16:42:56 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:42:56.484589872Z" level=info msg="Stopping pod sandbox: 7193567ee469bad55338ad96c8454ddd91fea579c18d7fb965295087e933ced1" id=7733468c-ba4c-4d3e-9ece-c550eaac70d8 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 16:42:56 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-2a0b18118e542ba31a0cb7914508ed869fe1f13613eb55374e9c941565e3ca38-merged.mount: Succeeded. 
Feb 23 16:42:56 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-2a0b18118e542ba31a0cb7914508ed869fe1f13613eb55374e9c941565e3ca38-merged.mount: Consumed 0 CPU time Feb 23 16:42:56 ip-10-0-136-68 systemd[1]: run-utsns-bfe99350\x2d5f62\x2d4aa1\x2d9856\x2db590be22b719.mount: Succeeded. Feb 23 16:42:56 ip-10-0-136-68 systemd[1]: run-utsns-bfe99350\x2d5f62\x2d4aa1\x2d9856\x2db590be22b719.mount: Consumed 0 CPU time Feb 23 16:42:56 ip-10-0-136-68 systemd[1]: run-ipcns-bfe99350\x2d5f62\x2d4aa1\x2d9856\x2db590be22b719.mount: Succeeded. Feb 23 16:42:56 ip-10-0-136-68 systemd[1]: run-ipcns-bfe99350\x2d5f62\x2d4aa1\x2d9856\x2db590be22b719.mount: Consumed 0 CPU time Feb 23 16:42:56 ip-10-0-136-68 systemd[1]: run-netns-bfe99350\x2d5f62\x2d4aa1\x2d9856\x2db590be22b719.mount: Succeeded. Feb 23 16:42:56 ip-10-0-136-68 systemd[1]: run-netns-bfe99350\x2d5f62\x2d4aa1\x2d9856\x2db590be22b719.mount: Consumed 0 CPU time Feb 23 16:42:56 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:42:56.540772154Z" level=info msg="Stopped pod sandbox: 7193567ee469bad55338ad96c8454ddd91fea579c18d7fb965295087e933ced1" id=7733468c-ba4c-4d3e-9ece-c550eaac70d8 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 16:42:56 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:42:56.681894 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7db4b92a-156c-43bb-8301-4987d2527f68-host\") pod \"7db4b92a-156c-43bb-8301-4987d2527f68\" (UID: \"7db4b92a-156c-43bb-8301-4987d2527f68\") " Feb 23 16:42:56 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:42:56.681936 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcwd7\" (UniqueName: \"kubernetes.io/projected/7db4b92a-156c-43bb-8301-4987d2527f68-kube-api-access-rcwd7\") pod \"7db4b92a-156c-43bb-8301-4987d2527f68\" (UID: \"7db4b92a-156c-43bb-8301-4987d2527f68\") " Feb 23 16:42:56 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:42:56.681990 2112 
operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7db4b92a-156c-43bb-8301-4987d2527f68-host" (OuterVolumeSpecName: "host") pod "7db4b92a-156c-43bb-8301-4987d2527f68" (UID: "7db4b92a-156c-43bb-8301-4987d2527f68"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 16:42:56 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:42:56.682105 2112 reconciler.go:399] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7db4b92a-156c-43bb-8301-4987d2527f68-host\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 16:42:56 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:42:56.691092 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7db4b92a-156c-43bb-8301-4987d2527f68-kube-api-access-rcwd7" (OuterVolumeSpecName: "kube-api-access-rcwd7") pod "7db4b92a-156c-43bb-8301-4987d2527f68" (UID: "7db4b92a-156c-43bb-8301-4987d2527f68"). InnerVolumeSpecName "kube-api-access-rcwd7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 16:42:56 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-7db4b92a\x2d156c\x2d43bb\x2d8301\x2d4987d2527f68-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drcwd7.mount: Succeeded. 
Feb 23 16:42:56 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-7db4b92a\x2d156c\x2d43bb\x2d8301\x2d4987d2527f68-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drcwd7.mount: Consumed 0 CPU time Feb 23 16:42:56 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:42:56.782958 2112 reconciler.go:399] "Volume detached for volume \"kube-api-access-rcwd7\" (UniqueName: \"kubernetes.io/projected/7db4b92a-156c-43bb-8301-4987d2527f68-kube-api-access-rcwd7\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 16:42:57 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:42:57.489233 2112 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="7193567ee469bad55338ad96c8454ddd91fea579c18d7fb965295087e933ced1" Feb 23 16:42:57 ip-10-0-136-68 systemd[1]: Removed slice libcontainer container kubepods-besteffort-pod7db4b92a_156c_43bb_8301_4987d2527f68.slice. Feb 23 16:42:57 ip-10-0-136-68 systemd[1]: kubepods-besteffort-pod7db4b92a_156c_43bb_8301_4987d2527f68.slice: Consumed 71ms CPU time Feb 23 16:42:57 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:42:57.498860 2112 status_manager.go:652] "Status for pod is up-to-date; skipping" podUID=7db4b92a-156c-43bb-8301-4987d2527f68 Feb 23 16:42:58 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:42:58.118358778Z" level=info msg="Stopping pod sandbox: 7193567ee469bad55338ad96c8454ddd91fea579c18d7fb965295087e933ced1" id=95f12ef9-a33c-45b2-aa01-b242c1242d59 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 16:42:58 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:42:58.118408509Z" level=info msg="Stopped pod sandbox (already stopped): 7193567ee469bad55338ad96c8454ddd91fea579c18d7fb965295087e933ced1" id=95f12ef9-a33c-45b2-aa01-b242c1242d59 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 16:42:58 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:42:58.119640 2112 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=7db4b92a-156c-43bb-8301-4987d2527f68 
path="/var/lib/kubelet/pods/7db4b92a-156c-43bb-8301-4987d2527f68/volumes" Feb 23 16:42:59 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.9LMUOb.mount: Succeeded. Feb 23 16:42:59 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.ULIqIt.mount: Succeeded. Feb 23 16:43:09 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00131|connmgr|INFO|br-ex<->unix#275: 2 flow_mods in the last 0 s (2 adds) Feb 23 16:43:14 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.Zr5CrL.mount: Succeeded. Feb 23 16:43:14 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.FTHxes.mount: Succeeded. Feb 23 16:43:24 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00132|connmgr|INFO|br-ex<->unix#282: 2 flow_mods in the last 0 s (2 adds) Feb 23 16:43:30 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.hHUDAd.mount: Succeeded. Feb 23 16:43:34 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00133|connmgr|INFO|br-int<->unix#2: 1 flow_mods 10 s ago (1 adds) Feb 23 16:43:39 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00134|connmgr|INFO|br-ex<->unix#288: 2 flow_mods in the last 0 s (2 adds) Feb 23 16:43:42 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00135|connmgr|INFO|br-ex<->unix#292: 2 flow_mods in the last 0 s (2 adds) Feb 23 16:43:42 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00136|connmgr|INFO|br-ex<->unix#295: 2 flow_mods in the last 0 s (2 adds) Feb 23 16:43:44 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.bO7XxX.mount: Succeeded. 
Feb 23 16:43:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:43:47.736173507Z" level=warning msg="Found defunct process with PID 12273 (haproxy)"
Feb 23 16:43:54 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.f9U9LA.mount: Succeeded.
Feb 23 16:43:55 ip-10-0-136-68 rpm-ostree[18577]: In idle state; will auto-exit in 60 seconds
Feb 23 16:43:55 ip-10-0-136-68 systemd[1]: rpm-ostreed.service: Succeeded.
Feb 23 16:43:55 ip-10-0-136-68 systemd[1]: rpm-ostreed.service: Consumed 88ms CPU time
Feb 23 16:43:58 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00137|connmgr|INFO|br-ex<->unix#303: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:43:59 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.RARdHK.mount: Succeeded.
Feb 23 16:44:10 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.dIw5YV.mount: Succeeded.
Feb 23 16:44:13 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00138|connmgr|INFO|br-ex<->unix#308: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:44:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00139|connmgr|INFO|br-ex<->unix#311: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:44:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00140|connmgr|INFO|br-ex<->unix#314: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:44:20 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00141|connmgr|INFO|br-ex<->unix#317: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:44:20 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00142|connmgr|INFO|br-ex<->unix#320: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:44:34 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00143|connmgr|INFO|br-int<->unix#2: 46 flow_mods in the 55 s starting 56 s ago (21 adds, 25 deletes)
Feb 23 16:44:35 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00144|connmgr|INFO|br-ex<->unix#328: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:44:50 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00145|connmgr|INFO|br-ex<->unix#333: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:44:50 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.X7bB2f.mount: Succeeded.
Feb 23 16:45:05 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00146|connmgr|INFO|br-ex<->unix#341: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:45:10 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.oyJGDA.mount: Succeeded.
Feb 23 16:45:20 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00147|connmgr|INFO|br-ex<->unix#346: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:45:34 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00148|connmgr|INFO|br-int<->unix#2: 24 flow_mods in the last 57 s (12 adds, 12 deletes)
Feb 23 16:45:35 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00149|connmgr|INFO|br-ex<->unix#354: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:45:50 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00150|connmgr|INFO|br-ex<->unix#359: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:46:05 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00151|connmgr|INFO|br-ex<->unix#367: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:46:20 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00152|connmgr|INFO|br-ex<->unix#372: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:46:34 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00153|connmgr|INFO|br-int<->unix#2: 27 flow_mods in the 47 s starting 54 s ago (14 adds, 13 deletes)
Feb 23 16:46:35 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00154|connmgr|INFO|br-ex<->unix#380: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:46:50 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00155|connmgr|INFO|br-ex<->unix#385: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:46:55 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.YQzKGz.mount: Succeeded.
Feb 23 16:47:05 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00156|connmgr|INFO|br-ex<->unix#393: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:47:20 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00157|connmgr|INFO|br-ex<->unix#398: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:47:25 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.f4yh6M.mount: Succeeded.
Feb 23 16:47:34 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00158|connmgr|INFO|br-int<->unix#2: 8 flow_mods in the 35 s starting 54 s ago (3 adds, 5 deletes)
Feb 23 16:47:34 ip-10-0-136-68 systemd[1]: Starting Cleanup of Temporary Directories...
Feb 23 16:47:34 ip-10-0-136-68 systemd-tmpfiles[24235]: [/usr/lib/tmpfiles.d/pkg-dbus-daemon.conf:1] Duplicate line for path "/var/lib/dbus", ignoring.
Feb 23 16:47:34 ip-10-0-136-68 systemd-tmpfiles[24235]: [/usr/lib/tmpfiles.d/tmp.conf:12] Duplicate line for path "/var/tmp", ignoring.
Feb 23 16:47:34 ip-10-0-136-68 systemd-tmpfiles[24235]: [/usr/lib/tmpfiles.d/var.conf:14] Duplicate line for path "/var/log", ignoring.
Feb 23 16:47:34 ip-10-0-136-68 systemd-tmpfiles[24235]: [/usr/lib/tmpfiles.d/var.conf:19] Duplicate line for path "/var/cache", ignoring.
Feb 23 16:47:34 ip-10-0-136-68 systemd-tmpfiles[24235]: [/usr/lib/tmpfiles.d/var.conf:21] Duplicate line for path "/var/lib", ignoring.
Feb 23 16:47:34 ip-10-0-136-68 systemd-tmpfiles[24235]: [/usr/lib/tmpfiles.d/var.conf:23] Duplicate line for path "/var/spool", ignoring.
Feb 23 16:47:34 ip-10-0-136-68 systemd[1]: systemd-tmpfiles-clean.service: Succeeded.
Feb 23 16:47:34 ip-10-0-136-68 systemd[1]: Started Cleanup of Temporary Directories.
Feb 23 16:47:34 ip-10-0-136-68 systemd[1]: systemd-tmpfiles-clean.service: Consumed 14ms CPU time
Feb 23 16:47:35 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00159|connmgr|INFO|br-ex<->unix#406: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:47:45 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:47:45.997083272Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be64c744b92d1b1df463d84d8277c38069f3ce4e8e95ce84d4a6ffac6dc53710" id=a29c7312-3fb1-4fdf-ac9e-38ee9972f5b3 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:47:45 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:47:45.997315072Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:884319081b9650108dd2a638921522ef3df145e924d566325c975ca28709af4c,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be64c744b92d1b1df463d84d8277c38069f3ce4e8e95ce84d4a6ffac6dc53710],Size_:350038335,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=a29c7312-3fb1-4fdf-ac9e-38ee9972f5b3 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:47:50 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00160|connmgr|INFO|br-ex<->unix#411: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:48:05 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00161|connmgr|INFO|br-ex<->unix#420: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:48:10 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.RwDsMu.mount: Succeeded.
Feb 23 16:48:20 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00162|connmgr|INFO|br-ex<->unix#425: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:48:29 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.uW6Lrs.mount: Succeeded.
Feb 23 16:48:34 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00163|connmgr|INFO|br-int<->unix#2: 20 flow_mods in the 30 s starting 41 s ago (10 adds, 10 deletes)
Feb 23 16:48:35 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00164|connmgr|INFO|br-ex<->unix#433: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:48:50 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00165|connmgr|INFO|br-ex<->unix#438: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:49:05 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00166|connmgr|INFO|br-ex<->unix#446: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:49:20 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00167|connmgr|INFO|br-ex<->unix#451: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:49:25 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.cDu2or.mount: Succeeded.
Feb 23 16:49:34 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00168|connmgr|INFO|br-int<->unix#2: 17 flow_mods in the 36 s starting 56 s ago (9 adds, 8 deletes)
Feb 23 16:49:35 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00169|connmgr|INFO|br-ex<->unix#459: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:49:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 16:49:45.953307 2112 scope.go:115] "RemoveContainer" containerID="7a8df06e43bbca45023c3977cdd8e4b9cd9bb1a0081b1372336aa85558881060"
Feb 23 16:49:45 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:49:45.954017512Z" level=info msg="Removing container: 7a8df06e43bbca45023c3977cdd8e4b9cd9bb1a0081b1372336aa85558881060" id=20b4e12b-e572-4512-aa57-ac136d405b92 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 16:49:45 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-479f8b3312536eea69ac4eb80b3e85795654f3a7aa0c94a1ed97c462b6a2653f-merged.mount: Succeeded.
Feb 23 16:49:45 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-479f8b3312536eea69ac4eb80b3e85795654f3a7aa0c94a1ed97c462b6a2653f-merged.mount: Consumed 0 CPU time
Feb 23 16:49:45 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:49:45.994706732Z" level=info msg="Removed container 7a8df06e43bbca45023c3977cdd8e4b9cd9bb1a0081b1372336aa85558881060: openshift-debug-8tq7m/ip-10-0-136-68us-west-2computeinternal-debug/container-00" id=20b4e12b-e572-4512-aa57-ac136d405b92 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 16:49:45 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:49:45.996009928Z" level=info msg="Stopping pod sandbox: 7193567ee469bad55338ad96c8454ddd91fea579c18d7fb965295087e933ced1" id=2bdc9873-1720-4d3e-bb7a-c1be09daee99 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 16:49:45 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:49:45.996036988Z" level=info msg="Stopped pod sandbox (already stopped): 7193567ee469bad55338ad96c8454ddd91fea579c18d7fb965295087e933ced1" id=2bdc9873-1720-4d3e-bb7a-c1be09daee99 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 16:49:45 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:49:45.996308684Z" level=info msg="Removing pod sandbox: 7193567ee469bad55338ad96c8454ddd91fea579c18d7fb965295087e933ced1" id=25716d76-19f5-461d-89fc-8aaf83c0aa17 name=/runtime.v1.RuntimeService/RemovePodSandbox
Feb 23 16:49:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:49:46.004205463Z" level=info msg="Removed pod sandbox: 7193567ee469bad55338ad96c8454ddd91fea579c18d7fb965295087e933ced1" id=25716d76-19f5-461d-89fc-8aaf83c0aa17 name=/runtime.v1.RuntimeService/RemovePodSandbox
Feb 23 16:49:50 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00170|connmgr|INFO|br-ex<->unix#464: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:49:54 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.yAPZ4J.mount: Succeeded.
Feb 23 16:50:05 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00171|connmgr|INFO|br-ex<->unix#472: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:50:20 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00172|connmgr|INFO|br-ex<->unix#477: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:50:34 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00173|connmgr|INFO|br-int<->unix#2: 25 flow_mods in the 40 s starting 46 s ago (14 adds, 11 deletes)
Feb 23 16:50:35 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00174|connmgr|INFO|br-ex<->unix#485: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:50:50 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00175|connmgr|INFO|br-ex<->unix#490: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:51:05 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00176|connmgr|INFO|br-ex<->unix#498: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:51:14 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.GA8wMj.mount: Succeeded.
Feb 23 16:51:20 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00177|connmgr|INFO|br-ex<->unix#503: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:51:29 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.sZS26u.mount: Succeeded.
Feb 23 16:51:34 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00178|connmgr|INFO|br-int<->unix#2: 5 flow_mods 51 s ago (2 adds, 3 deletes)
Feb 23 16:51:35 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00179|connmgr|INFO|br-ex<->unix#511: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:51:50 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00180|connmgr|INFO|br-ex<->unix#516: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:52:05 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00181|connmgr|INFO|br-ex<->unix#524: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:52:20 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00182|connmgr|INFO|br-ex<->unix#529: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:52:34 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00183|connmgr|INFO|br-int<->unix#2: 2 flow_mods in the 14 s starting 54 s ago (1 adds, 1 deletes)
Feb 23 16:52:35 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00184|connmgr|INFO|br-ex<->unix#537: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:52:40 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.BBW3vP.mount: Succeeded.
Feb 23 16:52:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:52:45.999966132Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be64c744b92d1b1df463d84d8277c38069f3ce4e8e95ce84d4a6ffac6dc53710" id=03ec6745-3d46-43a2-9119-f15d518895fd name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:52:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:52:46.000146030Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:884319081b9650108dd2a638921522ef3df145e924d566325c975ca28709af4c,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be64c744b92d1b1df463d84d8277c38069f3ce4e8e95ce84d4a6ffac6dc53710],Size_:350038335,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=03ec6745-3d46-43a2-9119-f15d518895fd name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:52:50 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00185|connmgr|INFO|br-ex<->unix#542: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:53:05 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00186|connmgr|INFO|br-ex<->unix#551: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:53:10 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.6z3nZa.mount: Succeeded.
Feb 23 16:53:20 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00187|connmgr|INFO|br-ex<->unix#556: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:53:34 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00188|connmgr|INFO|br-int<->unix#2: 15 flow_mods in the 30 s starting 46 s ago (8 adds, 7 deletes)
Feb 23 16:53:35 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00189|connmgr|INFO|br-ex<->unix#564: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:53:39 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.LZiaNc.mount: Succeeded.
Feb 23 16:53:50 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00190|connmgr|INFO|br-ex<->unix#569: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:53:54 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.wrkQDu.mount: Succeeded.
Feb 23 16:53:59 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.REKIv9.mount: Succeeded.
Feb 23 16:54:05 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00191|connmgr|INFO|br-ex<->unix#577: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:54:20 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00192|connmgr|INFO|br-ex<->unix#582: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:54:24 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.X31MKB.mount: Succeeded.
Feb 23 16:54:34 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00193|connmgr|INFO|br-int<->unix#2: 1 flow_mods 35 s ago (1 adds)
Feb 23 16:54:35 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00194|connmgr|INFO|br-ex<->unix#590: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:54:39 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.kVJ0v4.mount: Succeeded.
Feb 23 16:54:44 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.1BqvMG.mount: Succeeded.
Feb 23 16:54:50 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00195|connmgr|INFO|br-ex<->unix#595: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:54:55 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.TisbKQ.mount: Succeeded.
Feb 23 16:55:05 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00196|connmgr|INFO|br-ex<->unix#603: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:55:20 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00197|connmgr|INFO|br-ex<->unix#608: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:55:34 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00198|connmgr|INFO|br-int<->unix#2: 7 flow_mods in the 49 s starting 54 s ago (3 adds, 4 deletes)
Feb 23 16:55:35 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00199|connmgr|INFO|br-ex<->unix#616: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:55:50 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00200|connmgr|INFO|br-ex<->unix#621: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:56:05 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00201|connmgr|INFO|br-ex<->unix#629: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:56:09 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.e7haaR.mount: Succeeded.
Feb 23 16:56:10 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.IQGK6n.mount: Succeeded.
Feb 23 16:56:19 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.Ayv9Q9.mount: Succeeded.
Feb 23 16:56:20 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00202|connmgr|INFO|br-ex<->unix#634: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:56:24 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.iQOeWd.mount: Succeeded.
Feb 23 16:56:34 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00203|connmgr|INFO|br-int<->unix#2: 5 flow_mods in the 21 s starting 55 s ago (2 adds, 3 deletes)
Feb 23 16:56:34 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.aiYVWe.mount: Succeeded.
Feb 23 16:56:35 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00204|connmgr|INFO|br-ex<->unix#642: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:56:40 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.lzURt7.mount: Succeeded.
Feb 23 16:56:44 ip-10-0-136-68 systemd[1]: run-runc-97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923-runc.GzVocz.mount: Succeeded.
Feb 23 16:56:44 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.lyenik.mount: Succeeded.
Feb 23 16:56:50 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00205|connmgr|INFO|br-ex<->unix#647: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:57:00 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00206|connmgr|INFO|br-ex<->unix#655: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:57:00 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00207|connmgr|INFO|br-ex<->unix#658: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:57:04 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.ManLhG.mount: Succeeded.
Feb 23 16:57:15 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00208|connmgr|INFO|br-ex<->unix#663: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:57:29 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.5zFu1G.mount: Succeeded.
Feb 23 16:57:30 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00209|connmgr|INFO|br-ex<->unix#671: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:57:34 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00210|connmgr|INFO|br-int<->unix#2: 52 flow_mods in the last 43 s (27 adds, 25 deletes)
Feb 23 16:57:44 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.YX4HlB.mount: Succeeded.
Feb 23 16:57:45 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00211|connmgr|INFO|br-ex<->unix#676: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:57:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:57:46.003448004Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be64c744b92d1b1df463d84d8277c38069f3ce4e8e95ce84d4a6ffac6dc53710" id=04e97458-9ab8-4876-9faf-3b857da14840 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:57:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 16:57:46.003692609Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:884319081b9650108dd2a638921522ef3df145e924d566325c975ca28709af4c,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be64c744b92d1b1df463d84d8277c38069f3ce4e8e95ce84d4a6ffac6dc53710],Size_:350038335,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=04e97458-9ab8-4876-9faf-3b857da14840 name=/runtime.v1.ImageService/ImageStatus
Feb 23 16:57:55 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.xG6lbc.mount: Succeeded.
Feb 23 16:58:00 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00212|connmgr|INFO|br-ex<->unix#685: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:58:15 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00213|connmgr|INFO|br-ex<->unix#690: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:58:19 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.Tz9TWm.mount: Succeeded.
Feb 23 16:58:30 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00214|connmgr|INFO|br-ex<->unix#698: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:58:34 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00215|connmgr|INFO|br-int<->unix#2: 27 flow_mods in the 51 s starting 59 s ago (13 adds, 14 deletes)
Feb 23 16:58:40 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.BKWYsZ.mount: Succeeded.
Feb 23 16:58:45 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00216|connmgr|INFO|br-ex<->unix#703: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:59:00 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00217|connmgr|INFO|br-ex<->unix#711: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:59:15 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00218|connmgr|INFO|br-ex<->unix#716: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:59:30 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00219|connmgr|INFO|br-ex<->unix#724: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:59:34 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00220|connmgr|INFO|br-int<->unix#2: 11 flow_mods in the 30 s starting 36 s ago (5 adds, 6 deletes)
Feb 23 16:59:39 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.g4eF17.mount: Succeeded.
Feb 23 16:59:45 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00221|connmgr|INFO|br-ex<->unix#729: 2 flow_mods in the last 0 s (2 adds)
Feb 23 16:59:54 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.qib1Xn.mount: Succeeded.
Feb 23 17:00:00 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00222|connmgr|INFO|br-ex<->unix#737: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:00:15 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00223|connmgr|INFO|br-ex<->unix#742: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:00:30 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00224|connmgr|INFO|br-ex<->unix#750: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:00:34 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00225|connmgr|INFO|br-int<->unix#2: 21 flow_mods in the 55 s starting 58 s ago (10 adds, 11 deletes)
Feb 23 17:00:45 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00226|connmgr|INFO|br-ex<->unix#755: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:00:49 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.kaI4Rm.mount: Succeeded.
Feb 23 17:01:00 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.LNyvGp.mount: Succeeded.
Feb 23 17:01:00 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00227|connmgr|INFO|br-ex<->unix#763: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:01:15 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00228|connmgr|INFO|br-ex<->unix#768: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:01:30 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00229|connmgr|INFO|br-ex<->unix#776: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:01:34 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00230|connmgr|INFO|br-int<->unix#2: 11 flow_mods in the 29 s starting 35 s ago (7 adds, 4 deletes)
Feb 23 17:01:45 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00231|connmgr|INFO|br-ex<->unix#781: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:02:00 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00232|connmgr|INFO|br-ex<->unix#789: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:02:15 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00233|connmgr|INFO|br-ex<->unix#794: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:02:30 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00234|connmgr|INFO|br-ex<->unix#802: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:02:34 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00235|connmgr|INFO|br-int<->unix#2: 18 flow_mods in the 47 s starting 57 s ago (8 adds, 10 deletes)
Feb 23 17:02:40 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.HHyN2F.mount: Succeeded.
Feb 23 17:02:43 ip-10-0-136-68 NetworkManager[1147]: [1677171763.5274] dhcp4 (br-ex): state changed new lease, address=10.0.136.68
Feb 23 17:02:45 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00236|connmgr|INFO|br-ex<->unix#807: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:02:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:02:46.006621990Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be64c744b92d1b1df463d84d8277c38069f3ce4e8e95ce84d4a6ffac6dc53710" id=5d35ee4c-750f-424a-a9c7-e7e0587007e9 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:02:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:02:46.006830083Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:884319081b9650108dd2a638921522ef3df145e924d566325c975ca28709af4c,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be64c744b92d1b1df463d84d8277c38069f3ce4e8e95ce84d4a6ffac6dc53710],Size_:350038335,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=5d35ee4c-750f-424a-a9c7-e7e0587007e9 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:03:00 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00237|connmgr|INFO|br-ex<->unix#816: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:03:10 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.OF8hxW.mount: Succeeded.
Feb 23 17:03:15 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00238|connmgr|INFO|br-ex<->unix#821: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:03:19 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.f3elYK.mount: Succeeded.
Feb 23 17:03:30 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00239|connmgr|INFO|br-ex<->unix#829: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:03:34 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00240|connmgr|INFO|br-int<->unix#2: 5 flow_mods 19 s ago (3 adds, 2 deletes)
Feb 23 17:03:45 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00241|connmgr|INFO|br-ex<->unix#834: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:04:00 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00242|connmgr|INFO|br-ex<->unix#842: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:04:15 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00243|connmgr|INFO|br-ex<->unix#847: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:04:30 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00244|connmgr|INFO|br-ex<->unix#855: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:04:34 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00245|connmgr|INFO|br-int<->unix#2: 2 flow_mods in the 13 s starting 58 s ago (1 adds, 1 deletes)
Feb 23 17:04:45 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00246|connmgr|INFO|br-ex<->unix#860: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:04:59 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.TTvs4R.mount: Succeeded.
Feb 23 17:05:00 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00247|connmgr|INFO|br-ex<->unix#868: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:05:15 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00248|connmgr|INFO|br-ex<->unix#873: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:05:19 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00249|connmgr|INFO|br-ex<->unix#876: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:05:19 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00250|connmgr|INFO|br-ex<->unix#879: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:05:20 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00251|connmgr|INFO|br-ex<->unix#882: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:05:34 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00252|connmgr|INFO|br-int<->unix#2: 19 flow_mods in the 8 s starting 14 s ago (13 adds, 6 deletes)
Feb 23 17:05:34 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.vmutOx.mount: Succeeded.
Feb 23 17:05:34 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.uxbiPb.mount: Succeeded.
Feb 23 17:05:35 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00253|connmgr|INFO|br-ex<->unix#890: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:05:50 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00254|connmgr|INFO|br-ex<->unix#895: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:06:05 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00255|connmgr|INFO|br-ex<->unix#903: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:06:20 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00256|connmgr|INFO|br-ex<->unix#908: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:06:34 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00257|connmgr|INFO|br-int<->unix#2: 11 flow_mods in the 2 s starting 59 s ago (5 adds, 6 deletes)
Feb 23 17:06:35 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00258|connmgr|INFO|br-ex<->unix#916: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:06:39 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.POZOKt.mount: Succeeded.
Feb 23 17:06:44 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.YoJdsQ.mount: Succeeded.
Feb 23 17:06:50 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00259|connmgr|INFO|br-ex<->unix#921: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:07:05 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00260|connmgr|INFO|br-ex<->unix#929: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:07:20 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00261|connmgr|INFO|br-ex<->unix#934: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:07:34 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00262|connmgr|INFO|br-int<->unix#2: 4 flow_mods in the 30 s starting 50 s ago (2 adds, 2 deletes)
Feb 23 17:07:35 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00263|connmgr|INFO|br-ex<->unix#942: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:07:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:07:46.009260702Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be64c744b92d1b1df463d84d8277c38069f3ce4e8e95ce84d4a6ffac6dc53710" id=b73b02e9-09e4-4027-bbe9-b90542ece20c name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:07:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:07:46.009476174Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:884319081b9650108dd2a638921522ef3df145e924d566325c975ca28709af4c,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be64c744b92d1b1df463d84d8277c38069f3ce4e8e95ce84d4a6ffac6dc53710],Size_:350038335,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=b73b02e9-09e4-4027-bbe9-b90542ece20c name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:07:50 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00264|connmgr|INFO|br-ex<->unix#947: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:08:05 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00265|connmgr|INFO|br-ex<->unix#956: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:08:10 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.Z5Jq77.mount: Succeeded.
Feb 23 17:08:20 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00266|connmgr|INFO|br-ex<->unix#961: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:08:34 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.yWmohO.mount: Succeeded. Feb 23 17:08:35 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00267|connmgr|INFO|br-ex<->unix#969: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:08:35 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00268|connmgr|INFO|br-ex<->unix#972: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:08:35 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00269|connmgr|INFO|br-int<->unix#2: 46 flow_mods in the 9 s starting 10 s ago (24 adds, 22 deletes) Feb 23 17:08:37 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00270|connmgr|INFO|br-ex<->unix#975: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:08:37 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00271|connmgr|INFO|br-ex<->unix#978: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:08:44 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.JmDy0e.mount: Succeeded. Feb 23 17:08:52 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00272|connmgr|INFO|br-ex<->unix#983: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:08:54 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.ahGmY2.mount: Succeeded. Feb 23 17:08:59 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.ZRB1y4.mount: Succeeded. Feb 23 17:09:07 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00273|connmgr|INFO|br-ex<->unix#992: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:09:10 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.CheW4K.mount: Succeeded. 
Feb 23 17:09:22 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00274|connmgr|INFO|br-ex<->unix#996: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:09:25 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.BuHS2U.mount: Succeeded. Feb 23 17:09:29 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.LTOthm.mount: Succeeded. Feb 23 17:09:35 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00275|connmgr|INFO|br-int<->unix#2: 35 flow_mods in the 41 s starting 58 s ago (17 adds, 18 deletes) Feb 23 17:09:37 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00276|connmgr|INFO|br-ex<->unix#1004: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:09:52 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00277|connmgr|INFO|br-ex<->unix#1009: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:10:07 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00278|connmgr|INFO|br-ex<->unix#1017: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:10:14 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.bAd5GT.mount: Succeeded. Feb 23 17:10:22 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00279|connmgr|INFO|br-ex<->unix#1022: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:10:30 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.8r09QK.mount: Succeeded. Feb 23 17:10:37 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00280|connmgr|INFO|br-ex<->unix#1031: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:10:52 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00281|connmgr|INFO|br-ex<->unix#1035: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:10:54 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.P8XLoT.mount: Succeeded. 
Feb 23 17:11:07 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00282|connmgr|INFO|br-ex<->unix#1044: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:11:22 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00283|connmgr|INFO|br-ex<->unix#1048: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:11:29 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.TynWqg.mount: Succeeded. Feb 23 17:11:37 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00284|connmgr|INFO|br-ex<->unix#1057: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:11:52 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00285|connmgr|INFO|br-ex<->unix#1061: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:11:55 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.QZJbhC.mount: Succeeded. Feb 23 17:11:56 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00286|connmgr|INFO|br-int<->unix#2: 1 flow_mods 10 s ago (1 adds) Feb 23 17:12:07 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00287|connmgr|INFO|br-ex<->unix#1070: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:12:22 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00288|connmgr|INFO|br-ex<->unix#1074: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:12:30 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00289|connmgr|INFO|br-ex<->unix#1082: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:12:30 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00290|connmgr|INFO|br-ex<->unix#1085: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:12:31 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00291|connmgr|INFO|br-ex<->unix#1088: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:12:31 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00292|connmgr|INFO|br-ex<->unix#1091: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:12:34 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.aodTLP.mount: Succeeded. 
Feb 23 17:12:40 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00293|connmgr|INFO|br-ex<->unix#1095: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:12:40 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00294|connmgr|INFO|br-ex<->unix#1098: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:12:40 ip-10-0-136-68 systemd[1]: run-runc-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82-runc.keWpiv.mount: Succeeded. Feb 23 17:12:42 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00295|connmgr|INFO|br-ex<->unix#1102: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:12:42 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00296|connmgr|INFO|br-ex<->unix#1105: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:12:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:12:46.012195625Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be64c744b92d1b1df463d84d8277c38069f3ce4e8e95ce84d4a6ffac6dc53710" id=2562aab3-e339-426a-9dd6-b8a23737a6e2 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:12:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:12:46.012395510Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:884319081b9650108dd2a638921522ef3df145e924d566325c975ca28709af4c,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be64c744b92d1b1df463d84d8277c38069f3ce4e8e95ce84d4a6ffac6dc53710],Size_:350038335,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=2562aab3-e339-426a-9dd6-b8a23737a6e2 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:12:50 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00297|connmgr|INFO|br-ex<->unix#1108: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:12:50 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00298|connmgr|INFO|br-ex<->unix#1111: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:12:56 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00299|connmgr|INFO|br-int<->unix#2: 184 flow_mods in the last 55 s (95 adds, 89 deletes) Feb 23 17:13:05 ip-10-0-136-68 ovs-vswitchd[1105]: 
ovs|00300|connmgr|INFO|br-ex<->unix#1120: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:13:08 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:08.083421 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-monitoring/prometheus-operator-admission-webhook-6854f48657-f548j] Feb 23 17:13:08 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:08.083611 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/prometheus-operator-admission-webhook-6854f48657-f548j" podUID=3ff1ba18-ee4b-4151-95d3-ad4742635d6b containerName="prometheus-operator-admission-webhook" containerID="cri-o://f80f57db9253ff43c8cecdea2d1d1becccb3ce809fa997d7d557fc2364d0133e" gracePeriod=30 Feb 23 17:13:08 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:08.084588029Z" level=info msg="Stopping container: f80f57db9253ff43c8cecdea2d1d1becccb3ce809fa997d7d557fc2364d0133e (timeout: 30s)" id=7a26439f-15af-42f8-af18-69fd4e3d221c name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:13:08 ip-10-0-136-68 systemd[1]: crio-f80f57db9253ff43c8cecdea2d1d1becccb3ce809fa997d7d557fc2364d0133e.scope: Succeeded. Feb 23 17:13:08 ip-10-0-136-68 systemd[1]: crio-f80f57db9253ff43c8cecdea2d1d1becccb3ce809fa997d7d557fc2364d0133e.scope: Consumed 2.949s CPU time Feb 23 17:13:08 ip-10-0-136-68 systemd[1]: crio-conmon-f80f57db9253ff43c8cecdea2d1d1becccb3ce809fa997d7d557fc2364d0133e.scope: Succeeded. Feb 23 17:13:08 ip-10-0-136-68 systemd[1]: crio-conmon-f80f57db9253ff43c8cecdea2d1d1becccb3ce809fa997d7d557fc2364d0133e.scope: Consumed 25ms CPU time Feb 23 17:13:08 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-213690484b5d32543644f644f08e742245ff457b21cf2a4be74363e11ab1e531-merged.mount: Succeeded. 
Feb 23 17:13:08 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-213690484b5d32543644f644f08e742245ff457b21cf2a4be74363e11ab1e531-merged.mount: Consumed 0 CPU time Feb 23 17:13:08 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:08.301070832Z" level=info msg="Stopped container f80f57db9253ff43c8cecdea2d1d1becccb3ce809fa997d7d557fc2364d0133e: openshift-monitoring/prometheus-operator-admission-webhook-6854f48657-f548j/prometheus-operator-admission-webhook" id=7a26439f-15af-42f8-af18-69fd4e3d221c name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:13:08 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:08.301788947Z" level=info msg="Stopping pod sandbox: d57b42d0ba3aca44e3beb581dd5ff05ba03a846c20891f67b79ec7e09ad8fb66" id=623ea2e1-036c-4376-b59d-0aac194f9843 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:13:08 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:08.302028153Z" level=info msg="Got pod network &{Name:prometheus-operator-admission-webhook-6854f48657-f548j Namespace:openshift-monitoring ID:d57b42d0ba3aca44e3beb581dd5ff05ba03a846c20891f67b79ec7e09ad8fb66 UID:3ff1ba18-ee4b-4151-95d3-ad4742635d6b NetNS:/var/run/netns/4b34e3bf-9614-43bb-b8a4-15840ea1212a Networks:[{Name:multus-cni-network Ifname:eth0}] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:13:08 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:08.302138023Z" level=info msg="Deleting pod openshift-monitoring_prometheus-operator-admission-webhook-6854f48657-f548j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:13:08 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:08.318634 2112 generic.go:296] "Generic (PLEG): container finished" podID=3ff1ba18-ee4b-4151-95d3-ad4742635d6b containerID="f80f57db9253ff43c8cecdea2d1d1becccb3ce809fa997d7d557fc2364d0133e" exitCode=0 Feb 23 17:13:08 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:08.318717 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/prometheus-operator-admission-webhook-6854f48657-f548j" event=&{ID:3ff1ba18-ee4b-4151-95d3-ad4742635d6b Type:ContainerDied Data:f80f57db9253ff43c8cecdea2d1d1becccb3ce809fa997d7d557fc2364d0133e} Feb 23 17:13:08 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00301|bridge|INFO|bridge br-int: deleted interface d57b42d0ba3aca4 on port 13 Feb 23 17:13:08 ip-10-0-136-68 kernel: device d57b42d0ba3aca4 left promiscuous mode Feb 23 17:13:08 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:08.625847 2112 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-ingress/router-default-c776d6877-hc4dc] Feb 23 17:13:08 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:08.625914 2112 topology_manager.go:205] "Topology Admit Handler" Feb 23 17:13:08 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:13:08.626202 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7db4b92a-156c-43bb-8301-4987d2527f68" containerName="container-00" Feb 23 17:13:08 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:08.626277 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="7db4b92a-156c-43bb-8301-4987d2527f68" containerName="container-00" Feb 23 17:13:08 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:08.626448 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="7db4b92a-156c-43bb-8301-4987d2527f68" containerName="container-00" Feb 23 17:13:08 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-pod4b453ab9_1ce4_45a1_b69d_c289991008f1.slice. 
Feb 23 17:13:08 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:08.669579 2112 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-ingress/router-default-c776d6877-hc4dc] Feb 23 17:13:08 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:08.754543 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpxwd\" (UniqueName: \"kubernetes.io/projected/4b453ab9-1ce4-45a1-b69d-c289991008f1-kube-api-access-wpxwd\") pod \"router-default-c776d6877-hc4dc\" (UID: \"4b453ab9-1ce4-45a1-b69d-c289991008f1\") " pod="openshift-ingress/router-default-c776d6877-hc4dc" Feb 23 17:13:08 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:08.754613 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b453ab9-1ce4-45a1-b69d-c289991008f1-service-ca-bundle\") pod \"router-default-c776d6877-hc4dc\" (UID: \"4b453ab9-1ce4-45a1-b69d-c289991008f1\") " pod="openshift-ingress/router-default-c776d6877-hc4dc" Feb 23 17:13:08 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:08.754641 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4b453ab9-1ce4-45a1-b69d-c289991008f1-default-certificate\") pod \"router-default-c776d6877-hc4dc\" (UID: \"4b453ab9-1ce4-45a1-b69d-c289991008f1\") " pod="openshift-ingress/router-default-c776d6877-hc4dc" Feb 23 17:13:08 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:08.754687 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4b453ab9-1ce4-45a1-b69d-c289991008f1-metrics-certs\") pod \"router-default-c776d6877-hc4dc\" (UID: \"4b453ab9-1ce4-45a1-b69d-c289991008f1\") " pod="openshift-ingress/router-default-c776d6877-hc4dc" Feb 23 17:13:08 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:08.754718 2112 reconciler.go:357] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4b453ab9-1ce4-45a1-b69d-c289991008f1-stats-auth\") pod \"router-default-c776d6877-hc4dc\" (UID: \"4b453ab9-1ce4-45a1-b69d-c289991008f1\") " pod="openshift-ingress/router-default-c776d6877-hc4dc" Feb 23 17:13:08 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:08.855906 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-wpxwd\" (UniqueName: \"kubernetes.io/projected/4b453ab9-1ce4-45a1-b69d-c289991008f1-kube-api-access-wpxwd\") pod \"router-default-c776d6877-hc4dc\" (UID: \"4b453ab9-1ce4-45a1-b69d-c289991008f1\") " pod="openshift-ingress/router-default-c776d6877-hc4dc" Feb 23 17:13:08 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:08.855968 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b453ab9-1ce4-45a1-b69d-c289991008f1-service-ca-bundle\") pod \"router-default-c776d6877-hc4dc\" (UID: \"4b453ab9-1ce4-45a1-b69d-c289991008f1\") " pod="openshift-ingress/router-default-c776d6877-hc4dc" Feb 23 17:13:08 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:08.855995 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4b453ab9-1ce4-45a1-b69d-c289991008f1-default-certificate\") pod \"router-default-c776d6877-hc4dc\" (UID: \"4b453ab9-1ce4-45a1-b69d-c289991008f1\") " pod="openshift-ingress/router-default-c776d6877-hc4dc" Feb 23 17:13:08 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:08.856022 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4b453ab9-1ce4-45a1-b69d-c289991008f1-metrics-certs\") pod \"router-default-c776d6877-hc4dc\" (UID: \"4b453ab9-1ce4-45a1-b69d-c289991008f1\") " pod="openshift-ingress/router-default-c776d6877-hc4dc" Feb 23 17:13:08 ip-10-0-136-68 
kubenswrapper[2112]: I0223 17:13:08.856051 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4b453ab9-1ce4-45a1-b69d-c289991008f1-stats-auth\") pod \"router-default-c776d6877-hc4dc\" (UID: \"4b453ab9-1ce4-45a1-b69d-c289991008f1\") " pod="openshift-ingress/router-default-c776d6877-hc4dc" Feb 23 17:13:08 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:08.857097 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b453ab9-1ce4-45a1-b69d-c289991008f1-service-ca-bundle\") pod \"router-default-c776d6877-hc4dc\" (UID: \"4b453ab9-1ce4-45a1-b69d-c289991008f1\") " pod="openshift-ingress/router-default-c776d6877-hc4dc" Feb 23 17:13:08 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:08.858845 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4b453ab9-1ce4-45a1-b69d-c289991008f1-stats-auth\") pod \"router-default-c776d6877-hc4dc\" (UID: \"4b453ab9-1ce4-45a1-b69d-c289991008f1\") " pod="openshift-ingress/router-default-c776d6877-hc4dc" Feb 23 17:13:08 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:08.860378 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4b453ab9-1ce4-45a1-b69d-c289991008f1-default-certificate\") pod \"router-default-c776d6877-hc4dc\" (UID: \"4b453ab9-1ce4-45a1-b69d-c289991008f1\") " pod="openshift-ingress/router-default-c776d6877-hc4dc" Feb 23 17:13:08 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:08.871946 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4b453ab9-1ce4-45a1-b69d-c289991008f1-metrics-certs\") pod \"router-default-c776d6877-hc4dc\" (UID: \"4b453ab9-1ce4-45a1-b69d-c289991008f1\") " pod="openshift-ingress/router-default-c776d6877-hc4dc" Feb 23 17:13:08 
ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:08.879045 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpxwd\" (UniqueName: \"kubernetes.io/projected/4b453ab9-1ce4-45a1-b69d-c289991008f1-kube-api-access-wpxwd\") pod \"router-default-c776d6877-hc4dc\" (UID: \"4b453ab9-1ce4-45a1-b69d-c289991008f1\") " pod="openshift-ingress/router-default-c776d6877-hc4dc" Feb 23 17:13:08 ip-10-0-136-68 crio[2062]: 2023-02-23T17:13:08Z [verbose] Del: openshift-monitoring:prometheus-operator-admission-webhook-6854f48657-f548j:3ff1ba18-ee4b-4151-95d3-ad4742635d6b:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","dns":{},"ipam":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxage":5,"logfile-maxbackups":5,"logfile-maxsize":100,"name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay"} Feb 23 17:13:08 ip-10-0-136-68 crio[2062]: I0223 17:13:08.441206 54959 ovs.go:90] Maximum command line arguments set to: 191102 Feb 23 17:13:08 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-e49ca4532c201adcb11106f197d6b856015a1751442a58e35434f175319b906a-merged.mount: Succeeded. 
Feb 23 17:13:08 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-e49ca4532c201adcb11106f197d6b856015a1751442a58e35434f175319b906a-merged.mount: Consumed 0 CPU time Feb 23 17:13:08 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:08.950793645Z" level=info msg="Stopped pod sandbox: d57b42d0ba3aca44e3beb581dd5ff05ba03a846c20891f67b79ec7e09ad8fb66" id=623ea2e1-036c-4376-b59d-0aac194f9843 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:13:09 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:09.063057 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/3ff1ba18-ee4b-4151-95d3-ad4742635d6b-tls-certificates\") pod \"3ff1ba18-ee4b-4151-95d3-ad4742635d6b\" (UID: \"3ff1ba18-ee4b-4151-95d3-ad4742635d6b\") " Feb 23 17:13:09 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:09.071041 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ff1ba18-ee4b-4151-95d3-ad4742635d6b-tls-certificates" (OuterVolumeSpecName: "tls-certificates") pod "3ff1ba18-ee4b-4151-95d3-ad4742635d6b" (UID: "3ff1ba18-ee4b-4151-95d3-ad4742635d6b"). InnerVolumeSpecName "tls-certificates". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:13:09 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:09.134168 2112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-c776d6877-hc4dc" Feb 23 17:13:09 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:09.134716737Z" level=info msg="Running pod sandbox: openshift-ingress/router-default-c776d6877-hc4dc/POD" id=a930acc5-8403-4310-8b19-d0525fe35b59 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:13:09 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:09.134766162Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:13:09 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:09.153116513Z" level=info msg="Got pod network &{Name:router-default-c776d6877-hc4dc Namespace:openshift-ingress ID:13573c504bb25b1cde2dc0ff3b4dfa8179a4e57808537dec9fe9e87db117424f UID:4b453ab9-1ce4-45a1-b69d-c289991008f1 NetNS:/var/run/netns/78fd1792-7a62-4ba0-a338-116add7a24cd Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:13:09 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:09.153140093Z" level=info msg="Adding pod openshift-ingress_router-default-c776d6877-hc4dc to CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:13:09 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:09.164174 2112 reconciler.go:399] "Volume detached for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/3ff1ba18-ee4b-4151-95d3-ad4742635d6b-tls-certificates\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:13:09 ip-10-0-136-68 systemd[1]: run-netns-4b34e3bf\x2d9614\x2d43bb\x2db8a4\x2d15840ea1212a.mount: Succeeded. Feb 23 17:13:09 ip-10-0-136-68 systemd[1]: run-netns-4b34e3bf\x2d9614\x2d43bb\x2db8a4\x2d15840ea1212a.mount: Consumed 0 CPU time Feb 23 17:13:09 ip-10-0-136-68 systemd[1]: run-ipcns-4b34e3bf\x2d9614\x2d43bb\x2db8a4\x2d15840ea1212a.mount: Succeeded. 
Feb 23 17:13:09 ip-10-0-136-68 systemd[1]: run-ipcns-4b34e3bf\x2d9614\x2d43bb\x2db8a4\x2d15840ea1212a.mount: Consumed 0 CPU time Feb 23 17:13:09 ip-10-0-136-68 systemd[1]: run-utsns-4b34e3bf\x2d9614\x2d43bb\x2db8a4\x2d15840ea1212a.mount: Succeeded. Feb 23 17:13:09 ip-10-0-136-68 systemd[1]: run-utsns-4b34e3bf\x2d9614\x2d43bb\x2db8a4\x2d15840ea1212a.mount: Consumed 0 CPU time Feb 23 17:13:09 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-d57b42d0ba3aca44e3beb581dd5ff05ba03a846c20891f67b79ec7e09ad8fb66-userdata-shm.mount: Succeeded. Feb 23 17:13:09 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-d57b42d0ba3aca44e3beb581dd5ff05ba03a846c20891f67b79ec7e09ad8fb66-userdata-shm.mount: Consumed 0 CPU time Feb 23 17:13:09 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-3ff1ba18\x2dee4b\x2d4151\x2d95d3\x2dad4742635d6b-volumes-kubernetes.io\x7esecret-tls\x2dcertificates.mount: Succeeded. Feb 23 17:13:09 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-3ff1ba18\x2dee4b\x2d4151\x2d95d3\x2dad4742635d6b-volumes-kubernetes.io\x7esecret-tls\x2dcertificates.mount: Consumed 0 CPU time Feb 23 17:13:09 ip-10-0-136-68 systemd-udevd[55018]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. 
Feb 23 17:13:09 ip-10-0-136-68 systemd-udevd[55018]: Could not generate persistent MAC address for 13573c504bb25b1: No such file or directory Feb 23 17:13:09 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): 13573c504bb25b1: link is not ready Feb 23 17:13:09 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): 13573c504bb25b1: link becomes ready Feb 23 17:13:09 ip-10-0-136-68 NetworkManager[1147]: [1677172389.3148] device (13573c504bb25b1): carrier: link connected Feb 23 17:13:09 ip-10-0-136-68 NetworkManager[1147]: [1677172389.3156] manager: (13573c504bb25b1): new Veth device (/org/freedesktop/NetworkManager/Devices/50) Feb 23 17:13:09 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:09.325716 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-6854f48657-f548j" event=&{ID:3ff1ba18-ee4b-4151-95d3-ad4742635d6b Type:ContainerDied Data:d57b42d0ba3aca44e3beb581dd5ff05ba03a846c20891f67b79ec7e09ad8fb66} Feb 23 17:13:09 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:09.325934 2112 scope.go:115] "RemoveContainer" containerID="f80f57db9253ff43c8cecdea2d1d1becccb3ce809fa997d7d557fc2364d0133e" Feb 23 17:13:09 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:09.330091497Z" level=info msg="Removing container: f80f57db9253ff43c8cecdea2d1d1becccb3ce809fa997d7d557fc2364d0133e" id=9533edab-fcc3-4b40-bed6-2a8cf1647cae name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:13:09 ip-10-0-136-68 systemd[1]: Removed slice libcontainer container kubepods-burstable-pod3ff1ba18_ee4b_4151_95d3_ad4742635d6b.slice. 
Feb 23 17:13:09 ip-10-0-136-68 systemd[1]: kubepods-burstable-pod3ff1ba18_ee4b_4151_95d3_ad4742635d6b.slice: Consumed 2.974s CPU time Feb 23 17:13:09 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00302|bridge|INFO|bridge br-int: added interface 13573c504bb25b1 on port 20 Feb 23 17:13:09 ip-10-0-136-68 kernel: device 13573c504bb25b1 entered promiscuous mode Feb 23 17:13:09 ip-10-0-136-68 NetworkManager[1147]: [1677172389.3421] manager: (13573c504bb25b1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51) Feb 23 17:13:09 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:09.360725 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-monitoring/prometheus-operator-admission-webhook-6854f48657-f548j] Feb 23 17:13:09 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:09.372344 2112 kubelet.go:2129] "SyncLoop REMOVE" source="api" pods=[openshift-monitoring/prometheus-operator-admission-webhook-6854f48657-f548j] Feb 23 17:13:09 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:09.384545607Z" level=info msg="Removed container f80f57db9253ff43c8cecdea2d1d1becccb3ce809fa997d7d557fc2364d0133e: openshift-monitoring/prometheus-operator-admission-webhook-6854f48657-f548j/prometheus-operator-admission-webhook" id=9533edab-fcc3-4b40-bed6-2a8cf1647cae name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:13:09 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:09.436874 2112 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-ingress/router-default-c776d6877-hc4dc] Feb 23 17:13:09 ip-10-0-136-68 crio[2062]: I0223 17:13:09.291056 55007 ovs.go:90] Maximum command line arguments set to: 191102 Feb 23 17:13:09 ip-10-0-136-68 crio[2062]: 2023-02-23T17:13:09Z [verbose] Add: openshift-ingress:router-default-c776d6877-hc4dc:4b453ab9-1ce4-45a1-b69d-c289991008f1:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"13573c504bb25b1","mac":"12:fe:a1:7b:24:97"},{"name":"eth0","mac":"0a:58:0a:81:02:16","sandbox":"/var/run/netns/78fd1792-7a62-4ba0-a338-116add7a24cd"}],"ips":[{"version":"4","interface":1,"address":"10.129.2.22/23","gateway":"10.129.2.1"}],"dns":{}} Feb 23 17:13:09 ip-10-0-136-68 crio[2062]: I0223 17:13:09.419435 55000 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-ingress", Name:"router-default-c776d6877-hc4dc", UID:"4b453ab9-1ce4-45a1-b69d-c289991008f1", APIVersion:"v1", ResourceVersion:"66868", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.129.2.22/23] from ovn-kubernetes Feb 23 17:13:09 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:09.439113019Z" level=info msg="Got pod network &{Name:router-default-c776d6877-hc4dc Namespace:openshift-ingress ID:13573c504bb25b1cde2dc0ff3b4dfa8179a4e57808537dec9fe9e87db117424f UID:4b453ab9-1ce4-45a1-b69d-c289991008f1 NetNS:/var/run/netns/78fd1792-7a62-4ba0-a338-116add7a24cd Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:13:09 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:09.439264897Z" level=info msg="Checking pod openshift-ingress_router-default-c776d6877-hc4dc for CNI network multus-cni-network (type=multus)" Feb 23 17:13:09 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:13:09.443140 2112 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b453ab9_1ce4_45a1_b69d_c289991008f1.slice/crio-13573c504bb25b1cde2dc0ff3b4dfa8179a4e57808537dec9fe9e87db117424f.scope WatchSource:0}: Error finding container 13573c504bb25b1cde2dc0ff3b4dfa8179a4e57808537dec9fe9e87db117424f: Status 404 returned error can't find the container with id 13573c504bb25b1cde2dc0ff3b4dfa8179a4e57808537dec9fe9e87db117424f Feb 23 17:13:09 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:09.445827220Z" level=info 
msg="Ran pod sandbox 13573c504bb25b1cde2dc0ff3b4dfa8179a4e57808537dec9fe9e87db117424f with infra container: openshift-ingress/router-default-c776d6877-hc4dc/POD" id=a930acc5-8403-4310-8b19-d0525fe35b59 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:13:09 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:09.446627491Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:7eed22e436328cdfddd00e0f8bf78e6c175e69aed39763705ff47935204ca03c" id=cc8bc011-3797-4a54-a1bb-47c04003e3d5 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:09 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:09.446881891Z" level=info msg="Image registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:7eed22e436328cdfddd00e0f8bf78e6c175e69aed39763705ff47935204ca03c not found" id=cc8bc011-3797-4a54-a1bb-47c04003e3d5 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:09 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:09.447271 2112 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 17:13:09 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:09.447735833Z" level=info msg="Pulling image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:7eed22e436328cdfddd00e0f8bf78e6c175e69aed39763705ff47935204ca03c" id=8bc290c0-8ff8-44cd-8bcd-c94c9ac67231 name=/runtime.v1.ImageService/PullImage Feb 23 17:13:09 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:09.567141419Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:7eed22e436328cdfddd00e0f8bf78e6c175e69aed39763705ff47935204ca03c\"" Feb 23 17:13:09 ip-10-0-136-68 systemd[1]: run-runc-97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923-runc.xZIsZb.mount: Succeeded. 
Feb 23 17:13:10 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:10.118077 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/prometheus-operator-admission-webhook-6854f48657-f548j" podUID=3ff1ba18-ee4b-4151-95d3-ad4742635d6b containerName="prometheus-operator-admission-webhook" containerID="cri-o://f80f57db9253ff43c8cecdea2d1d1becccb3ce809fa997d7d557fc2364d0133e" gracePeriod=1 Feb 23 17:13:10 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:10.118206016Z" level=info msg="Stopping container: f80f57db9253ff43c8cecdea2d1d1becccb3ce809fa997d7d557fc2364d0133e (timeout: 1s)" id=41ab8c68-fb19-4299-a729-ddf8c3e31218 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:13:10 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:13:10.118704 2112 remote_runtime.go:505] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f80f57db9253ff43c8cecdea2d1d1becccb3ce809fa997d7d557fc2364d0133e\": container with ID starting with f80f57db9253ff43c8cecdea2d1d1becccb3ce809fa997d7d557fc2364d0133e not found: ID does not exist" containerID="f80f57db9253ff43c8cecdea2d1d1becccb3ce809fa997d7d557fc2364d0133e" Feb 23 17:13:10 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:10.119056235Z" level=info msg="Stopping pod sandbox: d57b42d0ba3aca44e3beb581dd5ff05ba03a846c20891f67b79ec7e09ad8fb66" id=6950f5ee-515a-457b-a6d0-4aaa03873b96 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:13:10 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:10.119087686Z" level=info msg="Stopped pod sandbox (already stopped): d57b42d0ba3aca44e3beb581dd5ff05ba03a846c20891f67b79ec7e09ad8fb66" id=6950f5ee-515a-457b-a6d0-4aaa03873b96 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:13:10 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:10.119768 2112 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=3ff1ba18-ee4b-4151-95d3-ad4742635d6b 
path="/var/lib/kubelet/pods/3ff1ba18-ee4b-4151-95d3-ad4742635d6b/volumes" Feb 23 17:13:10 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:10.324811 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-c776d6877-hc4dc" event=&{ID:4b453ab9-1ce4-45a1-b69d-c289991008f1 Type:ContainerStarted Data:13573c504bb25b1cde2dc0ff3b4dfa8179a4e57808537dec9fe9e87db117424f} Feb 23 17:13:11 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00303|connmgr|INFO|br-ex<->unix#1125: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:13:11 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:11.201231945Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:7eed22e436328cdfddd00e0f8bf78e6c175e69aed39763705ff47935204ca03c\"" Feb 23 17:13:11 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00304|connmgr|INFO|br-ex<->unix#1128: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:13:11 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00305|connmgr|INFO|br-ex<->unix#1131: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:13:11 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00306|connmgr|INFO|br-ex<->unix#1134: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:13:12 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00307|connmgr|INFO|br-ex<->unix#1137: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:13:12 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00308|connmgr|INFO|br-ex<->unix#1140: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:13:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:18.393356538Z" level=info msg="Pulled image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:7eed22e436328cdfddd00e0f8bf78e6c175e69aed39763705ff47935204ca03c" id=8bc290c0-8ff8-44cd-8bcd-c94c9ac67231 name=/runtime.v1.ImageService/PullImage Feb 23 17:13:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:18.394069013Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:7eed22e436328cdfddd00e0f8bf78e6c175e69aed39763705ff47935204ca03c" 
id=88860b95-42d1-44c5-a931-52de1003c054 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:18.395433812Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:4f8cc3c726e59fa68c9adacbc98bc451091119442ddf5f8d968141a0e54977e2,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:7eed22e436328cdfddd00e0f8bf78e6c175e69aed39763705ff47935204ca03c],Size_:430322930,Uid:&Int64Value{Value:1001,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=88860b95-42d1-44c5-a931-52de1003c054 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:18.396267171Z" level=info msg="Creating container: openshift-ingress/router-default-c776d6877-hc4dc/router" id=03a12f42-9cce-4ec4-949c-90e0f649ff10 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:13:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:18.396347843Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:13:18 ip-10-0-136-68 systemd[1]: Started crio-conmon-cf4309e350b3417824d6ab6e5cdc58f589564a603d6449dc009f1b959274dc73.scope. Feb 23 17:13:18 ip-10-0-136-68 systemd[1]: run-runc-cf4309e350b3417824d6ab6e5cdc58f589564a603d6449dc009f1b959274dc73-runc.0rvFWQ.mount: Succeeded. Feb 23 17:13:18 ip-10-0-136-68 systemd[1]: Started libcontainer container cf4309e350b3417824d6ab6e5cdc58f589564a603d6449dc009f1b959274dc73. 
Feb 23 17:13:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:18.539803539Z" level=info msg="Created container cf4309e350b3417824d6ab6e5cdc58f589564a603d6449dc009f1b959274dc73: openshift-ingress/router-default-c776d6877-hc4dc/router" id=03a12f42-9cce-4ec4-949c-90e0f649ff10 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:13:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:18.540218149Z" level=info msg="Starting container: cf4309e350b3417824d6ab6e5cdc58f589564a603d6449dc009f1b959274dc73" id=46248204-acec-4daf-b322-8f0fd4db3853 name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:13:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:18.559121671Z" level=info msg="Started container" PID=55288 containerID=cf4309e350b3417824d6ab6e5cdc58f589564a603d6449dc009f1b959274dc73 description=openshift-ingress/router-default-c776d6877-hc4dc/router id=46248204-acec-4daf-b322-8f0fd4db3853 name=/runtime.v1.RuntimeService/StartContainer sandboxID=13573c504bb25b1cde2dc0ff3b4dfa8179a4e57808537dec9fe9e87db117424f Feb 23 17:13:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:19.383761 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-c776d6877-hc4dc" event=&{ID:4b453ab9-1ce4-45a1-b69d-c289991008f1 Type:ContainerStarted Data:cf4309e350b3417824d6ab6e5cdc58f589564a603d6449dc009f1b959274dc73} Feb 23 17:13:20 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:20.134536 2112 kubelet.go:2229] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-c776d6877-hc4dc" Feb 23 17:13:20 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:20.137055 2112 kubelet.go:2229] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-c776d6877-hc4dc" Feb 23 17:13:20 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:20.385693 2112 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-c776d6877-hc4dc" Feb 23 17:13:20 ip-10-0-136-68 
kubenswrapper[2112]: I0223 17:13:20.386925 2112 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-c776d6877-hc4dc" Feb 23 17:13:22 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:22.012252 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-monitoring/prometheus-k8s-1] Feb 23 17:13:22 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:22.012522 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-1" podUID=de160b09-a82e-4c1c-855b-4dfb3b3cbd7c containerName="prometheus" containerID="cri-o://ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82" gracePeriod=600 Feb 23 17:13:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:22.012813909Z" level=info msg="Stopping container: ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82 (timeout: 600s)" id=945064e5-5c14-4be7-9108-0c5cd42669bd name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:13:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:22.013199381Z" level=info msg="Stopping container: ec1befd05c73d0a470f84208b4404cce7ffdc0e514af452f83f997d162763de2 (timeout: 600s)" id=15a3d199-d3b3-4a3b-bda8-10214982c4ac name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:13:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:22.013402951Z" level=info msg="Stopping container: 542b507370698f9fd5344b11dd7d8d66b76926529af1de20af0e11394a0afb19 (timeout: 600s)" id=7229c968-7e09-4aa8-a444-270e25a10ffe name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:13:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:22.013404404Z" level=info msg="Stopping container: 6d2828a4d4dddccdf8c1cf9d27dbcc6e2be54b9071831894f861c06273b57998 (timeout: 600s)" id=c3fd01bf-f6cc-4b73-8efa-b42242a7c519 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:13:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:22.013418928Z" level=info msg="Stopping container: 
b52f1634a1d1b8fb5a1f361706dc144fde2d738d90f34b380a6acd872d27b915 (timeout: 600s)" id=0e26c28a-b789-4259-b0dc-5c183322a31e name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:13:22 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:22.012836 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-1" podUID=de160b09-a82e-4c1c-855b-4dfb3b3cbd7c containerName="kube-rbac-proxy-thanos" containerID="cri-o://b52f1634a1d1b8fb5a1f361706dc144fde2d738d90f34b380a6acd872d27b915" gracePeriod=600 Feb 23 17:13:22 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:22.012935 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-1" podUID=de160b09-a82e-4c1c-855b-4dfb3b3cbd7c containerName="kube-rbac-proxy" containerID="cri-o://542b507370698f9fd5344b11dd7d8d66b76926529af1de20af0e11394a0afb19" gracePeriod=600 Feb 23 17:13:22 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:22.012877 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-1" podUID=de160b09-a82e-4c1c-855b-4dfb3b3cbd7c containerName="thanos-sidecar" containerID="cri-o://6d2828a4d4dddccdf8c1cf9d27dbcc6e2be54b9071831894f861c06273b57998" gracePeriod=600 Feb 23 17:13:22 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:22.012894 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-1" podUID=de160b09-a82e-4c1c-855b-4dfb3b3cbd7c containerName="prometheus-proxy" containerID="cri-o://ec1befd05c73d0a470f84208b4404cce7ffdc0e514af452f83f997d162763de2" gracePeriod=600 Feb 23 17:13:22 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:22.012903 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-1" podUID=de160b09-a82e-4c1c-855b-4dfb3b3cbd7c containerName="config-reloader" 
containerID="cri-o://5dd7cb88e4cf279c2edd098539ee513699de5578bc2e80ac3743ecaa4e60ed98" gracePeriod=600 Feb 23 17:13:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:22.013436735Z" level=info msg="Stopping container: 5dd7cb88e4cf279c2edd098539ee513699de5578bc2e80ac3743ecaa4e60ed98 (timeout: 600s)" id=47350049-561f-455e-bd4e-1ebc1732ce03 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:13:22 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:22.041523 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-monitoring/alertmanager-main-1] Feb 23 17:13:22 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:22.042086 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-1" podUID=457a2ca9-5414-414b-8731-42d2430a3275 containerName="config-reloader" containerID="cri-o://c356477d6c838eeac943481c9d122a0d516b24021e5421594692f64779dd9e73" gracePeriod=120 Feb 23 17:13:22 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:22.042119 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-1" podUID=457a2ca9-5414-414b-8731-42d2430a3275 containerName="prom-label-proxy" containerID="cri-o://f3cc58399f58d034a5ea692d08b921fd2849975f11374292d872563df6c87173" gracePeriod=120 Feb 23 17:13:22 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:22.042267 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-1" podUID=457a2ca9-5414-414b-8731-42d2430a3275 containerName="alertmanager" containerID="cri-o://3e983220a0cecf9db7762e49baed79ac51235ba05947896fdf9e817c5a079955" gracePeriod=120 Feb 23 17:13:22 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:22.042357 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-1" podUID=457a2ca9-5414-414b-8731-42d2430a3275 containerName="kube-rbac-proxy" 
containerID="cri-o://9aa60954acc454a8385d2204d9658fbee6e16bd996e0453fc2ed8fcbed420fe9" gracePeriod=120 Feb 23 17:13:22 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:22.042431 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-1" podUID=457a2ca9-5414-414b-8731-42d2430a3275 containerName="kube-rbac-proxy-metric" containerID="cri-o://453dbcdcd062f0a5ca4b764fa2cd6efb52f9f36011e7143ba6d3083c773a1e02" gracePeriod=120 Feb 23 17:13:22 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:22.042461 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-1" podUID=457a2ca9-5414-414b-8731-42d2430a3275 containerName="alertmanager-proxy" containerID="cri-o://b01a99b1da48a49deccbb743ef250c07b23d71886bfefb512f6b56e0ddbd7428" gracePeriod=120 Feb 23 17:13:22 ip-10-0-136-68 conmon[6619]: conmon 5dd7cb88e4cf279c2edd : container 6651 exited with status 2 Feb 23 17:13:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:22.044444366Z" level=info msg="Stopping container: f3cc58399f58d034a5ea692d08b921fd2849975f11374292d872563df6c87173 (timeout: 120s)" id=57aa0726-c035-4ef6-8ebe-afb2091e8920 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:13:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:22.044481357Z" level=info msg="Stopping container: 453dbcdcd062f0a5ca4b764fa2cd6efb52f9f36011e7143ba6d3083c773a1e02 (timeout: 120s)" id=4fd110de-4583-4fd7-b54e-727c15ec4edd name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:13:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:22.044469757Z" level=info msg="Stopping container: b01a99b1da48a49deccbb743ef250c07b23d71886bfefb512f6b56e0ddbd7428 (timeout: 120s)" id=e1740649-a621-4402-b629-01548fd19f43 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:13:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:22.044455587Z" level=info msg="Stopping container: 
3e983220a0cecf9db7762e49baed79ac51235ba05947896fdf9e817c5a079955 (timeout: 120s)" id=e7daa67d-580c-4f55-a58b-cd6c4301d05a name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:13:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:22.044461851Z" level=info msg="Stopping container: c356477d6c838eeac943481c9d122a0d516b24021e5421594692f64779dd9e73 (timeout: 120s)" id=bb72ecd5-053a-4c8f-8183-5db40f0dc142 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:13:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:22.044444569Z" level=info msg="Stopping container: 9aa60954acc454a8385d2204d9658fbee6e16bd996e0453fc2ed8fcbed420fe9 (timeout: 120s)" id=4a178b08-e6e9-487f-a519-48cd2cc43310 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:13:22 ip-10-0-136-68 systemd[1]: crio-5dd7cb88e4cf279c2edd098539ee513699de5578bc2e80ac3743ecaa4e60ed98.scope: Succeeded. Feb 23 17:13:22 ip-10-0-136-68 systemd[1]: crio-5dd7cb88e4cf279c2edd098539ee513699de5578bc2e80ac3743ecaa4e60ed98.scope: Consumed 305ms CPU time Feb 23 17:13:22 ip-10-0-136-68 systemd[1]: crio-conmon-5dd7cb88e4cf279c2edd098539ee513699de5578bc2e80ac3743ecaa4e60ed98.scope: Succeeded. Feb 23 17:13:22 ip-10-0-136-68 systemd[1]: crio-conmon-5dd7cb88e4cf279c2edd098539ee513699de5578bc2e80ac3743ecaa4e60ed98.scope: Consumed 26ms CPU time Feb 23 17:13:22 ip-10-0-136-68 systemd[1]: crio-conmon-6d2828a4d4dddccdf8c1cf9d27dbcc6e2be54b9071831894f861c06273b57998.scope: Succeeded. Feb 23 17:13:22 ip-10-0-136-68 systemd[1]: crio-conmon-6d2828a4d4dddccdf8c1cf9d27dbcc6e2be54b9071831894f861c06273b57998.scope: Consumed 29ms CPU time Feb 23 17:13:22 ip-10-0-136-68 systemd[1]: crio-6d2828a4d4dddccdf8c1cf9d27dbcc6e2be54b9071831894f861c06273b57998.scope: Succeeded. 
Feb 23 17:13:22 ip-10-0-136-68 systemd[1]: crio-6d2828a4d4dddccdf8c1cf9d27dbcc6e2be54b9071831894f861c06273b57998.scope: Consumed 2.870s CPU time Feb 23 17:13:22 ip-10-0-136-68 conmon[6229]: conmon c356477d6c838eeac943 : container 6251 exited with status 2 Feb 23 17:13:22 ip-10-0-136-68 conmon[6803]: conmon ec1befd05c73d0a470f8 : container 6817 exited with status 2 Feb 23 17:13:22 ip-10-0-136-68 conmon[6287]: conmon b01a99b1da48a49deccb : container 6299 exited with status 2 Feb 23 17:13:22 ip-10-0-136-68 systemd[1]: crio-b01a99b1da48a49deccbb743ef250c07b23d71886bfefb512f6b56e0ddbd7428.scope: Succeeded. Feb 23 17:13:22 ip-10-0-136-68 systemd[1]: crio-b01a99b1da48a49deccbb743ef250c07b23d71886bfefb512f6b56e0ddbd7428.scope: Consumed 3.056s CPU time Feb 23 17:13:22 ip-10-0-136-68 systemd[1]: crio-c356477d6c838eeac943481c9d122a0d516b24021e5421594692f64779dd9e73.scope: Succeeded. Feb 23 17:13:22 ip-10-0-136-68 systemd[1]: crio-c356477d6c838eeac943481c9d122a0d516b24021e5421594692f64779dd9e73.scope: Consumed 96ms CPU time Feb 23 17:13:22 ip-10-0-136-68 systemd[1]: crio-ec1befd05c73d0a470f84208b4404cce7ffdc0e514af452f83f997d162763de2.scope: Succeeded. Feb 23 17:13:22 ip-10-0-136-68 systemd[1]: crio-ec1befd05c73d0a470f84208b4404cce7ffdc0e514af452f83f997d162763de2.scope: Consumed 5.349s CPU time Feb 23 17:13:22 ip-10-0-136-68 systemd[1]: crio-conmon-ec1befd05c73d0a470f84208b4404cce7ffdc0e514af452f83f997d162763de2.scope: Succeeded. Feb 23 17:13:22 ip-10-0-136-68 systemd[1]: crio-conmon-ec1befd05c73d0a470f84208b4404cce7ffdc0e514af452f83f997d162763de2.scope: Consumed 24ms CPU time Feb 23 17:13:22 ip-10-0-136-68 systemd[1]: crio-f3cc58399f58d034a5ea692d08b921fd2849975f11374292d872563df6c87173.scope: Succeeded. 
Feb 23 17:13:22 ip-10-0-136-68 systemd[1]: crio-f3cc58399f58d034a5ea692d08b921fd2849975f11374292d872563df6c87173.scope: Consumed 31ms CPU time Feb 23 17:13:22 ip-10-0-136-68 systemd[1]: crio-conmon-9aa60954acc454a8385d2204d9658fbee6e16bd996e0453fc2ed8fcbed420fe9.scope: Succeeded. Feb 23 17:13:22 ip-10-0-136-68 systemd[1]: crio-conmon-9aa60954acc454a8385d2204d9658fbee6e16bd996e0453fc2ed8fcbed420fe9.scope: Consumed 23ms CPU time Feb 23 17:13:22 ip-10-0-136-68 systemd[1]: crio-conmon-c356477d6c838eeac943481c9d122a0d516b24021e5421594692f64779dd9e73.scope: Succeeded. Feb 23 17:13:22 ip-10-0-136-68 systemd[1]: crio-conmon-c356477d6c838eeac943481c9d122a0d516b24021e5421594692f64779dd9e73.scope: Consumed 24ms CPU time Feb 23 17:13:22 ip-10-0-136-68 systemd[1]: crio-conmon-f3cc58399f58d034a5ea692d08b921fd2849975f11374292d872563df6c87173.scope: Succeeded. Feb 23 17:13:22 ip-10-0-136-68 systemd[1]: crio-conmon-f3cc58399f58d034a5ea692d08b921fd2849975f11374292d872563df6c87173.scope: Consumed 28ms CPU time Feb 23 17:13:22 ip-10-0-136-68 systemd[1]: crio-conmon-b01a99b1da48a49deccbb743ef250c07b23d71886bfefb512f6b56e0ddbd7428.scope: Succeeded. Feb 23 17:13:22 ip-10-0-136-68 systemd[1]: crio-conmon-b01a99b1da48a49deccbb743ef250c07b23d71886bfefb512f6b56e0ddbd7428.scope: Consumed 23ms CPU time Feb 23 17:13:22 ip-10-0-136-68 systemd[1]: crio-9aa60954acc454a8385d2204d9658fbee6e16bd996e0453fc2ed8fcbed420fe9.scope: Succeeded. Feb 23 17:13:22 ip-10-0-136-68 systemd[1]: crio-9aa60954acc454a8385d2204d9658fbee6e16bd996e0453fc2ed8fcbed420fe9.scope: Consumed 107ms CPU time Feb 23 17:13:22 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-269b4f81fa6fd43ef1cf272cebf995203502d0b090f8b607a37def8aec50f7c2-merged.mount: Succeeded. 
Feb 23 17:13:22 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-269b4f81fa6fd43ef1cf272cebf995203502d0b090f8b607a37def8aec50f7c2-merged.mount: Consumed 0 CPU time Feb 23 17:13:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:22.246262953Z" level=info msg="Stopped container 6d2828a4d4dddccdf8c1cf9d27dbcc6e2be54b9071831894f861c06273b57998: openshift-monitoring/prometheus-k8s-1/thanos-sidecar" id=c3fd01bf-f6cc-4b73-8efa-b42242a7c519 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:13:22 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-3e50272a2499964a23e0c716e1852ff12768f59f0faa9f104772308cd33c5d24-merged.mount: Succeeded. Feb 23 17:13:22 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-3e50272a2499964a23e0c716e1852ff12768f59f0faa9f104772308cd33c5d24-merged.mount: Consumed 0 CPU time Feb 23 17:13:22 ip-10-0-136-68 systemd[1]: crio-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82.scope: Succeeded. Feb 23 17:13:22 ip-10-0-136-68 systemd[1]: crio-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82.scope: Consumed 3min 8.193s CPU time Feb 23 17:13:22 ip-10-0-136-68 systemd[1]: crio-conmon-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82.scope: Succeeded. Feb 23 17:13:22 ip-10-0-136-68 systemd[1]: crio-conmon-ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82.scope: Consumed 33ms CPU time Feb 23 17:13:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:22.267143624Z" level=info msg="Stopped container b01a99b1da48a49deccbb743ef250c07b23d71886bfefb512f6b56e0ddbd7428: openshift-monitoring/alertmanager-main-1/alertmanager-proxy" id=e1740649-a621-4402-b629-01548fd19f43 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:13:22 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-f974979f3babb0a195d2092516b8fc26b6099a35aed7a419e5087973ac1b5689-merged.mount: Succeeded. 
Feb 23 17:13:22 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-f974979f3babb0a195d2092516b8fc26b6099a35aed7a419e5087973ac1b5689-merged.mount: Consumed 0 CPU time Feb 23 17:13:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:22.280713988Z" level=info msg="Stopped container f3cc58399f58d034a5ea692d08b921fd2849975f11374292d872563df6c87173: openshift-monitoring/alertmanager-main-1/prom-label-proxy" id=57aa0726-c035-4ef6-8ebe-afb2091e8920 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:13:22 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-c37176488ced8af65bd4005cdd7b3bc946c342c0f7b3481ca0f0c434340efab3-merged.mount: Succeeded. Feb 23 17:13:22 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-c37176488ced8af65bd4005cdd7b3bc946c342c0f7b3481ca0f0c434340efab3-merged.mount: Consumed 0 CPU time Feb 23 17:13:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:22.295251304Z" level=info msg="Stopped container 5dd7cb88e4cf279c2edd098539ee513699de5578bc2e80ac3743ecaa4e60ed98: openshift-monitoring/prometheus-k8s-1/config-reloader" id=47350049-561f-455e-bd4e-1ebc1732ce03 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:13:22 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-f128e93099e2806d0a832db36e23c2f317b18fc80a976c651f8283792d0d0815-merged.mount: Succeeded. 
Feb 23 17:13:22 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-f128e93099e2806d0a832db36e23c2f317b18fc80a976c651f8283792d0d0815-merged.mount: Consumed 0 CPU time Feb 23 17:13:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:22.308481722Z" level=info msg="Stopped container c356477d6c838eeac943481c9d122a0d516b24021e5421594692f64779dd9e73: openshift-monitoring/alertmanager-main-1/config-reloader" id=bb72ecd5-053a-4c8f-8183-5db40f0dc142 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:13:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:22.318804101Z" level=info msg="Stopped container 9aa60954acc454a8385d2204d9658fbee6e16bd996e0453fc2ed8fcbed420fe9: openshift-monitoring/alertmanager-main-1/kube-rbac-proxy" id=4a178b08-e6e9-487f-a519-48cd2cc43310 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:13:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:22.343110448Z" level=info msg="Stopped container ec1befd05c73d0a470f84208b4404cce7ffdc0e514af452f83f997d162763de2: openshift-monitoring/prometheus-k8s-1/prometheus-proxy" id=15a3d199-d3b3-4a3b-bda8-10214982c4ac name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:13:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:22.380200681Z" level=info msg="Stopped container ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82: openshift-monitoring/prometheus-k8s-1/prometheus" id=945064e5-5c14-4be7-9108-0c5cd42669bd name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:13:22 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:22.391162 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-1_457a2ca9-5414-414b-8731-42d2430a3275/alertmanager-proxy/0.log" Feb 23 17:13:22 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:22.391502 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-1_457a2ca9-5414-414b-8731-42d2430a3275/config-reloader/0.log" Feb 23 17:13:22 ip-10-0-136-68 
kubenswrapper[2112]: I0223 17:13:22.391860 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-1_457a2ca9-5414-414b-8731-42d2430a3275/alertmanager/0.log" Feb 23 17:13:22 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:22.391899 2112 generic.go:296] "Generic (PLEG): container finished" podID=457a2ca9-5414-414b-8731-42d2430a3275 containerID="f3cc58399f58d034a5ea692d08b921fd2849975f11374292d872563df6c87173" exitCode=0 Feb 23 17:13:22 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:22.391910 2112 generic.go:296] "Generic (PLEG): container finished" podID=457a2ca9-5414-414b-8731-42d2430a3275 containerID="9aa60954acc454a8385d2204d9658fbee6e16bd996e0453fc2ed8fcbed420fe9" exitCode=0 Feb 23 17:13:22 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:22.391918 2112 generic.go:296] "Generic (PLEG): container finished" podID=457a2ca9-5414-414b-8731-42d2430a3275 containerID="b01a99b1da48a49deccbb743ef250c07b23d71886bfefb512f6b56e0ddbd7428" exitCode=2 Feb 23 17:13:22 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:22.391926 2112 generic.go:296] "Generic (PLEG): container finished" podID=457a2ca9-5414-414b-8731-42d2430a3275 containerID="c356477d6c838eeac943481c9d122a0d516b24021e5421594692f64779dd9e73" exitCode=2 Feb 23 17:13:22 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:22.391974 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:457a2ca9-5414-414b-8731-42d2430a3275 Type:ContainerDied Data:f3cc58399f58d034a5ea692d08b921fd2849975f11374292d872563df6c87173} Feb 23 17:13:22 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:22.391996 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:457a2ca9-5414-414b-8731-42d2430a3275 Type:ContainerDied Data:9aa60954acc454a8385d2204d9658fbee6e16bd996e0453fc2ed8fcbed420fe9} Feb 23 17:13:22 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:22.392010 2112 kubelet.go:2157] 
"SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:457a2ca9-5414-414b-8731-42d2430a3275 Type:ContainerDied Data:b01a99b1da48a49deccbb743ef250c07b23d71886bfefb512f6b56e0ddbd7428} Feb 23 17:13:22 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:22.392024 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:457a2ca9-5414-414b-8731-42d2430a3275 Type:ContainerDied Data:c356477d6c838eeac943481c9d122a0d516b24021e5421594692f64779dd9e73} Feb 23 17:13:22 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:22.393341 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-1_de160b09-a82e-4c1c-855b-4dfb3b3cbd7c/prometheus-proxy/0.log" Feb 23 17:13:22 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:22.393831 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-1_de160b09-a82e-4c1c-855b-4dfb3b3cbd7c/config-reloader/0.log" Feb 23 17:13:22 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:22.394286 2112 generic.go:296] "Generic (PLEG): container finished" podID=de160b09-a82e-4c1c-855b-4dfb3b3cbd7c containerID="ec1befd05c73d0a470f84208b4404cce7ffdc0e514af452f83f997d162763de2" exitCode=2 Feb 23 17:13:22 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:22.394304 2112 generic.go:296] "Generic (PLEG): container finished" podID=de160b09-a82e-4c1c-855b-4dfb3b3cbd7c containerID="6d2828a4d4dddccdf8c1cf9d27dbcc6e2be54b9071831894f861c06273b57998" exitCode=0 Feb 23 17:13:22 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:22.394313 2112 generic.go:296] "Generic (PLEG): container finished" podID=de160b09-a82e-4c1c-855b-4dfb3b3cbd7c containerID="5dd7cb88e4cf279c2edd098539ee513699de5578bc2e80ac3743ecaa4e60ed98" exitCode=2 Feb 23 17:13:22 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:22.394315 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-1" 
event=&{ID:de160b09-a82e-4c1c-855b-4dfb3b3cbd7c Type:ContainerDied Data:ec1befd05c73d0a470f84208b4404cce7ffdc0e514af452f83f997d162763de2} Feb 23 17:13:22 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:22.394325 2112 generic.go:296] "Generic (PLEG): container finished" podID=de160b09-a82e-4c1c-855b-4dfb3b3cbd7c containerID="ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82" exitCode=0 Feb 23 17:13:22 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:22.394343 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-1" event=&{ID:de160b09-a82e-4c1c-855b-4dfb3b3cbd7c Type:ContainerDied Data:6d2828a4d4dddccdf8c1cf9d27dbcc6e2be54b9071831894f861c06273b57998} Feb 23 17:13:22 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:22.394358 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-1" event=&{ID:de160b09-a82e-4c1c-855b-4dfb3b3cbd7c Type:ContainerDied Data:5dd7cb88e4cf279c2edd098539ee513699de5578bc2e80ac3743ecaa4e60ed98} Feb 23 17:13:22 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:22.394371 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-1" event=&{ID:de160b09-a82e-4c1c-855b-4dfb3b3cbd7c Type:ContainerDied Data:ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82} Feb 23 17:13:22 ip-10-0-136-68 systemd[1]: crio-3e983220a0cecf9db7762e49baed79ac51235ba05947896fdf9e817c5a079955.scope: Succeeded. Feb 23 17:13:22 ip-10-0-136-68 systemd[1]: crio-3e983220a0cecf9db7762e49baed79ac51235ba05947896fdf9e817c5a079955.scope: Consumed 3.367s CPU time Feb 23 17:13:22 ip-10-0-136-68 systemd[1]: crio-conmon-3e983220a0cecf9db7762e49baed79ac51235ba05947896fdf9e817c5a079955.scope: Succeeded. 
Feb 23 17:13:22 ip-10-0-136-68 systemd[1]: crio-conmon-3e983220a0cecf9db7762e49baed79ac51235ba05947896fdf9e817c5a079955.scope: Consumed 25ms CPU time Feb 23 17:13:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:22.730807697Z" level=info msg="Stopped container 3e983220a0cecf9db7762e49baed79ac51235ba05947896fdf9e817c5a079955: openshift-monitoring/alertmanager-main-1/alertmanager" id=e7daa67d-580c-4f55-a58b-cd6c4301d05a name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:13:23 ip-10-0-136-68 systemd[1]: crio-542b507370698f9fd5344b11dd7d8d66b76926529af1de20af0e11394a0afb19.scope: Succeeded. Feb 23 17:13:23 ip-10-0-136-68 systemd[1]: crio-542b507370698f9fd5344b11dd7d8d66b76926529af1de20af0e11394a0afb19.scope: Consumed 721ms CPU time Feb 23 17:13:23 ip-10-0-136-68 systemd[1]: crio-conmon-542b507370698f9fd5344b11dd7d8d66b76926529af1de20af0e11394a0afb19.scope: Succeeded. Feb 23 17:13:23 ip-10-0-136-68 systemd[1]: crio-conmon-542b507370698f9fd5344b11dd7d8d66b76926529af1de20af0e11394a0afb19.scope: Consumed 26ms CPU time Feb 23 17:13:23 ip-10-0-136-68 systemd[1]: crio-b52f1634a1d1b8fb5a1f361706dc144fde2d738d90f34b380a6acd872d27b915.scope: Succeeded. Feb 23 17:13:23 ip-10-0-136-68 systemd[1]: crio-b52f1634a1d1b8fb5a1f361706dc144fde2d738d90f34b380a6acd872d27b915.scope: Consumed 648ms CPU time Feb 23 17:13:23 ip-10-0-136-68 systemd[1]: crio-conmon-b52f1634a1d1b8fb5a1f361706dc144fde2d738d90f34b380a6acd872d27b915.scope: Succeeded. Feb 23 17:13:23 ip-10-0-136-68 systemd[1]: crio-conmon-b52f1634a1d1b8fb5a1f361706dc144fde2d738d90f34b380a6acd872d27b915.scope: Consumed 24ms CPU time Feb 23 17:13:23 ip-10-0-136-68 systemd[1]: crio-453dbcdcd062f0a5ca4b764fa2cd6efb52f9f36011e7143ba6d3083c773a1e02.scope: Succeeded. 
Feb 23 17:13:23 ip-10-0-136-68 systemd[1]: crio-453dbcdcd062f0a5ca4b764fa2cd6efb52f9f36011e7143ba6d3083c773a1e02.scope: Consumed 663ms CPU time
Feb 23 17:13:23 ip-10-0-136-68 systemd[1]: crio-conmon-453dbcdcd062f0a5ca4b764fa2cd6efb52f9f36011e7143ba6d3083c773a1e02.scope: Succeeded.
Feb 23 17:13:23 ip-10-0-136-68 systemd[1]: crio-conmon-453dbcdcd062f0a5ca4b764fa2cd6efb52f9f36011e7143ba6d3083c773a1e02.scope: Consumed 30ms CPU time
Feb 23 17:13:23 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:23.184449479Z" level=info msg="Stopped container 542b507370698f9fd5344b11dd7d8d66b76926529af1de20af0e11394a0afb19: openshift-monitoring/prometheus-k8s-1/kube-rbac-proxy" id=7229c968-7e09-4aa8-a444-270e25a10ffe name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:13:23 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:23.192553254Z" level=info msg="Stopped container b52f1634a1d1b8fb5a1f361706dc144fde2d738d90f34b380a6acd872d27b915: openshift-monitoring/prometheus-k8s-1/kube-rbac-proxy-thanos" id=0e26c28a-b789-4259-b0dc-5c183322a31e name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:13:23 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:23.192863602Z" level=info msg="Stopping pod sandbox: d1ad2e51d68aca03cf74cbf08b34debc7171c5e2526858978cf073bb895b423f" id=5dfb5819-7d8e-4a6a-b0f5-1b97b17c8125 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:13:23 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:23.193044681Z" level=info msg="Got pod network &{Name:prometheus-k8s-1 Namespace:openshift-monitoring ID:d1ad2e51d68aca03cf74cbf08b34debc7171c5e2526858978cf073bb895b423f UID:de160b09-a82e-4c1c-855b-4dfb3b3cbd7c NetNS:/var/run/netns/a6d016c1-2d14-4c02-896b-9581b100a834 Networks:[{Name:multus-cni-network Ifname:eth0}] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 17:13:23 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:23.193150667Z" level=info msg="Deleting pod openshift-monitoring_prometheus-k8s-1 from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 17:13:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-bc233679c256c9fa45b984df20ef423f87d4ec223dea4a804bd9883e86634550-merged.mount: Succeeded.
Feb 23 17:13:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-bc233679c256c9fa45b984df20ef423f87d4ec223dea4a804bd9883e86634550-merged.mount: Consumed 0 CPU time
Feb 23 17:13:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-fdedb04cd08888759b59d44788fe1e814e26bc88d3bfba56d6125f2ce5fd0bd8-merged.mount: Succeeded.
Feb 23 17:13:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-fdedb04cd08888759b59d44788fe1e814e26bc88d3bfba56d6125f2ce5fd0bd8-merged.mount: Consumed 0 CPU time
Feb 23 17:13:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-14666a7ab11504a304666937f40d51d50a021762790068b86e9b684d5e6ed5ad-merged.mount: Succeeded.
Feb 23 17:13:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-14666a7ab11504a304666937f40d51d50a021762790068b86e9b684d5e6ed5ad-merged.mount: Consumed 0 CPU time
Feb 23 17:13:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-3b02f4704edc6242d07f4f15269e82c3e757b34634352367ee41bd05b723984b-merged.mount: Succeeded.
Feb 23 17:13:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-3b02f4704edc6242d07f4f15269e82c3e757b34634352367ee41bd05b723984b-merged.mount: Consumed 0 CPU time
Feb 23 17:13:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-79de7fbcee973f0a365f39322551601dcec04d865720e3f1666a69164581e4e1-merged.mount: Succeeded.
Feb 23 17:13:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-79de7fbcee973f0a365f39322551601dcec04d865720e3f1666a69164581e4e1-merged.mount: Consumed 0 CPU time
Feb 23 17:13:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-ec8a9b4d630d5f3f01a2e79bae6d77dafcda3b24307adddcd95822efe7f64247-merged.mount: Succeeded.
Feb 23 17:13:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-ec8a9b4d630d5f3f01a2e79bae6d77dafcda3b24307adddcd95822efe7f64247-merged.mount: Consumed 0 CPU time
Feb 23 17:13:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-11210143666bd4403b317ad3fdff39a9e7c377d595f8eb9d298ba6dd7d5c9fc1-merged.mount: Succeeded.
Feb 23 17:13:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-11210143666bd4403b317ad3fdff39a9e7c377d595f8eb9d298ba6dd7d5c9fc1-merged.mount: Consumed 0 CPU time
Feb 23 17:13:23 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:23.246688755Z" level=info msg="Stopped container 453dbcdcd062f0a5ca4b764fa2cd6efb52f9f36011e7143ba6d3083c773a1e02: openshift-monitoring/alertmanager-main-1/kube-rbac-proxy-metric" id=4fd110de-4583-4fd7-b54e-727c15ec4edd name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:13:23 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:23.247107343Z" level=info msg="Stopping pod sandbox: 2dd14dc79891cf23e7c39348cd7da2169eeea951880734c51df6372fcee24608" id=b0f4d117-d2f2-4ef4-be89-9507e6f135de name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:13:23 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:23.247331391Z" level=info msg="Got pod network &{Name:alertmanager-main-1 Namespace:openshift-monitoring ID:2dd14dc79891cf23e7c39348cd7da2169eeea951880734c51df6372fcee24608 UID:457a2ca9-5414-414b-8731-42d2430a3275 NetNS:/var/run/netns/13a1e099-392b-4597-85ad-d1b6663e05ea Networks:[{Name:multus-cni-network Ifname:eth0}] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 17:13:23 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:23.247481777Z" level=info msg="Deleting pod openshift-monitoring_alertmanager-main-1 from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 17:13:23 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00309|bridge|INFO|bridge br-int: deleted interface d1ad2e51d68aca0 on port 19
Feb 23 17:13:23 ip-10-0-136-68 kernel: device d1ad2e51d68aca0 left promiscuous mode
Feb 23 17:13:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:23.400639 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-1_457a2ca9-5414-414b-8731-42d2430a3275/alertmanager-proxy/0.log"
Feb 23 17:13:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:23.401309 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-1_457a2ca9-5414-414b-8731-42d2430a3275/config-reloader/0.log"
Feb 23 17:13:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:23.401650 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-1_457a2ca9-5414-414b-8731-42d2430a3275/alertmanager/0.log"
Feb 23 17:13:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:23.401728 2112 generic.go:296] "Generic (PLEG): container finished" podID=457a2ca9-5414-414b-8731-42d2430a3275 containerID="3e983220a0cecf9db7762e49baed79ac51235ba05947896fdf9e817c5a079955" exitCode=0
Feb 23 17:13:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:23.401745 2112 generic.go:296] "Generic (PLEG): container finished" podID=457a2ca9-5414-414b-8731-42d2430a3275 containerID="453dbcdcd062f0a5ca4b764fa2cd6efb52f9f36011e7143ba6d3083c773a1e02" exitCode=0
Feb 23 17:13:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:23.401803 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:457a2ca9-5414-414b-8731-42d2430a3275 Type:ContainerDied Data:3e983220a0cecf9db7762e49baed79ac51235ba05947896fdf9e817c5a079955}
Feb 23 17:13:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:23.401829 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:457a2ca9-5414-414b-8731-42d2430a3275 Type:ContainerDied Data:453dbcdcd062f0a5ca4b764fa2cd6efb52f9f36011e7143ba6d3083c773a1e02}
Feb 23 17:13:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:23.401849 2112 scope.go:115] "RemoveContainer" containerID="5326dfbd58af8cae61b1d5891c2250d8848c9526679bdc562b03ccc7a2ef3f98"
Feb 23 17:13:23 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:23.402756259Z" level=info msg="Removing container: 5326dfbd58af8cae61b1d5891c2250d8848c9526679bdc562b03ccc7a2ef3f98" id=90c8f7de-58b8-4f74-974d-a123f695a187 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:13:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:23.403598 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-1_de160b09-a82e-4c1c-855b-4dfb3b3cbd7c/prometheus-proxy/0.log"
Feb 23 17:13:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:23.404246 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-1_de160b09-a82e-4c1c-855b-4dfb3b3cbd7c/config-reloader/0.log"
Feb 23 17:13:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:23.405022 2112 generic.go:296] "Generic (PLEG): container finished" podID=de160b09-a82e-4c1c-855b-4dfb3b3cbd7c containerID="b52f1634a1d1b8fb5a1f361706dc144fde2d738d90f34b380a6acd872d27b915" exitCode=0
Feb 23 17:13:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:23.405042 2112 generic.go:296] "Generic (PLEG): container finished" podID=de160b09-a82e-4c1c-855b-4dfb3b3cbd7c containerID="542b507370698f9fd5344b11dd7d8d66b76926529af1de20af0e11394a0afb19" exitCode=0
Feb 23 17:13:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:23.405194 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-1" event=&{ID:de160b09-a82e-4c1c-855b-4dfb3b3cbd7c Type:ContainerDied Data:b52f1634a1d1b8fb5a1f361706dc144fde2d738d90f34b380a6acd872d27b915}
Feb 23 17:13:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:23.405220 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-1" event=&{ID:de160b09-a82e-4c1c-855b-4dfb3b3cbd7c Type:ContainerDied Data:542b507370698f9fd5344b11dd7d8d66b76926529af1de20af0e11394a0afb19}
Feb 23 17:13:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-993612f42edacc79d4ff44a12bc2b496ef66454d5f00f6c389854e6ce14e5da5-merged.mount: Succeeded.
Feb 23 17:13:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-993612f42edacc79d4ff44a12bc2b496ef66454d5f00f6c389854e6ce14e5da5-merged.mount: Consumed 0 CPU time
Feb 23 17:13:23 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:23.481898585Z" level=info msg="Removed container 5326dfbd58af8cae61b1d5891c2250d8848c9526679bdc562b03ccc7a2ef3f98: openshift-monitoring/alertmanager-main-1/alertmanager" id=90c8f7de-58b8-4f74-974d-a123f695a187 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:13:23 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00310|bridge|INFO|bridge br-int: deleted interface 2dd14dc79891cf2 on port 18
Feb 23 17:13:23 ip-10-0-136-68 kernel: device 2dd14dc79891cf2 left promiscuous mode
Feb 23 17:13:23 ip-10-0-136-68 crio[2062]: 2023-02-23T17:13:23Z [verbose] Del: openshift-monitoring:prometheus-k8s-1:de160b09-a82e-4c1c-855b-4dfb3b3cbd7c:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","dns":{},"ipam":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxage":5,"logfile-maxbackups":5,"logfile-maxsize":100,"name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay"}
Feb 23 17:13:23 ip-10-0-136-68 crio[2062]: I0223 17:13:23.360811 55831 ovs.go:90] Maximum command line arguments set to: 191102
Feb 23 17:13:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-57b0a5e8955761ee4bbc568f793cef2ff18a5b0bbd82f005ee0e1cb67648d1e8-merged.mount: Succeeded.
Feb 23 17:13:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-57b0a5e8955761ee4bbc568f793cef2ff18a5b0bbd82f005ee0e1cb67648d1e8-merged.mount: Consumed 0 CPU time
Feb 23 17:13:23 ip-10-0-136-68 systemd[1]: run-utsns-a6d016c1\x2d2d14\x2d4c02\x2d896b\x2d9581b100a834.mount: Succeeded.
Feb 23 17:13:23 ip-10-0-136-68 systemd[1]: run-utsns-a6d016c1\x2d2d14\x2d4c02\x2d896b\x2d9581b100a834.mount: Consumed 0 CPU time
Feb 23 17:13:23 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:23.863833390Z" level=info msg="Stopped pod sandbox: d1ad2e51d68aca03cf74cbf08b34debc7171c5e2526858978cf073bb895b423f" id=5dfb5819-7d8e-4a6a-b0f5-1b97b17c8125 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:13:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:23.871594 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-1_de160b09-a82e-4c1c-855b-4dfb3b3cbd7c/prometheus-proxy/0.log"
Feb 23 17:13:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:23.872297 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-1_de160b09-a82e-4c1c-855b-4dfb3b3cbd7c/config-reloader/0.log"
Feb 23 17:13:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:23.991478 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-configmap-serving-certs-ca-bundle\") pod \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") "
Feb 23 17:13:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:23.991528 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-secret-metrics-client-certs\") pod \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") "
Feb 23 17:13:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:23.991555 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-tls-assets\") pod \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") "
Feb 23 17:13:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:23.991577 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-config\") pod \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") "
Feb 23 17:13:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:23.991605 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-metrics-client-ca\") pod \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") "
Feb 23 17:13:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:23.991631 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-prometheus-trusted-ca-bundle\") pod \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") "
Feb 23 17:13:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:23.991674 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-prometheus-k8s-db\") pod \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") "
Feb 23 17:13:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:23.991959 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-proxy\" (UniqueName: \"kubernetes.io/secret/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-secret-prometheus-k8s-proxy\") pod \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") "
Feb 23 17:13:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:23.991998 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-configmap-metrics-client-ca\") pod \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") "
Feb 23 17:13:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:23.992027 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-web-config\") pod \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") "
Feb 23 17:13:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:23.992058 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-secret-kube-rbac-proxy\") pod \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") "
Feb 23 17:13:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:23.992092 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-config-out\") pod \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") "
Feb 23 17:13:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:23.992123 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") "
Feb 23 17:13:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:23.992151 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-secret-prometheus-k8s-tls\") pod \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") "
Feb 23 17:13:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:23.992182 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-kube-etcd-client-certs\" (UniqueName: \"kubernetes.io/secret/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-secret-kube-etcd-client-certs\") pod \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") "
Feb 23 17:13:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:23.992212 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-secret-grpc-tls\") pod \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") "
Feb 23 17:13:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:23.992242 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5r25\" (UniqueName: \"kubernetes.io/projected/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-kube-api-access-s5r25\") pod \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") "
Feb 23 17:13:23 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:13:23.992230 2112 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c/volumes/kubernetes.io~configmap/prometheus-trusted-ca-bundle: clearQuota called, but quotas disabled
Feb 23 17:13:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:23.992274 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-configmap-kubelet-serving-ca-bundle\") pod \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") "
Feb 23 17:13:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:23.992302 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-prometheus-k8s-rulefiles-0\") pod \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\" (UID: \"de160b09-a82e-4c1c-855b-4dfb3b3cbd7c\") "
Feb 23 17:13:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:23.992549 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-prometheus-trusted-ca-bundle" (OuterVolumeSpecName: "prometheus-trusted-ca-bundle") pod "de160b09-a82e-4c1c-855b-4dfb3b3cbd7c" (UID: "de160b09-a82e-4c1c-855b-4dfb3b3cbd7c"). InnerVolumeSpecName "prometheus-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 17:13:23 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:13:23.992550 2112 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c/volumes/kubernetes.io~configmap/prometheus-k8s-rulefiles-0: clearQuota called, but quotas disabled
Feb 23 17:13:23 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:13:23.993340 2112 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c/volumes/kubernetes.io~configmap/metrics-client-ca: clearQuota called, but quotas disabled
Feb 23 17:13:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:23.993825 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-metrics-client-ca" (OuterVolumeSpecName: "metrics-client-ca") pod "de160b09-a82e-4c1c-855b-4dfb3b3cbd7c" (UID: "de160b09-a82e-4c1c-855b-4dfb3b3cbd7c"). InnerVolumeSpecName "metrics-client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 17:13:23 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:13:23.994908 2112 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c/volumes/kubernetes.io~empty-dir/prometheus-k8s-db: clearQuota called, but quotas disabled
Feb 23 17:13:23 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:13:23.996306 2112 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c/volumes/kubernetes.io~configmap/configmap-serving-certs-ca-bundle: clearQuota called, but quotas disabled
Feb 23 17:13:23 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:13:23.997364 2112 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c/volumes/kubernetes.io~configmap/configmap-kubelet-serving-ca-bundle: clearQuota called, but quotas disabled
Feb 23 17:13:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:23.997492 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-configmap-serving-certs-ca-bundle" (OuterVolumeSpecName: "configmap-serving-certs-ca-bundle") pod "de160b09-a82e-4c1c-855b-4dfb3b3cbd7c" (UID: "de160b09-a82e-4c1c-855b-4dfb3b3cbd7c"). InnerVolumeSpecName "configmap-serving-certs-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 17:13:23 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:13:23.998273 2112 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c/volumes/kubernetes.io~empty-dir/config-out: clearQuota called, but quotas disabled
Feb 23 17:13:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:23.998917 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-config-out" (OuterVolumeSpecName: "config-out") pod "de160b09-a82e-4c1c-855b-4dfb3b3cbd7c" (UID: "de160b09-a82e-4c1c-855b-4dfb3b3cbd7c"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 17:13:23 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:13:23.998969 2112 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c/volumes/kubernetes.io~configmap/configmap-metrics-client-ca: clearQuota called, but quotas disabled
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:23.999915 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-secret-metrics-client-certs" (OuterVolumeSpecName: "secret-metrics-client-certs") pod "de160b09-a82e-4c1c-855b-4dfb3b3cbd7c" (UID: "de160b09-a82e-4c1c-855b-4dfb3b3cbd7c"). InnerVolumeSpecName "secret-metrics-client-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:23.999965 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-configmap-kubelet-serving-ca-bundle" (OuterVolumeSpecName: "configmap-kubelet-serving-ca-bundle") pod "de160b09-a82e-4c1c-855b-4dfb3b3cbd7c" (UID: "de160b09-a82e-4c1c-855b-4dfb3b3cbd7c"). InnerVolumeSpecName "configmap-kubelet-serving-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.001054 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-configmap-metrics-client-ca" (OuterVolumeSpecName: "configmap-metrics-client-ca") pod "de160b09-a82e-4c1c-855b-4dfb3b3cbd7c" (UID: "de160b09-a82e-4c1c-855b-4dfb3b3cbd7c"). InnerVolumeSpecName "configmap-metrics-client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.001265 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-secret-kube-etcd-client-certs" (OuterVolumeSpecName: "secret-kube-etcd-client-certs") pod "de160b09-a82e-4c1c-855b-4dfb3b3cbd7c" (UID: "de160b09-a82e-4c1c-855b-4dfb3b3cbd7c"). InnerVolumeSpecName "secret-kube-etcd-client-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.002867 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-prometheus-k8s-rulefiles-0" (OuterVolumeSpecName: "prometheus-k8s-rulefiles-0") pod "de160b09-a82e-4c1c-855b-4dfb3b3cbd7c" (UID: "de160b09-a82e-4c1c-855b-4dfb3b3cbd7c"). InnerVolumeSpecName "prometheus-k8s-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.004760 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-secret-prometheus-k8s-proxy" (OuterVolumeSpecName: "secret-prometheus-k8s-proxy") pod "de160b09-a82e-4c1c-855b-4dfb3b3cbd7c" (UID: "de160b09-a82e-4c1c-855b-4dfb3b3cbd7c"). InnerVolumeSpecName "secret-prometheus-k8s-proxy". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.027175 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-prometheus-k8s-db" (OuterVolumeSpecName: "prometheus-k8s-db") pod "de160b09-a82e-4c1c-855b-4dfb3b3cbd7c" (UID: "de160b09-a82e-4c1c-855b-4dfb3b3cbd7c"). InnerVolumeSpecName "prometheus-k8s-db". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.030397 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-config" (OuterVolumeSpecName: "config") pod "de160b09-a82e-4c1c-855b-4dfb3b3cbd7c" (UID: "de160b09-a82e-4c1c-855b-4dfb3b3cbd7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.031309 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-secret-prometheus-k8s-thanos-sidecar-tls" (OuterVolumeSpecName: "secret-prometheus-k8s-thanos-sidecar-tls") pod "de160b09-a82e-4c1c-855b-4dfb3b3cbd7c" (UID: "de160b09-a82e-4c1c-855b-4dfb3b3cbd7c"). InnerVolumeSpecName "secret-prometheus-k8s-thanos-sidecar-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.033122 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "de160b09-a82e-4c1c-855b-4dfb3b3cbd7c" (UID: "de160b09-a82e-4c1c-855b-4dfb3b3cbd7c"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.037555 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-secret-grpc-tls" (OuterVolumeSpecName: "secret-grpc-tls") pod "de160b09-a82e-4c1c-855b-4dfb3b3cbd7c" (UID: "de160b09-a82e-4c1c-855b-4dfb3b3cbd7c"). InnerVolumeSpecName "secret-grpc-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.038967 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-secret-prometheus-k8s-tls" (OuterVolumeSpecName: "secret-prometheus-k8s-tls") pod "de160b09-a82e-4c1c-855b-4dfb3b3cbd7c" (UID: "de160b09-a82e-4c1c-855b-4dfb3b3cbd7c"). InnerVolumeSpecName "secret-prometheus-k8s-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.040508 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-kube-api-access-s5r25" (OuterVolumeSpecName: "kube-api-access-s5r25") pod "de160b09-a82e-4c1c-855b-4dfb3b3cbd7c" (UID: "de160b09-a82e-4c1c-855b-4dfb3b3cbd7c"). InnerVolumeSpecName "kube-api-access-s5r25". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.051202 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-web-config" (OuterVolumeSpecName: "web-config") pod "de160b09-a82e-4c1c-855b-4dfb3b3cbd7c" (UID: "de160b09-a82e-4c1c-855b-4dfb3b3cbd7c"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.056181 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-secret-kube-rbac-proxy" (OuterVolumeSpecName: "secret-kube-rbac-proxy") pod "de160b09-a82e-4c1c-855b-4dfb3b3cbd7c" (UID: "de160b09-a82e-4c1c-855b-4dfb3b3cbd7c"). InnerVolumeSpecName "secret-kube-rbac-proxy". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.092614 2112 reconciler.go:399] "Volume detached for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-configmap-metrics-client-ca\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.092639 2112 reconciler.go:399] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-web-config\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.092654 2112 reconciler.go:399] "Volume detached for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-secret-kube-rbac-proxy\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.092692 2112 reconciler.go:399] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-config-out\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.092710 2112 reconciler.go:399] "Volume detached for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-secret-prometheus-k8s-thanos-sidecar-tls\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.092727 2112 reconciler.go:399] "Volume detached for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-secret-prometheus-k8s-tls\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.092742 2112 reconciler.go:399] "Volume detached for volume \"secret-kube-etcd-client-certs\" (UniqueName: \"kubernetes.io/secret/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-secret-kube-etcd-client-certs\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.092756 2112 reconciler.go:399] "Volume detached for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-secret-grpc-tls\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.092775 2112 reconciler.go:399] "Volume detached for volume \"kube-api-access-s5r25\" (UniqueName: \"kubernetes.io/projected/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-kube-api-access-s5r25\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.092793 2112 reconciler.go:399] "Volume detached for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-configmap-kubelet-serving-ca-bundle\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.092811 2112 reconciler.go:399] "Volume detached for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-prometheus-k8s-rulefiles-0\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.092828 2112 reconciler.go:399] "Volume detached for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-configmap-serving-certs-ca-bundle\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.092844 2112 reconciler.go:399] "Volume detached for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-secret-metrics-client-certs\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.092860 2112 reconciler.go:399] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-tls-assets\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.092878 2112 reconciler.go:399] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-config\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.092892 2112 reconciler.go:399] "Volume detached for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-metrics-client-ca\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.092908 2112 reconciler.go:399] "Volume detached for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-prometheus-trusted-ca-bundle\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.092925 2112 reconciler.go:399] "Volume detached for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-prometheus-k8s-db\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.092943 2112 reconciler.go:399] "Volume detached for volume \"secret-prometheus-k8s-proxy\" (UniqueName: \"kubernetes.io/secret/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c-secret-prometheus-k8s-proxy\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:13:24 ip-10-0-136-68 systemd[1]: Removed slice libcontainer container kubepods-burstable-podde160b09_a82e_4c1c_855b_4dfb3b3cbd7c.slice.
Feb 23 17:13:24 ip-10-0-136-68 systemd[1]: kubepods-burstable-podde160b09_a82e_4c1c_855b_4dfb3b3cbd7c.slice: Consumed 3min 18.312s CPU time
Feb 23 17:13:24 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-de160b09\x2da82e\x2d4c1c\x2d855b\x2d4dfb3b3cbd7c-volume\x2dsubpaths-web\x2dconfig-prometheus-5.mount: Succeeded.
Feb 23 17:13:24 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-de160b09\x2da82e\x2d4c1c\x2d855b\x2d4dfb3b3cbd7c-volume\x2dsubpaths-web\x2dconfig-prometheus-5.mount: Consumed 0 CPU time
Feb 23 17:13:24 ip-10-0-136-68 systemd[1]: run-netns-a6d016c1\x2d2d14\x2d4c02\x2d896b\x2d9581b100a834.mount: Succeeded.
Feb 23 17:13:24 ip-10-0-136-68 systemd[1]: run-netns-a6d016c1\x2d2d14\x2d4c02\x2d896b\x2d9581b100a834.mount: Consumed 0 CPU time
Feb 23 17:13:24 ip-10-0-136-68 systemd[1]: run-ipcns-a6d016c1\x2d2d14\x2d4c02\x2d896b\x2d9581b100a834.mount: Succeeded.
Feb 23 17:13:24 ip-10-0-136-68 systemd[1]: run-ipcns-a6d016c1\x2d2d14\x2d4c02\x2d896b\x2d9581b100a834.mount: Consumed 0 CPU time Feb 23 17:13:24 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-d1ad2e51d68aca03cf74cbf08b34debc7171c5e2526858978cf073bb895b423f-userdata-shm.mount: Succeeded. Feb 23 17:13:24 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-d1ad2e51d68aca03cf74cbf08b34debc7171c5e2526858978cf073bb895b423f-userdata-shm.mount: Consumed 0 CPU time Feb 23 17:13:24 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-de160b09\x2da82e\x2d4c1c\x2d855b\x2d4dfb3b3cbd7c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ds5r25.mount: Succeeded. Feb 23 17:13:24 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-de160b09\x2da82e\x2d4c1c\x2d855b\x2d4dfb3b3cbd7c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ds5r25.mount: Consumed 0 CPU time Feb 23 17:13:24 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-de160b09\x2da82e\x2d4c1c\x2d855b\x2d4dfb3b3cbd7c-volumes-kubernetes.io\x7esecret-secret\x2dkube\x2detcd\x2dclient\x2dcerts.mount: Succeeded. Feb 23 17:13:24 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-de160b09\x2da82e\x2d4c1c\x2d855b\x2d4dfb3b3cbd7c-volumes-kubernetes.io\x7esecret-secret\x2dkube\x2detcd\x2dclient\x2dcerts.mount: Consumed 0 CPU time Feb 23 17:13:24 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-de160b09\x2da82e\x2d4c1c\x2d855b\x2d4dfb3b3cbd7c-volumes-kubernetes.io\x7esecret-secret\x2dprometheus\x2dk8s\x2dthanos\x2dsidecar\x2dtls.mount: Succeeded. Feb 23 17:13:24 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-de160b09\x2da82e\x2d4c1c\x2d855b\x2d4dfb3b3cbd7c-volumes-kubernetes.io\x7esecret-secret\x2dprometheus\x2dk8s\x2dthanos\x2dsidecar\x2dtls.mount: Consumed 0 CPU time Feb 23 17:13:24 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-de160b09\x2da82e\x2d4c1c\x2d855b\x2d4dfb3b3cbd7c-volumes-kubernetes.io\x7esecret-secret\x2dmetrics\x2dclient\x2dcerts.mount: Succeeded. 
Feb 23 17:13:24 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-de160b09\x2da82e\x2d4c1c\x2d855b\x2d4dfb3b3cbd7c-volumes-kubernetes.io\x7esecret-secret\x2dmetrics\x2dclient\x2dcerts.mount: Consumed 0 CPU time Feb 23 17:13:24 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-de160b09\x2da82e\x2d4c1c\x2d855b\x2d4dfb3b3cbd7c-volumes-kubernetes.io\x7esecret-config.mount: Succeeded. Feb 23 17:13:24 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-de160b09\x2da82e\x2d4c1c\x2d855b\x2d4dfb3b3cbd7c-volumes-kubernetes.io\x7esecret-config.mount: Consumed 0 CPU time Feb 23 17:13:24 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-de160b09\x2da82e\x2d4c1c\x2d855b\x2d4dfb3b3cbd7c-volumes-kubernetes.io\x7eprojected-tls\x2dassets.mount: Succeeded. Feb 23 17:13:24 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-de160b09\x2da82e\x2d4c1c\x2d855b\x2d4dfb3b3cbd7c-volumes-kubernetes.io\x7eprojected-tls\x2dassets.mount: Consumed 0 CPU time Feb 23 17:13:24 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-de160b09\x2da82e\x2d4c1c\x2d855b\x2d4dfb3b3cbd7c-volumes-kubernetes.io\x7esecret-secret\x2dprometheus\x2dk8s\x2dtls.mount: Succeeded. Feb 23 17:13:24 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-de160b09\x2da82e\x2d4c1c\x2d855b\x2d4dfb3b3cbd7c-volumes-kubernetes.io\x7esecret-secret\x2dprometheus\x2dk8s\x2dtls.mount: Consumed 0 CPU time Feb 23 17:13:24 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-de160b09\x2da82e\x2d4c1c\x2d855b\x2d4dfb3b3cbd7c-volumes-kubernetes.io\x7esecret-secret\x2dgrpc\x2dtls.mount: Succeeded. Feb 23 17:13:24 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-de160b09\x2da82e\x2d4c1c\x2d855b\x2d4dfb3b3cbd7c-volumes-kubernetes.io\x7esecret-secret\x2dgrpc\x2dtls.mount: Consumed 0 CPU time Feb 23 17:13:24 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-de160b09\x2da82e\x2d4c1c\x2d855b\x2d4dfb3b3cbd7c-volumes-kubernetes.io\x7esecret-secret\x2dkube\x2drbac\x2dproxy.mount: Succeeded. 
Feb 23 17:13:24 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-de160b09\x2da82e\x2d4c1c\x2d855b\x2d4dfb3b3cbd7c-volumes-kubernetes.io\x7esecret-secret\x2dkube\x2drbac\x2dproxy.mount: Consumed 0 CPU time Feb 23 17:13:24 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-de160b09\x2da82e\x2d4c1c\x2d855b\x2d4dfb3b3cbd7c-volumes-kubernetes.io\x7esecret-secret\x2dprometheus\x2dk8s\x2dproxy.mount: Succeeded. Feb 23 17:13:24 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-de160b09\x2da82e\x2d4c1c\x2d855b\x2d4dfb3b3cbd7c-volumes-kubernetes.io\x7esecret-secret\x2dprometheus\x2dk8s\x2dproxy.mount: Consumed 0 CPU time Feb 23 17:13:24 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-de160b09\x2da82e\x2d4c1c\x2d855b\x2d4dfb3b3cbd7c-volumes-kubernetes.io\x7esecret-web\x2dconfig.mount: Succeeded. Feb 23 17:13:24 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-de160b09\x2da82e\x2d4c1c\x2d855b\x2d4dfb3b3cbd7c-volumes-kubernetes.io\x7esecret-web\x2dconfig.mount: Consumed 0 CPU time Feb 23 17:13:24 ip-10-0-136-68 crio[2062]: 2023-02-23T17:13:23Z [verbose] Del: openshift-monitoring:alertmanager-main-1:457a2ca9-5414-414b-8731-42d2430a3275:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","dns":{},"ipam":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxage":5,"logfile-maxbackups":5,"logfile-maxsize":100,"name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay"} Feb 23 17:13:24 ip-10-0-136-68 crio[2062]: I0223 17:13:23.399585 55844 ovs.go:90] Maximum command line arguments set to: 191102 Feb 23 17:13:24 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-794ab591ed722509cd1bb14af54479cdf18c9861e65e303c0102283c3df0ca41-merged.mount: Succeeded. Feb 23 17:13:24 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-794ab591ed722509cd1bb14af54479cdf18c9861e65e303c0102283c3df0ca41-merged.mount: Consumed 0 CPU time Feb 23 17:13:24 ip-10-0-136-68 systemd[1]: run-utsns-13a1e099\x2d392b\x2d4597\x2d85ad\x2dd1b6663e05ea.mount: Succeeded. 
Feb 23 17:13:24 ip-10-0-136-68 systemd[1]: run-utsns-13a1e099\x2d392b\x2d4597\x2d85ad\x2dd1b6663e05ea.mount: Consumed 0 CPU time Feb 23 17:13:24 ip-10-0-136-68 systemd[1]: run-ipcns-13a1e099\x2d392b\x2d4597\x2d85ad\x2dd1b6663e05ea.mount: Succeeded. Feb 23 17:13:24 ip-10-0-136-68 systemd[1]: run-ipcns-13a1e099\x2d392b\x2d4597\x2d85ad\x2dd1b6663e05ea.mount: Consumed 0 CPU time Feb 23 17:13:24 ip-10-0-136-68 systemd[1]: run-netns-13a1e099\x2d392b\x2d4597\x2d85ad\x2dd1b6663e05ea.mount: Succeeded. Feb 23 17:13:24 ip-10-0-136-68 systemd[1]: run-netns-13a1e099\x2d392b\x2d4597\x2d85ad\x2dd1b6663e05ea.mount: Consumed 0 CPU time Feb 23 17:13:24 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:24.370896128Z" level=info msg="Stopped pod sandbox: 2dd14dc79891cf23e7c39348cd7da2169eeea951880734c51df6372fcee24608" id=b0f4d117-d2f2-4ef4-be89-9507e6f135de name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.377906 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-1_457a2ca9-5414-414b-8731-42d2430a3275/alertmanager-proxy/0.log" Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.378214 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-1_457a2ca9-5414-414b-8731-42d2430a3275/config-reloader/0.log" Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.394438 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/457a2ca9-5414-414b-8731-42d2430a3275-tls-assets\") pod \"457a2ca9-5414-414b-8731-42d2430a3275\" (UID: \"457a2ca9-5414-414b-8731-42d2430a3275\") " Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.394466 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: 
\"kubernetes.io/secret/457a2ca9-5414-414b-8731-42d2430a3275-secret-alertmanager-main-tls\") pod \"457a2ca9-5414-414b-8731-42d2430a3275\" (UID: \"457a2ca9-5414-414b-8731-42d2430a3275\") " Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.394484 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/457a2ca9-5414-414b-8731-42d2430a3275-config-volume\") pod \"457a2ca9-5414-414b-8731-42d2430a3275\" (UID: \"457a2ca9-5414-414b-8731-42d2430a3275\") " Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.394505 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/457a2ca9-5414-414b-8731-42d2430a3275-config-out\") pod \"457a2ca9-5414-414b-8731-42d2430a3275\" (UID: \"457a2ca9-5414-414b-8731-42d2430a3275\") " Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.394526 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/457a2ca9-5414-414b-8731-42d2430a3275-metrics-client-ca\") pod \"457a2ca9-5414-414b-8731-42d2430a3275\" (UID: \"457a2ca9-5414-414b-8731-42d2430a3275\") " Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.394545 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/457a2ca9-5414-414b-8731-42d2430a3275-web-config\") pod \"457a2ca9-5414-414b-8731-42d2430a3275\" (UID: \"457a2ca9-5414-414b-8731-42d2430a3275\") " Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.394568 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-main-proxy\" (UniqueName: \"kubernetes.io/secret/457a2ca9-5414-414b-8731-42d2430a3275-secret-alertmanager-main-proxy\") pod \"457a2ca9-5414-414b-8731-42d2430a3275\" (UID: \"457a2ca9-5414-414b-8731-42d2430a3275\") " Feb 23 
17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.394593 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/457a2ca9-5414-414b-8731-42d2430a3275-secret-alertmanager-kube-rbac-proxy\") pod \"457a2ca9-5414-414b-8731-42d2430a3275\" (UID: \"457a2ca9-5414-414b-8731-42d2430a3275\") " Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.394609 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcr8w\" (UniqueName: \"kubernetes.io/projected/457a2ca9-5414-414b-8731-42d2430a3275-kube-api-access-mcr8w\") pod \"457a2ca9-5414-414b-8731-42d2430a3275\" (UID: \"457a2ca9-5414-414b-8731-42d2430a3275\") " Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.394628 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/457a2ca9-5414-414b-8731-42d2430a3275-secret-alertmanager-kube-rbac-proxy-metric\") pod \"457a2ca9-5414-414b-8731-42d2430a3275\" (UID: \"457a2ca9-5414-414b-8731-42d2430a3275\") " Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.394644 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/457a2ca9-5414-414b-8731-42d2430a3275-alertmanager-trusted-ca-bundle\") pod \"457a2ca9-5414-414b-8731-42d2430a3275\" (UID: \"457a2ca9-5414-414b-8731-42d2430a3275\") " Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.394696 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/457a2ca9-5414-414b-8731-42d2430a3275-alertmanager-main-db\") pod \"457a2ca9-5414-414b-8731-42d2430a3275\" (UID: \"457a2ca9-5414-414b-8731-42d2430a3275\") " Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: W0223 
17:13:24.394900 2112 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/457a2ca9-5414-414b-8731-42d2430a3275/volumes/kubernetes.io~empty-dir/alertmanager-main-db: clearQuota called, but quotas disabled Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.395060 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/457a2ca9-5414-414b-8731-42d2430a3275-alertmanager-main-db" (OuterVolumeSpecName: "alertmanager-main-db") pod "457a2ca9-5414-414b-8731-42d2430a3275" (UID: "457a2ca9-5414-414b-8731-42d2430a3275"). InnerVolumeSpecName "alertmanager-main-db". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:13:24.395267 2112 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/457a2ca9-5414-414b-8731-42d2430a3275/volumes/kubernetes.io~empty-dir/config-out: clearQuota called, but quotas disabled Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.395367 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/457a2ca9-5414-414b-8731-42d2430a3275-config-out" (OuterVolumeSpecName: "config-out") pod "457a2ca9-5414-414b-8731-42d2430a3275" (UID: "457a2ca9-5414-414b-8731-42d2430a3275"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:13:24.395497 2112 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/457a2ca9-5414-414b-8731-42d2430a3275/volumes/kubernetes.io~configmap/metrics-client-ca: clearQuota called, but quotas disabled Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.395731 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/457a2ca9-5414-414b-8731-42d2430a3275-metrics-client-ca" (OuterVolumeSpecName: "metrics-client-ca") pod "457a2ca9-5414-414b-8731-42d2430a3275" (UID: "457a2ca9-5414-414b-8731-42d2430a3275"). InnerVolumeSpecName "metrics-client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:13:24.395865 2112 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/457a2ca9-5414-414b-8731-42d2430a3275/volumes/kubernetes.io~configmap/alertmanager-trusted-ca-bundle: clearQuota called, but quotas disabled Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.396116 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/457a2ca9-5414-414b-8731-42d2430a3275-alertmanager-trusted-ca-bundle" (OuterVolumeSpecName: "alertmanager-trusted-ca-bundle") pod "457a2ca9-5414-414b-8731-42d2430a3275" (UID: "457a2ca9-5414-414b-8731-42d2430a3275"). InnerVolumeSpecName "alertmanager-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.402037 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/457a2ca9-5414-414b-8731-42d2430a3275-secret-alertmanager-main-tls" (OuterVolumeSpecName: "secret-alertmanager-main-tls") pod "457a2ca9-5414-414b-8731-42d2430a3275" (UID: "457a2ca9-5414-414b-8731-42d2430a3275"). 
InnerVolumeSpecName "secret-alertmanager-main-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.402213 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/457a2ca9-5414-414b-8731-42d2430a3275-secret-alertmanager-kube-rbac-proxy-metric" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy-metric") pod "457a2ca9-5414-414b-8731-42d2430a3275" (UID: "457a2ca9-5414-414b-8731-42d2430a3275"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy-metric". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.402246 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/457a2ca9-5414-414b-8731-42d2430a3275-config-volume" (OuterVolumeSpecName: "config-volume") pod "457a2ca9-5414-414b-8731-42d2430a3275" (UID: "457a2ca9-5414-414b-8731-42d2430a3275"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.403010 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/457a2ca9-5414-414b-8731-42d2430a3275-kube-api-access-mcr8w" (OuterVolumeSpecName: "kube-api-access-mcr8w") pod "457a2ca9-5414-414b-8731-42d2430a3275" (UID: "457a2ca9-5414-414b-8731-42d2430a3275"). InnerVolumeSpecName "kube-api-access-mcr8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.405015 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/457a2ca9-5414-414b-8731-42d2430a3275-secret-alertmanager-kube-rbac-proxy" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy") pod "457a2ca9-5414-414b-8731-42d2430a3275" (UID: "457a2ca9-5414-414b-8731-42d2430a3275"). 
InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.405108 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/457a2ca9-5414-414b-8731-42d2430a3275-secret-alertmanager-main-proxy" (OuterVolumeSpecName: "secret-alertmanager-main-proxy") pod "457a2ca9-5414-414b-8731-42d2430a3275" (UID: "457a2ca9-5414-414b-8731-42d2430a3275"). InnerVolumeSpecName "secret-alertmanager-main-proxy". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.408411 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-1_457a2ca9-5414-414b-8731-42d2430a3275/alertmanager-proxy/0.log" Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.408774 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-1_457a2ca9-5414-414b-8731-42d2430a3275/config-reloader/0.log" Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.408959 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:457a2ca9-5414-414b-8731-42d2430a3275 Type:ContainerDied Data:2dd14dc79891cf23e7c39348cd7da2169eeea951880734c51df6372fcee24608} Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.408982 2112 scope.go:115] "RemoveContainer" containerID="3e983220a0cecf9db7762e49baed79ac51235ba05947896fdf9e817c5a079955" Feb 23 17:13:24 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:24.409605752Z" level=info msg="Removing container: 3e983220a0cecf9db7762e49baed79ac51235ba05947896fdf9e817c5a079955" id=477e50bd-574f-49ea-bf45-b176ed9b64f9 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.410724 2112 logs.go:323] "Finished parsing log file" 
path="/var/log/pods/openshift-monitoring_prometheus-k8s-1_de160b09-a82e-4c1c-855b-4dfb3b3cbd7c/prometheus-proxy/0.log" Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.411186 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-1_de160b09-a82e-4c1c-855b-4dfb3b3cbd7c/config-reloader/0.log" Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.411718 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-1" event=&{ID:de160b09-a82e-4c1c-855b-4dfb3b3cbd7c Type:ContainerDied Data:d1ad2e51d68aca03cf74cbf08b34debc7171c5e2526858978cf073bb895b423f} Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.414770 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/457a2ca9-5414-414b-8731-42d2430a3275-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "457a2ca9-5414-414b-8731-42d2430a3275" (UID: "457a2ca9-5414-414b-8731-42d2430a3275"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.421133 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/457a2ca9-5414-414b-8731-42d2430a3275-web-config" (OuterVolumeSpecName: "web-config") pod "457a2ca9-5414-414b-8731-42d2430a3275" (UID: "457a2ca9-5414-414b-8731-42d2430a3275"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:13:24 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:24.463950551Z" level=info msg="Removed container 3e983220a0cecf9db7762e49baed79ac51235ba05947896fdf9e817c5a079955: openshift-monitoring/alertmanager-main-1/alertmanager" id=477e50bd-574f-49ea-bf45-b176ed9b64f9 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.464121 2112 scope.go:115] "RemoveContainer" containerID="f3cc58399f58d034a5ea692d08b921fd2849975f11374292d872563df6c87173" Feb 23 17:13:24 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:24.465236057Z" level=info msg="Removing container: f3cc58399f58d034a5ea692d08b921fd2849975f11374292d872563df6c87173" id=ab5fc309-81d5-4181-8c7c-a32ef05d7972 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:13:24 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:24.495694727Z" level=info msg="Removed container f3cc58399f58d034a5ea692d08b921fd2849975f11374292d872563df6c87173: openshift-monitoring/alertmanager-main-1/prom-label-proxy" id=ab5fc309-81d5-4181-8c7c-a32ef05d7972 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.495849 2112 scope.go:115] "RemoveContainer" containerID="453dbcdcd062f0a5ca4b764fa2cd6efb52f9f36011e7143ba6d3083c773a1e02" Feb 23 17:13:24 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:24.496770755Z" level=info msg="Removing container: 453dbcdcd062f0a5ca4b764fa2cd6efb52f9f36011e7143ba6d3083c773a1e02" id=5f795fcc-4947-4b96-b3b7-1100376a7788 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.497349 2112 reconciler.go:399] "Volume detached for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/457a2ca9-5414-414b-8731-42d2430a3275-metrics-client-ca\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: 
I0223 17:13:24.497382 2112 reconciler.go:399] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/457a2ca9-5414-414b-8731-42d2430a3275-web-config\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.497400 2112 reconciler.go:399] "Volume detached for volume \"secret-alertmanager-main-proxy\" (UniqueName: \"kubernetes.io/secret/457a2ca9-5414-414b-8731-42d2430a3275-secret-alertmanager-main-proxy\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.497415 2112 reconciler.go:399] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/457a2ca9-5414-414b-8731-42d2430a3275-secret-alertmanager-kube-rbac-proxy\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.497428 2112 reconciler.go:399] "Volume detached for volume \"kube-api-access-mcr8w\" (UniqueName: \"kubernetes.io/projected/457a2ca9-5414-414b-8731-42d2430a3275-kube-api-access-mcr8w\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.497444 2112 reconciler.go:399] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/457a2ca9-5414-414b-8731-42d2430a3275-secret-alertmanager-kube-rbac-proxy-metric\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.497460 2112 reconciler.go:399] "Volume detached for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/457a2ca9-5414-414b-8731-42d2430a3275-alertmanager-trusted-ca-bundle\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:13:24 ip-10-0-136-68 
kubenswrapper[2112]: I0223 17:13:24.497475 2112 reconciler.go:399] "Volume detached for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/457a2ca9-5414-414b-8731-42d2430a3275-alertmanager-main-db\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.497489 2112 reconciler.go:399] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/457a2ca9-5414-414b-8731-42d2430a3275-tls-assets\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.497504 2112 reconciler.go:399] "Volume detached for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/457a2ca9-5414-414b-8731-42d2430a3275-secret-alertmanager-main-tls\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.497519 2112 reconciler.go:399] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/457a2ca9-5414-414b-8731-42d2430a3275-config-volume\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.497533 2112 reconciler.go:399] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/457a2ca9-5414-414b-8731-42d2430a3275-config-out\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:13:24 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:24.515797818Z" level=info msg="Removed container 453dbcdcd062f0a5ca4b764fa2cd6efb52f9f36011e7143ba6d3083c773a1e02: openshift-monitoring/alertmanager-main-1/kube-rbac-proxy-metric" id=5f795fcc-4947-4b96-b3b7-1100376a7788 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.516039 2112 scope.go:115] "RemoveContainer" containerID="9aa60954acc454a8385d2204d9658fbee6e16bd996e0453fc2ed8fcbed420fe9"
Feb 23 17:13:24 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:24.516833185Z" level=info msg="Removing container: 9aa60954acc454a8385d2204d9658fbee6e16bd996e0453fc2ed8fcbed420fe9" id=d7e5c9a7-5767-4f35-ac75-7572765552b3 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.524113 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-monitoring/prometheus-k8s-1]
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.534045 2112 kubelet.go:2129] "SyncLoop REMOVE" source="api" pods=[openshift-monitoring/prometheus-k8s-1]
Feb 23 17:13:24 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:24.537735897Z" level=info msg="Removed container 9aa60954acc454a8385d2204d9658fbee6e16bd996e0453fc2ed8fcbed420fe9: openshift-monitoring/alertmanager-main-1/kube-rbac-proxy" id=d7e5c9a7-5767-4f35-ac75-7572765552b3 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.537903 2112 scope.go:115] "RemoveContainer" containerID="b01a99b1da48a49deccbb743ef250c07b23d71886bfefb512f6b56e0ddbd7428"
Feb 23 17:13:24 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:24.538557255Z" level=info msg="Removing container: b01a99b1da48a49deccbb743ef250c07b23d71886bfefb512f6b56e0ddbd7428" id=75f21086-c0e0-4190-9f5b-bc3f2fc38257 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:13:24 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:24.556454149Z" level=info msg="Removed container b01a99b1da48a49deccbb743ef250c07b23d71886bfefb512f6b56e0ddbd7428: openshift-monitoring/alertmanager-main-1/alertmanager-proxy" id=75f21086-c0e0-4190-9f5b-bc3f2fc38257 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.556595 2112 scope.go:115] "RemoveContainer" containerID="c356477d6c838eeac943481c9d122a0d516b24021e5421594692f64779dd9e73"
Feb 23 17:13:24 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:24.557253684Z" level=info msg="Removing container: c356477d6c838eeac943481c9d122a0d516b24021e5421594692f64779dd9e73" id=484f7955-a55e-427a-b6fc-6d9e8c0847e7 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:13:24 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:24.577592857Z" level=info msg="Removed container c356477d6c838eeac943481c9d122a0d516b24021e5421594692f64779dd9e73: openshift-monitoring/alertmanager-main-1/config-reloader" id=484f7955-a55e-427a-b6fc-6d9e8c0847e7 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.578802 2112 scope.go:115] "RemoveContainer" containerID="b52f1634a1d1b8fb5a1f361706dc144fde2d738d90f34b380a6acd872d27b915"
Feb 23 17:13:24 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:24.579560045Z" level=info msg="Removing container: b52f1634a1d1b8fb5a1f361706dc144fde2d738d90f34b380a6acd872d27b915" id=9bfe3fa9-3ad5-4e0d-9959-5b3c116dc7c5 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:13:24 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:24.596210633Z" level=info msg="Removed container b52f1634a1d1b8fb5a1f361706dc144fde2d738d90f34b380a6acd872d27b915: openshift-monitoring/prometheus-k8s-1/kube-rbac-proxy-thanos" id=9bfe3fa9-3ad5-4e0d-9959-5b3c116dc7c5 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.596447 2112 scope.go:115] "RemoveContainer" containerID="542b507370698f9fd5344b11dd7d8d66b76926529af1de20af0e11394a0afb19"
Feb 23 17:13:24 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:24.597111294Z" level=info msg="Removing container: 542b507370698f9fd5344b11dd7d8d66b76926529af1de20af0e11394a0afb19" id=65b3b719-b5ed-4906-86c9-c49267634d44 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:13:24 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:24.612830016Z" level=info msg="Removed container 542b507370698f9fd5344b11dd7d8d66b76926529af1de20af0e11394a0afb19: openshift-monitoring/prometheus-k8s-1/kube-rbac-proxy" id=65b3b719-b5ed-4906-86c9-c49267634d44 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.612976 2112 scope.go:115] "RemoveContainer" containerID="ec1befd05c73d0a470f84208b4404cce7ffdc0e514af452f83f997d162763de2"
Feb 23 17:13:24 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:24.613748910Z" level=info msg="Removing container: ec1befd05c73d0a470f84208b4404cce7ffdc0e514af452f83f997d162763de2" id=1e5ef6e7-d999-45e9-b080-a0271150599b name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:13:24 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:24.639723483Z" level=info msg="Removed container ec1befd05c73d0a470f84208b4404cce7ffdc0e514af452f83f997d162763de2: openshift-monitoring/prometheus-k8s-1/prometheus-proxy" id=1e5ef6e7-d999-45e9-b080-a0271150599b name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.639901 2112 scope.go:115] "RemoveContainer" containerID="6d2828a4d4dddccdf8c1cf9d27dbcc6e2be54b9071831894f861c06273b57998"
Feb 23 17:13:24 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:24.640686215Z" level=info msg="Removing container: 6d2828a4d4dddccdf8c1cf9d27dbcc6e2be54b9071831894f861c06273b57998" id=b87d072e-6859-47fa-bd6f-f3344f38f53e name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:13:24 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:24.660893871Z" level=info msg="Removed container 6d2828a4d4dddccdf8c1cf9d27dbcc6e2be54b9071831894f861c06273b57998: openshift-monitoring/prometheus-k8s-1/thanos-sidecar" id=b87d072e-6859-47fa-bd6f-f3344f38f53e name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.661093 2112 scope.go:115] "RemoveContainer" containerID="5dd7cb88e4cf279c2edd098539ee513699de5578bc2e80ac3743ecaa4e60ed98"
Feb 23 17:13:24 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:24.661936009Z" level=info msg="Removing container: 5dd7cb88e4cf279c2edd098539ee513699de5578bc2e80ac3743ecaa4e60ed98" id=0ce24f1f-5178-4cfb-b444-8c599c149e08 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:13:24 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:24.677619806Z" level=info msg="Removed container 5dd7cb88e4cf279c2edd098539ee513699de5578bc2e80ac3743ecaa4e60ed98: openshift-monitoring/prometheus-k8s-1/config-reloader" id=0ce24f1f-5178-4cfb-b444-8c599c149e08 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.677824 2112 scope.go:115] "RemoveContainer" containerID="ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82"
Feb 23 17:13:24 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:24.678575491Z" level=info msg="Removing container: ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82" id=f0e8bd91-345a-4a7d-9a79-bbf19b4b9d11 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:13:24 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:24.696139862Z" level=info msg="Removed container ea9418d427825c6bb4fb9eceebff590a8b483cf8a5ebc1e3a4054e3031de0c82: openshift-monitoring/prometheus-k8s-1/prometheus" id=f0e8bd91-345a-4a7d-9a79-bbf19b4b9d11 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.696301 2112 scope.go:115] "RemoveContainer" containerID="5fe9d0d23be0cdb30613e0cffa98c335053b71da05d455b11c0525f7f262d8d2"
Feb 23 17:13:24 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:24.697004919Z" level=info msg="Removing container: 5fe9d0d23be0cdb30613e0cffa98c335053b71da05d455b11c0525f7f262d8d2" id=97abdc0b-8f92-4dcb-800a-32db4a6eb0d5 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:13:24 ip-10-0-136-68 systemd[1]: Removed slice libcontainer container kubepods-burstable-pod457a2ca9_5414_414b_8731_42d2430a3275.slice.
Feb 23 17:13:24 ip-10-0-136-68 systemd[1]: kubepods-burstable-pod457a2ca9_5414_414b_8731_42d2430a3275.slice: Consumed 7.602s CPU time
Feb 23 17:13:24 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:24.727457219Z" level=info msg="Removed container 5fe9d0d23be0cdb30613e0cffa98c335053b71da05d455b11c0525f7f262d8d2: openshift-monitoring/prometheus-k8s-1/init-config-reloader" id=97abdc0b-8f92-4dcb-800a-32db4a6eb0d5 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.769753 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-monitoring/alertmanager-main-1]
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.783155 2112 kubelet.go:2129] "SyncLoop REMOVE" source="api" pods=[openshift-monitoring/alertmanager-main-1]
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.897738 2112 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-monitoring/alertmanager-main-1]
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.897787 2112 topology_manager.go:205] "Topology Admit Handler"
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:13:24.897876 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3ff1ba18-ee4b-4151-95d3-ad4742635d6b" containerName="prometheus-operator-admission-webhook"
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.897888 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ff1ba18-ee4b-4151-95d3-ad4742635d6b" containerName="prometheus-operator-admission-webhook"
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:13:24.897897 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="457a2ca9-5414-414b-8731-42d2430a3275" containerName="kube-rbac-proxy-metric"
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.897905 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="457a2ca9-5414-414b-8731-42d2430a3275" containerName="kube-rbac-proxy-metric"
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:13:24.897915 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="457a2ca9-5414-414b-8731-42d2430a3275" containerName="prom-label-proxy"
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.897922 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="457a2ca9-5414-414b-8731-42d2430a3275" containerName="prom-label-proxy"
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:13:24.897931 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="de160b09-a82e-4c1c-855b-4dfb3b3cbd7c" containerName="prometheus-proxy"
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.897938 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="de160b09-a82e-4c1c-855b-4dfb3b3cbd7c" containerName="prometheus-proxy"
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:13:24.897947 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="457a2ca9-5414-414b-8731-42d2430a3275" containerName="config-reloader"
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.897956 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="457a2ca9-5414-414b-8731-42d2430a3275" containerName="config-reloader"
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:13:24.897965 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="457a2ca9-5414-414b-8731-42d2430a3275" containerName="alertmanager-proxy"
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.897972 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="457a2ca9-5414-414b-8731-42d2430a3275" containerName="alertmanager-proxy"
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:13:24.897979 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="457a2ca9-5414-414b-8731-42d2430a3275" containerName="kube-rbac-proxy"
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.897987 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="457a2ca9-5414-414b-8731-42d2430a3275" containerName="kube-rbac-proxy"
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:13:24.897997 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="de160b09-a82e-4c1c-855b-4dfb3b3cbd7c" containerName="thanos-sidecar"
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.898004 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="de160b09-a82e-4c1c-855b-4dfb3b3cbd7c" containerName="thanos-sidecar"
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:13:24.898012 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="de160b09-a82e-4c1c-855b-4dfb3b3cbd7c" containerName="init-config-reloader"
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.898019 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="de160b09-a82e-4c1c-855b-4dfb3b3cbd7c" containerName="init-config-reloader"
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:13:24.898028 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="de160b09-a82e-4c1c-855b-4dfb3b3cbd7c" containerName="kube-rbac-proxy-thanos"
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.898035 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="de160b09-a82e-4c1c-855b-4dfb3b3cbd7c" containerName="kube-rbac-proxy-thanos"
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:13:24.898044 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="de160b09-a82e-4c1c-855b-4dfb3b3cbd7c" containerName="prometheus"
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.898052 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="de160b09-a82e-4c1c-855b-4dfb3b3cbd7c" containerName="prometheus"
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:13:24.898061 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="457a2ca9-5414-414b-8731-42d2430a3275" containerName="alertmanager"
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.898068 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="457a2ca9-5414-414b-8731-42d2430a3275" containerName="alertmanager"
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:13:24.898077 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="de160b09-a82e-4c1c-855b-4dfb3b3cbd7c" containerName="kube-rbac-proxy"
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.898085 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="de160b09-a82e-4c1c-855b-4dfb3b3cbd7c" containerName="kube-rbac-proxy"
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:13:24.898095 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="457a2ca9-5414-414b-8731-42d2430a3275" containerName="alertmanager"
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.898102 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="457a2ca9-5414-414b-8731-42d2430a3275" containerName="alertmanager"
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:13:24.898110 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="de160b09-a82e-4c1c-855b-4dfb3b3cbd7c" containerName="config-reloader"
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.898117 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="de160b09-a82e-4c1c-855b-4dfb3b3cbd7c" containerName="config-reloader"
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.898166 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="de160b09-a82e-4c1c-855b-4dfb3b3cbd7c" containerName="config-reloader"
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.898177 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="457a2ca9-5414-414b-8731-42d2430a3275" containerName="kube-rbac-proxy"
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.898186 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="457a2ca9-5414-414b-8731-42d2430a3275" containerName="prom-label-proxy"
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.898193 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="457a2ca9-5414-414b-8731-42d2430a3275" containerName="alertmanager"
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.898201 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="3ff1ba18-ee4b-4151-95d3-ad4742635d6b" containerName="prometheus-operator-admission-webhook"
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.898209 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="457a2ca9-5414-414b-8731-42d2430a3275" containerName="config-reloader"
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.898219 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="de160b09-a82e-4c1c-855b-4dfb3b3cbd7c" containerName="thanos-sidecar"
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.898227 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="de160b09-a82e-4c1c-855b-4dfb3b3cbd7c" containerName="prometheus"
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.898236 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="457a2ca9-5414-414b-8731-42d2430a3275" containerName="alertmanager-proxy"
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.898245 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="de160b09-a82e-4c1c-855b-4dfb3b3cbd7c" containerName="prometheus-proxy"
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.898253 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="de160b09-a82e-4c1c-855b-4dfb3b3cbd7c" containerName="kube-rbac-proxy"
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.898261 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="de160b09-a82e-4c1c-855b-4dfb3b3cbd7c" containerName="kube-rbac-proxy-thanos"
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.898270 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="457a2ca9-5414-414b-8731-42d2430a3275" containerName="kube-rbac-proxy-metric"
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.898352 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="457a2ca9-5414-414b-8731-42d2430a3275" containerName="alertmanager"
Feb 23 17:13:24 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-pod1e830162_e6ef_4be9_b95d_9b3c77530bc9.slice.
Feb 23 17:13:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:24.921744 2112 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-monitoring/alertmanager-main-1]
Feb 23 17:13:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:25.000908 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e830162-e6ef-4be9-b95d-9b3c77530bc9-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-1\" (UID: \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\") " pod="openshift-monitoring/alertmanager-main-1"
Feb 23 17:13:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:25.001043 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/1e830162-e6ef-4be9-b95d-9b3c77530bc9-config-volume\") pod \"alertmanager-main-1\" (UID: \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\") " pod="openshift-monitoring/alertmanager-main-1"
Feb 23 17:13:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:25.001087 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/1e830162-e6ef-4be9-b95d-9b3c77530bc9-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-1\" (UID: \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\") " pod="openshift-monitoring/alertmanager-main-1"
Feb 23 17:13:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:25.001112 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/1e830162-e6ef-4be9-b95d-9b3c77530bc9-secret-alertmanager-main-tls\") pod \"alertmanager-main-1\" (UID: \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\") " pod="openshift-monitoring/alertmanager-main-1"
Feb 23 17:13:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:25.001130 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1e830162-e6ef-4be9-b95d-9b3c77530bc9-web-config\") pod \"alertmanager-main-1\" (UID: \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\") " pod="openshift-monitoring/alertmanager-main-1"
Feb 23 17:13:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:25.001149 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1e830162-e6ef-4be9-b95d-9b3c77530bc9-metrics-client-ca\") pod \"alertmanager-main-1\" (UID: \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\") " pod="openshift-monitoring/alertmanager-main-1"
Feb 23 17:13:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:25.001172 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1e830162-e6ef-4be9-b95d-9b3c77530bc9-config-out\") pod \"alertmanager-main-1\" (UID: \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\") " pod="openshift-monitoring/alertmanager-main-1"
Feb 23 17:13:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:25.001239 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/1e830162-e6ef-4be9-b95d-9b3c77530bc9-alertmanager-main-db\") pod \"alertmanager-main-1\" (UID: \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\") " pod="openshift-monitoring/alertmanager-main-1"
Feb 23 17:13:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:25.001291 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/1e830162-e6ef-4be9-b95d-9b3c77530bc9-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-1\" (UID: \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\") " pod="openshift-monitoring/alertmanager-main-1"
Feb 23 17:13:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:25.001311 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqqnt\" (UniqueName: \"kubernetes.io/projected/1e830162-e6ef-4be9-b95d-9b3c77530bc9-kube-api-access-sqqnt\") pod \"alertmanager-main-1\" (UID: \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\") " pod="openshift-monitoring/alertmanager-main-1"
Feb 23 17:13:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:25.001334 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1e830162-e6ef-4be9-b95d-9b3c77530bc9-tls-assets\") pod \"alertmanager-main-1\" (UID: \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\") " pod="openshift-monitoring/alertmanager-main-1"
Feb 23 17:13:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:25.001379 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-proxy\" (UniqueName: \"kubernetes.io/secret/1e830162-e6ef-4be9-b95d-9b3c77530bc9-secret-alertmanager-main-proxy\") pod \"alertmanager-main-1\" (UID: \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\") " pod="openshift-monitoring/alertmanager-main-1"
Feb 23 17:13:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:25.030966 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-monitoring/alertmanager-main-1]
Feb 23 17:13:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:25.101992 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e830162-e6ef-4be9-b95d-9b3c77530bc9-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-1\" (UID: \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\") " pod="openshift-monitoring/alertmanager-main-1"
Feb 23 17:13:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:25.102035 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/1e830162-e6ef-4be9-b95d-9b3c77530bc9-config-volume\") pod \"alertmanager-main-1\" (UID: \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\") " pod="openshift-monitoring/alertmanager-main-1"
Feb 23 17:13:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:25.102068 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/1e830162-e6ef-4be9-b95d-9b3c77530bc9-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-1\" (UID: \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\") " pod="openshift-monitoring/alertmanager-main-1"
Feb 23 17:13:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:25.102096 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1e830162-e6ef-4be9-b95d-9b3c77530bc9-metrics-client-ca\") pod \"alertmanager-main-1\" (UID: \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\") " pod="openshift-monitoring/alertmanager-main-1"
Feb 23 17:13:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:25.102125 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/1e830162-e6ef-4be9-b95d-9b3c77530bc9-secret-alertmanager-main-tls\") pod \"alertmanager-main-1\" (UID: \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\") " pod="openshift-monitoring/alertmanager-main-1"
Feb 23 17:13:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:25.102150 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1e830162-e6ef-4be9-b95d-9b3c77530bc9-web-config\") pod \"alertmanager-main-1\" (UID: \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\") " pod="openshift-monitoring/alertmanager-main-1"
Feb 23 17:13:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:25.102178 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1e830162-e6ef-4be9-b95d-9b3c77530bc9-config-out\") pod \"alertmanager-main-1\" (UID: \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\") " pod="openshift-monitoring/alertmanager-main-1"
Feb 23 17:13:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:25.102208 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/1e830162-e6ef-4be9-b95d-9b3c77530bc9-alertmanager-main-db\") pod \"alertmanager-main-1\" (UID: \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\") " pod="openshift-monitoring/alertmanager-main-1"
Feb 23 17:13:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:25.102241 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/1e830162-e6ef-4be9-b95d-9b3c77530bc9-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-1\" (UID: \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\") " pod="openshift-monitoring/alertmanager-main-1"
Feb 23 17:13:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:25.102268 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-sqqnt\" (UniqueName: \"kubernetes.io/projected/1e830162-e6ef-4be9-b95d-9b3c77530bc9-kube-api-access-sqqnt\") pod \"alertmanager-main-1\" (UID: \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\") " pod="openshift-monitoring/alertmanager-main-1"
Feb 23 17:13:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:25.102290 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-proxy\" (UniqueName: \"kubernetes.io/secret/1e830162-e6ef-4be9-b95d-9b3c77530bc9-secret-alertmanager-main-proxy\") pod \"alertmanager-main-1\" (UID: \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\") " pod="openshift-monitoring/alertmanager-main-1"
Feb 23 17:13:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:25.102316 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1e830162-e6ef-4be9-b95d-9b3c77530bc9-tls-assets\") pod \"alertmanager-main-1\" (UID: \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\") " pod="openshift-monitoring/alertmanager-main-1"
Feb 23 17:13:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:25.104206 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e830162-e6ef-4be9-b95d-9b3c77530bc9-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-1\" (UID: \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\") " pod="openshift-monitoring/alertmanager-main-1"
Feb 23 17:13:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:25.104320 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1e830162-e6ef-4be9-b95d-9b3c77530bc9-config-out\") pod \"alertmanager-main-1\" (UID: \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\") " pod="openshift-monitoring/alertmanager-main-1"
Feb 23 17:13:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:25.105384 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1e830162-e6ef-4be9-b95d-9b3c77530bc9-metrics-client-ca\") pod \"alertmanager-main-1\" (UID: \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\") " pod="openshift-monitoring/alertmanager-main-1"
Feb 23 17:13:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:25.105430 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/1e830162-e6ef-4be9-b95d-9b3c77530bc9-alertmanager-main-db\") pod \"alertmanager-main-1\" (UID: \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\") " pod="openshift-monitoring/alertmanager-main-1"
Feb 23 17:13:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:25.105640 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/1e830162-e6ef-4be9-b95d-9b3c77530bc9-config-volume\") pod \"alertmanager-main-1\" (UID: \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\") " pod="openshift-monitoring/alertmanager-main-1"
Feb 23 17:13:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:25.105926 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1e830162-e6ef-4be9-b95d-9b3c77530bc9-tls-assets\") pod \"alertmanager-main-1\" (UID: \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\") " pod="openshift-monitoring/alertmanager-main-1"
Feb 23 17:13:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:25.107050 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1e830162-e6ef-4be9-b95d-9b3c77530bc9-web-config\") pod \"alertmanager-main-1\" (UID: \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\") " pod="openshift-monitoring/alertmanager-main-1"
Feb 23 17:13:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:25.107296 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/1e830162-e6ef-4be9-b95d-9b3c77530bc9-secret-alertmanager-main-tls\") pod \"alertmanager-main-1\" (UID: \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\") " pod="openshift-monitoring/alertmanager-main-1"
Feb 23 17:13:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:25.108654 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/1e830162-e6ef-4be9-b95d-9b3c77530bc9-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-1\" (UID: \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\") " pod="openshift-monitoring/alertmanager-main-1"
Feb 23 17:13:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:25.108873 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-proxy\" (UniqueName: \"kubernetes.io/secret/1e830162-e6ef-4be9-b95d-9b3c77530bc9-secret-alertmanager-main-proxy\") pod \"alertmanager-main-1\" (UID: \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\") " pod="openshift-monitoring/alertmanager-main-1"
Feb 23 17:13:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:25.109691 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/1e830162-e6ef-4be9-b95d-9b3c77530bc9-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-1\" (UID: \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\") " pod="openshift-monitoring/alertmanager-main-1"
Feb 23 17:13:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:25.119753 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqqnt\" (UniqueName: \"kubernetes.io/projected/1e830162-e6ef-4be9-b95d-9b3c77530bc9-kube-api-access-sqqnt\") pod \"alertmanager-main-1\" (UID: \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\") " pod="openshift-monitoring/alertmanager-main-1"
Feb 23 17:13:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:25.213996 2112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-1"
Feb 23 17:13:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:25.214512060Z" level=info msg="Running pod sandbox: openshift-monitoring/alertmanager-main-1/POD" id=eb655a7d-7b16-498c-96de-19cf171b38b8 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:13:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:25.214570828Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:13:25 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-0748f7a04a756ffcb4806fbd46972fbe2704066bf93d6cba7ed37d317078342d-merged.mount: Succeeded.
Feb 23 17:13:25 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-0748f7a04a756ffcb4806fbd46972fbe2704066bf93d6cba7ed37d317078342d-merged.mount: Consumed 0 CPU time
Feb 23 17:13:25 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-457a2ca9\x2d5414\x2d414b\x2d8731\x2d42d2430a3275-volume\x2dsubpaths-web\x2dconfig-alertmanager-9.mount: Succeeded.
Feb 23 17:13:25 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-457a2ca9\x2d5414\x2d414b\x2d8731\x2d42d2430a3275-volume\x2dsubpaths-web\x2dconfig-alertmanager-9.mount: Consumed 0 CPU time
Feb 23 17:13:25 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-2dd14dc79891cf23e7c39348cd7da2169eeea951880734c51df6372fcee24608-userdata-shm.mount: Succeeded.
Feb 23 17:13:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:25.235340521Z" level=info msg="Got pod network &{Name:alertmanager-main-1 Namespace:openshift-monitoring ID:7c6c96994776deca879653ce086a42486849bd751719f15b1a7fe21817c40d59 UID:1e830162-e6ef-4be9-b95d-9b3c77530bc9 NetNS:/var/run/netns/0753d7af-00f2-412f-ad65-07313c42b159 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 17:13:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:25.235370701Z" level=info msg="Adding pod openshift-monitoring_alertmanager-main-1 to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 17:13:25 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-2dd14dc79891cf23e7c39348cd7da2169eeea951880734c51df6372fcee24608-userdata-shm.mount: Consumed 0 CPU time
Feb 23 17:13:25 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-457a2ca9\x2d5414\x2d414b\x2d8731\x2d42d2430a3275-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmcr8w.mount: Succeeded.
Feb 23 17:13:25 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-457a2ca9\x2d5414\x2d414b\x2d8731\x2d42d2430a3275-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmcr8w.mount: Consumed 0 CPU time
Feb 23 17:13:25 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-457a2ca9\x2d5414\x2d414b\x2d8731\x2d42d2430a3275-volumes-kubernetes.io\x7esecret-secret\x2dalertmanager\x2dkube\x2drbac\x2dproxy.mount: Succeeded.
Feb 23 17:13:25 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-457a2ca9\x2d5414\x2d414b\x2d8731\x2d42d2430a3275-volumes-kubernetes.io\x7esecret-secret\x2dalertmanager\x2dkube\x2drbac\x2dproxy.mount: Consumed 0 CPU time
Feb 23 17:13:25 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-457a2ca9\x2d5414\x2d414b\x2d8731\x2d42d2430a3275-volumes-kubernetes.io\x7esecret-secret\x2dalertmanager\x2dmain\x2dtls.mount: Succeeded.
Feb 23 17:13:25 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-457a2ca9\x2d5414\x2d414b\x2d8731\x2d42d2430a3275-volumes-kubernetes.io\x7esecret-secret\x2dalertmanager\x2dmain\x2dtls.mount: Consumed 0 CPU time
Feb 23 17:13:25 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-457a2ca9\x2d5414\x2d414b\x2d8731\x2d42d2430a3275-volumes-kubernetes.io\x7esecret-secret\x2dalertmanager\x2dmain\x2dproxy.mount: Succeeded.
Feb 23 17:13:25 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-457a2ca9\x2d5414\x2d414b\x2d8731\x2d42d2430a3275-volumes-kubernetes.io\x7esecret-secret\x2dalertmanager\x2dmain\x2dproxy.mount: Consumed 0 CPU time
Feb 23 17:13:25 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-457a2ca9\x2d5414\x2d414b\x2d8731\x2d42d2430a3275-volumes-kubernetes.io\x7esecret-secret\x2dalertmanager\x2dkube\x2drbac\x2dproxy\x2dmetric.mount: Succeeded.
Feb 23 17:13:25 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-457a2ca9\x2d5414\x2d414b\x2d8731\x2d42d2430a3275-volumes-kubernetes.io\x7esecret-secret\x2dalertmanager\x2dkube\x2drbac\x2dproxy\x2dmetric.mount: Consumed 0 CPU time
Feb 23 17:13:25 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-457a2ca9\x2d5414\x2d414b\x2d8731\x2d42d2430a3275-volumes-kubernetes.io\x7esecret-config\x2dvolume.mount: Succeeded.
Feb 23 17:13:25 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-457a2ca9\x2d5414\x2d414b\x2d8731\x2d42d2430a3275-volumes-kubernetes.io\x7esecret-config\x2dvolume.mount: Consumed 0 CPU time
Feb 23 17:13:25 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-457a2ca9\x2d5414\x2d414b\x2d8731\x2d42d2430a3275-volumes-kubernetes.io\x7esecret-web\x2dconfig.mount: Succeeded.
Feb 23 17:13:25 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-457a2ca9\x2d5414\x2d414b\x2d8731\x2d42d2430a3275-volumes-kubernetes.io\x7esecret-web\x2dconfig.mount: Consumed 0 CPU time
Feb 23 17:13:25 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-457a2ca9\x2d5414\x2d414b\x2d8731\x2d42d2430a3275-volumes-kubernetes.io\x7eprojected-tls\x2dassets.mount: Succeeded.
Feb 23 17:13:25 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-457a2ca9\x2d5414\x2d414b\x2d8731\x2d42d2430a3275-volumes-kubernetes.io\x7eprojected-tls\x2dassets.mount: Consumed 0 CPU time Feb 23 17:13:25 ip-10-0-136-68 systemd-udevd[56076]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. Feb 23 17:13:25 ip-10-0-136-68 systemd-udevd[56076]: Could not generate persistent MAC address for 7c6c96994776dec: No such file or directory Feb 23 17:13:25 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): 7c6c96994776dec: link is not ready Feb 23 17:13:25 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): 7c6c96994776dec: link becomes ready Feb 23 17:13:25 ip-10-0-136-68 NetworkManager[1147]: [1677172405.3950] device (7c6c96994776dec): carrier: link connected Feb 23 17:13:25 ip-10-0-136-68 NetworkManager[1147]: [1677172405.3954] manager: (7c6c96994776dec): new Veth device (/org/freedesktop/NetworkManager/Devices/52) Feb 23 17:13:25 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00311|bridge|INFO|bridge br-int: added interface 7c6c96994776dec on port 21 Feb 23 17:13:25 ip-10-0-136-68 NetworkManager[1147]: [1677172405.4210] manager: (7c6c96994776dec): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/53) Feb 23 17:13:25 ip-10-0-136-68 kernel: device 7c6c96994776dec entered promiscuous mode Feb 23 17:13:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:25.517121 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-monitoring/alertmanager-main-1] Feb 23 17:13:25 ip-10-0-136-68 crio[2062]: I0223 17:13:25.368112 56066 ovs.go:90] Maximum command line arguments set to: 191102 Feb 23 17:13:25 ip-10-0-136-68 crio[2062]: 2023-02-23T17:13:25Z [verbose] Add: openshift-monitoring:alertmanager-main-1:1e830162-e6ef-4be9-b95d-9b3c77530bc9:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"7c6c96994776dec","mac":"1e:65:67:0e:ba:e0"},{"name":"eth0","mac":"0a:58:0a:81:02:17","sandbox":"/var/run/netns/0753d7af-00f2-412f-ad65-07313c42b159"}],"ips":[{"version":"4","interface":1,"address":"10.129.2.23/23","gateway":"10.129.2.1"}],"dns":{}} Feb 23 17:13:25 ip-10-0-136-68 crio[2062]: I0223 17:13:25.472707 56054 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-monitoring", Name:"alertmanager-main-1", UID:"1e830162-e6ef-4be9-b95d-9b3c77530bc9", APIVersion:"v1", ResourceVersion:"67624", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.129.2.23/23] from ovn-kubernetes Feb 23 17:13:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:25.519020374Z" level=info msg="Got pod network &{Name:alertmanager-main-1 Namespace:openshift-monitoring ID:7c6c96994776deca879653ce086a42486849bd751719f15b1a7fe21817c40d59 UID:1e830162-e6ef-4be9-b95d-9b3c77530bc9 NetNS:/var/run/netns/0753d7af-00f2-412f-ad65-07313c42b159 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:13:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:25.519151796Z" level=info msg="Checking pod openshift-monitoring_alertmanager-main-1 for CNI network multus-cni-network (type=multus)" Feb 23 17:13:25 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:13:25.521457 2112 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e830162_e6ef_4be9_b95d_9b3c77530bc9.slice/crio-7c6c96994776deca879653ce086a42486849bd751719f15b1a7fe21817c40d59.scope WatchSource:0}: Error finding container 7c6c96994776deca879653ce086a42486849bd751719f15b1a7fe21817c40d59: Status 404 returned error can't find the container with id 7c6c96994776deca879653ce086a42486849bd751719f15b1a7fe21817c40d59 Feb 23 17:13:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:25.523304054Z" level=info msg="Ran pod sandbox 
7c6c96994776deca879653ce086a42486849bd751719f15b1a7fe21817c40d59 with infra container: openshift-monitoring/alertmanager-main-1/POD" id=eb655a7d-7b16-498c-96de-19cf171b38b8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:13:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:25.524208250Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fb9be9744b6ce1f692325241b066184282ea7ee06dbd191a0f19b55b0ad8736" id=067ac797-2825-4112-b434-665948599a2d name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:25.524431154Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ee2df3b6c5f959807b3fab8b0b30c981e2f43ef273dfbbbf5bb9a469aeeb3d8d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fb9be9744b6ce1f692325241b066184282ea7ee06dbd191a0f19b55b0ad8736],Size_:367066685,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=067ac797-2825-4112-b434-665948599a2d name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:25.527749947Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fb9be9744b6ce1f692325241b066184282ea7ee06dbd191a0f19b55b0ad8736" id=cc0eca0f-be7d-4abb-98c8-4d68d1fe5a89 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:25.527905629Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ee2df3b6c5f959807b3fab8b0b30c981e2f43ef273dfbbbf5bb9a469aeeb3d8d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fb9be9744b6ce1f692325241b066184282ea7ee06dbd191a0f19b55b0ad8736],Size_:367066685,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=cc0eca0f-be7d-4abb-98c8-4d68d1fe5a89 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:25.528949605Z" 
level=info msg="Creating container: openshift-monitoring/alertmanager-main-1/alertmanager" id=1c9bd41c-9e9d-481a-b110-64fb86346b84 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:13:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:25.529014104Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:13:25 ip-10-0-136-68 systemd[1]: Started crio-conmon-fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725.scope. Feb 23 17:13:25 ip-10-0-136-68 systemd[1]: Started libcontainer container fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725. Feb 23 17:13:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:25.653511438Z" level=info msg="Created container fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725: openshift-monitoring/alertmanager-main-1/alertmanager" id=1c9bd41c-9e9d-481a-b110-64fb86346b84 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:13:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:25.653959650Z" level=info msg="Starting container: fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725" id=249c52d4-90e4-4de6-a608-f26e70f25091 name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:13:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:25.660641871Z" level=info msg="Started container" PID=56109 containerID=fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725 description=openshift-monitoring/alertmanager-main-1/alertmanager id=249c52d4-90e4-4de6-a608-f26e70f25091 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7c6c96994776deca879653ce086a42486849bd751719f15b1a7fe21817c40d59 Feb 23 17:13:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:25.675353425Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:54d3aea3fb46824c7af49ff90c30c44c5409b64745f649c1e19f43c58dab6008" id=a18ee83a-9927-4538-94e8-e2e6ae33b91d name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:25 
ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:25.675570893Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ba9942c6b8556723b2ced172e5fb9c827a5aacee27aa66e7952a42d2b636240d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:54d3aea3fb46824c7af49ff90c30c44c5409b64745f649c1e19f43c58dab6008],Size_:359319908,Uid:&Int64Value{Value:1001,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=a18ee83a-9927-4538-94e8-e2e6ae33b91d name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:25.677609361Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:54d3aea3fb46824c7af49ff90c30c44c5409b64745f649c1e19f43c58dab6008" id=e7d2b113-ad91-44cf-8c1e-cc7edd9f2365 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:25.677885348Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ba9942c6b8556723b2ced172e5fb9c827a5aacee27aa66e7952a42d2b636240d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:54d3aea3fb46824c7af49ff90c30c44c5409b64745f649c1e19f43c58dab6008],Size_:359319908,Uid:&Int64Value{Value:1001,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=e7d2b113-ad91-44cf-8c1e-cc7edd9f2365 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:25.678736340Z" level=info msg="Creating container: openshift-monitoring/alertmanager-main-1/config-reloader" id=0419d2f9-dbfe-4521-9823-bb4d7880772b name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:13:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:25.678836344Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:13:25 ip-10-0-136-68 systemd[1]: Started crio-conmon-03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce.scope. 
Feb 23 17:13:25 ip-10-0-136-68 systemd[1]: Started libcontainer container 03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce. Feb 23 17:13:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:25.793733504Z" level=info msg="Created container 03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce: openshift-monitoring/alertmanager-main-1/config-reloader" id=0419d2f9-dbfe-4521-9823-bb4d7880772b name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:13:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:25.794170625Z" level=info msg="Starting container: 03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce" id=cfccd812-88a9-4c8b-afc0-a8f665b825f6 name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:13:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:25.801118648Z" level=info msg="Started container" PID=56153 containerID=03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce description=openshift-monitoring/alertmanager-main-1/config-reloader id=cfccd812-88a9-4c8b-afc0-a8f665b825f6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7c6c96994776deca879653ce086a42486849bd751719f15b1a7fe21817c40d59 Feb 23 17:13:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:25.809654374Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce1de860b639d4831bb74b0271aed6882b45febb37f60bcf8a31157b87ce0495" id=cfe6c4bd-2f1f-4e5c-9030-c71add648828 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:25.809867028Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:d62409a7cb62709d4c960d445d571fc81522092e7114db20e1ad84bf57c96f31,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce1de860b639d4831bb74b0271aed6882b45febb37f60bcf8a31157b87ce0495],Size_:353087996,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=cfe6c4bd-2f1f-4e5c-9030-c71add648828 
name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:25.810399044Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce1de860b639d4831bb74b0271aed6882b45febb37f60bcf8a31157b87ce0495" id=18610911-e73d-4389-9aac-c3e7007a9e51 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:25.810533695Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:d62409a7cb62709d4c960d445d571fc81522092e7114db20e1ad84bf57c96f31,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce1de860b639d4831bb74b0271aed6882b45febb37f60bcf8a31157b87ce0495],Size_:353087996,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=18610911-e73d-4389-9aac-c3e7007a9e51 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:25.811230509Z" level=info msg="Creating container: openshift-monitoring/alertmanager-main-1/alertmanager-proxy" id=abb4c074-0af4-4e03-9971-9df440391470 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:13:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:25.811310471Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:13:25 ip-10-0-136-68 systemd[1]: Started crio-conmon-da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2.scope. Feb 23 17:13:25 ip-10-0-136-68 systemd[1]: Started libcontainer container da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2. 
Feb 23 17:13:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:25.924725380Z" level=info msg="Created container da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2: openshift-monitoring/alertmanager-main-1/alertmanager-proxy" id=abb4c074-0af4-4e03-9971-9df440391470 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:13:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:25.925169565Z" level=info msg="Starting container: da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2" id=0be22a25-8231-41ca-ae4f-9daf3b8d0273 name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:13:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:25.931948998Z" level=info msg="Started container" PID=56194 containerID=da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2 description=openshift-monitoring/alertmanager-main-1/alertmanager-proxy id=0be22a25-8231-41ca-ae4f-9daf3b8d0273 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7c6c96994776deca879653ce086a42486849bd751719f15b1a7fe21817c40d59 Feb 23 17:13:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:25.940086567Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=0e994900-c064-44b8-b706-e529e2208b6d name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:25.940271930Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=0e994900-c064-44b8-b706-e529e2208b6d name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:25.941282622Z" level=info 
msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=7f42e494-f5d2-4832-bd6f-72ef3914d423 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:25.941454398Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=7f42e494-f5d2-4832-bd6f-72ef3914d423 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:25.942354312Z" level=info msg="Creating container: openshift-monitoring/alertmanager-main-1/kube-rbac-proxy" id=bdae0b42-7e34-4d46-8b24-7866edfad395 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:13:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:25.942519477Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:13:25 ip-10-0-136-68 systemd[1]: Started crio-conmon-58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434.scope. Feb 23 17:13:25 ip-10-0-136-68 systemd[1]: Started libcontainer container 58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434. 
Feb 23 17:13:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:26.062599966Z" level=info msg="Created container 58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434: openshift-monitoring/alertmanager-main-1/kube-rbac-proxy" id=bdae0b42-7e34-4d46-8b24-7866edfad395 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:13:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:26.063052935Z" level=info msg="Starting container: 58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434" id=2c21ae27-49cc-42cd-b8c9-c1f50c21232d name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:13:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:26.069595749Z" level=info msg="Started container" PID=56246 containerID=58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434 description=openshift-monitoring/alertmanager-main-1/kube-rbac-proxy id=2c21ae27-49cc-42cd-b8c9-c1f50c21232d name=/runtime.v1.RuntimeService/StartContainer sandboxID=7c6c96994776deca879653ce086a42486849bd751719f15b1a7fe21817c40d59 Feb 23 17:13:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:26.080090902Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=674abb09-0d3a-4cd8-be93-4adce9eb03d6 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:26.080281739Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=674abb09-0d3a-4cd8-be93-4adce9eb03d6 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:26.081037362Z" level=info 
msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=dc98490c-3269-4d47-b73c-bb3a69e11470 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:26.081213960Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=dc98490c-3269-4d47-b73c-bb3a69e11470 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:26.082126001Z" level=info msg="Creating container: openshift-monitoring/alertmanager-main-1/kube-rbac-proxy-metric" id=74c493af-2fa8-44e3-bdce-6c98252f078a name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:13:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:26.082211061Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:13:26 ip-10-0-136-68 systemd[1]: Started crio-conmon-e287676c2885d342e89355a473f37ba823f60f7ce4dd3ee7089d108f32da51da.scope. Feb 23 17:13:26 ip-10-0-136-68 systemd[1]: Started libcontainer container e287676c2885d342e89355a473f37ba823f60f7ce4dd3ee7089d108f32da51da. 
Feb 23 17:13:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:26.120861 2112 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=457a2ca9-5414-414b-8731-42d2430a3275 path="/var/lib/kubelet/pods/457a2ca9-5414-414b-8731-42d2430a3275/volumes" Feb 23 17:13:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:26.121949 2112 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=de160b09-a82e-4c1c-855b-4dfb3b3cbd7c path="/var/lib/kubelet/pods/de160b09-a82e-4c1c-855b-4dfb3b3cbd7c/volumes" Feb 23 17:13:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:26.199816671Z" level=info msg="Created container e287676c2885d342e89355a473f37ba823f60f7ce4dd3ee7089d108f32da51da: openshift-monitoring/alertmanager-main-1/kube-rbac-proxy-metric" id=74c493af-2fa8-44e3-bdce-6c98252f078a name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:13:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:26.200240565Z" level=info msg="Starting container: e287676c2885d342e89355a473f37ba823f60f7ce4dd3ee7089d108f32da51da" id=2f2ab9da-2c28-4dc0-80a9-8564e11173c0 name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:13:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:26.207368387Z" level=info msg="Started container" PID=56291 containerID=e287676c2885d342e89355a473f37ba823f60f7ce4dd3ee7089d108f32da51da description=openshift-monitoring/alertmanager-main-1/kube-rbac-proxy-metric id=2f2ab9da-2c28-4dc0-80a9-8564e11173c0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7c6c96994776deca879653ce086a42486849bd751719f15b1a7fe21817c40d59 Feb 23 17:13:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:26.225840713Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1ac41d1f9f68b478647df3bed79ae2cd3ca1e8fb93e31de0fb82c2c9e6ff6fed" id=c17f5f8d-6f68-4c53-86de-06dafc03c6c1 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:26.226041648Z" level=info msg="Image status: 
&ImageStatusResponse{Image:&Image{Id:4b5544f2b1fb54d82b04ad030305d937195d3556ba12e42d312ef4784079861b,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1ac41d1f9f68b478647df3bed79ae2cd3ca1e8fb93e31de0fb82c2c9e6ff6fed],Size_:325560759,Uid:&Int64Value{Value:1001,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=c17f5f8d-6f68-4c53-86de-06dafc03c6c1 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:26.226741506Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1ac41d1f9f68b478647df3bed79ae2cd3ca1e8fb93e31de0fb82c2c9e6ff6fed" id=6885d4cb-09cc-41f4-af08-e2b5829759fa name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:26.226896598Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:4b5544f2b1fb54d82b04ad030305d937195d3556ba12e42d312ef4784079861b,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1ac41d1f9f68b478647df3bed79ae2cd3ca1e8fb93e31de0fb82c2c9e6ff6fed],Size_:325560759,Uid:&Int64Value{Value:1001,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=6885d4cb-09cc-41f4-af08-e2b5829759fa name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:26.227839851Z" level=info msg="Creating container: openshift-monitoring/alertmanager-main-1/prom-label-proxy" id=aafc5b3a-3f7a-48cd-8e60-8cca92177128 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:13:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:26.227919200Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:13:26 ip-10-0-136-68 systemd[1]: Started crio-conmon-b5c25d6bee1dc58ffa1c378d0eeab297bd11dc2c50a85d3547e113a4652d6dcf.scope. 
Feb 23 17:13:26 ip-10-0-136-68 systemd[1]: Started libcontainer container b5c25d6bee1dc58ffa1c378d0eeab297bd11dc2c50a85d3547e113a4652d6dcf.
Feb 23 17:13:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:26.336637095Z" level=info msg="Created container b5c25d6bee1dc58ffa1c378d0eeab297bd11dc2c50a85d3547e113a4652d6dcf: openshift-monitoring/alertmanager-main-1/prom-label-proxy" id=aafc5b3a-3f7a-48cd-8e60-8cca92177128 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:13:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:26.337077615Z" level=info msg="Starting container: b5c25d6bee1dc58ffa1c378d0eeab297bd11dc2c50a85d3547e113a4652d6dcf" id=dd7fd379-b2ba-4511-9635-66f693adf5fe name=/runtime.v1.RuntimeService/StartContainer
Feb 23 17:13:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:26.344430152Z" level=info msg="Started container" PID=56337 containerID=b5c25d6bee1dc58ffa1c378d0eeab297bd11dc2c50a85d3547e113a4652d6dcf description=openshift-monitoring/alertmanager-main-1/prom-label-proxy id=dd7fd379-b2ba-4511-9635-66f693adf5fe name=/runtime.v1.RuntimeService/StartContainer sandboxID=7c6c96994776deca879653ce086a42486849bd751719f15b1a7fe21817c40d59
Feb 23 17:13:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:26.418225 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:1e830162-e6ef-4be9-b95d-9b3c77530bc9 Type:ContainerStarted Data:b5c25d6bee1dc58ffa1c378d0eeab297bd11dc2c50a85d3547e113a4652d6dcf}
Feb 23 17:13:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:26.418265 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:1e830162-e6ef-4be9-b95d-9b3c77530bc9 Type:ContainerStarted Data:e287676c2885d342e89355a473f37ba823f60f7ce4dd3ee7089d108f32da51da}
Feb 23 17:13:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:26.418283 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:1e830162-e6ef-4be9-b95d-9b3c77530bc9 Type:ContainerStarted Data:58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434}
Feb 23 17:13:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:26.418298 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:1e830162-e6ef-4be9-b95d-9b3c77530bc9 Type:ContainerStarted Data:da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2}
Feb 23 17:13:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:26.418309 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:1e830162-e6ef-4be9-b95d-9b3c77530bc9 Type:ContainerStarted Data:03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce}
Feb 23 17:13:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:26.418322 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:1e830162-e6ef-4be9-b95d-9b3c77530bc9 Type:ContainerStarted Data:fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725}
Feb 23 17:13:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:26.418332 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:1e830162-e6ef-4be9-b95d-9b3c77530bc9 Type:ContainerStarted Data:7c6c96994776deca879653ce086a42486849bd751719f15b1a7fe21817c40d59}
Feb 23 17:13:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:26.418400 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-1" podUID=1e830162-e6ef-4be9-b95d-9b3c77530bc9 containerName="alertmanager" containerID="cri-o://fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725" gracePeriod=120
Feb 23 17:13:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:26.418641 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-1" podUID=1e830162-e6ef-4be9-b95d-9b3c77530bc9 containerName="prom-label-proxy" containerID="cri-o://b5c25d6bee1dc58ffa1c378d0eeab297bd11dc2c50a85d3547e113a4652d6dcf" gracePeriod=120
Feb 23 17:13:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:26.418697 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-1" podUID=1e830162-e6ef-4be9-b95d-9b3c77530bc9 containerName="alertmanager-proxy" containerID="cri-o://da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2" gracePeriod=120
Feb 23 17:13:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:26.418697 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-1" podUID=1e830162-e6ef-4be9-b95d-9b3c77530bc9 containerName="kube-rbac-proxy" containerID="cri-o://58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434" gracePeriod=120
Feb 23 17:13:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:26.418739 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-1" podUID=1e830162-e6ef-4be9-b95d-9b3c77530bc9 containerName="config-reloader" containerID="cri-o://03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce" gracePeriod=120
Feb 23 17:13:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:26.418866 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-1" podUID=1e830162-e6ef-4be9-b95d-9b3c77530bc9 containerName="kube-rbac-proxy-metric" containerID="cri-o://e287676c2885d342e89355a473f37ba823f60f7ce4dd3ee7089d108f32da51da" gracePeriod=120
Feb 23 17:13:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:26.418877654Z" level=info msg="Stopping container: 03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce (timeout: 120s)" id=13ab585b-73dc-48ff-9200-f72509ee95e1 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:13:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:26.418909451Z" level=info msg="Stopping container: 58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434 (timeout: 120s)" id=0ec3758f-15f5-4277-9904-769f1820babd name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:13:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:26.418963842Z" level=info msg="Stopping container: e287676c2885d342e89355a473f37ba823f60f7ce4dd3ee7089d108f32da51da (timeout: 120s)" id=935e7e35-e94d-4589-a67e-0e0936f467a5 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:13:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:26.418884205Z" level=info msg="Stopping container: b5c25d6bee1dc58ffa1c378d0eeab297bd11dc2c50a85d3547e113a4652d6dcf (timeout: 120s)" id=3aa527a2-0a11-41cd-95e2-92cfc9c9df4a name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:13:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:26.418897003Z" level=info msg="Stopping container: fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725 (timeout: 120s)" id=749974cf-df00-4de7-b03b-7eb897995be3 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:13:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:26.418920502Z" level=info msg="Stopping container: da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2 (timeout: 120s)" id=06fadfc5-0ef8-4a33-82fb-4fb90f9f08a7 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:13:26 ip-10-0-136-68 systemd[1]: crio-58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434.scope: Succeeded.
Feb 23 17:13:26 ip-10-0-136-68 systemd[1]: crio-58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434.scope: Consumed 55ms CPU time
Feb 23 17:13:26 ip-10-0-136-68 systemd[1]: crio-conmon-58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434.scope: Succeeded.
Feb 23 17:13:26 ip-10-0-136-68 systemd[1]: crio-conmon-58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434.scope: Consumed 23ms CPU time
Feb 23 17:13:26 ip-10-0-136-68 conmon[56096]: conmon fd28eed422a20d57b6fa : container 56109 exited with status 2
Feb 23 17:13:26 ip-10-0-136-68 conmon[56182]: conmon da3eb2b5e326d99ebe1a : container 56194 exited with status 2
Feb 23 17:13:26 ip-10-0-136-68 systemd[1]: crio-fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725.scope: Succeeded.
Feb 23 17:13:26 ip-10-0-136-68 systemd[1]: crio-fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725.scope: Consumed 91ms CPU time
Feb 23 17:13:26 ip-10-0-136-68 systemd[1]: crio-da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2.scope: Succeeded.
Feb 23 17:13:26 ip-10-0-136-68 systemd[1]: crio-da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2.scope: Consumed 62ms CPU time
Feb 23 17:13:26 ip-10-0-136-68 systemd[1]: crio-conmon-fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725.scope: Succeeded.
Feb 23 17:13:26 ip-10-0-136-68 systemd[1]: crio-conmon-fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725.scope: Consumed 24ms CPU time
Feb 23 17:13:26 ip-10-0-136-68 systemd[1]: crio-b5c25d6bee1dc58ffa1c378d0eeab297bd11dc2c50a85d3547e113a4652d6dcf.scope: Succeeded.
Feb 23 17:13:26 ip-10-0-136-68 systemd[1]: crio-b5c25d6bee1dc58ffa1c378d0eeab297bd11dc2c50a85d3547e113a4652d6dcf.scope: Consumed 27ms CPU time
Feb 23 17:13:26 ip-10-0-136-68 systemd[1]: crio-e287676c2885d342e89355a473f37ba823f60f7ce4dd3ee7089d108f32da51da.scope: Succeeded.
Feb 23 17:13:26 ip-10-0-136-68 systemd[1]: crio-e287676c2885d342e89355a473f37ba823f60f7ce4dd3ee7089d108f32da51da.scope: Consumed 54ms CPU time
Feb 23 17:13:26 ip-10-0-136-68 systemd[1]: crio-conmon-b5c25d6bee1dc58ffa1c378d0eeab297bd11dc2c50a85d3547e113a4652d6dcf.scope: Succeeded.
Feb 23 17:13:26 ip-10-0-136-68 systemd[1]: crio-conmon-b5c25d6bee1dc58ffa1c378d0eeab297bd11dc2c50a85d3547e113a4652d6dcf.scope: Consumed 24ms CPU time
Feb 23 17:13:26 ip-10-0-136-68 systemd[1]: crio-conmon-da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2.scope: Succeeded.
Feb 23 17:13:26 ip-10-0-136-68 systemd[1]: crio-conmon-da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2.scope: Consumed 23ms CPU time
Feb 23 17:13:26 ip-10-0-136-68 systemd[1]: crio-conmon-e287676c2885d342e89355a473f37ba823f60f7ce4dd3ee7089d108f32da51da.scope: Succeeded.
Feb 23 17:13:26 ip-10-0-136-68 systemd[1]: crio-conmon-e287676c2885d342e89355a473f37ba823f60f7ce4dd3ee7089d108f32da51da.scope: Consumed 24ms CPU time
Feb 23 17:13:26 ip-10-0-136-68 conmon[56140]: conmon 03b03d687090af20c902 : container 56153 exited with status 2
Feb 23 17:13:26 ip-10-0-136-68 systemd[1]: crio-03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce.scope: Succeeded.
Feb 23 17:13:26 ip-10-0-136-68 systemd[1]: crio-03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce.scope: Consumed 28ms CPU time
Feb 23 17:13:26 ip-10-0-136-68 systemd[1]: crio-conmon-03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce.scope: Succeeded.
Feb 23 17:13:26 ip-10-0-136-68 systemd[1]: crio-conmon-03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce.scope: Consumed 23ms CPU time
Feb 23 17:13:26 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-4a7f46cdf893d00de49bf0f4b44c4e77284d6ad042fd74189593233ebb283d4a-merged.mount: Succeeded.
Feb 23 17:13:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:26.598636848Z" level=info msg="Stopped container e287676c2885d342e89355a473f37ba823f60f7ce4dd3ee7089d108f32da51da: openshift-monitoring/alertmanager-main-1/kube-rbac-proxy-metric" id=935e7e35-e94d-4589-a67e-0e0936f467a5 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:13:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:26.607956022Z" level=info msg="Stopped container 58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434: openshift-monitoring/alertmanager-main-1/kube-rbac-proxy" id=0ec3758f-15f5-4277-9904-769f1820babd name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:13:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:26.618212626Z" level=info msg="Stopped container b5c25d6bee1dc58ffa1c378d0eeab297bd11dc2c50a85d3547e113a4652d6dcf: openshift-monitoring/alertmanager-main-1/prom-label-proxy" id=3aa527a2-0a11-41cd-95e2-92cfc9c9df4a name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:13:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:26.641134959Z" level=info msg="Stopped container da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2: openshift-monitoring/alertmanager-main-1/alertmanager-proxy" id=06fadfc5-0ef8-4a33-82fb-4fb90f9f08a7 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:13:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:26.661683681Z" level=info msg="Stopped container fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725: openshift-monitoring/alertmanager-main-1/alertmanager" id=749974cf-df00-4de7-b03b-7eb897995be3 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:13:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:26.680305464Z" level=info msg="Stopped container 03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce: openshift-monitoring/alertmanager-main-1/config-reloader" id=13ab585b-73dc-48ff-9200-f72509ee95e1 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:13:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:26.680592400Z" level=info msg="Stopping pod sandbox: 7c6c96994776deca879653ce086a42486849bd751719f15b1a7fe21817c40d59" id=e0dcd4e3-b6da-436d-b344-bb7a8a6cc217 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:13:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:26.680806043Z" level=info msg="Got pod network &{Name:alertmanager-main-1 Namespace:openshift-monitoring ID:7c6c96994776deca879653ce086a42486849bd751719f15b1a7fe21817c40d59 UID:1e830162-e6ef-4be9-b95d-9b3c77530bc9 NetNS:/var/run/netns/0753d7af-00f2-412f-ad65-07313c42b159 Networks:[{Name:multus-cni-network Ifname:eth0}] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 17:13:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:26.680932671Z" level=info msg="Deleting pod openshift-monitoring_alertmanager-main-1 from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 17:13:26 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00312|bridge|INFO|bridge br-int: deleted interface 7c6c96994776dec on port 21
Feb 23 17:13:26 ip-10-0-136-68 kernel: device 7c6c96994776dec left promiscuous mode
Feb 23 17:13:27 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-8e74a6d389bf4318091f2d095304e60b71f94a18f279d0aee7291722d52ccefd-merged.mount: Succeeded.
Feb 23 17:13:27 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-fbdce26e8445ba6f96d3ba77012a4cd712296c56e06266a1d60f6ea323e237cd-merged.mount: Succeeded.
Feb 23 17:13:27 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-5572a0cf4db90c227538c6b82c6aa77fb964d013b7f5cd94a0b994b9094b9c4b-merged.mount: Succeeded.
Feb 23 17:13:27 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-4c5f1f0f64f02b43c30b98949d4ce5e72c787baf24020e28ee4464791856d92a-merged.mount: Succeeded.
Feb 23 17:13:27 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-657b19fa500f93cbc01afd19040cae2ad319731768b9b3910a690e4563de7954-merged.mount: Succeeded.
Feb 23 17:13:27 ip-10-0-136-68 crio[2062]: 2023-02-23T17:13:26Z [verbose] Del: openshift-monitoring:alertmanager-main-1:1e830162-e6ef-4be9-b95d-9b3c77530bc9:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","dns":{},"ipam":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxage":5,"logfile-maxbackups":5,"logfile-maxsize":100,"name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay"}
Feb 23 17:13:27 ip-10-0-136-68 crio[2062]: I0223 17:13:26.839382 56565 ovs.go:90] Maximum command line arguments set to: 191102
Feb 23 17:13:27 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-c8af50c5ab2c68292b7afc09794eadc45249cb0b8c9549e30893d538d2cbbc1b-merged.mount: Succeeded.
Feb 23 17:13:27 ip-10-0-136-68 systemd[1]: run-utsns-0753d7af\x2d00f2\x2d412f\x2dad65\x2d07313c42b159.mount: Succeeded.
Feb 23 17:13:27 ip-10-0-136-68 systemd[1]: run-ipcns-0753d7af\x2d00f2\x2d412f\x2dad65\x2d07313c42b159.mount: Succeeded.
Feb 23 17:13:27 ip-10-0-136-68 systemd[1]: run-netns-0753d7af\x2d00f2\x2d412f\x2dad65\x2d07313c42b159.mount: Succeeded.
Feb 23 17:13:27 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:27.350804424Z" level=info msg="Stopped pod sandbox: 7c6c96994776deca879653ce086a42486849bd751719f15b1a7fe21817c40d59" id=e0dcd4e3-b6da-436d-b344-bb7a8a6cc217 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.356788 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-1_1e830162-e6ef-4be9-b95d-9b3c77530bc9/alertmanager-proxy/0.log"
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.357012 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-1_1e830162-e6ef-4be9-b95d-9b3c77530bc9/config-reloader/0.log"
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.357235 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-1_1e830162-e6ef-4be9-b95d-9b3c77530bc9/alertmanager/0.log"
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.422039 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-1_1e830162-e6ef-4be9-b95d-9b3c77530bc9/alertmanager-proxy/0.log"
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.422311 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-1_1e830162-e6ef-4be9-b95d-9b3c77530bc9/config-reloader/0.log"
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.422696 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-1_1e830162-e6ef-4be9-b95d-9b3c77530bc9/alertmanager/0.log"
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.422738 2112 generic.go:296] "Generic (PLEG): container finished" podID=1e830162-e6ef-4be9-b95d-9b3c77530bc9 containerID="b5c25d6bee1dc58ffa1c378d0eeab297bd11dc2c50a85d3547e113a4652d6dcf" exitCode=0
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.422754 2112 generic.go:296] "Generic (PLEG): container finished" podID=1e830162-e6ef-4be9-b95d-9b3c77530bc9 containerID="e287676c2885d342e89355a473f37ba823f60f7ce4dd3ee7089d108f32da51da" exitCode=0
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.422767 2112 generic.go:296] "Generic (PLEG): container finished" podID=1e830162-e6ef-4be9-b95d-9b3c77530bc9 containerID="58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434" exitCode=0
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.422781 2112 generic.go:296] "Generic (PLEG): container finished" podID=1e830162-e6ef-4be9-b95d-9b3c77530bc9 containerID="da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2" exitCode=2
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.422794 2112 generic.go:296] "Generic (PLEG): container finished" podID=1e830162-e6ef-4be9-b95d-9b3c77530bc9 containerID="03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce" exitCode=2
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.422809 2112 generic.go:296] "Generic (PLEG): container finished" podID=1e830162-e6ef-4be9-b95d-9b3c77530bc9 containerID="fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725" exitCode=2
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.422834 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:1e830162-e6ef-4be9-b95d-9b3c77530bc9 Type:ContainerDied Data:b5c25d6bee1dc58ffa1c378d0eeab297bd11dc2c50a85d3547e113a4652d6dcf}
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.422859 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:1e830162-e6ef-4be9-b95d-9b3c77530bc9 Type:ContainerDied Data:e287676c2885d342e89355a473f37ba823f60f7ce4dd3ee7089d108f32da51da}
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.422876 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:1e830162-e6ef-4be9-b95d-9b3c77530bc9 Type:ContainerDied Data:58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434}
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.422892 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:1e830162-e6ef-4be9-b95d-9b3c77530bc9 Type:ContainerDied Data:da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2}
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.422907 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:1e830162-e6ef-4be9-b95d-9b3c77530bc9 Type:ContainerDied Data:03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce}
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.422920 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:1e830162-e6ef-4be9-b95d-9b3c77530bc9 Type:ContainerDied Data:fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725}
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.422934 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:1e830162-e6ef-4be9-b95d-9b3c77530bc9 Type:ContainerDied Data:7c6c96994776deca879653ce086a42486849bd751719f15b1a7fe21817c40d59}
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.422951 2112 scope.go:115] "RemoveContainer" containerID="b5c25d6bee1dc58ffa1c378d0eeab297bd11dc2c50a85d3547e113a4652d6dcf"
Feb 23 17:13:27 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:27.423634245Z" level=info msg="Removing container: b5c25d6bee1dc58ffa1c378d0eeab297bd11dc2c50a85d3547e113a4652d6dcf" id=a486ad76-ed1f-475a-b9e5-5181626eb631 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:13:27 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:27.442444905Z" level=info msg="Removed container b5c25d6bee1dc58ffa1c378d0eeab297bd11dc2c50a85d3547e113a4652d6dcf: openshift-monitoring/alertmanager-main-1/prom-label-proxy" id=a486ad76-ed1f-475a-b9e5-5181626eb631 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.442570 2112 scope.go:115] "RemoveContainer" containerID="e287676c2885d342e89355a473f37ba823f60f7ce4dd3ee7089d108f32da51da"
Feb 23 17:13:27 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:27.443188210Z" level=info msg="Removing container: e287676c2885d342e89355a473f37ba823f60f7ce4dd3ee7089d108f32da51da" id=bad77353-ada3-4201-bc05-c3ae771dba7c name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:13:27 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:27.461219332Z" level=info msg="Removed container e287676c2885d342e89355a473f37ba823f60f7ce4dd3ee7089d108f32da51da: openshift-monitoring/alertmanager-main-1/kube-rbac-proxy-metric" id=bad77353-ada3-4201-bc05-c3ae771dba7c name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.461351 2112 scope.go:115] "RemoveContainer" containerID="58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434"
Feb 23 17:13:27 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:27.461996209Z" level=info msg="Removing container: 58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434" id=af7165b9-7ca0-4598-8cd5-617955a61967 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:13:27 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:27.478270203Z" level=info msg="Removed container 58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434: openshift-monitoring/alertmanager-main-1/kube-rbac-proxy" id=af7165b9-7ca0-4598-8cd5-617955a61967 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.478402 2112 scope.go:115] "RemoveContainer" containerID="da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2"
Feb 23 17:13:27 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:27.479046740Z" level=info msg="Removing container: da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2" id=c4446b12-5c9f-4404-a321-107cba1ab09d name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:13:27 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:27.501828259Z" level=info msg="Removed container da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2: openshift-monitoring/alertmanager-main-1/alertmanager-proxy" id=c4446b12-5c9f-4404-a321-107cba1ab09d name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.501999 2112 scope.go:115] "RemoveContainer" containerID="03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce"
Feb 23 17:13:27 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:27.502764311Z" level=info msg="Removing container: 03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce" id=e9cf0bc4-16cf-4c85-8a6e-015b4c965c6f name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.521365 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1e830162-e6ef-4be9-b95d-9b3c77530bc9-web-config\") pod \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\" (UID: \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\") "
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.521419 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e830162-e6ef-4be9-b95d-9b3c77530bc9-alertmanager-trusted-ca-bundle\") pod \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\" (UID: \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\") "
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.521458 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1e830162-e6ef-4be9-b95d-9b3c77530bc9-config-out\") pod \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\" (UID: \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\") "
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.521498 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqqnt\" (UniqueName: \"kubernetes.io/projected/1e830162-e6ef-4be9-b95d-9b3c77530bc9-kube-api-access-sqqnt\") pod \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\" (UID: \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\") "
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.521532 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/1e830162-e6ef-4be9-b95d-9b3c77530bc9-secret-alertmanager-kube-rbac-proxy-metric\") pod \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\" (UID: \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\") "
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.521559 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1e830162-e6ef-4be9-b95d-9b3c77530bc9-metrics-client-ca\") pod \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\" (UID: \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\") "
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.521591 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/1e830162-e6ef-4be9-b95d-9b3c77530bc9-secret-alertmanager-kube-rbac-proxy\") pod \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\" (UID: \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\") "
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.521621 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-main-proxy\" (UniqueName: \"kubernetes.io/secret/1e830162-e6ef-4be9-b95d-9b3c77530bc9-secret-alertmanager-main-proxy\") pod \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\" (UID: \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\") "
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.521973 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1e830162-e6ef-4be9-b95d-9b3c77530bc9-tls-assets\") pod \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\" (UID: \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\") "
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.522012 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/1e830162-e6ef-4be9-b95d-9b3c77530bc9-config-volume\") pod \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\" (UID: \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\") "
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.522043 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/1e830162-e6ef-4be9-b95d-9b3c77530bc9-secret-alertmanager-main-tls\") pod \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\" (UID: \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\") "
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.522072 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/1e830162-e6ef-4be9-b95d-9b3c77530bc9-alertmanager-main-db\") pod \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\" (UID: \"1e830162-e6ef-4be9-b95d-9b3c77530bc9\") "
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:13:27.522339 2112 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/1e830162-e6ef-4be9-b95d-9b3c77530bc9/volumes/kubernetes.io~empty-dir/alertmanager-main-db: clearQuota called, but quotas disabled
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.522379 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e830162-e6ef-4be9-b95d-9b3c77530bc9-alertmanager-main-db" (OuterVolumeSpecName: "alertmanager-main-db") pod "1e830162-e6ef-4be9-b95d-9b3c77530bc9" (UID: "1e830162-e6ef-4be9-b95d-9b3c77530bc9"). InnerVolumeSpecName "alertmanager-main-db". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:13:27.522717 2112 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/1e830162-e6ef-4be9-b95d-9b3c77530bc9/volumes/kubernetes.io~configmap/metrics-client-ca: clearQuota called, but quotas disabled
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.522897 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e830162-e6ef-4be9-b95d-9b3c77530bc9-metrics-client-ca" (OuterVolumeSpecName: "metrics-client-ca") pod "1e830162-e6ef-4be9-b95d-9b3c77530bc9" (UID: "1e830162-e6ef-4be9-b95d-9b3c77530bc9"). InnerVolumeSpecName "metrics-client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:13:27.523033 2112 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/1e830162-e6ef-4be9-b95d-9b3c77530bc9/volumes/kubernetes.io~configmap/alertmanager-trusted-ca-bundle: clearQuota called, but quotas disabled
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.523235 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e830162-e6ef-4be9-b95d-9b3c77530bc9-alertmanager-trusted-ca-bundle" (OuterVolumeSpecName: "alertmanager-trusted-ca-bundle") pod "1e830162-e6ef-4be9-b95d-9b3c77530bc9" (UID: "1e830162-e6ef-4be9-b95d-9b3c77530bc9"). InnerVolumeSpecName "alertmanager-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:13:27.523350 2112 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/1e830162-e6ef-4be9-b95d-9b3c77530bc9/volumes/kubernetes.io~empty-dir/config-out: clearQuota called, but quotas disabled
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.523432 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e830162-e6ef-4be9-b95d-9b3c77530bc9-config-out" (OuterVolumeSpecName: "config-out") pod "1e830162-e6ef-4be9-b95d-9b3c77530bc9" (UID: "1e830162-e6ef-4be9-b95d-9b3c77530bc9"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 17:13:27 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00313|connmgr|INFO|br-ex<->unix#1148: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.537159 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e830162-e6ef-4be9-b95d-9b3c77530bc9-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "1e830162-e6ef-4be9-b95d-9b3c77530bc9" (UID: "1e830162-e6ef-4be9-b95d-9b3c77530bc9"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.537861 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e830162-e6ef-4be9-b95d-9b3c77530bc9-kube-api-access-sqqnt" (OuterVolumeSpecName: "kube-api-access-sqqnt") pod "1e830162-e6ef-4be9-b95d-9b3c77530bc9" (UID: "1e830162-e6ef-4be9-b95d-9b3c77530bc9"). InnerVolumeSpecName "kube-api-access-sqqnt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.539184 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e830162-e6ef-4be9-b95d-9b3c77530bc9-config-volume" (OuterVolumeSpecName: "config-volume") pod "1e830162-e6ef-4be9-b95d-9b3c77530bc9" (UID: "1e830162-e6ef-4be9-b95d-9b3c77530bc9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.540030 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e830162-e6ef-4be9-b95d-9b3c77530bc9-secret-alertmanager-kube-rbac-proxy" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy") pod "1e830162-e6ef-4be9-b95d-9b3c77530bc9" (UID: "1e830162-e6ef-4be9-b95d-9b3c77530bc9"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.541818 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e830162-e6ef-4be9-b95d-9b3c77530bc9-secret-alertmanager-main-proxy" (OuterVolumeSpecName: "secret-alertmanager-main-proxy") pod "1e830162-e6ef-4be9-b95d-9b3c77530bc9" (UID: "1e830162-e6ef-4be9-b95d-9b3c77530bc9"). InnerVolumeSpecName "secret-alertmanager-main-proxy". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.543244 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e830162-e6ef-4be9-b95d-9b3c77530bc9-secret-alertmanager-main-tls" (OuterVolumeSpecName: "secret-alertmanager-main-tls") pod "1e830162-e6ef-4be9-b95d-9b3c77530bc9" (UID: "1e830162-e6ef-4be9-b95d-9b3c77530bc9"). InnerVolumeSpecName "secret-alertmanager-main-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.543479 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e830162-e6ef-4be9-b95d-9b3c77530bc9-secret-alertmanager-kube-rbac-proxy-metric" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy-metric") pod "1e830162-e6ef-4be9-b95d-9b3c77530bc9" (UID: "1e830162-e6ef-4be9-b95d-9b3c77530bc9"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy-metric". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:13:27 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:27.547371752Z" level=info msg="Removed container 03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce: openshift-monitoring/alertmanager-main-1/config-reloader" id=e9cf0bc4-16cf-4c85-8a6e-015b4c965c6f name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.547608 2112 scope.go:115] "RemoveContainer" containerID="fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725"
Feb 23 17:13:27 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:27.548491619Z" level=info msg="Removing container: fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725" id=f1c7651c-3e9c-4433-a214-9a544f093885 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.550134 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e830162-e6ef-4be9-b95d-9b3c77530bc9-web-config" (OuterVolumeSpecName: "web-config") pod "1e830162-e6ef-4be9-b95d-9b3c77530bc9" (UID: "1e830162-e6ef-4be9-b95d-9b3c77530bc9"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:13:27 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:27.567834289Z" level=info msg="Removed container fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725: openshift-monitoring/alertmanager-main-1/alertmanager" id=f1c7651c-3e9c-4433-a214-9a544f093885 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.568037 2112 scope.go:115] "RemoveContainer" containerID="b5c25d6bee1dc58ffa1c378d0eeab297bd11dc2c50a85d3547e113a4652d6dcf"
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:13:27.568279 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5c25d6bee1dc58ffa1c378d0eeab297bd11dc2c50a85d3547e113a4652d6dcf\": container with ID starting with b5c25d6bee1dc58ffa1c378d0eeab297bd11dc2c50a85d3547e113a4652d6dcf not found: ID does not exist" containerID="b5c25d6bee1dc58ffa1c378d0eeab297bd11dc2c50a85d3547e113a4652d6dcf"
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.568315 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:b5c25d6bee1dc58ffa1c378d0eeab297bd11dc2c50a85d3547e113a4652d6dcf} err="failed to get container status \"b5c25d6bee1dc58ffa1c378d0eeab297bd11dc2c50a85d3547e113a4652d6dcf\": rpc error: code = NotFound desc = could not find container \"b5c25d6bee1dc58ffa1c378d0eeab297bd11dc2c50a85d3547e113a4652d6dcf\": container with ID starting with b5c25d6bee1dc58ffa1c378d0eeab297bd11dc2c50a85d3547e113a4652d6dcf not found: ID does not exist"
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.568328 2112 scope.go:115] "RemoveContainer" containerID="e287676c2885d342e89355a473f37ba823f60f7ce4dd3ee7089d108f32da51da"
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:13:27.568522 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e287676c2885d342e89355a473f37ba823f60f7ce4dd3ee7089d108f32da51da\": container with ID starting with e287676c2885d342e89355a473f37ba823f60f7ce4dd3ee7089d108f32da51da not found: ID does not exist" containerID="e287676c2885d342e89355a473f37ba823f60f7ce4dd3ee7089d108f32da51da"
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.568545 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:e287676c2885d342e89355a473f37ba823f60f7ce4dd3ee7089d108f32da51da} err="failed to get container status \"e287676c2885d342e89355a473f37ba823f60f7ce4dd3ee7089d108f32da51da\": rpc error: code = NotFound desc = could not find container \"e287676c2885d342e89355a473f37ba823f60f7ce4dd3ee7089d108f32da51da\": container with ID starting with e287676c2885d342e89355a473f37ba823f60f7ce4dd3ee7089d108f32da51da not found: ID does not exist"
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.568553 2112 scope.go:115] "RemoveContainer" containerID="58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434"
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:13:27.568781 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434\": container with ID starting with 58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434 not found: ID does not exist" containerID="58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434"
Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.568806 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434} err="failed to get container status \"58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434\": rpc error: code = NotFound desc = could not find
container \"58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434\": container with ID starting with 58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434 not found: ID does not exist" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.568814 2112 scope.go:115] "RemoveContainer" containerID="da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:13:27.569022 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2\": container with ID starting with da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2 not found: ID does not exist" containerID="da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.569057 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2} err="failed to get container status \"da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2\": rpc error: code = NotFound desc = could not find container \"da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2\": container with ID starting with da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2 not found: ID does not exist" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.569070 2112 scope.go:115] "RemoveContainer" containerID="03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:13:27.569284 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce\": container with ID starting with 
03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce not found: ID does not exist" containerID="03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.569315 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce} err="failed to get container status \"03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce\": rpc error: code = NotFound desc = could not find container \"03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce\": container with ID starting with 03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce not found: ID does not exist" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.569327 2112 scope.go:115] "RemoveContainer" containerID="fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:13:27.569527 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725\": container with ID starting with fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725 not found: ID does not exist" containerID="fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.569545 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725} err="failed to get container status \"fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725\": rpc error: code = NotFound desc = could not find container \"fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725\": container with ID starting with 
fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725 not found: ID does not exist" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.569555 2112 scope.go:115] "RemoveContainer" containerID="b5c25d6bee1dc58ffa1c378d0eeab297bd11dc2c50a85d3547e113a4652d6dcf" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.569832 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:b5c25d6bee1dc58ffa1c378d0eeab297bd11dc2c50a85d3547e113a4652d6dcf} err="failed to get container status \"b5c25d6bee1dc58ffa1c378d0eeab297bd11dc2c50a85d3547e113a4652d6dcf\": rpc error: code = NotFound desc = could not find container \"b5c25d6bee1dc58ffa1c378d0eeab297bd11dc2c50a85d3547e113a4652d6dcf\": container with ID starting with b5c25d6bee1dc58ffa1c378d0eeab297bd11dc2c50a85d3547e113a4652d6dcf not found: ID does not exist" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.569850 2112 scope.go:115] "RemoveContainer" containerID="e287676c2885d342e89355a473f37ba823f60f7ce4dd3ee7089d108f32da51da" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.570088 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:e287676c2885d342e89355a473f37ba823f60f7ce4dd3ee7089d108f32da51da} err="failed to get container status \"e287676c2885d342e89355a473f37ba823f60f7ce4dd3ee7089d108f32da51da\": rpc error: code = NotFound desc = could not find container \"e287676c2885d342e89355a473f37ba823f60f7ce4dd3ee7089d108f32da51da\": container with ID starting with e287676c2885d342e89355a473f37ba823f60f7ce4dd3ee7089d108f32da51da not found: ID does not exist" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.570104 2112 scope.go:115] "RemoveContainer" containerID="58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.570355 2112 pod_container_deletor.go:52] "DeleteContainer returned error" 
containerID={Type:cri-o ID:58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434} err="failed to get container status \"58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434\": rpc error: code = NotFound desc = could not find container \"58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434\": container with ID starting with 58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434 not found: ID does not exist" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.570368 2112 scope.go:115] "RemoveContainer" containerID="da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.570558 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2} err="failed to get container status \"da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2\": rpc error: code = NotFound desc = could not find container \"da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2\": container with ID starting with da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2 not found: ID does not exist" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.570594 2112 scope.go:115] "RemoveContainer" containerID="03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.570806 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce} err="failed to get container status \"03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce\": rpc error: code = NotFound desc = could not find container \"03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce\": container with ID starting with 03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce 
not found: ID does not exist" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.570822 2112 scope.go:115] "RemoveContainer" containerID="fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.570976 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725} err="failed to get container status \"fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725\": rpc error: code = NotFound desc = could not find container \"fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725\": container with ID starting with fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725 not found: ID does not exist" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.570994 2112 scope.go:115] "RemoveContainer" containerID="b5c25d6bee1dc58ffa1c378d0eeab297bd11dc2c50a85d3547e113a4652d6dcf" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.571199 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:b5c25d6bee1dc58ffa1c378d0eeab297bd11dc2c50a85d3547e113a4652d6dcf} err="failed to get container status \"b5c25d6bee1dc58ffa1c378d0eeab297bd11dc2c50a85d3547e113a4652d6dcf\": rpc error: code = NotFound desc = could not find container \"b5c25d6bee1dc58ffa1c378d0eeab297bd11dc2c50a85d3547e113a4652d6dcf\": container with ID starting with b5c25d6bee1dc58ffa1c378d0eeab297bd11dc2c50a85d3547e113a4652d6dcf not found: ID does not exist" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.571217 2112 scope.go:115] "RemoveContainer" containerID="e287676c2885d342e89355a473f37ba823f60f7ce4dd3ee7089d108f32da51da" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.571443 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o 
ID:e287676c2885d342e89355a473f37ba823f60f7ce4dd3ee7089d108f32da51da} err="failed to get container status \"e287676c2885d342e89355a473f37ba823f60f7ce4dd3ee7089d108f32da51da\": rpc error: code = NotFound desc = could not find container \"e287676c2885d342e89355a473f37ba823f60f7ce4dd3ee7089d108f32da51da\": container with ID starting with e287676c2885d342e89355a473f37ba823f60f7ce4dd3ee7089d108f32da51da not found: ID does not exist" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.571457 2112 scope.go:115] "RemoveContainer" containerID="58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.571707 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434} err="failed to get container status \"58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434\": rpc error: code = NotFound desc = could not find container \"58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434\": container with ID starting with 58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434 not found: ID does not exist" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.571721 2112 scope.go:115] "RemoveContainer" containerID="da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.571892 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2} err="failed to get container status \"da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2\": rpc error: code = NotFound desc = could not find container \"da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2\": container with ID starting with da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2 not found: ID does not 
exist" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.571911 2112 scope.go:115] "RemoveContainer" containerID="03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.572089 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce} err="failed to get container status \"03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce\": rpc error: code = NotFound desc = could not find container \"03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce\": container with ID starting with 03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce not found: ID does not exist" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.572106 2112 scope.go:115] "RemoveContainer" containerID="fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.572334 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725} err="failed to get container status \"fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725\": rpc error: code = NotFound desc = could not find container \"fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725\": container with ID starting with fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725 not found: ID does not exist" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.572350 2112 scope.go:115] "RemoveContainer" containerID="b5c25d6bee1dc58ffa1c378d0eeab297bd11dc2c50a85d3547e113a4652d6dcf" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.572501 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o 
ID:b5c25d6bee1dc58ffa1c378d0eeab297bd11dc2c50a85d3547e113a4652d6dcf} err="failed to get container status \"b5c25d6bee1dc58ffa1c378d0eeab297bd11dc2c50a85d3547e113a4652d6dcf\": rpc error: code = NotFound desc = could not find container \"b5c25d6bee1dc58ffa1c378d0eeab297bd11dc2c50a85d3547e113a4652d6dcf\": container with ID starting with b5c25d6bee1dc58ffa1c378d0eeab297bd11dc2c50a85d3547e113a4652d6dcf not found: ID does not exist" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.572516 2112 scope.go:115] "RemoveContainer" containerID="e287676c2885d342e89355a473f37ba823f60f7ce4dd3ee7089d108f32da51da" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.572706 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:e287676c2885d342e89355a473f37ba823f60f7ce4dd3ee7089d108f32da51da} err="failed to get container status \"e287676c2885d342e89355a473f37ba823f60f7ce4dd3ee7089d108f32da51da\": rpc error: code = NotFound desc = could not find container \"e287676c2885d342e89355a473f37ba823f60f7ce4dd3ee7089d108f32da51da\": container with ID starting with e287676c2885d342e89355a473f37ba823f60f7ce4dd3ee7089d108f32da51da not found: ID does not exist" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.572723 2112 scope.go:115] "RemoveContainer" containerID="58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.572902 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434} err="failed to get container status \"58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434\": rpc error: code = NotFound desc = could not find container \"58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434\": container with ID starting with 58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434 not found: ID does not 
exist" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.572917 2112 scope.go:115] "RemoveContainer" containerID="da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.573137 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2} err="failed to get container status \"da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2\": rpc error: code = NotFound desc = could not find container \"da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2\": container with ID starting with da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2 not found: ID does not exist" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.573149 2112 scope.go:115] "RemoveContainer" containerID="03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.573367 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce} err="failed to get container status \"03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce\": rpc error: code = NotFound desc = could not find container \"03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce\": container with ID starting with 03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce not found: ID does not exist" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.573383 2112 scope.go:115] "RemoveContainer" containerID="fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.573604 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o 
ID:fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725} err="failed to get container status \"fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725\": rpc error: code = NotFound desc = could not find container \"fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725\": container with ID starting with fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725 not found: ID does not exist" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.573633 2112 scope.go:115] "RemoveContainer" containerID="b5c25d6bee1dc58ffa1c378d0eeab297bd11dc2c50a85d3547e113a4652d6dcf" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.573839 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:b5c25d6bee1dc58ffa1c378d0eeab297bd11dc2c50a85d3547e113a4652d6dcf} err="failed to get container status \"b5c25d6bee1dc58ffa1c378d0eeab297bd11dc2c50a85d3547e113a4652d6dcf\": rpc error: code = NotFound desc = could not find container \"b5c25d6bee1dc58ffa1c378d0eeab297bd11dc2c50a85d3547e113a4652d6dcf\": container with ID starting with b5c25d6bee1dc58ffa1c378d0eeab297bd11dc2c50a85d3547e113a4652d6dcf not found: ID does not exist" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.573852 2112 scope.go:115] "RemoveContainer" containerID="e287676c2885d342e89355a473f37ba823f60f7ce4dd3ee7089d108f32da51da" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.574073 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:e287676c2885d342e89355a473f37ba823f60f7ce4dd3ee7089d108f32da51da} err="failed to get container status \"e287676c2885d342e89355a473f37ba823f60f7ce4dd3ee7089d108f32da51da\": rpc error: code = NotFound desc = could not find container \"e287676c2885d342e89355a473f37ba823f60f7ce4dd3ee7089d108f32da51da\": container with ID starting with e287676c2885d342e89355a473f37ba823f60f7ce4dd3ee7089d108f32da51da not found: ID does not 
exist" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.574088 2112 scope.go:115] "RemoveContainer" containerID="58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.574301 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434} err="failed to get container status \"58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434\": rpc error: code = NotFound desc = could not find container \"58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434\": container with ID starting with 58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434 not found: ID does not exist" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.574314 2112 scope.go:115] "RemoveContainer" containerID="da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.574453 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2} err="failed to get container status \"da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2\": rpc error: code = NotFound desc = could not find container \"da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2\": container with ID starting with da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2 not found: ID does not exist" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.574473 2112 scope.go:115] "RemoveContainer" containerID="03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.574712 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o 
ID:03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce} err="failed to get container status \"03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce\": rpc error: code = NotFound desc = could not find container \"03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce\": container with ID starting with 03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce not found: ID does not exist" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.574728 2112 scope.go:115] "RemoveContainer" containerID="fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.574960 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725} err="failed to get container status \"fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725\": rpc error: code = NotFound desc = could not find container \"fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725\": container with ID starting with fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725 not found: ID does not exist" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.574978 2112 scope.go:115] "RemoveContainer" containerID="b5c25d6bee1dc58ffa1c378d0eeab297bd11dc2c50a85d3547e113a4652d6dcf" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.575160 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:b5c25d6bee1dc58ffa1c378d0eeab297bd11dc2c50a85d3547e113a4652d6dcf} err="failed to get container status \"b5c25d6bee1dc58ffa1c378d0eeab297bd11dc2c50a85d3547e113a4652d6dcf\": rpc error: code = NotFound desc = could not find container \"b5c25d6bee1dc58ffa1c378d0eeab297bd11dc2c50a85d3547e113a4652d6dcf\": container with ID starting with b5c25d6bee1dc58ffa1c378d0eeab297bd11dc2c50a85d3547e113a4652d6dcf not found: ID does not 
exist" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.575174 2112 scope.go:115] "RemoveContainer" containerID="e287676c2885d342e89355a473f37ba823f60f7ce4dd3ee7089d108f32da51da" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.575338 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:e287676c2885d342e89355a473f37ba823f60f7ce4dd3ee7089d108f32da51da} err="failed to get container status \"e287676c2885d342e89355a473f37ba823f60f7ce4dd3ee7089d108f32da51da\": rpc error: code = NotFound desc = could not find container \"e287676c2885d342e89355a473f37ba823f60f7ce4dd3ee7089d108f32da51da\": container with ID starting with e287676c2885d342e89355a473f37ba823f60f7ce4dd3ee7089d108f32da51da not found: ID does not exist" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.575353 2112 scope.go:115] "RemoveContainer" containerID="58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.575569 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434} err="failed to get container status \"58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434\": rpc error: code = NotFound desc = could not find container \"58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434\": container with ID starting with 58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434 not found: ID does not exist" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.575580 2112 scope.go:115] "RemoveContainer" containerID="da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.575761 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o 
ID:da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2} err="failed to get container status \"da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2\": rpc error: code = NotFound desc = could not find container \"da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2\": container with ID starting with da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2 not found: ID does not exist" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.575780 2112 scope.go:115] "RemoveContainer" containerID="03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.575914 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce} err="failed to get container status \"03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce\": rpc error: code = NotFound desc = could not find container \"03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce\": container with ID starting with 03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce not found: ID does not exist" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.575928 2112 scope.go:115] "RemoveContainer" containerID="fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.576038 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725} err="failed to get container status \"fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725\": rpc error: code = NotFound desc = could not find container \"fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725\": container with ID starting with fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725 not found: ID does not 
exist" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.622817 2112 reconciler.go:399] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1e830162-e6ef-4be9-b95d-9b3c77530bc9-web-config\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.622847 2112 reconciler.go:399] "Volume detached for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e830162-e6ef-4be9-b95d-9b3c77530bc9-alertmanager-trusted-ca-bundle\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.622862 2112 reconciler.go:399] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1e830162-e6ef-4be9-b95d-9b3c77530bc9-config-out\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.622876 2112 reconciler.go:399] "Volume detached for volume \"kube-api-access-sqqnt\" (UniqueName: \"kubernetes.io/projected/1e830162-e6ef-4be9-b95d-9b3c77530bc9-kube-api-access-sqqnt\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.622892 2112 reconciler.go:399] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/1e830162-e6ef-4be9-b95d-9b3c77530bc9-secret-alertmanager-kube-rbac-proxy-metric\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.622905 2112 reconciler.go:399] "Volume detached for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1e830162-e6ef-4be9-b95d-9b3c77530bc9-metrics-client-ca\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:13:27 ip-10-0-136-68 
kubenswrapper[2112]: I0223 17:13:27.622920 2112 reconciler.go:399] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/1e830162-e6ef-4be9-b95d-9b3c77530bc9-secret-alertmanager-kube-rbac-proxy\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.622935 2112 reconciler.go:399] "Volume detached for volume \"secret-alertmanager-main-proxy\" (UniqueName: \"kubernetes.io/secret/1e830162-e6ef-4be9-b95d-9b3c77530bc9-secret-alertmanager-main-proxy\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.622951 2112 reconciler.go:399] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1e830162-e6ef-4be9-b95d-9b3c77530bc9-tls-assets\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.622967 2112 reconciler.go:399] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/1e830162-e6ef-4be9-b95d-9b3c77530bc9-config-volume\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.622982 2112 reconciler.go:399] "Volume detached for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/1e830162-e6ef-4be9-b95d-9b3c77530bc9-secret-alertmanager-main-tls\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.622998 2112 reconciler.go:399] "Volume detached for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/1e830162-e6ef-4be9-b95d-9b3c77530bc9-alertmanager-main-db\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:13:27 ip-10-0-136-68 systemd[1]: Removed slice libcontainer 
container kubepods-burstable-pod1e830162_e6ef_4be9_b95d_9b3c77530bc9.slice. Feb 23 17:13:27 ip-10-0-136-68 systemd[1]: kubepods-burstable-pod1e830162_e6ef_4be9_b95d_9b3c77530bc9.slice: Consumed 463ms CPU time Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.784085 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-monitoring/alertmanager-main-1] Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.800876 2112 kubelet.go:2129] "SyncLoop REMOVE" source="api" pods=[openshift-monitoring/alertmanager-main-1] Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.962203 2112 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-monitoring/alertmanager-main-1] Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.962251 2112 topology_manager.go:205] "Topology Admit Handler" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:13:27.962316 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1e830162-e6ef-4be9-b95d-9b3c77530bc9" containerName="kube-rbac-proxy" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.962326 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e830162-e6ef-4be9-b95d-9b3c77530bc9" containerName="kube-rbac-proxy" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:13:27.962338 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1e830162-e6ef-4be9-b95d-9b3c77530bc9" containerName="alertmanager" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.962344 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e830162-e6ef-4be9-b95d-9b3c77530bc9" containerName="alertmanager" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:13:27.962353 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1e830162-e6ef-4be9-b95d-9b3c77530bc9" containerName="prom-label-proxy" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.962360 2112 
state_mem.go:107] "Deleted CPUSet assignment" podUID="1e830162-e6ef-4be9-b95d-9b3c77530bc9" containerName="prom-label-proxy" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:13:27.962369 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1e830162-e6ef-4be9-b95d-9b3c77530bc9" containerName="alertmanager-proxy" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.962375 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e830162-e6ef-4be9-b95d-9b3c77530bc9" containerName="alertmanager-proxy" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:13:27.962384 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1e830162-e6ef-4be9-b95d-9b3c77530bc9" containerName="config-reloader" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.962392 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e830162-e6ef-4be9-b95d-9b3c77530bc9" containerName="config-reloader" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:13:27.962402 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1e830162-e6ef-4be9-b95d-9b3c77530bc9" containerName="kube-rbac-proxy-metric" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.962409 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e830162-e6ef-4be9-b95d-9b3c77530bc9" containerName="kube-rbac-proxy-metric" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.962460 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="1e830162-e6ef-4be9-b95d-9b3c77530bc9" containerName="kube-rbac-proxy" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.962470 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="1e830162-e6ef-4be9-b95d-9b3c77530bc9" containerName="config-reloader" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.962479 2112 memory_manager.go:345] "RemoveStaleState removing state" 
podUID="1e830162-e6ef-4be9-b95d-9b3c77530bc9" containerName="kube-rbac-proxy-metric" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.962488 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="1e830162-e6ef-4be9-b95d-9b3c77530bc9" containerName="alertmanager-proxy" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.962497 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="1e830162-e6ef-4be9-b95d-9b3c77530bc9" containerName="prom-label-proxy" Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.962508 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="1e830162-e6ef-4be9-b95d-9b3c77530bc9" containerName="alertmanager" Feb 23 17:13:27 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-podcd707766_c226_46f5_b391_aa1689d95e81.slice. Feb 23 17:13:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:27.989951 2112 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-monitoring/alertmanager-main-1] Feb 23 17:13:28 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:28.068851 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-monitoring/alertmanager-main-1] Feb 23 17:13:28 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:28.117331 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-1" podUID=1e830162-e6ef-4be9-b95d-9b3c77530bc9 containerName="alertmanager" containerID="cri-o://fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725" gracePeriod=1 Feb 23 17:13:28 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:28.117349 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-1" podUID=1e830162-e6ef-4be9-b95d-9b3c77530bc9 containerName="alertmanager-proxy" containerID="cri-o://da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2" gracePeriod=1 Feb 23 17:13:28 
ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:28.117398 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-1" podUID=1e830162-e6ef-4be9-b95d-9b3c77530bc9 containerName="kube-rbac-proxy" containerID="cri-o://58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434" gracePeriod=1 Feb 23 17:13:28 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:28.117409 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-1" podUID=1e830162-e6ef-4be9-b95d-9b3c77530bc9 containerName="config-reloader" containerID="cri-o://03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce" gracePeriod=1 Feb 23 17:13:28 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:28.117519187Z" level=info msg="Stopping container: da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2 (timeout: 1s)" id=6bb3cc46-971a-4b7f-a3ab-2b67d8d38228 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:13:28 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:28.117596836Z" level=info msg="Stopping container: 03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce (timeout: 1s)" id=7ba98473-12e5-44ab-9572-751f427f8e38 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:13:28 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:28.117620082Z" level=info msg="Stopping container: 58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434 (timeout: 1s)" id=c5acfe95-fc10-4cc8-af2d-541d1a738cc2 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:13:28 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:28.117605991Z" level=info msg="Stopping container: fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725 (timeout: 1s)" id=8fdb64ed-5a89-4a28-8984-bc228cd45266 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:13:28 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:28.117963646Z" level=info msg="Stopping pod sandbox: 
7c6c96994776deca879653ce086a42486849bd751719f15b1a7fe21817c40d59" id=88d2e64d-fc93-4dd8-975e-0d81c6fa96fe name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:13:28 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:28.118012233Z" level=info msg="Stopped pod sandbox (already stopped): 7c6c96994776deca879653ce086a42486849bd751719f15b1a7fe21817c40d59" id=88d2e64d-fc93-4dd8-975e-0d81c6fa96fe name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:13:28 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:13:28.117782 2112 remote_runtime.go:505] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434\": container with ID starting with 58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434 not found: ID does not exist" containerID="58462d29187aae061ec464c4a8dc3abb8f50d20e92ab6580d0eb71f1d244f434" Feb 23 17:13:28 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:13:28.117817 2112 remote_runtime.go:505] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2\": container with ID starting with da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2 not found: ID does not exist" containerID="da3eb2b5e326d99ebe1aab422168a87b28ecd5482c7f728b81076ff74639a9d2" Feb 23 17:13:28 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:13:28.117824 2112 remote_runtime.go:505] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce\": container with ID starting with 03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce not found: ID does not exist" containerID="03b03d687090af20c902f9399cdba7b6501f448ab06b32e4569f7dc981db36ce" Feb 23 17:13:28 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:13:28.117869 2112 
remote_runtime.go:505] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725\": container with ID starting with fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725 not found: ID does not exist" containerID="fd28eed422a20d57b6fa6081fad2c0f6b5d63282eac2bae4e6e0fd0082c10725" Feb 23 17:13:28 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:28.118865 2112 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=1e830162-e6ef-4be9-b95d-9b3c77530bc9 path="/var/lib/kubelet/pods/1e830162-e6ef-4be9-b95d-9b3c77530bc9/volumes" Feb 23 17:13:28 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:28.125549 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/cd707766-c226-46f5-b391-aa1689d95e81-tls-assets\") pod \"alertmanager-main-1\" (UID: \"cd707766-c226-46f5-b391-aa1689d95e81\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:28 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:28.125581 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/cd707766-c226-46f5-b391-aa1689d95e81-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-1\" (UID: \"cd707766-c226-46f5-b391-aa1689d95e81\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:28 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:28.125601 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/cd707766-c226-46f5-b391-aa1689d95e81-secret-alertmanager-main-tls\") pod \"alertmanager-main-1\" (UID: \"cd707766-c226-46f5-b391-aa1689d95e81\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:28 ip-10-0-136-68 kubenswrapper[2112]: I0223 
17:13:28.125632 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd707766-c226-46f5-b391-aa1689d95e81-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-1\" (UID: \"cd707766-c226-46f5-b391-aa1689d95e81\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:28 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:28.125770 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/cd707766-c226-46f5-b391-aa1689d95e81-metrics-client-ca\") pod \"alertmanager-main-1\" (UID: \"cd707766-c226-46f5-b391-aa1689d95e81\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:28 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:28.125813 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-proxy\" (UniqueName: \"kubernetes.io/secret/cd707766-c226-46f5-b391-aa1689d95e81-secret-alertmanager-main-proxy\") pod \"alertmanager-main-1\" (UID: \"cd707766-c226-46f5-b391-aa1689d95e81\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:28 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:28.125874 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/cd707766-c226-46f5-b391-aa1689d95e81-web-config\") pod \"alertmanager-main-1\" (UID: \"cd707766-c226-46f5-b391-aa1689d95e81\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:28 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:28.125907 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/cd707766-c226-46f5-b391-aa1689d95e81-config-out\") pod \"alertmanager-main-1\" (UID: \"cd707766-c226-46f5-b391-aa1689d95e81\") " 
pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:28 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:28.125960 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfdgs\" (UniqueName: \"kubernetes.io/projected/cd707766-c226-46f5-b391-aa1689d95e81-kube-api-access-kfdgs\") pod \"alertmanager-main-1\" (UID: \"cd707766-c226-46f5-b391-aa1689d95e81\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:28 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:28.125999 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/cd707766-c226-46f5-b391-aa1689d95e81-config-volume\") pod \"alertmanager-main-1\" (UID: \"cd707766-c226-46f5-b391-aa1689d95e81\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:28 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:28.126022 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/cd707766-c226-46f5-b391-aa1689d95e81-alertmanager-main-db\") pod \"alertmanager-main-1\" (UID: \"cd707766-c226-46f5-b391-aa1689d95e81\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:28 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:28.126046 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/cd707766-c226-46f5-b391-aa1689d95e81-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-1\" (UID: \"cd707766-c226-46f5-b391-aa1689d95e81\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:28 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-1e830162\x2de6ef\x2d4be9\x2db95d\x2d9b3c77530bc9-volume\x2dsubpaths-web\x2dconfig-alertmanager-9.mount: Succeeded. 
Feb 23 17:13:28 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-7c6c96994776deca879653ce086a42486849bd751719f15b1a7fe21817c40d59-userdata-shm.mount: Succeeded. Feb 23 17:13:28 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-1e830162\x2de6ef\x2d4be9\x2db95d\x2d9b3c77530bc9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsqqnt.mount: Succeeded. Feb 23 17:13:28 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-1e830162\x2de6ef\x2d4be9\x2db95d\x2d9b3c77530bc9-volumes-kubernetes.io\x7esecret-secret\x2dalertmanager\x2dkube\x2drbac\x2dproxy.mount: Succeeded. Feb 23 17:13:28 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-1e830162\x2de6ef\x2d4be9\x2db95d\x2d9b3c77530bc9-volumes-kubernetes.io\x7esecret-secret\x2dalertmanager\x2dmain\x2dproxy.mount: Succeeded. Feb 23 17:13:28 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-1e830162\x2de6ef\x2d4be9\x2db95d\x2d9b3c77530bc9-volumes-kubernetes.io\x7esecret-secret\x2dalertmanager\x2dkube\x2drbac\x2dproxy\x2dmetric.mount: Succeeded. Feb 23 17:13:28 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-1e830162\x2de6ef\x2d4be9\x2db95d\x2d9b3c77530bc9-volumes-kubernetes.io\x7esecret-web\x2dconfig.mount: Succeeded. Feb 23 17:13:28 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-1e830162\x2de6ef\x2d4be9\x2db95d\x2d9b3c77530bc9-volumes-kubernetes.io\x7esecret-secret\x2dalertmanager\x2dmain\x2dtls.mount: Succeeded. Feb 23 17:13:28 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-1e830162\x2de6ef\x2d4be9\x2db95d\x2d9b3c77530bc9-volumes-kubernetes.io\x7eprojected-tls\x2dassets.mount: Succeeded. Feb 23 17:13:28 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-1e830162\x2de6ef\x2d4be9\x2db95d\x2d9b3c77530bc9-volumes-kubernetes.io\x7esecret-config\x2dvolume.mount: Succeeded. 
Feb 23 17:13:28 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:28.226464 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/cd707766-c226-46f5-b391-aa1689d95e81-metrics-client-ca\") pod \"alertmanager-main-1\" (UID: \"cd707766-c226-46f5-b391-aa1689d95e81\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:28 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:28.226502 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-proxy\" (UniqueName: \"kubernetes.io/secret/cd707766-c226-46f5-b391-aa1689d95e81-secret-alertmanager-main-proxy\") pod \"alertmanager-main-1\" (UID: \"cd707766-c226-46f5-b391-aa1689d95e81\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:28 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:28.226531 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/cd707766-c226-46f5-b391-aa1689d95e81-web-config\") pod \"alertmanager-main-1\" (UID: \"cd707766-c226-46f5-b391-aa1689d95e81\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:28 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:28.226559 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/cd707766-c226-46f5-b391-aa1689d95e81-config-out\") pod \"alertmanager-main-1\" (UID: \"cd707766-c226-46f5-b391-aa1689d95e81\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:28 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:28.226587 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-kfdgs\" (UniqueName: \"kubernetes.io/projected/cd707766-c226-46f5-b391-aa1689d95e81-kube-api-access-kfdgs\") pod \"alertmanager-main-1\" (UID: \"cd707766-c226-46f5-b391-aa1689d95e81\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:28 ip-10-0-136-68 
kubenswrapper[2112]: I0223 17:13:28.226616 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/cd707766-c226-46f5-b391-aa1689d95e81-config-volume\") pod \"alertmanager-main-1\" (UID: \"cd707766-c226-46f5-b391-aa1689d95e81\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:28 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:28.226643 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/cd707766-c226-46f5-b391-aa1689d95e81-alertmanager-main-db\") pod \"alertmanager-main-1\" (UID: \"cd707766-c226-46f5-b391-aa1689d95e81\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:28 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:28.226708 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/cd707766-c226-46f5-b391-aa1689d95e81-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-1\" (UID: \"cd707766-c226-46f5-b391-aa1689d95e81\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:28 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:28.226735 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/cd707766-c226-46f5-b391-aa1689d95e81-tls-assets\") pod \"alertmanager-main-1\" (UID: \"cd707766-c226-46f5-b391-aa1689d95e81\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:28 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:28.226772 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/cd707766-c226-46f5-b391-aa1689d95e81-secret-alertmanager-main-tls\") pod \"alertmanager-main-1\" (UID: \"cd707766-c226-46f5-b391-aa1689d95e81\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:28 
ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:28.226803 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/cd707766-c226-46f5-b391-aa1689d95e81-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-1\" (UID: \"cd707766-c226-46f5-b391-aa1689d95e81\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:28 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:28.226855 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd707766-c226-46f5-b391-aa1689d95e81-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-1\" (UID: \"cd707766-c226-46f5-b391-aa1689d95e81\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:28 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:28.227823 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/cd707766-c226-46f5-b391-aa1689d95e81-metrics-client-ca\") pod \"alertmanager-main-1\" (UID: \"cd707766-c226-46f5-b391-aa1689d95e81\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:28 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:28.228056 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd707766-c226-46f5-b391-aa1689d95e81-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-1\" (UID: \"cd707766-c226-46f5-b391-aa1689d95e81\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:28 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:28.229438 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/cd707766-c226-46f5-b391-aa1689d95e81-config-out\") pod \"alertmanager-main-1\" (UID: \"cd707766-c226-46f5-b391-aa1689d95e81\") " 
pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:28 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:28.236033 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-proxy\" (UniqueName: \"kubernetes.io/secret/cd707766-c226-46f5-b391-aa1689d95e81-secret-alertmanager-main-proxy\") pod \"alertmanager-main-1\" (UID: \"cd707766-c226-46f5-b391-aa1689d95e81\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:28 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:28.236187 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/cd707766-c226-46f5-b391-aa1689d95e81-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-1\" (UID: \"cd707766-c226-46f5-b391-aa1689d95e81\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:28 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:28.236248 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/cd707766-c226-46f5-b391-aa1689d95e81-web-config\") pod \"alertmanager-main-1\" (UID: \"cd707766-c226-46f5-b391-aa1689d95e81\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:28 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:28.236401 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/cd707766-c226-46f5-b391-aa1689d95e81-alertmanager-main-db\") pod \"alertmanager-main-1\" (UID: \"cd707766-c226-46f5-b391-aa1689d95e81\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:28 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:28.238497 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/cd707766-c226-46f5-b391-aa1689d95e81-tls-assets\") pod \"alertmanager-main-1\" (UID: \"cd707766-c226-46f5-b391-aa1689d95e81\") " 
pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:28 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:28.239077 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/cd707766-c226-46f5-b391-aa1689d95e81-config-volume\") pod \"alertmanager-main-1\" (UID: \"cd707766-c226-46f5-b391-aa1689d95e81\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:28 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:28.239245 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/cd707766-c226-46f5-b391-aa1689d95e81-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-1\" (UID: \"cd707766-c226-46f5-b391-aa1689d95e81\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:28 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:28.239284 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/cd707766-c226-46f5-b391-aa1689d95e81-secret-alertmanager-main-tls\") pod \"alertmanager-main-1\" (UID: \"cd707766-c226-46f5-b391-aa1689d95e81\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:28 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:28.263558 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfdgs\" (UniqueName: \"kubernetes.io/projected/cd707766-c226-46f5-b391-aa1689d95e81-kube-api-access-kfdgs\") pod \"alertmanager-main-1\" (UID: \"cd707766-c226-46f5-b391-aa1689d95e81\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:28 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:28.277454 2112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:28 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:28.277914383Z" level=info msg="Running pod sandbox: openshift-monitoring/alertmanager-main-1/POD" id=44c61b98-8c08-4168-9cb5-0a0e22b694aa name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:13:28 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:28.277969499Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:13:28 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:28.295926848Z" level=info msg="Got pod network &{Name:alertmanager-main-1 Namespace:openshift-monitoring ID:663ed760da6d8b1ab7f3854fc18ffbb30231f0ccec6ea05a4feac0f8f3a8d79a UID:cd707766-c226-46f5-b391-aa1689d95e81 NetNS:/var/run/netns/40b38e0a-d519-47fc-a9db-2e617e9024f3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:13:28 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:28.295950615Z" level=info msg="Adding pod openshift-monitoring_alertmanager-main-1 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:13:28 ip-10-0-136-68 systemd-udevd[56671]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. 
Feb 23 17:13:28 ip-10-0-136-68 systemd-udevd[56671]: Could not generate persistent MAC address for 663ed760da6d8b1: No such file or directory
Feb 23 17:13:28 ip-10-0-136-68 NetworkManager[1147]: [1677172408.4576] device (663ed760da6d8b1): carrier: link connected
Feb 23 17:13:28 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): 663ed760da6d8b1: link is not ready
Feb 23 17:13:28 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
Feb 23 17:13:28 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 23 17:13:28 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): 663ed760da6d8b1: link becomes ready
Feb 23 17:13:28 ip-10-0-136-68 NetworkManager[1147]: [1677172408.4610] manager: (663ed760da6d8b1): new Veth device (/org/freedesktop/NetworkManager/Devices/54)
Feb 23 17:13:28 ip-10-0-136-68 NetworkManager[1147]: [1677172408.4836] manager: (663ed760da6d8b1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Feb 23 17:13:28 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00314|bridge|INFO|bridge br-int: added interface 663ed760da6d8b1 on port 22
Feb 23 17:13:28 ip-10-0-136-68 kernel: device 663ed760da6d8b1 entered promiscuous mode
Feb 23 17:13:28 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:28.570400 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-monitoring/alertmanager-main-1]
Feb 23 17:13:28 ip-10-0-136-68 crio[2062]: I0223 17:13:28.438181 56661 ovs.go:90] Maximum command line arguments set to: 191102
Feb 23 17:13:28 ip-10-0-136-68 crio[2062]: 2023-02-23T17:13:28Z [verbose] Add: openshift-monitoring:alertmanager-main-1:cd707766-c226-46f5-b391-aa1689d95e81:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"663ed760da6d8b1","mac":"1a:99:18:13:d4:ea"},{"name":"eth0","mac":"0a:58:0a:81:02:18","sandbox":"/var/run/netns/40b38e0a-d519-47fc-a9db-2e617e9024f3"}],"ips":[{"version":"4","interface":1,"address":"10.129.2.24/23","gateway":"10.129.2.1"}],"dns":{}}
Feb 23 17:13:28 ip-10-0-136-68 crio[2062]: I0223 17:13:28.537320 56654 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-monitoring", Name:"alertmanager-main-1", UID:"cd707766-c226-46f5-b391-aa1689d95e81", APIVersion:"v1", ResourceVersion:"67881", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.129.2.24/23] from ovn-kubernetes
Feb 23 17:13:28 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:28.572829445Z" level=info msg="Got pod network &{Name:alertmanager-main-1 Namespace:openshift-monitoring ID:663ed760da6d8b1ab7f3854fc18ffbb30231f0ccec6ea05a4feac0f8f3a8d79a UID:cd707766-c226-46f5-b391-aa1689d95e81 NetNS:/var/run/netns/40b38e0a-d519-47fc-a9db-2e617e9024f3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 17:13:28 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:28.572963815Z" level=info msg="Checking pod openshift-monitoring_alertmanager-main-1 for CNI network multus-cni-network (type=multus)"
Feb 23 17:13:28 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:13:28.575186 2112 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd707766_c226_46f5_b391_aa1689d95e81.slice/crio-663ed760da6d8b1ab7f3854fc18ffbb30231f0ccec6ea05a4feac0f8f3a8d79a.scope WatchSource:0}: Error finding container 663ed760da6d8b1ab7f3854fc18ffbb30231f0ccec6ea05a4feac0f8f3a8d79a: Status 404 returned error can't find the container with id 663ed760da6d8b1ab7f3854fc18ffbb30231f0ccec6ea05a4feac0f8f3a8d79a
Feb 23 17:13:28 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:28.576568894Z" level=info msg="Ran pod sandbox 663ed760da6d8b1ab7f3854fc18ffbb30231f0ccec6ea05a4feac0f8f3a8d79a with infra container: openshift-monitoring/alertmanager-main-1/POD" id=44c61b98-8c08-4168-9cb5-0a0e22b694aa name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:13:28 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:28.577287012Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fb9be9744b6ce1f692325241b066184282ea7ee06dbd191a0f19b55b0ad8736" id=5eac8313-97d6-4a10-ac31-8ff67566089a name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:13:28 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:28.577431541Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ee2df3b6c5f959807b3fab8b0b30c981e2f43ef273dfbbbf5bb9a469aeeb3d8d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fb9be9744b6ce1f692325241b066184282ea7ee06dbd191a0f19b55b0ad8736],Size_:367066685,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=5eac8313-97d6-4a10-ac31-8ff67566089a name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:13:28 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:28.580567410Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fb9be9744b6ce1f692325241b066184282ea7ee06dbd191a0f19b55b0ad8736" id=56517188-8395-4f56-b6d9-f2f1e55b75e7 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:13:28 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:28.580753925Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ee2df3b6c5f959807b3fab8b0b30c981e2f43ef273dfbbbf5bb9a469aeeb3d8d,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fb9be9744b6ce1f692325241b066184282ea7ee06dbd191a0f19b55b0ad8736],Size_:367066685,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=56517188-8395-4f56-b6d9-f2f1e55b75e7 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:13:28 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:28.581582506Z" level=info msg="Creating container: openshift-monitoring/alertmanager-main-1/alertmanager" id=ca788abb-de89-4c6f-bf25-3341774cc06e name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:13:28 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:28.581707884Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:13:28 ip-10-0-136-68 systemd[1]: Started crio-conmon-0c5b66fe0d3f11c9b685c0f8090858a10e3b4d6c3ec45f47186509c4d520a927.scope.
Feb 23 17:13:28 ip-10-0-136-68 systemd[1]: Started libcontainer container 0c5b66fe0d3f11c9b685c0f8090858a10e3b4d6c3ec45f47186509c4d520a927.
Feb 23 17:13:28 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:28.724769595Z" level=info msg="Created container 0c5b66fe0d3f11c9b685c0f8090858a10e3b4d6c3ec45f47186509c4d520a927: openshift-monitoring/alertmanager-main-1/alertmanager" id=ca788abb-de89-4c6f-bf25-3341774cc06e name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:13:28 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:28.725333835Z" level=info msg="Starting container: 0c5b66fe0d3f11c9b685c0f8090858a10e3b4d6c3ec45f47186509c4d520a927" id=8d57e1cf-6b72-4a29-b752-dfa5f614d80f name=/runtime.v1.RuntimeService/StartContainer
Feb 23 17:13:28 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:28.747143673Z" level=info msg="Started container" PID=56702 containerID=0c5b66fe0d3f11c9b685c0f8090858a10e3b4d6c3ec45f47186509c4d520a927 description=openshift-monitoring/alertmanager-main-1/alertmanager id=8d57e1cf-6b72-4a29-b752-dfa5f614d80f name=/runtime.v1.RuntimeService/StartContainer sandboxID=663ed760da6d8b1ab7f3854fc18ffbb30231f0ccec6ea05a4feac0f8f3a8d79a
Feb 23 17:13:28 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:28.763248398Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:0f2284b974da7a5af712db888818e4bbd7604e604bc70c86ce8bc75c8f73457d" id=538082ee-dcae-42aa-ad2a-e67367c59b53 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:13:28 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:28.763552606Z" level=info msg="Image registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:0f2284b974da7a5af712db888818e4bbd7604e604bc70c86ce8bc75c8f73457d not found" id=538082ee-dcae-42aa-ad2a-e67367c59b53 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:13:28 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:28.764418134Z" level=info msg="Pulling image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:0f2284b974da7a5af712db888818e4bbd7604e604bc70c86ce8bc75c8f73457d" id=0925a7f7-4fa8-43ac-a192-a5751b6e43bf name=/runtime.v1.ImageService/PullImage
Feb 23 17:13:28 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:28.765890664Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:0f2284b974da7a5af712db888818e4bbd7604e604bc70c86ce8bc75c8f73457d\""
Feb 23 17:13:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:29.010986 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-ingress-canary/ingress-canary-p47qk]
Feb 23 17:13:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:29.011150 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-ingress-canary/ingress-canary-p47qk" podUID=a704838c-aeb5-4709-b91c-2460423203a4 containerName="serve-healthcheck-canary" containerID="cri-o://fe70a374bcae05fe8150c9d4a15c3801bb27ec29a6b1a6cbc3dff282255d4fd5" gracePeriod=30
Feb 23 17:13:29 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:29.011703506Z" level=info msg="Stopping container: fe70a374bcae05fe8150c9d4a15c3801bb27ec29a6b1a6cbc3dff282255d4fd5 (timeout: 30s)" id=17a33bff-7a13-4a2b-8192-f2a3ad7e335a name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:13:29 ip-10-0-136-68 conmon[4589]: conmon fe70a374bcae05fe8150 : container 4635 exited with status 2
Feb 23 17:13:29 ip-10-0-136-68 systemd[1]: crio-conmon-fe70a374bcae05fe8150c9d4a15c3801bb27ec29a6b1a6cbc3dff282255d4fd5.scope: Succeeded.
Feb 23 17:13:29 ip-10-0-136-68 systemd[1]: crio-conmon-fe70a374bcae05fe8150c9d4a15c3801bb27ec29a6b1a6cbc3dff282255d4fd5.scope: Consumed 29ms CPU time
Feb 23 17:13:29 ip-10-0-136-68 systemd[1]: crio-fe70a374bcae05fe8150c9d4a15c3801bb27ec29a6b1a6cbc3dff282255d4fd5.scope: Succeeded.
Feb 23 17:13:29 ip-10-0-136-68 systemd[1]: crio-fe70a374bcae05fe8150c9d4a15c3801bb27ec29a6b1a6cbc3dff282255d4fd5.scope: Consumed 241ms CPU time
Feb 23 17:13:29 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:29.149684166Z" level=info msg="Stopped container fe70a374bcae05fe8150c9d4a15c3801bb27ec29a6b1a6cbc3dff282255d4fd5: openshift-ingress-canary/ingress-canary-p47qk/serve-healthcheck-canary" id=17a33bff-7a13-4a2b-8192-f2a3ad7e335a name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:13:29 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:29.150075644Z" level=info msg="Stopping pod sandbox: 4072e0d3663d1947c88bd362e51de1a466d380546ca12a245e113666b5f4f1b4" id=e2fc4f7c-0023-4495-b915-b6f595651701 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:13:29 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:29.150264398Z" level=info msg="Got pod network &{Name:ingress-canary-p47qk Namespace:openshift-ingress-canary ID:4072e0d3663d1947c88bd362e51de1a466d380546ca12a245e113666b5f4f1b4 UID:a704838c-aeb5-4709-b91c-2460423203a4 NetNS:/var/run/netns/7c821687-d150-4ab8-9614-66d1ccd9f281 Networks:[{Name:multus-cni-network Ifname:eth0}] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 17:13:29 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:29.150375571Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-p47qk from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 17:13:29 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-0f2d117b61e96751a16c6cd45f16f194395c7325c6296becf0acfbfd2c748d05-merged.mount: Succeeded.
Feb 23 17:13:29 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-0f2d117b61e96751a16c6cd45f16f194395c7325c6296becf0acfbfd2c748d05-merged.mount: Consumed 0 CPU time
Feb 23 17:13:29 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00315|bridge|INFO|bridge br-int: deleted interface 4072e0d3663d194 on port 9
Feb 23 17:13:29 ip-10-0-136-68 kernel: device 4072e0d3663d194 left promiscuous mode
Feb 23 17:13:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:29.429422 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ingress-canary_ingress-canary-p47qk_a704838c-aeb5-4709-b91c-2460423203a4/serve-healthcheck-canary/1.log"
Feb 23 17:13:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:29.429463 2112 generic.go:296] "Generic (PLEG): container finished" podID=a704838c-aeb5-4709-b91c-2460423203a4 containerID="fe70a374bcae05fe8150c9d4a15c3801bb27ec29a6b1a6cbc3dff282255d4fd5" exitCode=2
Feb 23 17:13:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:29.429507 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-p47qk" event=&{ID:a704838c-aeb5-4709-b91c-2460423203a4 Type:ContainerDied Data:fe70a374bcae05fe8150c9d4a15c3801bb27ec29a6b1a6cbc3dff282255d4fd5}
Feb 23 17:13:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:29.430589 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:cd707766-c226-46f5-b391-aa1689d95e81 Type:ContainerStarted Data:0c5b66fe0d3f11c9b685c0f8090858a10e3b4d6c3ec45f47186509c4d520a927}
Feb 23 17:13:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:29.430616 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:cd707766-c226-46f5-b391-aa1689d95e81 Type:ContainerStarted Data:663ed760da6d8b1ab7f3854fc18ffbb30231f0ccec6ea05a4feac0f8f3a8d79a}
Feb 23 17:13:29 ip-10-0-136-68 conmon[56690]: conmon 0c5b66fe0d3f11c9b685 : container 56702 exited with status 1
Feb 23 17:13:29 ip-10-0-136-68 systemd[1]: crio-0c5b66fe0d3f11c9b685c0f8090858a10e3b4d6c3ec45f47186509c4d520a927.scope: Succeeded.
Feb 23 17:13:29 ip-10-0-136-68 systemd[1]: crio-0c5b66fe0d3f11c9b685c0f8090858a10e3b4d6c3ec45f47186509c4d520a927.scope: Consumed 83ms CPU time
Feb 23 17:13:29 ip-10-0-136-68 systemd[1]: crio-conmon-0c5b66fe0d3f11c9b685c0f8090858a10e3b4d6c3ec45f47186509c4d520a927.scope: Succeeded.
Feb 23 17:13:29 ip-10-0-136-68 systemd[1]: crio-conmon-0c5b66fe0d3f11c9b685c0f8090858a10e3b4d6c3ec45f47186509c4d520a927.scope: Consumed 23ms CPU time
Feb 23 17:13:29 ip-10-0-136-68 crio[2062]: 2023-02-23T17:13:29Z [verbose] Del: openshift-ingress-canary:ingress-canary-p47qk:a704838c-aeb5-4709-b91c-2460423203a4:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","dns":{},"ipam":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxage":5,"logfile-maxbackups":5,"logfile-maxsize":100,"name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay"}
Feb 23 17:13:29 ip-10-0-136-68 crio[2062]: I0223 17:13:29.297375 56777 ovs.go:90] Maximum command line arguments set to: 191102
Feb 23 17:13:29 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-a17f78aa116d88e479d1109f33cbc35adb39accb2e16c50549d4c66a867ee0a0-merged.mount: Succeeded.
Feb 23 17:13:29 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-a17f78aa116d88e479d1109f33cbc35adb39accb2e16c50549d4c66a867ee0a0-merged.mount: Consumed 0 CPU time
Feb 23 17:13:29 ip-10-0-136-68 systemd[1]: run-utsns-7c821687\x2dd150\x2d4ab8\x2d9614\x2d66d1ccd9f281.mount: Succeeded.
Feb 23 17:13:29 ip-10-0-136-68 systemd[1]: run-utsns-7c821687\x2dd150\x2d4ab8\x2d9614\x2d66d1ccd9f281.mount: Consumed 0 CPU time
Feb 23 17:13:29 ip-10-0-136-68 systemd[1]: run-ipcns-7c821687\x2dd150\x2d4ab8\x2d9614\x2d66d1ccd9f281.mount: Succeeded.
Feb 23 17:13:29 ip-10-0-136-68 systemd[1]: run-ipcns-7c821687\x2dd150\x2d4ab8\x2d9614\x2d66d1ccd9f281.mount: Consumed 0 CPU time
Feb 23 17:13:29 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:29.773716376Z" level=info msg="Stopped pod sandbox: 4072e0d3663d1947c88bd362e51de1a466d380546ca12a245e113666b5f4f1b4" id=e2fc4f7c-0023-4495-b915-b6f595651701 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:13:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:29.779157 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ingress-canary_ingress-canary-p47qk_a704838c-aeb5-4709-b91c-2460423203a4/serve-healthcheck-canary/1.log"
Feb 23 17:13:29 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:29.836704282Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:0f2284b974da7a5af712db888818e4bbd7604e604bc70c86ce8bc75c8f73457d\""
Feb 23 17:13:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:29.941969 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfmxf\" (UniqueName: \"kubernetes.io/projected/a704838c-aeb5-4709-b91c-2460423203a4-kube-api-access-nfmxf\") pod \"a704838c-aeb5-4709-b91c-2460423203a4\" (UID: \"a704838c-aeb5-4709-b91c-2460423203a4\") "
Feb 23 17:13:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:29.955858 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a704838c-aeb5-4709-b91c-2460423203a4-kube-api-access-nfmxf" (OuterVolumeSpecName: "kube-api-access-nfmxf") pod "a704838c-aeb5-4709-b91c-2460423203a4" (UID: "a704838c-aeb5-4709-b91c-2460423203a4"). InnerVolumeSpecName "kube-api-access-nfmxf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.042484 2112 reconciler.go:399] "Volume detached for volume \"kube-api-access-nfmxf\" (UniqueName: \"kubernetes.io/projected/a704838c-aeb5-4709-b91c-2460423203a4-kube-api-access-nfmxf\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:13:30 ip-10-0-136-68 systemd[1]: Removed slice libcontainer container kubepods-burstable-poda704838c_aeb5_4709_b91c_2460423203a4.slice.
Feb 23 17:13:30 ip-10-0-136-68 systemd[1]: kubepods-burstable-poda704838c_aeb5_4709_b91c_2460423203a4.slice: Consumed 271ms CPU time
Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.136538 2112 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-monitoring/openshift-state-metrics-7df79db5c7-2clx8]
Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.136587 2112 topology_manager.go:205] "Topology Admit Handler"
Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:13:30.136653 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a704838c-aeb5-4709-b91c-2460423203a4" containerName="serve-healthcheck-canary"
Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.136696 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="a704838c-aeb5-4709-b91c-2460423203a4" containerName="serve-healthcheck-canary"
Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.136753 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="a704838c-aeb5-4709-b91c-2460423203a4" containerName="serve-healthcheck-canary"
Feb 23 17:13:30 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-podaadb02e0_de11_41e9_9dc0_106e1d0fc545.slice.
Feb 23 17:13:30 ip-10-0-136-68 systemd[1]: run-netns-7c821687\x2dd150\x2d4ab8\x2d9614\x2d66d1ccd9f281.mount: Succeeded.
Feb 23 17:13:30 ip-10-0-136-68 systemd[1]: run-netns-7c821687\x2dd150\x2d4ab8\x2d9614\x2d66d1ccd9f281.mount: Consumed 0 CPU time
Feb 23 17:13:30 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-4072e0d3663d1947c88bd362e51de1a466d380546ca12a245e113666b5f4f1b4-userdata-shm.mount: Succeeded.
Feb 23 17:13:30 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-4072e0d3663d1947c88bd362e51de1a466d380546ca12a245e113666b5f4f1b4-userdata-shm.mount: Consumed 0 CPU time
Feb 23 17:13:30 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-a704838c\x2daeb5\x2d4709\x2db91c\x2d2460423203a4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnfmxf.mount: Succeeded.
Feb 23 17:13:30 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-a704838c\x2daeb5\x2d4709\x2db91c\x2d2460423203a4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnfmxf.mount: Consumed 0 CPU time
Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.240211 2112 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-monitoring/openshift-state-metrics-7df79db5c7-2clx8]
Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.244459 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/aadb02e0-de11-41e9-9dc0-106e1d0fc545-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-7df79db5c7-2clx8\" (UID: \"aadb02e0-de11-41e9-9dc0-106e1d0fc545\") " pod="openshift-monitoring/openshift-state-metrics-7df79db5c7-2clx8"
Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.244506 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/aadb02e0-de11-41e9-9dc0-106e1d0fc545-openshift-state-metrics-tls\") pod \"openshift-state-metrics-7df79db5c7-2clx8\" (UID: \"aadb02e0-de11-41e9-9dc0-106e1d0fc545\") " pod="openshift-monitoring/openshift-state-metrics-7df79db5c7-2clx8"
Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.244538 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/aadb02e0-de11-41e9-9dc0-106e1d0fc545-metrics-client-ca\") pod \"openshift-state-metrics-7df79db5c7-2clx8\" (UID: \"aadb02e0-de11-41e9-9dc0-106e1d0fc545\") " pod="openshift-monitoring/openshift-state-metrics-7df79db5c7-2clx8"
Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.244611 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz7zg\" (UniqueName: \"kubernetes.io/projected/aadb02e0-de11-41e9-9dc0-106e1d0fc545-kube-api-access-tz7zg\") pod \"openshift-state-metrics-7df79db5c7-2clx8\" (UID: \"aadb02e0-de11-41e9-9dc0-106e1d0fc545\") " pod="openshift-monitoring/openshift-state-metrics-7df79db5c7-2clx8"
Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.345875 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/aadb02e0-de11-41e9-9dc0-106e1d0fc545-openshift-state-metrics-tls\") pod \"openshift-state-metrics-7df79db5c7-2clx8\" (UID: \"aadb02e0-de11-41e9-9dc0-106e1d0fc545\") " pod="openshift-monitoring/openshift-state-metrics-7df79db5c7-2clx8"
Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.345928 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/aadb02e0-de11-41e9-9dc0-106e1d0fc545-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-7df79db5c7-2clx8\" (UID: \"aadb02e0-de11-41e9-9dc0-106e1d0fc545\") " pod="openshift-monitoring/openshift-state-metrics-7df79db5c7-2clx8"
Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.345961 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/aadb02e0-de11-41e9-9dc0-106e1d0fc545-metrics-client-ca\") pod \"openshift-state-metrics-7df79db5c7-2clx8\" (UID: \"aadb02e0-de11-41e9-9dc0-106e1d0fc545\") " pod="openshift-monitoring/openshift-state-metrics-7df79db5c7-2clx8"
Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.345991 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-tz7zg\" (UniqueName: \"kubernetes.io/projected/aadb02e0-de11-41e9-9dc0-106e1d0fc545-kube-api-access-tz7zg\") pod \"openshift-state-metrics-7df79db5c7-2clx8\" (UID: \"aadb02e0-de11-41e9-9dc0-106e1d0fc545\") " pod="openshift-monitoring/openshift-state-metrics-7df79db5c7-2clx8"
Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.346924 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/aadb02e0-de11-41e9-9dc0-106e1d0fc545-metrics-client-ca\") pod \"openshift-state-metrics-7df79db5c7-2clx8\" (UID: \"aadb02e0-de11-41e9-9dc0-106e1d0fc545\") " pod="openshift-monitoring/openshift-state-metrics-7df79db5c7-2clx8"
Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.348857 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/aadb02e0-de11-41e9-9dc0-106e1d0fc545-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-7df79db5c7-2clx8\" (UID: \"aadb02e0-de11-41e9-9dc0-106e1d0fc545\") " pod="openshift-monitoring/openshift-state-metrics-7df79db5c7-2clx8"
Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.348899 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/aadb02e0-de11-41e9-9dc0-106e1d0fc545-openshift-state-metrics-tls\") pod \"openshift-state-metrics-7df79db5c7-2clx8\" (UID: \"aadb02e0-de11-41e9-9dc0-106e1d0fc545\") " pod="openshift-monitoring/openshift-state-metrics-7df79db5c7-2clx8"
Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.436850 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-1_cd707766-c226-46f5-b391-aa1689d95e81/alertmanager/0.log"
Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.436894 2112 generic.go:296] "Generic (PLEG): container finished" podID=cd707766-c226-46f5-b391-aa1689d95e81 containerID="0c5b66fe0d3f11c9b685c0f8090858a10e3b4d6c3ec45f47186509c4d520a927" exitCode=1
Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.436937 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:cd707766-c226-46f5-b391-aa1689d95e81 Type:ContainerDied Data:0c5b66fe0d3f11c9b685c0f8090858a10e3b4d6c3ec45f47186509c4d520a927}
Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.437815 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ingress-canary_ingress-canary-p47qk_a704838c-aeb5-4709-b91c-2460423203a4/serve-healthcheck-canary/1.log"
Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.437855 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-p47qk" event=&{ID:a704838c-aeb5-4709-b91c-2460423203a4 Type:ContainerDied Data:4072e0d3663d1947c88bd362e51de1a466d380546ca12a245e113666b5f4f1b4}
Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.437877 2112 scope.go:115] "RemoveContainer" containerID="fe70a374bcae05fe8150c9d4a15c3801bb27ec29a6b1a6cbc3dff282255d4fd5"
Feb 23 17:13:30 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:30.439791269Z" level=info msg="Removing container: fe70a374bcae05fe8150c9d4a15c3801bb27ec29a6b1a6cbc3dff282255d4fd5" id=fa14a3e5-222e-402d-a490-a4e7bf444033 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:13:30 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:30.459305078Z" level=info msg="Removed container fe70a374bcae05fe8150c9d4a15c3801bb27ec29a6b1a6cbc3dff282255d4fd5: openshift-ingress-canary/ingress-canary-p47qk/serve-healthcheck-canary" id=fa14a3e5-222e-402d-a490-a4e7bf444033 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.528088 2112 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-monitoring/kube-state-metrics-8d585644b-dckcc]
Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.528143 2112 topology_manager.go:205] "Topology Admit Handler"
Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.529225 2112 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-monitoring/kube-state-metrics-8d585644b-dckcc]
Feb 23 17:13:30 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-pod4961f202_10a7_460b_8e62_ce7b7dbb8806.slice.
Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.649769 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t274d\" (UniqueName: \"kubernetes.io/projected/4961f202-10a7-460b-8e62-ce7b7dbb8806-kube-api-access-t274d\") pod \"kube-state-metrics-8d585644b-dckcc\" (UID: \"4961f202-10a7-460b-8e62-ce7b7dbb8806\") " pod="openshift-monitoring/kube-state-metrics-8d585644b-dckcc"
Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.649875 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/4961f202-10a7-460b-8e62-ce7b7dbb8806-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-8d585644b-dckcc\" (UID: \"4961f202-10a7-460b-8e62-ce7b7dbb8806\") " pod="openshift-monitoring/kube-state-metrics-8d585644b-dckcc"
Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.649963 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4961f202-10a7-460b-8e62-ce7b7dbb8806-metrics-client-ca\") pod \"kube-state-metrics-8d585644b-dckcc\" (UID: \"4961f202-10a7-460b-8e62-ce7b7dbb8806\") " pod="openshift-monitoring/kube-state-metrics-8d585644b-dckcc"
Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.650010 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/4961f202-10a7-460b-8e62-ce7b7dbb8806-volume-directive-shadow\") pod \"kube-state-metrics-8d585644b-dckcc\" (UID: \"4961f202-10a7-460b-8e62-ce7b7dbb8806\") " pod="openshift-monitoring/kube-state-metrics-8d585644b-dckcc"
Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.650068 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/4961f202-10a7-460b-8e62-ce7b7dbb8806-kube-state-metrics-tls\") pod \"kube-state-metrics-8d585644b-dckcc\" (UID: \"4961f202-10a7-460b-8e62-ce7b7dbb8806\") " pod="openshift-monitoring/kube-state-metrics-8d585644b-dckcc"
Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.690938 2112 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n]
Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.690983 2112 topology_manager.go:205] "Topology Admit Handler"
Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.693223 2112 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n]
Feb 23 17:13:30 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-pod0a5a348d_9766_4727_93ec_147703d44b68.slice.
Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.751240 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-t274d\" (UniqueName: \"kubernetes.io/projected/4961f202-10a7-460b-8e62-ce7b7dbb8806-kube-api-access-t274d\") pod \"kube-state-metrics-8d585644b-dckcc\" (UID: \"4961f202-10a7-460b-8e62-ce7b7dbb8806\") " pod="openshift-monitoring/kube-state-metrics-8d585644b-dckcc"
Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.751281 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/4961f202-10a7-460b-8e62-ce7b7dbb8806-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-8d585644b-dckcc\" (UID: \"4961f202-10a7-460b-8e62-ce7b7dbb8806\") " pod="openshift-monitoring/kube-state-metrics-8d585644b-dckcc"
Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.751321 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4961f202-10a7-460b-8e62-ce7b7dbb8806-metrics-client-ca\") pod \"kube-state-metrics-8d585644b-dckcc\" (UID: \"4961f202-10a7-460b-8e62-ce7b7dbb8806\") " pod="openshift-monitoring/kube-state-metrics-8d585644b-dckcc"
Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.751356 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/4961f202-10a7-460b-8e62-ce7b7dbb8806-volume-directive-shadow\") pod \"kube-state-metrics-8d585644b-dckcc\" (UID: \"4961f202-10a7-460b-8e62-ce7b7dbb8806\") " pod="openshift-monitoring/kube-state-metrics-8d585644b-dckcc"
Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.751387 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/4961f202-10a7-460b-8e62-ce7b7dbb8806-kube-state-metrics-tls\") pod \"kube-state-metrics-8d585644b-dckcc\" (UID: \"4961f202-10a7-460b-8e62-ce7b7dbb8806\") " pod="openshift-monitoring/kube-state-metrics-8d585644b-dckcc"
Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.752283 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/4961f202-10a7-460b-8e62-ce7b7dbb8806-volume-directive-shadow\") pod \"kube-state-metrics-8d585644b-dckcc\" (UID: \"4961f202-10a7-460b-8e62-ce7b7dbb8806\") " pod="openshift-monitoring/kube-state-metrics-8d585644b-dckcc"
Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.752549 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4961f202-10a7-460b-8e62-ce7b7dbb8806-metrics-client-ca\") pod \"kube-state-metrics-8d585644b-dckcc\" (UID: \"4961f202-10a7-460b-8e62-ce7b7dbb8806\") " pod="openshift-monitoring/kube-state-metrics-8d585644b-dckcc"
Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.753826 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/4961f202-10a7-460b-8e62-ce7b7dbb8806-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-8d585644b-dckcc\" (UID: \"4961f202-10a7-460b-8e62-ce7b7dbb8806\") " pod="openshift-monitoring/kube-state-metrics-8d585644b-dckcc"
Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.753982 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/4961f202-10a7-460b-8e62-ce7b7dbb8806-kube-state-metrics-tls\") pod \"kube-state-metrics-8d585644b-dckcc\" (UID: \"4961f202-10a7-460b-8e62-ce7b7dbb8806\") " pod="openshift-monitoring/kube-state-metrics-8d585644b-dckcc"
Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.832420 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-tz7zg\" (UniqueName: \"kubernetes.io/projected/aadb02e0-de11-41e9-9dc0-106e1d0fc545-kube-api-access-tz7zg\") pod \"openshift-state-metrics-7df79db5c7-2clx8\" (UID: \"aadb02e0-de11-41e9-9dc0-106e1d0fc545\") " pod="openshift-monitoring/openshift-state-metrics-7df79db5c7-2clx8"
Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.851807 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/0a5a348d-9766-4727-93ec-147703d44b68-telemeter-client-tls\") pod \"telemeter-client-5df7cd6cd7-cpr6n\" (UID: \"0a5a348d-9766-4727-93ec-147703d44b68\") " pod="openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n"
Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.851851 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a5a348d-9766-4727-93ec-147703d44b68-telemeter-trusted-ca-bundle\") pod \"telemeter-client-5df7cd6cd7-cpr6n\" (UID: \"0a5a348d-9766-4727-93ec-147703d44b68\") " pod="openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n"
Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.852025 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a5a348d-9766-4727-93ec-147703d44b68-serving-certs-ca-bundle\") pod \"telemeter-client-5df7cd6cd7-cpr6n\" (UID: \"0a5a348d-9766-4727-93ec-147703d44b68\") " pod="openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n"
Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.852055 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrfsx\" (UniqueName: \"kubernetes.io/projected/0a5a348d-9766-4727-93ec-147703d44b68-kube-api-access-lrfsx\") pod \"telemeter-client-5df7cd6cd7-cpr6n\" (UID: \"0a5a348d-9766-4727-93ec-147703d44b68\") " pod="openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n"
Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.852081 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/0a5a348d-9766-4727-93ec-147703d44b68-secret-telemeter-client\") pod \"telemeter-client-5df7cd6cd7-cpr6n\" (UID: \"0a5a348d-9766-4727-93ec-147703d44b68\") " pod="openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n"
Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.852116 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0a5a348d-9766-4727-93ec-147703d44b68-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-5df7cd6cd7-cpr6n\" (UID:
\"0a5a348d-9766-4727-93ec-147703d44b68\") " pod="openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n" Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.852220 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0a5a348d-9766-4727-93ec-147703d44b68-metrics-client-ca\") pod \"telemeter-client-5df7cd6cd7-cpr6n\" (UID: \"0a5a348d-9766-4727-93ec-147703d44b68\") " pod="openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n" Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.852347 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-t274d\" (UniqueName: \"kubernetes.io/projected/4961f202-10a7-460b-8e62-ce7b7dbb8806-kube-api-access-t274d\") pod \"kube-state-metrics-8d585644b-dckcc\" (UID: \"4961f202-10a7-460b-8e62-ce7b7dbb8806\") " pod="openshift-monitoring/kube-state-metrics-8d585644b-dckcc" Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.910301 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-ingress-canary/ingress-canary-p47qk] Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.952519 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0a5a348d-9766-4727-93ec-147703d44b68-metrics-client-ca\") pod \"telemeter-client-5df7cd6cd7-cpr6n\" (UID: \"0a5a348d-9766-4727-93ec-147703d44b68\") " pod="openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n" Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.952564 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/0a5a348d-9766-4727-93ec-147703d44b68-telemeter-client-tls\") pod \"telemeter-client-5df7cd6cd7-cpr6n\" (UID: \"0a5a348d-9766-4727-93ec-147703d44b68\") " pod="openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n" Feb 
23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.952595 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a5a348d-9766-4727-93ec-147703d44b68-telemeter-trusted-ca-bundle\") pod \"telemeter-client-5df7cd6cd7-cpr6n\" (UID: \"0a5a348d-9766-4727-93ec-147703d44b68\") " pod="openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n" Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.952639 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a5a348d-9766-4727-93ec-147703d44b68-serving-certs-ca-bundle\") pod \"telemeter-client-5df7cd6cd7-cpr6n\" (UID: \"0a5a348d-9766-4727-93ec-147703d44b68\") " pod="openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n" Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.952687 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-lrfsx\" (UniqueName: \"kubernetes.io/projected/0a5a348d-9766-4727-93ec-147703d44b68-kube-api-access-lrfsx\") pod \"telemeter-client-5df7cd6cd7-cpr6n\" (UID: \"0a5a348d-9766-4727-93ec-147703d44b68\") " pod="openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n" Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.952716 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/0a5a348d-9766-4727-93ec-147703d44b68-secret-telemeter-client\") pod \"telemeter-client-5df7cd6cd7-cpr6n\" (UID: \"0a5a348d-9766-4727-93ec-147703d44b68\") " pod="openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n" Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.952748 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/0a5a348d-9766-4727-93ec-147703d44b68-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-5df7cd6cd7-cpr6n\" (UID: \"0a5a348d-9766-4727-93ec-147703d44b68\") " pod="openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n" Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.953943 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0a5a348d-9766-4727-93ec-147703d44b68-metrics-client-ca\") pod \"telemeter-client-5df7cd6cd7-cpr6n\" (UID: \"0a5a348d-9766-4727-93ec-147703d44b68\") " pod="openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n" Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.953978 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a5a348d-9766-4727-93ec-147703d44b68-serving-certs-ca-bundle\") pod \"telemeter-client-5df7cd6cd7-cpr6n\" (UID: \"0a5a348d-9766-4727-93ec-147703d44b68\") " pod="openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n" Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.954888 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a5a348d-9766-4727-93ec-147703d44b68-telemeter-trusted-ca-bundle\") pod \"telemeter-client-5df7cd6cd7-cpr6n\" (UID: \"0a5a348d-9766-4727-93ec-147703d44b68\") " pod="openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n" Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.955523 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/0a5a348d-9766-4727-93ec-147703d44b68-telemeter-client-tls\") pod \"telemeter-client-5df7cd6cd7-cpr6n\" (UID: \"0a5a348d-9766-4727-93ec-147703d44b68\") " pod="openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n" Feb 23 17:13:30 
ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.955572 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0a5a348d-9766-4727-93ec-147703d44b68-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-5df7cd6cd7-cpr6n\" (UID: \"0a5a348d-9766-4727-93ec-147703d44b68\") " pod="openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n" Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.956164 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/0a5a348d-9766-4727-93ec-147703d44b68-secret-telemeter-client\") pod \"telemeter-client-5df7cd6cd7-cpr6n\" (UID: \"0a5a348d-9766-4727-93ec-147703d44b68\") " pod="openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n" Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.981063 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrfsx\" (UniqueName: \"kubernetes.io/projected/0a5a348d-9766-4727-93ec-147703d44b68-kube-api-access-lrfsx\") pod \"telemeter-client-5df7cd6cd7-cpr6n\" (UID: \"0a5a348d-9766-4727-93ec-147703d44b68\") " pod="openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n" Feb 23 17:13:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:30.993969 2112 kubelet.go:2129] "SyncLoop REMOVE" source="api" pods=[openshift-ingress-canary/ingress-canary-p47qk] Feb 23 17:13:31 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:31.005459 2112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n" Feb 23 17:13:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:31.005910783Z" level=info msg="Running pod sandbox: openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n/POD" id=92cde1a9-82ab-4742-a0f4-01f61e52c490 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:13:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:31.005967783Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:13:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:31.023722892Z" level=info msg="Got pod network &{Name:telemeter-client-5df7cd6cd7-cpr6n Namespace:openshift-monitoring ID:6cf6d0bcdfd8248c38237257eece74b58cb7a23a7a19d8609c2b0ab53a64a4cf UID:0a5a348d-9766-4727-93ec-147703d44b68 NetNS:/var/run/netns/404137e4-2392-4ff4-9680-48f7cffed564 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:13:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:31.023751858Z" level=info msg="Adding pod openshift-monitoring_telemeter-client-5df7cd6cd7-cpr6n to CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:13:31 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:31.050784 2112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-7df79db5c7-2clx8" Feb 23 17:13:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:31.051438918Z" level=info msg="Running pod sandbox: openshift-monitoring/openshift-state-metrics-7df79db5c7-2clx8/POD" id=f25d1c7e-1304-4b3c-abd7-b00a5df716c2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:13:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:31.051494161Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:13:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:31.071391251Z" level=info msg="Got pod network &{Name:openshift-state-metrics-7df79db5c7-2clx8 Namespace:openshift-monitoring ID:e14bdaaa341fca1606f9a0c3b7c37d9baae6f7be7df0452c453184abfdc328f8 UID:aadb02e0-de11-41e9-9dc0-106e1d0fc545 NetNS:/var/run/netns/94ffd883-9806-4091-b7f1-c2e08049ae3b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:13:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:31.071418468Z" level=info msg="Adding pod openshift-monitoring_openshift-state-metrics-7df79db5c7-2clx8 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:13:31 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:31.141166 2112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-8d585644b-dckcc" Feb 23 17:13:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:31.141802393Z" level=info msg="Running pod sandbox: openshift-monitoring/kube-state-metrics-8d585644b-dckcc/POD" id=09b784f5-dafc-417c-bb38-9a6ca0436529 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:13:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:31.141866128Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:13:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:31.162363122Z" level=info msg="Got pod network &{Name:kube-state-metrics-8d585644b-dckcc Namespace:openshift-monitoring ID:a935a237826dd4a0c291a0d7b4b2d6924e94cc35bd3969e76b0f75cf79426a1c UID:4961f202-10a7-460b-8e62-ce7b7dbb8806 NetNS:/var/run/netns/f629175d-e7a3-4679-aa09-96e74c78cb04 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:13:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:31.162400118Z" level=info msg="Adding pod openshift-monitoring_kube-state-metrics-8d585644b-dckcc to CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:13:31 ip-10-0-136-68 systemd-udevd[56912]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. 
Feb 23 17:13:31 ip-10-0-136-68 systemd-udevd[56912]: Could not generate persistent MAC address for 6cf6d0bcdfd8248: No such file or directory Feb 23 17:13:31 ip-10-0-136-68 NetworkManager[1147]: [1677172411.1956] manager: (6cf6d0bcdfd8248): new Veth device (/org/freedesktop/NetworkManager/Devices/56) Feb 23 17:13:31 ip-10-0-136-68 NetworkManager[1147]: [1677172411.1963] device (6cf6d0bcdfd8248): carrier: link connected Feb 23 17:13:31 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): 6cf6d0bcdfd8248: link is not ready Feb 23 17:13:31 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready Feb 23 17:13:31 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 23 17:13:31 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): 6cf6d0bcdfd8248: link becomes ready Feb 23 17:13:31 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:31.229351 2112 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-ingress-canary/ingress-canary-pjjrk] Feb 23 17:13:31 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:31.229401 2112 topology_manager.go:205] "Topology Admit Handler" Feb 23 17:13:31 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00316|bridge|INFO|bridge br-int: added interface 6cf6d0bcdfd8248 on port 23 Feb 23 17:13:31 ip-10-0-136-68 NetworkManager[1147]: [1677172411.2350] manager: (6cf6d0bcdfd8248): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/57) Feb 23 17:13:31 ip-10-0-136-68 kernel: device 6cf6d0bcdfd8248 entered promiscuous mode Feb 23 17:13:31 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-pode0abac93_3e79_4a32_8375_5ef1a2e59687.slice. 
Feb 23 17:13:31 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:31.268029 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t77mc\" (UniqueName: \"kubernetes.io/projected/e0abac93-3e79-4a32-8375-5ef1a2e59687-kube-api-access-t77mc\") pod \"ingress-canary-pjjrk\" (UID: \"e0abac93-3e79-4a32-8375-5ef1a2e59687\") " pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 17:13:31 ip-10-0-136-68 NetworkManager[1147]: [1677172411.2746] manager: (e14bdaaa341fca1): new Veth device (/org/freedesktop/NetworkManager/Devices/58) Feb 23 17:13:31 ip-10-0-136-68 systemd-udevd[56927]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. Feb 23 17:13:31 ip-10-0-136-68 NetworkManager[1147]: [1677172411.2759] device (e14bdaaa341fca1): carrier: link connected Feb 23 17:13:31 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): e14bdaaa341fca1: link is not ready Feb 23 17:13:31 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): e14bdaaa341fca1: link becomes ready Feb 23 17:13:31 ip-10-0-136-68 systemd-udevd[56927]: Could not generate persistent MAC address for e14bdaaa341fca1: No such file or directory Feb 23 17:13:31 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:31.300035 2112 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-ingress-canary/ingress-canary-pjjrk] Feb 23 17:13:31 ip-10-0-136-68 NetworkManager[1147]: [1677172411.3134] manager: (e14bdaaa341fca1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59) Feb 23 17:13:31 ip-10-0-136-68 kernel: device e14bdaaa341fca1 entered promiscuous mode Feb 23 17:13:31 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00317|bridge|INFO|bridge br-int: added interface e14bdaaa341fca1 on port 24 Feb 23 17:13:31 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:31.369108 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-t77mc\" (UniqueName: 
\"kubernetes.io/projected/e0abac93-3e79-4a32-8375-5ef1a2e59687-kube-api-access-t77mc\") pod \"ingress-canary-pjjrk\" (UID: \"e0abac93-3e79-4a32-8375-5ef1a2e59687\") " pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 17:13:31 ip-10-0-136-68 crio[2062]: I0223 17:13:31.181723 56877 ovs.go:90] Maximum command line arguments set to: 191102 Feb 23 17:13:31 ip-10-0-136-68 crio[2062]: 2023-02-23T17:13:31Z [verbose] Add: openshift-monitoring:telemeter-client-5df7cd6cd7-cpr6n:0a5a348d-9766-4727-93ec-147703d44b68:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"6cf6d0bcdfd8248","mac":"c2:6a:da:e8:5f:59"},{"name":"eth0","mac":"0a:58:0a:81:02:1b","sandbox":"/var/run/netns/404137e4-2392-4ff4-9680-48f7cffed564"}],"ips":[{"version":"4","interface":1,"address":"10.129.2.27/23","gateway":"10.129.2.1"}],"dns":{}} Feb 23 17:13:31 ip-10-0-136-68 crio[2062]: I0223 17:13:31.324080 56869 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-monitoring", Name:"telemeter-client-5df7cd6cd7-cpr6n", UID:"0a5a348d-9766-4727-93ec-147703d44b68", APIVersion:"v1", ResourceVersion:"68009", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.129.2.27/23] from ovn-kubernetes Feb 23 17:13:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:31.379148785Z" level=info msg="Got pod network &{Name:telemeter-client-5df7cd6cd7-cpr6n Namespace:openshift-monitoring ID:6cf6d0bcdfd8248c38237257eece74b58cb7a23a7a19d8609c2b0ab53a64a4cf UID:0a5a348d-9766-4727-93ec-147703d44b68 NetNS:/var/run/netns/404137e4-2392-4ff4-9680-48f7cffed564 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:13:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:31.379294223Z" level=info msg="Checking pod openshift-monitoring_telemeter-client-5df7cd6cd7-cpr6n for CNI network multus-cni-network (type=multus)" Feb 23 17:13:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:31.387772138Z" 
level=info msg="Ran pod sandbox 6cf6d0bcdfd8248c38237257eece74b58cb7a23a7a19d8609c2b0ab53a64a4cf with infra container: openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n/POD" id=92cde1a9-82ab-4742-a0f4-01f61e52c490 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:13:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:31.390270196Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9a17e5d2a66467075003c942b645a31d4ed5d221bf60325328713b2784e65403" id=b8c12c63-981a-40aa-b810-36dcd3926b65 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:31.390615037Z" level=info msg="Image registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9a17e5d2a66467075003c942b645a31d4ed5d221bf60325328713b2784e65403 not found" id=b8c12c63-981a-40aa-b810-36dcd3926b65 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:31.391298108Z" level=info msg="Pulling image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9a17e5d2a66467075003c942b645a31d4ed5d221bf60325328713b2784e65403" id=16742845-646c-4699-adf1-314e33c8f973 name=/runtime.v1.ImageService/PullImage Feb 23 17:13:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:31.392243711Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9a17e5d2a66467075003c942b645a31d4ed5d221bf60325328713b2784e65403\"" Feb 23 17:13:31 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:31.443628 2112 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n] Feb 23 17:13:31 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:31.444251 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n" event=&{ID:0a5a348d-9766-4727-93ec-147703d44b68 Type:ContainerStarted 
Data:6cf6d0bcdfd8248c38237257eece74b58cb7a23a7a19d8609c2b0ab53a64a4cf} Feb 23 17:13:31 ip-10-0-136-68 systemd-udevd[56957]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. Feb 23 17:13:31 ip-10-0-136-68 systemd-udevd[56957]: Could not generate persistent MAC address for a935a237826dd4a: No such file or directory Feb 23 17:13:31 ip-10-0-136-68 NetworkManager[1147]: [1677172411.5031] manager: (a935a237826dd4a): new Veth device (/org/freedesktop/NetworkManager/Devices/60) Feb 23 17:13:31 ip-10-0-136-68 NetworkManager[1147]: [1677172411.5037] device (a935a237826dd4a): carrier: link connected Feb 23 17:13:31 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): a935a237826dd4a: link is not ready Feb 23 17:13:31 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): a935a237826dd4a: link becomes ready Feb 23 17:13:31 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00318|bridge|INFO|bridge br-int: added interface a935a237826dd4a on port 25 Feb 23 17:13:31 ip-10-0-136-68 NetworkManager[1147]: [1677172411.5297] manager: (a935a237826dd4a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/61) Feb 23 17:13:31 ip-10-0-136-68 kernel: device a935a237826dd4a entered promiscuous mode Feb 23 17:13:31 ip-10-0-136-68 crio[2062]: I0223 17:13:31.242621 56892 ovs.go:90] Maximum command line arguments set to: 191102 Feb 23 17:13:31 ip-10-0-136-68 crio[2062]: 2023-02-23T17:13:31Z [verbose] Add: openshift-monitoring:openshift-state-metrics-7df79db5c7-2clx8:aadb02e0-de11-41e9-9dc0-106e1d0fc545:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"e14bdaaa341fca1","mac":"8a:d7:c4:92:74:9f"},{"name":"eth0","mac":"0a:58:0a:81:02:19","sandbox":"/var/run/netns/94ffd883-9806-4091-b7f1-c2e08049ae3b"}],"ips":[{"version":"4","interface":1,"address":"10.129.2.25/23","gateway":"10.129.2.1"}],"dns":{}} Feb 23 17:13:31 ip-10-0-136-68 crio[2062]: I0223 17:13:31.383706 56884 event.go:282] Event(v1.ObjectReference{Kind:"Pod", 
Namespace:"openshift-monitoring", Name:"openshift-state-metrics-7df79db5c7-2clx8", UID:"aadb02e0-de11-41e9-9dc0-106e1d0fc545", APIVersion:"v1", ResourceVersion:"67973", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.129.2.25/23] from ovn-kubernetes Feb 23 17:13:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:31.628735730Z" level=info msg="Got pod network &{Name:openshift-state-metrics-7df79db5c7-2clx8 Namespace:openshift-monitoring ID:e14bdaaa341fca1606f9a0c3b7c37d9baae6f7be7df0452c453184abfdc328f8 UID:aadb02e0-de11-41e9-9dc0-106e1d0fc545 NetNS:/var/run/netns/94ffd883-9806-4091-b7f1-c2e08049ae3b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:13:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:31.628906931Z" level=info msg="Checking pod openshift-monitoring_openshift-state-metrics-7df79db5c7-2clx8 for CNI network multus-cni-network (type=multus)" Feb 23 17:13:31 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:13:31.631724 2112 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaadb02e0_de11_41e9_9dc0_106e1d0fc545.slice/crio-e14bdaaa341fca1606f9a0c3b7c37d9baae6f7be7df0452c453184abfdc328f8.scope WatchSource:0}: Error finding container e14bdaaa341fca1606f9a0c3b7c37d9baae6f7be7df0452c453184abfdc328f8: Status 404 returned error can't find the container with id e14bdaaa341fca1606f9a0c3b7c37d9baae6f7be7df0452c453184abfdc328f8 Feb 23 17:13:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:31.634785783Z" level=info msg="Ran pod sandbox e14bdaaa341fca1606f9a0c3b7c37d9baae6f7be7df0452c453184abfdc328f8 with infra container: openshift-monitoring/openshift-state-metrics-7df79db5c7-2clx8/POD" id=f25d1c7e-1304-4b3c-abd7-b00a5df716c2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:13:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:31.635521127Z" level=info msg="Checking image status: 
registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0" id=00fcede2-13ef-42a3-bc64-4ae1facb6e16 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:31.635789308Z" level=info msg="Image registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0 not found" id=00fcede2-13ef-42a3-bc64-4ae1facb6e16 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:31.636258573Z" level=info msg="Pulling image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0" id=ef7d4e38-318e-47e5-ac0d-3d3d0159814d name=/runtime.v1.ImageService/PullImage Feb 23 17:13:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:31.637137293Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0\"" Feb 23 17:13:31 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:31.673713 2112 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-monitoring/openshift-state-metrics-7df79db5c7-2clx8] Feb 23 17:13:31 ip-10-0-136-68 crio[2062]: I0223 17:13:31.478949 56944 ovs.go:90] Maximum command line arguments set to: 191102 Feb 23 17:13:31 ip-10-0-136-68 crio[2062]: 2023-02-23T17:13:31Z [verbose] Add: openshift-monitoring:kube-state-metrics-8d585644b-dckcc:4961f202-10a7-460b-8e62-ce7b7dbb8806:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"a935a237826dd4a","mac":"92:1b:bf:0b:63:56"},{"name":"eth0","mac":"0a:58:0a:81:02:1a","sandbox":"/var/run/netns/f629175d-e7a3-4679-aa09-96e74c78cb04"}],"ips":[{"version":"4","interface":1,"address":"10.129.2.26/23","gateway":"10.129.2.1"}],"dns":{}} Feb 23 17:13:31 ip-10-0-136-68 crio[2062]: I0223 17:13:31.575086 56904 
event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-monitoring", Name:"kube-state-metrics-8d585644b-dckcc", UID:"4961f202-10a7-460b-8e62-ce7b7dbb8806", APIVersion:"v1", ResourceVersion:"68030", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.129.2.26/23] from ovn-kubernetes Feb 23 17:13:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:31.748637584Z" level=info msg="Got pod network &{Name:kube-state-metrics-8d585644b-dckcc Namespace:openshift-monitoring ID:a935a237826dd4a0c291a0d7b4b2d6924e94cc35bd3969e76b0f75cf79426a1c UID:4961f202-10a7-460b-8e62-ce7b7dbb8806 NetNS:/var/run/netns/f629175d-e7a3-4679-aa09-96e74c78cb04 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:13:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:31.748804725Z" level=info msg="Checking pod openshift-monitoring_kube-state-metrics-8d585644b-dckcc for CNI network multus-cni-network (type=multus)" Feb 23 17:13:31 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:31.749996 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-t77mc\" (UniqueName: \"kubernetes.io/projected/e0abac93-3e79-4a32-8375-5ef1a2e59687-kube-api-access-t77mc\") pod \"ingress-canary-pjjrk\" (UID: \"e0abac93-3e79-4a32-8375-5ef1a2e59687\") " pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 17:13:31 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:13:31.751761 2112 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4961f202_10a7_460b_8e62_ce7b7dbb8806.slice/crio-a935a237826dd4a0c291a0d7b4b2d6924e94cc35bd3969e76b0f75cf79426a1c.scope WatchSource:0}: Error finding container a935a237826dd4a0c291a0d7b4b2d6924e94cc35bd3969e76b0f75cf79426a1c: Status 404 returned error can't find the container with id a935a237826dd4a0c291a0d7b4b2d6924e94cc35bd3969e76b0f75cf79426a1c Feb 23 17:13:31 ip-10-0-136-68 
crio[2062]: time="2023-02-23 17:13:31.753223512Z" level=info msg="Ran pod sandbox a935a237826dd4a0c291a0d7b4b2d6924e94cc35bd3969e76b0f75cf79426a1c with infra container: openshift-monitoring/kube-state-metrics-8d585644b-dckcc/POD" id=09b784f5-dafc-417c-bb38-9a6ca0436529 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:13:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:31.754023128Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:fd66defd59ad9ac6c45d01053333ed7970603ceae5c0e9fb53017f80861f2a8c" id=4e29a716-8b52-45b2-ad39-90a1e5be471b name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:31.754223906Z" level=info msg="Image registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:fd66defd59ad9ac6c45d01053333ed7970603ceae5c0e9fb53017f80861f2a8c not found" id=4e29a716-8b52-45b2-ad39-90a1e5be471b name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:31.755003245Z" level=info msg="Pulling image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:fd66defd59ad9ac6c45d01053333ed7970603ceae5c0e9fb53017f80861f2a8c" id=54c47139-e722-4633-b7d1-16f8b200e25a name=/runtime.v1.ImageService/PullImage Feb 23 17:13:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:31.755969285Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:fd66defd59ad9ac6c45d01053333ed7970603ceae5c0e9fb53017f80861f2a8c\"" Feb 23 17:13:31 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:31.801871 2112 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-monitoring/kube-state-metrics-8d585644b-dckcc] Feb 23 17:13:31 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:31.879181 2112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 17:13:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:31.879615494Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=da2c18af-20b2-4623-8ef2-aa87335bddf8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:13:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:31.879688814Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:13:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:31.898396427Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:13a3543931af50fb11a2d5bb79aa8a25ddd5f8fc251c224ac455e5c9a9d0605d UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/10b629ec-6fd9-4a7a-bdf3-191b484df0a5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:13:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:31.898422243Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:13:32 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:32.120424 2112 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=a704838c-aeb5-4709-b91c-2460423203a4 path="/var/lib/kubelet/pods/a704838c-aeb5-4709-b91c-2460423203a4/volumes" Feb 23 17:13:32 ip-10-0-136-68 systemd-udevd[56999]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. 
Feb 23 17:13:32 ip-10-0-136-68 NetworkManager[1147]: [1677172412.1235] manager: (13a3543931af50f): new Veth device (/org/freedesktop/NetworkManager/Devices/62) Feb 23 17:13:32 ip-10-0-136-68 systemd-udevd[56999]: Could not generate persistent MAC address for 13a3543931af50f: No such file or directory Feb 23 17:13:32 ip-10-0-136-68 NetworkManager[1147]: [1677172412.1245] device (13a3543931af50f): carrier: link connected Feb 23 17:13:32 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): 13a3543931af50f: link is not ready Feb 23 17:13:32 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): 13a3543931af50f: link becomes ready Feb 23 17:13:32 ip-10-0-136-68 NetworkManager[1147]: [1677172412.1518] manager: (13a3543931af50f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63) Feb 23 17:13:32 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00319|bridge|INFO|bridge br-int: added interface 13a3543931af50f on port 26 Feb 23 17:13:32 ip-10-0-136-68 kernel: device 13a3543931af50f entered promiscuous mode Feb 23 17:13:32 ip-10-0-136-68 crio[2062]: I0223 17:13:32.110270 56982 ovs.go:90] Maximum command line arguments set to: 191102 Feb 23 17:13:32 ip-10-0-136-68 crio[2062]: 2023-02-23T17:13:32Z [verbose] Add: openshift-ingress-canary:ingress-canary-pjjrk:e0abac93-3e79-4a32-8375-5ef1a2e59687:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"13a3543931af50f","mac":"42:96:95:8f:a0:65"},{"name":"eth0","mac":"0a:58:0a:81:02:1c","sandbox":"/var/run/netns/10b629ec-6fd9-4a7a-bdf3-191b484df0a5"}],"ips":[{"version":"4","interface":1,"address":"10.129.2.28/23","gateway":"10.129.2.1"}],"dns":{}} Feb 23 17:13:32 ip-10-0-136-68 crio[2062]: I0223 17:13:32.204980 56975 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-ingress-canary", Name:"ingress-canary-pjjrk", UID:"e0abac93-3e79-4a32-8375-5ef1a2e59687", APIVersion:"v1", ResourceVersion:"68040", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.129.2.28/23] 
from ovn-kubernetes Feb 23 17:13:32 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:32.227550203Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:13a3543931af50fb11a2d5bb79aa8a25ddd5f8fc251c224ac455e5c9a9d0605d UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/10b629ec-6fd9-4a7a-bdf3-191b484df0a5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:13:32 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:32.227690287Z" level=info msg="Checking pod openshift-ingress-canary_ingress-canary-pjjrk for CNI network multus-cni-network (type=multus)" Feb 23 17:13:32 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:13:32.230956 2112 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0abac93_3e79_4a32_8375_5ef1a2e59687.slice/crio-13a3543931af50fb11a2d5bb79aa8a25ddd5f8fc251c224ac455e5c9a9d0605d.scope WatchSource:0}: Error finding container 13a3543931af50fb11a2d5bb79aa8a25ddd5f8fc251c224ac455e5c9a9d0605d: Status 404 returned error can't find the container with id 13a3543931af50fb11a2d5bb79aa8a25ddd5f8fc251c224ac455e5c9a9d0605d Feb 23 17:13:32 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:32.233730445Z" level=info msg="Ran pod sandbox 13a3543931af50fb11a2d5bb79aa8a25ddd5f8fc251c224ac455e5c9a9d0605d with infra container: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=da2c18af-20b2-4623-8ef2-aa87335bddf8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:13:32 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:32.234483240Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:aa34e2c327cf462e6a0e3661f0e0a1e5f8497643901c2df5b7793c00fe6df072" id=9092d3ec-2595-4d80-8156-476affc9c3b5 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:32 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:32.234725548Z" level=info 
msg="Image registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:aa34e2c327cf462e6a0e3661f0e0a1e5f8497643901c2df5b7793c00fe6df072 not found" id=9092d3ec-2595-4d80-8156-476affc9c3b5 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:32 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:32.235229645Z" level=info msg="Pulling image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:aa34e2c327cf462e6a0e3661f0e0a1e5f8497643901c2df5b7793c00fe6df072" id=a4806cb4-633f-4ba1-9086-4076f379a8e2 name=/runtime.v1.ImageService/PullImage Feb 23 17:13:32 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:32.236046423Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:aa34e2c327cf462e6a0e3661f0e0a1e5f8497643901c2df5b7793c00fe6df072\"" Feb 23 17:13:32 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:32.270882 2112 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-ingress-canary/ingress-canary-pjjrk] Feb 23 17:13:32 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:32.447408 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-8d585644b-dckcc" event=&{ID:4961f202-10a7-460b-8e62-ce7b7dbb8806 Type:ContainerStarted Data:a935a237826dd4a0c291a0d7b4b2d6924e94cc35bd3969e76b0f75cf79426a1c} Feb 23 17:13:32 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:32.448027 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-7df79db5c7-2clx8" event=&{ID:aadb02e0-de11-41e9-9dc0-106e1d0fc545 Type:ContainerStarted Data:e14bdaaa341fca1606f9a0c3b7c37d9baae6f7be7df0452c453184abfdc328f8} Feb 23 17:13:32 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:32.448542 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-pjjrk" event=&{ID:e0abac93-3e79-4a32-8375-5ef1a2e59687 Type:ContainerStarted Data:13a3543931af50fb11a2d5bb79aa8a25ddd5f8fc251c224ac455e5c9a9d0605d} Feb 23 17:13:32 ip-10-0-136-68 
crio[2062]: time="2023-02-23 17:13:32.899105081Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0\"" Feb 23 17:13:32 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:32.956000063Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:fd66defd59ad9ac6c45d01053333ed7970603ceae5c0e9fb53017f80861f2a8c\"" Feb 23 17:13:33 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:33.299245482Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9a17e5d2a66467075003c942b645a31d4ed5d221bf60325328713b2784e65403\"" Feb 23 17:13:33 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:33.871173933Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:aa34e2c327cf462e6a0e3661f0e0a1e5f8497643901c2df5b7793c00fe6df072\"" Feb 23 17:13:36 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:36.148019504Z" level=info msg="Pulled image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:0f2284b974da7a5af712db888818e4bbd7604e604bc70c86ce8bc75c8f73457d" id=0925a7f7-4fa8-43ac-a192-a5751b6e43bf name=/runtime.v1.ImageService/PullImage Feb 23 17:13:36 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:36.149336374Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:0f2284b974da7a5af712db888818e4bbd7604e604bc70c86ce8bc75c8f73457d" id=f629ac8f-c7f7-46d5-a546-89a015cb7ea4 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:36 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:36.150806717Z" level=info msg="Image status: 
&ImageStatusResponse{Image:&Image{Id:51eee535b46f8fa059a614084a60e25b9d7f27cc61dacbc265696e915f022f0f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:0f2284b974da7a5af712db888818e4bbd7604e604bc70c86ce8bc75c8f73457d],Size_:359941570,Uid:&Int64Value{Value:1001,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=f629ac8f-c7f7-46d5-a546-89a015cb7ea4 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:36 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:36.154950575Z" level=info msg="Creating container: openshift-monitoring/alertmanager-main-1/config-reloader" id=a7aa1a8f-df6b-490b-ae65-6aea35716ba0 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:13:36 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:36.155043935Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:13:36 ip-10-0-136-68 systemd[1]: Started crio-conmon-ad26c6eaf15076a267fe3664a852aaae952a90c09961633c76eeb79cae2c68ac.scope. Feb 23 17:13:36 ip-10-0-136-68 systemd[1]: Started libcontainer container ad26c6eaf15076a267fe3664a852aaae952a90c09961633c76eeb79cae2c68ac. 
Feb 23 17:13:36 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:36.308571896Z" level=info msg="Created container ad26c6eaf15076a267fe3664a852aaae952a90c09961633c76eeb79cae2c68ac: openshift-monitoring/alertmanager-main-1/config-reloader" id=a7aa1a8f-df6b-490b-ae65-6aea35716ba0 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:13:36 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:36.308976975Z" level=info msg="Starting container: ad26c6eaf15076a267fe3664a852aaae952a90c09961633c76eeb79cae2c68ac" id=50234809-efc3-4843-b92a-47694c0f690e name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:13:36 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:36.315915598Z" level=info msg="Started container" PID=57129 containerID=ad26c6eaf15076a267fe3664a852aaae952a90c09961633c76eeb79cae2c68ac description=openshift-monitoring/alertmanager-main-1/config-reloader id=50234809-efc3-4843-b92a-47694c0f690e name=/runtime.v1.RuntimeService/StartContainer sandboxID=663ed760da6d8b1ab7f3854fc18ffbb30231f0ccec6ea05a4feac0f8f3a8d79a Feb 23 17:13:36 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:36.336748444Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce1de860b639d4831bb74b0271aed6882b45febb37f60bcf8a31157b87ce0495" id=4aacfa0b-c5bc-441f-b82f-7e9d00203e21 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:36 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:36.336923459Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:d62409a7cb62709d4c960d445d571fc81522092e7114db20e1ad84bf57c96f31,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce1de860b639d4831bb74b0271aed6882b45febb37f60bcf8a31157b87ce0495],Size_:353087996,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=4aacfa0b-c5bc-441f-b82f-7e9d00203e21 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:36 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:36.337509290Z" level=info msg="Checking image status: 
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce1de860b639d4831bb74b0271aed6882b45febb37f60bcf8a31157b87ce0495" id=fe2bd95a-7520-403a-a967-1e8eabf17f13 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:36 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:36.337652144Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:d62409a7cb62709d4c960d445d571fc81522092e7114db20e1ad84bf57c96f31,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce1de860b639d4831bb74b0271aed6882b45febb37f60bcf8a31157b87ce0495],Size_:353087996,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=fe2bd95a-7520-403a-a967-1e8eabf17f13 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:36 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:36.338459790Z" level=info msg="Creating container: openshift-monitoring/alertmanager-main-1/alertmanager-proxy" id=9069923a-e274-4b6d-8ed3-b1019b671044 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:13:36 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:36.338548113Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:13:36 ip-10-0-136-68 systemd[1]: Started crio-conmon-11a751dd154fd575cb11f9a49c150b0fef7ce704095a3e482fae4ee6e420ba8d.scope. Feb 23 17:13:36 ip-10-0-136-68 systemd[1]: Started libcontainer container 11a751dd154fd575cb11f9a49c150b0fef7ce704095a3e482fae4ee6e420ba8d. 
Feb 23 17:13:36 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:36.449792634Z" level=info msg="Created container 11a751dd154fd575cb11f9a49c150b0fef7ce704095a3e482fae4ee6e420ba8d: openshift-monitoring/alertmanager-main-1/alertmanager-proxy" id=9069923a-e274-4b6d-8ed3-b1019b671044 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:13:36 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:36.450198921Z" level=info msg="Starting container: 11a751dd154fd575cb11f9a49c150b0fef7ce704095a3e482fae4ee6e420ba8d" id=5d02d2a9-36f4-4029-8614-63b04df79c37 name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:13:36 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:36.457489804Z" level=info msg="Started container" PID=57171 containerID=11a751dd154fd575cb11f9a49c150b0fef7ce704095a3e482fae4ee6e420ba8d description=openshift-monitoring/alertmanager-main-1/alertmanager-proxy id=5d02d2a9-36f4-4029-8614-63b04df79c37 name=/runtime.v1.RuntimeService/StartContainer sandboxID=663ed760da6d8b1ab7f3854fc18ffbb30231f0ccec6ea05a4feac0f8f3a8d79a Feb 23 17:13:36 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:36.457595 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-1_cd707766-c226-46f5-b391-aa1689d95e81/alertmanager/0.log" Feb 23 17:13:36 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:36.457640 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:cd707766-c226-46f5-b391-aa1689d95e81 Type:ContainerStarted Data:ad26c6eaf15076a267fe3664a852aaae952a90c09961633c76eeb79cae2c68ac} Feb 23 17:13:36 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:36.466352583Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=292c1085-03b4-4437-9968-9f82a6115e40 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:36 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:36.466532719Z" 
level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=292c1085-03b4-4437-9968-9f82a6115e40 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:36 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:36.467173174Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=971db3b7-876d-4de8-8cbf-d0e006bc9e09 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:36 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:36.467333026Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=971db3b7-876d-4de8-8cbf-d0e006bc9e09 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:36 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:36.468483072Z" level=info msg="Creating container: openshift-monitoring/alertmanager-main-1/kube-rbac-proxy" id=39381041-5f00-4b39-9224-b577bbc984be name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:13:36 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:36.468589667Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:13:36 ip-10-0-136-68 systemd[1]: Started crio-conmon-3bc05b8ceb4f61ff81ed707a4ba218ace88d8ccfa1bd54f1bb0b0e92dfdd880c.scope. 
Feb 23 17:13:36 ip-10-0-136-68 systemd[1]: Started libcontainer container 3bc05b8ceb4f61ff81ed707a4ba218ace88d8ccfa1bd54f1bb0b0e92dfdd880c. Feb 23 17:13:36 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:36.566788252Z" level=info msg="Created container 3bc05b8ceb4f61ff81ed707a4ba218ace88d8ccfa1bd54f1bb0b0e92dfdd880c: openshift-monitoring/alertmanager-main-1/kube-rbac-proxy" id=39381041-5f00-4b39-9224-b577bbc984be name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:13:36 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:36.567189286Z" level=info msg="Starting container: 3bc05b8ceb4f61ff81ed707a4ba218ace88d8ccfa1bd54f1bb0b0e92dfdd880c" id=b60db8da-6012-4461-b74a-0924fff8a2f5 name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:13:36 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:36.573923953Z" level=info msg="Started container" PID=57217 containerID=3bc05b8ceb4f61ff81ed707a4ba218ace88d8ccfa1bd54f1bb0b0e92dfdd880c description=openshift-monitoring/alertmanager-main-1/kube-rbac-proxy id=b60db8da-6012-4461-b74a-0924fff8a2f5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=663ed760da6d8b1ab7f3854fc18ffbb30231f0ccec6ea05a4feac0f8f3a8d79a Feb 23 17:13:36 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:36.582752339Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=f220c855-dada-42f2-b611-fa3088a7ae2d name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:36 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:36.582939019Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=f220c855-dada-42f2-b611-fa3088a7ae2d 
name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:36 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:36.584444279Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3" id=f19aa24d-77ea-4d34-9d32-ac7e15c33082 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:36 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:36.584787950Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e3b3b2a0d627c23a0ad1b13c84e449cbe42c4902ae052c41a744ff75773fe364,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:828be510029fdb379732cb7798a70d5433cb4921f3a99d73342c2d452d3b40c3],Size_:406243263,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=f19aa24d-77ea-4d34-9d32-ac7e15c33082 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:36 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:36.585505450Z" level=info msg="Creating container: openshift-monitoring/alertmanager-main-1/kube-rbac-proxy-metric" id=4db991fd-7164-47b7-9527-7a1da297b653 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:13:36 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:36.585593539Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:13:36 ip-10-0-136-68 systemd[1]: Started crio-conmon-eef58967a001261e664f42e3b77d44745830fcdaeed59a49b76c5d71402aa299.scope. Feb 23 17:13:36 ip-10-0-136-68 systemd[1]: Started libcontainer container eef58967a001261e664f42e3b77d44745830fcdaeed59a49b76c5d71402aa299. 
Feb 23 17:13:36 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:36.682934393Z" level=info msg="Created container eef58967a001261e664f42e3b77d44745830fcdaeed59a49b76c5d71402aa299: openshift-monitoring/alertmanager-main-1/kube-rbac-proxy-metric" id=4db991fd-7164-47b7-9527-7a1da297b653 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:13:36 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:36.683304956Z" level=info msg="Starting container: eef58967a001261e664f42e3b77d44745830fcdaeed59a49b76c5d71402aa299" id=4e9c9aca-0693-4d42-b432-82f7dd6eb188 name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:13:36 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:36.690204405Z" level=info msg="Started container" PID=57264 containerID=eef58967a001261e664f42e3b77d44745830fcdaeed59a49b76c5d71402aa299 description=openshift-monitoring/alertmanager-main-1/kube-rbac-proxy-metric id=4e9c9aca-0693-4d42-b432-82f7dd6eb188 name=/runtime.v1.RuntimeService/StartContainer sandboxID=663ed760da6d8b1ab7f3854fc18ffbb30231f0ccec6ea05a4feac0f8f3a8d79a Feb 23 17:13:36 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:36.703139792Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1ac41d1f9f68b478647df3bed79ae2cd3ca1e8fb93e31de0fb82c2c9e6ff6fed" id=fd7a8594-4129-4885-b3dd-430f80d8a53a name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:36 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:36.703316603Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:4b5544f2b1fb54d82b04ad030305d937195d3556ba12e42d312ef4784079861b,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1ac41d1f9f68b478647df3bed79ae2cd3ca1e8fb93e31de0fb82c2c9e6ff6fed],Size_:325560759,Uid:&Int64Value{Value:1001,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=fd7a8594-4129-4885-b3dd-430f80d8a53a name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:36 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:36.704138457Z" 
level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1ac41d1f9f68b478647df3bed79ae2cd3ca1e8fb93e31de0fb82c2c9e6ff6fed" id=1ce6918c-d6ff-4ad6-8d2e-8ecc1fb67980 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:36 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:36.704311000Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:4b5544f2b1fb54d82b04ad030305d937195d3556ba12e42d312ef4784079861b,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1ac41d1f9f68b478647df3bed79ae2cd3ca1e8fb93e31de0fb82c2c9e6ff6fed],Size_:325560759,Uid:&Int64Value{Value:1001,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=1ce6918c-d6ff-4ad6-8d2e-8ecc1fb67980 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:36 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:36.705082361Z" level=info msg="Creating container: openshift-monitoring/alertmanager-main-1/prom-label-proxy" id=c4c09585-2bd5-4784-bc17-fa553eb386dd name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:13:36 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:36.705172403Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:13:36 ip-10-0-136-68 systemd[1]: Started crio-conmon-1520f25b69f54c3bc786cde8d201eaa21e667c0b0abf6369eb8a0a9b308356fd.scope. Feb 23 17:13:36 ip-10-0-136-68 systemd[1]: Started libcontainer container 1520f25b69f54c3bc786cde8d201eaa21e667c0b0abf6369eb8a0a9b308356fd. 
Feb 23 17:13:36 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:36.815185008Z" level=info msg="Created container 1520f25b69f54c3bc786cde8d201eaa21e667c0b0abf6369eb8a0a9b308356fd: openshift-monitoring/alertmanager-main-1/prom-label-proxy" id=c4c09585-2bd5-4784-bc17-fa553eb386dd name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:13:36 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:36.815576700Z" level=info msg="Starting container: 1520f25b69f54c3bc786cde8d201eaa21e667c0b0abf6369eb8a0a9b308356fd" id=e87b1c2b-f648-40f2-8c39-26d237e97d6a name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:13:36 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:36.823262025Z" level=info msg="Started container" PID=57310 containerID=1520f25b69f54c3bc786cde8d201eaa21e667c0b0abf6369eb8a0a9b308356fd description=openshift-monitoring/alertmanager-main-1/prom-label-proxy id=e87b1c2b-f648-40f2-8c39-26d237e97d6a name=/runtime.v1.RuntimeService/StartContainer sandboxID=663ed760da6d8b1ab7f3854fc18ffbb30231f0ccec6ea05a4feac0f8f3a8d79a Feb 23 17:13:37 ip-10-0-136-68 systemd[1]: run-runc-11a751dd154fd575cb11f9a49c150b0fef7ce704095a3e482fae4ee6e420ba8d-runc.NNX7Yi.mount: Succeeded. 
Feb 23 17:13:37 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:37.796941 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-1_cd707766-c226-46f5-b391-aa1689d95e81/alertmanager/0.log" Feb 23 17:13:37 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:37.797009 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:cd707766-c226-46f5-b391-aa1689d95e81 Type:ContainerStarted Data:11a751dd154fd575cb11f9a49c150b0fef7ce704095a3e482fae4ee6e420ba8d} Feb 23 17:13:37 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:37.797035 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:cd707766-c226-46f5-b391-aa1689d95e81 Type:ContainerStarted Data:1520f25b69f54c3bc786cde8d201eaa21e667c0b0abf6369eb8a0a9b308356fd} Feb 23 17:13:37 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:37.797051 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:cd707766-c226-46f5-b391-aa1689d95e81 Type:ContainerStarted Data:eef58967a001261e664f42e3b77d44745830fcdaeed59a49b76c5d71402aa299} Feb 23 17:13:37 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:37.797066 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:cd707766-c226-46f5-b391-aa1689d95e81 Type:ContainerStarted Data:3bc05b8ceb4f61ff81ed707a4ba218ace88d8ccfa1bd54f1bb0b0e92dfdd880c} Feb 23 17:13:37 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:37.799627 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-1" podUID=cd707766-c226-46f5-b391-aa1689d95e81 containerName="config-reloader" containerID="cri-o://ad26c6eaf15076a267fe3664a852aaae952a90c09961633c76eeb79cae2c68ac" gracePeriod=120 Feb 23 17:13:37 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:37.799717 2112 kuberuntime_container.go:702] "Killing 
container with a grace period" pod="openshift-monitoring/alertmanager-main-1" podUID=cd707766-c226-46f5-b391-aa1689d95e81 containerName="prom-label-proxy" containerID="cri-o://1520f25b69f54c3bc786cde8d201eaa21e667c0b0abf6369eb8a0a9b308356fd" gracePeriod=120 Feb 23 17:13:37 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:37.799848 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-1" podUID=cd707766-c226-46f5-b391-aa1689d95e81 containerName="kube-rbac-proxy" containerID="cri-o://3bc05b8ceb4f61ff81ed707a4ba218ace88d8ccfa1bd54f1bb0b0e92dfdd880c" gracePeriod=120 Feb 23 17:13:37 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:37.799886 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-1" podUID=cd707766-c226-46f5-b391-aa1689d95e81 containerName="alertmanager-proxy" containerID="cri-o://11a751dd154fd575cb11f9a49c150b0fef7ce704095a3e482fae4ee6e420ba8d" gracePeriod=120 Feb 23 17:13:37 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:37.799904 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-1" podUID=cd707766-c226-46f5-b391-aa1689d95e81 containerName="kube-rbac-proxy-metric" containerID="cri-o://eef58967a001261e664f42e3b77d44745830fcdaeed59a49b76c5d71402aa299" gracePeriod=120 Feb 23 17:13:37 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:37.835495250Z" level=info msg="Stopping container: 11a751dd154fd575cb11f9a49c150b0fef7ce704095a3e482fae4ee6e420ba8d (timeout: 120s)" id=20b7fc9c-a8f5-4e6c-b6b7-38c7bce09f9d name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:13:37 ip-10-0-136-68 conmon[57158]: conmon 11a751dd154fd575cb11 : container 57171 exited with status 2 Feb 23 17:13:37 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:37.863612208Z" level=info msg="Stopping container: 1520f25b69f54c3bc786cde8d201eaa21e667c0b0abf6369eb8a0a9b308356fd (timeout: 120s)" 
id=ee3bb649-d17d-4ce0-af3d-9a026fe341e8 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:13:37 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:37.866307823Z" level=info msg="Stopping container: ad26c6eaf15076a267fe3664a852aaae952a90c09961633c76eeb79cae2c68ac (timeout: 120s)" id=16317d0b-073c-4ac3-b384-694126be72b6 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:13:37 ip-10-0-136-68 systemd[1]: crio-11a751dd154fd575cb11f9a49c150b0fef7ce704095a3e482fae4ee6e420ba8d.scope: Succeeded. Feb 23 17:13:37 ip-10-0-136-68 systemd[1]: crio-11a751dd154fd575cb11f9a49c150b0fef7ce704095a3e482fae4ee6e420ba8d.scope: Consumed 64ms CPU time Feb 23 17:13:37 ip-10-0-136-68 systemd[1]: crio-conmon-11a751dd154fd575cb11f9a49c150b0fef7ce704095a3e482fae4ee6e420ba8d.scope: Succeeded. Feb 23 17:13:37 ip-10-0-136-68 systemd[1]: crio-conmon-11a751dd154fd575cb11f9a49c150b0fef7ce704095a3e482fae4ee6e420ba8d.scope: Consumed 25ms CPU time Feb 23 17:13:37 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:37.878686489Z" level=info msg="Stopping container: 3bc05b8ceb4f61ff81ed707a4ba218ace88d8ccfa1bd54f1bb0b0e92dfdd880c (timeout: 120s)" id=3eb7c153-ab7a-499a-9b0a-da8df337b7b7 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:13:37 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:37.898731848Z" level=info msg="Stopping container: eef58967a001261e664f42e3b77d44745830fcdaeed59a49b76c5d71402aa299 (timeout: 120s)" id=91f6ffd3-963e-4da8-bc17-75e429a05de7 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:13:37 ip-10-0-136-68 conmon[57116]: conmon ad26c6eaf15076a267fe : container 57129 exited with status 2 Feb 23 17:13:37 ip-10-0-136-68 systemd[1]: crio-ad26c6eaf15076a267fe3664a852aaae952a90c09961633c76eeb79cae2c68ac.scope: Succeeded. 
Feb 23 17:13:37 ip-10-0-136-68 systemd[1]: crio-ad26c6eaf15076a267fe3664a852aaae952a90c09961633c76eeb79cae2c68ac.scope: Consumed 24ms CPU time Feb 23 17:13:37 ip-10-0-136-68 systemd[1]: crio-conmon-ad26c6eaf15076a267fe3664a852aaae952a90c09961633c76eeb79cae2c68ac.scope: Succeeded. Feb 23 17:13:37 ip-10-0-136-68 systemd[1]: crio-conmon-ad26c6eaf15076a267fe3664a852aaae952a90c09961633c76eeb79cae2c68ac.scope: Consumed 25ms CPU time Feb 23 17:13:37 ip-10-0-136-68 systemd[1]: crio-1520f25b69f54c3bc786cde8d201eaa21e667c0b0abf6369eb8a0a9b308356fd.scope: Succeeded. Feb 23 17:13:37 ip-10-0-136-68 systemd[1]: crio-1520f25b69f54c3bc786cde8d201eaa21e667c0b0abf6369eb8a0a9b308356fd.scope: Consumed 27ms CPU time Feb 23 17:13:37 ip-10-0-136-68 systemd[1]: crio-3bc05b8ceb4f61ff81ed707a4ba218ace88d8ccfa1bd54f1bb0b0e92dfdd880c.scope: Succeeded. Feb 23 17:13:37 ip-10-0-136-68 systemd[1]: crio-3bc05b8ceb4f61ff81ed707a4ba218ace88d8ccfa1bd54f1bb0b0e92dfdd880c.scope: Consumed 53ms CPU time Feb 23 17:13:37 ip-10-0-136-68 systemd[1]: crio-conmon-1520f25b69f54c3bc786cde8d201eaa21e667c0b0abf6369eb8a0a9b308356fd.scope: Succeeded. Feb 23 17:13:37 ip-10-0-136-68 systemd[1]: crio-conmon-1520f25b69f54c3bc786cde8d201eaa21e667c0b0abf6369eb8a0a9b308356fd.scope: Consumed 23ms CPU time Feb 23 17:13:37 ip-10-0-136-68 systemd[1]: crio-conmon-3bc05b8ceb4f61ff81ed707a4ba218ace88d8ccfa1bd54f1bb0b0e92dfdd880c.scope: Succeeded. 
Feb 23 17:13:37 ip-10-0-136-68 systemd[1]: crio-conmon-3bc05b8ceb4f61ff81ed707a4ba218ace88d8ccfa1bd54f1bb0b0e92dfdd880c.scope: Consumed 24ms CPU time
Feb 23 17:13:37 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:13:37.955865 2112 manager.go:698] Error getting data for container /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd707766_c226_46f5_b391_aa1689d95e81.slice/crio-conmon-ad26c6eaf15076a267fe3664a852aaae952a90c09961633c76eeb79cae2c68ac.scope because of race condition
Feb 23 17:13:38 ip-10-0-136-68 systemd[1]: crio-eef58967a001261e664f42e3b77d44745830fcdaeed59a49b76c5d71402aa299.scope: Succeeded.
Feb 23 17:13:38 ip-10-0-136-68 systemd[1]: crio-eef58967a001261e664f42e3b77d44745830fcdaeed59a49b76c5d71402aa299.scope: Consumed 54ms CPU time
Feb 23 17:13:38 ip-10-0-136-68 systemd[1]: crio-conmon-eef58967a001261e664f42e3b77d44745830fcdaeed59a49b76c5d71402aa299.scope: Succeeded.
Feb 23 17:13:38 ip-10-0-136-68 systemd[1]: crio-conmon-eef58967a001261e664f42e3b77d44745830fcdaeed59a49b76c5d71402aa299.scope: Consumed 24ms CPU time
Feb 23 17:13:38 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-6054c492313189975c7aca252b7a5e3d40b7080a9b33a72daf9c6d1527f957c5-merged.mount: Succeeded.
Feb 23 17:13:38 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:38.790144427Z" level=info msg="Stopped container 3bc05b8ceb4f61ff81ed707a4ba218ace88d8ccfa1bd54f1bb0b0e92dfdd880c: openshift-monitoring/alertmanager-main-1/kube-rbac-proxy" id=3eb7c153-ab7a-499a-9b0a-da8df337b7b7 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:13:38 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-417cd138597f968178f80dc989104ed4052db1e9971d6ad8c72686e13679021c-merged.mount: Succeeded.
Feb 23 17:13:38 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:38.800397 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-1_cd707766-c226-46f5-b391-aa1689d95e81/alertmanager-proxy/0.log"
Feb 23 17:13:38 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:38.800732 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-1_cd707766-c226-46f5-b391-aa1689d95e81/config-reloader/0.log"
Feb 23 17:13:38 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:38.801035 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-1_cd707766-c226-46f5-b391-aa1689d95e81/alertmanager/0.log"
Feb 23 17:13:38 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:38.801078 2112 generic.go:296] "Generic (PLEG): container finished" podID=cd707766-c226-46f5-b391-aa1689d95e81 containerID="1520f25b69f54c3bc786cde8d201eaa21e667c0b0abf6369eb8a0a9b308356fd" exitCode=0
Feb 23 17:13:38 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:38.801094 2112 generic.go:296] "Generic (PLEG): container finished" podID=cd707766-c226-46f5-b391-aa1689d95e81 containerID="eef58967a001261e664f42e3b77d44745830fcdaeed59a49b76c5d71402aa299" exitCode=0
Feb 23 17:13:38 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:38.801106 2112 generic.go:296] "Generic (PLEG): container finished" podID=cd707766-c226-46f5-b391-aa1689d95e81 containerID="3bc05b8ceb4f61ff81ed707a4ba218ace88d8ccfa1bd54f1bb0b0e92dfdd880c" exitCode=0
Feb 23 17:13:38 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:38.801122 2112 generic.go:296] "Generic (PLEG): container finished" podID=cd707766-c226-46f5-b391-aa1689d95e81 containerID="11a751dd154fd575cb11f9a49c150b0fef7ce704095a3e482fae4ee6e420ba8d" exitCode=2
Feb 23 17:13:38 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:38.801138 2112 generic.go:296] "Generic (PLEG): container finished" podID=cd707766-c226-46f5-b391-aa1689d95e81 containerID="ad26c6eaf15076a267fe3664a852aaae952a90c09961633c76eeb79cae2c68ac" exitCode=2
Feb 23 17:13:38 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:38.801155 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:cd707766-c226-46f5-b391-aa1689d95e81 Type:ContainerDied Data:1520f25b69f54c3bc786cde8d201eaa21e667c0b0abf6369eb8a0a9b308356fd}
Feb 23 17:13:38 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:38.801184 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:cd707766-c226-46f5-b391-aa1689d95e81 Type:ContainerDied Data:eef58967a001261e664f42e3b77d44745830fcdaeed59a49b76c5d71402aa299}
Feb 23 17:13:38 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:38.801198 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:cd707766-c226-46f5-b391-aa1689d95e81 Type:ContainerDied Data:3bc05b8ceb4f61ff81ed707a4ba218ace88d8ccfa1bd54f1bb0b0e92dfdd880c}
Feb 23 17:13:38 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:38.801214 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:cd707766-c226-46f5-b391-aa1689d95e81 Type:ContainerDied Data:11a751dd154fd575cb11f9a49c150b0fef7ce704095a3e482fae4ee6e420ba8d}
Feb 23 17:13:38 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:38.801227 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:cd707766-c226-46f5-b391-aa1689d95e81 Type:ContainerDied Data:ad26c6eaf15076a267fe3664a852aaae952a90c09961633c76eeb79cae2c68ac}
Feb 23 17:13:38 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:38.802224125Z" level=info msg="Stopped container 1520f25b69f54c3bc786cde8d201eaa21e667c0b0abf6369eb8a0a9b308356fd: openshift-monitoring/alertmanager-main-1/prom-label-proxy" id=ee3bb649-d17d-4ce0-af3d-9a026fe341e8 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:13:38 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-1df4459d75ba2b824e99b51f4a81e13ffd8f37e09e2457a16574091ca5c8f3fa-merged.mount: Succeeded.
Feb 23 17:13:38 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:38.814771904Z" level=info msg="Stopped container eef58967a001261e664f42e3b77d44745830fcdaeed59a49b76c5d71402aa299: openshift-monitoring/alertmanager-main-1/kube-rbac-proxy-metric" id=91f6ffd3-963e-4da8-bc17-75e429a05de7 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:13:38 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-65f175a966ba09416ebf18048877f1e50bdbd88e4b4f11c7ad17c8be9b6d61cf-merged.mount: Succeeded.
Feb 23 17:13:38 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:38.826968664Z" level=info msg="Stopped container ad26c6eaf15076a267fe3664a852aaae952a90c09961633c76eeb79cae2c68ac: openshift-monitoring/alertmanager-main-1/config-reloader" id=16317d0b-073c-4ac3-b384-694126be72b6 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:13:38 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-f464cf26b82cf166778b983685be855abae417cc070ea38adf16fb6173366dcb-merged.mount: Succeeded.
Feb 23 17:13:38 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:38.847708474Z" level=info msg="Stopped container 11a751dd154fd575cb11f9a49c150b0fef7ce704095a3e482fae4ee6e420ba8d: openshift-monitoring/alertmanager-main-1/alertmanager-proxy" id=20b7fc9c-a8f5-4e6c-b6b7-38c7bce09f9d name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:13:38 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:38.848001529Z" level=info msg="Stopping pod sandbox: 663ed760da6d8b1ab7f3854fc18ffbb30231f0ccec6ea05a4feac0f8f3a8d79a" id=35e36653-971f-41dd-ad51-c36c3761eee7 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:13:38 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:38.848186805Z" level=info msg="Got pod network &{Name:alertmanager-main-1 Namespace:openshift-monitoring ID:663ed760da6d8b1ab7f3854fc18ffbb30231f0ccec6ea05a4feac0f8f3a8d79a UID:cd707766-c226-46f5-b391-aa1689d95e81 NetNS:/var/run/netns/40b38e0a-d519-47fc-a9db-2e617e9024f3 Networks:[{Name:multus-cni-network Ifname:eth0}] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 17:13:38 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:38.848285470Z" level=info msg="Deleting pod openshift-monitoring_alertmanager-main-1 from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 17:13:39 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00320|bridge|INFO|bridge br-int: deleted interface 663ed760da6d8b1 on port 22
Feb 23 17:13:39 ip-10-0-136-68 kernel: device 663ed760da6d8b1 left promiscuous mode
Feb 23 17:13:39 ip-10-0-136-68 crio[2062]: 2023-02-23T17:13:38Z [verbose] Del: openshift-monitoring:alertmanager-main-1:cd707766-c226-46f5-b391-aa1689d95e81:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","dns":{},"ipam":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxage":5,"logfile-maxbackups":5,"logfile-maxsize":100,"name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay"}
Feb 23 17:13:39 ip-10-0-136-68 crio[2062]: I0223 17:13:38.995982 57517 ovs.go:90] Maximum command line arguments set to: 191102
Feb 23 17:13:39 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:39.747120179Z" level=warning msg="Found defunct process with PID 56749 (haproxy)"
Feb 23 17:13:39 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:39.747186053Z" level=warning msg="Found defunct process with PID 56806 (haproxy)"
Feb 23 17:13:39 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:39.747227871Z" level=warning msg="Found defunct process with PID 57033 (haproxy)"
Feb 23 17:13:39 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-4f4a49aaae19759f2afaf77a54db3fdb55bfdaa6c23d8d3f501289beb1f8ecc3-merged.mount: Succeeded.
Feb 23 17:13:39 ip-10-0-136-68 systemd[1]: run-utsns-40b38e0a\x2dd519\x2d47fc\x2da9db\x2d2e617e9024f3.mount: Succeeded.
Feb 23 17:13:39 ip-10-0-136-68 systemd[1]: run-ipcns-40b38e0a\x2dd519\x2d47fc\x2da9db\x2d2e617e9024f3.mount: Succeeded.
Feb 23 17:13:39 ip-10-0-136-68 systemd[1]: run-netns-40b38e0a\x2dd519\x2d47fc\x2da9db\x2d2e617e9024f3.mount: Succeeded.
Feb 23 17:13:39 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:39.939807182Z" level=info msg="Stopped pod sandbox: 663ed760da6d8b1ab7f3854fc18ffbb30231f0ccec6ea05a4feac0f8f3a8d79a" id=35e36653-971f-41dd-ad51-c36c3761eee7 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:13:39 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:39.947821 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-1_cd707766-c226-46f5-b391-aa1689d95e81/alertmanager-proxy/0.log"
Feb 23 17:13:39 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:39.948306 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-1_cd707766-c226-46f5-b391-aa1689d95e81/config-reloader/0.log"
Feb 23 17:13:39 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:39.948620 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-1_cd707766-c226-46f5-b391-aa1689d95e81/alertmanager/0.log"
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.139597 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/cd707766-c226-46f5-b391-aa1689d95e81-web-config\") pod \"cd707766-c226-46f5-b391-aa1689d95e81\" (UID: \"cd707766-c226-46f5-b391-aa1689d95e81\") "
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.139645 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/cd707766-c226-46f5-b391-aa1689d95e81-alertmanager-main-db\") pod \"cd707766-c226-46f5-b391-aa1689d95e81\" (UID: \"cd707766-c226-46f5-b391-aa1689d95e81\") "
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.139699 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/cd707766-c226-46f5-b391-aa1689d95e81-config-volume\") pod \"cd707766-c226-46f5-b391-aa1689d95e81\" (UID: \"cd707766-c226-46f5-b391-aa1689d95e81\") "
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.139733 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/cd707766-c226-46f5-b391-aa1689d95e81-tls-assets\") pod \"cd707766-c226-46f5-b391-aa1689d95e81\" (UID: \"cd707766-c226-46f5-b391-aa1689d95e81\") "
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.139783 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/cd707766-c226-46f5-b391-aa1689d95e81-secret-alertmanager-kube-rbac-proxy\") pod \"cd707766-c226-46f5-b391-aa1689d95e81\" (UID: \"cd707766-c226-46f5-b391-aa1689d95e81\") "
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.139811 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfdgs\" (UniqueName: \"kubernetes.io/projected/cd707766-c226-46f5-b391-aa1689d95e81-kube-api-access-kfdgs\") pod \"cd707766-c226-46f5-b391-aa1689d95e81\" (UID: \"cd707766-c226-46f5-b391-aa1689d95e81\") "
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.139855 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/cd707766-c226-46f5-b391-aa1689d95e81-secret-alertmanager-main-tls\") pod \"cd707766-c226-46f5-b391-aa1689d95e81\" (UID: \"cd707766-c226-46f5-b391-aa1689d95e81\") "
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.139887 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/cd707766-c226-46f5-b391-aa1689d95e81-secret-alertmanager-kube-rbac-proxy-metric\") pod \"cd707766-c226-46f5-b391-aa1689d95e81\" (UID: \"cd707766-c226-46f5-b391-aa1689d95e81\") "
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.139943 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-main-proxy\" (UniqueName: \"kubernetes.io/secret/cd707766-c226-46f5-b391-aa1689d95e81-secret-alertmanager-main-proxy\") pod \"cd707766-c226-46f5-b391-aa1689d95e81\" (UID: \"cd707766-c226-46f5-b391-aa1689d95e81\") "
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.139973 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd707766-c226-46f5-b391-aa1689d95e81-alertmanager-trusted-ca-bundle\") pod \"cd707766-c226-46f5-b391-aa1689d95e81\" (UID: \"cd707766-c226-46f5-b391-aa1689d95e81\") "
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.140018 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/cd707766-c226-46f5-b391-aa1689d95e81-metrics-client-ca\") pod \"cd707766-c226-46f5-b391-aa1689d95e81\" (UID: \"cd707766-c226-46f5-b391-aa1689d95e81\") "
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.140050 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/cd707766-c226-46f5-b391-aa1689d95e81-config-out\") pod \"cd707766-c226-46f5-b391-aa1689d95e81\" (UID: \"cd707766-c226-46f5-b391-aa1689d95e81\") "
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:13:40.140229 2112 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/cd707766-c226-46f5-b391-aa1689d95e81/volumes/kubernetes.io~empty-dir/alertmanager-main-db: clearQuota called, but quotas disabled
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.140336 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd707766-c226-46f5-b391-aa1689d95e81-alertmanager-main-db" (OuterVolumeSpecName: "alertmanager-main-db") pod "cd707766-c226-46f5-b391-aa1689d95e81" (UID: "cd707766-c226-46f5-b391-aa1689d95e81"). InnerVolumeSpecName "alertmanager-main-db". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:13:40.141946 2112 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/cd707766-c226-46f5-b391-aa1689d95e81/volumes/kubernetes.io~configmap/alertmanager-trusted-ca-bundle: clearQuota called, but quotas disabled
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.142124 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd707766-c226-46f5-b391-aa1689d95e81-alertmanager-trusted-ca-bundle" (OuterVolumeSpecName: "alertmanager-trusted-ca-bundle") pod "cd707766-c226-46f5-b391-aa1689d95e81" (UID: "cd707766-c226-46f5-b391-aa1689d95e81"). InnerVolumeSpecName "alertmanager-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:13:40.142200 2112 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/cd707766-c226-46f5-b391-aa1689d95e81/volumes/kubernetes.io~configmap/metrics-client-ca: clearQuota called, but quotas disabled
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.142317 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd707766-c226-46f5-b391-aa1689d95e81-metrics-client-ca" (OuterVolumeSpecName: "metrics-client-ca") pod "cd707766-c226-46f5-b391-aa1689d95e81" (UID: "cd707766-c226-46f5-b391-aa1689d95e81"). InnerVolumeSpecName "metrics-client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.152740 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd707766-c226-46f5-b391-aa1689d95e81-secret-alertmanager-main-proxy" (OuterVolumeSpecName: "secret-alertmanager-main-proxy") pod "cd707766-c226-46f5-b391-aa1689d95e81" (UID: "cd707766-c226-46f5-b391-aa1689d95e81"). InnerVolumeSpecName "secret-alertmanager-main-proxy". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.156269 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd707766-c226-46f5-b391-aa1689d95e81-config-volume" (OuterVolumeSpecName: "config-volume") pod "cd707766-c226-46f5-b391-aa1689d95e81" (UID: "cd707766-c226-46f5-b391-aa1689d95e81"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.156340 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd707766-c226-46f5-b391-aa1689d95e81-secret-alertmanager-kube-rbac-proxy" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy") pod "cd707766-c226-46f5-b391-aa1689d95e81" (UID: "cd707766-c226-46f5-b391-aa1689d95e81"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.156379 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd707766-c226-46f5-b391-aa1689d95e81-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "cd707766-c226-46f5-b391-aa1689d95e81" (UID: "cd707766-c226-46f5-b391-aa1689d95e81"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.156419 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd707766-c226-46f5-b391-aa1689d95e81-config-out" (OuterVolumeSpecName: "config-out") pod "cd707766-c226-46f5-b391-aa1689d95e81" (UID: "cd707766-c226-46f5-b391-aa1689d95e81"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.156435 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd707766-c226-46f5-b391-aa1689d95e81-secret-alertmanager-kube-rbac-proxy-metric" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy-metric") pod "cd707766-c226-46f5-b391-aa1689d95e81" (UID: "cd707766-c226-46f5-b391-aa1689d95e81"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy-metric". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.158522 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd707766-c226-46f5-b391-aa1689d95e81-secret-alertmanager-main-tls" (OuterVolumeSpecName: "secret-alertmanager-main-tls") pod "cd707766-c226-46f5-b391-aa1689d95e81" (UID: "cd707766-c226-46f5-b391-aa1689d95e81"). InnerVolumeSpecName "secret-alertmanager-main-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.158615 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd707766-c226-46f5-b391-aa1689d95e81-kube-api-access-kfdgs" (OuterVolumeSpecName: "kube-api-access-kfdgs") pod "cd707766-c226-46f5-b391-aa1689d95e81" (UID: "cd707766-c226-46f5-b391-aa1689d95e81"). InnerVolumeSpecName "kube-api-access-kfdgs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.169163 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd707766-c226-46f5-b391-aa1689d95e81-web-config" (OuterVolumeSpecName: "web-config") pod "cd707766-c226-46f5-b391-aa1689d95e81" (UID: "cd707766-c226-46f5-b391-aa1689d95e81"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.242704 2112 reconciler.go:399] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/cd707766-c226-46f5-b391-aa1689d95e81-web-config\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.242744 2112 reconciler.go:399] "Volume detached for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/cd707766-c226-46f5-b391-aa1689d95e81-alertmanager-main-db\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.242763 2112 reconciler.go:399] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/cd707766-c226-46f5-b391-aa1689d95e81-config-volume\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.242779 2112 reconciler.go:399] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/cd707766-c226-46f5-b391-aa1689d95e81-tls-assets\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.242797 2112 reconciler.go:399] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/cd707766-c226-46f5-b391-aa1689d95e81-secret-alertmanager-kube-rbac-proxy\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.242813 2112 reconciler.go:399] "Volume detached for volume \"kube-api-access-kfdgs\" (UniqueName: \"kubernetes.io/projected/cd707766-c226-46f5-b391-aa1689d95e81-kube-api-access-kfdgs\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.242829 2112 reconciler.go:399] "Volume detached for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/cd707766-c226-46f5-b391-aa1689d95e81-secret-alertmanager-main-tls\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.242846 2112 reconciler.go:399] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/cd707766-c226-46f5-b391-aa1689d95e81-secret-alertmanager-kube-rbac-proxy-metric\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.242862 2112 reconciler.go:399] "Volume detached for volume \"secret-alertmanager-main-proxy\" (UniqueName: \"kubernetes.io/secret/cd707766-c226-46f5-b391-aa1689d95e81-secret-alertmanager-main-proxy\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.242877 2112 reconciler.go:399] "Volume detached for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd707766-c226-46f5-b391-aa1689d95e81-alertmanager-trusted-ca-bundle\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.242892 2112 reconciler.go:399] "Volume detached for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/cd707766-c226-46f5-b391-aa1689d95e81-metrics-client-ca\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.242907 2112 reconciler.go:399] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/cd707766-c226-46f5-b391-aa1689d95e81-config-out\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:13:40 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-cd707766\x2dc226\x2d46f5\x2db391\x2daa1689d95e81-volume\x2dsubpaths-web\x2dconfig-alertmanager-9.mount: Succeeded.
Feb 23 17:13:40 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-663ed760da6d8b1ab7f3854fc18ffbb30231f0ccec6ea05a4feac0f8f3a8d79a-userdata-shm.mount: Succeeded.
Feb 23 17:13:40 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-cd707766\x2dc226\x2d46f5\x2db391\x2daa1689d95e81-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkfdgs.mount: Succeeded.
Feb 23 17:13:40 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-cd707766\x2dc226\x2d46f5\x2db391\x2daa1689d95e81-volumes-kubernetes.io\x7esecret-secret\x2dalertmanager\x2dmain\x2dtls.mount: Succeeded.
Feb 23 17:13:40 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-cd707766\x2dc226\x2d46f5\x2db391\x2daa1689d95e81-volumes-kubernetes.io\x7eprojected-tls\x2dassets.mount: Succeeded.
Feb 23 17:13:40 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-cd707766\x2dc226\x2d46f5\x2db391\x2daa1689d95e81-volumes-kubernetes.io\x7esecret-config\x2dvolume.mount: Succeeded.
Feb 23 17:13:40 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-cd707766\x2dc226\x2d46f5\x2db391\x2daa1689d95e81-volumes-kubernetes.io\x7esecret-web\x2dconfig.mount: Succeeded.
Feb 23 17:13:40 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-cd707766\x2dc226\x2d46f5\x2db391\x2daa1689d95e81-volumes-kubernetes.io\x7esecret-secret\x2dalertmanager\x2dkube\x2drbac\x2dproxy.mount: Succeeded.
Feb 23 17:13:40 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-cd707766\x2dc226\x2d46f5\x2db391\x2daa1689d95e81-volumes-kubernetes.io\x7esecret-secret\x2dalertmanager\x2dkube\x2drbac\x2dproxy\x2dmetric.mount: Succeeded.
Feb 23 17:13:40 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-cd707766\x2dc226\x2d46f5\x2db391\x2daa1689d95e81-volumes-kubernetes.io\x7esecret-secret\x2dalertmanager\x2dmain\x2dproxy.mount: Succeeded.
Feb 23 17:13:40 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-cd707766\x2dc226\x2d46f5\x2db391\x2daa1689d95e81-volumes-kubernetes.io\x7eempty\x2ddir-config\x2dout.mount: Succeeded.
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.809245 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-1_cd707766-c226-46f5-b391-aa1689d95e81/alertmanager-proxy/0.log"
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.809549 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-1_cd707766-c226-46f5-b391-aa1689d95e81/config-reloader/0.log"
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.809896 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-1_cd707766-c226-46f5-b391-aa1689d95e81/alertmanager/0.log"
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.809955 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:cd707766-c226-46f5-b391-aa1689d95e81 Type:ContainerDied Data:663ed760da6d8b1ab7f3854fc18ffbb30231f0ccec6ea05a4feac0f8f3a8d79a}
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.809987 2112 scope.go:115] "RemoveContainer" containerID="1520f25b69f54c3bc786cde8d201eaa21e667c0b0abf6369eb8a0a9b308356fd"
Feb 23 17:13:40 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:40.818742257Z" level=info msg="Removing container: 1520f25b69f54c3bc786cde8d201eaa21e667c0b0abf6369eb8a0a9b308356fd" id=d4547af0-1e8f-442c-bb03-83835de660ed name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:13:40 ip-10-0-136-68 systemd[1]: Removed slice libcontainer container kubepods-burstable-podcd707766_c226_46f5_b391_aa1689d95e81.slice.
Feb 23 17:13:40 ip-10-0-136-68 systemd[1]: kubepods-burstable-podcd707766_c226_46f5_b391_aa1689d95e81.slice: Consumed 453ms CPU time
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.837622 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-monitoring/alertmanager-main-1]
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.844528 2112 kubelet.go:2129] "SyncLoop REMOVE" source="api" pods=[openshift-monitoring/alertmanager-main-1]
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.876026 2112 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-monitoring/alertmanager-main-1]
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.876076 2112 topology_manager.go:205] "Topology Admit Handler"
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:13:40.876153 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cd707766-c226-46f5-b391-aa1689d95e81" containerName="prom-label-proxy"
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.876165 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd707766-c226-46f5-b391-aa1689d95e81" containerName="prom-label-proxy"
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:13:40.876177 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cd707766-c226-46f5-b391-aa1689d95e81" containerName="kube-rbac-proxy"
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.876185 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd707766-c226-46f5-b391-aa1689d95e81" containerName="kube-rbac-proxy"
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:13:40.876195 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cd707766-c226-46f5-b391-aa1689d95e81" containerName="kube-rbac-proxy-metric"
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.876203 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd707766-c226-46f5-b391-aa1689d95e81" containerName="kube-rbac-proxy-metric"
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:13:40.876215 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cd707766-c226-46f5-b391-aa1689d95e81" containerName="alertmanager"
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.876222 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd707766-c226-46f5-b391-aa1689d95e81" containerName="alertmanager"
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:13:40.876234 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cd707766-c226-46f5-b391-aa1689d95e81" containerName="alertmanager-proxy"
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.876242 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd707766-c226-46f5-b391-aa1689d95e81" containerName="alertmanager-proxy"
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:13:40.876252 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cd707766-c226-46f5-b391-aa1689d95e81" containerName="config-reloader"
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.876260 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd707766-c226-46f5-b391-aa1689d95e81" containerName="config-reloader"
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.876316 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="cd707766-c226-46f5-b391-aa1689d95e81" containerName="config-reloader"
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.876326 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="cd707766-c226-46f5-b391-aa1689d95e81" containerName="kube-rbac-proxy"
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.876334 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="cd707766-c226-46f5-b391-aa1689d95e81" containerName="alertmanager-proxy"
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.876364 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="cd707766-c226-46f5-b391-aa1689d95e81" containerName="prom-label-proxy"
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.876374 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="cd707766-c226-46f5-b391-aa1689d95e81" containerName="alertmanager"
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.876383 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="cd707766-c226-46f5-b391-aa1689d95e81" containerName="kube-rbac-proxy-metric"
Feb 23 17:13:40 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-pod39a23baf_fee4_4b3a_839f_6c0452a117b2.slice.
Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.894915 2112 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-monitoring/alertmanager-main-1] Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.947896 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/39a23baf-fee4-4b3a-839f-6c0452a117b2-tls-assets\") pod \"alertmanager-main-1\" (UID: \"39a23baf-fee4-4b3a-839f-6c0452a117b2\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.947932 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/39a23baf-fee4-4b3a-839f-6c0452a117b2-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-1\" (UID: \"39a23baf-fee4-4b3a-839f-6c0452a117b2\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.947960 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlv6w\" (UniqueName: \"kubernetes.io/projected/39a23baf-fee4-4b3a-839f-6c0452a117b2-kube-api-access-mlv6w\") pod \"alertmanager-main-1\" (UID: \"39a23baf-fee4-4b3a-839f-6c0452a117b2\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.948060 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/39a23baf-fee4-4b3a-839f-6c0452a117b2-config-volume\") pod \"alertmanager-main-1\" (UID: \"39a23baf-fee4-4b3a-839f-6c0452a117b2\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.948103 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/39a23baf-fee4-4b3a-839f-6c0452a117b2-secret-alertmanager-main-tls\") pod \"alertmanager-main-1\" (UID: \"39a23baf-fee4-4b3a-839f-6c0452a117b2\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.948140 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/39a23baf-fee4-4b3a-839f-6c0452a117b2-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-1\" (UID: \"39a23baf-fee4-4b3a-839f-6c0452a117b2\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.948172 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/39a23baf-fee4-4b3a-839f-6c0452a117b2-web-config\") pod \"alertmanager-main-1\" (UID: \"39a23baf-fee4-4b3a-839f-6c0452a117b2\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.948210 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-proxy\" (UniqueName: \"kubernetes.io/secret/39a23baf-fee4-4b3a-839f-6c0452a117b2-secret-alertmanager-main-proxy\") pod \"alertmanager-main-1\" (UID: \"39a23baf-fee4-4b3a-839f-6c0452a117b2\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.948227 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/39a23baf-fee4-4b3a-839f-6c0452a117b2-metrics-client-ca\") pod \"alertmanager-main-1\" (UID: \"39a23baf-fee4-4b3a-839f-6c0452a117b2\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:40 ip-10-0-136-68 
kubenswrapper[2112]: I0223 17:13:40.948248 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39a23baf-fee4-4b3a-839f-6c0452a117b2-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-1\" (UID: \"39a23baf-fee4-4b3a-839f-6c0452a117b2\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.948270 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/39a23baf-fee4-4b3a-839f-6c0452a117b2-config-out\") pod \"alertmanager-main-1\" (UID: \"39a23baf-fee4-4b3a-839f-6c0452a117b2\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:40.948288 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/39a23baf-fee4-4b3a-839f-6c0452a117b2-alertmanager-main-db\") pod \"alertmanager-main-1\" (UID: \"39a23baf-fee4-4b3a-839f-6c0452a117b2\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:41.049408 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-proxy\" (UniqueName: \"kubernetes.io/secret/39a23baf-fee4-4b3a-839f-6c0452a117b2-secret-alertmanager-main-proxy\") pod \"alertmanager-main-1\" (UID: \"39a23baf-fee4-4b3a-839f-6c0452a117b2\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:41.049444 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/39a23baf-fee4-4b3a-839f-6c0452a117b2-metrics-client-ca\") pod \"alertmanager-main-1\" (UID: \"39a23baf-fee4-4b3a-839f-6c0452a117b2\") " 
pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:41.049474 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39a23baf-fee4-4b3a-839f-6c0452a117b2-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-1\" (UID: \"39a23baf-fee4-4b3a-839f-6c0452a117b2\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:41.049506 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/39a23baf-fee4-4b3a-839f-6c0452a117b2-config-out\") pod \"alertmanager-main-1\" (UID: \"39a23baf-fee4-4b3a-839f-6c0452a117b2\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:41.049533 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/39a23baf-fee4-4b3a-839f-6c0452a117b2-alertmanager-main-db\") pod \"alertmanager-main-1\" (UID: \"39a23baf-fee4-4b3a-839f-6c0452a117b2\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:41.049563 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/39a23baf-fee4-4b3a-839f-6c0452a117b2-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-1\" (UID: \"39a23baf-fee4-4b3a-839f-6c0452a117b2\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:41.049590 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/39a23baf-fee4-4b3a-839f-6c0452a117b2-tls-assets\") pod \"alertmanager-main-1\" (UID: 
\"39a23baf-fee4-4b3a-839f-6c0452a117b2\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:41.049619 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-mlv6w\" (UniqueName: \"kubernetes.io/projected/39a23baf-fee4-4b3a-839f-6c0452a117b2-kube-api-access-mlv6w\") pod \"alertmanager-main-1\" (UID: \"39a23baf-fee4-4b3a-839f-6c0452a117b2\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:41.049990 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/39a23baf-fee4-4b3a-839f-6c0452a117b2-alertmanager-main-db\") pod \"alertmanager-main-1\" (UID: \"39a23baf-fee4-4b3a-839f-6c0452a117b2\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:41.050140 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/39a23baf-fee4-4b3a-839f-6c0452a117b2-config-volume\") pod \"alertmanager-main-1\" (UID: \"39a23baf-fee4-4b3a-839f-6c0452a117b2\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:41.050194 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/39a23baf-fee4-4b3a-839f-6c0452a117b2-secret-alertmanager-main-tls\") pod \"alertmanager-main-1\" (UID: \"39a23baf-fee4-4b3a-839f-6c0452a117b2\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:41.050225 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/39a23baf-fee4-4b3a-839f-6c0452a117b2-secret-alertmanager-kube-rbac-proxy\") pod 
\"alertmanager-main-1\" (UID: \"39a23baf-fee4-4b3a-839f-6c0452a117b2\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:41.050250 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/39a23baf-fee4-4b3a-839f-6c0452a117b2-web-config\") pod \"alertmanager-main-1\" (UID: \"39a23baf-fee4-4b3a-839f-6c0452a117b2\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:41.051214 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39a23baf-fee4-4b3a-839f-6c0452a117b2-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-1\" (UID: \"39a23baf-fee4-4b3a-839f-6c0452a117b2\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:41.052336 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/39a23baf-fee4-4b3a-839f-6c0452a117b2-metrics-client-ca\") pod \"alertmanager-main-1\" (UID: \"39a23baf-fee4-4b3a-839f-6c0452a117b2\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:41.053012 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/39a23baf-fee4-4b3a-839f-6c0452a117b2-web-config\") pod \"alertmanager-main-1\" (UID: \"39a23baf-fee4-4b3a-839f-6c0452a117b2\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:41.053943 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-proxy\" (UniqueName: \"kubernetes.io/secret/39a23baf-fee4-4b3a-839f-6c0452a117b2-secret-alertmanager-main-proxy\") pod \"alertmanager-main-1\" 
(UID: \"39a23baf-fee4-4b3a-839f-6c0452a117b2\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:41.069157 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/39a23baf-fee4-4b3a-839f-6c0452a117b2-secret-alertmanager-main-tls\") pod \"alertmanager-main-1\" (UID: \"39a23baf-fee4-4b3a-839f-6c0452a117b2\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:41.076846 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/39a23baf-fee4-4b3a-839f-6c0452a117b2-config-out\") pod \"alertmanager-main-1\" (UID: \"39a23baf-fee4-4b3a-839f-6c0452a117b2\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:41.076948 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/39a23baf-fee4-4b3a-839f-6c0452a117b2-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-1\" (UID: \"39a23baf-fee4-4b3a-839f-6c0452a117b2\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:41.077053 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/39a23baf-fee4-4b3a-839f-6c0452a117b2-tls-assets\") pod \"alertmanager-main-1\" (UID: \"39a23baf-fee4-4b3a-839f-6c0452a117b2\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:41.077218 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/39a23baf-fee4-4b3a-839f-6c0452a117b2-config-volume\") pod \"alertmanager-main-1\" (UID: 
\"39a23baf-fee4-4b3a-839f-6c0452a117b2\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:41.079738 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlv6w\" (UniqueName: \"kubernetes.io/projected/39a23baf-fee4-4b3a-839f-6c0452a117b2-kube-api-access-mlv6w\") pod \"alertmanager-main-1\" (UID: \"39a23baf-fee4-4b3a-839f-6c0452a117b2\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:41.079841 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/39a23baf-fee4-4b3a-839f-6c0452a117b2-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-1\" (UID: \"39a23baf-fee4-4b3a-839f-6c0452a117b2\") " pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.121861975Z" level=info msg="Removed container 1520f25b69f54c3bc786cde8d201eaa21e667c0b0abf6369eb8a0a9b308356fd: openshift-monitoring/alertmanager-main-1/prom-label-proxy" id=d4547af0-1e8f-442c-bb03-83835de660ed name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:13:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:41.122070 2112 scope.go:115] "RemoveContainer" containerID="eef58967a001261e664f42e3b77d44745830fcdaeed59a49b76c5d71402aa299" Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.122772386Z" level=info msg="Removing container: eef58967a001261e664f42e3b77d44745830fcdaeed59a49b76c5d71402aa299" id=0f73f611-bb0c-4b11-a12e-6272948e4587 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.134478516Z" level=info msg="Pulled image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9a17e5d2a66467075003c942b645a31d4ed5d221bf60325328713b2784e65403" id=16742845-646c-4699-adf1-314e33c8f973 
name=/runtime.v1.ImageService/PullImage Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.135071564Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9a17e5d2a66467075003c942b645a31d4ed5d221bf60325328713b2784e65403" id=f4a5cc04-81ee-42f8-8e50-562e7f91269c name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.145553508Z" level=info msg="Removed container eef58967a001261e664f42e3b77d44745830fcdaeed59a49b76c5d71402aa299: openshift-monitoring/alertmanager-main-1/kube-rbac-proxy-metric" id=0f73f611-bb0c-4b11-a12e-6272948e4587 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:13:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:41.145792 2112 scope.go:115] "RemoveContainer" containerID="3bc05b8ceb4f61ff81ed707a4ba218ace88d8ccfa1bd54f1bb0b0e92dfdd880c" Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.146470949Z" level=info msg="Removing container: 3bc05b8ceb4f61ff81ed707a4ba218ace88d8ccfa1bd54f1bb0b0e92dfdd880c" id=1607971f-09c9-4d37-a917-719ec2699b5c name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.152756912Z" level=info msg="Pulled image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:aa34e2c327cf462e6a0e3661f0e0a1e5f8497643901c2df5b7793c00fe6df072" id=a4806cb4-633f-4ba1-9086-4076f379a8e2 name=/runtime.v1.ImageService/PullImage Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.152764247Z" level=info msg="Pulled image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0" id=ef7d4e38-318e-47e5-ac0d-3d3d0159814d name=/runtime.v1.ImageService/PullImage Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.153558068Z" level=info msg="Checking image status: 
registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0" id=0c4205fa-df38-4ccb-97ce-e0e25d4e6386 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.154055484Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:b14636daf028522c8188d41b2a2eeb17aaffb2b7474c9489834695f7c6b558e7,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9a17e5d2a66467075003c942b645a31d4ed5d221bf60325328713b2784e65403],Size_:396847525,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=f4a5cc04-81ee-42f8-8e50-562e7f91269c name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.154157244Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:aa34e2c327cf462e6a0e3661f0e0a1e5f8497643901c2df5b7793c00fe6df072" id=eb47a97d-eecc-4657-800d-b4de7db9fcf2 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.155419087Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e8505ce6d56c9d2fa3ddbbc4dd2c4096db686f042ecf82eed342af6e60223854,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0],Size_:402716043,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=0c4205fa-df38-4ccb-97ce-e0e25d4e6386 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.155432989Z" level=info msg="Image status: 
&ImageStatusResponse{Image:&Image{Id:ae012a5f676bb3294ee2c3456c9540bf5efa2e3b4c60eea405274e6d846c6cd7,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:aa34e2c327cf462e6a0e3661f0e0a1e5f8497643901c2df5b7793c00fe6df072],Size_:434146386,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=eb47a97d-eecc-4657-800d-b4de7db9fcf2 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.157962119Z" level=info msg="Creating container: openshift-ingress-canary/ingress-canary-pjjrk/serve-healthcheck-canary" id=f63e7c0c-53e6-48b5-a45f-6d789f087809 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.158051428Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.158649680Z" level=info msg="Pulled image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:fd66defd59ad9ac6c45d01053333ed7970603ceae5c0e9fb53017f80861f2a8c" id=54c47139-e722-4633-b7d1-16f8b200e25a name=/runtime.v1.ImageService/PullImage Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.158830757Z" level=info msg="Creating container: openshift-monitoring/openshift-state-metrics-7df79db5c7-2clx8/kube-rbac-proxy-main" id=3da46583-1104-48ad-a0b2-ab4b8ef89756 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.158907768Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.158929030Z" level=info msg="Creating container: openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n/telemeter-client" id=34927f99-b8fa-4bd4-97df-223437922b3e name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:13:41 ip-10-0-136-68 
crio[2062]: time="2023-02-23 17:13:41.159007154Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.159308848Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:fd66defd59ad9ac6c45d01053333ed7970603ceae5c0e9fb53017f80861f2a8c" id=e144acb4-656c-425c-b20a-4d9638773be1 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.160621031Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:09240e84b7322e0441510c8d3f0e1364a5bdda4333e34b5efc6c44678ac1aa25,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:fd66defd59ad9ac6c45d01053333ed7970603ceae5c0e9fb53017f80861f2a8c],Size_:389309863,Uid:nil,Username:nobody,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=e144acb4-656c-425c-b20a-4d9638773be1 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.161310174Z" level=info msg="Creating container: openshift-monitoring/kube-state-metrics-8d585644b-dckcc/kube-state-metrics" id=603a9119-dcbe-48a9-b49f-fec1117ef652 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.161386978Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:13:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:41.200908 2112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.201263141Z" level=info msg="Running pod sandbox: openshift-monitoring/alertmanager-main-1/POD" id=7aa0d34e-f1f7-4188-aa8c-a92e366b7e29 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.201297762Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.219607993Z" level=info msg="Removed container 3bc05b8ceb4f61ff81ed707a4ba218ace88d8ccfa1bd54f1bb0b0e92dfdd880c: openshift-monitoring/alertmanager-main-1/kube-rbac-proxy" id=1607971f-09c9-4d37-a917-719ec2699b5c name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:13:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:41.219835 2112 scope.go:115] "RemoveContainer" containerID="11a751dd154fd575cb11f9a49c150b0fef7ce704095a3e482fae4ee6e420ba8d" Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.220556617Z" level=info msg="Removing container: 11a751dd154fd575cb11f9a49c150b0fef7ce704095a3e482fae4ee6e420ba8d" id=c72563a5-373a-457e-ad56-6430adeddb32 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:13:41 ip-10-0-136-68 systemd[1]: Started crio-conmon-ba7fdd14efd0a1bc6f3cdabf5734ffbff59544ea577ee2b427673408fc7e04d6.scope. Feb 23 17:13:41 ip-10-0-136-68 systemd[1]: Started crio-conmon-75f18bca37f8e743da6c3022284c8822bdd2d851b5c090bd184e132e62bfbb06.scope. Feb 23 17:13:41 ip-10-0-136-68 systemd[1]: Started crio-conmon-f90c0ac09fa2b841c60cad6b187e9c061e245aa16aa90e30e8276641fad26ba4.scope. 
Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.264294876Z" level=info msg="Removed container 11a751dd154fd575cb11f9a49c150b0fef7ce704095a3e482fae4ee6e420ba8d: openshift-monitoring/alertmanager-main-1/alertmanager-proxy" id=c72563a5-373a-457e-ad56-6430adeddb32 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:13:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:41.264856 2112 scope.go:115] "RemoveContainer" containerID="ad26c6eaf15076a267fe3664a852aaae952a90c09961633c76eeb79cae2c68ac" Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.268353768Z" level=info msg="Removing container: ad26c6eaf15076a267fe3664a852aaae952a90c09961633c76eeb79cae2c68ac" id=9b65fe02-9a33-4b06-b066-5a02491b146a name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:13:41 ip-10-0-136-68 systemd[1]: Started libcontainer container ba7fdd14efd0a1bc6f3cdabf5734ffbff59544ea577ee2b427673408fc7e04d6. Feb 23 17:13:41 ip-10-0-136-68 systemd[1]: Started crio-conmon-b7acd6c812a55fc3ab3146988c9369e690b314651c361ef6c6ff01d3d175f472.scope. Feb 23 17:13:41 ip-10-0-136-68 systemd[1]: Started libcontainer container 75f18bca37f8e743da6c3022284c8822bdd2d851b5c090bd184e132e62bfbb06. Feb 23 17:13:41 ip-10-0-136-68 systemd[1]: Started libcontainer container f90c0ac09fa2b841c60cad6b187e9c061e245aa16aa90e30e8276641fad26ba4. 
Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.286331287Z" level=info msg="Got pod network &{Name:alertmanager-main-1 Namespace:openshift-monitoring ID:201d0ba9a16d3f244f86f2085033ad6085fe53d6850b186d42c49024054b3842 UID:39a23baf-fee4-4b3a-839f-6c0452a117b2 NetNS:/var/run/netns/8f093b03-dba4-401e-9302-36e7bb0b2da3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.286357593Z" level=info msg="Adding pod openshift-monitoring_alertmanager-main-1 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:13:41 ip-10-0-136-68 systemd[1]: Started libcontainer container b7acd6c812a55fc3ab3146988c9369e690b314651c361ef6c6ff01d3d175f472. Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.311995146Z" level=info msg="Removed container ad26c6eaf15076a267fe3664a852aaae952a90c09961633c76eeb79cae2c68ac: openshift-monitoring/alertmanager-main-1/config-reloader" id=9b65fe02-9a33-4b06-b066-5a02491b146a name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:13:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:41.312177 2112 scope.go:115] "RemoveContainer" containerID="0c5b66fe0d3f11c9b685c0f8090858a10e3b4d6c3ec45f47186509c4d520a927" Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.313418384Z" level=info msg="Removing container: 0c5b66fe0d3f11c9b685c0f8090858a10e3b4d6c3ec45f47186509c4d520a927" id=90304736-b230-49bf-9fd8-911773482043 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.364075190Z" level=info msg="Removed container 0c5b66fe0d3f11c9b685c0f8090858a10e3b4d6c3ec45f47186509c4d520a927: openshift-monitoring/alertmanager-main-1/alertmanager" id=90304736-b230-49bf-9fd8-911773482043 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.430169272Z" level=info 
msg="Created container ba7fdd14efd0a1bc6f3cdabf5734ffbff59544ea577ee2b427673408fc7e04d6: openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n/telemeter-client" id=34927f99-b8fa-4bd4-97df-223437922b3e name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.431020360Z" level=info msg="Starting container: ba7fdd14efd0a1bc6f3cdabf5734ffbff59544ea577ee2b427673408fc7e04d6" id=072e5354-69e1-4cd4-aff8-ade1b4a2e654 name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.431561597Z" level=info msg="Created container 75f18bca37f8e743da6c3022284c8822bdd2d851b5c090bd184e132e62bfbb06: openshift-ingress-canary/ingress-canary-pjjrk/serve-healthcheck-canary" id=f63e7c0c-53e6-48b5-a45f-6d789f087809 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.432004843Z" level=info msg="Starting container: 75f18bca37f8e743da6c3022284c8822bdd2d851b5c090bd184e132e62bfbb06" id=2acfbe93-2a1e-4ae0-84ab-cf1e01da7d26 name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.439433596Z" level=info msg="Created container f90c0ac09fa2b841c60cad6b187e9c061e245aa16aa90e30e8276641fad26ba4: openshift-monitoring/openshift-state-metrics-7df79db5c7-2clx8/kube-rbac-proxy-main" id=3da46583-1104-48ad-a0b2-ab4b8ef89756 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.439758274Z" level=info msg="Created container b7acd6c812a55fc3ab3146988c9369e690b314651c361ef6c6ff01d3d175f472: openshift-monitoring/kube-state-metrics-8d585644b-dckcc/kube-state-metrics" id=603a9119-dcbe-48a9-b49f-fec1117ef652 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.440908395Z" level=info msg="Starting container: 
b7acd6c812a55fc3ab3146988c9369e690b314651c361ef6c6ff01d3d175f472" id=1ae64abf-eb14-4b45-b02d-e09a97257e16 name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.441078902Z" level=info msg="Starting container: f90c0ac09fa2b841c60cad6b187e9c061e245aa16aa90e30e8276641fad26ba4" id=693fe95f-963d-48de-8301-c38e75078e4f name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.446788148Z" level=info msg="Started container" PID=57744 containerID=ba7fdd14efd0a1bc6f3cdabf5734ffbff59544ea577ee2b427673408fc7e04d6 description=openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n/telemeter-client id=072e5354-69e1-4cd4-aff8-ade1b4a2e654 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6cf6d0bcdfd8248c38237257eece74b58cb7a23a7a19d8609c2b0ab53a64a4cf Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.452063843Z" level=info msg="Started container" PID=57750 containerID=75f18bca37f8e743da6c3022284c8822bdd2d851b5c090bd184e132e62bfbb06 description=openshift-ingress-canary/ingress-canary-pjjrk/serve-healthcheck-canary id=2acfbe93-2a1e-4ae0-84ab-cf1e01da7d26 name=/runtime.v1.RuntimeService/StartContainer sandboxID=13a3543931af50fb11a2d5bb79aa8a25ddd5f8fc251c224ac455e5c9a9d0605d Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.470804935Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:0f2284b974da7a5af712db888818e4bbd7604e604bc70c86ce8bc75c8f73457d" id=9c14bc6b-b422-423b-ae9d-6382c8939ead name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.474863260Z" level=info msg="Image status: 
&ImageStatusResponse{Image:&Image{Id:51eee535b46f8fa059a614084a60e25b9d7f27cc61dacbc265696e915f022f0f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:0f2284b974da7a5af712db888818e4bbd7604e604bc70c86ce8bc75c8f73457d],Size_:359941570,Uid:&Int64Value{Value:1001,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=9c14bc6b-b422-423b-ae9d-6382c8939ead name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.477775770Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:0f2284b974da7a5af712db888818e4bbd7604e604bc70c86ce8bc75c8f73457d" id=dfda95ad-72a3-4b52-a023-accbc11e444d name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.479782449Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:51eee535b46f8fa059a614084a60e25b9d7f27cc61dacbc265696e915f022f0f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:0f2284b974da7a5af712db888818e4bbd7604e604bc70c86ce8bc75c8f73457d],Size_:359941570,Uid:&Int64Value{Value:1001,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=dfda95ad-72a3-4b52-a023-accbc11e444d name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.483806042Z" level=info msg="Started container" PID=57771 containerID=b7acd6c812a55fc3ab3146988c9369e690b314651c361ef6c6ff01d3d175f472 description=openshift-monitoring/kube-state-metrics-8d585644b-dckcc/kube-state-metrics id=1ae64abf-eb14-4b45-b02d-e09a97257e16 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a935a237826dd4a0c291a0d7b4b2d6924e94cc35bd3969e76b0f75cf79426a1c Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.484796955Z" level=info msg="Creating container: openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n/reload" 
id=eb76a772-fbf0-4cd7-a87b-af71a56cb092 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.484885688Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.492346441Z" level=info msg="Started container" PID=57755 containerID=f90c0ac09fa2b841c60cad6b187e9c061e245aa16aa90e30e8276641fad26ba4 description=openshift-monitoring/openshift-state-metrics-7df79db5c7-2clx8/kube-rbac-proxy-main id=693fe95f-963d-48de-8301-c38e75078e4f name=/runtime.v1.RuntimeService/StartContainer sandboxID=e14bdaaa341fca1606f9a0c3b7c37d9baae6f7be7df0452c453184abfdc328f8 Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.532042119Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0" id=4728f373-2892-4fb2-b66e-e96e0632568c name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.534303724Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e8505ce6d56c9d2fa3ddbbc4dd2c4096db686f042ecf82eed342af6e60223854,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0],Size_:402716043,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=4728f373-2892-4fb2-b66e-e96e0632568c name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.538292076Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0" id=c86b7721-eb15-49d6-b9f0-b3c44d07da1c name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: 
time="2023-02-23 17:13:41.540198434Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e8505ce6d56c9d2fa3ddbbc4dd2c4096db686f042ecf82eed342af6e60223854,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0],Size_:402716043,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=c86b7721-eb15-49d6-b9f0-b3c44d07da1c name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.540322136Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0" id=9115f491-177b-46c4-b55d-2c653ac1bda1 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:41 ip-10-0-136-68 systemd[1]: Started crio-conmon-e01522fac3ca6ed1026bb56690190f33c4afdfe1e9dd982e1a8d7093c486dc24.scope. Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.545058881Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e8505ce6d56c9d2fa3ddbbc4dd2c4096db686f042ecf82eed342af6e60223854,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0],Size_:402716043,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=9115f491-177b-46c4-b55d-2c653ac1bda1 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.559834645Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0" id=f588d91d-78d1-4280-9ea8-f49c44c318ed name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.560865761Z" level=info msg="Creating container: 
openshift-monitoring/openshift-state-metrics-7df79db5c7-2clx8/kube-rbac-proxy-self" id=bbd6ee37-6127-4f36-bbb9-2f5b0086d550 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.560978790Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.564527732Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e8505ce6d56c9d2fa3ddbbc4dd2c4096db686f042ecf82eed342af6e60223854,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0],Size_:402716043,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=f588d91d-78d1-4280-9ea8-f49c44c318ed name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.565929701Z" level=info msg="Creating container: openshift-monitoring/kube-state-metrics-8d585644b-dckcc/kube-rbac-proxy-main" id=de263fa2-a839-4ca0-99df-72147cf103c6 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.566016822Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:13:41 ip-10-0-136-68 systemd[1]: Started libcontainer container e01522fac3ca6ed1026bb56690190f33c4afdfe1e9dd982e1a8d7093c486dc24. Feb 23 17:13:41 ip-10-0-136-68 systemd-udevd[57909]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. 
Feb 23 17:13:41 ip-10-0-136-68 systemd-udevd[57909]: Could not generate persistent MAC address for 201d0ba9a16d3f2: No such file or directory Feb 23 17:13:41 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): 201d0ba9a16d3f2: link is not ready Feb 23 17:13:41 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready Feb 23 17:13:41 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 23 17:13:41 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): 201d0ba9a16d3f2: link becomes ready Feb 23 17:13:41 ip-10-0-136-68 NetworkManager[1147]: [1677172421.6028] device (201d0ba9a16d3f2): carrier: link connected Feb 23 17:13:41 ip-10-0-136-68 NetworkManager[1147]: [1677172421.6032] manager: (201d0ba9a16d3f2): new Veth device (/org/freedesktop/NetworkManager/Devices/64) Feb 23 17:13:41 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00321|bridge|INFO|bridge br-int: added interface 201d0ba9a16d3f2 on port 27 Feb 23 17:13:41 ip-10-0-136-68 NetworkManager[1147]: [1677172421.6548] manager: (201d0ba9a16d3f2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65) Feb 23 17:13:41 ip-10-0-136-68 systemd[1]: Started crio-conmon-d50b5cde1b096262a82aecc0358953f52a2290611f4f0a0dafe882b6090c0156.scope. Feb 23 17:13:41 ip-10-0-136-68 kernel: device 201d0ba9a16d3f2 entered promiscuous mode Feb 23 17:13:41 ip-10-0-136-68 systemd[1]: Started crio-conmon-1fbcb5d0be8f137c33fd65ae51ff0104762ac4cd8f6e9e64f372bfb97f038f7d.scope. Feb 23 17:13:41 ip-10-0-136-68 systemd[1]: Started libcontainer container d50b5cde1b096262a82aecc0358953f52a2290611f4f0a0dafe882b6090c0156. Feb 23 17:13:41 ip-10-0-136-68 systemd[1]: Started libcontainer container 1fbcb5d0be8f137c33fd65ae51ff0104762ac4cd8f6e9e64f372bfb97f038f7d. 
Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.757368552Z" level=info msg="Created container e01522fac3ca6ed1026bb56690190f33c4afdfe1e9dd982e1a8d7093c486dc24: openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n/reload" id=eb76a772-fbf0-4cd7-a87b-af71a56cb092 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.758836344Z" level=info msg="Starting container: e01522fac3ca6ed1026bb56690190f33c4afdfe1e9dd982e1a8d7093c486dc24" id=522e3502-583e-473c-bea3-b886006a4a07 name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.788758580Z" level=info msg="Started container" PID=57912 containerID=e01522fac3ca6ed1026bb56690190f33c4afdfe1e9dd982e1a8d7093c486dc24 description=openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n/reload id=522e3502-583e-473c-bea3-b886006a4a07 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6cf6d0bcdfd8248c38237257eece74b58cb7a23a7a19d8609c2b0ab53a64a4cf Feb 23 17:13:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:41.800649 2112 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-monitoring/alertmanager-main-1] Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.802858589Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0" id=a63009a8-7edb-4f60-992b-0078ffd86227 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.805315426Z" level=info msg="Image status: 
&ImageStatusResponse{Image:&Image{Id:e8505ce6d56c9d2fa3ddbbc4dd2c4096db686f042ecf82eed342af6e60223854,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0],Size_:402716043,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=a63009a8-7edb-4f60-992b-0078ffd86227 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.806178302Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0" id=0123c453-fb2b-4907-9725-2d85b16feb05 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: I0223 17:13:41.557727 57784 ovs.go:90] Maximum command line arguments set to: 191102 Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: 2023-02-23T17:13:41Z [verbose] Add: openshift-monitoring:alertmanager-main-1:39a23baf-fee4-4b3a-839f-6c0452a117b2:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"201d0ba9a16d3f2","mac":"b6:e2:12:c2:6a:b0"},{"name":"eth0","mac":"0a:58:0a:81:02:1d","sandbox":"/var/run/netns/8f093b03-dba4-401e-9302-36e7bb0b2da3"}],"ips":[{"version":"4","interface":1,"address":"10.129.2.29/23","gateway":"10.129.2.1"}],"dns":{}} Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: I0223 17:13:41.774077 57743 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-monitoring", Name:"alertmanager-main-1", UID:"39a23baf-fee4-4b3a-839f-6c0452a117b2", APIVersion:"v1", ResourceVersion:"68478", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.129.2.29/23] from ovn-kubernetes Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.808191277Z" level=info msg="Got pod network &{Name:alertmanager-main-1 Namespace:openshift-monitoring 
ID:201d0ba9a16d3f244f86f2085033ad6085fe53d6850b186d42c49024054b3842 UID:39a23baf-fee4-4b3a-839f-6c0452a117b2 NetNS:/var/run/netns/8f093b03-dba4-401e-9302-36e7bb0b2da3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.808327424Z" level=info msg="Checking pod openshift-monitoring_alertmanager-main-1 for CNI network multus-cni-network (type=multus)" Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.808353094Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e8505ce6d56c9d2fa3ddbbc4dd2c4096db686f042ecf82eed342af6e60223854,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0],Size_:402716043,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=0123c453-fb2b-4907-9725-2d85b16feb05 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:41 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-284c6382193ef02de764b0fe700644f49f21207912d0419b97166016511c34ad-merged.mount: Succeeded. 
Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.809766694Z" level=info msg="Creating container: openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n/kube-rbac-proxy" id=5b1bda57-f05b-4e2f-a309-8cde5e7dfeb3 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.809880037Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.815323075Z" level=info msg="Ran pod sandbox 201d0ba9a16d3f244f86f2085033ad6085fe53d6850b186d42c49024054b3842 with infra container: openshift-monitoring/alertmanager-main-1/POD" id=7aa0d34e-f1f7-4188-aa8c-a92e366b7e29 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.820184117Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:10a5c4496e21c74681484951c4f7b1059355127b627962c4663f32391ed151c3" id=b107b923-e2ff-44e4-8fa2-2edaf21d1598 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:41.823031 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n" event=&{ID:0a5a348d-9766-4727-93ec-147703d44b68 Type:ContainerStarted Data:e01522fac3ca6ed1026bb56690190f33c4afdfe1e9dd982e1a8d7093c486dc24} Feb 23 17:13:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:41.823065 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n" event=&{ID:0a5a348d-9766-4727-93ec-147703d44b68 Type:ContainerStarted Data:ba7fdd14efd0a1bc6f3cdabf5734ffbff59544ea577ee2b427673408fc7e04d6} Feb 23 17:13:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:41.823928 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-pjjrk" 
event=&{ID:e0abac93-3e79-4a32-8375-5ef1a2e59687 Type:ContainerStarted Data:75f18bca37f8e743da6c3022284c8822bdd2d851b5c090bd184e132e62bfbb06} Feb 23 17:13:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:41.825586 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-8d585644b-dckcc" event=&{ID:4961f202-10a7-460b-8e62-ce7b7dbb8806 Type:ContainerStarted Data:b7acd6c812a55fc3ab3146988c9369e690b314651c361ef6c6ff01d3d175f472} Feb 23 17:13:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:41.826387 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-7df79db5c7-2clx8" event=&{ID:aadb02e0-de11-41e9-9dc0-106e1d0fc545 Type:ContainerStarted Data:f90c0ac09fa2b841c60cad6b187e9c061e245aa16aa90e30e8276641fad26ba4} Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.827499415Z" level=info msg="Image registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:10a5c4496e21c74681484951c4f7b1059355127b627962c4663f32391ed151c3 not found" id=b107b923-e2ff-44e4-8fa2-2edaf21d1598 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.828378653Z" level=info msg="Pulling image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:10a5c4496e21c74681484951c4f7b1059355127b627962c4663f32391ed151c3" id=89e08eda-f240-4780-943b-b0dfa5a8d8b1 name=/runtime.v1.ImageService/PullImage Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.829307601Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:10a5c4496e21c74681484951c4f7b1059355127b627962c4663f32391ed151c3\"" Feb 23 17:13:41 ip-10-0-136-68 systemd[1]: Started crio-conmon-707cf8b2ba0348a632e68f119e5e2ab8bfae79a6a595b1ad26638fb0dec903d6.scope. 
Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.916757831Z" level=info msg="Created container 1fbcb5d0be8f137c33fd65ae51ff0104762ac4cd8f6e9e64f372bfb97f038f7d: openshift-monitoring/kube-state-metrics-8d585644b-dckcc/kube-rbac-proxy-main" id=de263fa2-a839-4ca0-99df-72147cf103c6 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.917276314Z" level=info msg="Starting container: 1fbcb5d0be8f137c33fd65ae51ff0104762ac4cd8f6e9e64f372bfb97f038f7d" id=1fadc161-38e2-4466-9303-38c222a864b4 name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:13:41 ip-10-0-136-68 systemd[1]: Started libcontainer container 707cf8b2ba0348a632e68f119e5e2ab8bfae79a6a595b1ad26638fb0dec903d6. Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.931510980Z" level=info msg="Started container" PID=57971 containerID=1fbcb5d0be8f137c33fd65ae51ff0104762ac4cd8f6e9e64f372bfb97f038f7d description=openshift-monitoring/kube-state-metrics-8d585644b-dckcc/kube-rbac-proxy-main id=1fadc161-38e2-4466-9303-38c222a864b4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a935a237826dd4a0c291a0d7b4b2d6924e94cc35bd3969e76b0f75cf79426a1c Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.946841252Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0" id=e0bfee00-27ae-4c1a-b8db-df27c40b1878 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.948849837Z" level=info msg="Image status: 
&ImageStatusResponse{Image:&Image{Id:e8505ce6d56c9d2fa3ddbbc4dd2c4096db686f042ecf82eed342af6e60223854,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0],Size_:402716043,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=e0bfee00-27ae-4c1a-b8db-df27c40b1878 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.949530648Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0" id=8d94902b-a8d4-4ca3-8988-36dae16678a9 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.953270073Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e8505ce6d56c9d2fa3ddbbc4dd2c4096db686f042ecf82eed342af6e60223854,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0],Size_:402716043,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=8d94902b-a8d4-4ca3-8988-36dae16678a9 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.954223064Z" level=info msg="Creating container: openshift-monitoring/kube-state-metrics-8d585644b-dckcc/kube-rbac-proxy-self" id=bebb1ee2-5c07-4b8f-abd4-21c0b2d93b4b name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.954319847Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.971816647Z" level=info msg="Created container d50b5cde1b096262a82aecc0358953f52a2290611f4f0a0dafe882b6090c0156: 
openshift-monitoring/openshift-state-metrics-7df79db5c7-2clx8/kube-rbac-proxy-self" id=bbd6ee37-6127-4f36-bbb9-2f5b0086d550 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.972179187Z" level=info msg="Starting container: d50b5cde1b096262a82aecc0358953f52a2290611f4f0a0dafe882b6090c0156" id=4dc1e755-4107-491c-80e4-cd393df3c78f name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:13:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:41.986352114Z" level=info msg="Started container" PID=57964 containerID=d50b5cde1b096262a82aecc0358953f52a2290611f4f0a0dafe882b6090c0156 description=openshift-monitoring/openshift-state-metrics-7df79db5c7-2clx8/kube-rbac-proxy-self id=4dc1e755-4107-491c-80e4-cd393df3c78f name=/runtime.v1.RuntimeService/StartContainer sandboxID=e14bdaaa341fca1606f9a0c3b7c37d9baae6f7be7df0452c453184abfdc328f8 Feb 23 17:13:41 ip-10-0-136-68 systemd[1]: Started crio-conmon-359a6f1fe61e3722f81bba3adcb849c0f9fb6b9f2876735a50f0d8272d645322.scope. Feb 23 17:13:42 ip-10-0-136-68 systemd[1]: Started libcontainer container 359a6f1fe61e3722f81bba3adcb849c0f9fb6b9f2876735a50f0d8272d645322. 
Feb 23 17:13:42 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:42.030568297Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:cda1e1a5229df01b50e823e19e839ec0f818813b948e25f22879b27bfe46faaa" id=0f7ac220-5ee8-4c17-be3b-ba66f2e30f9c name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:42 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:42.030997112Z" level=info msg="Image registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:cda1e1a5229df01b50e823e19e839ec0f818813b948e25f22879b27bfe46faaa not found" id=0f7ac220-5ee8-4c17-be3b-ba66f2e30f9c name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:42 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:42.031683627Z" level=info msg="Pulling image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:cda1e1a5229df01b50e823e19e839ec0f818813b948e25f22879b27bfe46faaa" id=a6c62d35-74b6-4bd1-88a9-f6b040fbc322 name=/runtime.v1.ImageService/PullImage Feb 23 17:13:42 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:42.033137876Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:cda1e1a5229df01b50e823e19e839ec0f818813b948e25f22879b27bfe46faaa\"" Feb 23 17:13:42 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:42.049633005Z" level=info msg="Created container 707cf8b2ba0348a632e68f119e5e2ab8bfae79a6a595b1ad26638fb0dec903d6: openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n/kube-rbac-proxy" id=5b1bda57-f05b-4e2f-a309-8cde5e7dfeb3 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:13:42 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:42.050129989Z" level=info msg="Starting container: 707cf8b2ba0348a632e68f119e5e2ab8bfae79a6a595b1ad26638fb0dec903d6" id=db6ed120-9cf5-4a4e-8f85-356dcabe0cb9 name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:13:42 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:42.061413109Z" level=info msg="Started container" PID=58029 
containerID=707cf8b2ba0348a632e68f119e5e2ab8bfae79a6a595b1ad26638fb0dec903d6 description=openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n/kube-rbac-proxy id=db6ed120-9cf5-4a4e-8f85-356dcabe0cb9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6cf6d0bcdfd8248c38237257eece74b58cb7a23a7a19d8609c2b0ab53a64a4cf Feb 23 17:13:42 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:42.121209 2112 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=cd707766-c226-46f5-b391-aa1689d95e81 path="/var/lib/kubelet/pods/cd707766-c226-46f5-b391-aa1689d95e81/volumes" Feb 23 17:13:42 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:42.121818888Z" level=info msg="Created container 359a6f1fe61e3722f81bba3adcb849c0f9fb6b9f2876735a50f0d8272d645322: openshift-monitoring/kube-state-metrics-8d585644b-dckcc/kube-rbac-proxy-self" id=bebb1ee2-5c07-4b8f-abd4-21c0b2d93b4b name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:13:42 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:42.122086015Z" level=info msg="Starting container: 359a6f1fe61e3722f81bba3adcb849c0f9fb6b9f2876735a50f0d8272d645322" id=b1c6ede7-314d-4d71-a543-3e537fac0ddf name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:13:42 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:42.129497300Z" level=info msg="Started container" PID=58088 containerID=359a6f1fe61e3722f81bba3adcb849c0f9fb6b9f2876735a50f0d8272d645322 description=openshift-monitoring/kube-state-metrics-8d585644b-dckcc/kube-rbac-proxy-self id=b1c6ede7-314d-4d71-a543-3e537fac0ddf name=/runtime.v1.RuntimeService/StartContainer sandboxID=a935a237826dd4a0c291a0d7b4b2d6924e94cc35bd3969e76b0f75cf79426a1c Feb 23 17:13:42 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00322|connmgr|INFO|br-ex<->unix#1152: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:13:42 ip-10-0-136-68 systemd[1]: run-runc-707cf8b2ba0348a632e68f119e5e2ab8bfae79a6a595b1ad26638fb0dec903d6-runc.GJLWLH.mount: Succeeded. 
Feb 23 17:13:42 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:42.831552 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-7df79db5c7-2clx8" event=&{ID:aadb02e0-de11-41e9-9dc0-106e1d0fc545 Type:ContainerStarted Data:d50b5cde1b096262a82aecc0358953f52a2290611f4f0a0dafe882b6090c0156} Feb 23 17:13:42 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:42.832207 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:39a23baf-fee4-4b3a-839f-6c0452a117b2 Type:ContainerStarted Data:201d0ba9a16d3f244f86f2085033ad6085fe53d6850b186d42c49024054b3842} Feb 23 17:13:42 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:42.833371 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n" event=&{ID:0a5a348d-9766-4727-93ec-147703d44b68 Type:ContainerStarted Data:707cf8b2ba0348a632e68f119e5e2ab8bfae79a6a595b1ad26638fb0dec903d6} Feb 23 17:13:42 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:42.834570 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-8d585644b-dckcc" event=&{ID:4961f202-10a7-460b-8e62-ce7b7dbb8806 Type:ContainerStarted Data:359a6f1fe61e3722f81bba3adcb849c0f9fb6b9f2876735a50f0d8272d645322} Feb 23 17:13:42 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:42.834600 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-8d585644b-dckcc" event=&{ID:4961f202-10a7-460b-8e62-ce7b7dbb8806 Type:ContainerStarted Data:1fbcb5d0be8f137c33fd65ae51ff0104762ac4cd8f6e9e64f372bfb97f038f7d} Feb 23 17:13:43 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:43.101022343Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:cda1e1a5229df01b50e823e19e839ec0f818813b948e25f22879b27bfe46faaa\"" Feb 23 17:13:43 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:43.155585647Z" level=info msg="Trying 
to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:10a5c4496e21c74681484951c4f7b1059355127b627962c4663f32391ed151c3\"" Feb 23 17:13:44 ip-10-0-136-68 systemd[1]: run-runc-97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923-runc.EgLfld.mount: Succeeded. Feb 23 17:13:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:46.076915137Z" level=info msg="Stopping pod sandbox: 7c6c96994776deca879653ce086a42486849bd751719f15b1a7fe21817c40d59" id=e85f6310-d5c0-4ee2-a028-e30606369f2d name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:13:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:46.076961516Z" level=info msg="Stopped pod sandbox (already stopped): 7c6c96994776deca879653ce086a42486849bd751719f15b1a7fe21817c40d59" id=e85f6310-d5c0-4ee2-a028-e30606369f2d name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:13:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:46.077465700Z" level=info msg="Removing pod sandbox: 7c6c96994776deca879653ce086a42486849bd751719f15b1a7fe21817c40d59" id=b579bfc1-eb24-432c-906c-1b9a99bed37b name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 17:13:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:46.090074339Z" level=info msg="Removed pod sandbox: 7c6c96994776deca879653ce086a42486849bd751719f15b1a7fe21817c40d59" id=b579bfc1-eb24-432c-906c-1b9a99bed37b name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 17:13:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:46.090331124Z" level=info msg="Stopping pod sandbox: d57b42d0ba3aca44e3beb581dd5ff05ba03a846c20891f67b79ec7e09ad8fb66" id=d0aa8cf8-5c19-43be-8d13-0fb7391dfb52 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:13:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:46.090355379Z" level=info msg="Stopped pod sandbox (already stopped): d57b42d0ba3aca44e3beb581dd5ff05ba03a846c20891f67b79ec7e09ad8fb66" id=d0aa8cf8-5c19-43be-8d13-0fb7391dfb52 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:13:46 ip-10-0-136-68 
crio[2062]: time="2023-02-23 17:13:46.090558261Z" level=info msg="Removing pod sandbox: d57b42d0ba3aca44e3beb581dd5ff05ba03a846c20891f67b79ec7e09ad8fb66" id=dd8499c3-1477-4c28-b502-c0dec2ec4dd0 name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 17:13:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:46.100695364Z" level=info msg="Removed pod sandbox: d57b42d0ba3aca44e3beb581dd5ff05ba03a846c20891f67b79ec7e09ad8fb66" id=dd8499c3-1477-4c28-b502-c0dec2ec4dd0 name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 17:13:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:46.100927974Z" level=info msg="Stopping pod sandbox: d1ad2e51d68aca03cf74cbf08b34debc7171c5e2526858978cf073bb895b423f" id=d4a209aa-dda8-4a04-bf33-7c051db3ab31 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:13:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:46.100951550Z" level=info msg="Stopped pod sandbox (already stopped): d1ad2e51d68aca03cf74cbf08b34debc7171c5e2526858978cf073bb895b423f" id=d4a209aa-dda8-4a04-bf33-7c051db3ab31 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:13:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:46.101139751Z" level=info msg="Removing pod sandbox: d1ad2e51d68aca03cf74cbf08b34debc7171c5e2526858978cf073bb895b423f" id=6e5bb795-719e-4fb4-a77e-df8f7deef52c name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 17:13:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:46.113355040Z" level=info msg="Removed pod sandbox: d1ad2e51d68aca03cf74cbf08b34debc7171c5e2526858978cf073bb895b423f" id=6e5bb795-719e-4fb4-a77e-df8f7deef52c name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 17:13:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:46.113569992Z" level=info msg="Stopping pod sandbox: 2dd14dc79891cf23e7c39348cd7da2169eeea951880734c51df6372fcee24608" id=d70b1907-3fa2-46c9-9b09-4909c2d5c2b8 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:13:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:46.113591327Z" level=info 
msg="Stopped pod sandbox (already stopped): 2dd14dc79891cf23e7c39348cd7da2169eeea951880734c51df6372fcee24608" id=d70b1907-3fa2-46c9-9b09-4909c2d5c2b8 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:13:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:46.113849394Z" level=info msg="Removing pod sandbox: 2dd14dc79891cf23e7c39348cd7da2169eeea951880734c51df6372fcee24608" id=24495756-1230-4818-af0b-bbe756b221b1 name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 17:13:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:46.123958615Z" level=info msg="Removed pod sandbox: 2dd14dc79891cf23e7c39348cd7da2169eeea951880734c51df6372fcee24608" id=24495756-1230-4818-af0b-bbe756b221b1 name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 17:13:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:46.124168796Z" level=info msg="Stopping pod sandbox: 4072e0d3663d1947c88bd362e51de1a466d380546ca12a245e113666b5f4f1b4" id=1bba780a-1047-46b2-8961-0815555c7d94 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:13:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:46.124193491Z" level=info msg="Stopped pod sandbox (already stopped): 4072e0d3663d1947c88bd362e51de1a466d380546ca12a245e113666b5f4f1b4" id=1bba780a-1047-46b2-8961-0815555c7d94 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:13:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:46.124355422Z" level=info msg="Removing pod sandbox: 4072e0d3663d1947c88bd362e51de1a466d380546ca12a245e113666b5f4f1b4" id=c0c199c3-cc87-44e7-b751-82f268536cec name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 17:13:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:46.132888112Z" level=info msg="Removed pod sandbox: 4072e0d3663d1947c88bd362e51de1a466d380546ca12a245e113666b5f4f1b4" id=c0c199c3-cc87-44e7-b751-82f268536cec name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 17:13:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:46.133126360Z" level=info msg="Stopping pod sandbox: 
663ed760da6d8b1ab7f3854fc18ffbb30231f0ccec6ea05a4feac0f8f3a8d79a" id=2bc791f2-e264-4d47-8553-42bec344aefa name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:13:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:46.133155397Z" level=info msg="Stopped pod sandbox (already stopped): 663ed760da6d8b1ab7f3854fc18ffbb30231f0ccec6ea05a4feac0f8f3a8d79a" id=2bc791f2-e264-4d47-8553-42bec344aefa name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:13:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:46.133331574Z" level=info msg="Removing pod sandbox: 663ed760da6d8b1ab7f3854fc18ffbb30231f0ccec6ea05a4feac0f8f3a8d79a" id=986b28a9-e72f-4872-82c1-a971b9d73740 name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 17:13:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:46.141729082Z" level=info msg="Removed pod sandbox: 663ed760da6d8b1ab7f3854fc18ffbb30231f0ccec6ea05a4feac0f8f3a8d79a" id=986b28a9-e72f-4872-82c1-a971b9d73740 name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 17:13:46 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:13:46.143823 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c416d5c623aca70659a2b828f0d1ddff8c2215a5ed94cd2d3df6302ee724c80\": container with ID starting with 6c416d5c623aca70659a2b828f0d1ddff8c2215a5ed94cd2d3df6302ee724c80 not found: ID does not exist" containerID="6c416d5c623aca70659a2b828f0d1ddff8c2215a5ed94cd2d3df6302ee724c80" Feb 23 17:13:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:46.143865 2112 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="6c416d5c623aca70659a2b828f0d1ddff8c2215a5ed94cd2d3df6302ee724c80" err="rpc error: code = NotFound desc = could not find container \"6c416d5c623aca70659a2b828f0d1ddff8c2215a5ed94cd2d3df6302ee724c80\": container with ID starting with 6c416d5c623aca70659a2b828f0d1ddff8c2215a5ed94cd2d3df6302ee724c80 not found: ID does not exist" Feb 23 17:13:48 ip-10-0-136-68 
crio[2062]: time="2023-02-23 17:13:48.384596915Z" level=info msg="Pulled image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:cda1e1a5229df01b50e823e19e839ec0f818813b948e25f22879b27bfe46faaa" id=a6c62d35-74b6-4bd1-88a9-f6b040fbc322 name=/runtime.v1.ImageService/PullImage Feb 23 17:13:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:48.385428212Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:cda1e1a5229df01b50e823e19e839ec0f818813b948e25f22879b27bfe46faaa" id=27e40332-cab8-4539-9613-e3c36cce46d7 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:48.386900994Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8b27578c5c692d0bec9b4fc5ed04267444eda63ca2c120f44496a300a23ed660,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:cda1e1a5229df01b50e823e19e839ec0f818813b948e25f22879b27bfe46faaa],Size_:379884929,Uid:nil,Username:nobody,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=27e40332-cab8-4539-9613-e3c36cce46d7 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:48.387702637Z" level=info msg="Creating container: openshift-monitoring/openshift-state-metrics-7df79db5c7-2clx8/openshift-state-metrics" id=1689cd6b-f13e-4a61-88f9-aa372d40f3c1 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:13:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:48.387780717Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:13:48 ip-10-0-136-68 systemd[1]: Started crio-conmon-329b52729ecbc1d70f36f97051984466ea00d33a05ba330cca8d0ef02aeaa83c.scope. 
Feb 23 17:13:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:48.438870835Z" level=info msg="Pulled image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:10a5c4496e21c74681484951c4f7b1059355127b627962c4663f32391ed151c3" id=89e08eda-f240-4780-943b-b0dfa5a8d8b1 name=/runtime.v1.ImageService/PullImage Feb 23 17:13:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:48.443950688Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:10a5c4496e21c74681484951c4f7b1059355127b627962c4663f32391ed151c3" id=36cd9609-aced-4b71-bcd2-bdf9f23f8cb5 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:48.445412378Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e5dbe812e7b4c7ebeea3eb6830cd76c6c524b96d63120f288300e5801b416462,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:10a5c4496e21c74681484951c4f7b1059355127b627962c4663f32391ed151c3],Size_:412546459,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=36cd9609-aced-4b71-bcd2-bdf9f23f8cb5 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:48.446273176Z" level=info msg="Creating container: openshift-monitoring/alertmanager-main-1/alertmanager" id=15521e59-04f5-4f62-a1ed-1879b3980b6e name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:13:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:48.446353408Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:13:48 ip-10-0-136-68 systemd[1]: run-runc-329b52729ecbc1d70f36f97051984466ea00d33a05ba330cca8d0ef02aeaa83c-runc.WEwe6U.mount: Succeeded. Feb 23 17:13:48 ip-10-0-136-68 systemd[1]: Started libcontainer container 329b52729ecbc1d70f36f97051984466ea00d33a05ba330cca8d0ef02aeaa83c. 
Feb 23 17:13:48 ip-10-0-136-68 systemd[1]: Started crio-conmon-6bb5a5ead71a329476ae68f1d149309024efa4071bcb27a62913e5b8ab90675c.scope. Feb 23 17:13:48 ip-10-0-136-68 systemd[1]: Started libcontainer container 6bb5a5ead71a329476ae68f1d149309024efa4071bcb27a62913e5b8ab90675c. Feb 23 17:13:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:48.545076559Z" level=info msg="Created container 329b52729ecbc1d70f36f97051984466ea00d33a05ba330cca8d0ef02aeaa83c: openshift-monitoring/openshift-state-metrics-7df79db5c7-2clx8/openshift-state-metrics" id=1689cd6b-f13e-4a61-88f9-aa372d40f3c1 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:13:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:48.545511217Z" level=info msg="Starting container: 329b52729ecbc1d70f36f97051984466ea00d33a05ba330cca8d0ef02aeaa83c" id=7091777f-dc39-4722-a242-8cb65e074e95 name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:13:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:48.555192007Z" level=info msg="Created container 6bb5a5ead71a329476ae68f1d149309024efa4071bcb27a62913e5b8ab90675c: openshift-monitoring/alertmanager-main-1/alertmanager" id=15521e59-04f5-4f62-a1ed-1879b3980b6e name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:13:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:48.555545854Z" level=info msg="Starting container: 6bb5a5ead71a329476ae68f1d149309024efa4071bcb27a62913e5b8ab90675c" id=4bd1b5e0-5c85-4b0a-976a-3ae9bdfa7f26 name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:13:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:48.562245364Z" level=info msg="Started container" PID=58311 containerID=6bb5a5ead71a329476ae68f1d149309024efa4071bcb27a62913e5b8ab90675c description=openshift-monitoring/alertmanager-main-1/alertmanager id=4bd1b5e0-5c85-4b0a-976a-3ae9bdfa7f26 name=/runtime.v1.RuntimeService/StartContainer sandboxID=201d0ba9a16d3f244f86f2085033ad6085fe53d6850b186d42c49024054b3842 Feb 23 17:13:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 
17:13:48.571219831Z" level=info msg="Started container" PID=58291 containerID=329b52729ecbc1d70f36f97051984466ea00d33a05ba330cca8d0ef02aeaa83c description=openshift-monitoring/openshift-state-metrics-7df79db5c7-2clx8/openshift-state-metrics id=7091777f-dc39-4722-a242-8cb65e074e95 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e14bdaaa341fca1606f9a0c3b7c37d9baae6f7be7df0452c453184abfdc328f8 Feb 23 17:13:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:48.574794135Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:0f2284b974da7a5af712db888818e4bbd7604e604bc70c86ce8bc75c8f73457d" id=95435043-df27-4864-9169-69445828a792 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:48.575023799Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:51eee535b46f8fa059a614084a60e25b9d7f27cc61dacbc265696e915f022f0f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:0f2284b974da7a5af712db888818e4bbd7604e604bc70c86ce8bc75c8f73457d],Size_:359941570,Uid:&Int64Value{Value:1001,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=95435043-df27-4864-9169-69445828a792 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:48.577775371Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:0f2284b974da7a5af712db888818e4bbd7604e604bc70c86ce8bc75c8f73457d" id=ec671e0f-da0f-43e0-877f-bc5347e1c816 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:48.577972209Z" level=info msg="Image status: 
&ImageStatusResponse{Image:&Image{Id:51eee535b46f8fa059a614084a60e25b9d7f27cc61dacbc265696e915f022f0f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:0f2284b974da7a5af712db888818e4bbd7604e604bc70c86ce8bc75c8f73457d],Size_:359941570,Uid:&Int64Value{Value:1001,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=ec671e0f-da0f-43e0-877f-bc5347e1c816 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:48.579104242Z" level=info msg="Creating container: openshift-monitoring/alertmanager-main-1/config-reloader" id=de952b03-9655-427c-8bc5-c298b69766f8 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:13:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:48.579219531Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:13:48 ip-10-0-136-68 systemd[1]: Started crio-conmon-b2d6cdee2d1101309cebbd135553153f312137c295e3020ca67a1c92fe7b6190.scope. Feb 23 17:13:48 ip-10-0-136-68 systemd[1]: Started libcontainer container b2d6cdee2d1101309cebbd135553153f312137c295e3020ca67a1c92fe7b6190. 
Feb 23 17:13:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:48.709028158Z" level=info msg="Created container b2d6cdee2d1101309cebbd135553153f312137c295e3020ca67a1c92fe7b6190: openshift-monitoring/alertmanager-main-1/config-reloader" id=de952b03-9655-427c-8bc5-c298b69766f8 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:13:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:48.709473041Z" level=info msg="Starting container: b2d6cdee2d1101309cebbd135553153f312137c295e3020ca67a1c92fe7b6190" id=f7953140-be70-4019-8ee6-b9eda4c46805 name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:13:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:48.716495995Z" level=info msg="Started container" PID=58380 containerID=b2d6cdee2d1101309cebbd135553153f312137c295e3020ca67a1c92fe7b6190 description=openshift-monitoring/alertmanager-main-1/config-reloader id=f7953140-be70-4019-8ee6-b9eda4c46805 name=/runtime.v1.RuntimeService/StartContainer sandboxID=201d0ba9a16d3f244f86f2085033ad6085fe53d6850b186d42c49024054b3842 Feb 23 17:13:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:48.725259331Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:87f1812b55e14f104d3ab49c06711f5ba37e94490717470e42c12749fe3c90ad" id=71ceb7d7-b427-496a-88a4-c7d494a7786e name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:48.725420270Z" level=info msg="Image registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:87f1812b55e14f104d3ab49c06711f5ba37e94490717470e42c12749fe3c90ad not found" id=71ceb7d7-b427-496a-88a4-c7d494a7786e name=/runtime.v1.ImageService/ImageStatus Feb 23 17:13:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:48.725967580Z" level=info msg="Pulling image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:87f1812b55e14f104d3ab49c06711f5ba37e94490717470e42c12749fe3c90ad" id=48183a53-0c58-46db-a61c-77d52274ce51 
name=/runtime.v1.ImageService/PullImage Feb 23 17:13:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:48.726876716Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:87f1812b55e14f104d3ab49c06711f5ba37e94490717470e42c12749fe3c90ad\"" Feb 23 17:13:48 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:48.850532 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-7df79db5c7-2clx8" event=&{ID:aadb02e0-de11-41e9-9dc0-106e1d0fc545 Type:ContainerStarted Data:329b52729ecbc1d70f36f97051984466ea00d33a05ba330cca8d0ef02aeaa83c} Feb 23 17:13:48 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:48.852535 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:39a23baf-fee4-4b3a-839f-6c0452a117b2 Type:ContainerStarted Data:b2d6cdee2d1101309cebbd135553153f312137c295e3020ca67a1c92fe7b6190} Feb 23 17:13:48 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:48.852558 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:39a23baf-fee4-4b3a-839f-6c0452a117b2 Type:ContainerStarted Data:6bb5a5ead71a329476ae68f1d149309024efa4071bcb27a62913e5b8ab90675c} Feb 23 17:13:49 ip-10-0-136-68 systemd[1]: run-runc-6bb5a5ead71a329476ae68f1d149309024efa4071bcb27a62913e5b8ab90675c-runc.xrcI90.mount: Succeeded. Feb 23 17:13:49 ip-10-0-136-68 conmon[58299]: conmon 6bb5a5ead71a329476ae : container 58311 exited with status 1 Feb 23 17:13:49 ip-10-0-136-68 systemd[1]: crio-6bb5a5ead71a329476ae68f1d149309024efa4071bcb27a62913e5b8ab90675c.scope: Succeeded. Feb 23 17:13:49 ip-10-0-136-68 systemd[1]: crio-6bb5a5ead71a329476ae68f1d149309024efa4071bcb27a62913e5b8ab90675c.scope: Consumed 107ms CPU time Feb 23 17:13:49 ip-10-0-136-68 systemd[1]: crio-conmon-6bb5a5ead71a329476ae68f1d149309024efa4071bcb27a62913e5b8ab90675c.scope: Succeeded. 
Feb 23 17:13:49 ip-10-0-136-68 systemd[1]: crio-conmon-6bb5a5ead71a329476ae68f1d149309024efa4071bcb27a62913e5b8ab90675c.scope: Consumed 26ms CPU time Feb 23 17:13:49 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:49.855582 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-1_39a23baf-fee4-4b3a-839f-6c0452a117b2/alertmanager/0.log" Feb 23 17:13:49 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:49.855634 2112 generic.go:296] "Generic (PLEG): container finished" podID=39a23baf-fee4-4b3a-839f-6c0452a117b2 containerID="6bb5a5ead71a329476ae68f1d149309024efa4071bcb27a62913e5b8ab90675c" exitCode=1 Feb 23 17:13:49 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:49.855725 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:39a23baf-fee4-4b3a-839f-6c0452a117b2 Type:ContainerDied Data:6bb5a5ead71a329476ae68f1d149309024efa4071bcb27a62913e5b8ab90675c} Feb 23 17:13:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:49.985280599Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:87f1812b55e14f104d3ab49c06711f5ba37e94490717470e42c12749fe3c90ad\"" Feb 23 17:13:50 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:50.657914 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-ingress/router-default-77f788594f-j5twb] Feb 23 17:13:50 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:50.658112 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-ingress/router-default-77f788594f-j5twb" podUID=e7ec9547-ee4c-4966-997f-719d78dcc31b containerName="router" containerID="cri-o://aa5e7c96b987acbdee2223e102d68db343b44e03e5112322e17f7cd3f959eed4" gracePeriod=3600 Feb 23 17:13:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:50.658450556Z" level=info msg="Stopping container: aa5e7c96b987acbdee2223e102d68db343b44e03e5112322e17f7cd3f959eed4 (timeout: 3600s)" 
id=192de3c3-da02-4211-b377-b142f9983999 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:13:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:54.342780 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-image-registry/node-ca-wdtzq] Feb 23 17:13:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:54.342970 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-image-registry/node-ca-wdtzq" podUID=ecd261a9-4d88-4e3d-aa47-803a685b6569 containerName="node-ca" containerID="cri-o://feb17ee26beeef149a61cde12c7ab2755e8ec793f9aadc49fcb709f950299ce4" gracePeriod=30 Feb 23 17:13:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:54.343301891Z" level=info msg="Stopping container: feb17ee26beeef149a61cde12c7ab2755e8ec793f9aadc49fcb709f950299ce4 (timeout: 30s)" id=b5e3679d-5b62-43ff-a37e-e621f9e7942a name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:13:54 ip-10-0-136-68 systemd[1]: crio-conmon-feb17ee26beeef149a61cde12c7ab2755e8ec793f9aadc49fcb709f950299ce4.scope: Succeeded. Feb 23 17:13:54 ip-10-0-136-68 systemd[1]: crio-conmon-feb17ee26beeef149a61cde12c7ab2755e8ec793f9aadc49fcb709f950299ce4.scope: Consumed 32ms CPU time Feb 23 17:13:54 ip-10-0-136-68 systemd[1]: crio-feb17ee26beeef149a61cde12c7ab2755e8ec793f9aadc49fcb709f950299ce4.scope: Succeeded. Feb 23 17:13:54 ip-10-0-136-68 systemd[1]: crio-feb17ee26beeef149a61cde12c7ab2755e8ec793f9aadc49fcb709f950299ce4.scope: Consumed 726ms CPU time Feb 23 17:13:54 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-08065b85fc955c9f3281e01686275234174e33221143d677cda7090e4b452e25-merged.mount: Succeeded. 
Feb 23 17:13:54 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-08065b85fc955c9f3281e01686275234174e33221143d677cda7090e4b452e25-merged.mount: Consumed 0 CPU time Feb 23 17:13:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:54.527217530Z" level=info msg="Stopped container feb17ee26beeef149a61cde12c7ab2755e8ec793f9aadc49fcb709f950299ce4: openshift-image-registry/node-ca-wdtzq/node-ca" id=b5e3679d-5b62-43ff-a37e-e621f9e7942a name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:13:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:54.527580440Z" level=info msg="Stopping pod sandbox: ea37c47cff30008daba4dae0a9953293135bfac80990a6502b845d6c4a82eddc" id=2c0c8b8d-9353-4004-9721-fd0040aa7419 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:13:54 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-c2fc37fce76f412fafc51c77a072559f530575834c131275d223be1d9faa0031-merged.mount: Succeeded. Feb 23 17:13:54 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-c2fc37fce76f412fafc51c77a072559f530575834c131275d223be1d9faa0031-merged.mount: Consumed 0 CPU time Feb 23 17:13:54 ip-10-0-136-68 systemd[1]: run-utsns-a65980b3\x2d1d5c\x2d4cb0\x2d921c\x2d895e293a797e.mount: Succeeded. Feb 23 17:13:54 ip-10-0-136-68 systemd[1]: run-utsns-a65980b3\x2d1d5c\x2d4cb0\x2d921c\x2d895e293a797e.mount: Consumed 0 CPU time Feb 23 17:13:54 ip-10-0-136-68 systemd[1]: run-ipcns-a65980b3\x2d1d5c\x2d4cb0\x2d921c\x2d895e293a797e.mount: Succeeded. Feb 23 17:13:54 ip-10-0-136-68 systemd[1]: run-ipcns-a65980b3\x2d1d5c\x2d4cb0\x2d921c\x2d895e293a797e.mount: Consumed 0 CPU time Feb 23 17:13:54 ip-10-0-136-68 systemd[1]: run-netns-a65980b3\x2d1d5c\x2d4cb0\x2d921c\x2d895e293a797e.mount: Succeeded. 
Feb 23 17:13:54 ip-10-0-136-68 systemd[1]: run-netns-a65980b3\x2d1d5c\x2d4cb0\x2d921c\x2d895e293a797e.mount: Consumed 0 CPU time Feb 23 17:13:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:54.597738911Z" level=info msg="Stopped pod sandbox: ea37c47cff30008daba4dae0a9953293135bfac80990a6502b845d6c4a82eddc" id=2c0c8b8d-9353-4004-9721-fd0040aa7419 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:13:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:54.761309 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ecd261a9-4d88-4e3d-aa47-803a685b6569-serviceca\") pod \"ecd261a9-4d88-4e3d-aa47-803a685b6569\" (UID: \"ecd261a9-4d88-4e3d-aa47-803a685b6569\") " Feb 23 17:13:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:54.761351 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ecd261a9-4d88-4e3d-aa47-803a685b6569-host\") pod \"ecd261a9-4d88-4e3d-aa47-803a685b6569\" (UID: \"ecd261a9-4d88-4e3d-aa47-803a685b6569\") " Feb 23 17:13:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:54.761434 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqpfc\" (UniqueName: \"kubernetes.io/projected/ecd261a9-4d88-4e3d-aa47-803a685b6569-kube-api-access-jqpfc\") pod \"ecd261a9-4d88-4e3d-aa47-803a685b6569\" (UID: \"ecd261a9-4d88-4e3d-aa47-803a685b6569\") " Feb 23 17:13:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:54.762830 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecd261a9-4d88-4e3d-aa47-803a685b6569-host" (OuterVolumeSpecName: "host") pod "ecd261a9-4d88-4e3d-aa47-803a685b6569" (UID: "ecd261a9-4d88-4e3d-aa47-803a685b6569"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 17:13:54 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:13:54.762984 2112 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/ecd261a9-4d88-4e3d-aa47-803a685b6569/volumes/kubernetes.io~configmap/serviceca: clearQuota called, but quotas disabled Feb 23 17:13:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:54.763287 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecd261a9-4d88-4e3d-aa47-803a685b6569-serviceca" (OuterVolumeSpecName: "serviceca") pod "ecd261a9-4d88-4e3d-aa47-803a685b6569" (UID: "ecd261a9-4d88-4e3d-aa47-803a685b6569"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:13:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:54.776840 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecd261a9-4d88-4e3d-aa47-803a685b6569-kube-api-access-jqpfc" (OuterVolumeSpecName: "kube-api-access-jqpfc") pod "ecd261a9-4d88-4e3d-aa47-803a685b6569" (UID: "ecd261a9-4d88-4e3d-aa47-803a685b6569"). InnerVolumeSpecName "kube-api-access-jqpfc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:13:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:54.863051 2112 reconciler.go:399] "Volume detached for volume \"kube-api-access-jqpfc\" (UniqueName: \"kubernetes.io/projected/ecd261a9-4d88-4e3d-aa47-803a685b6569-kube-api-access-jqpfc\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:13:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:54.863082 2112 reconciler.go:399] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ecd261a9-4d88-4e3d-aa47-803a685b6569-serviceca\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:13:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:54.863091 2112 reconciler.go:399] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ecd261a9-4d88-4e3d-aa47-803a685b6569-host\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:13:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:54.901872 2112 generic.go:296] "Generic (PLEG): container finished" podID=ecd261a9-4d88-4e3d-aa47-803a685b6569 containerID="feb17ee26beeef149a61cde12c7ab2755e8ec793f9aadc49fcb709f950299ce4" exitCode=0 Feb 23 17:13:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:54.901910 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-wdtzq" event=&{ID:ecd261a9-4d88-4e3d-aa47-803a685b6569 Type:ContainerDied Data:feb17ee26beeef149a61cde12c7ab2755e8ec793f9aadc49fcb709f950299ce4} Feb 23 17:13:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:54.901942 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-wdtzq" event=&{ID:ecd261a9-4d88-4e3d-aa47-803a685b6569 Type:ContainerDied Data:ea37c47cff30008daba4dae0a9953293135bfac80990a6502b845d6c4a82eddc} Feb 23 17:13:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:54.901961 2112 scope.go:115] "RemoveContainer" 
containerID="feb17ee26beeef149a61cde12c7ab2755e8ec793f9aadc49fcb709f950299ce4" Feb 23 17:13:54 ip-10-0-136-68 systemd[1]: Removed slice libcontainer container kubepods-burstable-podecd261a9_4d88_4e3d_aa47_803a685b6569.slice. Feb 23 17:13:54 ip-10-0-136-68 systemd[1]: kubepods-burstable-podecd261a9_4d88_4e3d_aa47_803a685b6569.slice: Consumed 758ms CPU time Feb 23 17:13:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:54.913386507Z" level=info msg="Removing container: feb17ee26beeef149a61cde12c7ab2755e8ec793f9aadc49fcb709f950299ce4" id=843e81e3-e0d2-4a55-bfe8-f06859839e25 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:13:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:54.929784 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-image-registry/node-ca-wdtzq] Feb 23 17:13:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:54.945465 2112 kubelet.go:2129] "SyncLoop REMOVE" source="api" pods=[openshift-image-registry/node-ca-wdtzq] Feb 23 17:13:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:54.948973985Z" level=info msg="Removed container feb17ee26beeef149a61cde12c7ab2755e8ec793f9aadc49fcb709f950299ce4: openshift-image-registry/node-ca-wdtzq/node-ca" id=843e81e3-e0d2-4a55-bfe8-f06859839e25 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:13:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:54.949140 2112 scope.go:115] "RemoveContainer" containerID="feb17ee26beeef149a61cde12c7ab2755e8ec793f9aadc49fcb709f950299ce4" Feb 23 17:13:54 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:13:54.949401 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"feb17ee26beeef149a61cde12c7ab2755e8ec793f9aadc49fcb709f950299ce4\": container with ID starting with feb17ee26beeef149a61cde12c7ab2755e8ec793f9aadc49fcb709f950299ce4 not found: ID does not exist" containerID="feb17ee26beeef149a61cde12c7ab2755e8ec793f9aadc49fcb709f950299ce4" Feb 23 17:13:54 
ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:54.949438 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:feb17ee26beeef149a61cde12c7ab2755e8ec793f9aadc49fcb709f950299ce4} err="failed to get container status \"feb17ee26beeef149a61cde12c7ab2755e8ec793f9aadc49fcb709f950299ce4\": rpc error: code = NotFound desc = could not find container \"feb17ee26beeef149a61cde12c7ab2755e8ec793f9aadc49fcb709f950299ce4\": container with ID starting with feb17ee26beeef149a61cde12c7ab2755e8ec793f9aadc49fcb709f950299ce4 not found: ID does not exist" Feb 23 17:13:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:54.960770 2112 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-image-registry/node-ca-wsg6f] Feb 23 17:13:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:54.960816 2112 topology_manager.go:205] "Topology Admit Handler" Feb 23 17:13:54 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:13:54.960893 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ecd261a9-4d88-4e3d-aa47-803a685b6569" containerName="node-ca" Feb 23 17:13:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:54.960904 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecd261a9-4d88-4e3d-aa47-803a685b6569" containerName="node-ca" Feb 23 17:13:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:54.960976 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="ecd261a9-4d88-4e3d-aa47-803a685b6569" containerName="node-ca" Feb 23 17:13:54 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-podbd2da6fb_b383_40fe_a3ad_b6436a02985b.slice. 
Feb 23 17:13:55 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:55.064458 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxsmb\" (UniqueName: \"kubernetes.io/projected/bd2da6fb-b383-40fe-a3ad-b6436a02985b-kube-api-access-cxsmb\") pod \"node-ca-wsg6f\" (UID: \"bd2da6fb-b383-40fe-a3ad-b6436a02985b\") " pod="openshift-image-registry/node-ca-wsg6f"
Feb 23 17:13:55 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:55.064539 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bd2da6fb-b383-40fe-a3ad-b6436a02985b-host\") pod \"node-ca-wsg6f\" (UID: \"bd2da6fb-b383-40fe-a3ad-b6436a02985b\") " pod="openshift-image-registry/node-ca-wsg6f"
Feb 23 17:13:55 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:55.064620 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bd2da6fb-b383-40fe-a3ad-b6436a02985b-serviceca\") pod \"node-ca-wsg6f\" (UID: \"bd2da6fb-b383-40fe-a3ad-b6436a02985b\") " pod="openshift-image-registry/node-ca-wsg6f"
Feb 23 17:13:55 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:55.165421 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bd2da6fb-b383-40fe-a3ad-b6436a02985b-host\") pod \"node-ca-wsg6f\" (UID: \"bd2da6fb-b383-40fe-a3ad-b6436a02985b\") " pod="openshift-image-registry/node-ca-wsg6f"
Feb 23 17:13:55 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:55.165463 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bd2da6fb-b383-40fe-a3ad-b6436a02985b-serviceca\") pod \"node-ca-wsg6f\" (UID: \"bd2da6fb-b383-40fe-a3ad-b6436a02985b\") " pod="openshift-image-registry/node-ca-wsg6f"
Feb 23 17:13:55 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:55.165506 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-cxsmb\" (UniqueName: \"kubernetes.io/projected/bd2da6fb-b383-40fe-a3ad-b6436a02985b-kube-api-access-cxsmb\") pod \"node-ca-wsg6f\" (UID: \"bd2da6fb-b383-40fe-a3ad-b6436a02985b\") " pod="openshift-image-registry/node-ca-wsg6f"
Feb 23 17:13:55 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:55.165557 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bd2da6fb-b383-40fe-a3ad-b6436a02985b-host\") pod \"node-ca-wsg6f\" (UID: \"bd2da6fb-b383-40fe-a3ad-b6436a02985b\") " pod="openshift-image-registry/node-ca-wsg6f"
Feb 23 17:13:55 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:55.166680 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bd2da6fb-b383-40fe-a3ad-b6436a02985b-serviceca\") pod \"node-ca-wsg6f\" (UID: \"bd2da6fb-b383-40fe-a3ad-b6436a02985b\") " pod="openshift-image-registry/node-ca-wsg6f"
Feb 23 17:13:55 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:55.180846 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxsmb\" (UniqueName: \"kubernetes.io/projected/bd2da6fb-b383-40fe-a3ad-b6436a02985b-kube-api-access-cxsmb\") pod \"node-ca-wsg6f\" (UID: \"bd2da6fb-b383-40fe-a3ad-b6436a02985b\") " pod="openshift-image-registry/node-ca-wsg6f"
Feb 23 17:13:55 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:55.278019 2112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-wsg6f"
Feb 23 17:13:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:55.278511368Z" level=info msg="Running pod sandbox: openshift-image-registry/node-ca-wsg6f/POD" id=d06cc46f-b3fe-4c0b-9f3b-d5a8626de0ec name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:13:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:55.278567939Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:13:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:55.297008809Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=d06cc46f-b3fe-4c0b-9f3b-d5a8626de0ec name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:13:55 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:13:55.300400 2112 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd2da6fb_b383_40fe_a3ad_b6436a02985b.slice/crio-c4bc7ffd38dd7829e130588911e2a9b76dd4055df2ffe9584b0f97767744e3ab.scope WatchSource:0}: Error finding container c4bc7ffd38dd7829e130588911e2a9b76dd4055df2ffe9584b0f97767744e3ab: Status 404 returned error can't find the container with id c4bc7ffd38dd7829e130588911e2a9b76dd4055df2ffe9584b0f97767744e3ab
Feb 23 17:13:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:55.302408502Z" level=info msg="Ran pod sandbox c4bc7ffd38dd7829e130588911e2a9b76dd4055df2ffe9584b0f97767744e3ab with infra container: openshift-image-registry/node-ca-wsg6f/POD" id=d06cc46f-b3fe-4c0b-9f3b-d5a8626de0ec name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:13:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:55.303161017Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3ca9ea56bcb9906b36c581d710f39067893501ab9e7aa74dad71e7cb71342563" id=6c4c922f-498d-4719-a73d-0e754ad72f46 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:13:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:55.303360884Z" level=info msg="Image registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3ca9ea56bcb9906b36c581d710f39067893501ab9e7aa74dad71e7cb71342563 not found" id=6c4c922f-498d-4719-a73d-0e754ad72f46 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:13:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:55.303921053Z" level=info msg="Pulling image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3ca9ea56bcb9906b36c581d710f39067893501ab9e7aa74dad71e7cb71342563" id=74566d2e-c9b6-4299-bef3-a56ceac279e7 name=/runtime.v1.ImageService/PullImage
Feb 23 17:13:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:55.304739289Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3ca9ea56bcb9906b36c581d710f39067893501ab9e7aa74dad71e7cb71342563\""
Feb 23 17:13:55 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-ea37c47cff30008daba4dae0a9953293135bfac80990a6502b845d6c4a82eddc-userdata-shm.mount: Succeeded.
Feb 23 17:13:55 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-ea37c47cff30008daba4dae0a9953293135bfac80990a6502b845d6c4a82eddc-userdata-shm.mount: Consumed 0 CPU time
Feb 23 17:13:55 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-ecd261a9\x2d4d88\x2d4e3d\x2daa47\x2d803a685b6569-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djqpfc.mount: Succeeded.
Feb 23 17:13:55 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-ecd261a9\x2d4d88\x2d4e3d\x2daa47\x2d803a685b6569-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djqpfc.mount: Consumed 0 CPU time
Feb 23 17:13:55 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:55.904864 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-wsg6f" event=&{ID:bd2da6fb-b383-40fe-a3ad-b6436a02985b Type:ContainerStarted Data:c4bc7ffd38dd7829e130588911e2a9b76dd4055df2ffe9584b0f97767744e3ab}
Feb 23 17:13:56 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:56.124068 2112 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=ecd261a9-4d88-4e3d-aa47-803a685b6569 path="/var/lib/kubelet/pods/ecd261a9-4d88-4e3d-aa47-803a685b6569/volumes"
Feb 23 17:13:56 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00323|connmgr|INFO|br-int<->unix#2: 1041 flow_mods in the 53 s starting 57 s ago (533 adds, 508 deletes)
Feb 23 17:13:56 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:56.502631486Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3ca9ea56bcb9906b36c581d710f39067893501ab9e7aa74dad71e7cb71342563\""
Feb 23 17:13:57 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:57.275708 2112 patch_prober.go:29] interesting pod/router-default-77f788594f-j5twb container/router namespace/openshift-ingress: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]backend-proxy-http ok
Feb 23 17:13:57 ip-10-0-136-68 kubenswrapper[2112]: [+]has-synced ok
Feb 23 17:13:57 ip-10-0-136-68 kubenswrapper[2112]: [-]process-running failed: reason withheld
Feb 23 17:13:57 ip-10-0-136-68 kubenswrapper[2112]: healthz check failed
Feb 23 17:13:57 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:57.275781 2112 prober.go:114] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-77f788594f-j5twb" podUID=e7ec9547-ee4c-4966-997f-719d78dcc31b containerName="router" probeResult=failure output="HTTP probe failed with statuscode: 500"
Feb 23 17:13:57 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00324|connmgr|INFO|br-ex<->unix#1160: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:13:58 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:58.338988825Z" level=info msg="Pulled image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:87f1812b55e14f104d3ab49c06711f5ba37e94490717470e42c12749fe3c90ad" id=48183a53-0c58-46db-a61c-77d52274ce51 name=/runtime.v1.ImageService/PullImage
Feb 23 17:13:58 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:58.339818257Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:87f1812b55e14f104d3ab49c06711f5ba37e94490717470e42c12749fe3c90ad" id=a18276e6-1351-4839-8690-bc8e9015ee6b name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:13:58 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:58.340893623Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c1d577960d1c46e90165da215c04054d71634cb8701ebd504e510368ee7bd65,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:87f1812b55e14f104d3ab49c06711f5ba37e94490717470e42c12749fe3c90ad],Size_:366055841,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=a18276e6-1351-4839-8690-bc8e9015ee6b name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:13:58 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:58.341632709Z" level=info msg="Creating container: openshift-monitoring/alertmanager-main-1/alertmanager-proxy" id=c457dc95-2132-4fbc-b62d-112a93d65bab name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:13:58 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:58.341770317Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:13:58 ip-10-0-136-68 systemd[1]: Started crio-conmon-8f5b322a10577d8ccff9095740b8d9e58de49e07103a658f20525af08232f8f9.scope.
Feb 23 17:13:58 ip-10-0-136-68 systemd[1]: run-runc-8f5b322a10577d8ccff9095740b8d9e58de49e07103a658f20525af08232f8f9-runc.dX9I5T.mount: Succeeded.
Feb 23 17:13:58 ip-10-0-136-68 systemd[1]: Started libcontainer container 8f5b322a10577d8ccff9095740b8d9e58de49e07103a658f20525af08232f8f9.
Feb 23 17:13:58 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:58.470625426Z" level=info msg="Created container 8f5b322a10577d8ccff9095740b8d9e58de49e07103a658f20525af08232f8f9: openshift-monitoring/alertmanager-main-1/alertmanager-proxy" id=c457dc95-2132-4fbc-b62d-112a93d65bab name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:13:58 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:58.471027115Z" level=info msg="Starting container: 8f5b322a10577d8ccff9095740b8d9e58de49e07103a658f20525af08232f8f9" id=29cab203-3fff-47e7-9372-816109acd9e9 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 17:13:58 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:58.489151144Z" level=info msg="Started container" PID=58740 containerID=8f5b322a10577d8ccff9095740b8d9e58de49e07103a658f20525af08232f8f9 description=openshift-monitoring/alertmanager-main-1/alertmanager-proxy id=29cab203-3fff-47e7-9372-816109acd9e9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=201d0ba9a16d3f244f86f2085033ad6085fe53d6850b186d42c49024054b3842
Feb 23 17:13:58 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:58.500366213Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0" id=a58cd70a-ee99-4eb8-89ef-1ac12e4ba914 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:13:58 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:58.500529931Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e8505ce6d56c9d2fa3ddbbc4dd2c4096db686f042ecf82eed342af6e60223854,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0],Size_:402716043,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=a58cd70a-ee99-4eb8-89ef-1ac12e4ba914 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:13:58 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:58.501147129Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0" id=f9431f74-5b22-4d37-aac3-2bb5040d2f25 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:13:58 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:58.501346897Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e8505ce6d56c9d2fa3ddbbc4dd2c4096db686f042ecf82eed342af6e60223854,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0],Size_:402716043,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=f9431f74-5b22-4d37-aac3-2bb5040d2f25 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:13:58 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:58.502257713Z" level=info msg="Creating container: openshift-monitoring/alertmanager-main-1/kube-rbac-proxy" id=4037a089-faca-4f8f-95eb-056c5806605a name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:13:58 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:58.502361856Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:13:58 ip-10-0-136-68 systemd[1]: Started crio-conmon-1054b7333022434798fabbdf1ce9bf51a379e239c0e46fb4d949c68c98bbe551.scope.
Feb 23 17:13:58 ip-10-0-136-68 systemd[1]: Started libcontainer container 1054b7333022434798fabbdf1ce9bf51a379e239c0e46fb4d949c68c98bbe551.
Feb 23 17:13:58 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:58.612368903Z" level=info msg="Created container 1054b7333022434798fabbdf1ce9bf51a379e239c0e46fb4d949c68c98bbe551: openshift-monitoring/alertmanager-main-1/kube-rbac-proxy" id=4037a089-faca-4f8f-95eb-056c5806605a name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:13:58 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:58.612810359Z" level=info msg="Starting container: 1054b7333022434798fabbdf1ce9bf51a379e239c0e46fb4d949c68c98bbe551" id=6c91cda7-604c-4025-aa77-6945f04551db name=/runtime.v1.RuntimeService/StartContainer
Feb 23 17:13:58 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:58.619955076Z" level=info msg="Started container" PID=58783 containerID=1054b7333022434798fabbdf1ce9bf51a379e239c0e46fb4d949c68c98bbe551 description=openshift-monitoring/alertmanager-main-1/kube-rbac-proxy id=6c91cda7-604c-4025-aa77-6945f04551db name=/runtime.v1.RuntimeService/StartContainer sandboxID=201d0ba9a16d3f244f86f2085033ad6085fe53d6850b186d42c49024054b3842
Feb 23 17:13:58 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:58.629160989Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0" id=f505c901-7c99-43e1-892e-7a06d4075c55 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:13:58 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:58.629324577Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e8505ce6d56c9d2fa3ddbbc4dd2c4096db686f042ecf82eed342af6e60223854,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0],Size_:402716043,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=f505c901-7c99-43e1-892e-7a06d4075c55 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:13:58 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:58.630037880Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0" id=b4d0838c-88ee-4ed5-a17b-1bf03531aee1 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:13:58 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:58.630208496Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e8505ce6d56c9d2fa3ddbbc4dd2c4096db686f042ecf82eed342af6e60223854,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0],Size_:402716043,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=b4d0838c-88ee-4ed5-a17b-1bf03531aee1 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:13:58 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:58.630979812Z" level=info msg="Creating container: openshift-monitoring/alertmanager-main-1/kube-rbac-proxy-metric" id=eea861d4-3c21-477c-83f7-2fffefc023f0 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:13:58 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:58.631066376Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:13:58 ip-10-0-136-68 systemd[1]: Started crio-conmon-a4ea8facc874dcc785d2ac1735b2e0ec11eedc23a9c4dd37f2ea9b1138f8a1b4.scope.
Feb 23 17:13:58 ip-10-0-136-68 systemd[1]: Started libcontainer container a4ea8facc874dcc785d2ac1735b2e0ec11eedc23a9c4dd37f2ea9b1138f8a1b4.
Feb 23 17:13:58 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:58.726338211Z" level=info msg="Created container a4ea8facc874dcc785d2ac1735b2e0ec11eedc23a9c4dd37f2ea9b1138f8a1b4: openshift-monitoring/alertmanager-main-1/kube-rbac-proxy-metric" id=eea861d4-3c21-477c-83f7-2fffefc023f0 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:13:58 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:58.726757892Z" level=info msg="Starting container: a4ea8facc874dcc785d2ac1735b2e0ec11eedc23a9c4dd37f2ea9b1138f8a1b4" id=60af7059-5948-4f52-9079-1d055b6f3d53 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 17:13:58 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:58.733774375Z" level=info msg="Started container" PID=58829 containerID=a4ea8facc874dcc785d2ac1735b2e0ec11eedc23a9c4dd37f2ea9b1138f8a1b4 description=openshift-monitoring/alertmanager-main-1/kube-rbac-proxy-metric id=60af7059-5948-4f52-9079-1d055b6f3d53 name=/runtime.v1.RuntimeService/StartContainer sandboxID=201d0ba9a16d3f244f86f2085033ad6085fe53d6850b186d42c49024054b3842
Feb 23 17:13:58 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:58.741901251Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9a9607c8b5ab38ce556e52abc5260e6419d2564d361be49c6a6ae5158c3ecaba" id=5aea344d-0740-461e-b361-0acf77ab99d9 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:13:58 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:58.742118165Z" level=info msg="Image registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9a9607c8b5ab38ce556e52abc5260e6419d2564d361be49c6a6ae5158c3ecaba not found" id=5aea344d-0740-461e-b361-0acf77ab99d9 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:13:58 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:58.742817172Z" level=info msg="Pulling image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9a9607c8b5ab38ce556e52abc5260e6419d2564d361be49c6a6ae5158c3ecaba" id=90ce31b5-f504-4196-af01-c061f97ac46f name=/runtime.v1.ImageService/PullImage
Feb 23 17:13:58 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:58.744077232Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9a9607c8b5ab38ce556e52abc5260e6419d2564d361be49c6a6ae5158c3ecaba\""
Feb 23 17:13:58 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:58.912761 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-1_39a23baf-fee4-4b3a-839f-6c0452a117b2/alertmanager/0.log"
Feb 23 17:13:58 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:58.912802 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:39a23baf-fee4-4b3a-839f-6c0452a117b2 Type:ContainerStarted Data:a4ea8facc874dcc785d2ac1735b2e0ec11eedc23a9c4dd37f2ea9b1138f8a1b4}
Feb 23 17:13:58 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:58.912820 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:39a23baf-fee4-4b3a-839f-6c0452a117b2 Type:ContainerStarted Data:1054b7333022434798fabbdf1ce9bf51a379e239c0e46fb4d949c68c98bbe551}
Feb 23 17:13:58 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:13:58.912829 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:39a23baf-fee4-4b3a-839f-6c0452a117b2 Type:ContainerStarted Data:8f5b322a10577d8ccff9095740b8d9e58de49e07103a658f20525af08232f8f9}
Feb 23 17:13:59 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:13:59.882224452Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9a9607c8b5ab38ce556e52abc5260e6419d2564d361be49c6a6ae5158c3ecaba\""
Feb 23 17:14:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:00.630846158Z" level=info msg="Pulled image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3ca9ea56bcb9906b36c581d710f39067893501ab9e7aa74dad71e7cb71342563" id=74566d2e-c9b6-4299-bef3-a56ceac279e7 name=/runtime.v1.ImageService/PullImage
Feb 23 17:14:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:00.631522480Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3ca9ea56bcb9906b36c581d710f39067893501ab9e7aa74dad71e7cb71342563" id=bf09dc4a-590c-4bf6-ab42-93dafdceffc1 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:14:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:00.632794895Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:5483c0b731765f8135bdbd734fa974193843b100648d623cc217de693f0adbd5,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3ca9ea56bcb9906b36c581d710f39067893501ab9e7aa74dad71e7cb71342563],Size_:423974017,Uid:&Int64Value{Value:1001,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=bf09dc4a-590c-4bf6-ab42-93dafdceffc1 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:14:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:00.633439601Z" level=info msg="Creating container: openshift-image-registry/node-ca-wsg6f/node-ca" id=28827517-4541-4668-b04c-9849fa3d6cca name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:14:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:00.633512885Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:14:00 ip-10-0-136-68 systemd[1]: Started crio-conmon-0f9728bfc2501ec855679fbc064366a5231068ef656020ff8bd463f77e83a3eb.scope.
Feb 23 17:14:00 ip-10-0-136-68 systemd[1]: Started libcontainer container 0f9728bfc2501ec855679fbc064366a5231068ef656020ff8bd463f77e83a3eb.
Feb 23 17:14:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:00.793102654Z" level=info msg="Created container 0f9728bfc2501ec855679fbc064366a5231068ef656020ff8bd463f77e83a3eb: openshift-image-registry/node-ca-wsg6f/node-ca" id=28827517-4541-4668-b04c-9849fa3d6cca name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:14:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:00.793532292Z" level=info msg="Starting container: 0f9728bfc2501ec855679fbc064366a5231068ef656020ff8bd463f77e83a3eb" id=1cbf299a-fb93-4c8b-8abb-2b7078bbad95 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 17:14:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:00.800784524Z" level=info msg="Started container" PID=58925 containerID=0f9728bfc2501ec855679fbc064366a5231068ef656020ff8bd463f77e83a3eb description=openshift-image-registry/node-ca-wsg6f/node-ca id=1cbf299a-fb93-4c8b-8abb-2b7078bbad95 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c4bc7ffd38dd7829e130588911e2a9b76dd4055df2ffe9584b0f97767744e3ab
Feb 23 17:14:00 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:00.917292 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-wsg6f" event=&{ID:bd2da6fb-b383-40fe-a3ad-b6436a02985b Type:ContainerStarted Data:0f9728bfc2501ec855679fbc064366a5231068ef656020ff8bd463f77e83a3eb}
Feb 23 17:14:04 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:04.185895261Z" level=info msg="Pulled image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9a9607c8b5ab38ce556e52abc5260e6419d2564d361be49c6a6ae5158c3ecaba" id=90ce31b5-f504-4196-af01-c061f97ac46f name=/runtime.v1.ImageService/PullImage
Feb 23 17:14:04 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:04.186584396Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9a9607c8b5ab38ce556e52abc5260e6419d2564d361be49c6a6ae5158c3ecaba" id=5fc52db5-81db-488a-81d5-a91d1fe99665 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:14:04 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:04.187838402Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:0573ce164a5311eba33dd14943bf6384e8f10c729cb5527e7d6f552cda9be98d,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9a9607c8b5ab38ce556e52abc5260e6419d2564d361be49c6a6ae5158c3ecaba],Size_:370746729,Uid:&Int64Value{Value:1001,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=5fc52db5-81db-488a-81d5-a91d1fe99665 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:14:04 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:04.188508177Z" level=info msg="Creating container: openshift-monitoring/alertmanager-main-1/prom-label-proxy" id=3c978288-b2ec-4be6-bbb0-3fcc20e744ba name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:14:04 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:04.188598472Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:14:04 ip-10-0-136-68 systemd[1]: Started crio-conmon-d0c87217a99f38154ecebed6eef6b64e725ed2930f8ecb95abb393ca96869e97.scope.
Feb 23 17:14:04 ip-10-0-136-68 systemd[1]: run-runc-d0c87217a99f38154ecebed6eef6b64e725ed2930f8ecb95abb393ca96869e97-runc.VaewXd.mount: Succeeded.
Feb 23 17:14:04 ip-10-0-136-68 systemd[1]: Started libcontainer container d0c87217a99f38154ecebed6eef6b64e725ed2930f8ecb95abb393ca96869e97.
Feb 23 17:14:04 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:04.301031003Z" level=info msg="Created container d0c87217a99f38154ecebed6eef6b64e725ed2930f8ecb95abb393ca96869e97: openshift-monitoring/alertmanager-main-1/prom-label-proxy" id=3c978288-b2ec-4be6-bbb0-3fcc20e744ba name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:14:04 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:04.301446973Z" level=info msg="Starting container: d0c87217a99f38154ecebed6eef6b64e725ed2930f8ecb95abb393ca96869e97" id=6748849b-a12e-4d51-ac8c-a5ba55a2b792 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 17:14:04 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:04.307905689Z" level=info msg="Started container" PID=59018 containerID=d0c87217a99f38154ecebed6eef6b64e725ed2930f8ecb95abb393ca96869e97 description=openshift-monitoring/alertmanager-main-1/prom-label-proxy id=6748849b-a12e-4d51-ac8c-a5ba55a2b792 name=/runtime.v1.RuntimeService/StartContainer sandboxID=201d0ba9a16d3f244f86f2085033ad6085fe53d6850b186d42c49024054b3842
Feb 23 17:14:04 ip-10-0-136-68 systemd[1]: run-runc-97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923-runc.bR0JdZ.mount: Succeeded.
Feb 23 17:14:04 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:04.926751 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-1_39a23baf-fee4-4b3a-839f-6c0452a117b2/alertmanager/0.log"
Feb 23 17:14:04 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:04.926807 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:39a23baf-fee4-4b3a-839f-6c0452a117b2 Type:ContainerStarted Data:d0c87217a99f38154ecebed6eef6b64e725ed2930f8ecb95abb393ca96869e97}
Feb 23 17:14:04 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:04.927150 2112 scope.go:115] "RemoveContainer" containerID="6bb5a5ead71a329476ae68f1d149309024efa4071bcb27a62913e5b8ab90675c"
Feb 23 17:14:04 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:04.928189509Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:10a5c4496e21c74681484951c4f7b1059355127b627962c4663f32391ed151c3" id=5779571c-a9d0-412b-b0af-b95286cd3905 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:14:04 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:04.928369202Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e5dbe812e7b4c7ebeea3eb6830cd76c6c524b96d63120f288300e5801b416462,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:10a5c4496e21c74681484951c4f7b1059355127b627962c4663f32391ed151c3],Size_:412546459,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=5779571c-a9d0-412b-b0af-b95286cd3905 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:14:04 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:04.929071782Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:10a5c4496e21c74681484951c4f7b1059355127b627962c4663f32391ed151c3" id=481a9a83-e419-4c36-8e7a-99ad4cbc8ad9 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:14:04 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:04.929206400Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e5dbe812e7b4c7ebeea3eb6830cd76c6c524b96d63120f288300e5801b416462,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:10a5c4496e21c74681484951c4f7b1059355127b627962c4663f32391ed151c3],Size_:412546459,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=481a9a83-e419-4c36-8e7a-99ad4cbc8ad9 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:14:04 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:04.929987575Z" level=info msg="Creating container: openshift-monitoring/alertmanager-main-1/alertmanager" id=94bd18ae-40d0-4b14-b151-6b274c9d0b7c name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:14:04 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:04.930099970Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:14:04 ip-10-0-136-68 systemd[1]: Started crio-conmon-2e4abab8d13d7e39e5418d4f58e3fdae38b48dfa958876304cb146184b957ab6.scope.
Feb 23 17:14:04 ip-10-0-136-68 systemd[1]: Started libcontainer container 2e4abab8d13d7e39e5418d4f58e3fdae38b48dfa958876304cb146184b957ab6.
Feb 23 17:14:05 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:05.048242976Z" level=info msg="Created container 2e4abab8d13d7e39e5418d4f58e3fdae38b48dfa958876304cb146184b957ab6: openshift-monitoring/alertmanager-main-1/alertmanager" id=94bd18ae-40d0-4b14-b151-6b274c9d0b7c name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:14:05 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:05.048646497Z" level=info msg="Starting container: 2e4abab8d13d7e39e5418d4f58e3fdae38b48dfa958876304cb146184b957ab6" id=a0fec661-4ddf-428a-bd4c-d86f65cc0bed name=/runtime.v1.RuntimeService/StartContainer
Feb 23 17:14:05 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:05.055359498Z" level=info msg="Started container" PID=59085 containerID=2e4abab8d13d7e39e5418d4f58e3fdae38b48dfa958876304cb146184b957ab6 description=openshift-monitoring/alertmanager-main-1/alertmanager id=a0fec661-4ddf-428a-bd4c-d86f65cc0bed name=/runtime.v1.RuntimeService/StartContainer sandboxID=201d0ba9a16d3f244f86f2085033ad6085fe53d6850b186d42c49024054b3842
Feb 23 17:14:05 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:05.616356    2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-monitoring/thanos-querier-8654d9f96d-8g56r]
Feb 23 17:14:05 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:05.616566    2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/thanos-querier-8654d9f96d-8g56r" podUID=a762f29d-1a7e-4d73-9c04-8d5fbbe65b32 containerName="thanos-query" containerID="cri-o://ae4a8aeab6b939a04a77d3aca670cc5fd4377ee87669a63493fc130f44723b08" gracePeriod=120
Feb 23 17:14:05 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:05.616627    2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/thanos-querier-8654d9f96d-8g56r" podUID=a762f29d-1a7e-4d73-9c04-8d5fbbe65b32 containerName="kube-rbac-proxy-rules" containerID="cri-o://8fd1fd4af28c0cd056921011f6dde57e49a0e90ab488f6715d25ff3f7023241c" gracePeriod=120
Feb 23 17:14:05 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:05.616711    2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/thanos-querier-8654d9f96d-8g56r" podUID=a762f29d-1a7e-4d73-9c04-8d5fbbe65b32 containerName="oauth-proxy" containerID="cri-o://e67243ab218ed23f074f07b4b58bdc6cef13e3d5a051f61ad963652bef2d0a5b" gracePeriod=120
Feb 23 17:14:05 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:05.616727    2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/thanos-querier-8654d9f96d-8g56r" podUID=a762f29d-1a7e-4d73-9c04-8d5fbbe65b32 containerName="kube-rbac-proxy" containerID="cri-o://7f0db4e5bbe607f5c8187e68c52009906ae8126f66c6ac604a83767da5e4a76d" gracePeriod=120
Feb 23 17:14:05 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:05.616748    2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/thanos-querier-8654d9f96d-8g56r" podUID=a762f29d-1a7e-4d73-9c04-8d5fbbe65b32 containerName="prom-label-proxy" containerID="cri-o://31b1e1b6e270b361795df897e4bae03083c71b568cb3a2ef0f00a99c2d07e79d" gracePeriod=120
Feb 23 17:14:05 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:05.616834    2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/thanos-querier-8654d9f96d-8g56r" podUID=a762f29d-1a7e-4d73-9c04-8d5fbbe65b32 containerName="kube-rbac-proxy-metrics" containerID="cri-o://771648ecb2bf501e026416582c42bb47961cbfece2bf483bcbdac980be42c9ba" gracePeriod=120
Feb 23 17:14:05 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:05.617079741Z" level=info msg="Stopping container: 8fd1fd4af28c0cd056921011f6dde57e49a0e90ab488f6715d25ff3f7023241c (timeout: 120s)" id=a410c638-b6d9-4dc8-9e0f-511ec886595f name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:14:05 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:05.617109079Z" level=info msg="Stopping container: ae4a8aeab6b939a04a77d3aca670cc5fd4377ee87669a63493fc130f44723b08 (timeout: 120s)" id=0ee36488-9972-4d22-9586-33b47fa4cf67 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:14:05 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:05.617138161Z" level=info msg="Stopping container: 7f0db4e5bbe607f5c8187e68c52009906ae8126f66c6ac604a83767da5e4a76d (timeout: 120s)" id=0c361208-bca5-4ec3-a4e3-e029f58fb5d1 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:14:05 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:05.617309204Z" level=info msg="Stopping container: 31b1e1b6e270b361795df897e4bae03083c71b568cb3a2ef0f00a99c2d07e79d (timeout: 120s)" id=50f30a5e-efbb-4ff5-b262-ac49661b2bad name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:14:05 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:05.617097070Z" level=info msg="Stopping container: e67243ab218ed23f074f07b4b58bdc6cef13e3d5a051f61ad963652bef2d0a5b (timeout: 120s)" id=bdf24bb7-e689-416d-b032-647cefdd0231 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:14:05 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:05.617641680Z" level=info msg="Stopping container: 771648ecb2bf501e026416582c42bb47961cbfece2bf483bcbdac980be42c9ba (timeout: 120s)" id=beede7b3-b6d4-45dc-b417-c7c27547a0cf name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:14:05 ip-10-0-136-68 systemd[1]: crio-7f0db4e5bbe607f5c8187e68c52009906ae8126f66c6ac604a83767da5e4a76d.scope: Succeeded.
Feb 23 17:14:05 ip-10-0-136-68 systemd[1]: crio-7f0db4e5bbe607f5c8187e68c52009906ae8126f66c6ac604a83767da5e4a76d.scope: Consumed 134ms CPU time
Feb 23 17:14:05 ip-10-0-136-68 systemd[1]: crio-conmon-7f0db4e5bbe607f5c8187e68c52009906ae8126f66c6ac604a83767da5e4a76d.scope: Succeeded.
Feb 23 17:14:05 ip-10-0-136-68 systemd[1]: crio-conmon-7f0db4e5bbe607f5c8187e68c52009906ae8126f66c6ac604a83767da5e4a76d.scope: Consumed 26ms CPU time
Feb 23 17:14:05 ip-10-0-136-68 systemd[1]: crio-ae4a8aeab6b939a04a77d3aca670cc5fd4377ee87669a63493fc130f44723b08.scope: Succeeded.
Feb 23 17:14:05 ip-10-0-136-68 systemd[1]: crio-ae4a8aeab6b939a04a77d3aca670cc5fd4377ee87669a63493fc130f44723b08.scope: Consumed 1.635s CPU time
Feb 23 17:14:05 ip-10-0-136-68 systemd[1]: crio-conmon-ae4a8aeab6b939a04a77d3aca670cc5fd4377ee87669a63493fc130f44723b08.scope: Succeeded.
Feb 23 17:14:05 ip-10-0-136-68 conmon[5783]: conmon e67243ab218ed23f074f : container 5797 exited with status 2
Feb 23 17:14:05 ip-10-0-136-68 systemd[1]: crio-conmon-ae4a8aeab6b939a04a77d3aca670cc5fd4377ee87669a63493fc130f44723b08.scope: Consumed 26ms CPU time
Feb 23 17:14:05 ip-10-0-136-68 systemd[1]: crio-31b1e1b6e270b361795df897e4bae03083c71b568cb3a2ef0f00a99c2d07e79d.scope: Succeeded.
Feb 23 17:14:05 ip-10-0-136-68 systemd[1]: crio-31b1e1b6e270b361795df897e4bae03083c71b568cb3a2ef0f00a99c2d07e79d.scope: Consumed 59ms CPU time
Feb 23 17:14:05 ip-10-0-136-68 systemd[1]: crio-771648ecb2bf501e026416582c42bb47961cbfece2bf483bcbdac980be42c9ba.scope: Succeeded.
Feb 23 17:14:05 ip-10-0-136-68 systemd[1]: crio-771648ecb2bf501e026416582c42bb47961cbfece2bf483bcbdac980be42c9ba.scope: Consumed 746ms CPU time
Feb 23 17:14:05 ip-10-0-136-68 systemd[1]: crio-8fd1fd4af28c0cd056921011f6dde57e49a0e90ab488f6715d25ff3f7023241c.scope: Succeeded.
Feb 23 17:14:05 ip-10-0-136-68 systemd[1]: crio-8fd1fd4af28c0cd056921011f6dde57e49a0e90ab488f6715d25ff3f7023241c.scope: Consumed 128ms CPU time
Feb 23 17:14:05 ip-10-0-136-68 systemd[1]: crio-e67243ab218ed23f074f07b4b58bdc6cef13e3d5a051f61ad963652bef2d0a5b.scope: Succeeded.
Feb 23 17:14:05 ip-10-0-136-68 systemd[1]: crio-e67243ab218ed23f074f07b4b58bdc6cef13e3d5a051f61ad963652bef2d0a5b.scope: Consumed 4.797s CPU time
Feb 23 17:14:05 ip-10-0-136-68 systemd[1]: crio-conmon-31b1e1b6e270b361795df897e4bae03083c71b568cb3a2ef0f00a99c2d07e79d.scope: Succeeded.
Feb 23 17:14:05 ip-10-0-136-68 systemd[1]: crio-conmon-31b1e1b6e270b361795df897e4bae03083c71b568cb3a2ef0f00a99c2d07e79d.scope: Consumed 23ms CPU time
Feb 23 17:14:05 ip-10-0-136-68 systemd[1]: crio-conmon-771648ecb2bf501e026416582c42bb47961cbfece2bf483bcbdac980be42c9ba.scope: Succeeded.
Feb 23 17:14:05 ip-10-0-136-68 systemd[1]: crio-conmon-771648ecb2bf501e026416582c42bb47961cbfece2bf483bcbdac980be42c9ba.scope: Consumed 24ms CPU time
Feb 23 17:14:05 ip-10-0-136-68 systemd[1]: crio-conmon-8fd1fd4af28c0cd056921011f6dde57e49a0e90ab488f6715d25ff3f7023241c.scope: Succeeded.
Feb 23 17:14:05 ip-10-0-136-68 systemd[1]: crio-conmon-8fd1fd4af28c0cd056921011f6dde57e49a0e90ab488f6715d25ff3f7023241c.scope: Consumed 24ms CPU time
Feb 23 17:14:05 ip-10-0-136-68 systemd[1]: crio-conmon-e67243ab218ed23f074f07b4b58bdc6cef13e3d5a051f61ad963652bef2d0a5b.scope: Succeeded.
Feb 23 17:14:05 ip-10-0-136-68 systemd[1]: crio-conmon-e67243ab218ed23f074f07b4b58bdc6cef13e3d5a051f61ad963652bef2d0a5b.scope: Consumed 23ms CPU time
Feb 23 17:14:05 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-c792a9be0cf72bc39c17482f8dcd428e12346fda53d787cde86b7ee17676702c-merged.mount: Succeeded.
Feb 23 17:14:05 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-c792a9be0cf72bc39c17482f8dcd428e12346fda53d787cde86b7ee17676702c-merged.mount: Consumed 0 CPU time
Feb 23 17:14:05 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-a4f528643efafc9e23577fa7a0b9b66579778e185c963490d9e14a01a1e610e3-merged.mount: Succeeded.
Feb 23 17:14:05 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-a4f528643efafc9e23577fa7a0b9b66579778e185c963490d9e14a01a1e610e3-merged.mount: Consumed 0 CPU time
Feb 23 17:14:05 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:05.826061989Z" level=info msg="Stopped container ae4a8aeab6b939a04a77d3aca670cc5fd4377ee87669a63493fc130f44723b08: openshift-monitoring/thanos-querier-8654d9f96d-8g56r/thanos-query" id=0ee36488-9972-4d22-9586-33b47fa4cf67 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:14:05 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:05.826751259Z" level=info msg="Stopped container 7f0db4e5bbe607f5c8187e68c52009906ae8126f66c6ac604a83767da5e4a76d: openshift-monitoring/thanos-querier-8654d9f96d-8g56r/kube-rbac-proxy" id=0c361208-bca5-4ec3-a4e3-e029f58fb5d1 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:14:05 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-a6c9848df7ce375bb541b7d6f1a4b5ba78788fe68d0016a5943281e6b9a4d213-merged.mount: Succeeded.
Feb 23 17:14:05 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-a6c9848df7ce375bb541b7d6f1a4b5ba78788fe68d0016a5943281e6b9a4d213-merged.mount: Consumed 0 CPU time
Feb 23 17:14:05 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:05.836679067Z" level=info msg="Stopped container 771648ecb2bf501e026416582c42bb47961cbfece2bf483bcbdac980be42c9ba: openshift-monitoring/thanos-querier-8654d9f96d-8g56r/kube-rbac-proxy-metrics" id=beede7b3-b6d4-45dc-b417-c7c27547a0cf name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:14:05 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-520351bec26ca571317f04ffc3706c2d45209dc5b27b06db8345c7221c3766e3-merged.mount: Succeeded.
Feb 23 17:14:05 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-520351bec26ca571317f04ffc3706c2d45209dc5b27b06db8345c7221c3766e3-merged.mount: Consumed 0 CPU time
Feb 23 17:14:05 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:05.846838536Z" level=info msg="Stopped container 8fd1fd4af28c0cd056921011f6dde57e49a0e90ab488f6715d25ff3f7023241c: openshift-monitoring/thanos-querier-8654d9f96d-8g56r/kube-rbac-proxy-rules" id=a410c638-b6d9-4dc8-9e0f-511ec886595f name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:14:05 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:05.857858909Z" level=info msg="Stopped container 31b1e1b6e270b361795df897e4bae03083c71b568cb3a2ef0f00a99c2d07e79d: openshift-monitoring/thanos-querier-8654d9f96d-8g56r/prom-label-proxy" id=50f30a5e-efbb-4ff5-b262-ac49661b2bad name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:14:05 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:05.872503187Z" level=info msg="Stopped container e67243ab218ed23f074f07b4b58bdc6cef13e3d5a051f61ad963652bef2d0a5b: openshift-monitoring/thanos-querier-8654d9f96d-8g56r/oauth-proxy" id=bdf24bb7-e689-416d-b032-647cefdd0231 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:14:05 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:05.872801122Z" level=info msg="Stopping pod sandbox: 7dbdc33b9d16d63ae4640580522adb732200c4a8f3f79a0430a98b7d876633f6" id=fbe82a7a-7e2a-4684-8b2b-4f44d861483c name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:14:05 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:05.873002891Z" level=info msg="Got pod network &{Name:thanos-querier-8654d9f96d-8g56r Namespace:openshift-monitoring ID:7dbdc33b9d16d63ae4640580522adb732200c4a8f3f79a0430a98b7d876633f6 UID:a762f29d-1a7e-4d73-9c04-8d5fbbe65b32 NetNS:/var/run/netns/f061bf1e-6ed2-4ea2-9db4-e04b14df9997 Networks:[{Name:multus-cni-network Ifname:eth0}] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 17:14:05 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:05.873102427Z" level=info msg="Deleting pod openshift-monitoring_thanos-querier-8654d9f96d-8g56r from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 17:14:05 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:05.934081    2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-8654d9f96d-8g56r_a762f29d-1a7e-4d73-9c04-8d5fbbe65b32/oauth-proxy/0.log"
Feb 23 17:14:05 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:05.934511    2112 generic.go:296] "Generic (PLEG): container finished" podID=a762f29d-1a7e-4d73-9c04-8d5fbbe65b32 containerID="771648ecb2bf501e026416582c42bb47961cbfece2bf483bcbdac980be42c9ba" exitCode=0
Feb 23 17:14:05 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:05.934533    2112 generic.go:296] "Generic (PLEG): container finished" podID=a762f29d-1a7e-4d73-9c04-8d5fbbe65b32 containerID="8fd1fd4af28c0cd056921011f6dde57e49a0e90ab488f6715d25ff3f7023241c" exitCode=0
Feb 23 17:14:05 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:05.934546    2112 generic.go:296] "Generic (PLEG): container finished" podID=a762f29d-1a7e-4d73-9c04-8d5fbbe65b32 containerID="31b1e1b6e270b361795df897e4bae03083c71b568cb3a2ef0f00a99c2d07e79d" exitCode=0
Feb 23 17:14:05 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:05.934558    2112 generic.go:296] "Generic (PLEG): container finished" podID=a762f29d-1a7e-4d73-9c04-8d5fbbe65b32 containerID="7f0db4e5bbe607f5c8187e68c52009906ae8126f66c6ac604a83767da5e4a76d" exitCode=0
Feb 23 17:14:05 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:05.934570    2112 generic.go:296] "Generic (PLEG): container finished" podID=a762f29d-1a7e-4d73-9c04-8d5fbbe65b32 containerID="e67243ab218ed23f074f07b4b58bdc6cef13e3d5a051f61ad963652bef2d0a5b" exitCode=2
Feb 23 17:14:05 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:05.934583    2112 generic.go:296] "Generic (PLEG): container finished" podID=a762f29d-1a7e-4d73-9c04-8d5fbbe65b32 containerID="ae4a8aeab6b939a04a77d3aca670cc5fd4377ee87669a63493fc130f44723b08" exitCode=0
Feb 23 17:14:05 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:05.934630    2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8654d9f96d-8g56r" event=&{ID:a762f29d-1a7e-4d73-9c04-8d5fbbe65b32 Type:ContainerDied Data:771648ecb2bf501e026416582c42bb47961cbfece2bf483bcbdac980be42c9ba}
Feb 23 17:14:05 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:05.934673    2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8654d9f96d-8g56r" event=&{ID:a762f29d-1a7e-4d73-9c04-8d5fbbe65b32 Type:ContainerDied Data:8fd1fd4af28c0cd056921011f6dde57e49a0e90ab488f6715d25ff3f7023241c}
Feb 23 17:14:05 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:05.934690    2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8654d9f96d-8g56r" event=&{ID:a762f29d-1a7e-4d73-9c04-8d5fbbe65b32 Type:ContainerDied Data:31b1e1b6e270b361795df897e4bae03083c71b568cb3a2ef0f00a99c2d07e79d}
Feb 23 17:14:05 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:05.934746    2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8654d9f96d-8g56r" event=&{ID:a762f29d-1a7e-4d73-9c04-8d5fbbe65b32 Type:ContainerDied Data:7f0db4e5bbe607f5c8187e68c52009906ae8126f66c6ac604a83767da5e4a76d}
Feb 23 17:14:05 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:05.934764    2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8654d9f96d-8g56r" event=&{ID:a762f29d-1a7e-4d73-9c04-8d5fbbe65b32 Type:ContainerDied Data:e67243ab218ed23f074f07b4b58bdc6cef13e3d5a051f61ad963652bef2d0a5b}
Feb 23 17:14:05 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:05.934778    2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8654d9f96d-8g56r" event=&{ID:a762f29d-1a7e-4d73-9c04-8d5fbbe65b32 Type:ContainerDied Data:ae4a8aeab6b939a04a77d3aca670cc5fd4377ee87669a63493fc130f44723b08}
Feb 23 17:14:05 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:05.937278    2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-1_39a23baf-fee4-4b3a-839f-6c0452a117b2/alertmanager/0.log"
Feb 23 17:14:05 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:05.937325    2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:39a23baf-fee4-4b3a-839f-6c0452a117b2 Type:ContainerStarted Data:2e4abab8d13d7e39e5418d4f58e3fdae38b48dfa958876304cb146184b957ab6}
Feb 23 17:14:06 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00325|bridge|INFO|bridge br-int: deleted interface 7dbdc33b9d16d63 on port 15
Feb 23 17:14:06 ip-10-0-136-68 kernel: device 7dbdc33b9d16d63 left promiscuous mode
Feb 23 17:14:06 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-aab4c782c9edd98964752a76ac05a3b40abd1bd8ab0096f207415b132fd1a38b-merged.mount: Succeeded.
Feb 23 17:14:06 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-aab4c782c9edd98964752a76ac05a3b40abd1bd8ab0096f207415b132fd1a38b-merged.mount: Consumed 0 CPU time
Feb 23 17:14:06 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-60110cbc0dfc62414a9d8d665d27ba66b342d020f395612fa4da82001fc474e6-merged.mount: Succeeded.
Feb 23 17:14:06 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-60110cbc0dfc62414a9d8d665d27ba66b342d020f395612fa4da82001fc474e6-merged.mount: Consumed 0 CPU time
Feb 23 17:14:06 ip-10-0-136-68 crio[2062]: 2023-02-23T17:14:05Z [verbose] Del: openshift-monitoring:thanos-querier-8654d9f96d-8g56r:a762f29d-1a7e-4d73-9c04-8d5fbbe65b32:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","dns":{},"ipam":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxage":5,"logfile-maxbackups":5,"logfile-maxsize":100,"name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay"}
Feb 23 17:14:06 ip-10-0-136-68 crio[2062]: I0223 17:14:06.007860   59374 ovs.go:90] Maximum command line arguments set to: 191102
Feb 23 17:14:06 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-9482663d7f6f134c67beb735a2fe72e31c21490620d28af32db185dd1965482b-merged.mount: Succeeded.
Feb 23 17:14:06 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-9482663d7f6f134c67beb735a2fe72e31c21490620d28af32db185dd1965482b-merged.mount: Consumed 0 CPU time
Feb 23 17:14:06 ip-10-0-136-68 systemd[1]: run-utsns-f061bf1e\x2d6ed2\x2d4ea2\x2d9db4\x2de04b14df9997.mount: Succeeded.
Feb 23 17:14:06 ip-10-0-136-68 systemd[1]: run-utsns-f061bf1e\x2d6ed2\x2d4ea2\x2d9db4\x2de04b14df9997.mount: Consumed 0 CPU time
Feb 23 17:14:06 ip-10-0-136-68 systemd[1]: run-ipcns-f061bf1e\x2d6ed2\x2d4ea2\x2d9db4\x2de04b14df9997.mount: Succeeded.
Feb 23 17:14:06 ip-10-0-136-68 systemd[1]: run-ipcns-f061bf1e\x2d6ed2\x2d4ea2\x2d9db4\x2de04b14df9997.mount: Consumed 0 CPU time
Feb 23 17:14:06 ip-10-0-136-68 systemd[1]: run-netns-f061bf1e\x2d6ed2\x2d4ea2\x2d9db4\x2de04b14df9997.mount: Succeeded.
Feb 23 17:14:06 ip-10-0-136-68 systemd[1]: run-netns-f061bf1e\x2d6ed2\x2d4ea2\x2d9db4\x2de04b14df9997.mount: Consumed 0 CPU time
Feb 23 17:14:06 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:06.593822600Z" level=info msg="Stopped pod sandbox: 7dbdc33b9d16d63ae4640580522adb732200c4a8f3f79a0430a98b7d876633f6" id=fbe82a7a-7e2a-4684-8b2b-4f44d861483c name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:14:06 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:06.600097    2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-8654d9f96d-8g56r_a762f29d-1a7e-4d73-9c04-8d5fbbe65b32/oauth-proxy/0.log"
Feb 23 17:14:06 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:06.651176    2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"a762f29d-1a7e-4d73-9c04-8d5fbbe65b32\" (UID: \"a762f29d-1a7e-4d73-9c04-8d5fbbe65b32\") "
Feb 23 17:14:06 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:06.651213    2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32-secret-thanos-querier-tls\") pod \"a762f29d-1a7e-4d73-9c04-8d5fbbe65b32\" (UID: \"a762f29d-1a7e-4d73-9c04-8d5fbbe65b32\") "
Feb 23 17:14:06 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:06.651239    2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lhh6\" (UniqueName: \"kubernetes.io/projected/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32-kube-api-access-6lhh6\") pod \"a762f29d-1a7e-4d73-9c04-8d5fbbe65b32\" (UID: \"a762f29d-1a7e-4d73-9c04-8d5fbbe65b32\") "
Feb 23 17:14:06 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:06.651262    2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-thanos-querier-oauth-cookie\" (UniqueName: \"kubernetes.io/secret/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32-secret-thanos-querier-oauth-cookie\") pod \"a762f29d-1a7e-4d73-9c04-8d5fbbe65b32\" (UID: \"a762f29d-1a7e-4d73-9c04-8d5fbbe65b32\") "
Feb 23 17:14:06 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:06.651283    2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32-secret-grpc-tls\") pod \"a762f29d-1a7e-4d73-9c04-8d5fbbe65b32\" (UID: \"a762f29d-1a7e-4d73-9c04-8d5fbbe65b32\") "
Feb 23 17:14:06 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:06.651304    2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32-metrics-client-ca\") pod \"a762f29d-1a7e-4d73-9c04-8d5fbbe65b32\" (UID: \"a762f29d-1a7e-4d73-9c04-8d5fbbe65b32\") "
Feb 23 17:14:06 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:06.651333    2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"thanos-querier-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32-thanos-querier-trusted-ca-bundle\") pod \"a762f29d-1a7e-4d73-9c04-8d5fbbe65b32\" (UID: \"a762f29d-1a7e-4d73-9c04-8d5fbbe65b32\") "
Feb 23 17:14:06 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:06.651357    2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"a762f29d-1a7e-4d73-9c04-8d5fbbe65b32\" (UID: \"a762f29d-1a7e-4d73-9c04-8d5fbbe65b32\") "
Feb 23 17:14:06 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:06.651374    2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32-secret-thanos-querier-kube-rbac-proxy\") pod \"a762f29d-1a7e-4d73-9c04-8d5fbbe65b32\" (UID: \"a762f29d-1a7e-4d73-9c04-8d5fbbe65b32\") "
Feb 23 17:14:06 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:14:06.652347    2112 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32/volumes/kubernetes.io~configmap/metrics-client-ca: clearQuota called, but quotas disabled
Feb 23 17:14:06 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:14:06.652381    2112 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32/volumes/kubernetes.io~configmap/thanos-querier-trusted-ca-bundle: clearQuota called, but quotas disabled
Feb 23 17:14:06 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:06.652817    2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32-metrics-client-ca" (OuterVolumeSpecName: "metrics-client-ca") pod "a762f29d-1a7e-4d73-9c04-8d5fbbe65b32" (UID: "a762f29d-1a7e-4d73-9c04-8d5fbbe65b32"). InnerVolumeSpecName "metrics-client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 17:14:06 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:06.653109    2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32-thanos-querier-trusted-ca-bundle" (OuterVolumeSpecName: "thanos-querier-trusted-ca-bundle") pod "a762f29d-1a7e-4d73-9c04-8d5fbbe65b32" (UID: "a762f29d-1a7e-4d73-9c04-8d5fbbe65b32"). InnerVolumeSpecName "thanos-querier-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 17:14:06 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:06.659135    2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32-secret-thanos-querier-tls" (OuterVolumeSpecName: "secret-thanos-querier-tls") pod "a762f29d-1a7e-4d73-9c04-8d5fbbe65b32" (UID: "a762f29d-1a7e-4d73-9c04-8d5fbbe65b32"). InnerVolumeSpecName "secret-thanos-querier-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:14:06 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:06.660012    2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32-secret-thanos-querier-kube-rbac-proxy" (OuterVolumeSpecName: "secret-thanos-querier-kube-rbac-proxy") pod "a762f29d-1a7e-4d73-9c04-8d5fbbe65b32" (UID: "a762f29d-1a7e-4d73-9c04-8d5fbbe65b32"). InnerVolumeSpecName "secret-thanos-querier-kube-rbac-proxy". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:14:06 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:06.662079    2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32-secret-thanos-querier-oauth-cookie" (OuterVolumeSpecName: "secret-thanos-querier-oauth-cookie") pod "a762f29d-1a7e-4d73-9c04-8d5fbbe65b32" (UID: "a762f29d-1a7e-4d73-9c04-8d5fbbe65b32"). InnerVolumeSpecName "secret-thanos-querier-oauth-cookie". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:14:06 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:06.662496    2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32-secret-thanos-querier-kube-rbac-proxy-rules" (OuterVolumeSpecName: "secret-thanos-querier-kube-rbac-proxy-rules") pod "a762f29d-1a7e-4d73-9c04-8d5fbbe65b32" (UID: "a762f29d-1a7e-4d73-9c04-8d5fbbe65b32"). InnerVolumeSpecName "secret-thanos-querier-kube-rbac-proxy-rules". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:14:06 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:06.662526    2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32-secret-grpc-tls" (OuterVolumeSpecName: "secret-grpc-tls") pod "a762f29d-1a7e-4d73-9c04-8d5fbbe65b32" (UID: "a762f29d-1a7e-4d73-9c04-8d5fbbe65b32"). InnerVolumeSpecName "secret-grpc-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:14:06 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:06.662619    2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32-kube-api-access-6lhh6" (OuterVolumeSpecName: "kube-api-access-6lhh6") pod "a762f29d-1a7e-4d73-9c04-8d5fbbe65b32" (UID: "a762f29d-1a7e-4d73-9c04-8d5fbbe65b32"). InnerVolumeSpecName "kube-api-access-6lhh6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 17:14:06 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:06.663081    2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32-secret-thanos-querier-kube-rbac-proxy-metrics" (OuterVolumeSpecName: "secret-thanos-querier-kube-rbac-proxy-metrics") pod "a762f29d-1a7e-4d73-9c04-8d5fbbe65b32" (UID: "a762f29d-1a7e-4d73-9c04-8d5fbbe65b32"). InnerVolumeSpecName "secret-thanos-querier-kube-rbac-proxy-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:14:06 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:06.752278    2112 reconciler.go:399] "Volume detached for volume \"thanos-querier-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32-thanos-querier-trusted-ca-bundle\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:14:06 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:06.752306    2112 reconciler.go:399] "Volume detached for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32-secret-thanos-querier-kube-rbac-proxy-rules\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:14:06 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:06.752326    2112 reconciler.go:399] "Volume detached for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32-secret-thanos-querier-kube-rbac-proxy\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:14:06 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:06.752342    2112 reconciler.go:399] "Volume detached for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32-secret-thanos-querier-kube-rbac-proxy-metrics\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:14:06 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:06.752359    2112 reconciler.go:399] "Volume detached for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32-secret-thanos-querier-tls\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:14:06 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:06.752375    2112 reconciler.go:399] "Volume detached for volume \"kube-api-access-6lhh6\" (UniqueName: \"kubernetes.io/projected/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32-kube-api-access-6lhh6\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:14:06 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:06.752391    2112 reconciler.go:399] "Volume detached for volume \"secret-thanos-querier-oauth-cookie\" (UniqueName: \"kubernetes.io/secret/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32-secret-thanos-querier-oauth-cookie\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:14:06 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:06.752407    2112 reconciler.go:399] "Volume detached for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32-secret-grpc-tls\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:14:06 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:06.752424    2112 reconciler.go:399] "Volume detached for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32-metrics-client-ca\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:14:06 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:06.941500    2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-8654d9f96d-8g56r_a762f29d-1a7e-4d73-9c04-8d5fbbe65b32/oauth-proxy/0.log"
Feb 23 17:14:06 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:06.943740    2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8654d9f96d-8g56r" event=&{ID:a762f29d-1a7e-4d73-9c04-8d5fbbe65b32 Type:ContainerDied Data:7dbdc33b9d16d63ae4640580522adb732200c4a8f3f79a0430a98b7d876633f6}
Feb 23 17:14:06 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:06.943782    2112 scope.go:115] "RemoveContainer" containerID="771648ecb2bf501e026416582c42bb47961cbfece2bf483bcbdac980be42c9ba"
Feb 23 17:14:06 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:06.944893002Z" level=info msg="Removing container: 771648ecb2bf501e026416582c42bb47961cbfece2bf483bcbdac980be42c9ba" id=58895de8-c98e-47b4-942f-7379f453cdc1 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:14:06 ip-10-0-136-68 systemd[1]: Removed slice libcontainer container kubepods-burstable-poda762f29d_1a7e_4d73_9c04_8d5fbbe65b32.slice.
Feb 23 17:14:06 ip-10-0-136-68 systemd[1]: kubepods-burstable-poda762f29d_1a7e_4d73_9c04_8d5fbbe65b32.slice: Consumed 7.651s CPU time
Feb 23 17:14:06 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:06.977907    2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-monitoring/thanos-querier-8654d9f96d-8g56r]
Feb 23 17:14:06 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:06.981850145Z" level=info msg="Removed container 771648ecb2bf501e026416582c42bb47961cbfece2bf483bcbdac980be42c9ba: openshift-monitoring/thanos-querier-8654d9f96d-8g56r/kube-rbac-proxy-metrics" id=58895de8-c98e-47b4-942f-7379f453cdc1 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:14:06 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:06.982044    2112 scope.go:115] "RemoveContainer" containerID="8fd1fd4af28c0cd056921011f6dde57e49a0e90ab488f6715d25ff3f7023241c"
Feb 23 17:14:06 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:06.982779235Z" level=info msg="Removing container: 8fd1fd4af28c0cd056921011f6dde57e49a0e90ab488f6715d25ff3f7023241c" id=5313554c-4a88-43cb-963b-edec464456fa name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:14:06 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:06.989760    2112 kubelet.go:2129] "SyncLoop REMOVE" source="api" pods=[openshift-monitoring/thanos-querier-8654d9f96d-8g56r]
Feb 23 17:14:07 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:07.003541408Z" level=info msg="Removed container 8fd1fd4af28c0cd056921011f6dde57e49a0e90ab488f6715d25ff3f7023241c: openshift-monitoring/thanos-querier-8654d9f96d-8g56r/kube-rbac-proxy-rules" id=5313554c-4a88-43cb-963b-edec464456fa name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:14:07 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:07.003723    2112 scope.go:115] "RemoveContainer" containerID="31b1e1b6e270b361795df897e4bae03083c71b568cb3a2ef0f00a99c2d07e79d"
Feb 23 17:14:07 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:07.004370003Z" level=info msg="Removing container: 31b1e1b6e270b361795df897e4bae03083c71b568cb3a2ef0f00a99c2d07e79d" id=ab5f8b11-1a69-47e2-aa63-8a6d09e90d30 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:14:07 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:07.026504149Z" level=info msg="Removed container 31b1e1b6e270b361795df897e4bae03083c71b568cb3a2ef0f00a99c2d07e79d: openshift-monitoring/thanos-querier-8654d9f96d-8g56r/prom-label-proxy" id=ab5f8b11-1a69-47e2-aa63-8a6d09e90d30 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:14:07 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:07.026713    2112 scope.go:115] "RemoveContainer" containerID="7f0db4e5bbe607f5c8187e68c52009906ae8126f66c6ac604a83767da5e4a76d"
Feb 23 17:14:07 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:07.027401983Z" level=info msg="Removing container: 7f0db4e5bbe607f5c8187e68c52009906ae8126f66c6ac604a83767da5e4a76d" id=60da12f0-91a9-4c1e-9572-b66fe27fcbb0 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:14:07 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:07.044427676Z" level=info msg="Removed container 7f0db4e5bbe607f5c8187e68c52009906ae8126f66c6ac604a83767da5e4a76d: openshift-monitoring/thanos-querier-8654d9f96d-8g56r/kube-rbac-proxy" id=60da12f0-91a9-4c1e-9572-b66fe27fcbb0 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:14:07 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:07.044569    2112 scope.go:115] "RemoveContainer" containerID="e67243ab218ed23f074f07b4b58bdc6cef13e3d5a051f61ad963652bef2d0a5b"
Feb 23 17:14:07 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:07.045144089Z" level=info msg="Removing container: e67243ab218ed23f074f07b4b58bdc6cef13e3d5a051f61ad963652bef2d0a5b" id=8ff9675d-7a86-4e46-bbae-a7d5095cc6b8 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:14:07 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:07.060107565Z" level=info msg="Removed container e67243ab218ed23f074f07b4b58bdc6cef13e3d5a051f61ad963652bef2d0a5b: openshift-monitoring/thanos-querier-8654d9f96d-8g56r/oauth-proxy" id=8ff9675d-7a86-4e46-bbae-a7d5095cc6b8 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:14:07 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:07.060255    2112 scope.go:115] "RemoveContainer" containerID="ae4a8aeab6b939a04a77d3aca670cc5fd4377ee87669a63493fc130f44723b08"
Feb 23 17:14:07 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:07.060869101Z" level=info msg="Removing container: ae4a8aeab6b939a04a77d3aca670cc5fd4377ee87669a63493fc130f44723b08" id=ab129584-e56a-401c-b1c2-3f454a68c69a name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:14:07 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:07.077332598Z" level=info msg="Removed container ae4a8aeab6b939a04a77d3aca670cc5fd4377ee87669a63493fc130f44723b08: openshift-monitoring/thanos-querier-8654d9f96d-8g56r/thanos-query" id=ab129584-e56a-401c-b1c2-3f454a68c69a name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:14:07 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-7dbdc33b9d16d63ae4640580522adb732200c4a8f3f79a0430a98b7d876633f6-userdata-shm.mount: Succeeded.
Feb 23 17:14:07 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-7dbdc33b9d16d63ae4640580522adb732200c4a8f3f79a0430a98b7d876633f6-userdata-shm.mount: Consumed 0 CPU time
Feb 23 17:14:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-a762f29d\x2d1a7e\x2d4d73\x2d9c04\x2d8d5fbbe65b32-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6lhh6.mount: Succeeded.
Feb 23 17:14:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-a762f29d\x2d1a7e\x2d4d73\x2d9c04\x2d8d5fbbe65b32-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6lhh6.mount: Consumed 0 CPU time
Feb 23 17:14:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-a762f29d\x2d1a7e\x2d4d73\x2d9c04\x2d8d5fbbe65b32-volumes-kubernetes.io\x7esecret-secret\x2dthanos\x2dquerier\x2dkube\x2drbac\x2dproxy\x2drules.mount: Succeeded.
Feb 23 17:14:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-a762f29d\x2d1a7e\x2d4d73\x2d9c04\x2d8d5fbbe65b32-volumes-kubernetes.io\x7esecret-secret\x2dthanos\x2dquerier\x2dkube\x2drbac\x2dproxy\x2drules.mount: Consumed 0 CPU time
Feb 23 17:14:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-a762f29d\x2d1a7e\x2d4d73\x2d9c04\x2d8d5fbbe65b32-volumes-kubernetes.io\x7esecret-secret\x2dthanos\x2dquerier\x2dtls.mount: Succeeded.
Feb 23 17:14:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-a762f29d\x2d1a7e\x2d4d73\x2d9c04\x2d8d5fbbe65b32-volumes-kubernetes.io\x7esecret-secret\x2dthanos\x2dquerier\x2dtls.mount: Consumed 0 CPU time
Feb 23 17:14:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-a762f29d\x2d1a7e\x2d4d73\x2d9c04\x2d8d5fbbe65b32-volumes-kubernetes.io\x7esecret-secret\x2dthanos\x2dquerier\x2dkube\x2drbac\x2dproxy\x2dmetrics.mount: Succeeded.
Feb 23 17:14:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-a762f29d\x2d1a7e\x2d4d73\x2d9c04\x2d8d5fbbe65b32-volumes-kubernetes.io\x7esecret-secret\x2dthanos\x2dquerier\x2dkube\x2drbac\x2dproxy\x2dmetrics.mount: Consumed 0 CPU time
Feb 23 17:14:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-a762f29d\x2d1a7e\x2d4d73\x2d9c04\x2d8d5fbbe65b32-volumes-kubernetes.io\x7esecret-secret\x2dgrpc\x2dtls.mount: Succeeded.
Feb 23 17:14:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-a762f29d\x2d1a7e\x2d4d73\x2d9c04\x2d8d5fbbe65b32-volumes-kubernetes.io\x7esecret-secret\x2dgrpc\x2dtls.mount: Consumed 0 CPU time
Feb 23 17:14:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-a762f29d\x2d1a7e\x2d4d73\x2d9c04\x2d8d5fbbe65b32-volumes-kubernetes.io\x7esecret-secret\x2dthanos\x2dquerier\x2dkube\x2drbac\x2dproxy.mount: Succeeded.
Feb 23 17:14:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-a762f29d\x2d1a7e\x2d4d73\x2d9c04\x2d8d5fbbe65b32-volumes-kubernetes.io\x7esecret-secret\x2dthanos\x2dquerier\x2dkube\x2drbac\x2dproxy.mount: Consumed 0 CPU time
Feb 23 17:14:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-a762f29d\x2d1a7e\x2d4d73\x2d9c04\x2d8d5fbbe65b32-volumes-kubernetes.io\x7esecret-secret\x2dthanos\x2dquerier\x2doauth\x2dcookie.mount: Succeeded.
Feb 23 17:14:07 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-a762f29d\x2d1a7e\x2d4d73\x2d9c04\x2d8d5fbbe65b32-volumes-kubernetes.io\x7esecret-secret\x2dthanos\x2dquerier\x2doauth\x2dcookie.mount: Consumed 0 CPU time
Feb 23 17:14:07 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:07.275549 2112 patch_prober.go:29] interesting pod/router-default-77f788594f-j5twb container/router namespace/openshift-ingress: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]backend-proxy-http ok
Feb 23 17:14:07 ip-10-0-136-68 kubenswrapper[2112]: [+]has-synced ok
Feb 23 17:14:07 ip-10-0-136-68 kubenswrapper[2112]: [-]process-running failed: reason withheld
Feb 23 17:14:07 ip-10-0-136-68 kubenswrapper[2112]: healthz check failed
Feb 23 17:14:07 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:07.275609 2112 prober.go:114] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-77f788594f-j5twb" podUID=e7ec9547-ee4c-4966-997f-719d78dcc31b containerName="router" probeResult=failure output="HTTP probe failed with statuscode: 500"
Feb 23 17:14:08 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:08.119965 2112 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=a762f29d-1a7e-4d73-9c04-8d5fbbe65b32 path="/var/lib/kubelet/pods/a762f29d-1a7e-4d73-9c04-8d5fbbe65b32/volumes"
Feb 23 17:14:09 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:09.747083798Z" level=warning msg="Found defunct process with PID 57044 (haproxy)"
Feb 23 17:14:09 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:09.747512650Z" level=warning msg="Found defunct process with PID 58169 (haproxy)"
Feb 23 17:14:11 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:11.201268 2112 kubelet.go:2229] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/alertmanager-main-1"
Feb 23 17:14:12 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00326|connmgr|INFO|br-ex<->unix#1165: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:14:12 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:12.698421 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-monitoring/prometheus-adapter-849c9bc779-55gw7]
Feb 23 17:14:12 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:12.699188 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/prometheus-adapter-849c9bc779-55gw7" podUID=bac56f54-5b00-421f-b735-a8a998208173 containerName="prometheus-adapter" containerID="cri-o://6450b56b1d86f9c7d7d8379edcd27bea349ea883d1dec47ef862b973eee94da2" gracePeriod=30
Feb 23 17:14:12 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:12.699443678Z" level=info msg="Stopping container: 6450b56b1d86f9c7d7d8379edcd27bea349ea883d1dec47ef862b973eee94da2 (timeout: 30s)" id=9be5a5c4-9a82-4e8c-8365-2b641df604fb name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:14:13 ip-10-0-136-68 systemd[1]: crio-6450b56b1d86f9c7d7d8379edcd27bea349ea883d1dec47ef862b973eee94da2.scope: Succeeded.
Feb 23 17:14:13 ip-10-0-136-68 systemd[1]: crio-6450b56b1d86f9c7d7d8379edcd27bea349ea883d1dec47ef862b973eee94da2.scope: Consumed 8.772s CPU time
Feb 23 17:14:13 ip-10-0-136-68 systemd[1]: crio-conmon-6450b56b1d86f9c7d7d8379edcd27bea349ea883d1dec47ef862b973eee94da2.scope: Succeeded.
Feb 23 17:14:13 ip-10-0-136-68 systemd[1]: crio-conmon-6450b56b1d86f9c7d7d8379edcd27bea349ea883d1dec47ef862b973eee94da2.scope: Consumed 23ms CPU time
Feb 23 17:14:13 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-07c5b62446e4f5da2057a29295c6dfb8b849d473b04e4e6f61f7e0c34585f8d9-merged.mount: Succeeded.
Feb 23 17:14:13 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-07c5b62446e4f5da2057a29295c6dfb8b849d473b04e4e6f61f7e0c34585f8d9-merged.mount: Consumed 0 CPU time
Feb 23 17:14:13 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:13.898788169Z" level=info msg="Stopped container 6450b56b1d86f9c7d7d8379edcd27bea349ea883d1dec47ef862b973eee94da2: openshift-monitoring/prometheus-adapter-849c9bc779-55gw7/prometheus-adapter" id=9be5a5c4-9a82-4e8c-8365-2b641df604fb name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:14:13 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:13.899189365Z" level=info msg="Stopping pod sandbox: 94ff293ae607edbdb70c5e4b25cc7cf3f7e6fe35e3aa183bda9cd3e8a5654a24" id=cc43d2f3-8f72-40bb-8885-b401374b81f8 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:14:13 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:13.899400176Z" level=info msg="Got pod network &{Name:prometheus-adapter-849c9bc779-55gw7 Namespace:openshift-monitoring ID:94ff293ae607edbdb70c5e4b25cc7cf3f7e6fe35e3aa183bda9cd3e8a5654a24 UID:bac56f54-5b00-421f-b735-a8a998208173 NetNS:/var/run/netns/fa5f7fe0-b997-46ef-b3b4-8c0a981fd91c Networks:[{Name:multus-cni-network Ifname:eth0}] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 17:14:13 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:13.899512840Z" level=info msg="Deleting pod openshift-monitoring_prometheus-adapter-849c9bc779-55gw7 from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 17:14:13 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:13.958896 2112 generic.go:296] "Generic (PLEG): container finished" podID=bac56f54-5b00-421f-b735-a8a998208173 containerID="6450b56b1d86f9c7d7d8379edcd27bea349ea883d1dec47ef862b973eee94da2" exitCode=0
Feb 23 17:14:13 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:13.958933 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-adapter-849c9bc779-55gw7" event=&{ID:bac56f54-5b00-421f-b735-a8a998208173 Type:ContainerDied Data:6450b56b1d86f9c7d7d8379edcd27bea349ea883d1dec47ef862b973eee94da2}
Feb 23 17:14:14 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00327|bridge|INFO|bridge br-int: deleted interface 94ff293ae607edb on port 14
Feb 23 17:14:14 ip-10-0-136-68 kernel: device 94ff293ae607edb left promiscuous mode
Feb 23 17:14:14 ip-10-0-136-68 crio[2062]: 2023-02-23T17:14:13Z [verbose] Del: openshift-monitoring:prometheus-adapter-849c9bc779-55gw7:bac56f54-5b00-421f-b735-a8a998208173:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","dns":{},"ipam":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxage":5,"logfile-maxbackups":5,"logfile-maxsize":100,"name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay"}
Feb 23 17:14:14 ip-10-0-136-68 crio[2062]: I0223 17:14:14.033897 59576 ovs.go:90] Maximum command line arguments set to: 191102
Feb 23 17:14:14 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-5c649f5fafcfc3e1a3701898ba3b65849df422fd90ea07511aa205a37b0192b1-merged.mount: Succeeded.
Feb 23 17:14:14 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-5c649f5fafcfc3e1a3701898ba3b65849df422fd90ea07511aa205a37b0192b1-merged.mount: Consumed 0 CPU time
Feb 23 17:14:14 ip-10-0-136-68 systemd[1]: run-utsns-fa5f7fe0\x2db997\x2d46ef\x2db3b4\x2d8c0a981fd91c.mount: Succeeded.
Feb 23 17:14:14 ip-10-0-136-68 systemd[1]: run-utsns-fa5f7fe0\x2db997\x2d46ef\x2db3b4\x2d8c0a981fd91c.mount: Consumed 0 CPU time
Feb 23 17:14:14 ip-10-0-136-68 systemd[1]: run-ipcns-fa5f7fe0\x2db997\x2d46ef\x2db3b4\x2d8c0a981fd91c.mount: Succeeded.
Feb 23 17:14:14 ip-10-0-136-68 systemd[1]: run-ipcns-fa5f7fe0\x2db997\x2d46ef\x2db3b4\x2d8c0a981fd91c.mount: Consumed 0 CPU time
Feb 23 17:14:14 ip-10-0-136-68 systemd[1]: run-netns-fa5f7fe0\x2db997\x2d46ef\x2db3b4\x2d8c0a981fd91c.mount: Succeeded.
Feb 23 17:14:14 ip-10-0-136-68 systemd[1]: run-netns-fa5f7fe0\x2db997\x2d46ef\x2db3b4\x2d8c0a981fd91c.mount: Consumed 0 CPU time
Feb 23 17:14:14 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:14.589810556Z" level=info msg="Stopped pod sandbox: 94ff293ae607edbdb70c5e4b25cc7cf3f7e6fe35e3aa183bda9cd3e8a5654a24" id=cc43d2f3-8f72-40bb-8885-b401374b81f8 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:14:14 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:14.705800 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bac56f54-5b00-421f-b735-a8a998208173-serving-certs-ca-bundle\") pod \"bac56f54-5b00-421f-b735-a8a998208173\" (UID: \"bac56f54-5b00-421f-b735-a8a998208173\") "
Feb 23 17:14:14 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:14.706018 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"tls\" (UniqueName: \"kubernetes.io/secret/bac56f54-5b00-421f-b735-a8a998208173-tls\") pod \"bac56f54-5b00-421f-b735-a8a998208173\" (UID: \"bac56f54-5b00-421f-b735-a8a998208173\") "
Feb 23 17:14:14 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:14:14.706087 2112 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/bac56f54-5b00-421f-b735-a8a998208173/volumes/kubernetes.io~configmap/serving-certs-ca-bundle: clearQuota called, but quotas disabled
Feb 23 17:14:14 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:14.706316 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bac56f54-5b00-421f-b735-a8a998208173-serving-certs-ca-bundle" (OuterVolumeSpecName: "serving-certs-ca-bundle") pod "bac56f54-5b00-421f-b735-a8a998208173" (UID: "bac56f54-5b00-421f-b735-a8a998208173"). InnerVolumeSpecName "serving-certs-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 17:14:14 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:14:14.706501 2112 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/bac56f54-5b00-421f-b735-a8a998208173/volumes/kubernetes.io~configmap/prometheus-adapter-audit-profiles: clearQuota called, but quotas disabled
Feb 23 17:14:14 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:14.706813 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bac56f54-5b00-421f-b735-a8a998208173-prometheus-adapter-audit-profiles" (OuterVolumeSpecName: "prometheus-adapter-audit-profiles") pod "bac56f54-5b00-421f-b735-a8a998208173" (UID: "bac56f54-5b00-421f-b735-a8a998208173"). InnerVolumeSpecName "prometheus-adapter-audit-profiles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 17:14:14 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:14.706977 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"prometheus-adapter-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/bac56f54-5b00-421f-b735-a8a998208173-prometheus-adapter-audit-profiles\") pod \"bac56f54-5b00-421f-b735-a8a998208173\" (UID: \"bac56f54-5b00-421f-b735-a8a998208173\") "
Feb 23 17:14:14 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:14.707022 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"prometheus-adapter-prometheus-config\" (UniqueName: \"kubernetes.io/configmap/bac56f54-5b00-421f-b735-a8a998208173-prometheus-adapter-prometheus-config\") pod \"bac56f54-5b00-421f-b735-a8a998208173\" (UID: \"bac56f54-5b00-421f-b735-a8a998208173\") "
Feb 23 17:14:14 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:14.707057 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/bac56f54-5b00-421f-b735-a8a998208173-tmpfs\") pod \"bac56f54-5b00-421f-b735-a8a998208173\" (UID: \"bac56f54-5b00-421f-b735-a8a998208173\") "
Feb 23 17:14:14 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:14.707083 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bac56f54-5b00-421f-b735-a8a998208173-config\") pod \"bac56f54-5b00-421f-b735-a8a998208173\" (UID: \"bac56f54-5b00-421f-b735-a8a998208173\") "
Feb 23 17:14:14 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:14.707131 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fskbt\" (UniqueName: \"kubernetes.io/projected/bac56f54-5b00-421f-b735-a8a998208173-kube-api-access-fskbt\") pod \"bac56f54-5b00-421f-b735-a8a998208173\" (UID: \"bac56f54-5b00-421f-b735-a8a998208173\") "
Feb 23 17:14:14 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:14.707161 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/bac56f54-5b00-421f-b735-a8a998208173-audit-log\") pod \"bac56f54-5b00-421f-b735-a8a998208173\" (UID: \"bac56f54-5b00-421f-b735-a8a998208173\") "
Feb 23 17:14:14 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:14.707294 2112 reconciler.go:399] "Volume detached for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bac56f54-5b00-421f-b735-a8a998208173-serving-certs-ca-bundle\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:14:14 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:14.707312 2112 reconciler.go:399] "Volume detached for volume \"prometheus-adapter-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/bac56f54-5b00-421f-b735-a8a998208173-prometheus-adapter-audit-profiles\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:14:14 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:14:14.707443 2112 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/bac56f54-5b00-421f-b735-a8a998208173/volumes/kubernetes.io~empty-dir/audit-log: clearQuota called, but quotas disabled
Feb 23 17:14:14 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:14:14.707749 2112 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/bac56f54-5b00-421f-b735-a8a998208173/volumes/kubernetes.io~configmap/prometheus-adapter-prometheus-config: clearQuota called, but quotas disabled
Feb 23 17:14:14 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:14.708013 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bac56f54-5b00-421f-b735-a8a998208173-prometheus-adapter-prometheus-config" (OuterVolumeSpecName: "prometheus-adapter-prometheus-config") pod "bac56f54-5b00-421f-b735-a8a998208173" (UID: "bac56f54-5b00-421f-b735-a8a998208173"). InnerVolumeSpecName "prometheus-adapter-prometheus-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 17:14:14 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:14:14.708137 2112 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/bac56f54-5b00-421f-b735-a8a998208173/volumes/kubernetes.io~configmap/config: clearQuota called, but quotas disabled
Feb 23 17:14:14 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:14.708327 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bac56f54-5b00-421f-b735-a8a998208173-config" (OuterVolumeSpecName: "config") pod "bac56f54-5b00-421f-b735-a8a998208173" (UID: "bac56f54-5b00-421f-b735-a8a998208173"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 17:14:14 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:14:14.708408 2112 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/bac56f54-5b00-421f-b735-a8a998208173/volumes/kubernetes.io~empty-dir/tmpfs: clearQuota called, but quotas disabled
Feb 23 17:14:14 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:14.708468 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bac56f54-5b00-421f-b735-a8a998208173-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "bac56f54-5b00-421f-b735-a8a998208173" (UID: "bac56f54-5b00-421f-b735-a8a998208173"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 17:14:14 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:14.708561 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bac56f54-5b00-421f-b735-a8a998208173-audit-log" (OuterVolumeSpecName: "audit-log") pod "bac56f54-5b00-421f-b735-a8a998208173" (UID: "bac56f54-5b00-421f-b735-a8a998208173"). InnerVolumeSpecName "audit-log". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 17:14:14 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:14.714185 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bac56f54-5b00-421f-b735-a8a998208173-tls" (OuterVolumeSpecName: "tls") pod "bac56f54-5b00-421f-b735-a8a998208173" (UID: "bac56f54-5b00-421f-b735-a8a998208173"). InnerVolumeSpecName "tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:14:14 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:14.718137 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bac56f54-5b00-421f-b735-a8a998208173-kube-api-access-fskbt" (OuterVolumeSpecName: "kube-api-access-fskbt") pod "bac56f54-5b00-421f-b735-a8a998208173" (UID: "bac56f54-5b00-421f-b735-a8a998208173"). InnerVolumeSpecName "kube-api-access-fskbt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 17:14:14 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:14.807481 2112 reconciler.go:399] "Volume detached for volume \"kube-api-access-fskbt\" (UniqueName: \"kubernetes.io/projected/bac56f54-5b00-421f-b735-a8a998208173-kube-api-access-fskbt\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:14:14 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:14.807509 2112 reconciler.go:399] "Volume detached for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/bac56f54-5b00-421f-b735-a8a998208173-audit-log\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:14:14 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:14.807521 2112 reconciler.go:399] "Volume detached for volume \"tls\" (UniqueName: \"kubernetes.io/secret/bac56f54-5b00-421f-b735-a8a998208173-tls\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:14:14 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:14.807537 2112 reconciler.go:399] "Volume detached for volume \"prometheus-adapter-prometheus-config\" (UniqueName: \"kubernetes.io/configmap/bac56f54-5b00-421f-b735-a8a998208173-prometheus-adapter-prometheus-config\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:14:14 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:14.807548 2112 reconciler.go:399] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/bac56f54-5b00-421f-b735-a8a998208173-tmpfs\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:14:14 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:14.807560 2112 reconciler.go:399] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bac56f54-5b00-421f-b735-a8a998208173-config\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:14:14 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-94ff293ae607edbdb70c5e4b25cc7cf3f7e6fe35e3aa183bda9cd3e8a5654a24-userdata-shm.mount: Succeeded.
Feb 23 17:14:14 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-94ff293ae607edbdb70c5e4b25cc7cf3f7e6fe35e3aa183bda9cd3e8a5654a24-userdata-shm.mount: Consumed 0 CPU time
Feb 23 17:14:14 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-bac56f54\x2d5b00\x2d421f\x2db735\x2da8a998208173-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfskbt.mount: Succeeded.
Feb 23 17:14:14 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-bac56f54\x2d5b00\x2d421f\x2db735\x2da8a998208173-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfskbt.mount: Consumed 0 CPU time
Feb 23 17:14:14 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-bac56f54\x2d5b00\x2d421f\x2db735\x2da8a998208173-volumes-kubernetes.io\x7esecret-tls.mount: Succeeded.
Feb 23 17:14:14 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-bac56f54\x2d5b00\x2d421f\x2db735\x2da8a998208173-volumes-kubernetes.io\x7esecret-tls.mount: Consumed 0 CPU time
Feb 23 17:14:14 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:14.962537 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-adapter-849c9bc779-55gw7" event=&{ID:bac56f54-5b00-421f-b735-a8a998208173 Type:ContainerDied Data:94ff293ae607edbdb70c5e4b25cc7cf3f7e6fe35e3aa183bda9cd3e8a5654a24}
Feb 23 17:14:14 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:14.962578 2112 scope.go:115] "RemoveContainer" containerID="6450b56b1d86f9c7d7d8379edcd27bea349ea883d1dec47ef862b973eee94da2"
Feb 23 17:14:14 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:14.963903734Z" level=info msg="Removing container: 6450b56b1d86f9c7d7d8379edcd27bea349ea883d1dec47ef862b973eee94da2" id=5445a0c9-dce6-44f9-838e-a1973d021a4b name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:14:14 ip-10-0-136-68 systemd[1]: Removed slice libcontainer container kubepods-burstable-podbac56f54_5b00_421f_b735_a8a998208173.slice.
Feb 23 17:14:14 ip-10-0-136-68 systemd[1]: kubepods-burstable-podbac56f54_5b00_421f_b735_a8a998208173.slice: Consumed 8.795s CPU time
Feb 23 17:14:14 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:14.989966474Z" level=info msg="Removed container 6450b56b1d86f9c7d7d8379edcd27bea349ea883d1dec47ef862b973eee94da2: openshift-monitoring/prometheus-adapter-849c9bc779-55gw7/prometheus-adapter" id=5445a0c9-dce6-44f9-838e-a1973d021a4b name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:14:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:15.016080 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-monitoring/prometheus-adapter-849c9bc779-55gw7]
Feb 23 17:14:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:15.022118 2112 kubelet.go:2129] "SyncLoop REMOVE" source="api" pods=[openshift-monitoring/prometheus-adapter-849c9bc779-55gw7]
Feb 23 17:14:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:16.120089 2112 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=bac56f54-5b00-421f-b735-a8a998208173 path="/var/lib/kubelet/pods/bac56f54-5b00-421f-b735-a8a998208173/volumes"
Feb 23 17:14:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:16.888028 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-image-registry/image-registry-5f79c9c848-8klpv]
Feb 23 17:14:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:17.275523 2112 patch_prober.go:29] interesting pod/router-default-77f788594f-j5twb container/router namespace/openshift-ingress: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]backend-proxy-http ok
Feb 23 17:14:17 ip-10-0-136-68 kubenswrapper[2112]: [+]has-synced ok
Feb 23 17:14:17 ip-10-0-136-68 kubenswrapper[2112]: [-]process-running failed: reason withheld
Feb 23 17:14:17 ip-10-0-136-68 kubenswrapper[2112]: healthz check failed
Feb 23 17:14:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:17.275582 2112 prober.go:114] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-77f788594f-j5twb" podUID=e7ec9547-ee4c-4966-997f-719d78dcc31b containerName="router" probeResult=failure output="HTTP probe failed with statuscode: 500"
Feb 23 17:14:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:17.275649 2112 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-77f788594f-j5twb"
Feb 23 17:14:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:17.736194540Z" level=warning msg="Found defunct process with PID 59172 (haproxy)"
Feb 23 17:14:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:17.736270578Z" level=warning msg="Found defunct process with PID 59496 (haproxy)"
Feb 23 17:14:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:24.577151 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-monitoring/node-exporter-hw8fk]
Feb 23 17:14:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:24.577328 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/node-exporter-hw8fk" podUID=75f4efab-251e-4aa5-97d6-4a2a27025ae1 containerName="node-exporter" containerID="cri-o://39c966c2b9901dfa66a71f09c318c0e2b4aac4dea64a53d5174a833226e9f264" gracePeriod=30
Feb 23 17:14:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:24.577419 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/node-exporter-hw8fk" podUID=75f4efab-251e-4aa5-97d6-4a2a27025ae1 containerName="kube-rbac-proxy" containerID="cri-o://1b1fd2d7c980913cf34678762e75665a0298ae69ef49307892b1493411a99cb9" gracePeriod=30
Feb 23 17:14:24 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:24.577676040Z" level=info msg="Stopping container: 1b1fd2d7c980913cf34678762e75665a0298ae69ef49307892b1493411a99cb9 (timeout: 30s)" id=ec7c4455-e669-4561-8481-c4ebeea8a8e0 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:14:24 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:24.577678508Z" level=info msg="Stopping container: 39c966c2b9901dfa66a71f09c318c0e2b4aac4dea64a53d5174a833226e9f264 (timeout: 30s)" id=2f6e1431-29f6-4714-bd45-c5be70f48ba8 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:14:24 ip-10-0-136-68 conmon[2759]: conmon 39c966c2b9901dfa66a7 : container 2773 exited with status 143
Feb 23 17:14:24 ip-10-0-136-68 systemd[1]: crio-conmon-39c966c2b9901dfa66a71f09c318c0e2b4aac4dea64a53d5174a833226e9f264.scope: Succeeded.
Feb 23 17:14:24 ip-10-0-136-68 systemd[1]: crio-conmon-39c966c2b9901dfa66a71f09c318c0e2b4aac4dea64a53d5174a833226e9f264.scope: Consumed 28ms CPU time
Feb 23 17:14:24 ip-10-0-136-68 systemd[1]: crio-39c966c2b9901dfa66a71f09c318c0e2b4aac4dea64a53d5174a833226e9f264.scope: Succeeded.
Feb 23 17:14:24 ip-10-0-136-68 systemd[1]: crio-39c966c2b9901dfa66a71f09c318c0e2b4aac4dea64a53d5174a833226e9f264.scope: Consumed 7.577s CPU time
Feb 23 17:14:24 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-c728f8204598702303ca343ce9e3ba33f4162ea383a93271807cb6251e21aa30-merged.mount: Succeeded.
Feb 23 17:14:24 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-c728f8204598702303ca343ce9e3ba33f4162ea383a93271807cb6251e21aa30-merged.mount: Consumed 0 CPU time
Feb 23 17:14:24 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:24.771010865Z" level=info msg="Stopped container 39c966c2b9901dfa66a71f09c318c0e2b4aac4dea64a53d5174a833226e9f264: openshift-monitoring/node-exporter-hw8fk/node-exporter" id=2f6e1431-29f6-4714-bd45-c5be70f48ba8 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:14:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:24.984502 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-hw8fk_75f4efab-251e-4aa5-97d6-4a2a27025ae1/node-exporter/1.log"
Feb 23 17:14:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:24.984812 2112 generic.go:296] "Generic (PLEG): container finished" podID=75f4efab-251e-4aa5-97d6-4a2a27025ae1 containerID="39c966c2b9901dfa66a71f09c318c0e2b4aac4dea64a53d5174a833226e9f264" exitCode=143
Feb 23 17:14:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:24.984843 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-hw8fk" event=&{ID:75f4efab-251e-4aa5-97d6-4a2a27025ae1 Type:ContainerDied Data:39c966c2b9901dfa66a71f09c318c0e2b4aac4dea64a53d5174a833226e9f264}
Feb 23 17:14:25 ip-10-0-136-68 systemd[1]: crio-1b1fd2d7c980913cf34678762e75665a0298ae69ef49307892b1493411a99cb9.scope: Succeeded.
Feb 23 17:14:25 ip-10-0-136-68 systemd[1]: crio-1b1fd2d7c980913cf34678762e75665a0298ae69ef49307892b1493411a99cb9.scope: Consumed 916ms CPU time
Feb 23 17:14:25 ip-10-0-136-68 systemd[1]: crio-conmon-1b1fd2d7c980913cf34678762e75665a0298ae69ef49307892b1493411a99cb9.scope: Succeeded.
Feb 23 17:14:25 ip-10-0-136-68 systemd[1]: crio-conmon-1b1fd2d7c980913cf34678762e75665a0298ae69ef49307892b1493411a99cb9.scope: Consumed 23ms CPU time
Feb 23 17:14:25 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-0089e37985692e8f66e12b6a4ed1548904177b9c40d6bcf18e88010b0733a9c5-merged.mount: Succeeded.
Feb 23 17:14:25 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-0089e37985692e8f66e12b6a4ed1548904177b9c40d6bcf18e88010b0733a9c5-merged.mount: Consumed 0 CPU time
Feb 23 17:14:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:25.768930626Z" level=info msg="Stopped container 1b1fd2d7c980913cf34678762e75665a0298ae69ef49307892b1493411a99cb9: openshift-monitoring/node-exporter-hw8fk/kube-rbac-proxy" id=ec7c4455-e669-4561-8481-c4ebeea8a8e0 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:14:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:25.769334941Z" level=info msg="Stopping pod sandbox: fbdafac49bb2dff4d3e8095f9b5b04869d1bd15bc4f7fdc96a96e2c9d058d97f" id=41b7a83a-2929-4070-b8f1-77dddec2e0e1 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:14:25 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-ee99865fb70293b13030203c2129748466b5b41de9e9b5c215019d2c015fd9f4-merged.mount: Succeeded.
Feb 23 17:14:25 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-ee99865fb70293b13030203c2129748466b5b41de9e9b5c215019d2c015fd9f4-merged.mount: Consumed 0 CPU time
Feb 23 17:14:25 ip-10-0-136-68 systemd[1]: run-utsns-dafd5ec9\x2d517b\x2d4de6\x2dacbd\x2d5acc10b7d908.mount: Succeeded.
Feb 23 17:14:25 ip-10-0-136-68 systemd[1]: run-utsns-dafd5ec9\x2d517b\x2d4de6\x2dacbd\x2d5acc10b7d908.mount: Consumed 0 CPU time
Feb 23 17:14:25 ip-10-0-136-68 systemd[1]: run-ipcns-dafd5ec9\x2d517b\x2d4de6\x2dacbd\x2d5acc10b7d908.mount: Succeeded.
Feb 23 17:14:25 ip-10-0-136-68 systemd[1]: run-ipcns-dafd5ec9\x2d517b\x2d4de6\x2dacbd\x2d5acc10b7d908.mount: Consumed 0 CPU time
Feb 23 17:14:25 ip-10-0-136-68 systemd[1]: run-netns-dafd5ec9\x2d517b\x2d4de6\x2dacbd\x2d5acc10b7d908.mount: Succeeded.
Feb 23 17:14:25 ip-10-0-136-68 systemd[1]: run-netns-dafd5ec9\x2d517b\x2d4de6\x2dacbd\x2d5acc10b7d908.mount: Consumed 0 CPU time
Feb 23 17:14:25 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-fbdafac49bb2dff4d3e8095f9b5b04869d1bd15bc4f7fdc96a96e2c9d058d97f-userdata-shm.mount: Succeeded.
Feb 23 17:14:25 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-fbdafac49bb2dff4d3e8095f9b5b04869d1bd15bc4f7fdc96a96e2c9d058d97f-userdata-shm.mount: Consumed 0 CPU time
Feb 23 17:14:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:25.852699732Z" level=info msg="Stopped pod sandbox: fbdafac49bb2dff4d3e8095f9b5b04869d1bd15bc4f7fdc96a96e2c9d058d97f" id=41b7a83a-2929-4070-b8f1-77dddec2e0e1 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:14:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:25.858261 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-hw8fk_75f4efab-251e-4aa5-97d6-4a2a27025ae1/node-exporter/1.log"
Feb 23 17:14:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:25.978122 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/75f4efab-251e-4aa5-97d6-4a2a27025ae1-root\") pod \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") "
Feb 23 17:14:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:25.978162 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/75f4efab-251e-4aa5-97d6-4a2a27025ae1-sys\") pod \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") "
Feb 23 17:14:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:25.978196 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/75f4efab-251e-4aa5-97d6-4a2a27025ae1-node-exporter-textfile\") pod \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") "
Feb 23 17:14:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:25.978227 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/75f4efab-251e-4aa5-97d6-4a2a27025ae1-node-exporter-tls\") pod \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") "
Feb 23 17:14:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:25.978244 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75f4efab-251e-4aa5-97d6-4a2a27025ae1-root" (OuterVolumeSpecName: "root") pod "75f4efab-251e-4aa5-97d6-4a2a27025ae1" (UID: "75f4efab-251e-4aa5-97d6-4a2a27025ae1"). InnerVolumeSpecName "root".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 17:14:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:25.978259 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/75f4efab-251e-4aa5-97d6-4a2a27025ae1-node-exporter-kube-rbac-proxy-config\") pod \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") " Feb 23 17:14:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:25.978317 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdk85\" (UniqueName: \"kubernetes.io/projected/75f4efab-251e-4aa5-97d6-4a2a27025ae1-kube-api-access-vdk85\") pod \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") " Feb 23 17:14:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:25.978348 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/75f4efab-251e-4aa5-97d6-4a2a27025ae1-node-exporter-wtmp\") pod \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") " Feb 23 17:14:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:25.978386 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/75f4efab-251e-4aa5-97d6-4a2a27025ae1-metrics-client-ca\") pod \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\" (UID: \"75f4efab-251e-4aa5-97d6-4a2a27025ae1\") " Feb 23 17:14:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:25.978383 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75f4efab-251e-4aa5-97d6-4a2a27025ae1-sys" (OuterVolumeSpecName: "sys") pod "75f4efab-251e-4aa5-97d6-4a2a27025ae1" (UID: "75f4efab-251e-4aa5-97d6-4a2a27025ae1"). InnerVolumeSpecName "sys". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 17:14:25 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:14:25.978422 2112 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/75f4efab-251e-4aa5-97d6-4a2a27025ae1/volumes/kubernetes.io~empty-dir/node-exporter-textfile: clearQuota called, but quotas disabled Feb 23 17:14:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:25.978484 2112 reconciler.go:399] "Volume detached for volume \"root\" (UniqueName: \"kubernetes.io/host-path/75f4efab-251e-4aa5-97d6-4a2a27025ae1-root\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:14:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:25.978494 2112 reconciler.go:399] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/75f4efab-251e-4aa5-97d6-4a2a27025ae1-sys\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:14:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:25.978614 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75f4efab-251e-4aa5-97d6-4a2a27025ae1-node-exporter-textfile" (OuterVolumeSpecName: "node-exporter-textfile") pod "75f4efab-251e-4aa5-97d6-4a2a27025ae1" (UID: "75f4efab-251e-4aa5-97d6-4a2a27025ae1"). InnerVolumeSpecName "node-exporter-textfile". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:14:25 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:14:25.978911 2112 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/75f4efab-251e-4aa5-97d6-4a2a27025ae1/volumes/kubernetes.io~configmap/metrics-client-ca: clearQuota called, but quotas disabled Feb 23 17:14:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:25.978961 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75f4efab-251e-4aa5-97d6-4a2a27025ae1-node-exporter-wtmp" (OuterVolumeSpecName: "node-exporter-wtmp") pod "75f4efab-251e-4aa5-97d6-4a2a27025ae1" (UID: "75f4efab-251e-4aa5-97d6-4a2a27025ae1"). InnerVolumeSpecName "node-exporter-wtmp". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 17:14:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:25.979260 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75f4efab-251e-4aa5-97d6-4a2a27025ae1-metrics-client-ca" (OuterVolumeSpecName: "metrics-client-ca") pod "75f4efab-251e-4aa5-97d6-4a2a27025ae1" (UID: "75f4efab-251e-4aa5-97d6-4a2a27025ae1"). InnerVolumeSpecName "metrics-client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:14:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:25.987596 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-hw8fk_75f4efab-251e-4aa5-97d6-4a2a27025ae1/node-exporter/1.log" Feb 23 17:14:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:25.987874 2112 generic.go:296] "Generic (PLEG): container finished" podID=75f4efab-251e-4aa5-97d6-4a2a27025ae1 containerID="1b1fd2d7c980913cf34678762e75665a0298ae69ef49307892b1493411a99cb9" exitCode=0 Feb 23 17:14:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:25.987907 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-hw8fk" event=&{ID:75f4efab-251e-4aa5-97d6-4a2a27025ae1 Type:ContainerDied Data:1b1fd2d7c980913cf34678762e75665a0298ae69ef49307892b1493411a99cb9} Feb 23 17:14:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:25.987930 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-hw8fk" event=&{ID:75f4efab-251e-4aa5-97d6-4a2a27025ae1 Type:ContainerDied Data:fbdafac49bb2dff4d3e8095f9b5b04869d1bd15bc4f7fdc96a96e2c9d058d97f} Feb 23 17:14:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:25.987948 2112 scope.go:115] "RemoveContainer" containerID="1b1fd2d7c980913cf34678762e75665a0298ae69ef49307892b1493411a99cb9" Feb 23 17:14:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:25.988590187Z" level=info msg="Removing container: 1b1fd2d7c980913cf34678762e75665a0298ae69ef49307892b1493411a99cb9" id=2ed0a015-3f4d-48db-9df1-b459e798626a name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:14:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:25.991959 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75f4efab-251e-4aa5-97d6-4a2a27025ae1-kube-api-access-vdk85" (OuterVolumeSpecName: "kube-api-access-vdk85") pod "75f4efab-251e-4aa5-97d6-4a2a27025ae1" (UID: 
"75f4efab-251e-4aa5-97d6-4a2a27025ae1"). InnerVolumeSpecName "kube-api-access-vdk85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:14:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:25.991963 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75f4efab-251e-4aa5-97d6-4a2a27025ae1-node-exporter-kube-rbac-proxy-config" (OuterVolumeSpecName: "node-exporter-kube-rbac-proxy-config") pod "75f4efab-251e-4aa5-97d6-4a2a27025ae1" (UID: "75f4efab-251e-4aa5-97d6-4a2a27025ae1"). InnerVolumeSpecName "node-exporter-kube-rbac-proxy-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:14:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:25.996825 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75f4efab-251e-4aa5-97d6-4a2a27025ae1-node-exporter-tls" (OuterVolumeSpecName: "node-exporter-tls") pod "75f4efab-251e-4aa5-97d6-4a2a27025ae1" (UID: "75f4efab-251e-4aa5-97d6-4a2a27025ae1"). InnerVolumeSpecName "node-exporter-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:14:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:26.005726772Z" level=info msg="Removed container 1b1fd2d7c980913cf34678762e75665a0298ae69ef49307892b1493411a99cb9: openshift-monitoring/node-exporter-hw8fk/kube-rbac-proxy" id=2ed0a015-3f4d-48db-9df1-b459e798626a name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.005866 2112 scope.go:115] "RemoveContainer" containerID="39c966c2b9901dfa66a71f09c318c0e2b4aac4dea64a53d5174a833226e9f264" Feb 23 17:14:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:26.006423578Z" level=info msg="Removing container: 39c966c2b9901dfa66a71f09c318c0e2b4aac4dea64a53d5174a833226e9f264" id=34346b87-c1dc-43e4-98c8-e2957596288f name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:14:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:26.022529365Z" level=info msg="Removed container 39c966c2b9901dfa66a71f09c318c0e2b4aac4dea64a53d5174a833226e9f264: openshift-monitoring/node-exporter-hw8fk/node-exporter" id=34346b87-c1dc-43e4-98c8-e2957596288f name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.022712 2112 scope.go:115] "RemoveContainer" containerID="9a2340ebf2c2cbf322c44158beaccb45f3150a0f8d09561dce49e94b6b7d2677" Feb 23 17:14:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:26.023267267Z" level=info msg="Removing container: 9a2340ebf2c2cbf322c44158beaccb45f3150a0f8d09561dce49e94b6b7d2677" id=26058971-42c6-4459-9800-5f848dfb2424 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:14:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:26.068564358Z" level=info msg="Removed container 9a2340ebf2c2cbf322c44158beaccb45f3150a0f8d09561dce49e94b6b7d2677: openshift-monitoring/node-exporter-hw8fk/init-textfile" id=26058971-42c6-4459-9800-5f848dfb2424 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:14:26 ip-10-0-136-68 
kubenswrapper[2112]: I0223 17:14:26.068745 2112 scope.go:115] "RemoveContainer" containerID="1b1fd2d7c980913cf34678762e75665a0298ae69ef49307892b1493411a99cb9" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:14:26.068996 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b1fd2d7c980913cf34678762e75665a0298ae69ef49307892b1493411a99cb9\": container with ID starting with 1b1fd2d7c980913cf34678762e75665a0298ae69ef49307892b1493411a99cb9 not found: ID does not exist" containerID="1b1fd2d7c980913cf34678762e75665a0298ae69ef49307892b1493411a99cb9" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.069031 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:1b1fd2d7c980913cf34678762e75665a0298ae69ef49307892b1493411a99cb9} err="failed to get container status \"1b1fd2d7c980913cf34678762e75665a0298ae69ef49307892b1493411a99cb9\": rpc error: code = NotFound desc = could not find container \"1b1fd2d7c980913cf34678762e75665a0298ae69ef49307892b1493411a99cb9\": container with ID starting with 1b1fd2d7c980913cf34678762e75665a0298ae69ef49307892b1493411a99cb9 not found: ID does not exist" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.069044 2112 scope.go:115] "RemoveContainer" containerID="39c966c2b9901dfa66a71f09c318c0e2b4aac4dea64a53d5174a833226e9f264" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:14:26.069230 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39c966c2b9901dfa66a71f09c318c0e2b4aac4dea64a53d5174a833226e9f264\": container with ID starting with 39c966c2b9901dfa66a71f09c318c0e2b4aac4dea64a53d5174a833226e9f264 not found: ID does not exist" containerID="39c966c2b9901dfa66a71f09c318c0e2b4aac4dea64a53d5174a833226e9f264" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.069255 2112 
pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:39c966c2b9901dfa66a71f09c318c0e2b4aac4dea64a53d5174a833226e9f264} err="failed to get container status \"39c966c2b9901dfa66a71f09c318c0e2b4aac4dea64a53d5174a833226e9f264\": rpc error: code = NotFound desc = could not find container \"39c966c2b9901dfa66a71f09c318c0e2b4aac4dea64a53d5174a833226e9f264\": container with ID starting with 39c966c2b9901dfa66a71f09c318c0e2b4aac4dea64a53d5174a833226e9f264 not found: ID does not exist" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.069267 2112 scope.go:115] "RemoveContainer" containerID="9a2340ebf2c2cbf322c44158beaccb45f3150a0f8d09561dce49e94b6b7d2677" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:14:26.069412 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a2340ebf2c2cbf322c44158beaccb45f3150a0f8d09561dce49e94b6b7d2677\": container with ID starting with 9a2340ebf2c2cbf322c44158beaccb45f3150a0f8d09561dce49e94b6b7d2677 not found: ID does not exist" containerID="9a2340ebf2c2cbf322c44158beaccb45f3150a0f8d09561dce49e94b6b7d2677" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.069431 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:9a2340ebf2c2cbf322c44158beaccb45f3150a0f8d09561dce49e94b6b7d2677} err="failed to get container status \"9a2340ebf2c2cbf322c44158beaccb45f3150a0f8d09561dce49e94b6b7d2677\": rpc error: code = NotFound desc = could not find container \"9a2340ebf2c2cbf322c44158beaccb45f3150a0f8d09561dce49e94b6b7d2677\": container with ID starting with 9a2340ebf2c2cbf322c44158beaccb45f3150a0f8d09561dce49e94b6b7d2677 not found: ID does not exist" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.078798 2112 reconciler.go:399] "Volume detached for volume \"node-exporter-textfile\" (UniqueName: 
\"kubernetes.io/empty-dir/75f4efab-251e-4aa5-97d6-4a2a27025ae1-node-exporter-textfile\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.078822 2112 reconciler.go:399] "Volume detached for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/75f4efab-251e-4aa5-97d6-4a2a27025ae1-node-exporter-tls\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.078833 2112 reconciler.go:399] "Volume detached for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/75f4efab-251e-4aa5-97d6-4a2a27025ae1-node-exporter-kube-rbac-proxy-config\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.078842 2112 reconciler.go:399] "Volume detached for volume \"kube-api-access-vdk85\" (UniqueName: \"kubernetes.io/projected/75f4efab-251e-4aa5-97d6-4a2a27025ae1-kube-api-access-vdk85\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.078850 2112 reconciler.go:399] "Volume detached for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/75f4efab-251e-4aa5-97d6-4a2a27025ae1-node-exporter-wtmp\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.078859 2112 reconciler.go:399] "Volume detached for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/75f4efab-251e-4aa5-97d6-4a2a27025ae1-metrics-client-ca\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:14:26 ip-10-0-136-68 systemd[1]: Removed slice libcontainer container kubepods-burstable-pod75f4efab_251e_4aa5_97d6_4a2a27025ae1.slice. 
Feb 23 17:14:26 ip-10-0-136-68 systemd[1]: kubepods-burstable-pod75f4efab_251e_4aa5_97d6_4a2a27025ae1.slice: Consumed 8.680s CPU time Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.305455 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-monitoring/node-exporter-hw8fk] Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.317344 2112 kubelet.go:2129] "SyncLoop REMOVE" source="api" pods=[openshift-monitoring/node-exporter-hw8fk] Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.328521 2112 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-monitoring/node-exporter-nt8h7] Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.328557 2112 topology_manager.go:205] "Topology Admit Handler" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:14:26.328615 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="75f4efab-251e-4aa5-97d6-4a2a27025ae1" containerName="kube-rbac-proxy" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.328626 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="75f4efab-251e-4aa5-97d6-4a2a27025ae1" containerName="kube-rbac-proxy" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:14:26.328638 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bac56f54-5b00-421f-b735-a8a998208173" containerName="prometheus-adapter" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.328645 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="bac56f54-5b00-421f-b735-a8a998208173" containerName="prometheus-adapter" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:14:26.328654 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="75f4efab-251e-4aa5-97d6-4a2a27025ae1" containerName="init-textfile" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.328690 2112 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="75f4efab-251e-4aa5-97d6-4a2a27025ae1" containerName="init-textfile" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:14:26.328702 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a762f29d-1a7e-4d73-9c04-8d5fbbe65b32" containerName="prom-label-proxy" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.328710 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="a762f29d-1a7e-4d73-9c04-8d5fbbe65b32" containerName="prom-label-proxy" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:14:26.328719 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a762f29d-1a7e-4d73-9c04-8d5fbbe65b32" containerName="kube-rbac-proxy-rules" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.328727 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="a762f29d-1a7e-4d73-9c04-8d5fbbe65b32" containerName="kube-rbac-proxy-rules" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:14:26.328738 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a762f29d-1a7e-4d73-9c04-8d5fbbe65b32" containerName="oauth-proxy" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.328745 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="a762f29d-1a7e-4d73-9c04-8d5fbbe65b32" containerName="oauth-proxy" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:14:26.328755 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="75f4efab-251e-4aa5-97d6-4a2a27025ae1" containerName="node-exporter" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.328763 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="75f4efab-251e-4aa5-97d6-4a2a27025ae1" containerName="node-exporter" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:14:26.328772 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a762f29d-1a7e-4d73-9c04-8d5fbbe65b32" containerName="kube-rbac-proxy-metrics" Feb 23 17:14:26 
ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.328780 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="a762f29d-1a7e-4d73-9c04-8d5fbbe65b32" containerName="kube-rbac-proxy-metrics" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:14:26.328790 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a762f29d-1a7e-4d73-9c04-8d5fbbe65b32" containerName="kube-rbac-proxy" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.328799 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="a762f29d-1a7e-4d73-9c04-8d5fbbe65b32" containerName="kube-rbac-proxy" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:14:26.328808 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a762f29d-1a7e-4d73-9c04-8d5fbbe65b32" containerName="thanos-query" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.328816 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="a762f29d-1a7e-4d73-9c04-8d5fbbe65b32" containerName="thanos-query" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.328864 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="a762f29d-1a7e-4d73-9c04-8d5fbbe65b32" containerName="kube-rbac-proxy" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.328875 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="a762f29d-1a7e-4d73-9c04-8d5fbbe65b32" containerName="oauth-proxy" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.328885 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="a762f29d-1a7e-4d73-9c04-8d5fbbe65b32" containerName="thanos-query" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.328895 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="75f4efab-251e-4aa5-97d6-4a2a27025ae1" containerName="kube-rbac-proxy" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.328903 2112 memory_manager.go:345] "RemoveStaleState 
removing state" podUID="a762f29d-1a7e-4d73-9c04-8d5fbbe65b32" containerName="kube-rbac-proxy-metrics" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.328911 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="75f4efab-251e-4aa5-97d6-4a2a27025ae1" containerName="node-exporter" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.328919 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="a762f29d-1a7e-4d73-9c04-8d5fbbe65b32" containerName="kube-rbac-proxy-rules" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.328929 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="a762f29d-1a7e-4d73-9c04-8d5fbbe65b32" containerName="prom-label-proxy" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.328939 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="bac56f54-5b00-421f-b735-a8a998208173" containerName="prometheus-adapter" Feb 23 17:14:26 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-pod3e3e7655_5c60_4995_9a23_b32843026a6e.slice. 
Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.482782 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3e3e7655-5c60-4995-9a23-b32843026a6e-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-nt8h7\" (UID: \"3e3e7655-5c60-4995-9a23-b32843026a6e\") " pod="openshift-monitoring/node-exporter-nt8h7" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.482814 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/3e3e7655-5c60-4995-9a23-b32843026a6e-root\") pod \"node-exporter-nt8h7\" (UID: \"3e3e7655-5c60-4995-9a23-b32843026a6e\") " pod="openshift-monitoring/node-exporter-nt8h7" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.482836 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/3e3e7655-5c60-4995-9a23-b32843026a6e-node-exporter-tls\") pod \"node-exporter-nt8h7\" (UID: \"3e3e7655-5c60-4995-9a23-b32843026a6e\") " pod="openshift-monitoring/node-exporter-nt8h7" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.482979 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/3e3e7655-5c60-4995-9a23-b32843026a6e-node-exporter-wtmp\") pod \"node-exporter-nt8h7\" (UID: \"3e3e7655-5c60-4995-9a23-b32843026a6e\") " pod="openshift-monitoring/node-exporter-nt8h7" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.483023 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3e3e7655-5c60-4995-9a23-b32843026a6e-metrics-client-ca\") pod \"node-exporter-nt8h7\" (UID: 
\"3e3e7655-5c60-4995-9a23-b32843026a6e\") " pod="openshift-monitoring/node-exporter-nt8h7" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.483070 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3e3e7655-5c60-4995-9a23-b32843026a6e-sys\") pod \"node-exporter-nt8h7\" (UID: \"3e3e7655-5c60-4995-9a23-b32843026a6e\") " pod="openshift-monitoring/node-exporter-nt8h7" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.483131 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2x89\" (UniqueName: \"kubernetes.io/projected/3e3e7655-5c60-4995-9a23-b32843026a6e-kube-api-access-p2x89\") pod \"node-exporter-nt8h7\" (UID: \"3e3e7655-5c60-4995-9a23-b32843026a6e\") " pod="openshift-monitoring/node-exporter-nt8h7" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.483192 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/3e3e7655-5c60-4995-9a23-b32843026a6e-node-exporter-textfile\") pod \"node-exporter-nt8h7\" (UID: \"3e3e7655-5c60-4995-9a23-b32843026a6e\") " pod="openshift-monitoring/node-exporter-nt8h7" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.584159 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-p2x89\" (UniqueName: \"kubernetes.io/projected/3e3e7655-5c60-4995-9a23-b32843026a6e-kube-api-access-p2x89\") pod \"node-exporter-nt8h7\" (UID: \"3e3e7655-5c60-4995-9a23-b32843026a6e\") " pod="openshift-monitoring/node-exporter-nt8h7" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.584205 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/3e3e7655-5c60-4995-9a23-b32843026a6e-node-exporter-textfile\") pod 
\"node-exporter-nt8h7\" (UID: \"3e3e7655-5c60-4995-9a23-b32843026a6e\") " pod="openshift-monitoring/node-exporter-nt8h7" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.584239 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3e3e7655-5c60-4995-9a23-b32843026a6e-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-nt8h7\" (UID: \"3e3e7655-5c60-4995-9a23-b32843026a6e\") " pod="openshift-monitoring/node-exporter-nt8h7" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.584268 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/3e3e7655-5c60-4995-9a23-b32843026a6e-root\") pod \"node-exporter-nt8h7\" (UID: \"3e3e7655-5c60-4995-9a23-b32843026a6e\") " pod="openshift-monitoring/node-exporter-nt8h7" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.584296 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/3e3e7655-5c60-4995-9a23-b32843026a6e-node-exporter-tls\") pod \"node-exporter-nt8h7\" (UID: \"3e3e7655-5c60-4995-9a23-b32843026a6e\") " pod="openshift-monitoring/node-exporter-nt8h7" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.584328 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/3e3e7655-5c60-4995-9a23-b32843026a6e-node-exporter-wtmp\") pod \"node-exporter-nt8h7\" (UID: \"3e3e7655-5c60-4995-9a23-b32843026a6e\") " pod="openshift-monitoring/node-exporter-nt8h7" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.584369 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3e3e7655-5c60-4995-9a23-b32843026a6e-metrics-client-ca\") pod \"node-exporter-nt8h7\" (UID: 
\"3e3e7655-5c60-4995-9a23-b32843026a6e\") " pod="openshift-monitoring/node-exporter-nt8h7" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.584395 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3e3e7655-5c60-4995-9a23-b32843026a6e-sys\") pod \"node-exporter-nt8h7\" (UID: \"3e3e7655-5c60-4995-9a23-b32843026a6e\") " pod="openshift-monitoring/node-exporter-nt8h7" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.584452 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3e3e7655-5c60-4995-9a23-b32843026a6e-sys\") pod \"node-exporter-nt8h7\" (UID: \"3e3e7655-5c60-4995-9a23-b32843026a6e\") " pod="openshift-monitoring/node-exporter-nt8h7" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.584495 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/3e3e7655-5c60-4995-9a23-b32843026a6e-root\") pod \"node-exporter-nt8h7\" (UID: \"3e3e7655-5c60-4995-9a23-b32843026a6e\") " pod="openshift-monitoring/node-exporter-nt8h7" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.584778 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/3e3e7655-5c60-4995-9a23-b32843026a6e-node-exporter-wtmp\") pod \"node-exporter-nt8h7\" (UID: \"3e3e7655-5c60-4995-9a23-b32843026a6e\") " pod="openshift-monitoring/node-exporter-nt8h7" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.584993 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/3e3e7655-5c60-4995-9a23-b32843026a6e-node-exporter-textfile\") pod \"node-exporter-nt8h7\" (UID: \"3e3e7655-5c60-4995-9a23-b32843026a6e\") " pod="openshift-monitoring/node-exporter-nt8h7" Feb 23 17:14:26 
ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.585225 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3e3e7655-5c60-4995-9a23-b32843026a6e-metrics-client-ca\") pod \"node-exporter-nt8h7\" (UID: \"3e3e7655-5c60-4995-9a23-b32843026a6e\") " pod="openshift-monitoring/node-exporter-nt8h7" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.586497 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3e3e7655-5c60-4995-9a23-b32843026a6e-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-nt8h7\" (UID: \"3e3e7655-5c60-4995-9a23-b32843026a6e\") " pod="openshift-monitoring/node-exporter-nt8h7" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.586986 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/3e3e7655-5c60-4995-9a23-b32843026a6e-node-exporter-tls\") pod \"node-exporter-nt8h7\" (UID: \"3e3e7655-5c60-4995-9a23-b32843026a6e\") " pod="openshift-monitoring/node-exporter-nt8h7" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.602188 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2x89\" (UniqueName: \"kubernetes.io/projected/3e3e7655-5c60-4995-9a23-b32843026a6e-kube-api-access-p2x89\") pod \"node-exporter-nt8h7\" (UID: \"3e3e7655-5c60-4995-9a23-b32843026a6e\") " pod="openshift-monitoring/node-exporter-nt8h7" Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.641344 2112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-nt8h7" Feb 23 17:14:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:26.641812946Z" level=info msg="Running pod sandbox: openshift-monitoring/node-exporter-nt8h7/POD" id=ef5f4227-2400-41ed-b7b8-723c6e57bcd9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:14:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:26.641875440Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:14:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:26.659555206Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=ef5f4227-2400-41ed-b7b8-723c6e57bcd9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:14:26.662543 2112 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e3e7655_5c60_4995_9a23_b32843026a6e.slice/crio-cd0b87e63117453924469fa23a8876f62c240ad1b21efb83dea15bc99dfc3092.scope WatchSource:0}: Error finding container cd0b87e63117453924469fa23a8876f62c240ad1b21efb83dea15bc99dfc3092: Status 404 returned error can't find the container with id cd0b87e63117453924469fa23a8876f62c240ad1b21efb83dea15bc99dfc3092 Feb 23 17:14:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:26.664005514Z" level=info msg="Ran pod sandbox cd0b87e63117453924469fa23a8876f62c240ad1b21efb83dea15bc99dfc3092 with infra container: openshift-monitoring/node-exporter-nt8h7/POD" id=ef5f4227-2400-41ed-b7b8-723c6e57bcd9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:14:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:26.664848581Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:59eb5e22e7c2b150cb6aab53f3e77aa3c4a785fa42cda7db08ec38cde445cbd9" 
id=967010b1-72f5-49f9-8b7a-b76d5287a52c name=/runtime.v1.ImageService/ImageStatus Feb 23 17:14:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:26.665014792Z" level=info msg="Image registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:59eb5e22e7c2b150cb6aab53f3e77aa3c4a785fa42cda7db08ec38cde445cbd9 not found" id=967010b1-72f5-49f9-8b7a-b76d5287a52c name=/runtime.v1.ImageService/ImageStatus Feb 23 17:14:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:26.665503394Z" level=info msg="Pulling image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:59eb5e22e7c2b150cb6aab53f3e77aa3c4a785fa42cda7db08ec38cde445cbd9" id=19167012-c3b7-46fe-8737-6a48e34ef305 name=/runtime.v1.ImageService/PullImage Feb 23 17:14:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:26.667971602Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:59eb5e22e7c2b150cb6aab53f3e77aa3c4a785fa42cda7db08ec38cde445cbd9\"" Feb 23 17:14:26 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-5e2ee9b33591da29adf79c5c656411274519a327ee15b73fde65f80266e56b97-merged.mount: Succeeded. Feb 23 17:14:26 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-5e2ee9b33591da29adf79c5c656411274519a327ee15b73fde65f80266e56b97-merged.mount: Consumed 0 CPU time Feb 23 17:14:26 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-75f4efab\x2d251e\x2d4aa5\x2d97d6\x2d4a2a27025ae1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvdk85.mount: Succeeded. Feb 23 17:14:26 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-75f4efab\x2d251e\x2d4aa5\x2d97d6\x2d4a2a27025ae1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvdk85.mount: Consumed 0 CPU time Feb 23 17:14:26 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-75f4efab\x2d251e\x2d4aa5\x2d97d6\x2d4a2a27025ae1-volumes-kubernetes.io\x7esecret-node\x2dexporter\x2dtls.mount: Succeeded. 
Feb 23 17:14:26 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-75f4efab\x2d251e\x2d4aa5\x2d97d6\x2d4a2a27025ae1-volumes-kubernetes.io\x7esecret-node\x2dexporter\x2dtls.mount: Consumed 0 CPU time Feb 23 17:14:26 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-75f4efab\x2d251e\x2d4aa5\x2d97d6\x2d4a2a27025ae1-volumes-kubernetes.io\x7esecret-node\x2dexporter\x2dkube\x2drbac\x2dproxy\x2dconfig.mount: Succeeded. Feb 23 17:14:26 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-75f4efab\x2d251e\x2d4aa5\x2d97d6\x2d4a2a27025ae1-volumes-kubernetes.io\x7esecret-node\x2dexporter\x2dkube\x2drbac\x2dproxy\x2dconfig.mount: Consumed 0 CPU time Feb 23 17:14:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:26.990186 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-nt8h7" event=&{ID:3e3e7655-5c60-4995-9a23-b32843026a6e Type:ContainerStarted Data:cd0b87e63117453924469fa23a8876f62c240ad1b21efb83dea15bc99dfc3092} Feb 23 17:14:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:27.275949 2112 patch_prober.go:29] interesting pod/router-default-77f788594f-j5twb container/router namespace/openshift-ingress: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]backend-proxy-http ok Feb 23 17:14:27 ip-10-0-136-68 kubenswrapper[2112]: [+]has-synced ok Feb 23 17:14:27 ip-10-0-136-68 kubenswrapper[2112]: [-]process-running failed: reason withheld Feb 23 17:14:27 ip-10-0-136-68 kubenswrapper[2112]: healthz check failed Feb 23 17:14:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:27.276002 2112 prober.go:114] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-77f788594f-j5twb" podUID=e7ec9547-ee4c-4966-997f-719d78dcc31b containerName="router" probeResult=failure output="HTTP probe failed with statuscode: 500" Feb 23 17:14:27 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00328|connmgr|INFO|br-ex<->unix#1173: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:14:27 ip-10-0-136-68 crio[2062]: 
time="2023-02-23 17:14:27.859771201Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:59eb5e22e7c2b150cb6aab53f3e77aa3c4a785fa42cda7db08ec38cde445cbd9\"" Feb 23 17:14:28 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:28.120243 2112 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=75f4efab-251e-4aa5-97d6-4a2a27025ae1 path="/var/lib/kubelet/pods/75f4efab-251e-4aa5-97d6-4a2a27025ae1/volumes" Feb 23 17:14:29 ip-10-0-136-68 systemd[1]: run-runc-97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923-runc.Bm3NvO.mount: Succeeded. Feb 23 17:14:31 ip-10-0-136-68 systemd[1]: run-runc-2e4abab8d13d7e39e5418d4f58e3fdae38b48dfa958876304cb146184b957ab6-runc.YXuK6e.mount: Succeeded. Feb 23 17:14:31 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:31.359751 2112 kubelet.go:2229] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/alertmanager-main-1" Feb 23 17:14:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:31.682841218Z" level=info msg="Pulled image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:59eb5e22e7c2b150cb6aab53f3e77aa3c4a785fa42cda7db08ec38cde445cbd9" id=19167012-c3b7-46fe-8737-6a48e34ef305 name=/runtime.v1.ImageService/PullImage Feb 23 17:14:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:31.683619717Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:59eb5e22e7c2b150cb6aab53f3e77aa3c4a785fa42cda7db08ec38cde445cbd9" id=eab82519-eb56-48b8-8b48-a7fd00b1cd12 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:14:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:31.684920179Z" level=info msg="Image status: 
&ImageStatusResponse{Image:&Image{Id:f53384a648c59be4fc6721c4809654cf2c3c49e25c4890941d180079b86f24a0,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:59eb5e22e7c2b150cb6aab53f3e77aa3c4a785fa42cda7db08ec38cde445cbd9],Size_:371847935,Uid:nil,Username:nobody,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=eab82519-eb56-48b8-8b48-a7fd00b1cd12 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:14:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:31.685576820Z" level=info msg="Creating container: openshift-monitoring/node-exporter-nt8h7/init-textfile" id=c65c2826-3027-4224-a3b4-45d8a433c5e0 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:14:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:31.685654259Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:14:31 ip-10-0-136-68 systemd[1]: Started crio-conmon-8450a8d992fb768534a09e92df3770588b151ea12099510a07818410a2603c9c.scope. Feb 23 17:14:31 ip-10-0-136-68 systemd[1]: Started libcontainer container 8450a8d992fb768534a09e92df3770588b151ea12099510a07818410a2603c9c. 
Feb 23 17:14:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:31.781541191Z" level=info msg="Created container 8450a8d992fb768534a09e92df3770588b151ea12099510a07818410a2603c9c: openshift-monitoring/node-exporter-nt8h7/init-textfile" id=c65c2826-3027-4224-a3b4-45d8a433c5e0 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:14:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:31.782012928Z" level=info msg="Starting container: 8450a8d992fb768534a09e92df3770588b151ea12099510a07818410a2603c9c" id=04227a2d-322f-4a7a-a888-899d74d858f8 name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:14:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:31.802076403Z" level=info msg="Started container" PID=60061 containerID=8450a8d992fb768534a09e92df3770588b151ea12099510a07818410a2603c9c description=openshift-monitoring/node-exporter-nt8h7/init-textfile id=04227a2d-322f-4a7a-a888-899d74d858f8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=cd0b87e63117453924469fa23a8876f62c240ad1b21efb83dea15bc99dfc3092 Feb 23 17:14:31 ip-10-0-136-68 systemd[1]: crio-8450a8d992fb768534a09e92df3770588b151ea12099510a07818410a2603c9c.scope: Succeeded. Feb 23 17:14:31 ip-10-0-136-68 systemd[1]: crio-8450a8d992fb768534a09e92df3770588b151ea12099510a07818410a2603c9c.scope: Consumed 83ms CPU time Feb 23 17:14:31 ip-10-0-136-68 systemd[1]: crio-conmon-8450a8d992fb768534a09e92df3770588b151ea12099510a07818410a2603c9c.scope: Succeeded. 
Feb 23 17:14:31 ip-10-0-136-68 systemd[1]: crio-conmon-8450a8d992fb768534a09e92df3770588b151ea12099510a07818410a2603c9c.scope: Consumed 23ms CPU time Feb 23 17:14:32 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:32.001550 2112 generic.go:296] "Generic (PLEG): container finished" podID=3e3e7655-5c60-4995-9a23-b32843026a6e containerID="8450a8d992fb768534a09e92df3770588b151ea12099510a07818410a2603c9c" exitCode=0 Feb 23 17:14:32 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:32.001589 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-nt8h7" event=&{ID:3e3e7655-5c60-4995-9a23-b32843026a6e Type:ContainerDied Data:8450a8d992fb768534a09e92df3770588b151ea12099510a07818410a2603c9c} Feb 23 17:14:32 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:32.002145086Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:59eb5e22e7c2b150cb6aab53f3e77aa3c4a785fa42cda7db08ec38cde445cbd9" id=24799a87-4c89-4d14-9e64-daa155d0ad00 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:14:32 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:32.003504851Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:f53384a648c59be4fc6721c4809654cf2c3c49e25c4890941d180079b86f24a0,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:59eb5e22e7c2b150cb6aab53f3e77aa3c4a785fa42cda7db08ec38cde445cbd9],Size_:371847935,Uid:nil,Username:nobody,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=24799a87-4c89-4d14-9e64-daa155d0ad00 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:14:32 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:32.004860301Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:59eb5e22e7c2b150cb6aab53f3e77aa3c4a785fa42cda7db08ec38cde445cbd9" id=4e90441c-3db8-4543-bb5c-320731ba893d name=/runtime.v1.ImageService/ImageStatus Feb 23 17:14:32 ip-10-0-136-68 crio[2062]: time="2023-02-23 
17:14:32.006179522Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:f53384a648c59be4fc6721c4809654cf2c3c49e25c4890941d180079b86f24a0,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:59eb5e22e7c2b150cb6aab53f3e77aa3c4a785fa42cda7db08ec38cde445cbd9],Size_:371847935,Uid:nil,Username:nobody,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=4e90441c-3db8-4543-bb5c-320731ba893d name=/runtime.v1.ImageService/ImageStatus Feb 23 17:14:32 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:32.006814870Z" level=info msg="Creating container: openshift-monitoring/node-exporter-nt8h7/node-exporter" id=4578a8d0-90ab-4350-a29c-f394822ff380 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:14:32 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:32.006907438Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:14:32 ip-10-0-136-68 systemd[1]: Started crio-conmon-7a5b3c6af2511fc3bd1c0e75f8f2e6a03ff01d4057f07085043b19c30d740de4.scope. Feb 23 17:14:32 ip-10-0-136-68 systemd[1]: Started libcontainer container 7a5b3c6af2511fc3bd1c0e75f8f2e6a03ff01d4057f07085043b19c30d740de4. 
Feb 23 17:14:32 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:32.116044112Z" level=info msg="Created container 7a5b3c6af2511fc3bd1c0e75f8f2e6a03ff01d4057f07085043b19c30d740de4: openshift-monitoring/node-exporter-nt8h7/node-exporter" id=4578a8d0-90ab-4350-a29c-f394822ff380 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:14:32 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:32.116504460Z" level=info msg="Starting container: 7a5b3c6af2511fc3bd1c0e75f8f2e6a03ff01d4057f07085043b19c30d740de4" id=cb7aa41d-d5af-4727-a489-d2ecc969b8cd name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:14:32 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:32.123789197Z" level=info msg="Started container" PID=60182 containerID=7a5b3c6af2511fc3bd1c0e75f8f2e6a03ff01d4057f07085043b19c30d740de4 description=openshift-monitoring/node-exporter-nt8h7/node-exporter id=cb7aa41d-d5af-4727-a489-d2ecc969b8cd name=/runtime.v1.RuntimeService/StartContainer sandboxID=cd0b87e63117453924469fa23a8876f62c240ad1b21efb83dea15bc99dfc3092 Feb 23 17:14:32 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:32.132027323Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0" id=f0cff6b6-84d9-4023-83ce-f1620ae9f5d9 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:14:32 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:32.132179280Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e8505ce6d56c9d2fa3ddbbc4dd2c4096db686f042ecf82eed342af6e60223854,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0],Size_:402716043,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=f0cff6b6-84d9-4023-83ce-f1620ae9f5d9 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:14:32 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:32.132815614Z" 
level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0" id=e5d5dbf3-bd99-40d5-8075-dd16422bba4e name=/runtime.v1.ImageService/ImageStatus Feb 23 17:14:32 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:32.132954480Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e8505ce6d56c9d2fa3ddbbc4dd2c4096db686f042ecf82eed342af6e60223854,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0],Size_:402716043,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=e5d5dbf3-bd99-40d5-8075-dd16422bba4e name=/runtime.v1.ImageService/ImageStatus Feb 23 17:14:32 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:32.133733050Z" level=info msg="Creating container: openshift-monitoring/node-exporter-nt8h7/kube-rbac-proxy" id=21f1da02-b33e-4870-a81d-a5f272fc5e41 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:14:32 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:32.133819106Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:14:32 ip-10-0-136-68 systemd[1]: Started crio-conmon-fd68cf1fa283ceb74c541d1c987b44f858cdc0db2e4c735d347809c327dd6732.scope. Feb 23 17:14:32 ip-10-0-136-68 systemd[1]: Started libcontainer container fd68cf1fa283ceb74c541d1c987b44f858cdc0db2e4c735d347809c327dd6732. 
Feb 23 17:14:32 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:32.232090576Z" level=info msg="Created container fd68cf1fa283ceb74c541d1c987b44f858cdc0db2e4c735d347809c327dd6732: openshift-monitoring/node-exporter-nt8h7/kube-rbac-proxy" id=21f1da02-b33e-4870-a81d-a5f272fc5e41 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:14:32 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:32.232561127Z" level=info msg="Starting container: fd68cf1fa283ceb74c541d1c987b44f858cdc0db2e4c735d347809c327dd6732" id=33171277-11d8-436d-abdc-ab3a7e1db90f name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:14:32 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:32.240175898Z" level=info msg="Started container" PID=60225 containerID=fd68cf1fa283ceb74c541d1c987b44f858cdc0db2e4c735d347809c327dd6732 description=openshift-monitoring/node-exporter-nt8h7/kube-rbac-proxy id=33171277-11d8-436d-abdc-ab3a7e1db90f name=/runtime.v1.RuntimeService/StartContainer sandboxID=cd0b87e63117453924469fa23a8876f62c240ad1b21efb83dea15bc99dfc3092 Feb 23 17:14:33 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:33.004384 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-nt8h7" event=&{ID:3e3e7655-5c60-4995-9a23-b32843026a6e Type:ContainerStarted Data:fd68cf1fa283ceb74c541d1c987b44f858cdc0db2e4c735d347809c327dd6732} Feb 23 17:14:33 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:33.004414 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-nt8h7" event=&{ID:3e3e7655-5c60-4995-9a23-b32843026a6e Type:ContainerStarted Data:7a5b3c6af2511fc3bd1c0e75f8f2e6a03ff01d4057f07085043b19c30d740de4} Feb 23 17:14:37 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:37.275358 2112 patch_prober.go:29] interesting pod/router-default-77f788594f-j5twb container/router namespace/openshift-ingress: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld 
Feb 23 17:14:37 ip-10-0-136-68 kubenswrapper[2112]: [+]has-synced ok Feb 23 17:14:37 ip-10-0-136-68 kubenswrapper[2112]: [-]process-running failed: reason withheld Feb 23 17:14:37 ip-10-0-136-68 kubenswrapper[2112]: healthz check failed Feb 23 17:14:37 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:37.275429 2112 prober.go:114] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-77f788594f-j5twb" podUID=e7ec9547-ee4c-4966-997f-719d78dcc31b containerName="router" probeResult=failure output="HTTP probe failed with statuscode: 500" Feb 23 17:14:39 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:39.750887489Z" level=warning msg="Found defunct process with PID 59723 (haproxy)" Feb 23 17:14:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:41.958158 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-image-registry/image-registry-5f79c9c848-8klpv" podUID=84367d42-9f7a-49fb-9aab-aa7bc958829f containerName="registry" containerID="cri-o://d5fe21f86e0e9fb8b9c0153e9789163d315360a1ed79f9bcfb0e4a1e07a79efd" gracePeriod=55 Feb 23 17:14:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:41.958376966Z" level=info msg="Stopping container: d5fe21f86e0e9fb8b9c0153e9789163d315360a1ed79f9bcfb0e4a1e07a79efd (timeout: 55s)" id=404a7d19-5111-465d-9f4b-79210c72b7fd name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:14:41 ip-10-0-136-68 systemd[1]: crio-d5fe21f86e0e9fb8b9c0153e9789163d315360a1ed79f9bcfb0e4a1e07a79efd.scope: Succeeded. Feb 23 17:14:41 ip-10-0-136-68 systemd[1]: crio-d5fe21f86e0e9fb8b9c0153e9789163d315360a1ed79f9bcfb0e4a1e07a79efd.scope: Consumed 3.548s CPU time Feb 23 17:14:41 ip-10-0-136-68 systemd[1]: crio-conmon-d5fe21f86e0e9fb8b9c0153e9789163d315360a1ed79f9bcfb0e4a1e07a79efd.scope: Succeeded. 
Feb 23 17:14:41 ip-10-0-136-68 systemd[1]: crio-conmon-d5fe21f86e0e9fb8b9c0153e9789163d315360a1ed79f9bcfb0e4a1e07a79efd.scope: Consumed 76ms CPU time Feb 23 17:14:42 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-5d2f7bb82816d0f4cd4907128106730e10b6fafbdd7030075763d7cb2c0a4e41-merged.mount: Succeeded. Feb 23 17:14:42 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-5d2f7bb82816d0f4cd4907128106730e10b6fafbdd7030075763d7cb2c0a4e41-merged.mount: Consumed 0 CPU time Feb 23 17:14:42 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:42.154114815Z" level=info msg="Stopped container d5fe21f86e0e9fb8b9c0153e9789163d315360a1ed79f9bcfb0e4a1e07a79efd: openshift-image-registry/image-registry-5f79c9c848-8klpv/registry" id=404a7d19-5111-465d-9f4b-79210c72b7fd name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:14:42 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:42.154349809Z" level=info msg="Stopping pod sandbox: d4785ceb1d1a7387c9154f23fc0e0c34390aa6cca85448aef2cbe0911d40d552" id=f8f6d80e-22ae-4910-b936-775dc847b0fb name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:14:42 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:42.154538870Z" level=info msg="Got pod network &{Name:image-registry-5f79c9c848-8klpv Namespace:openshift-image-registry ID:d4785ceb1d1a7387c9154f23fc0e0c34390aa6cca85448aef2cbe0911d40d552 UID:84367d42-9f7a-49fb-9aab-aa7bc958829f NetNS:/var/run/netns/17a4187f-d070-4d4c-b6a9-e7b96c85befa Networks:[{Name:multus-cni-network Ifname:eth0}] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:14:42 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:42.154673199Z" level=info msg="Deleting pod openshift-image-registry_image-registry-5f79c9c848-8klpv from CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:14:42 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00329|bridge|INFO|bridge br-int: deleted interface d4785ceb1d1a738 on port 17 Feb 23 17:14:42 ip-10-0-136-68 
kernel: device d4785ceb1d1a738 left promiscuous mode Feb 23 17:14:42 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00330|connmgr|INFO|br-ex<->unix#1178: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:14:42 ip-10-0-136-68 crio[2062]: 2023-02-23T17:14:42Z [verbose] Del: openshift-image-registry:image-registry-5f79c9c848-8klpv:84367d42-9f7a-49fb-9aab-aa7bc958829f:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","dns":{},"ipam":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxage":5,"logfile-maxbackups":5,"logfile-maxsize":100,"name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay"} Feb 23 17:14:42 ip-10-0-136-68 crio[2062]: I0223 17:14:42.295880 60435 ovs.go:90] Maximum command line arguments set to: 191102 Feb 23 17:14:42 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-553818f6d06e3d878bbfcc6275c2e1ad711efd9e7820534942ae0197c7f30fa0-merged.mount: Succeeded. Feb 23 17:14:42 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-553818f6d06e3d878bbfcc6275c2e1ad711efd9e7820534942ae0197c7f30fa0-merged.mount: Consumed 0 CPU time Feb 23 17:14:42 ip-10-0-136-68 systemd[1]: run-utsns-17a4187f\x2dd070\x2d4d4c\x2db6a9\x2de7b96c85befa.mount: Succeeded. Feb 23 17:14:42 ip-10-0-136-68 systemd[1]: run-utsns-17a4187f\x2dd070\x2d4d4c\x2db6a9\x2de7b96c85befa.mount: Consumed 0 CPU time Feb 23 17:14:42 ip-10-0-136-68 systemd[1]: run-ipcns-17a4187f\x2dd070\x2d4d4c\x2db6a9\x2de7b96c85befa.mount: Succeeded. Feb 23 17:14:42 ip-10-0-136-68 systemd[1]: run-ipcns-17a4187f\x2dd070\x2d4d4c\x2db6a9\x2de7b96c85befa.mount: Consumed 0 CPU time Feb 23 17:14:42 ip-10-0-136-68 systemd[1]: run-netns-17a4187f\x2dd070\x2d4d4c\x2db6a9\x2de7b96c85befa.mount: Succeeded. 
Feb 23 17:14:42 ip-10-0-136-68 systemd[1]: run-netns-17a4187f\x2dd070\x2d4d4c\x2db6a9\x2de7b96c85befa.mount: Consumed 0 CPU time Feb 23 17:14:42 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:42.818791341Z" level=info msg="Stopped pod sandbox: d4785ceb1d1a7387c9154f23fc0e0c34390aa6cca85448aef2cbe0911d40d552" id=f8f6d80e-22ae-4910-b936-775dc847b0fb name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:14:42 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:14:42.904376 2112 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease Feb 23 17:14:43 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:43.022352 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/84367d42-9f7a-49fb-9aab-aa7bc958829f-trusted-ca\") pod \"84367d42-9f7a-49fb-9aab-aa7bc958829f\" (UID: \"84367d42-9f7a-49fb-9aab-aa7bc958829f\") " Feb 23 17:14:43 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:43.022415 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjhwd\" (UniqueName: \"kubernetes.io/projected/84367d42-9f7a-49fb-9aab-aa7bc958829f-kube-api-access-sjhwd\") pod \"84367d42-9f7a-49fb-9aab-aa7bc958829f\" (UID: \"84367d42-9f7a-49fb-9aab-aa7bc958829f\") " Feb 23 17:14:43 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:43.022452 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/84367d42-9f7a-49fb-9aab-aa7bc958829f-image-registry-private-configuration\") pod \"84367d42-9f7a-49fb-9aab-aa7bc958829f\" (UID: \"84367d42-9f7a-49fb-9aab-aa7bc958829f\") " Feb 23 17:14:43 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:43.022487 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/84367d42-9f7a-49fb-9aab-aa7bc958829f-ca-trust-extracted\") pod \"84367d42-9f7a-49fb-9aab-aa7bc958829f\" (UID: 
\"84367d42-9f7a-49fb-9aab-aa7bc958829f\") " Feb 23 17:14:43 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:43.022515 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/84367d42-9f7a-49fb-9aab-aa7bc958829f-registry-certificates\") pod \"84367d42-9f7a-49fb-9aab-aa7bc958829f\" (UID: \"84367d42-9f7a-49fb-9aab-aa7bc958829f\") " Feb 23 17:14:43 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:43.022547 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/84367d42-9f7a-49fb-9aab-aa7bc958829f-installation-pull-secrets\") pod \"84367d42-9f7a-49fb-9aab-aa7bc958829f\" (UID: \"84367d42-9f7a-49fb-9aab-aa7bc958829f\") " Feb 23 17:14:43 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:43.022581 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/84367d42-9f7a-49fb-9aab-aa7bc958829f-registry-tls\") pod \"84367d42-9f7a-49fb-9aab-aa7bc958829f\" (UID: \"84367d42-9f7a-49fb-9aab-aa7bc958829f\") " Feb 23 17:14:43 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:43.022607 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/84367d42-9f7a-49fb-9aab-aa7bc958829f-bound-sa-token\") pod \"84367d42-9f7a-49fb-9aab-aa7bc958829f\" (UID: \"84367d42-9f7a-49fb-9aab-aa7bc958829f\") " Feb 23 17:14:43 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:14:43.022696 2112 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/84367d42-9f7a-49fb-9aab-aa7bc958829f/volumes/kubernetes.io~configmap/trusted-ca: clearQuota called, but quotas disabled Feb 23 17:14:43 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:43.023011 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/84367d42-9f7a-49fb-9aab-aa7bc958829f-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "84367d42-9f7a-49fb-9aab-aa7bc958829f" (UID: "84367d42-9f7a-49fb-9aab-aa7bc958829f"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:14:43 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:14:43.023489 2112 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/84367d42-9f7a-49fb-9aab-aa7bc958829f/volumes/kubernetes.io~empty-dir/ca-trust-extracted: clearQuota called, but quotas disabled Feb 23 17:14:43 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:43.024156 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84367d42-9f7a-49fb-9aab-aa7bc958829f-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "84367d42-9f7a-49fb-9aab-aa7bc958829f" (UID: "84367d42-9f7a-49fb-9aab-aa7bc958829f"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:14:43 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:43.024318 2112 reconciler.go:399] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/84367d42-9f7a-49fb-9aab-aa7bc958829f-trusted-ca\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:14:43 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:43.024340 2112 reconciler.go:399] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/84367d42-9f7a-49fb-9aab-aa7bc958829f-ca-trust-extracted\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:14:43 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:14:43.024995 2112 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/84367d42-9f7a-49fb-9aab-aa7bc958829f/volumes/kubernetes.io~configmap/registry-certificates: clearQuota called, but quotas disabled Feb 23 17:14:43 ip-10-0-136-68 kubenswrapper[2112]: I0223 
17:14:43.025250 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84367d42-9f7a-49fb-9aab-aa7bc958829f-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "84367d42-9f7a-49fb-9aab-aa7bc958829f" (UID: "84367d42-9f7a-49fb-9aab-aa7bc958829f"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:14:43 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:43.029001 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84367d42-9f7a-49fb-9aab-aa7bc958829f-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "84367d42-9f7a-49fb-9aab-aa7bc958829f" (UID: "84367d42-9f7a-49fb-9aab-aa7bc958829f"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:14:43 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:43.029150 2112 generic.go:296] "Generic (PLEG): container finished" podID=84367d42-9f7a-49fb-9aab-aa7bc958829f containerID="d5fe21f86e0e9fb8b9c0153e9789163d315360a1ed79f9bcfb0e4a1e07a79efd" exitCode=0 Feb 23 17:14:43 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:43.029183 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5f79c9c848-8klpv" event=&{ID:84367d42-9f7a-49fb-9aab-aa7bc958829f Type:ContainerDied Data:d5fe21f86e0e9fb8b9c0153e9789163d315360a1ed79f9bcfb0e4a1e07a79efd} Feb 23 17:14:43 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:43.029208 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5f79c9c848-8klpv" event=&{ID:84367d42-9f7a-49fb-9aab-aa7bc958829f Type:ContainerDied Data:d4785ceb1d1a7387c9154f23fc0e0c34390aa6cca85448aef2cbe0911d40d552} Feb 23 17:14:43 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:43.029227 2112 scope.go:115] "RemoveContainer" 
containerID="d5fe21f86e0e9fb8b9c0153e9789163d315360a1ed79f9bcfb0e4a1e07a79efd" Feb 23 17:14:43 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:43.032172 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84367d42-9f7a-49fb-9aab-aa7bc958829f-kube-api-access-sjhwd" (OuterVolumeSpecName: "kube-api-access-sjhwd") pod "84367d42-9f7a-49fb-9aab-aa7bc958829f" (UID: "84367d42-9f7a-49fb-9aab-aa7bc958829f"). InnerVolumeSpecName "kube-api-access-sjhwd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:14:43 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:43.032975 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84367d42-9f7a-49fb-9aab-aa7bc958829f-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "84367d42-9f7a-49fb-9aab-aa7bc958829f" (UID: "84367d42-9f7a-49fb-9aab-aa7bc958829f"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:14:43 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:43.033225755Z" level=info msg="Removing container: d5fe21f86e0e9fb8b9c0153e9789163d315360a1ed79f9bcfb0e4a1e07a79efd" id=caa50359-53f0-4c20-ba68-3f85980707a8 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:14:43 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:43.036057 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84367d42-9f7a-49fb-9aab-aa7bc958829f-image-registry-private-configuration" (OuterVolumeSpecName: "image-registry-private-configuration") pod "84367d42-9f7a-49fb-9aab-aa7bc958829f" (UID: "84367d42-9f7a-49fb-9aab-aa7bc958829f"). InnerVolumeSpecName "image-registry-private-configuration". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:14:43 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:43.045499 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84367d42-9f7a-49fb-9aab-aa7bc958829f-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "84367d42-9f7a-49fb-9aab-aa7bc958829f" (UID: "84367d42-9f7a-49fb-9aab-aa7bc958829f"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:14:43 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:43.062850223Z" level=info msg="Removed container d5fe21f86e0e9fb8b9c0153e9789163d315360a1ed79f9bcfb0e4a1e07a79efd: openshift-image-registry/image-registry-5f79c9c848-8klpv/registry" id=caa50359-53f0-4c20-ba68-3f85980707a8 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:14:43 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:43.063027 2112 scope.go:115] "RemoveContainer" containerID="d5fe21f86e0e9fb8b9c0153e9789163d315360a1ed79f9bcfb0e4a1e07a79efd" Feb 23 17:14:43 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:14:43.063267 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5fe21f86e0e9fb8b9c0153e9789163d315360a1ed79f9bcfb0e4a1e07a79efd\": container with ID starting with d5fe21f86e0e9fb8b9c0153e9789163d315360a1ed79f9bcfb0e4a1e07a79efd not found: ID does not exist" containerID="d5fe21f86e0e9fb8b9c0153e9789163d315360a1ed79f9bcfb0e4a1e07a79efd" Feb 23 17:14:43 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:43.063307 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:d5fe21f86e0e9fb8b9c0153e9789163d315360a1ed79f9bcfb0e4a1e07a79efd} err="failed to get container status \"d5fe21f86e0e9fb8b9c0153e9789163d315360a1ed79f9bcfb0e4a1e07a79efd\": rpc error: code = NotFound desc = could not find container \"d5fe21f86e0e9fb8b9c0153e9789163d315360a1ed79f9bcfb0e4a1e07a79efd\": container 
with ID starting with d5fe21f86e0e9fb8b9c0153e9789163d315360a1ed79f9bcfb0e4a1e07a79efd not found: ID does not exist" Feb 23 17:14:43 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:43.125274 2112 reconciler.go:399] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/84367d42-9f7a-49fb-9aab-aa7bc958829f-registry-tls\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:14:43 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:43.125309 2112 reconciler.go:399] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/84367d42-9f7a-49fb-9aab-aa7bc958829f-bound-sa-token\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:14:43 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:43.125326 2112 reconciler.go:399] "Volume detached for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/84367d42-9f7a-49fb-9aab-aa7bc958829f-image-registry-private-configuration\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:14:43 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:43.125343 2112 reconciler.go:399] "Volume detached for volume \"kube-api-access-sjhwd\" (UniqueName: \"kubernetes.io/projected/84367d42-9f7a-49fb-9aab-aa7bc958829f-kube-api-access-sjhwd\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:14:43 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:43.125356 2112 reconciler.go:399] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/84367d42-9f7a-49fb-9aab-aa7bc958829f-registry-certificates\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:14:43 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:43.125374 2112 reconciler.go:399] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/84367d42-9f7a-49fb-9aab-aa7bc958829f-installation-pull-secrets\") on node 
\"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:14:43 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-d4785ceb1d1a7387c9154f23fc0e0c34390aa6cca85448aef2cbe0911d40d552-userdata-shm.mount: Succeeded. Feb 23 17:14:43 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-d4785ceb1d1a7387c9154f23fc0e0c34390aa6cca85448aef2cbe0911d40d552-userdata-shm.mount: Consumed 0 CPU time Feb 23 17:14:43 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-84367d42\x2d9f7a\x2d49fb\x2d9aab\x2daa7bc958829f-volumes-kubernetes.io\x7eprojected-bound\x2dsa\x2dtoken.mount: Succeeded. Feb 23 17:14:43 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-84367d42\x2d9f7a\x2d49fb\x2d9aab\x2daa7bc958829f-volumes-kubernetes.io\x7eprojected-bound\x2dsa\x2dtoken.mount: Consumed 0 CPU time Feb 23 17:14:43 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-84367d42\x2d9f7a\x2d49fb\x2d9aab\x2daa7bc958829f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsjhwd.mount: Succeeded. Feb 23 17:14:43 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-84367d42\x2d9f7a\x2d49fb\x2d9aab\x2daa7bc958829f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsjhwd.mount: Consumed 0 CPU time Feb 23 17:14:43 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-84367d42\x2d9f7a\x2d49fb\x2d9aab\x2daa7bc958829f-volumes-kubernetes.io\x7eprojected-registry\x2dtls.mount: Succeeded. Feb 23 17:14:43 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-84367d42\x2d9f7a\x2d49fb\x2d9aab\x2daa7bc958829f-volumes-kubernetes.io\x7eprojected-registry\x2dtls.mount: Consumed 0 CPU time Feb 23 17:14:43 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-84367d42\x2d9f7a\x2d49fb\x2d9aab\x2daa7bc958829f-volumes-kubernetes.io\x7esecret-installation\x2dpull\x2dsecrets.mount: Succeeded. 
Feb 23 17:14:43 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-84367d42\x2d9f7a\x2d49fb\x2d9aab\x2daa7bc958829f-volumes-kubernetes.io\x7esecret-installation\x2dpull\x2dsecrets.mount: Consumed 0 CPU time Feb 23 17:14:43 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-84367d42\x2d9f7a\x2d49fb\x2d9aab\x2daa7bc958829f-volumes-kubernetes.io\x7esecret-image\x2dregistry\x2dprivate\x2dconfiguration.mount: Succeeded. Feb 23 17:14:43 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-84367d42\x2d9f7a\x2d49fb\x2d9aab\x2daa7bc958829f-volumes-kubernetes.io\x7esecret-image\x2dregistry\x2dprivate\x2dconfiguration.mount: Consumed 0 CPU time Feb 23 17:14:43 ip-10-0-136-68 systemd[1]: Removed slice libcontainer container kubepods-burstable-pod84367d42_9f7a_49fb_9aab_aa7bc958829f.slice. Feb 23 17:14:43 ip-10-0-136-68 systemd[1]: kubepods-burstable-pod84367d42_9f7a_49fb_9aab_aa7bc958829f.slice: Consumed 3.624s CPU time Feb 23 17:14:43 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:43.348018 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-image-registry/image-registry-5f79c9c848-8klpv] Feb 23 17:14:43 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:43.351864 2112 kubelet.go:2129] "SyncLoop REMOVE" source="api" pods=[openshift-image-registry/image-registry-5f79c9c848-8klpv] Feb 23 17:14:44 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:44.117631 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-image-registry/image-registry-5f79c9c848-8klpv" podUID=84367d42-9f7a-49fb-9aab-aa7bc958829f containerName="registry" containerID="cri-o://d5fe21f86e0e9fb8b9c0153e9789163d315360a1ed79f9bcfb0e4a1e07a79efd" gracePeriod=1 Feb 23 17:14:44 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:14:44.118093 2112 remote_runtime.go:505] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5fe21f86e0e9fb8b9c0153e9789163d315360a1ed79f9bcfb0e4a1e07a79efd\": container with ID starting with 
d5fe21f86e0e9fb8b9c0153e9789163d315360a1ed79f9bcfb0e4a1e07a79efd not found: ID does not exist" containerID="d5fe21f86e0e9fb8b9c0153e9789163d315360a1ed79f9bcfb0e4a1e07a79efd" Feb 23 17:14:44 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:44.117852868Z" level=info msg="Stopping container: d5fe21f86e0e9fb8b9c0153e9789163d315360a1ed79f9bcfb0e4a1e07a79efd (timeout: 1s)" id=79bacf1b-310a-4a06-bfa1-9a0bef6e65ff name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:14:44 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:44.118254112Z" level=info msg="Stopping pod sandbox: d4785ceb1d1a7387c9154f23fc0e0c34390aa6cca85448aef2cbe0911d40d552" id=6501902d-ec1f-400c-b0e3-b498bdad088f name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:14:44 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:44.118282098Z" level=info msg="Stopped pod sandbox (already stopped): d4785ceb1d1a7387c9154f23fc0e0c34390aa6cca85448aef2cbe0911d40d552" id=6501902d-ec1f-400c-b0e3-b498bdad088f name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:14:44 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:44.119121 2112 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=84367d42-9f7a-49fb-9aab-aa7bc958829f path="/var/lib/kubelet/pods/84367d42-9f7a-49fb-9aab-aa7bc958829f/volumes" Feb 23 17:14:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:46.146283778Z" level=info msg="Stopping pod sandbox: 94ff293ae607edbdb70c5e4b25cc7cf3f7e6fe35e3aa183bda9cd3e8a5654a24" id=7ea2eff9-803b-4a17-bee3-44a49f2166f1 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:14:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:46.146321195Z" level=info msg="Stopped pod sandbox (already stopped): 94ff293ae607edbdb70c5e4b25cc7cf3f7e6fe35e3aa183bda9cd3e8a5654a24" id=7ea2eff9-803b-4a17-bee3-44a49f2166f1 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:14:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:46.146501012Z" level=info msg="Removing pod sandbox: 
94ff293ae607edbdb70c5e4b25cc7cf3f7e6fe35e3aa183bda9cd3e8a5654a24" id=367a0fd4-7ad6-4a3d-a378-a8948f607131 name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 17:14:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:46.154829949Z" level=info msg="Removed pod sandbox: 94ff293ae607edbdb70c5e4b25cc7cf3f7e6fe35e3aa183bda9cd3e8a5654a24" id=367a0fd4-7ad6-4a3d-a378-a8948f607131 name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 17:14:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:46.155312077Z" level=info msg="Stopping pod sandbox: fbdafac49bb2dff4d3e8095f9b5b04869d1bd15bc4f7fdc96a96e2c9d058d97f" id=eec89279-4ff6-48c9-a259-e7bf0bddf37c name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:14:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:46.155341195Z" level=info msg="Stopped pod sandbox (already stopped): fbdafac49bb2dff4d3e8095f9b5b04869d1bd15bc4f7fdc96a96e2c9d058d97f" id=eec89279-4ff6-48c9-a259-e7bf0bddf37c name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:14:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:46.155557666Z" level=info msg="Removing pod sandbox: fbdafac49bb2dff4d3e8095f9b5b04869d1bd15bc4f7fdc96a96e2c9d058d97f" id=d5b8d7f5-746d-433b-89a3-cff7f3ab0c48 name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 17:14:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:46.163088715Z" level=info msg="Removed pod sandbox: fbdafac49bb2dff4d3e8095f9b5b04869d1bd15bc4f7fdc96a96e2c9d058d97f" id=d5b8d7f5-746d-433b-89a3-cff7f3ab0c48 name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 17:14:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:46.163309095Z" level=info msg="Stopping pod sandbox: d4785ceb1d1a7387c9154f23fc0e0c34390aa6cca85448aef2cbe0911d40d552" id=14eafbbe-35de-4d37-baeb-821c997d78fd name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:14:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:46.163340837Z" level=info msg="Stopped pod sandbox (already stopped): 
d4785ceb1d1a7387c9154f23fc0e0c34390aa6cca85448aef2cbe0911d40d552" id=14eafbbe-35de-4d37-baeb-821c997d78fd name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:14:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:46.163495002Z" level=info msg="Removing pod sandbox: d4785ceb1d1a7387c9154f23fc0e0c34390aa6cca85448aef2cbe0911d40d552" id=17cc2107-3f39-44c9-8d6e-89e835ace111 name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 17:14:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:46.170196549Z" level=info msg="Removed pod sandbox: d4785ceb1d1a7387c9154f23fc0e0c34390aa6cca85448aef2cbe0911d40d552" id=17cc2107-3f39-44c9-8d6e-89e835ace111 name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 17:14:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:46.170407483Z" level=info msg="Stopping pod sandbox: 7dbdc33b9d16d63ae4640580522adb732200c4a8f3f79a0430a98b7d876633f6" id=debaf043-3cf2-439e-9c82-14c83c8bddac name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:14:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:46.170446419Z" level=info msg="Stopped pod sandbox (already stopped): 7dbdc33b9d16d63ae4640580522adb732200c4a8f3f79a0430a98b7d876633f6" id=debaf043-3cf2-439e-9c82-14c83c8bddac name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:14:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:46.170607534Z" level=info msg="Removing pod sandbox: 7dbdc33b9d16d63ae4640580522adb732200c4a8f3f79a0430a98b7d876633f6" id=07bedd94-55c6-4957-a545-127f950b92b6 name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 17:14:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:46.177992390Z" level=info msg="Removed pod sandbox: 7dbdc33b9d16d63ae4640580522adb732200c4a8f3f79a0430a98b7d876633f6" id=07bedd94-55c6-4957-a545-127f950b92b6 name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 17:14:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:46.178227535Z" level=info msg="Stopping pod sandbox: ea37c47cff30008daba4dae0a9953293135bfac80990a6502b845d6c4a82eddc" 
id=e99b13d8-9669-4f62-9cf1-6714fa393e03 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:14:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:46.178253651Z" level=info msg="Stopped pod sandbox (already stopped): ea37c47cff30008daba4dae0a9953293135bfac80990a6502b845d6c4a82eddc" id=e99b13d8-9669-4f62-9cf1-6714fa393e03 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:14:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:46.178417295Z" level=info msg="Removing pod sandbox: ea37c47cff30008daba4dae0a9953293135bfac80990a6502b845d6c4a82eddc" id=b3be5b6e-dfd3-436b-8e0f-bf2a406753aa name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 17:14:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:46.185853967Z" level=info msg="Removed pod sandbox: ea37c47cff30008daba4dae0a9953293135bfac80990a6502b845d6c4a82eddc" id=b3be5b6e-dfd3-436b-8e0f-bf2a406753aa name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 17:14:46 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:14:46.187422 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4820220c599fd37621891944ed9fe141cf1c5d64afe7fb4083404f96a1f62593\": container with ID starting with 4820220c599fd37621891944ed9fe141cf1c5d64afe7fb4083404f96a1f62593 not found: ID does not exist" containerID="4820220c599fd37621891944ed9fe141cf1c5d64afe7fb4083404f96a1f62593" Feb 23 17:14:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:46.187461 2112 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="4820220c599fd37621891944ed9fe141cf1c5d64afe7fb4083404f96a1f62593" err="rpc error: code = NotFound desc = could not find container \"4820220c599fd37621891944ed9fe141cf1c5d64afe7fb4083404f96a1f62593\": container with ID starting with 4820220c599fd37621891944ed9fe141cf1c5d64afe7fb4083404f96a1f62593 not found: ID does not exist" Feb 23 17:14:46 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:14:46.187696 2112 
remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9f55e4ed6d7420444ee25c54cefa43c8e8ef15d1200f4a5598c4041c60cf6e6\": container with ID starting with e9f55e4ed6d7420444ee25c54cefa43c8e8ef15d1200f4a5598c4041c60cf6e6 not found: ID does not exist" containerID="e9f55e4ed6d7420444ee25c54cefa43c8e8ef15d1200f4a5598c4041c60cf6e6" Feb 23 17:14:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:46.187721 2112 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="e9f55e4ed6d7420444ee25c54cefa43c8e8ef15d1200f4a5598c4041c60cf6e6" err="rpc error: code = NotFound desc = could not find container \"e9f55e4ed6d7420444ee25c54cefa43c8e8ef15d1200f4a5598c4041c60cf6e6\": container with ID starting with e9f55e4ed6d7420444ee25c54cefa43c8e8ef15d1200f4a5598c4041c60cf6e6 not found: ID does not exist" Feb 23 17:14:46 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:14:46.187942 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14c03d26750fe00987600f26e5fde9507871bca7fdd8d2fbe974f06debddb2bb\": container with ID starting with 14c03d26750fe00987600f26e5fde9507871bca7fdd8d2fbe974f06debddb2bb not found: ID does not exist" containerID="14c03d26750fe00987600f26e5fde9507871bca7fdd8d2fbe974f06debddb2bb" Feb 23 17:14:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:46.187969 2112 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="14c03d26750fe00987600f26e5fde9507871bca7fdd8d2fbe974f06debddb2bb" err="rpc error: code = NotFound desc = could not find container \"14c03d26750fe00987600f26e5fde9507871bca7fdd8d2fbe974f06debddb2bb\": container with ID starting with 14c03d26750fe00987600f26e5fde9507871bca7fdd8d2fbe974f06debddb2bb not found: ID does not exist" Feb 23 17:14:46 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:14:46.188198 2112 remote_runtime.go:625] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7959419652f5b24a8403d86d3e316ec0f4bfc4e05009d430020eca12b89b14d\": container with ID starting with e7959419652f5b24a8403d86d3e316ec0f4bfc4e05009d430020eca12b89b14d not found: ID does not exist" containerID="e7959419652f5b24a8403d86d3e316ec0f4bfc4e05009d430020eca12b89b14d" Feb 23 17:14:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:46.188221 2112 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="e7959419652f5b24a8403d86d3e316ec0f4bfc4e05009d430020eca12b89b14d" err="rpc error: code = NotFound desc = could not find container \"e7959419652f5b24a8403d86d3e316ec0f4bfc4e05009d430020eca12b89b14d\": container with ID starting with e7959419652f5b24a8403d86d3e316ec0f4bfc4e05009d430020eca12b89b14d not found: ID does not exist" Feb 23 17:14:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:47.275651 2112 patch_prober.go:29] interesting pod/router-default-77f788594f-j5twb container/router namespace/openshift-ingress: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Feb 23 17:14:47 ip-10-0-136-68 kubenswrapper[2112]: [+]has-synced ok Feb 23 17:14:47 ip-10-0-136-68 kubenswrapper[2112]: [-]process-running failed: reason withheld Feb 23 17:14:47 ip-10-0-136-68 kubenswrapper[2112]: healthz check failed Feb 23 17:14:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:47.275732 2112 prober.go:114] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-77f788594f-j5twb" podUID=e7ec9547-ee4c-4966-997f-719d78dcc31b containerName="router" probeResult=failure output="HTTP probe failed with statuscode: 500" Feb 23 17:14:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:47.737434596Z" level=warning msg="Found defunct process with PID 60025 (haproxy)" Feb 23 17:14:56 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00331|connmgr|INFO|br-int<->unix#2: 359 
flow_mods in the 57 s starting 58 s ago (143 adds, 216 deletes) Feb 23 17:14:57 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:57.275517 2112 patch_prober.go:29] interesting pod/router-default-77f788594f-j5twb container/router namespace/openshift-ingress: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Feb 23 17:14:57 ip-10-0-136-68 kubenswrapper[2112]: [+]has-synced ok Feb 23 17:14:57 ip-10-0-136-68 kubenswrapper[2112]: [-]process-running failed: reason withheld Feb 23 17:14:57 ip-10-0-136-68 kubenswrapper[2112]: healthz check failed Feb 23 17:14:57 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:57.275574 2112 prober.go:114] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-77f788594f-j5twb" podUID=e7ec9547-ee4c-4966-997f-719d78dcc31b containerName="router" probeResult=failure output="HTTP probe failed with statuscode: 500" Feb 23 17:14:57 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00332|connmgr|INFO|br-ex<->unix#1186: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:14:58 ip-10-0-136-68 systemd[1]: crio-aa5e7c96b987acbdee2223e102d68db343b44e03e5112322e17f7cd3f959eed4.scope: Succeeded. Feb 23 17:14:58 ip-10-0-136-68 systemd[1]: crio-aa5e7c96b987acbdee2223e102d68db343b44e03e5112322e17f7cd3f959eed4.scope: Consumed 7.714s CPU time Feb 23 17:14:58 ip-10-0-136-68 systemd[1]: crio-conmon-aa5e7c96b987acbdee2223e102d68db343b44e03e5112322e17f7cd3f959eed4.scope: Succeeded. Feb 23 17:14:58 ip-10-0-136-68 systemd[1]: crio-conmon-aa5e7c96b987acbdee2223e102d68db343b44e03e5112322e17f7cd3f959eed4.scope: Consumed 25ms CPU time Feb 23 17:14:58 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-35c8c82a7a826c3fc9416d32091e8811fef58ff8edc375909f8136560d9c4885-merged.mount: Succeeded. 
Feb 23 17:14:58 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-35c8c82a7a826c3fc9416d32091e8811fef58ff8edc375909f8136560d9c4885-merged.mount: Consumed 0 CPU time Feb 23 17:14:58 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:58.923020227Z" level=info msg="Stopped container aa5e7c96b987acbdee2223e102d68db343b44e03e5112322e17f7cd3f959eed4: openshift-ingress/router-default-77f788594f-j5twb/router" id=192de3c3-da02-4211-b377-b142f9983999 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:14:58 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:58.923423412Z" level=info msg="Stopping pod sandbox: 259ac42a2037be7ffdb933e1dab0fbcfd7804c32d742695f5c6e1e4f9e2767c0" id=036f90b5-e897-4a00-b4ff-2ffe0b9207c1 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:14:58 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:58.923634877Z" level=info msg="Got pod network &{Name:router-default-77f788594f-j5twb Namespace:openshift-ingress ID:259ac42a2037be7ffdb933e1dab0fbcfd7804c32d742695f5c6e1e4f9e2767c0 UID:e7ec9547-ee4c-4966-997f-719d78dcc31b NetNS:/var/run/netns/8f6edfdf-3077-4118-8c21-14cdffcf5d50 Networks:[{Name:multus-cni-network Ifname:eth0}] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:14:58 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:58.923795367Z" level=info msg="Deleting pod openshift-ingress_router-default-77f788594f-j5twb from CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:14:59 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:59.062448 2112 generic.go:296] "Generic (PLEG): container finished" podID=e7ec9547-ee4c-4966-997f-719d78dcc31b containerID="aa5e7c96b987acbdee2223e102d68db343b44e03e5112322e17f7cd3f959eed4" exitCode=0 Feb 23 17:14:59 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:59.062481 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-77f788594f-j5twb" event=&{ID:e7ec9547-ee4c-4966-997f-719d78dcc31b 
Type:ContainerDied Data:aa5e7c96b987acbdee2223e102d68db343b44e03e5112322e17f7cd3f959eed4} Feb 23 17:14:59 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00333|bridge|INFO|bridge br-int: deleted interface 259ac42a2037be7 on port 16 Feb 23 17:14:59 ip-10-0-136-68 kernel: device 259ac42a2037be7 left promiscuous mode Feb 23 17:14:59 ip-10-0-136-68 crio[2062]: 2023-02-23T17:14:58Z [verbose] Del: openshift-ingress:router-default-77f788594f-j5twb:e7ec9547-ee4c-4966-997f-719d78dcc31b:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","dns":{},"ipam":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxage":5,"logfile-maxbackups":5,"logfile-maxsize":100,"name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay"} Feb 23 17:14:59 ip-10-0-136-68 crio[2062]: I0223 17:14:59.058779 60731 ovs.go:90] Maximum command line arguments set to: 191102 Feb 23 17:14:59 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-8bd969b97c08c31f4b1cb849b260a403032699f3dd5f9980833d5c0f91d139bb-merged.mount: Succeeded. Feb 23 17:14:59 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-8bd969b97c08c31f4b1cb849b260a403032699f3dd5f9980833d5c0f91d139bb-merged.mount: Consumed 0 CPU time Feb 23 17:14:59 ip-10-0-136-68 systemd[1]: run-utsns-8f6edfdf\x2d3077\x2d4118\x2d8c21\x2d14cdffcf5d50.mount: Succeeded. Feb 23 17:14:59 ip-10-0-136-68 systemd[1]: run-utsns-8f6edfdf\x2d3077\x2d4118\x2d8c21\x2d14cdffcf5d50.mount: Consumed 0 CPU time Feb 23 17:14:59 ip-10-0-136-68 systemd[1]: run-ipcns-8f6edfdf\x2d3077\x2d4118\x2d8c21\x2d14cdffcf5d50.mount: Succeeded. Feb 23 17:14:59 ip-10-0-136-68 systemd[1]: run-ipcns-8f6edfdf\x2d3077\x2d4118\x2d8c21\x2d14cdffcf5d50.mount: Consumed 0 CPU time Feb 23 17:14:59 ip-10-0-136-68 systemd[1]: run-netns-8f6edfdf\x2d3077\x2d4118\x2d8c21\x2d14cdffcf5d50.mount: Succeeded. 
Feb 23 17:14:59 ip-10-0-136-68 systemd[1]: run-netns-8f6edfdf\x2d3077\x2d4118\x2d8c21\x2d14cdffcf5d50.mount: Consumed 0 CPU time Feb 23 17:14:59 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:14:59.635822327Z" level=info msg="Stopped pod sandbox: 259ac42a2037be7ffdb933e1dab0fbcfd7804c32d742695f5c6e1e4f9e2767c0" id=036f90b5-e897-4a00-b4ff-2ffe0b9207c1 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:14:59 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:59.733690 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/e7ec9547-ee4c-4966-997f-719d78dcc31b-default-certificate\") pod \"e7ec9547-ee4c-4966-997f-719d78dcc31b\" (UID: \"e7ec9547-ee4c-4966-997f-719d78dcc31b\") " Feb 23 17:14:59 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:59.733772 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfgv8\" (UniqueName: \"kubernetes.io/projected/e7ec9547-ee4c-4966-997f-719d78dcc31b-kube-api-access-jfgv8\") pod \"e7ec9547-ee4c-4966-997f-719d78dcc31b\" (UID: \"e7ec9547-ee4c-4966-997f-719d78dcc31b\") " Feb 23 17:14:59 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:59.733812 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e7ec9547-ee4c-4966-997f-719d78dcc31b-metrics-certs\") pod \"e7ec9547-ee4c-4966-997f-719d78dcc31b\" (UID: \"e7ec9547-ee4c-4966-997f-719d78dcc31b\") " Feb 23 17:14:59 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:59.733838 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7ec9547-ee4c-4966-997f-719d78dcc31b-service-ca-bundle\") pod \"e7ec9547-ee4c-4966-997f-719d78dcc31b\" (UID: \"e7ec9547-ee4c-4966-997f-719d78dcc31b\") " Feb 23 17:14:59 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:59.733869 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for 
volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/e7ec9547-ee4c-4966-997f-719d78dcc31b-stats-auth\") pod \"e7ec9547-ee4c-4966-997f-719d78dcc31b\" (UID: \"e7ec9547-ee4c-4966-997f-719d78dcc31b\") " Feb 23 17:14:59 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:14:59.734288 2112 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/e7ec9547-ee4c-4966-997f-719d78dcc31b/volumes/kubernetes.io~configmap/service-ca-bundle: clearQuota called, but quotas disabled Feb 23 17:14:59 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:59.734494 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7ec9547-ee4c-4966-997f-719d78dcc31b-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "e7ec9547-ee4c-4966-997f-719d78dcc31b" (UID: "e7ec9547-ee4c-4966-997f-719d78dcc31b"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:14:59 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:59.743174 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7ec9547-ee4c-4966-997f-719d78dcc31b-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "e7ec9547-ee4c-4966-997f-719d78dcc31b" (UID: "e7ec9547-ee4c-4966-997f-719d78dcc31b"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:14:59 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:59.745004 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7ec9547-ee4c-4966-997f-719d78dcc31b-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "e7ec9547-ee4c-4966-997f-719d78dcc31b" (UID: "e7ec9547-ee4c-4966-997f-719d78dcc31b"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:14:59 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:59.745058 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7ec9547-ee4c-4966-997f-719d78dcc31b-kube-api-access-jfgv8" (OuterVolumeSpecName: "kube-api-access-jfgv8") pod "e7ec9547-ee4c-4966-997f-719d78dcc31b" (UID: "e7ec9547-ee4c-4966-997f-719d78dcc31b"). InnerVolumeSpecName "kube-api-access-jfgv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:14:59 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:59.745991 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7ec9547-ee4c-4966-997f-719d78dcc31b-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "e7ec9547-ee4c-4966-997f-719d78dcc31b" (UID: "e7ec9547-ee4c-4966-997f-719d78dcc31b"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:14:59 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:59.835091 2112 reconciler.go:399] "Volume detached for volume \"kube-api-access-jfgv8\" (UniqueName: \"kubernetes.io/projected/e7ec9547-ee4c-4966-997f-719d78dcc31b-kube-api-access-jfgv8\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:14:59 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:59.835124 2112 reconciler.go:399] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7ec9547-ee4c-4966-997f-719d78dcc31b-service-ca-bundle\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:14:59 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:59.835138 2112 reconciler.go:399] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e7ec9547-ee4c-4966-997f-719d78dcc31b-metrics-certs\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:14:59 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:59.835151 2112 
reconciler.go:399] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/e7ec9547-ee4c-4966-997f-719d78dcc31b-stats-auth\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:14:59 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:14:59.835166 2112 reconciler.go:399] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/e7ec9547-ee4c-4966-997f-719d78dcc31b-default-certificate\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:14:59 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-259ac42a2037be7ffdb933e1dab0fbcfd7804c32d742695f5c6e1e4f9e2767c0-userdata-shm.mount: Succeeded. Feb 23 17:14:59 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-259ac42a2037be7ffdb933e1dab0fbcfd7804c32d742695f5c6e1e4f9e2767c0-userdata-shm.mount: Consumed 0 CPU time Feb 23 17:14:59 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-e7ec9547\x2dee4c\x2d4966\x2d997f\x2d719d78dcc31b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djfgv8.mount: Succeeded. Feb 23 17:14:59 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-e7ec9547\x2dee4c\x2d4966\x2d997f\x2d719d78dcc31b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djfgv8.mount: Consumed 0 CPU time Feb 23 17:14:59 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-e7ec9547\x2dee4c\x2d4966\x2d997f\x2d719d78dcc31b-volumes-kubernetes.io\x7esecret-stats\x2dauth.mount: Succeeded. Feb 23 17:14:59 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-e7ec9547\x2dee4c\x2d4966\x2d997f\x2d719d78dcc31b-volumes-kubernetes.io\x7esecret-stats\x2dauth.mount: Consumed 0 CPU time Feb 23 17:14:59 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-e7ec9547\x2dee4c\x2d4966\x2d997f\x2d719d78dcc31b-volumes-kubernetes.io\x7esecret-metrics\x2dcerts.mount: Succeeded. 
Feb 23 17:14:59 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-e7ec9547\x2dee4c\x2d4966\x2d997f\x2d719d78dcc31b-volumes-kubernetes.io\x7esecret-metrics\x2dcerts.mount: Consumed 0 CPU time Feb 23 17:14:59 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-e7ec9547\x2dee4c\x2d4966\x2d997f\x2d719d78dcc31b-volumes-kubernetes.io\x7esecret-default\x2dcertificate.mount: Succeeded. Feb 23 17:14:59 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-e7ec9547\x2dee4c\x2d4966\x2d997f\x2d719d78dcc31b-volumes-kubernetes.io\x7esecret-default\x2dcertificate.mount: Consumed 0 CPU time Feb 23 17:15:00 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:00.065163 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-77f788594f-j5twb" event=&{ID:e7ec9547-ee4c-4966-997f-719d78dcc31b Type:ContainerDied Data:259ac42a2037be7ffdb933e1dab0fbcfd7804c32d742695f5c6e1e4f9e2767c0} Feb 23 17:15:00 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:00.065207 2112 scope.go:115] "RemoveContainer" containerID="aa5e7c96b987acbdee2223e102d68db343b44e03e5112322e17f7cd3f959eed4" Feb 23 17:15:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:00.066096496Z" level=info msg="Removing container: aa5e7c96b987acbdee2223e102d68db343b44e03e5112322e17f7cd3f959eed4" id=35ae195c-d22f-41a5-9fbf-e9eb526ce9a6 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:15:00 ip-10-0-136-68 systemd[1]: Removed slice libcontainer container kubepods-burstable-pode7ec9547_ee4c_4966_997f_719d78dcc31b.slice. 
Feb 23 17:15:00 ip-10-0-136-68 systemd[1]: kubepods-burstable-pode7ec9547_ee4c_4966_997f_719d78dcc31b.slice: Consumed 7.740s CPU time Feb 23 17:15:00 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:00.090386 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-ingress/router-default-77f788594f-j5twb] Feb 23 17:15:00 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:00.094975 2112 kubelet.go:2129] "SyncLoop REMOVE" source="api" pods=[openshift-ingress/router-default-77f788594f-j5twb] Feb 23 17:15:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:00.097903460Z" level=info msg="Removed container aa5e7c96b987acbdee2223e102d68db343b44e03e5112322e17f7cd3f959eed4: openshift-ingress/router-default-77f788594f-j5twb/router" id=35ae195c-d22f-41a5-9fbf-e9eb526ce9a6 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:15:00 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:00.121118 2112 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=e7ec9547-ee4c-4966-997f-719d78dcc31b path="/var/lib/kubelet/pods/e7ec9547-ee4c-4966-997f-719d78dcc31b/volumes" Feb 23 17:15:00 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:00.185456 2112 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-operator-lifecycle-manager/collect-profiles-27952875-qw7rw] Feb 23 17:15:00 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:00.185505 2112 topology_manager.go:205] "Topology Admit Handler" Feb 23 17:15:00 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:15:00.185606 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e7ec9547-ee4c-4966-997f-719d78dcc31b" containerName="router" Feb 23 17:15:00 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:00.185615 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7ec9547-ee4c-4966-997f-719d78dcc31b" containerName="router" Feb 23 17:15:00 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:15:00.185625 2112 cpu_manager.go:395] "RemoveStaleState: removing container" 
podUID="84367d42-9f7a-49fb-9aab-aa7bc958829f" containerName="registry" Feb 23 17:15:00 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:00.185630 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="84367d42-9f7a-49fb-9aab-aa7bc958829f" containerName="registry" Feb 23 17:15:00 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:00.185698 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="e7ec9547-ee4c-4966-997f-719d78dcc31b" containerName="router" Feb 23 17:15:00 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:00.185720 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="84367d42-9f7a-49fb-9aab-aa7bc958829f" containerName="registry" Feb 23 17:15:00 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-pod9677e3d7_a54b_481d_b0af_680e529ee92d.slice. Feb 23 17:15:00 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:00.195649 2112 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-operator-lifecycle-manager/collect-profiles-27952875-qw7rw] Feb 23 17:15:00 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:00.237477 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9677e3d7-a54b-481d-b0af-680e529ee92d-config-volume\") pod \"collect-profiles-27952875-qw7rw\" (UID: \"9677e3d7-a54b-481d-b0af-680e529ee92d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-27952875-qw7rw" Feb 23 17:15:00 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:00.237515 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lnpd\" (UniqueName: \"kubernetes.io/projected/9677e3d7-a54b-481d-b0af-680e529ee92d-kube-api-access-7lnpd\") pod \"collect-profiles-27952875-qw7rw\" (UID: \"9677e3d7-a54b-481d-b0af-680e529ee92d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-27952875-qw7rw" Feb 23 17:15:00 ip-10-0-136-68 kubenswrapper[2112]: I0223 
17:15:00.237691 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9677e3d7-a54b-481d-b0af-680e529ee92d-secret-volume\") pod \"collect-profiles-27952875-qw7rw\" (UID: \"9677e3d7-a54b-481d-b0af-680e529ee92d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-27952875-qw7rw" Feb 23 17:15:00 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:00.338063 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9677e3d7-a54b-481d-b0af-680e529ee92d-secret-volume\") pod \"collect-profiles-27952875-qw7rw\" (UID: \"9677e3d7-a54b-481d-b0af-680e529ee92d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-27952875-qw7rw" Feb 23 17:15:00 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:00.338122 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9677e3d7-a54b-481d-b0af-680e529ee92d-config-volume\") pod \"collect-profiles-27952875-qw7rw\" (UID: \"9677e3d7-a54b-481d-b0af-680e529ee92d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-27952875-qw7rw" Feb 23 17:15:00 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:00.338163 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-7lnpd\" (UniqueName: \"kubernetes.io/projected/9677e3d7-a54b-481d-b0af-680e529ee92d-kube-api-access-7lnpd\") pod \"collect-profiles-27952875-qw7rw\" (UID: \"9677e3d7-a54b-481d-b0af-680e529ee92d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-27952875-qw7rw" Feb 23 17:15:00 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:00.338931 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9677e3d7-a54b-481d-b0af-680e529ee92d-config-volume\") pod \"collect-profiles-27952875-qw7rw\" (UID: 
\"9677e3d7-a54b-481d-b0af-680e529ee92d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-27952875-qw7rw" Feb 23 17:15:00 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:00.340312 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9677e3d7-a54b-481d-b0af-680e529ee92d-secret-volume\") pod \"collect-profiles-27952875-qw7rw\" (UID: \"9677e3d7-a54b-481d-b0af-680e529ee92d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-27952875-qw7rw" Feb 23 17:15:00 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:00.353652 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lnpd\" (UniqueName: \"kubernetes.io/projected/9677e3d7-a54b-481d-b0af-680e529ee92d-kube-api-access-7lnpd\") pod \"collect-profiles-27952875-qw7rw\" (UID: \"9677e3d7-a54b-481d-b0af-680e529ee92d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-27952875-qw7rw" Feb 23 17:15:00 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:00.499702 2112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-27952875-qw7rw" Feb 23 17:15:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:00.500192517Z" level=info msg="Running pod sandbox: openshift-operator-lifecycle-manager/collect-profiles-27952875-qw7rw/POD" id=7dc77754-6713-4e3c-8089-ae563837f9d1 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:15:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:00.500255717Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:15:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:00.521971822Z" level=info msg="Got pod network &{Name:collect-profiles-27952875-qw7rw Namespace:openshift-operator-lifecycle-manager ID:76b4c6af287a0c9916b553c28dcc45f25062a2f251bac86f961fd51c1b4860e3 UID:9677e3d7-a54b-481d-b0af-680e529ee92d NetNS:/var/run/netns/efdbc899-f3f5-433b-9db6-024222c8f883 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:15:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:00.522000873Z" level=info msg="Adding pod openshift-operator-lifecycle-manager_collect-profiles-27952875-qw7rw to CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:15:00 ip-10-0-136-68 systemd-udevd[60819]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. 
Feb 23 17:15:00 ip-10-0-136-68 systemd-udevd[60819]: Could not generate persistent MAC address for 76b4c6af287a0c9: No such file or directory Feb 23 17:15:00 ip-10-0-136-68 NetworkManager[1147]: [1677172500.6752] device (76b4c6af287a0c9): carrier: link connected Feb 23 17:15:00 ip-10-0-136-68 NetworkManager[1147]: [1677172500.6755] manager: (76b4c6af287a0c9): new Veth device (/org/freedesktop/NetworkManager/Devices/66) Feb 23 17:15:00 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): 76b4c6af287a0c9: link is not ready Feb 23 17:15:00 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready Feb 23 17:15:00 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 23 17:15:00 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): 76b4c6af287a0c9: link becomes ready Feb 23 17:15:00 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00334|bridge|INFO|bridge br-int: added interface 76b4c6af287a0c9 on port 28 Feb 23 17:15:00 ip-10-0-136-68 NetworkManager[1147]: [1677172500.7000] manager: (76b4c6af287a0c9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/67) Feb 23 17:15:00 ip-10-0-136-68 kernel: device 76b4c6af287a0c9 entered promiscuous mode Feb 23 17:15:00 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:00.772628 2112 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-operator-lifecycle-manager/collect-profiles-27952875-qw7rw] Feb 23 17:15:00 ip-10-0-136-68 crio[2062]: I0223 17:15:00.649634 60802 ovs.go:90] Maximum command line arguments set to: 191102 Feb 23 17:15:00 ip-10-0-136-68 crio[2062]: 2023-02-23T17:15:00Z [verbose] Add: openshift-operator-lifecycle-manager:collect-profiles-27952875-qw7rw:9677e3d7-a54b-481d-b0af-680e529ee92d:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"76b4c6af287a0c9","mac":"82:e1:4a:70:db:d2"},{"name":"eth0","mac":"0a:58:0a:81:02:1e","sandbox":"/var/run/netns/efdbc899-f3f5-433b-9db6-024222c8f883"}],"ips":[{"version":"4","interface":1,"address":"10.129.2.30/23","gateway":"10.129.2.1"}],"dns":{}} Feb 23 17:15:00 ip-10-0-136-68 crio[2062]: I0223 17:15:00.750705 60795 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-operator-lifecycle-manager", Name:"collect-profiles-27952875-qw7rw", UID:"9677e3d7-a54b-481d-b0af-680e529ee92d", APIVersion:"v1", ResourceVersion:"70244", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.129.2.30/23] from ovn-kubernetes Feb 23 17:15:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:00.773437428Z" level=info msg="Got pod network &{Name:collect-profiles-27952875-qw7rw Namespace:openshift-operator-lifecycle-manager ID:76b4c6af287a0c9916b553c28dcc45f25062a2f251bac86f961fd51c1b4860e3 UID:9677e3d7-a54b-481d-b0af-680e529ee92d NetNS:/var/run/netns/efdbc899-f3f5-433b-9db6-024222c8f883 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:15:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:00.773596599Z" level=info msg="Checking pod openshift-operator-lifecycle-manager_collect-profiles-27952875-qw7rw for CNI network multus-cni-network (type=multus)" Feb 23 17:15:00 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:15:00.777387 2112 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9677e3d7_a54b_481d_b0af_680e529ee92d.slice/crio-76b4c6af287a0c9916b553c28dcc45f25062a2f251bac86f961fd51c1b4860e3.scope WatchSource:0}: Error finding container 76b4c6af287a0c9916b553c28dcc45f25062a2f251bac86f961fd51c1b4860e3: Status 404 returned error can't find the container with id 76b4c6af287a0c9916b553c28dcc45f25062a2f251bac86f961fd51c1b4860e3 Feb 23 17:15:00 ip-10-0-136-68 
crio[2062]: time="2023-02-23 17:15:00.778985611Z" level=info msg="Ran pod sandbox 76b4c6af287a0c9916b553c28dcc45f25062a2f251bac86f961fd51c1b4860e3 with infra container: openshift-operator-lifecycle-manager/collect-profiles-27952875-qw7rw/POD" id=7dc77754-6713-4e3c-8089-ae563837f9d1 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:15:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:00.779807702Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:283dd1441060b064aca55c9f24217199ffd2729ab7991a4c0bb2edc2489bf4ca" id=eb89788e-4b3a-488e-abc0-264b10e3385f name=/runtime.v1.ImageService/ImageStatus Feb 23 17:15:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:00.779990890Z" level=info msg="Image registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:283dd1441060b064aca55c9f24217199ffd2729ab7991a4c0bb2edc2489bf4ca not found" id=eb89788e-4b3a-488e-abc0-264b10e3385f name=/runtime.v1.ImageService/ImageStatus Feb 23 17:15:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:00.780494347Z" level=info msg="Pulling image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:283dd1441060b064aca55c9f24217199ffd2729ab7991a4c0bb2edc2489bf4ca" id=bf3772d6-a934-451a-a09e-e8468954b923 name=/runtime.v1.ImageService/PullImage Feb 23 17:15:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:00.781969743Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:283dd1441060b064aca55c9f24217199ffd2729ab7991a4c0bb2edc2489bf4ca\"" Feb 23 17:15:01 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:01.067921 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-27952875-qw7rw" event=&{ID:9677e3d7-a54b-481d-b0af-680e529ee92d Type:ContainerStarted Data:76b4c6af287a0c9916b553c28dcc45f25062a2f251bac86f961fd51c1b4860e3} Feb 23 17:15:08 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:08.842529269Z" level=info msg="Trying to 
access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:283dd1441060b064aca55c9f24217199ffd2729ab7991a4c0bb2edc2489bf4ca\"" Feb 23 17:15:09 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:09.749727689Z" level=warning msg="Found defunct process with PID 60310 (haproxy)" Feb 23 17:15:09 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:09.749854356Z" level=warning msg="Found defunct process with PID 60840 (haproxy)" Feb 23 17:15:12 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00335|connmgr|INFO|br-ex<->unix#1191: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:15:14 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00336|connmgr|INFO|br-ex<->unix#1194: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:15:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:22.336347184Z" level=info msg="Pulled image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:283dd1441060b064aca55c9f24217199ffd2729ab7991a4c0bb2edc2489bf4ca" id=bf3772d6-a934-451a-a09e-e8468954b923 name=/runtime.v1.ImageService/PullImage Feb 23 17:15:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:22.337384688Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:283dd1441060b064aca55c9f24217199ffd2729ab7991a4c0bb2edc2489bf4ca" id=2b236ca5-d241-46e4-9a71-b4be2e76b5a4 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:15:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:22.338827813Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:d18c9ec1cf4fc492de5643229404fefce6842ed44c5d14b27a69b5249995b8fa,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:283dd1441060b064aca55c9f24217199ffd2729ab7991a4c0bb2edc2489bf4ca],Size_:700773392,Uid:&Int64Value{Value:1001,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=2b236ca5-d241-46e4-9a71-b4be2e76b5a4 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:15:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:22.339518111Z" level=info 
msg="Creating container: openshift-operator-lifecycle-manager/collect-profiles-27952875-qw7rw/collect-profiles" id=979fbf92-6d25-480f-b8e3-b69a8ea2580d name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:15:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:22.339602772Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:15:22 ip-10-0-136-68 systemd[1]: Started crio-conmon-f02d4b3d8160195247cb12e88628ee384fcd59e6d49d4e20a6ea0ab2962d862a.scope. Feb 23 17:15:22 ip-10-0-136-68 systemd[1]: Started libcontainer container f02d4b3d8160195247cb12e88628ee384fcd59e6d49d4e20a6ea0ab2962d862a. Feb 23 17:15:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:22.494605060Z" level=info msg="Created container f02d4b3d8160195247cb12e88628ee384fcd59e6d49d4e20a6ea0ab2962d862a: openshift-operator-lifecycle-manager/collect-profiles-27952875-qw7rw/collect-profiles" id=979fbf92-6d25-480f-b8e3-b69a8ea2580d name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:15:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:22.495083236Z" level=info msg="Starting container: f02d4b3d8160195247cb12e88628ee384fcd59e6d49d4e20a6ea0ab2962d862a" id=f7d3cd78-e148-4eb6-840a-64733169d8d3 name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:15:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:22.514337513Z" level=info msg="Started container" PID=61151 containerID=f02d4b3d8160195247cb12e88628ee384fcd59e6d49d4e20a6ea0ab2962d862a description=openshift-operator-lifecycle-manager/collect-profiles-27952875-qw7rw/collect-profiles id=f7d3cd78-e148-4eb6-840a-64733169d8d3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=76b4c6af287a0c9916b553c28dcc45f25062a2f251bac86f961fd51c1b4860e3 Feb 23 17:15:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:23.122079 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-27952875-qw7rw" event=&{ID:9677e3d7-a54b-481d-b0af-680e529ee92d 
Type:ContainerStarted Data:f02d4b3d8160195247cb12e88628ee384fcd59e6d49d4e20a6ea0ab2962d862a} Feb 23 17:15:23 ip-10-0-136-68 systemd[1]: crio-f02d4b3d8160195247cb12e88628ee384fcd59e6d49d4e20a6ea0ab2962d862a.scope: Succeeded. Feb 23 17:15:23 ip-10-0-136-68 systemd[1]: crio-f02d4b3d8160195247cb12e88628ee384fcd59e6d49d4e20a6ea0ab2962d862a.scope: Consumed 821ms CPU time Feb 23 17:15:23 ip-10-0-136-68 systemd[1]: crio-conmon-f02d4b3d8160195247cb12e88628ee384fcd59e6d49d4e20a6ea0ab2962d862a.scope: Succeeded. Feb 23 17:15:23 ip-10-0-136-68 systemd[1]: crio-conmon-f02d4b3d8160195247cb12e88628ee384fcd59e6d49d4e20a6ea0ab2962d862a.scope: Consumed 25ms CPU time Feb 23 17:15:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:24.124134 2112 generic.go:296] "Generic (PLEG): container finished" podID=9677e3d7-a54b-481d-b0af-680e529ee92d containerID="f02d4b3d8160195247cb12e88628ee384fcd59e6d49d4e20a6ea0ab2962d862a" exitCode=0 Feb 23 17:15:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:24.124171 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-27952875-qw7rw" event=&{ID:9677e3d7-a54b-481d-b0af-680e529ee92d Type:ContainerDied Data:f02d4b3d8160195247cb12e88628ee384fcd59e6d49d4e20a6ea0ab2962d862a} Feb 23 17:15:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:25.126157141Z" level=info msg="Stopping pod sandbox: 76b4c6af287a0c9916b553c28dcc45f25062a2f251bac86f961fd51c1b4860e3" id=9cbbcc33-16bf-4d5c-a3d4-19d58088cdcb name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:15:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:25.126383265Z" level=info msg="Got pod network &{Name:collect-profiles-27952875-qw7rw Namespace:openshift-operator-lifecycle-manager ID:76b4c6af287a0c9916b553c28dcc45f25062a2f251bac86f961fd51c1b4860e3 UID:9677e3d7-a54b-481d-b0af-680e529ee92d NetNS:/var/run/netns/efdbc899-f3f5-433b-9db6-024222c8f883 Networks:[{Name:multus-cni-network Ifname:eth0}] RuntimeConfig:map[multus-cni-network:{IP: MAC: 
PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:15:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:25.126494554Z" level=info msg="Deleting pod openshift-operator-lifecycle-manager_collect-profiles-27952875-qw7rw from CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:15:25 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00337|bridge|INFO|bridge br-int: deleted interface 76b4c6af287a0c9 on port 28 Feb 23 17:15:25 ip-10-0-136-68 kernel: device 76b4c6af287a0c9 left promiscuous mode Feb 23 17:15:25 ip-10-0-136-68 crio[2062]: 2023-02-23T17:15:25Z [verbose] Del: openshift-operator-lifecycle-manager:collect-profiles-27952875-qw7rw:9677e3d7-a54b-481d-b0af-680e529ee92d:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","dns":{},"ipam":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxage":5,"logfile-maxbackups":5,"logfile-maxsize":100,"name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay"} Feb 23 17:15:25 ip-10-0-136-68 crio[2062]: I0223 17:15:25.258092 61252 ovs.go:90] Maximum command line arguments set to: 191102 Feb 23 17:15:25 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-eecc1d19a1b1c68257d0354ce0b950c7f7e90d5f428da5471063561590db06fb-merged.mount: Succeeded. Feb 23 17:15:25 ip-10-0-136-68 systemd[1]: run-utsns-efdbc899\x2df3f5\x2d433b\x2d9db6\x2d024222c8f883.mount: Succeeded. Feb 23 17:15:25 ip-10-0-136-68 systemd[1]: run-ipcns-efdbc899\x2df3f5\x2d433b\x2d9db6\x2d024222c8f883.mount: Succeeded. Feb 23 17:15:25 ip-10-0-136-68 systemd[1]: run-netns-efdbc899\x2df3f5\x2d433b\x2d9db6\x2d024222c8f883.mount: Succeeded. Feb 23 17:15:25 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-76b4c6af287a0c9916b553c28dcc45f25062a2f251bac86f961fd51c1b4860e3-userdata-shm.mount: Succeeded. 
Feb 23 17:15:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:25.772821032Z" level=info msg="Stopped pod sandbox: 76b4c6af287a0c9916b553c28dcc45f25062a2f251bac86f961fd51c1b4860e3" id=9cbbcc33-16bf-4d5c-a3d4-19d58088cdcb name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:15:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:25.905844 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9677e3d7-a54b-481d-b0af-680e529ee92d-config-volume\") pod \"9677e3d7-a54b-481d-b0af-680e529ee92d\" (UID: \"9677e3d7-a54b-481d-b0af-680e529ee92d\") " Feb 23 17:15:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:25.905897 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9677e3d7-a54b-481d-b0af-680e529ee92d-secret-volume\") pod \"9677e3d7-a54b-481d-b0af-680e529ee92d\" (UID: \"9677e3d7-a54b-481d-b0af-680e529ee92d\") " Feb 23 17:15:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:25.905921 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lnpd\" (UniqueName: \"kubernetes.io/projected/9677e3d7-a54b-481d-b0af-680e529ee92d-kube-api-access-7lnpd\") pod \"9677e3d7-a54b-481d-b0af-680e529ee92d\" (UID: \"9677e3d7-a54b-481d-b0af-680e529ee92d\") " Feb 23 17:15:25 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:15:25.906167 2112 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/9677e3d7-a54b-481d-b0af-680e529ee92d/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled Feb 23 17:15:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:25.906377 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9677e3d7-a54b-481d-b0af-680e529ee92d-config-volume" (OuterVolumeSpecName: "config-volume") pod "9677e3d7-a54b-481d-b0af-680e529ee92d" (UID: "9677e3d7-a54b-481d-b0af-680e529ee92d"). 
InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:15:25 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-9677e3d7\x2da54b\x2d481d\x2db0af\x2d680e529ee92d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7lnpd.mount: Succeeded. Feb 23 17:15:25 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-9677e3d7\x2da54b\x2d481d\x2db0af\x2d680e529ee92d-volumes-kubernetes.io\x7esecret-secret\x2dvolume.mount: Succeeded. Feb 23 17:15:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:25.914049 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9677e3d7-a54b-481d-b0af-680e529ee92d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9677e3d7-a54b-481d-b0af-680e529ee92d" (UID: "9677e3d7-a54b-481d-b0af-680e529ee92d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:15:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:25.916023 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9677e3d7-a54b-481d-b0af-680e529ee92d-kube-api-access-7lnpd" (OuterVolumeSpecName: "kube-api-access-7lnpd") pod "9677e3d7-a54b-481d-b0af-680e529ee92d" (UID: "9677e3d7-a54b-481d-b0af-680e529ee92d"). InnerVolumeSpecName "kube-api-access-7lnpd". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 17:15:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:26.006283 2112 reconciler.go:399] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9677e3d7-a54b-481d-b0af-680e529ee92d-config-volume\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:15:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:26.006310 2112 reconciler.go:399] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9677e3d7-a54b-481d-b0af-680e529ee92d-secret-volume\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:15:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:26.006328 2112 reconciler.go:399] "Volume detached for volume \"kube-api-access-7lnpd\" (UniqueName: \"kubernetes.io/projected/9677e3d7-a54b-481d-b0af-680e529ee92d-kube-api-access-7lnpd\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:15:26 ip-10-0-136-68 systemd[1]: Removed slice libcontainer container kubepods-burstable-pod9677e3d7_a54b_481d_b0af_680e529ee92d.slice.
Feb 23 17:15:26 ip-10-0-136-68 systemd[1]: kubepods-burstable-pod9677e3d7_a54b_481d_b0af_680e529ee92d.slice: Consumed 847ms CPU time
Feb 23 17:15:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:26.128858 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-27952875-qw7rw" event=&{ID:9677e3d7-a54b-481d-b0af-680e529ee92d Type:ContainerDied Data:76b4c6af287a0c9916b553c28dcc45f25062a2f251bac86f961fd51c1b4860e3}
Feb 23 17:15:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:26.128884 2112 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="76b4c6af287a0c9916b553c28dcc45f25062a2f251bac86f961fd51c1b4860e3"
Feb 23 17:15:29 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00338|connmgr|INFO|br-ex<->unix#1202: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:15:40 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00339|connmgr|INFO|br-ex<->unix#1206: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:15:40 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00340|connmgr|INFO|br-ex<->unix#1210: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:15:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:41.364735 2112 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j]
Feb 23 17:15:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:41.364788 2112 topology_manager.go:205] "Topology Admit Handler"
Feb 23 17:15:41 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:15:41.364842 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9677e3d7-a54b-481d-b0af-680e529ee92d" containerName="collect-profiles"
Feb 23 17:15:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:41.364850 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="9677e3d7-a54b-481d-b0af-680e529ee92d" containerName="collect-profiles"
Feb 23 17:15:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:41.364887 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="9677e3d7-a54b-481d-b0af-680e529ee92d" containerName="collect-profiles"
Feb 23 17:15:41 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-pod46cf33e4_fc3b_4f7a_b0ab_dc2cbc5a5e77.slice.
Feb 23 17:15:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:41.399221 2112 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j]
Feb 23 17:15:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:41.505614 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-mountpoint-dir\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 17:15:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:41.505684 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev-dir\" (UniqueName: \"kubernetes.io/host-path/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-dev-dir\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 17:15:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:41.505728 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs8fq\" (UniqueName: \"kubernetes.io/projected/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-kube-api-access-bs8fq\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 17:15:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:41.505757 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-csi-data-dir\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 17:15:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:41.505785 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-volumes-map\" (UniqueName: \"kubernetes.io/host-path/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-csi-volumes-map\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 17:15:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:41.505827 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-config\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 17:15:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:41.505857 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-socket-dir\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 17:15:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:41.505893 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-registration-dir\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 17:15:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:41.505922 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-plugins-dir\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 17:15:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:41.505962 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"shared-resource-csi-driver-node-metrics-serving-cert\" (UniqueName: \"kubernetes.io/secret/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-shared-resource-csi-driver-node-metrics-serving-cert\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 17:15:41 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00341|connmgr|INFO|br-ex<->unix#1213: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:15:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:41.606712 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-registration-dir\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 17:15:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:41.606765 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-plugins-dir\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 17:15:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:41.606803 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"shared-resource-csi-driver-node-metrics-serving-cert\" (UniqueName: \"kubernetes.io/secret/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-shared-resource-csi-driver-node-metrics-serving-cert\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 17:15:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:41.606837 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-mountpoint-dir\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 17:15:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:41.606865 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"dev-dir\" (UniqueName: \"kubernetes.io/host-path/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-dev-dir\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 17:15:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:41.606894 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-bs8fq\" (UniqueName: \"kubernetes.io/projected/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-kube-api-access-bs8fq\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 17:15:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:41.606921 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-csi-data-dir\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 17:15:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:41.606952 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"csi-volumes-map\" (UniqueName: \"kubernetes.io/host-path/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-csi-volumes-map\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 17:15:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:41.606966 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-registration-dir\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 17:15:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:41.606993 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-config\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 17:15:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:41.606991 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-plugins-dir\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 17:15:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:41.607032 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-socket-dir\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 17:15:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:41.607033 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"dev-dir\" (UniqueName: \"kubernetes.io/host-path/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-dev-dir\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 17:15:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:41.607214 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-mountpoint-dir\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 17:15:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:41.607364 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-csi-data-dir\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 17:15:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:41.607496 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"csi-volumes-map\" (UniqueName: \"kubernetes.io/host-path/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-csi-volumes-map\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 17:15:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:41.607758 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-socket-dir\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 17:15:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:41.608218 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-config\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 17:15:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:41.609855 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"shared-resource-csi-driver-node-metrics-serving-cert\" (UniqueName: \"kubernetes.io/secret/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-shared-resource-csi-driver-node-metrics-serving-cert\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 17:15:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:41.621354 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs8fq\" (UniqueName: \"kubernetes.io/projected/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-kube-api-access-bs8fq\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 17:15:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:41.678038 2112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 17:15:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:41.678485206Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=8f7bd76f-7ebd-4f54-b27d-5ff23c00108f name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:15:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:41.678548117Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:15:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:41.697476185Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:904f3beae60de67c16a5cc959f8d904ee658bf99c4f43531f437e17dffa58753 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/4683a5dd-6f28-4f95-b6df-7f103be2d0f8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 17:15:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:41.697501201Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 17:15:41 ip-10-0-136-68 systemd-udevd[61469]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 23 17:15:41 ip-10-0-136-68 systemd-udevd[61469]: Could not generate persistent MAC address for 904f3beae60de67: No such file or directory
Feb 23 17:15:41 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): 904f3beae60de67: link is not ready
Feb 23 17:15:41 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
Feb 23 17:15:41 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 23 17:15:41 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): 904f3beae60de67: link becomes ready
Feb 23 17:15:41 ip-10-0-136-68 NetworkManager[1147]: [1677172541.8648] device (904f3beae60de67): carrier: link connected
Feb 23 17:15:41 ip-10-0-136-68 NetworkManager[1147]: [1677172541.8651] manager: (904f3beae60de67): new Veth device (/org/freedesktop/NetworkManager/Devices/68)
Feb 23 17:15:41 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00342|bridge|INFO|bridge br-int: added interface 904f3beae60de67 on port 29
Feb 23 17:15:41 ip-10-0-136-68 NetworkManager[1147]: [1677172541.8778] manager: (904f3beae60de67): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/69)
Feb 23 17:15:41 ip-10-0-136-68 kernel: device 904f3beae60de67 entered promiscuous mode
Feb 23 17:15:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:41.947900 2112 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j]
Feb 23 17:15:41 ip-10-0-136-68 crio[2062]: I0223 17:15:41.828342 61459 ovs.go:90] Maximum command line arguments set to: 191102
Feb 23 17:15:41 ip-10-0-136-68 crio[2062]: 2023-02-23T17:15:41Z [verbose] Add: openshift-cluster-csi-drivers:shared-resource-csi-driver-node-vf69j:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"904f3beae60de67","mac":"fa:83:59:15:c9:97"},{"name":"eth0","mac":"0a:58:0a:81:02:1f","sandbox":"/var/run/netns/4683a5dd-6f28-4f95-b6df-7f103be2d0f8"}],"ips":[{"version":"4","interface":1,"address":"10.129.2.31/23","gateway":"10.129.2.1"}],"dns":{}}
Feb 23 17:15:41 ip-10-0-136-68 crio[2062]: I0223 17:15:41.931297 61452 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-cluster-csi-drivers", Name:"shared-resource-csi-driver-node-vf69j", UID:"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77", APIVersion:"v1", ResourceVersion:"71083", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.129.2.31/23] from ovn-kubernetes
Feb 23 17:15:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:41.948638537Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:904f3beae60de67c16a5cc959f8d904ee658bf99c4f43531f437e17dffa58753 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/4683a5dd-6f28-4f95-b6df-7f103be2d0f8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 17:15:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:41.948796813Z" level=info msg="Checking pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j for CNI network multus-cni-network (type=multus)"
Feb 23 17:15:41 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:15:41.951031 2112 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46cf33e4_fc3b_4f7a_b0ab_dc2cbc5a5e77.slice/crio-904f3beae60de67c16a5cc959f8d904ee658bf99c4f43531f437e17dffa58753.scope WatchSource:0}: Error finding container 904f3beae60de67c16a5cc959f8d904ee658bf99c4f43531f437e17dffa58753: Status 404 returned error can't find the container with id 904f3beae60de67c16a5cc959f8d904ee658bf99c4f43531f437e17dffa58753
Feb 23 17:15:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:41.952535970Z" level=info msg="Ran pod sandbox 904f3beae60de67c16a5cc959f8d904ee658bf99c4f43531f437e17dffa58753 with infra container: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=8f7bd76f-7ebd-4f54-b27d-5ff23c00108f name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:15:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:41.953345957Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:c40704cbcee782b7629fcbe26b96c42b11444076c53ecd28f0e50f7e5efeb4d7" id=4345e2c1-f1a9-48e3-8e32-8c7da54f3caf name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:15:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:41.953554102Z" level=info msg="Image registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:c40704cbcee782b7629fcbe26b96c42b11444076c53ecd28f0e50f7e5efeb4d7 not found" id=4345e2c1-f1a9-48e3-8e32-8c7da54f3caf name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:15:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:41.954181704Z" level=info msg="Pulling image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:c40704cbcee782b7629fcbe26b96c42b11444076c53ecd28f0e50f7e5efeb4d7" id=e121585c-d766-44b1-a57d-6e2b56f33a2a name=/runtime.v1.ImageService/PullImage
Feb 23 17:15:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:41.956393017Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:c40704cbcee782b7629fcbe26b96c42b11444076c53ecd28f0e50f7e5efeb4d7\""
Feb 23 17:15:42 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:42.170783 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" event=&{ID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Type:ContainerStarted Data:904f3beae60de67c16a5cc959f8d904ee658bf99c4f43531f437e17dffa58753}
Feb 23 17:15:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:46.190265961Z" level=info msg="Stopping pod sandbox: 259ac42a2037be7ffdb933e1dab0fbcfd7804c32d742695f5c6e1e4f9e2767c0" id=0dea26d4-e224-47b3-ad59-2b290081db69 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:15:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:46.190312848Z" level=info msg="Stopped pod sandbox (already stopped): 259ac42a2037be7ffdb933e1dab0fbcfd7804c32d742695f5c6e1e4f9e2767c0" id=0dea26d4-e224-47b3-ad59-2b290081db69 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:15:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:46.190550455Z" level=info msg="Removing pod sandbox: 259ac42a2037be7ffdb933e1dab0fbcfd7804c32d742695f5c6e1e4f9e2767c0" id=a169362b-8761-4853-b0ca-9c9c096e9d3e name=/runtime.v1.RuntimeService/RemovePodSandbox
Feb 23 17:15:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:46.198525911Z" level=info msg="Removed pod sandbox: 259ac42a2037be7ffdb933e1dab0fbcfd7804c32d742695f5c6e1e4f9e2767c0" id=a169362b-8761-4853-b0ca-9c9c096e9d3e name=/runtime.v1.RuntimeService/RemovePodSandbox
Feb 23 17:15:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:47.076187161Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:c40704cbcee782b7629fcbe26b96c42b11444076c53ecd28f0e50f7e5efeb4d7\""
Feb 23 17:15:51 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:51.568867785Z" level=info msg="Pulled image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:c40704cbcee782b7629fcbe26b96c42b11444076c53ecd28f0e50f7e5efeb4d7" id=e121585c-d766-44b1-a57d-6e2b56f33a2a name=/runtime.v1.ImageService/PullImage
Feb 23 17:15:51 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:51.569578177Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:c40704cbcee782b7629fcbe26b96c42b11444076c53ecd28f0e50f7e5efeb4d7" id=02bf3f5c-3c8b-4461-94bf-44b2ca798413 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:15:51 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:51.570777740Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ae90363e0687fc12bc8ed8a2a77d165dc67626c1a60ee8d602e0319b2f949960,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:c40704cbcee782b7629fcbe26b96c42b11444076c53ecd28f0e50f7e5efeb4d7],Size_:368500613,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=02bf3f5c-3c8b-4461-94bf-44b2ca798413 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:15:51 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:51.571480471Z" level=info msg="Creating container: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/node-driver-registrar" id=6965df8b-9a3e-42ac-b2b7-1827787d5862 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:15:51 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:51.571563286Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:15:51 ip-10-0-136-68 systemd[1]: Started crio-conmon-50d3d89aba11348010102fd0ae0c1d0fb4a40d62816189af3b8f9448589c11bb.scope.
Feb 23 17:15:51 ip-10-0-136-68 systemd[1]: Started libcontainer container 50d3d89aba11348010102fd0ae0c1d0fb4a40d62816189af3b8f9448589c11bb.
Feb 23 17:15:51 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:51.733548790Z" level=info msg="Created container 50d3d89aba11348010102fd0ae0c1d0fb4a40d62816189af3b8f9448589c11bb: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/node-driver-registrar" id=6965df8b-9a3e-42ac-b2b7-1827787d5862 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:15:51 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:51.734015772Z" level=info msg="Starting container: 50d3d89aba11348010102fd0ae0c1d0fb4a40d62816189af3b8f9448589c11bb" id=27073bd5-78cd-4606-bbd5-13731775571b name=/runtime.v1.RuntimeService/StartContainer
Feb 23 17:15:51 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:51.752509306Z" level=info msg="Started container" PID=61619 containerID=50d3d89aba11348010102fd0ae0c1d0fb4a40d62816189af3b8f9448589c11bb description=openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/node-driver-registrar id=27073bd5-78cd-4606-bbd5-13731775571b name=/runtime.v1.RuntimeService/StartContainer sandboxID=904f3beae60de67c16a5cc959f8d904ee658bf99c4f43531f437e17dffa58753
Feb 23 17:15:51 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:51.763239443Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:0036fbe19ef488958fd96e4d82bab5ce8e78fc0e90f207e9a330bac45cd98017" id=33627fda-10a3-4d72-b546-1767b0c36cb5 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:15:51 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:51.763401734Z" level=info msg="Image registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:0036fbe19ef488958fd96e4d82bab5ce8e78fc0e90f207e9a330bac45cd98017 not found" id=33627fda-10a3-4d72-b546-1767b0c36cb5 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:15:51 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:51.763959115Z" level=info msg="Pulling image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:0036fbe19ef488958fd96e4d82bab5ce8e78fc0e90f207e9a330bac45cd98017" id=1d7d1d7c-f730-4c61-8962-53fc3ece115c name=/runtime.v1.ImageService/PullImage
Feb 23 17:15:51 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:51.764828375Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:0036fbe19ef488958fd96e4d82bab5ce8e78fc0e90f207e9a330bac45cd98017\""
Feb 23 17:15:52 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:52.188210 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" event=&{ID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Type:ContainerStarted Data:50d3d89aba11348010102fd0ae0c1d0fb4a40d62816189af3b8f9448589c11bb}
Feb 23 17:15:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:52.765277948Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:0036fbe19ef488958fd96e4d82bab5ce8e78fc0e90f207e9a330bac45cd98017\""
Feb 23 17:15:52 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:52.951137 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-cluster-node-tuning-operator/tuned-bjpgx]
Feb 23 17:15:53 ip-10-0-136-68 systemd[1]: crio-conmon-d25ff7e177b079f5368f13976bda6b318b800426c4ec53a4783b0f0a406d5bb6.scope: Succeeded.
Feb 23 17:15:53 ip-10-0-136-68 systemd[1]: crio-conmon-d25ff7e177b079f5368f13976bda6b318b800426c4ec53a4783b0f0a406d5bb6.scope: Consumed 25ms CPU time
Feb 23 17:15:53 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:53.171653 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-cluster-node-tuning-operator/tuned-bjpgx" podUID=07267a40-e316-4a88-91a5-11bc06672f23 containerName="tuned" containerID="cri-o://d25ff7e177b079f5368f13976bda6b318b800426c4ec53a4783b0f0a406d5bb6" gracePeriod=30
Feb 23 17:15:53 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:53.172105326Z" level=info msg="Stopping container: d25ff7e177b079f5368f13976bda6b318b800426c4ec53a4783b0f0a406d5bb6 (timeout: 30s)" id=1c1e51f4-895f-44b5-94ef-e0298689738a name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:15:53 ip-10-0-136-68 systemd[1]: crio-d25ff7e177b079f5368f13976bda6b318b800426c4ec53a4783b0f0a406d5bb6.scope: Succeeded.
Feb 23 17:15:53 ip-10-0-136-68 systemd[1]: crio-d25ff7e177b079f5368f13976bda6b318b800426c4ec53a4783b0f0a406d5bb6.scope: Consumed 2.660s CPU time
Feb 23 17:15:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-721531f02f3bdd0bed2587664d57261ca54d83dec0db78372efe399d492c9f37-merged.mount: Succeeded.
Feb 23 17:15:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-721531f02f3bdd0bed2587664d57261ca54d83dec0db78372efe399d492c9f37-merged.mount: Consumed 0 CPU time
Feb 23 17:15:53 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:53.229712222Z" level=info msg="Stopped container d25ff7e177b079f5368f13976bda6b318b800426c4ec53a4783b0f0a406d5bb6: openshift-cluster-node-tuning-operator/tuned-bjpgx/tuned" id=1c1e51f4-895f-44b5-94ef-e0298689738a name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:15:53 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:53.230317038Z" level=info msg="Stopping pod sandbox: 528a7b04bf0d754b13168dfd2c703b64d50739ec96809ffbed568e329bf1df05" id=c67283de-0097-4195-b167-4ab1bd10990a name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:15:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-92b0200ed74769f17a309445d33dbca5f7d9b42381cc3b7b7485659343a3a56e-merged.mount: Succeeded.
Feb 23 17:15:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-92b0200ed74769f17a309445d33dbca5f7d9b42381cc3b7b7485659343a3a56e-merged.mount: Consumed 0 CPU time
Feb 23 17:15:53 ip-10-0-136-68 systemd[1]: run-utsns-9efda56e\x2dc5a2\x2d45e0\x2db85d\x2d947c4baa03f9.mount: Succeeded.
Feb 23 17:15:53 ip-10-0-136-68 systemd[1]: run-utsns-9efda56e\x2dc5a2\x2d45e0\x2db85d\x2d947c4baa03f9.mount: Consumed 0 CPU time
Feb 23 17:15:53 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:53.306703868Z" level=info msg="Stopped pod sandbox: 528a7b04bf0d754b13168dfd2c703b64d50739ec96809ffbed568e329bf1df05" id=c67283de-0097-4195-b167-4ab1bd10990a name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:15:53 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:53.488535 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/07267a40-e316-4a88-91a5-11bc06672f23-host\") pod \"07267a40-e316-4a88-91a5-11bc06672f23\" (UID: \"07267a40-e316-4a88-91a5-11bc06672f23\") "
Feb 23 17:15:53 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:53.488795 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"var-lib-tuned-profiles-data\" (UniqueName: \"kubernetes.io/configmap/07267a40-e316-4a88-91a5-11bc06672f23-var-lib-tuned-profiles-data\") pod \"07267a40-e316-4a88-91a5-11bc06672f23\" (UID: \"07267a40-e316-4a88-91a5-11bc06672f23\") "
Feb 23 17:15:53 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:53.488612 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07267a40-e316-4a88-91a5-11bc06672f23-host" (OuterVolumeSpecName: "host") pod "07267a40-e316-4a88-91a5-11bc06672f23" (UID: "07267a40-e316-4a88-91a5-11bc06672f23"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 17:15:53 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:53.488823 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"run-systemd-system\" (UniqueName: \"kubernetes.io/host-path/07267a40-e316-4a88-91a5-11bc06672f23-run-systemd-system\") pod \"07267a40-e316-4a88-91a5-11bc06672f23\" (UID: \"07267a40-e316-4a88-91a5-11bc06672f23\") "
Feb 23 17:15:53 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:53.488843 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/07267a40-e316-4a88-91a5-11bc06672f23-lib-modules\") pod \"07267a40-e316-4a88-91a5-11bc06672f23\" (UID: \"07267a40-e316-4a88-91a5-11bc06672f23\") "
Feb 23 17:15:53 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:53.488864 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/07267a40-e316-4a88-91a5-11bc06672f23-sys\") pod \"07267a40-e316-4a88-91a5-11bc06672f23\" (UID: \"07267a40-e316-4a88-91a5-11bc06672f23\") "
Feb 23 17:15:53 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:53.488884 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc\" (UniqueName: \"kubernetes.io/host-path/07267a40-e316-4a88-91a5-11bc06672f23-etc\") pod \"07267a40-e316-4a88-91a5-11bc06672f23\" (UID: \"07267a40-e316-4a88-91a5-11bc06672f23\") "
Feb 23 17:15:53 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:53.488906 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"var-run-dbus\" (UniqueName: \"kubernetes.io/host-path/07267a40-e316-4a88-91a5-11bc06672f23-var-run-dbus\") pod \"07267a40-e316-4a88-91a5-11bc06672f23\" (UID: \"07267a40-e316-4a88-91a5-11bc06672f23\") "
Feb 23 17:15:53 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:53.488932 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-796v8\" (UniqueName: \"kubernetes.io/projected/07267a40-e316-4a88-91a5-11bc06672f23-kube-api-access-796v8\") pod \"07267a40-e316-4a88-91a5-11bc06672f23\" (UID: \"07267a40-e316-4a88-91a5-11bc06672f23\") "
Feb 23 17:15:53 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:53.489033 2112 reconciler.go:399] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/07267a40-e316-4a88-91a5-11bc06672f23-host\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:15:53 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:15:53.489038 2112 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/07267a40-e316-4a88-91a5-11bc06672f23/volumes/kubernetes.io~configmap/var-lib-tuned-profiles-data: clearQuota called, but quotas disabled
Feb 23 17:15:53 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:53.489222 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07267a40-e316-4a88-91a5-11bc06672f23-var-lib-tuned-profiles-data" (OuterVolumeSpecName: "var-lib-tuned-profiles-data") pod "07267a40-e316-4a88-91a5-11bc06672f23" (UID: "07267a40-e316-4a88-91a5-11bc06672f23"). InnerVolumeSpecName "var-lib-tuned-profiles-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 17:15:53 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:53.489252 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07267a40-e316-4a88-91a5-11bc06672f23-sys" (OuterVolumeSpecName: "sys") pod "07267a40-e316-4a88-91a5-11bc06672f23" (UID: "07267a40-e316-4a88-91a5-11bc06672f23"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 17:15:53 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:53.489273 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07267a40-e316-4a88-91a5-11bc06672f23-run-systemd-system" (OuterVolumeSpecName: "run-systemd-system") pod "07267a40-e316-4a88-91a5-11bc06672f23" (UID: "07267a40-e316-4a88-91a5-11bc06672f23"). InnerVolumeSpecName "run-systemd-system". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 17:15:53 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:53.489292 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07267a40-e316-4a88-91a5-11bc06672f23-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "07267a40-e316-4a88-91a5-11bc06672f23" (UID: "07267a40-e316-4a88-91a5-11bc06672f23"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 17:15:53 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:53.489473 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07267a40-e316-4a88-91a5-11bc06672f23-var-run-dbus" (OuterVolumeSpecName: "var-run-dbus") pod "07267a40-e316-4a88-91a5-11bc06672f23" (UID: "07267a40-e316-4a88-91a5-11bc06672f23"). InnerVolumeSpecName "var-run-dbus". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 17:15:53 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:53.503851 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07267a40-e316-4a88-91a5-11bc06672f23-kube-api-access-796v8" (OuterVolumeSpecName: "kube-api-access-796v8") pod "07267a40-e316-4a88-91a5-11bc06672f23" (UID: "07267a40-e316-4a88-91a5-11bc06672f23"). InnerVolumeSpecName "kube-api-access-796v8".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:15:53 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:53.576803 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07267a40-e316-4a88-91a5-11bc06672f23-etc" (OuterVolumeSpecName: "etc") pod "07267a40-e316-4a88-91a5-11bc06672f23" (UID: "07267a40-e316-4a88-91a5-11bc06672f23"). InnerVolumeSpecName "etc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 17:15:53 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:53.589187 2112 reconciler.go:399] "Volume detached for volume \"var-lib-tuned-profiles-data\" (UniqueName: \"kubernetes.io/configmap/07267a40-e316-4a88-91a5-11bc06672f23-var-lib-tuned-profiles-data\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:15:53 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:53.589209 2112 reconciler.go:399] "Volume detached for volume \"run-systemd-system\" (UniqueName: \"kubernetes.io/host-path/07267a40-e316-4a88-91a5-11bc06672f23-run-systemd-system\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:15:53 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:53.589220 2112 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/07267a40-e316-4a88-91a5-11bc06672f23-lib-modules\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:15:53 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:53.589229 2112 reconciler.go:399] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/07267a40-e316-4a88-91a5-11bc06672f23-sys\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:15:53 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:53.589236 2112 reconciler.go:399] "Volume detached for volume \"etc\" (UniqueName: \"kubernetes.io/host-path/07267a40-e316-4a88-91a5-11bc06672f23-etc\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" 
DevicePath \"\"" Feb 23 17:15:53 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:53.589244 2112 reconciler.go:399] "Volume detached for volume \"var-run-dbus\" (UniqueName: \"kubernetes.io/host-path/07267a40-e316-4a88-91a5-11bc06672f23-var-run-dbus\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:15:53 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:53.589253 2112 reconciler.go:399] "Volume detached for volume \"kube-api-access-796v8\" (UniqueName: \"kubernetes.io/projected/07267a40-e316-4a88-91a5-11bc06672f23-kube-api-access-796v8\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:15:53 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-07267a40\x2de316\x2d4a88\x2d91a5\x2d11bc06672f23-volume\x2dsubpaths-etc-tuned-5.mount: Succeeded. Feb 23 17:15:53 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-07267a40\x2de316\x2d4a88\x2d91a5\x2d11bc06672f23-volume\x2dsubpaths-etc-tuned-5.mount: Consumed 0 CPU time Feb 23 17:15:53 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-07267a40\x2de316\x2d4a88\x2d91a5\x2d11bc06672f23-volume\x2dsubpaths-etc-tuned-4.mount: Succeeded. Feb 23 17:15:53 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-07267a40\x2de316\x2d4a88\x2d91a5\x2d11bc06672f23-volume\x2dsubpaths-etc-tuned-4.mount: Consumed 0 CPU time Feb 23 17:15:53 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-07267a40\x2de316\x2d4a88\x2d91a5\x2d11bc06672f23-volume\x2dsubpaths-etc-tuned-3.mount: Succeeded. Feb 23 17:15:53 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-07267a40\x2de316\x2d4a88\x2d91a5\x2d11bc06672f23-volume\x2dsubpaths-etc-tuned-3.mount: Consumed 0 CPU time Feb 23 17:15:53 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-07267a40\x2de316\x2d4a88\x2d91a5\x2d11bc06672f23-volume\x2dsubpaths-etc-tuned-2.mount: Succeeded. 
Feb 23 17:15:53 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-07267a40\x2de316\x2d4a88\x2d91a5\x2d11bc06672f23-volume\x2dsubpaths-etc-tuned-2.mount: Consumed 0 CPU time Feb 23 17:15:53 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-07267a40\x2de316\x2d4a88\x2d91a5\x2d11bc06672f23-volume\x2dsubpaths-etc-tuned-1.mount: Succeeded. Feb 23 17:15:53 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-07267a40\x2de316\x2d4a88\x2d91a5\x2d11bc06672f23-volume\x2dsubpaths-etc-tuned-1.mount: Consumed 0 CPU time Feb 23 17:15:53 ip-10-0-136-68 systemd[1]: run-netns-9efda56e\x2dc5a2\x2d45e0\x2db85d\x2d947c4baa03f9.mount: Succeeded. Feb 23 17:15:53 ip-10-0-136-68 systemd[1]: run-netns-9efda56e\x2dc5a2\x2d45e0\x2db85d\x2d947c4baa03f9.mount: Consumed 0 CPU time Feb 23 17:15:53 ip-10-0-136-68 systemd[1]: run-ipcns-9efda56e\x2dc5a2\x2d45e0\x2db85d\x2d947c4baa03f9.mount: Succeeded. Feb 23 17:15:53 ip-10-0-136-68 systemd[1]: run-ipcns-9efda56e\x2dc5a2\x2d45e0\x2db85d\x2d947c4baa03f9.mount: Consumed 0 CPU time Feb 23 17:15:53 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-07267a40\x2de316\x2d4a88\x2d91a5\x2d11bc06672f23-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d796v8.mount: Succeeded. Feb 23 17:15:53 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-07267a40\x2de316\x2d4a88\x2d91a5\x2d11bc06672f23-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d796v8.mount: Consumed 0 CPU time Feb 23 17:15:54 ip-10-0-136-68 systemd[1]: Removed slice libcontainer container kubepods-burstable-pod07267a40_e316_4a88_91a5_11bc06672f23.slice. 
Feb 23 17:15:54 ip-10-0-136-68 systemd[1]: kubepods-burstable-pod07267a40_e316_4a88_91a5_11bc06672f23.slice: Consumed 2.685s CPU time Feb 23 17:15:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:54.192743 2112 generic.go:296] "Generic (PLEG): container finished" podID=07267a40-e316-4a88-91a5-11bc06672f23 containerID="d25ff7e177b079f5368f13976bda6b318b800426c4ec53a4783b0f0a406d5bb6" exitCode=0 Feb 23 17:15:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:54.192776 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-bjpgx" event=&{ID:07267a40-e316-4a88-91a5-11bc06672f23 Type:ContainerDied Data:d25ff7e177b079f5368f13976bda6b318b800426c4ec53a4783b0f0a406d5bb6} Feb 23 17:15:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:54.192795 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-bjpgx" event=&{ID:07267a40-e316-4a88-91a5-11bc06672f23 Type:ContainerDied Data:528a7b04bf0d754b13168dfd2c703b64d50739ec96809ffbed568e329bf1df05} Feb 23 17:15:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:54.192812 2112 scope.go:115] "RemoveContainer" containerID="d25ff7e177b079f5368f13976bda6b318b800426c4ec53a4783b0f0a406d5bb6" Feb 23 17:15:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:54.193985250Z" level=info msg="Removing container: d25ff7e177b079f5368f13976bda6b318b800426c4ec53a4783b0f0a406d5bb6" id=24e72bb4-e613-477c-b221-93088eb98701 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:15:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:54.214676 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-cluster-node-tuning-operator/tuned-bjpgx] Feb 23 17:15:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:54.224769 2112 kubelet.go:2129] "SyncLoop REMOVE" source="api" pods=[openshift-cluster-node-tuning-operator/tuned-bjpgx] Feb 23 17:15:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:54.251927 2112 kubelet.go:2119] "SyncLoop ADD" 
source="api" pods=[openshift-cluster-node-tuning-operator/tuned-zzwb5] Feb 23 17:15:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:54.251972 2112 topology_manager.go:205] "Topology Admit Handler" Feb 23 17:15:54 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:15:54.252036 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="07267a40-e316-4a88-91a5-11bc06672f23" containerName="tuned" Feb 23 17:15:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:54.252048 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="07267a40-e316-4a88-91a5-11bc06672f23" containerName="tuned" Feb 23 17:15:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:54.252088 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="07267a40-e316-4a88-91a5-11bc06672f23" containerName="tuned" Feb 23 17:15:54 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-poda5ccef55_3f5c_4ffc_82f9_586324e62a37.slice. Feb 23 17:15:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:54.262843208Z" level=info msg="Removed container d25ff7e177b079f5368f13976bda6b318b800426c4ec53a4783b0f0a406d5bb6: openshift-cluster-node-tuning-operator/tuned-bjpgx/tuned" id=24e72bb4-e613-477c-b221-93088eb98701 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:15:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:54.263030 2112 scope.go:115] "RemoveContainer" containerID="d25ff7e177b079f5368f13976bda6b318b800426c4ec53a4783b0f0a406d5bb6" Feb 23 17:15:54 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:15:54.263817 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d25ff7e177b079f5368f13976bda6b318b800426c4ec53a4783b0f0a406d5bb6\": container with ID starting with d25ff7e177b079f5368f13976bda6b318b800426c4ec53a4783b0f0a406d5bb6 not found: ID does not exist" containerID="d25ff7e177b079f5368f13976bda6b318b800426c4ec53a4783b0f0a406d5bb6" Feb 23 17:15:54 ip-10-0-136-68 
kubenswrapper[2112]: I0223 17:15:54.263852 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:d25ff7e177b079f5368f13976bda6b318b800426c4ec53a4783b0f0a406d5bb6} err="failed to get container status \"d25ff7e177b079f5368f13976bda6b318b800426c4ec53a4783b0f0a406d5bb6\": rpc error: code = NotFound desc = could not find container \"d25ff7e177b079f5368f13976bda6b318b800426c4ec53a4783b0f0a406d5bb6\": container with ID starting with d25ff7e177b079f5368f13976bda6b318b800426c4ec53a4783b0f0a406d5bb6 not found: ID does not exist" Feb 23 17:15:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:54.393469 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a5ccef55-3f5c-4ffc-82f9-586324e62a37-sys\") pod \"tuned-zzwb5\" (UID: \"a5ccef55-3f5c-4ffc-82f9-586324e62a37\") " pod="openshift-cluster-node-tuning-operator/tuned-zzwb5" Feb 23 17:15:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:54.393508 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-dbus\" (UniqueName: \"kubernetes.io/host-path/a5ccef55-3f5c-4ffc-82f9-586324e62a37-var-run-dbus\") pod \"tuned-zzwb5\" (UID: \"a5ccef55-3f5c-4ffc-82f9-586324e62a37\") " pod="openshift-cluster-node-tuning-operator/tuned-zzwb5" Feb 23 17:15:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:54.393537 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc\" (UniqueName: \"kubernetes.io/host-path/a5ccef55-3f5c-4ffc-82f9-586324e62a37-etc\") pod \"tuned-zzwb5\" (UID: \"a5ccef55-3f5c-4ffc-82f9-586324e62a37\") " pod="openshift-cluster-node-tuning-operator/tuned-zzwb5" Feb 23 17:15:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:54.393639 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-tuned-profiles-data\" (UniqueName: 
\"kubernetes.io/configmap/a5ccef55-3f5c-4ffc-82f9-586324e62a37-var-lib-tuned-profiles-data\") pod \"tuned-zzwb5\" (UID: \"a5ccef55-3f5c-4ffc-82f9-586324e62a37\") " pod="openshift-cluster-node-tuning-operator/tuned-zzwb5" Feb 23 17:15:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:54.393682 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a5ccef55-3f5c-4ffc-82f9-586324e62a37-host\") pod \"tuned-zzwb5\" (UID: \"a5ccef55-3f5c-4ffc-82f9-586324e62a37\") " pod="openshift-cluster-node-tuning-operator/tuned-zzwb5" Feb 23 17:15:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:54.393737 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a5ccef55-3f5c-4ffc-82f9-586324e62a37-lib-modules\") pod \"tuned-zzwb5\" (UID: \"a5ccef55-3f5c-4ffc-82f9-586324e62a37\") " pod="openshift-cluster-node-tuning-operator/tuned-zzwb5" Feb 23 17:15:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:54.393756 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd-system\" (UniqueName: \"kubernetes.io/host-path/a5ccef55-3f5c-4ffc-82f9-586324e62a37-run-systemd-system\") pod \"tuned-zzwb5\" (UID: \"a5ccef55-3f5c-4ffc-82f9-586324e62a37\") " pod="openshift-cluster-node-tuning-operator/tuned-zzwb5" Feb 23 17:15:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:54.393790 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tntbd\" (UniqueName: \"kubernetes.io/projected/a5ccef55-3f5c-4ffc-82f9-586324e62a37-kube-api-access-tntbd\") pod \"tuned-zzwb5\" (UID: \"a5ccef55-3f5c-4ffc-82f9-586324e62a37\") " pod="openshift-cluster-node-tuning-operator/tuned-zzwb5" Feb 23 17:15:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:54.494034 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume 
\"etc\" (UniqueName: \"kubernetes.io/host-path/a5ccef55-3f5c-4ffc-82f9-586324e62a37-etc\") pod \"tuned-zzwb5\" (UID: \"a5ccef55-3f5c-4ffc-82f9-586324e62a37\") " pod="openshift-cluster-node-tuning-operator/tuned-zzwb5" Feb 23 17:15:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:54.494086 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a5ccef55-3f5c-4ffc-82f9-586324e62a37-host\") pod \"tuned-zzwb5\" (UID: \"a5ccef55-3f5c-4ffc-82f9-586324e62a37\") " pod="openshift-cluster-node-tuning-operator/tuned-zzwb5" Feb 23 17:15:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:54.494114 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"var-lib-tuned-profiles-data\" (UniqueName: \"kubernetes.io/configmap/a5ccef55-3f5c-4ffc-82f9-586324e62a37-var-lib-tuned-profiles-data\") pod \"tuned-zzwb5\" (UID: \"a5ccef55-3f5c-4ffc-82f9-586324e62a37\") " pod="openshift-cluster-node-tuning-operator/tuned-zzwb5" Feb 23 17:15:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:54.494142 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a5ccef55-3f5c-4ffc-82f9-586324e62a37-lib-modules\") pod \"tuned-zzwb5\" (UID: \"a5ccef55-3f5c-4ffc-82f9-586324e62a37\") " pod="openshift-cluster-node-tuning-operator/tuned-zzwb5" Feb 23 17:15:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:54.494173 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"run-systemd-system\" (UniqueName: \"kubernetes.io/host-path/a5ccef55-3f5c-4ffc-82f9-586324e62a37-run-systemd-system\") pod \"tuned-zzwb5\" (UID: \"a5ccef55-3f5c-4ffc-82f9-586324e62a37\") " pod="openshift-cluster-node-tuning-operator/tuned-zzwb5" Feb 23 17:15:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:54.494200 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"etc\" (UniqueName: 
\"kubernetes.io/host-path/a5ccef55-3f5c-4ffc-82f9-586324e62a37-etc\") pod \"tuned-zzwb5\" (UID: \"a5ccef55-3f5c-4ffc-82f9-586324e62a37\") " pod="openshift-cluster-node-tuning-operator/tuned-zzwb5" Feb 23 17:15:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:54.494204 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-tntbd\" (UniqueName: \"kubernetes.io/projected/a5ccef55-3f5c-4ffc-82f9-586324e62a37-kube-api-access-tntbd\") pod \"tuned-zzwb5\" (UID: \"a5ccef55-3f5c-4ffc-82f9-586324e62a37\") " pod="openshift-cluster-node-tuning-operator/tuned-zzwb5" Feb 23 17:15:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:54.494303 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a5ccef55-3f5c-4ffc-82f9-586324e62a37-sys\") pod \"tuned-zzwb5\" (UID: \"a5ccef55-3f5c-4ffc-82f9-586324e62a37\") " pod="openshift-cluster-node-tuning-operator/tuned-zzwb5" Feb 23 17:15:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:54.494367 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"var-run-dbus\" (UniqueName: \"kubernetes.io/host-path/a5ccef55-3f5c-4ffc-82f9-586324e62a37-var-run-dbus\") pod \"tuned-zzwb5\" (UID: \"a5ccef55-3f5c-4ffc-82f9-586324e62a37\") " pod="openshift-cluster-node-tuning-operator/tuned-zzwb5" Feb 23 17:15:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:54.494512 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a5ccef55-3f5c-4ffc-82f9-586324e62a37-host\") pod \"tuned-zzwb5\" (UID: \"a5ccef55-3f5c-4ffc-82f9-586324e62a37\") " pod="openshift-cluster-node-tuning-operator/tuned-zzwb5" Feb 23 17:15:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:54.494795 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a5ccef55-3f5c-4ffc-82f9-586324e62a37-lib-modules\") pod \"tuned-zzwb5\" (UID: 
\"a5ccef55-3f5c-4ffc-82f9-586324e62a37\") " pod="openshift-cluster-node-tuning-operator/tuned-zzwb5" Feb 23 17:15:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:54.494819 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"var-run-dbus\" (UniqueName: \"kubernetes.io/host-path/a5ccef55-3f5c-4ffc-82f9-586324e62a37-var-run-dbus\") pod \"tuned-zzwb5\" (UID: \"a5ccef55-3f5c-4ffc-82f9-586324e62a37\") " pod="openshift-cluster-node-tuning-operator/tuned-zzwb5" Feb 23 17:15:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:54.494866 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a5ccef55-3f5c-4ffc-82f9-586324e62a37-sys\") pod \"tuned-zzwb5\" (UID: \"a5ccef55-3f5c-4ffc-82f9-586324e62a37\") " pod="openshift-cluster-node-tuning-operator/tuned-zzwb5" Feb 23 17:15:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:54.494968 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"run-systemd-system\" (UniqueName: \"kubernetes.io/host-path/a5ccef55-3f5c-4ffc-82f9-586324e62a37-run-systemd-system\") pod \"tuned-zzwb5\" (UID: \"a5ccef55-3f5c-4ffc-82f9-586324e62a37\") " pod="openshift-cluster-node-tuning-operator/tuned-zzwb5" Feb 23 17:15:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:54.495007 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"var-lib-tuned-profiles-data\" (UniqueName: \"kubernetes.io/configmap/a5ccef55-3f5c-4ffc-82f9-586324e62a37-var-lib-tuned-profiles-data\") pod \"tuned-zzwb5\" (UID: \"a5ccef55-3f5c-4ffc-82f9-586324e62a37\") " pod="openshift-cluster-node-tuning-operator/tuned-zzwb5" Feb 23 17:15:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:54.510479 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-tntbd\" (UniqueName: \"kubernetes.io/projected/a5ccef55-3f5c-4ffc-82f9-586324e62a37-kube-api-access-tntbd\") pod \"tuned-zzwb5\" (UID: \"a5ccef55-3f5c-4ffc-82f9-586324e62a37\") 
" pod="openshift-cluster-node-tuning-operator/tuned-zzwb5" Feb 23 17:15:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:54.565596 2112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-zzwb5" Feb 23 17:15:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:54.566073999Z" level=info msg="Running pod sandbox: openshift-cluster-node-tuning-operator/tuned-zzwb5/POD" id=dcb45c3e-5736-4332-8985-2231f93a6698 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:15:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:54.566129096Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:15:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:54.840702058Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=dcb45c3e-5736-4332-8985-2231f93a6698 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:15:54 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:15:54.844681 2112 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5ccef55_3f5c_4ffc_82f9_586324e62a37.slice/crio-c4955e2bb6ed54ae807213d28434394576684187194708bbbf49ca144840f17b.scope WatchSource:0}: Error finding container c4955e2bb6ed54ae807213d28434394576684187194708bbbf49ca144840f17b: Status 404 returned error can't find the container with id c4955e2bb6ed54ae807213d28434394576684187194708bbbf49ca144840f17b Feb 23 17:15:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:54.846593390Z" level=info msg="Ran pod sandbox c4955e2bb6ed54ae807213d28434394576684187194708bbbf49ca144840f17b with infra container: openshift-cluster-node-tuning-operator/tuned-zzwb5/POD" id=dcb45c3e-5736-4332-8985-2231f93a6698 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:15:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 
17:15:54.847449649Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5e70607d001b0cdfc37f99b090c175172f8c71a7905f6223dfeb6ed34dd5de5f" id=96cd0a7c-730e-4563-8549-35163cdc6a48 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:15:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:54.847655221Z" level=info msg="Image registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5e70607d001b0cdfc37f99b090c175172f8c71a7905f6223dfeb6ed34dd5de5f not found" id=96cd0a7c-730e-4563-8549-35163cdc6a48 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:15:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:54.848216845Z" level=info msg="Pulling image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5e70607d001b0cdfc37f99b090c175172f8c71a7905f6223dfeb6ed34dd5de5f" id=c8afca12-40d3-47f2-8e50-1bcd801f944e name=/runtime.v1.ImageService/PullImage Feb 23 17:15:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:54.849634246Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5e70607d001b0cdfc37f99b090c175172f8c71a7905f6223dfeb6ed34dd5de5f\"" Feb 23 17:15:54 ip-10-0-136-68 systemd[1]: run-runc-97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923-runc.lofeVh.mount: Succeeded. 
Feb 23 17:15:55 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:55.195394 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-zzwb5" event=&{ID:a5ccef55-3f5c-4ffc-82f9-586324e62a37 Type:ContainerStarted Data:c4955e2bb6ed54ae807213d28434394576684187194708bbbf49ca144840f17b} Feb 23 17:15:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:55.619385530Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5e70607d001b0cdfc37f99b090c175172f8c71a7905f6223dfeb6ed34dd5de5f\"" Feb 23 17:15:56 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:56.120417 2112 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=07267a40-e316-4a88-91a5-11bc06672f23 path="/var/lib/kubelet/pods/07267a40-e316-4a88-91a5-11bc06672f23/volumes" Feb 23 17:15:56 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00343|connmgr|INFO|br-int<->unix#2: 309 flow_mods in the 55 s starting 57 s ago (187 adds, 122 deletes) Feb 23 17:15:56 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00344|connmgr|INFO|br-ex<->unix#1221: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:15:57 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:57.742643292Z" level=info msg="Pulled image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:0036fbe19ef488958fd96e4d82bab5ce8e78fc0e90f207e9a330bac45cd98017" id=1d7d1d7c-f730-4c61-8962-53fc3ece115c name=/runtime.v1.ImageService/PullImage Feb 23 17:15:57 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:57.744325152Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:0036fbe19ef488958fd96e4d82bab5ce8e78fc0e90f207e9a330bac45cd98017" id=c8f27eb1-a694-4c9a-b98c-c1600c865c1d name=/runtime.v1.ImageService/ImageStatus Feb 23 17:15:57 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:57.745724993Z" level=info msg="Image status: 
&ImageStatusResponse{Image:&Image{Id:dd7deb981af1dc3a5c5722ad69a4b26930181449d97365ae5cd46e03302fcafe,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:0036fbe19ef488958fd96e4d82bab5ce8e78fc0e90f207e9a330bac45cd98017],Size_:410732435,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=c8f27eb1-a694-4c9a-b98c-c1600c865c1d name=/runtime.v1.ImageService/ImageStatus Feb 23 17:15:57 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:57.746448893Z" level=info msg="Creating container: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/hostpath" id=d040610f-bc9f-45ff-87d1-6adfb192490b name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:15:57 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:57.746531716Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:15:57 ip-10-0-136-68 systemd[1]: Started crio-conmon-cbe2fd6f73c4cff587d5e0418a0dc9544393d0a17e1ce1d722e172a858d79237.scope. Feb 23 17:15:57 ip-10-0-136-68 systemd[1]: Started libcontainer container cbe2fd6f73c4cff587d5e0418a0dc9544393d0a17e1ce1d722e172a858d79237. 
Feb 23 17:15:57 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:57.900872560Z" level=info msg="Created container cbe2fd6f73c4cff587d5e0418a0dc9544393d0a17e1ce1d722e172a858d79237: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/hostpath" id=d040610f-bc9f-45ff-87d1-6adfb192490b name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:15:57 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:57.901274543Z" level=info msg="Starting container: cbe2fd6f73c4cff587d5e0418a0dc9544393d0a17e1ce1d722e172a858d79237" id=602527e5-eca7-4dc3-9965-b48e9bd6b271 name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:15:57 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:15:57.907892912Z" level=info msg="Started container" PID=61826 containerID=cbe2fd6f73c4cff587d5e0418a0dc9544393d0a17e1ce1d722e172a858d79237 description=openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/hostpath id=602527e5-eca7-4dc3-9965-b48e9bd6b271 name=/runtime.v1.RuntimeService/StartContainer sandboxID=904f3beae60de67c16a5cc959f8d904ee658bf99c4f43531f437e17dffa58753 Feb 23 17:15:58 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:58.202746 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" event=&{ID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Type:ContainerStarted Data:cbe2fd6f73c4cff587d5e0418a0dc9544393d0a17e1ce1d722e172a858d79237} Feb 23 17:15:58 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:58.414216 2112 plugin_watcher.go:203] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/csi.sharedresource.openshift.io-reg.sock" Feb 23 17:15:58 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:58.659063 2112 reconciler.go:164] "OperationExecutor.RegisterPlugin started" plugin={SocketPath:/var/lib/kubelet/plugins_registry/csi.sharedresource.openshift.io-reg.sock Timestamp:2023-02-23 17:15:58.414240349 +0000 UTC m=+2593.371404813 Handler: Name:} Feb 23 17:15:58 
ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:58.660704 2112 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.sharedresource.openshift.io endpoint: /var/lib/kubelet/plugins/sharedresource.csi.openshift.com/csi.sock versions: 1.0.0 Feb 23 17:15:58 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:15:58.660730 2112 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.sharedresource.openshift.io at endpoint: /var/lib/kubelet/plugins/sharedresource.csi.openshift.com/csi.sock Feb 23 17:15:58 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:15:58.681199 2112 nodeinfomanager.go:561] Invalid attach limit value 0 cannot be added to CSINode object for "csi.sharedresource.openshift.io" Feb 23 17:16:01 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:16:01.965650 2112 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-console/downloads-6778bfc749-9tkv8] Feb 23 17:16:01 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:16:01.966018 2112 topology_manager.go:205] "Topology Admit Handler" Feb 23 17:16:01 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-podfc25d4db_ca44_4b9b_b5f1_c0bed3abd500.slice. 
Feb 23 17:16:01 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:16:01.981033 2112 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-console/downloads-6778bfc749-9tkv8]
Feb 23 17:16:02 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:16:02.154093 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4md7q\" (UniqueName: \"kubernetes.io/projected/fc25d4db-ca44-4b9b-b5f1-c0bed3abd500-kube-api-access-4md7q\") pod \"downloads-6778bfc749-9tkv8\" (UID: \"fc25d4db-ca44-4b9b-b5f1-c0bed3abd500\") " pod="openshift-console/downloads-6778bfc749-9tkv8"
Feb 23 17:16:02 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:16:02.254779 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-4md7q\" (UniqueName: \"kubernetes.io/projected/fc25d4db-ca44-4b9b-b5f1-c0bed3abd500-kube-api-access-4md7q\") pod \"downloads-6778bfc749-9tkv8\" (UID: \"fc25d4db-ca44-4b9b-b5f1-c0bed3abd500\") " pod="openshift-console/downloads-6778bfc749-9tkv8"
Feb 23 17:16:02 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:16:02.284000 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-4md7q\" (UniqueName: \"kubernetes.io/projected/fc25d4db-ca44-4b9b-b5f1-c0bed3abd500-kube-api-access-4md7q\") pod \"downloads-6778bfc749-9tkv8\" (UID: \"fc25d4db-ca44-4b9b-b5f1-c0bed3abd500\") " pod="openshift-console/downloads-6778bfc749-9tkv8"
Feb 23 17:16:02 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:16:02.285420 2112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-6778bfc749-9tkv8"
Feb 23 17:16:02 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:16:02.285886800Z" level=info msg="Running pod sandbox: openshift-console/downloads-6778bfc749-9tkv8/POD" id=c29c1951-f17f-446b-826e-e0664d489b88 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:16:02 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:16:02.287065482Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:16:02 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:16:02.970230427Z" level=info msg="Got pod network &{Name:downloads-6778bfc749-9tkv8 Namespace:openshift-console ID:d11bfb33540b6412e1f36575ffa3f01a4537106b914968f1a1f25710d90aa161 UID:fc25d4db-ca44-4b9b-b5f1-c0bed3abd500 NetNS:/var/run/netns/fae7580b-c4e0-4b92-b218-6b2956db817e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 17:16:02 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:16:02.970262583Z" level=info msg="Adding pod openshift-console_downloads-6778bfc749-9tkv8 to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 17:16:02 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:16:02.990615626Z" level=info msg="Pulled image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5e70607d001b0cdfc37f99b090c175172f8c71a7905f6223dfeb6ed34dd5de5f" id=c8afca12-40d3-47f2-8e50-1bcd801f944e name=/runtime.v1.ImageService/PullImage
Feb 23 17:16:03 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:16:03.004649591Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5e70607d001b0cdfc37f99b090c175172f8c71a7905f6223dfeb6ed34dd5de5f" id=d716b352-7b4a-4efc-a97c-12265c3bf3ad name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:16:03 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:16:03.007073546Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:664a464be4806e0dadf3ab4d7b46c233cb0d2b068952fe1ff5bc1c75d32b15da,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5e70607d001b0cdfc37f99b090c175172f8c71a7905f6223dfeb6ed34dd5de5f],Size_:596526495,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=d716b352-7b4a-4efc-a97c-12265c3bf3ad name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:16:03 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:16:03.008049990Z" level=info msg="Creating container: openshift-cluster-node-tuning-operator/tuned-zzwb5/tuned" id=80a05686-bae4-433b-98fd-aa351ef1f59e name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:16:03 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:16:03.008146038Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:16:03 ip-10-0-136-68 systemd[1]: Started crio-conmon-131dab6756898b0693be374d713456c96e51c0780c357e88c0d78bb07ba2e394.scope.
Feb 23 17:16:03 ip-10-0-136-68 systemd[1]: Started libcontainer container 131dab6756898b0693be374d713456c96e51c0780c357e88c0d78bb07ba2e394.
Feb 23 17:16:03 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:16:03.112555057Z" level=info msg="Created container 131dab6756898b0693be374d713456c96e51c0780c357e88c0d78bb07ba2e394: openshift-cluster-node-tuning-operator/tuned-zzwb5/tuned" id=80a05686-bae4-433b-98fd-aa351ef1f59e name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:16:03 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:16:03.113279230Z" level=info msg="Starting container: 131dab6756898b0693be374d713456c96e51c0780c357e88c0d78bb07ba2e394" id=308b2efc-ae45-4dee-81fd-3fec1e0e1e93 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 17:16:03 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:16:03.121725232Z" level=info msg="Started container" PID=62018 containerID=131dab6756898b0693be374d713456c96e51c0780c357e88c0d78bb07ba2e394 description=openshift-cluster-node-tuning-operator/tuned-zzwb5/tuned id=308b2efc-ae45-4dee-81fd-3fec1e0e1e93 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c4955e2bb6ed54ae807213d28434394576684187194708bbbf49ca144840f17b
Feb 23 17:16:03 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): d11bfb33540b641: link is not ready
Feb 23 17:16:03 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
Feb 23 17:16:03 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 23 17:16:03 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): d11bfb33540b641: link becomes ready
Feb 23 17:16:03 ip-10-0-136-68 systemd-udevd[62050]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 23 17:16:03 ip-10-0-136-68 systemd-udevd[62050]: Could not generate persistent MAC address for d11bfb33540b641: No such file or directory
Feb 23 17:16:03 ip-10-0-136-68 NetworkManager[1147]: [1677172563.1445] manager: (d11bfb33540b641): new Veth device (/org/freedesktop/NetworkManager/Devices/70)
Feb 23 17:16:03 ip-10-0-136-68 NetworkManager[1147]: [1677172563.1458] device (d11bfb33540b641): carrier: link connected
Feb 23 17:16:03 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00345|bridge|INFO|bridge br-int: added interface d11bfb33540b641 on port 30
Feb 23 17:16:03 ip-10-0-136-68 NetworkManager[1147]: [1677172563.1824] manager: (d11bfb33540b641): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/71)
Feb 23 17:16:03 ip-10-0-136-68 kernel: device d11bfb33540b641 entered promiscuous mode
Feb 23 17:16:03 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:16:03.214513 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-zzwb5" event=&{ID:a5ccef55-3f5c-4ffc-82f9-586324e62a37 Type:ContainerStarted Data:131dab6756898b0693be374d713456c96e51c0780c357e88c0d78bb07ba2e394}
Feb 23 17:16:03 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:16:03.276407 2112 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-console/downloads-6778bfc749-9tkv8]
Feb 23 17:16:03 ip-10-0-136-68 crio[2062]: I0223 17:16:03.128718 61988 ovs.go:90] Maximum command line arguments set to: 191102
Feb 23 17:16:03 ip-10-0-136-68 crio[2062]: 2023-02-23T17:16:03Z [verbose] Add: openshift-console:downloads-6778bfc749-9tkv8:fc25d4db-ca44-4b9b-b5f1-c0bed3abd500:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"d11bfb33540b641","mac":"72:11:39:c7:8d:3c"},{"name":"eth0","mac":"0a:58:0a:81:02:20","sandbox":"/var/run/netns/fae7580b-c4e0-4b92-b218-6b2956db817e"}],"ips":[{"version":"4","interface":1,"address":"10.129.2.32/23","gateway":"10.129.2.1"}],"dns":{}}
Feb 23 17:16:03 ip-10-0-136-68 crio[2062]: I0223 17:16:03.231713 61973 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console", Name:"downloads-6778bfc749-9tkv8", UID:"fc25d4db-ca44-4b9b-b5f1-c0bed3abd500", APIVersion:"v1", ResourceVersion:"71708", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.129.2.32/23] from ovn-kubernetes
Feb 23 17:16:03 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:16:03.294851444Z" level=info msg="Got pod network &{Name:downloads-6778bfc749-9tkv8 Namespace:openshift-console ID:d11bfb33540b6412e1f36575ffa3f01a4537106b914968f1a1f25710d90aa161 UID:fc25d4db-ca44-4b9b-b5f1-c0bed3abd500 NetNS:/var/run/netns/fae7580b-c4e0-4b92-b218-6b2956db817e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 17:16:03 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:16:03.295032117Z" level=info msg="Checking pod openshift-console_downloads-6778bfc749-9tkv8 for CNI network multus-cni-network (type=multus)"
Feb 23 17:16:03 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:16:03.298597 2112 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc25d4db_ca44_4b9b_b5f1_c0bed3abd500.slice/crio-d11bfb33540b6412e1f36575ffa3f01a4537106b914968f1a1f25710d90aa161.scope WatchSource:0}: Error finding container d11bfb33540b6412e1f36575ffa3f01a4537106b914968f1a1f25710d90aa161: Status 404 returned error can't find the container with id d11bfb33540b6412e1f36575ffa3f01a4537106b914968f1a1f25710d90aa161
Feb 23 17:16:03 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:16:03.300918791Z" level=info msg="Ran pod sandbox d11bfb33540b6412e1f36575ffa3f01a4537106b914968f1a1f25710d90aa161 with infra container: openshift-console/downloads-6778bfc749-9tkv8/POD" id=c29c1951-f17f-446b-826e-e0664d489b88 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:16:03 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:16:03.301881282Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:427d064a1fd8134c6b15306521685e76d18516d69350a118d5e04d48362ee91b" id=74e5d103-6090-4e17-8b40-ed2fdca2cb52 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:16:03 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:16:03.302086779Z" level=info msg="Image registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:427d064a1fd8134c6b15306521685e76d18516d69350a118d5e04d48362ee91b not found" id=74e5d103-6090-4e17-8b40-ed2fdca2cb52 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:16:03 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:16:03.302699677Z" level=info msg="Pulling image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:427d064a1fd8134c6b15306521685e76d18516d69350a118d5e04d48362ee91b" id=aa43061b-0795-449e-84ea-5580c1fdab05 name=/runtime.v1.ImageService/PullImage
Feb 23 17:16:03 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:16:03.304128607Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:427d064a1fd8134c6b15306521685e76d18516d69350a118d5e04d48362ee91b\""
Feb 23 17:16:04 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:16:04.217008 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-6778bfc749-9tkv8" event=&{ID:fc25d4db-ca44-4b9b-b5f1-c0bed3abd500 Type:ContainerStarted Data:d11bfb33540b6412e1f36575ffa3f01a4537106b914968f1a1f25710d90aa161}
Feb 23 17:16:05 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:16:05.932608260Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:427d064a1fd8134c6b15306521685e76d18516d69350a118d5e04d48362ee91b\""
Feb 23 17:16:09 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:16:09.752562590Z" level=warning msg="Found defunct process with PID 61957 (haproxy)"
Feb 23 17:16:11 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00346|connmgr|INFO|br-ex<->unix#1226: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:16:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:16:17.738224489Z" level=warning msg="Found defunct process with PID 61957 (haproxy)"
Feb 23 17:16:19 ip-10-0-136-68 systemd[1]: run-runc-97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923-runc.GW9Gcf.mount: Succeeded.
Feb 23 17:16:20 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00347|connmgr|INFO|br-ex<->unix#1229: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:16:21 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00348|connmgr|INFO|br-ex<->unix#1232: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:16:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:16:22.297185589Z" level=info msg="Pulled image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:427d064a1fd8134c6b15306521685e76d18516d69350a118d5e04d48362ee91b" id=aa43061b-0795-449e-84ea-5580c1fdab05 name=/runtime.v1.ImageService/PullImage
Feb 23 17:16:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:16:22.298017249Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:427d064a1fd8134c6b15306521685e76d18516d69350a118d5e04d48362ee91b" id=843baf48-9133-4790-b8d8-c61086e8c806 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:16:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:16:22.299425673Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2db3af56b8b7191963f2ca74629d31c17af683a4cfb224746141b95358232fe4,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:427d064a1fd8134c6b15306521685e76d18516d69350a118d5e04d48362ee91b],Size_:1454764026,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=843baf48-9133-4790-b8d8-c61086e8c806 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:16:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:16:22.300483869Z" level=info msg="Creating container: openshift-console/downloads-6778bfc749-9tkv8/download-server" id=c5479a65-f063-41c4-9efe-d55b9daf723f name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:16:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:16:22.300577503Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:16:22 ip-10-0-136-68 systemd[1]: Started crio-conmon-f61bfedd3366631c6437a0ba7abfedacbc4b136b27a24b01b61888781896b508.scope.
Feb 23 17:16:22 ip-10-0-136-68 systemd[1]: Started libcontainer container f61bfedd3366631c6437a0ba7abfedacbc4b136b27a24b01b61888781896b508.
Feb 23 17:16:22 ip-10-0-136-68 systemd[1]: run-runc-f61bfedd3366631c6437a0ba7abfedacbc4b136b27a24b01b61888781896b508-runc.lFuNeZ.mount: Succeeded.
Feb 23 17:16:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:16:22.894481363Z" level=info msg="Created container f61bfedd3366631c6437a0ba7abfedacbc4b136b27a24b01b61888781896b508: openshift-console/downloads-6778bfc749-9tkv8/download-server" id=c5479a65-f063-41c4-9efe-d55b9daf723f name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:16:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:16:22.894961230Z" level=info msg="Starting container: f61bfedd3366631c6437a0ba7abfedacbc4b136b27a24b01b61888781896b508" id=8075b6ec-8c3e-4e3e-b19e-cf97502284aa name=/runtime.v1.RuntimeService/StartContainer
Feb 23 17:16:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:16:22.903376257Z" level=info msg="Started container" PID=62396 containerID=f61bfedd3366631c6437a0ba7abfedacbc4b136b27a24b01b61888781896b508 description=openshift-console/downloads-6778bfc749-9tkv8/download-server id=8075b6ec-8c3e-4e3e-b19e-cf97502284aa name=/runtime.v1.RuntimeService/StartContainer sandboxID=d11bfb33540b6412e1f36575ffa3f01a4537106b914968f1a1f25710d90aa161
Feb 23 17:16:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:16:23.270118 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-6778bfc749-9tkv8" event=&{ID:fc25d4db-ca44-4b9b-b5f1-c0bed3abd500 Type:ContainerStarted Data:f61bfedd3366631c6437a0ba7abfedacbc4b136b27a24b01b61888781896b508}
Feb 23 17:16:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:16:23.270539 2112 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-6778bfc749-9tkv8"
Feb 23 17:16:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:16:23.272214 2112 patch_prober.go:29] interesting pod/downloads-6778bfc749-9tkv8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.129.2.32:8080/\": dial tcp 10.129.2.32:8080: connect: connection refused" start-of-body=
Feb 23 17:16:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:16:23.272257 2112 prober.go:114] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-6778bfc749-9tkv8" podUID=fc25d4db-ca44-4b9b-b5f1-c0bed3abd500 containerName="download-server" probeResult=failure output="Get \"http://10.129.2.32:8080/\": dial tcp 10.129.2.32:8080: connect: connection refused"
Feb 23 17:16:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:16:24.272955 2112 patch_prober.go:29] interesting pod/downloads-6778bfc749-9tkv8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.129.2.32:8080/\": dial tcp 10.129.2.32:8080: connect: connection refused" start-of-body=
Feb 23 17:16:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:16:24.273025 2112 prober.go:114] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-6778bfc749-9tkv8" podUID=fc25d4db-ca44-4b9b-b5f1-c0bed3abd500 containerName="download-server" probeResult=failure output="Get \"http://10.129.2.32:8080/\": dial tcp 10.129.2.32:8080: connect: connection refused"
Feb 23 17:16:24 ip-10-0-136-68 systemd[1]: run-runc-97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923-runc.jcsPX9.mount: Succeeded.
Feb 23 17:16:29 ip-10-0-136-68 systemd[1]: run-runc-97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923-runc.rClYI4.mount: Succeeded.
Feb 23 17:16:32 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:16:32.304321 2112 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-6778bfc749-9tkv8"
Feb 23 17:16:36 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00349|connmgr|INFO|br-ex<->unix#1240: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:16:39 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:16:39.749684769Z" level=warning msg="Found defunct process with PID 62575 (haproxy)"
Feb 23 17:16:39 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:16:39.749766369Z" level=warning msg="Found defunct process with PID 62652 (haproxy)"
Feb 23 17:16:44 ip-10-0-136-68 systemd[1]: run-runc-97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923-runc.QIdicd.mount: Succeeded.
Feb 23 17:16:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:16:46.201530654Z" level=info msg="Stopping pod sandbox: 528a7b04bf0d754b13168dfd2c703b64d50739ec96809ffbed568e329bf1df05" id=f9ffd608-522f-45cc-8706-e980f0968c95 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:16:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:16:46.201570756Z" level=info msg="Stopped pod sandbox (already stopped): 528a7b04bf0d754b13168dfd2c703b64d50739ec96809ffbed568e329bf1df05" id=f9ffd608-522f-45cc-8706-e980f0968c95 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:16:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:16:46.201813754Z" level=info msg="Removing pod sandbox: 528a7b04bf0d754b13168dfd2c703b64d50739ec96809ffbed568e329bf1df05" id=b9e35e2f-9264-479e-91ae-1c5d3757f245 name=/runtime.v1.RuntimeService/RemovePodSandbox
Feb 23 17:16:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:16:46.210893351Z" level=info msg="Removed pod sandbox: 528a7b04bf0d754b13168dfd2c703b64d50739ec96809ffbed568e329bf1df05" id=b9e35e2f-9264-479e-91ae-1c5d3757f245 name=/runtime.v1.RuntimeService/RemovePodSandbox
Feb 23 17:16:46 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:16:46.212093 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b26cbfc007385f29997882b899af5fbb6cc332ce1ef7773d98623c961bfacf4\": container with ID starting with 2b26cbfc007385f29997882b899af5fbb6cc332ce1ef7773d98623c961bfacf4 not found: ID does not exist" containerID="2b26cbfc007385f29997882b899af5fbb6cc332ce1ef7773d98623c961bfacf4"
Feb 23 17:16:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:16:46.212128 2112 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="2b26cbfc007385f29997882b899af5fbb6cc332ce1ef7773d98623c961bfacf4" err="rpc error: code = NotFound desc = could not find container \"2b26cbfc007385f29997882b899af5fbb6cc332ce1ef7773d98623c961bfacf4\": container with ID starting with 2b26cbfc007385f29997882b899af5fbb6cc332ce1ef7773d98623c961bfacf4 not found: ID does not exist"
Feb 23 17:16:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:16:47.737630627Z" level=warning msg="Found defunct process with PID 62652 (haproxy)"
Feb 23 17:16:51 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00350|connmgr|INFO|br-ex<->unix#1245: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:16:56 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00351|connmgr|INFO|br-int<->unix#2: 213 flow_mods in the 53 s starting 58 s ago (126 adds, 87 deletes)
Feb 23 17:17:06 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00352|connmgr|INFO|br-ex<->unix#1253: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:17:20 ip-10-0-136-68 systemd[1]: run-runc-97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923-runc.jehXRK.mount: Succeeded.
Feb 23 17:17:21 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00353|connmgr|INFO|br-ex<->unix#1258: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:17:36 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00354|connmgr|INFO|br-ex<->unix#1266: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:17:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:17:46.015551767Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be64c744b92d1b1df463d84d8277c38069f3ce4e8e95ce84d4a6ffac6dc53710" id=50c65e44-0deb-4aaa-ac36-3219cb7f51a7 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:17:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:17:46.015771273Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:884319081b9650108dd2a638921522ef3df145e924d566325c975ca28709af4c,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be64c744b92d1b1df463d84d8277c38069f3ce4e8e95ce84d4a6ffac6dc53710],Size_:350038335,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=50c65e44-0deb-4aaa-ac36-3219cb7f51a7 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:17:51 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00355|connmgr|INFO|br-ex<->unix#1271: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:17:56 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00356|connmgr|INFO|br-int<->unix#2: 18 flow_mods in the 39 s starting 59 s ago (11 adds, 7 deletes)
Feb 23 17:18:06 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00357|connmgr|INFO|br-ex<->unix#1280: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:18:21 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00358|connmgr|INFO|br-ex<->unix#1285: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:18:36 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00359|connmgr|INFO|br-ex<->unix#1293: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:18:51 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00360|connmgr|INFO|br-ex<->unix#1298: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:18:56 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00361|connmgr|INFO|br-int<->unix#2: 27 flow_mods in the 2 s starting 12 s ago (11 adds, 16 deletes)
Feb 23 17:19:06 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00362|connmgr|INFO|br-ex<->unix#1306: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:19:21 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00363|connmgr|INFO|br-ex<->unix#1311: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:19:36 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00364|connmgr|INFO|br-ex<->unix#1319: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:19:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:45.770904 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4]
Feb 23 17:19:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:45.771103 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4" podUID=6d75c369-887c-42d2-94c1-40cd36f882c3 containerName="csi-driver" containerID="cri-o://c3188b0a36fa50f052a0149071dc5c4b600fc54ba5d23bbc6cbcb7dd39c833a2" gracePeriod=30
Feb 23 17:19:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:45.771122 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4" podUID=6d75c369-887c-42d2-94c1-40cd36f882c3 containerName="csi-liveness-probe" containerID="cri-o://37a5cebc68b5368861f3f1a0d0160b179ec7a163adae4630de7d03c4ae70a883" gracePeriod=30
Feb 23 17:19:45 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:19:45.771524169Z" level=info msg="Stopping container: 37a5cebc68b5368861f3f1a0d0160b179ec7a163adae4630de7d03c4ae70a883 (timeout: 30s)" id=569a4752-081b-4ef3-9c31-3ab406c7ec43 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:19:45 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:19:45.771530969Z" level=info msg="Stopping container: c3188b0a36fa50f052a0149071dc5c4b600fc54ba5d23bbc6cbcb7dd39c833a2 (timeout: 30s)" id=cc1df510-2ebf-4ab0-8acf-151c05526c56 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:19:45 ip-10-0-136-68 conmon[3534]: conmon 37a5cebc68b5368861f3 : container 3559 exited with status 2
Feb 23 17:19:45 ip-10-0-136-68 systemd[1]: crio-37a5cebc68b5368861f3f1a0d0160b179ec7a163adae4630de7d03c4ae70a883.scope: Succeeded.
Feb 23 17:19:45 ip-10-0-136-68 systemd[1]: crio-37a5cebc68b5368861f3f1a0d0160b179ec7a163adae4630de7d03c4ae70a883.scope: Consumed 472ms CPU time
Feb 23 17:19:45 ip-10-0-136-68 systemd[1]: crio-conmon-37a5cebc68b5368861f3f1a0d0160b179ec7a163adae4630de7d03c4ae70a883.scope: Succeeded.
Feb 23 17:19:45 ip-10-0-136-68 systemd[1]: crio-conmon-37a5cebc68b5368861f3f1a0d0160b179ec7a163adae4630de7d03c4ae70a883.scope: Consumed 26ms CPU time
Feb 23 17:19:45 ip-10-0-136-68 conmon[2780]: conmon c3188b0a36fa50f052a0 : container 2865 exited with status 2
Feb 23 17:19:45 ip-10-0-136-68 systemd[1]: crio-c3188b0a36fa50f052a0149071dc5c4b600fc54ba5d23bbc6cbcb7dd39c833a2.scope: Succeeded.
Feb 23 17:19:45 ip-10-0-136-68 systemd[1]: crio-c3188b0a36fa50f052a0149071dc5c4b600fc54ba5d23bbc6cbcb7dd39c833a2.scope: Consumed 354ms CPU time
Feb 23 17:19:45 ip-10-0-136-68 systemd[1]: crio-conmon-c3188b0a36fa50f052a0149071dc5c4b600fc54ba5d23bbc6cbcb7dd39c833a2.scope: Succeeded.
Feb 23 17:19:45 ip-10-0-136-68 systemd[1]: crio-conmon-c3188b0a36fa50f052a0149071dc5c4b600fc54ba5d23bbc6cbcb7dd39c833a2.scope: Consumed 25ms CPU time
Feb 23 17:19:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:45.816917 2112 plugin_watcher.go:215] "Removing socket path from desired state cache" path="/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock"
Feb 23 17:19:45 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:45.818007 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4" podUID=6d75c369-887c-42d2-94c1-40cd36f882c3 containerName="csi-node-driver-registrar" containerID="cri-o://7a19d4cdc406d3bfeb311cefe8957681687f6be4397687ff4be7ba32e31ad511" gracePeriod=30
Feb 23 17:19:45 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:19:45.818217089Z" level=info msg="Stopping container: 7a19d4cdc406d3bfeb311cefe8957681687f6be4397687ff4be7ba32e31ad511 (timeout: 30s)" id=e25be340-2320-4699-8702-9b0926504cc2 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:19:45 ip-10-0-136-68 systemd[1]: crio-7a19d4cdc406d3bfeb311cefe8957681687f6be4397687ff4be7ba32e31ad511.scope: Succeeded.
Feb 23 17:19:45 ip-10-0-136-68 systemd[1]: crio-7a19d4cdc406d3bfeb311cefe8957681687f6be4397687ff4be7ba32e31ad511.scope: Consumed 95ms CPU time
Feb 23 17:19:45 ip-10-0-136-68 systemd[1]: crio-conmon-7a19d4cdc406d3bfeb311cefe8957681687f6be4397687ff4be7ba32e31ad511.scope: Succeeded.
Feb 23 17:19:45 ip-10-0-136-68 systemd[1]: crio-conmon-7a19d4cdc406d3bfeb311cefe8957681687f6be4397687ff4be7ba32e31ad511.scope: Consumed 25ms CPU time
Feb 23 17:19:45 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-92243ebf60458b4607b3a9c07ca6eeea6fe9efe64c0d2305fcc353450ab1cd04-merged.mount: Succeeded.
Feb 23 17:19:45 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-92243ebf60458b4607b3a9c07ca6eeea6fe9efe64c0d2305fcc353450ab1cd04-merged.mount: Consumed 0 CPU time
Feb 23 17:19:45 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:19:45.926377006Z" level=info msg="Stopped container 37a5cebc68b5368861f3f1a0d0160b179ec7a163adae4630de7d03c4ae70a883: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4/csi-liveness-probe" id=569a4752-081b-4ef3-9c31-3ab406c7ec43 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:19:45 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-dfd84aaa07ea8b938efa3e462dc39c174b80d22bfdaa67a21ee23e20b661a6b7-merged.mount: Succeeded.
Feb 23 17:19:45 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-dfd84aaa07ea8b938efa3e462dc39c174b80d22bfdaa67a21ee23e20b661a6b7-merged.mount: Consumed 0 CPU time
Feb 23 17:19:45 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:19:45.947437104Z" level=info msg="Stopped container c3188b0a36fa50f052a0149071dc5c4b600fc54ba5d23bbc6cbcb7dd39c833a2: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4/csi-driver" id=cc1df510-2ebf-4ab0-8acf-151c05526c56 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:19:45 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-badf68ff44c54845e4306db5ccc7daf46b53270b6c7b433844369d89d2e7b817-merged.mount: Succeeded.
Feb 23 17:19:45 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-badf68ff44c54845e4306db5ccc7daf46b53270b6c7b433844369d89d2e7b817-merged.mount: Consumed 0 CPU time
Feb 23 17:19:45 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:19:45.978544958Z" level=info msg="Stopped container 7a19d4cdc406d3bfeb311cefe8957681687f6be4397687ff4be7ba32e31ad511: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4/csi-node-driver-registrar" id=e25be340-2320-4699-8702-9b0926504cc2 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:19:45 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:19:45.978936077Z" level=info msg="Stopping pod sandbox: 7c2a5826b36e46e65e450ff07d3bed04375fc5b7b42476ff07959d461d58ab16" id=34910c69-cfc7-4a35-9b39-6b32044d2059 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:19:45 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-e0577252ec16141d4cb58e64ea8d9fc0d638b505d474306ed4789a4eb669e0fd-merged.mount: Succeeded.
Feb 23 17:19:45 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-e0577252ec16141d4cb58e64ea8d9fc0d638b505d474306ed4789a4eb669e0fd-merged.mount: Consumed 0 CPU time
Feb 23 17:19:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:19:46.103717166Z" level=info msg="Stopped pod sandbox: 7c2a5826b36e46e65e450ff07d3bed04375fc5b7b42476ff07959d461d58ab16" id=34910c69-cfc7-4a35-9b39-6b32044d2059 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.219480 2112 scope.go:115] "RemoveContainer" containerID="7a19d4cdc406d3bfeb311cefe8957681687f6be4397687ff4be7ba32e31ad511"
Feb 23 17:19:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:19:46.220166495Z" level=info msg="Removing container: 7a19d4cdc406d3bfeb311cefe8957681687f6be4397687ff4be7ba32e31ad511" id=2b2b39aa-092b-4514-8307-ddb2b4a4bd5b name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:19:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:19:46.238272852Z" level=info msg="Removed container 7a19d4cdc406d3bfeb311cefe8957681687f6be4397687ff4be7ba32e31ad511: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4/csi-node-driver-registrar" id=2b2b39aa-092b-4514-8307-ddb2b4a4bd5b name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.238415 2112 scope.go:115] "RemoveContainer" containerID="c3188b0a36fa50f052a0149071dc5c4b600fc54ba5d23bbc6cbcb7dd39c833a2"
Feb 23 17:19:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:19:46.239115634Z" level=info msg="Removing container: c3188b0a36fa50f052a0149071dc5c4b600fc54ba5d23bbc6cbcb7dd39c833a2" id=104b09f9-01e2-4b54-b6f6-8e45c91f69c2 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:19:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:19:46.258081605Z" level=info msg="Removed container c3188b0a36fa50f052a0149071dc5c4b600fc54ba5d23bbc6cbcb7dd39c833a2: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4/csi-driver" id=104b09f9-01e2-4b54-b6f6-8e45c91f69c2 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.258302 2112 scope.go:115] "RemoveContainer" containerID="37a5cebc68b5368861f3f1a0d0160b179ec7a163adae4630de7d03c4ae70a883"
Feb 23 17:19:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:19:46.258932286Z" level=info msg="Removing container: 37a5cebc68b5368861f3f1a0d0160b179ec7a163adae4630de7d03c4ae70a883" id=bcf06051-3644-4aad-8a5b-2c5fb0e96632 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.263352 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6d75c369-887c-42d2-94c1-40cd36f882c3-kubelet-dir\") pod \"6d75c369-887c-42d2-94c1-40cd36f882c3\" (UID: \"6d75c369-887c-42d2-94c1-40cd36f882c3\") "
Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.263386 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/6d75c369-887c-42d2-94c1-40cd36f882c3-device-dir\") pod \"6d75c369-887c-42d2-94c1-40cd36f882c3\" (UID: \"6d75c369-887c-42d2-94c1-40cd36f882c3\") "
Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.263404 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6d75c369-887c-42d2-94c1-40cd36f882c3-registration-dir\") pod \"6d75c369-887c-42d2-94c1-40cd36f882c3\" (UID: \"6d75c369-887c-42d2-94c1-40cd36f882c3\") "
Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.263400 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d75c369-887c-42d2-94c1-40cd36f882c3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6d75c369-887c-42d2-94c1-40cd36f882c3" (UID: "6d75c369-887c-42d2-94c1-40cd36f882c3"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.263422 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"plugin-dir\" (UniqueName: \"kubernetes.io/host-path/6d75c369-887c-42d2-94c1-40cd36f882c3-plugin-dir\") pod \"6d75c369-887c-42d2-94c1-40cd36f882c3\" (UID: \"6d75c369-887c-42d2-94c1-40cd36f882c3\") "
Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.263440 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d75c369-887c-42d2-94c1-40cd36f882c3-device-dir" (OuterVolumeSpecName: "device-dir") pod "6d75c369-887c-42d2-94c1-40cd36f882c3" (UID: "6d75c369-887c-42d2-94c1-40cd36f882c3"). InnerVolumeSpecName "device-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.263449 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhxvk\" (UniqueName: \"kubernetes.io/projected/6d75c369-887c-42d2-94c1-40cd36f882c3-kube-api-access-xhxvk\") pod \"6d75c369-887c-42d2-94c1-40cd36f882c3\" (UID: \"6d75c369-887c-42d2-94c1-40cd36f882c3\") "
Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.263463 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d75c369-887c-42d2-94c1-40cd36f882c3-registration-dir" (OuterVolumeSpecName: "registration-dir") pod "6d75c369-887c-42d2-94c1-40cd36f882c3" (UID: "6d75c369-887c-42d2-94c1-40cd36f882c3"). InnerVolumeSpecName "registration-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.263478 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"non-standard-root-system-trust-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d75c369-887c-42d2-94c1-40cd36f882c3-non-standard-root-system-trust-ca-bundle\") pod \"6d75c369-887c-42d2-94c1-40cd36f882c3\" (UID: \"6d75c369-887c-42d2-94c1-40cd36f882c3\") "
Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.263485 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d75c369-887c-42d2-94c1-40cd36f882c3-plugin-dir" (OuterVolumeSpecName: "plugin-dir") pod "6d75c369-887c-42d2-94c1-40cd36f882c3" (UID: "6d75c369-887c-42d2-94c1-40cd36f882c3"). InnerVolumeSpecName "plugin-dir".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.263557 2112 reconciler.go:399] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6d75c369-887c-42d2-94c1-40cd36f882c3-kubelet-dir\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.263567 2112 reconciler.go:399] "Volume detached for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/6d75c369-887c-42d2-94c1-40cd36f882c3-device-dir\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.263577 2112 reconciler.go:399] "Volume detached for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6d75c369-887c-42d2-94c1-40cd36f882c3-registration-dir\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.263585 2112 reconciler.go:399] "Volume detached for volume \"plugin-dir\" (UniqueName: \"kubernetes.io/host-path/6d75c369-887c-42d2-94c1-40cd36f882c3-plugin-dir\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:19:46.263743 2112 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/6d75c369-887c-42d2-94c1-40cd36f882c3/volumes/kubernetes.io~configmap/non-standard-root-system-trust-ca-bundle: clearQuota called, but quotas disabled Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.263928 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d75c369-887c-42d2-94c1-40cd36f882c3-non-standard-root-system-trust-ca-bundle" (OuterVolumeSpecName: "non-standard-root-system-trust-ca-bundle") pod "6d75c369-887c-42d2-94c1-40cd36f882c3" (UID: "6d75c369-887c-42d2-94c1-40cd36f882c3"). 
InnerVolumeSpecName "non-standard-root-system-trust-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.275873 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d75c369-887c-42d2-94c1-40cd36f882c3-kube-api-access-xhxvk" (OuterVolumeSpecName: "kube-api-access-xhxvk") pod "6d75c369-887c-42d2-94c1-40cd36f882c3" (UID: "6d75c369-887c-42d2-94c1-40cd36f882c3"). InnerVolumeSpecName "kube-api-access-xhxvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:19:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:19:46.276113028Z" level=info msg="Removed container 37a5cebc68b5368861f3f1a0d0160b179ec7a163adae4630de7d03c4ae70a883: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4/csi-liveness-probe" id=bcf06051-3644-4aad-8a5b-2c5fb0e96632 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:19:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:19:46.277117747Z" level=info msg="Stopping pod sandbox: 7c2a5826b36e46e65e450ff07d3bed04375fc5b7b42476ff07959d461d58ab16" id=fb200303-831c-4f6f-abbd-a577cc68ad00 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:19:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:19:46.277149261Z" level=info msg="Stopped pod sandbox (already stopped): 7c2a5826b36e46e65e450ff07d3bed04375fc5b7b42476ff07959d461d58ab16" id=fb200303-831c-4f6f-abbd-a577cc68ad00 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:19:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:19:46.277329865Z" level=info msg="Removing pod sandbox: 7c2a5826b36e46e65e450ff07d3bed04375fc5b7b42476ff07959d461d58ab16" id=a12ff2b0-bae9-4a05-ad53-624f89f4d658 name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 17:19:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:19:46.285769565Z" level=info msg="Removed pod sandbox: 7c2a5826b36e46e65e450ff07d3bed04375fc5b7b42476ff07959d461d58ab16" 
id=a12ff2b0-bae9-4a05-ad53-624f89f4d658 name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:19:46.286642 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d96ff9d4729d98a0282336bb6314c646dd47938255774920eae33e5bda1f10d\": container with ID starting with 0d96ff9d4729d98a0282336bb6314c646dd47938255774920eae33e5bda1f10d not found: ID does not exist" containerID="0d96ff9d4729d98a0282336bb6314c646dd47938255774920eae33e5bda1f10d" Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.286689 2112 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="0d96ff9d4729d98a0282336bb6314c646dd47938255774920eae33e5bda1f10d" err="rpc error: code = NotFound desc = could not find container \"0d96ff9d4729d98a0282336bb6314c646dd47938255774920eae33e5bda1f10d\": container with ID starting with 0d96ff9d4729d98a0282336bb6314c646dd47938255774920eae33e5bda1f10d not found: ID does not exist" Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:19:46.286922 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cacd2022bb32e91792019f864eb5ec430ac26b4d3f2223bca19e54ed2cc3854\": container with ID starting with 9cacd2022bb32e91792019f864eb5ec430ac26b4d3f2223bca19e54ed2cc3854 not found: ID does not exist" containerID="9cacd2022bb32e91792019f864eb5ec430ac26b4d3f2223bca19e54ed2cc3854" Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.286948 2112 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="9cacd2022bb32e91792019f864eb5ec430ac26b4d3f2223bca19e54ed2cc3854" err="rpc error: code = NotFound desc = could not find container \"9cacd2022bb32e91792019f864eb5ec430ac26b4d3f2223bca19e54ed2cc3854\": container with ID starting with 
9cacd2022bb32e91792019f864eb5ec430ac26b4d3f2223bca19e54ed2cc3854 not found: ID does not exist" Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:19:46.287147 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35ef8eadfa1d96a217874ea7aa4a8d3d2dbdf7cd34bb8073fd6cfb47078dd3b3\": container with ID starting with 35ef8eadfa1d96a217874ea7aa4a8d3d2dbdf7cd34bb8073fd6cfb47078dd3b3 not found: ID does not exist" containerID="35ef8eadfa1d96a217874ea7aa4a8d3d2dbdf7cd34bb8073fd6cfb47078dd3b3" Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.287169 2112 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="35ef8eadfa1d96a217874ea7aa4a8d3d2dbdf7cd34bb8073fd6cfb47078dd3b3" err="rpc error: code = NotFound desc = could not find container \"35ef8eadfa1d96a217874ea7aa4a8d3d2dbdf7cd34bb8073fd6cfb47078dd3b3\": container with ID starting with 35ef8eadfa1d96a217874ea7aa4a8d3d2dbdf7cd34bb8073fd6cfb47078dd3b3 not found: ID does not exist" Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.364118 2112 reconciler.go:399] "Volume detached for volume \"kube-api-access-xhxvk\" (UniqueName: \"kubernetes.io/projected/6d75c369-887c-42d2-94c1-40cd36f882c3-kube-api-access-xhxvk\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.364139 2112 reconciler.go:399] "Volume detached for volume \"non-standard-root-system-trust-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d75c369-887c-42d2-94c1-40cd36f882c3-non-standard-root-system-trust-ca-bundle\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.662112 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4" 
event=&{ID:6d75c369-887c-42d2-94c1-40cd36f882c3 Type:ContainerDied Data:37a5cebc68b5368861f3f1a0d0160b179ec7a163adae4630de7d03c4ae70a883} Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.662153 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4" event=&{ID:6d75c369-887c-42d2-94c1-40cd36f882c3 Type:ContainerDied Data:7a19d4cdc406d3bfeb311cefe8957681687f6be4397687ff4be7ba32e31ad511} Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.662170 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4" event=&{ID:6d75c369-887c-42d2-94c1-40cd36f882c3 Type:ContainerDied Data:c3188b0a36fa50f052a0149071dc5c4b600fc54ba5d23bbc6cbcb7dd39c833a2} Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.662185 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4" event=&{ID:6d75c369-887c-42d2-94c1-40cd36f882c3 Type:ContainerDied Data:7c2a5826b36e46e65e450ff07d3bed04375fc5b7b42476ff07959d461d58ab16} Feb 23 17:19:46 ip-10-0-136-68 systemd[1]: Removed slice libcontainer container kubepods-burstable-pod6d75c369_887c_42d2_94c1_40cd36f882c3.slice. 
Feb 23 17:19:46 ip-10-0-136-68 systemd[1]: kubepods-burstable-pod6d75c369_887c_42d2_94c1_40cd36f882c3.slice: Consumed 1.001s CPU time Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.684294 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4] Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.695762 2112 kubelet.go:2129] "SyncLoop REMOVE" source="api" pods=[openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5hqp4] Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.738535 2112 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7] Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.738567 2112 topology_manager.go:205] "Topology Admit Handler" Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:19:46.738617 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6d75c369-887c-42d2-94c1-40cd36f882c3" containerName="csi-driver" Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.738626 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d75c369-887c-42d2-94c1-40cd36f882c3" containerName="csi-driver" Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:19:46.738634 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6d75c369-887c-42d2-94c1-40cd36f882c3" containerName="csi-liveness-probe" Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.738640 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d75c369-887c-42d2-94c1-40cd36f882c3" containerName="csi-liveness-probe" Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:19:46.738647 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6d75c369-887c-42d2-94c1-40cd36f882c3" containerName="csi-node-driver-registrar" Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.738651 2112 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="6d75c369-887c-42d2-94c1-40cd36f882c3" containerName="csi-node-driver-registrar" Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.738713 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="6d75c369-887c-42d2-94c1-40cd36f882c3" containerName="csi-liveness-probe" Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.738722 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="6d75c369-887c-42d2-94c1-40cd36f882c3" containerName="csi-node-driver-registrar" Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.738728 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="6d75c369-887c-42d2-94c1-40cd36f882c3" containerName="csi-driver" Feb 23 17:19:46 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-pod0976617f_18ed_4a73_a7d8_ac54cf69ab93.slice. Feb 23 17:19:46 ip-10-0-136-68 systemd[1]: run-netns-87607df5\x2d8ea2\x2d49c2\x2d9c4b\x2ddf31fd2d648a.mount: Succeeded. Feb 23 17:19:46 ip-10-0-136-68 systemd[1]: run-netns-87607df5\x2d8ea2\x2d49c2\x2d9c4b\x2ddf31fd2d648a.mount: Consumed 0 CPU time Feb 23 17:19:46 ip-10-0-136-68 systemd[1]: run-ipcns-87607df5\x2d8ea2\x2d49c2\x2d9c4b\x2ddf31fd2d648a.mount: Succeeded. Feb 23 17:19:46 ip-10-0-136-68 systemd[1]: run-ipcns-87607df5\x2d8ea2\x2d49c2\x2d9c4b\x2ddf31fd2d648a.mount: Consumed 0 CPU time Feb 23 17:19:46 ip-10-0-136-68 systemd[1]: run-utsns-87607df5\x2d8ea2\x2d49c2\x2d9c4b\x2ddf31fd2d648a.mount: Succeeded. Feb 23 17:19:46 ip-10-0-136-68 systemd[1]: run-utsns-87607df5\x2d8ea2\x2d49c2\x2d9c4b\x2ddf31fd2d648a.mount: Consumed 0 CPU time Feb 23 17:19:46 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-7c2a5826b36e46e65e450ff07d3bed04375fc5b7b42476ff07959d461d58ab16-userdata-shm.mount: Succeeded. 
Feb 23 17:19:46 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-7c2a5826b36e46e65e450ff07d3bed04375fc5b7b42476ff07959d461d58ab16-userdata-shm.mount: Consumed 0 CPU time Feb 23 17:19:46 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-6d75c369\x2d887c\x2d42d2\x2d94c1\x2d40cd36f882c3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxhxvk.mount: Succeeded. Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.808074 2112 reconciler.go:147] "OperationExecutor.UnregisterPlugin started" plugin={SocketPath:/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock Timestamp:2023-02-23 16:32:50.958874752 +0000 UTC m=+5.916039213 Handler:0x7399840 Name:ebs.csi.aws.com} Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.808107 2112 csi_plugin.go:178] kubernetes.io/csi: registrationHandler.DeRegisterPlugin request for plugin ebs.csi.aws.com Feb 23 17:19:46 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-6d75c369\x2d887c\x2d42d2\x2d94c1\x2d40cd36f882c3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxhxvk.mount: Consumed 0 CPU time Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.867291 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6xs2\" (UniqueName: \"kubernetes.io/projected/0976617f-18ed-4a73-a7d8-ac54cf69ab93-kube-api-access-r6xs2\") pod \"aws-ebs-csi-driver-node-ncxb7\" (UID: \"0976617f-18ed-4a73-a7d8-ac54cf69ab93\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.867361 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"non-standard-root-system-trust-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0976617f-18ed-4a73-a7d8-ac54cf69ab93-non-standard-root-system-trust-ca-bundle\") pod \"aws-ebs-csi-driver-node-ncxb7\" (UID: \"0976617f-18ed-4a73-a7d8-ac54cf69ab93\") " 
pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.867387 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/0976617f-18ed-4a73-a7d8-ac54cf69ab93-device-dir\") pod \"aws-ebs-csi-driver-node-ncxb7\" (UID: \"0976617f-18ed-4a73-a7d8-ac54cf69ab93\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.867412 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/0976617f-18ed-4a73-a7d8-ac54cf69ab93-etc-selinux\") pod \"aws-ebs-csi-driver-node-ncxb7\" (UID: \"0976617f-18ed-4a73-a7d8-ac54cf69ab93\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.867456 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0976617f-18ed-4a73-a7d8-ac54cf69ab93-registration-dir\") pod \"aws-ebs-csi-driver-node-ncxb7\" (UID: \"0976617f-18ed-4a73-a7d8-ac54cf69ab93\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.867480 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/0976617f-18ed-4a73-a7d8-ac54cf69ab93-sys-fs\") pod \"aws-ebs-csi-driver-node-ncxb7\" (UID: \"0976617f-18ed-4a73-a7d8-ac54cf69ab93\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.867503 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-dir\" (UniqueName: 
\"kubernetes.io/host-path/0976617f-18ed-4a73-a7d8-ac54cf69ab93-plugin-dir\") pod \"aws-ebs-csi-driver-node-ncxb7\" (UID: \"0976617f-18ed-4a73-a7d8-ac54cf69ab93\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.867526 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0976617f-18ed-4a73-a7d8-ac54cf69ab93-kubelet-dir\") pod \"aws-ebs-csi-driver-node-ncxb7\" (UID: \"0976617f-18ed-4a73-a7d8-ac54cf69ab93\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.968650 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"non-standard-root-system-trust-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0976617f-18ed-4a73-a7d8-ac54cf69ab93-non-standard-root-system-trust-ca-bundle\") pod \"aws-ebs-csi-driver-node-ncxb7\" (UID: \"0976617f-18ed-4a73-a7d8-ac54cf69ab93\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.968723 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/0976617f-18ed-4a73-a7d8-ac54cf69ab93-device-dir\") pod \"aws-ebs-csi-driver-node-ncxb7\" (UID: \"0976617f-18ed-4a73-a7d8-ac54cf69ab93\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.968765 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/0976617f-18ed-4a73-a7d8-ac54cf69ab93-etc-selinux\") pod \"aws-ebs-csi-driver-node-ncxb7\" (UID: \"0976617f-18ed-4a73-a7d8-ac54cf69ab93\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 
17:19:46.968856 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/0976617f-18ed-4a73-a7d8-ac54cf69ab93-etc-selinux\") pod \"aws-ebs-csi-driver-node-ncxb7\" (UID: \"0976617f-18ed-4a73-a7d8-ac54cf69ab93\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.968866 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/0976617f-18ed-4a73-a7d8-ac54cf69ab93-device-dir\") pod \"aws-ebs-csi-driver-node-ncxb7\" (UID: \"0976617f-18ed-4a73-a7d8-ac54cf69ab93\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.968903 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0976617f-18ed-4a73-a7d8-ac54cf69ab93-registration-dir\") pod \"aws-ebs-csi-driver-node-ncxb7\" (UID: \"0976617f-18ed-4a73-a7d8-ac54cf69ab93\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.968931 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/0976617f-18ed-4a73-a7d8-ac54cf69ab93-sys-fs\") pod \"aws-ebs-csi-driver-node-ncxb7\" (UID: \"0976617f-18ed-4a73-a7d8-ac54cf69ab93\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.968957 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"plugin-dir\" (UniqueName: \"kubernetes.io/host-path/0976617f-18ed-4a73-a7d8-ac54cf69ab93-plugin-dir\") pod \"aws-ebs-csi-driver-node-ncxb7\" (UID: \"0976617f-18ed-4a73-a7d8-ac54cf69ab93\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 17:19:46 
ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.968989 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0976617f-18ed-4a73-a7d8-ac54cf69ab93-kubelet-dir\") pod \"aws-ebs-csi-driver-node-ncxb7\" (UID: \"0976617f-18ed-4a73-a7d8-ac54cf69ab93\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.969039 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-r6xs2\" (UniqueName: \"kubernetes.io/projected/0976617f-18ed-4a73-a7d8-ac54cf69ab93-kube-api-access-r6xs2\") pod \"aws-ebs-csi-driver-node-ncxb7\" (UID: \"0976617f-18ed-4a73-a7d8-ac54cf69ab93\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.969121 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0976617f-18ed-4a73-a7d8-ac54cf69ab93-registration-dir\") pod \"aws-ebs-csi-driver-node-ncxb7\" (UID: \"0976617f-18ed-4a73-a7d8-ac54cf69ab93\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.969176 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"plugin-dir\" (UniqueName: \"kubernetes.io/host-path/0976617f-18ed-4a73-a7d8-ac54cf69ab93-plugin-dir\") pod \"aws-ebs-csi-driver-node-ncxb7\" (UID: \"0976617f-18ed-4a73-a7d8-ac54cf69ab93\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.969222 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/0976617f-18ed-4a73-a7d8-ac54cf69ab93-sys-fs\") pod \"aws-ebs-csi-driver-node-ncxb7\" (UID: \"0976617f-18ed-4a73-a7d8-ac54cf69ab93\") " 
pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.969265 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0976617f-18ed-4a73-a7d8-ac54cf69ab93-kubelet-dir\") pod \"aws-ebs-csi-driver-node-ncxb7\" (UID: \"0976617f-18ed-4a73-a7d8-ac54cf69ab93\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.969451 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"non-standard-root-system-trust-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0976617f-18ed-4a73-a7d8-ac54cf69ab93-non-standard-root-system-trust-ca-bundle\") pod \"aws-ebs-csi-driver-node-ncxb7\" (UID: \"0976617f-18ed-4a73-a7d8-ac54cf69ab93\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 17:19:46 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:46.983932 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6xs2\" (UniqueName: \"kubernetes.io/projected/0976617f-18ed-4a73-a7d8-ac54cf69ab93-kube-api-access-r6xs2\") pod \"aws-ebs-csi-driver-node-ncxb7\" (UID: \"0976617f-18ed-4a73-a7d8-ac54cf69ab93\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 17:19:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:47.051824 2112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 17:19:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:19:47.052285433Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/POD" id=d89507a6-269b-4f08-8ec2-e263c1e58778 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:19:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:19:47.052344013Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:19:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:19:47.069086695Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=d89507a6-269b-4f08-8ec2-e263c1e58778 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:19:47 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:19:47.072336 2112 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0976617f_18ed_4a73_a7d8_ac54cf69ab93.slice/crio-19a2bd7297b4ef4216e524d278c8c42f3017a2f792625499abc6e4bc17f836a5.scope WatchSource:0}: Error finding container 19a2bd7297b4ef4216e524d278c8c42f3017a2f792625499abc6e4bc17f836a5: Status 404 returned error can't find the container with id 19a2bd7297b4ef4216e524d278c8c42f3017a2f792625499abc6e4bc17f836a5 Feb 23 17:19:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:19:47.074173678Z" level=info msg="Ran pod sandbox 19a2bd7297b4ef4216e524d278c8c42f3017a2f792625499abc6e4bc17f836a5 with infra container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/POD" id=d89507a6-269b-4f08-8ec2-e263c1e58778 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:19:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:19:47.074887537Z" level=info msg="Checking image status: 
registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=1c39edf0-c54a-4c28-afb9-70986204493a name=/runtime.v1.ImageService/ImageStatus Feb 23 17:19:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:19:47.075052344Z" level=info msg="Image registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605 not found" id=1c39edf0-c54a-4c28-afb9-70986204493a name=/runtime.v1.ImageService/ImageStatus Feb 23 17:19:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:47.075349 2112 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 17:19:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:19:47.075599298Z" level=info msg="Pulling image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=ef420d48-b36c-41bb-85fd-8cf1e01d93fb name=/runtime.v1.ImageService/PullImage Feb 23 17:19:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:19:47.163518314Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605\"" Feb 23 17:19:47 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:47.664447 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerStarted Data:19a2bd7297b4ef4216e524d278c8c42f3017a2f792625499abc6e4bc17f836a5} Feb 23 17:19:48 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:48.120240 2112 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=6d75c369-887c-42d2-94c1-40cd36f882c3 path="/var/lib/kubelet/pods/6d75c369-887c-42d2-94c1-40cd36f882c3/volumes" Feb 23 17:19:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:19:48.324141558Z" level=info msg="Trying to access 
\"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605\"" Feb 23 17:19:51 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00365|connmgr|INFO|br-ex<->unix#1324: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:19:53 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:19:53.994354721Z" level=info msg="Pulled image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=ef420d48-b36c-41bb-85fd-8cf1e01d93fb name=/runtime.v1.ImageService/PullImage Feb 23 17:19:53 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:19:53.995150488Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=8ccab44d-cfdb-404e-8aaf-d17684aaa2f9 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:19:53 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:19:53.996351334Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=8ccab44d-cfdb-404e-8aaf-d17684aaa2f9 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:19:53 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:19:53.997083048Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=57fc80a6-ba0c-4168-8ef0-0788084d6cd1 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:19:53 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:19:53.997167175Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:19:54 ip-10-0-136-68 systemd[1]: Started 
crio-conmon-402127c227490abbf7a0bbba8e9de5f35184e1dc73a7f98d49399a24df2e655a.scope. Feb 23 17:19:54 ip-10-0-136-68 systemd[1]: Started libcontainer container 402127c227490abbf7a0bbba8e9de5f35184e1dc73a7f98d49399a24df2e655a. Feb 23 17:19:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:19:54.199281617Z" level=info msg="Created container 402127c227490abbf7a0bbba8e9de5f35184e1dc73a7f98d49399a24df2e655a: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=57fc80a6-ba0c-4168-8ef0-0788084d6cd1 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:19:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:19:54.199766670Z" level=info msg="Starting container: 402127c227490abbf7a0bbba8e9de5f35184e1dc73a7f98d49399a24df2e655a" id=f161ef7f-86ee-4faa-ba71-53ff6b865ea1 name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:19:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:19:54.206715524Z" level=info msg="Started container" PID=65272 containerID=402127c227490abbf7a0bbba8e9de5f35184e1dc73a7f98d49399a24df2e655a description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver id=f161ef7f-86ee-4faa-ba71-53ff6b865ea1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=19a2bd7297b4ef4216e524d278c8c42f3017a2f792625499abc6e4bc17f836a5 Feb 23 17:19:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:19:54.216461140Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:c40704cbcee782b7629fcbe26b96c42b11444076c53ecd28f0e50f7e5efeb4d7" id=d6aae409-5622-4b8f-a598-ec68836199eb name=/runtime.v1.ImageService/ImageStatus Feb 23 17:19:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:19:54.216609780Z" level=info msg="Image status: 
&ImageStatusResponse{Image:&Image{Id:ae90363e0687fc12bc8ed8a2a77d165dc67626c1a60ee8d602e0319b2f949960,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:c40704cbcee782b7629fcbe26b96c42b11444076c53ecd28f0e50f7e5efeb4d7],Size_:368500613,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=d6aae409-5622-4b8f-a598-ec68836199eb name=/runtime.v1.ImageService/ImageStatus Feb 23 17:19:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:19:54.219026096Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:c40704cbcee782b7629fcbe26b96c42b11444076c53ecd28f0e50f7e5efeb4d7" id=b459d13a-e7f9-4dc1-97a0-8b92d36bb97f name=/runtime.v1.ImageService/ImageStatus Feb 23 17:19:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:19:54.219169566Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ae90363e0687fc12bc8ed8a2a77d165dc67626c1a60ee8d602e0319b2f949960,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:c40704cbcee782b7629fcbe26b96c42b11444076c53ecd28f0e50f7e5efeb4d7],Size_:368500613,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=b459d13a-e7f9-4dc1-97a0-8b92d36bb97f name=/runtime.v1.ImageService/ImageStatus Feb 23 17:19:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:19:54.219881620Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-node-driver-registrar" id=89f24f4f-78d9-4e36-8354-955b2badb1fc name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:19:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:19:54.219990073Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:19:54 ip-10-0-136-68 systemd[1]: Started crio-conmon-f1bf5d592147da92da4879a3130c4232bf4edaaa3fc399ec80102a403045abd4.scope. 
Feb 23 17:19:54 ip-10-0-136-68 systemd[1]: Started libcontainer container f1bf5d592147da92da4879a3130c4232bf4edaaa3fc399ec80102a403045abd4. Feb 23 17:19:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:19:54.360192131Z" level=info msg="Created container f1bf5d592147da92da4879a3130c4232bf4edaaa3fc399ec80102a403045abd4: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-node-driver-registrar" id=89f24f4f-78d9-4e36-8354-955b2badb1fc name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:19:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:19:54.360582247Z" level=info msg="Starting container: f1bf5d592147da92da4879a3130c4232bf4edaaa3fc399ec80102a403045abd4" id=372b7312-8b50-4d43-a96b-557318c38098 name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:19:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:19:54.367500567Z" level=info msg="Started container" PID=65315 containerID=f1bf5d592147da92da4879a3130c4232bf4edaaa3fc399ec80102a403045abd4 description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-node-driver-registrar id=372b7312-8b50-4d43-a96b-557318c38098 name=/runtime.v1.RuntimeService/StartContainer sandboxID=19a2bd7297b4ef4216e524d278c8c42f3017a2f792625499abc6e4bc17f836a5 Feb 23 17:19:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:54.374559 2112 plugin_watcher.go:203] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock" Feb 23 17:19:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:19:54.377540662Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:a70c56affa754b4a0cd72b5e6870bdedef0ea5b68a9252a43615fdfadc6f0986" id=118519b5-c98a-45f0-9418-a49e982efa6c name=/runtime.v1.ImageService/ImageStatus Feb 23 17:19:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:19:54.377804264Z" level=info msg="Image 
registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:a70c56affa754b4a0cd72b5e6870bdedef0ea5b68a9252a43615fdfadc6f0986 not found" id=118519b5-c98a-45f0-9418-a49e982efa6c name=/runtime.v1.ImageService/ImageStatus Feb 23 17:19:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:19:54.378304037Z" level=info msg="Pulling image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:a70c56affa754b4a0cd72b5e6870bdedef0ea5b68a9252a43615fdfadc6f0986" id=81fe1670-eb56-4839-b64b-01e5078f3a4c name=/runtime.v1.ImageService/PullImage Feb 23 17:19:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:19:54.379081292Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:a70c56affa754b4a0cd72b5e6870bdedef0ea5b68a9252a43615fdfadc6f0986\"" Feb 23 17:19:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:54.681173 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerStarted Data:f1bf5d592147da92da4879a3130c4232bf4edaaa3fc399ec80102a403045abd4} Feb 23 17:19:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:54.681418 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerStarted Data:402127c227490abbf7a0bbba8e9de5f35184e1dc73a7f98d49399a24df2e655a} Feb 23 17:19:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:54.812772 2112 reconciler.go:164] "OperationExecutor.RegisterPlugin started" plugin={SocketPath:/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock Timestamp:2023-02-23 17:19:54.374578813 +0000 UTC m=+2829.331743290 Handler: Name:} Feb 23 17:19:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:54.815180 2112 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: ebs.csi.aws.com endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock 
versions: 1.0.0 Feb 23 17:19:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:19:54.815230 2112 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: ebs.csi.aws.com at endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock Feb 23 17:19:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:19:55.867085603Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:a70c56affa754b4a0cd72b5e6870bdedef0ea5b68a9252a43615fdfadc6f0986\"" Feb 23 17:19:56 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00366|connmgr|INFO|br-int<->unix#2: 52 flow_mods in the 2 s starting 31 s ago (26 adds, 26 deletes) Feb 23 17:20:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:00.978731164Z" level=info msg="Pulled image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:a70c56affa754b4a0cd72b5e6870bdedef0ea5b68a9252a43615fdfadc6f0986" id=81fe1670-eb56-4839-b64b-01e5078f3a4c name=/runtime.v1.ImageService/PullImage Feb 23 17:20:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:00.979507533Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:a70c56affa754b4a0cd72b5e6870bdedef0ea5b68a9252a43615fdfadc6f0986" id=b72090d8-3759-4f72-be93-0fc119193f23 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:20:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:00.980742780Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e58f76855491f5bce249b50904350a7a43dfb3161623bf950b71fe1b27cf5b01,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:a70c56affa754b4a0cd72b5e6870bdedef0ea5b68a9252a43615fdfadc6f0986],Size_:366474395,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=b72090d8-3759-4f72-be93-0fc119193f23 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:20:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:00.981382095Z" level=info msg="Creating container: 
openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-liveness-probe" id=3928c490-4bad-4b48-bc73-14713ef4ace8 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:20:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:00.981455022Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:20:01 ip-10-0-136-68 systemd[1]: Started crio-conmon-edecf6e098beccc213a477b14d47a5a2bfd84277dbfd83838f39657dbd428f18.scope. Feb 23 17:20:01 ip-10-0-136-68 systemd[1]: Started libcontainer container edecf6e098beccc213a477b14d47a5a2bfd84277dbfd83838f39657dbd428f18. Feb 23 17:20:01 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:01.164476077Z" level=info msg="Created container edecf6e098beccc213a477b14d47a5a2bfd84277dbfd83838f39657dbd428f18: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-liveness-probe" id=3928c490-4bad-4b48-bc73-14713ef4ace8 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:20:01 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:01.164937200Z" level=info msg="Starting container: edecf6e098beccc213a477b14d47a5a2bfd84277dbfd83838f39657dbd428f18" id=d4c667f0-6da7-4104-bd0d-7e7df0e8a5ce name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:20:01 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:01.184043318Z" level=info msg="Started container" PID=65493 containerID=edecf6e098beccc213a477b14d47a5a2bfd84277dbfd83838f39657dbd428f18 description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-liveness-probe id=d4c667f0-6da7-4104-bd0d-7e7df0e8a5ce name=/runtime.v1.RuntimeService/StartContainer sandboxID=19a2bd7297b4ef4216e524d278c8c42f3017a2f792625499abc6e4bc17f836a5 Feb 23 17:20:01 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:01.701323 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerStarted 
Data:edecf6e098beccc213a477b14d47a5a2bfd84277dbfd83838f39657dbd428f18} Feb 23 17:20:06 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00367|connmgr|INFO|br-ex<->unix#1332: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:20:21 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00368|connmgr|INFO|br-ex<->unix#1337: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:20:22 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:22.200953 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-multus/multus-additional-cni-plugins-p9nj2] Feb 23 17:20:22 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:22.745514 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-multus/multus-additional-cni-plugins-p9nj2" podUID=2c47bc3e-0247-4d47-80e3-c168262e7976 containerName="kube-multus-additional-cni-plugins" containerID="cri-o://638ba010408e61c882c6347eee020efdaa9faa54cd24013e798cc9389080ecc5" gracePeriod=10 Feb 23 17:20:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:22.745774236Z" level=info msg="Stopping container: 638ba010408e61c882c6347eee020efdaa9faa54cd24013e798cc9389080ecc5 (timeout: 10s)" id=bb4f3eb4-e0dc-41b3-b535-173e844c4d0e name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:20:22 ip-10-0-136-68 conmon[5083]: conmon 638ba010408e61c882c6 : container 5102 exited with status 143 Feb 23 17:20:22 ip-10-0-136-68 systemd[1]: crio-638ba010408e61c882c6347eee020efdaa9faa54cd24013e798cc9389080ecc5.scope: Succeeded. Feb 23 17:20:22 ip-10-0-136-68 systemd[1]: crio-638ba010408e61c882c6347eee020efdaa9faa54cd24013e798cc9389080ecc5.scope: Consumed 25ms CPU time Feb 23 17:20:22 ip-10-0-136-68 systemd[1]: crio-conmon-638ba010408e61c882c6347eee020efdaa9faa54cd24013e798cc9389080ecc5.scope: Succeeded. 
Feb 23 17:20:22 ip-10-0-136-68 systemd[1]: crio-conmon-638ba010408e61c882c6347eee020efdaa9faa54cd24013e798cc9389080ecc5.scope: Consumed 23ms CPU time Feb 23 17:20:22 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-404e58b57dbb605b496937520641078c5aa689efba9e2a14c99bd4d5ecf68047-merged.mount: Succeeded. Feb 23 17:20:22 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-404e58b57dbb605b496937520641078c5aa689efba9e2a14c99bd4d5ecf68047-merged.mount: Consumed 0 CPU time Feb 23 17:20:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:22.938917400Z" level=info msg="Stopped container 638ba010408e61c882c6347eee020efdaa9faa54cd24013e798cc9389080ecc5: openshift-multus/multus-additional-cni-plugins-p9nj2/kube-multus-additional-cni-plugins" id=bb4f3eb4-e0dc-41b3-b535-173e844c4d0e name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:20:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:22.939359538Z" level=info msg="Stopping pod sandbox: 9b5e394fefb22e13e7f5e1168da0a4549b32227605538c6a083eba02c051f962" id=1944755c-5520-4ea0-b86a-73a99f29093c name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:20:22 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-d8156808986af1475f630909d134724b2bc719ad76631e5dee053ef9c1ef1baa-merged.mount: Succeeded. Feb 23 17:20:22 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-d8156808986af1475f630909d134724b2bc719ad76631e5dee053ef9c1ef1baa-merged.mount: Consumed 0 CPU time Feb 23 17:20:22 ip-10-0-136-68 systemd[1]: run-utsns-c0daec5b\x2dd0f8\x2d4089\x2d9ad4\x2d0e7af4d081cb.mount: Succeeded. 
Feb 23 17:20:22 ip-10-0-136-68 systemd[1]: run-utsns-c0daec5b\x2dd0f8\x2d4089\x2d9ad4\x2d0e7af4d081cb.mount: Consumed 0 CPU time Feb 23 17:20:22 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:22.984360 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-multus/network-metrics-daemon-5hc5d] Feb 23 17:20:22 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:22.984526 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-multus/network-metrics-daemon-5hc5d" podUID=9cd26ba5-46e4-40b5-81e6-74079153d58d containerName="network-metrics-daemon" containerID="cri-o://04fe0d7cc7bdec48d56fb54f5485c0f26a6d06783320171b8f23144e242095d4" gracePeriod=30 Feb 23 17:20:22 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:22.984609 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-multus/network-metrics-daemon-5hc5d" podUID=9cd26ba5-46e4-40b5-81e6-74079153d58d containerName="kube-rbac-proxy" containerID="cri-o://264e58a1f4919a17479220f33c03324c89f2a710dbd87a3b3a01852696d2d53a" gracePeriod=30 Feb 23 17:20:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:22.984818773Z" level=info msg="Stopping container: 264e58a1f4919a17479220f33c03324c89f2a710dbd87a3b3a01852696d2d53a (timeout: 30s)" id=7b68d147-bc5a-494e-a583-df2f21c5f172 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:20:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:22.984835629Z" level=info msg="Stopping container: 04fe0d7cc7bdec48d56fb54f5485c0f26a6d06783320171b8f23144e242095d4 (timeout: 30s)" id=19d5381c-e440-417d-a8b1-9cbec639d9e1 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:20:22 ip-10-0-136-68 systemd[1]: run-ipcns-c0daec5b\x2dd0f8\x2d4089\x2d9ad4\x2d0e7af4d081cb.mount: Succeeded. 
Feb 23 17:20:22 ip-10-0-136-68 systemd[1]: run-ipcns-c0daec5b\x2dd0f8\x2d4089\x2d9ad4\x2d0e7af4d081cb.mount: Consumed 0 CPU time Feb 23 17:20:23 ip-10-0-136-68 systemd[1]: crio-04fe0d7cc7bdec48d56fb54f5485c0f26a6d06783320171b8f23144e242095d4.scope: Succeeded. Feb 23 17:20:23 ip-10-0-136-68 systemd[1]: crio-04fe0d7cc7bdec48d56fb54f5485c0f26a6d06783320171b8f23144e242095d4.scope: Consumed 1.298s CPU time Feb 23 17:20:23 ip-10-0-136-68 systemd[1]: run-netns-c0daec5b\x2dd0f8\x2d4089\x2d9ad4\x2d0e7af4d081cb.mount: Succeeded. Feb 23 17:20:23 ip-10-0-136-68 systemd[1]: run-netns-c0daec5b\x2dd0f8\x2d4089\x2d9ad4\x2d0e7af4d081cb.mount: Consumed 0 CPU time Feb 23 17:20:23 ip-10-0-136-68 systemd[1]: crio-conmon-04fe0d7cc7bdec48d56fb54f5485c0f26a6d06783320171b8f23144e242095d4.scope: Succeeded. Feb 23 17:20:23 ip-10-0-136-68 systemd[1]: crio-conmon-04fe0d7cc7bdec48d56fb54f5485c0f26a6d06783320171b8f23144e242095d4.scope: Consumed 28ms CPU time Feb 23 17:20:23 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:23.024716386Z" level=info msg="Stopped pod sandbox: 9b5e394fefb22e13e7f5e1168da0a4549b32227605538c6a083eba02c051f962" id=1944755c-5520-4ea0-b86a-73a99f29093c name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:20:23 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:23.160916088Z" level=info msg="Stopped container 04fe0d7cc7bdec48d56fb54f5485c0f26a6d06783320171b8f23144e242095d4: openshift-multus/network-metrics-daemon-5hc5d/network-metrics-daemon" id=19d5381c-e440-417d-a8b1-9cbec639d9e1 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:23.232699 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hr2sj\" (UniqueName: \"kubernetes.io/projected/2c47bc3e-0247-4d47-80e3-c168262e7976-kube-api-access-hr2sj\") pod \"2c47bc3e-0247-4d47-80e3-c168262e7976\" (UID: \"2c47bc3e-0247-4d47-80e3-c168262e7976\") " Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:23.232748 
2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2c47bc3e-0247-4d47-80e3-c168262e7976-cni-sysctl-allowlist\") pod \"2c47bc3e-0247-4d47-80e3-c168262e7976\" (UID: \"2c47bc3e-0247-4d47-80e3-c168262e7976\") " Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:23.232777 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2c47bc3e-0247-4d47-80e3-c168262e7976-tuning-conf-dir\") pod \"2c47bc3e-0247-4d47-80e3-c168262e7976\" (UID: \"2c47bc3e-0247-4d47-80e3-c168262e7976\") " Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:23.232804 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2c47bc3e-0247-4d47-80e3-c168262e7976-system-cni-dir\") pod \"2c47bc3e-0247-4d47-80e3-c168262e7976\" (UID: \"2c47bc3e-0247-4d47-80e3-c168262e7976\") " Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:23.232825 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2c47bc3e-0247-4d47-80e3-c168262e7976-cnibin\") pod \"2c47bc3e-0247-4d47-80e3-c168262e7976\" (UID: \"2c47bc3e-0247-4d47-80e3-c168262e7976\") " Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:23.232844 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2c47bc3e-0247-4d47-80e3-c168262e7976-cni-binary-copy\") pod \"2c47bc3e-0247-4d47-80e3-c168262e7976\" (UID: \"2c47bc3e-0247-4d47-80e3-c168262e7976\") " Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:23.232860 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2c47bc3e-0247-4d47-80e3-c168262e7976-os-release\") pod 
\"2c47bc3e-0247-4d47-80e3-c168262e7976\" (UID: \"2c47bc3e-0247-4d47-80e3-c168262e7976\") " Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:20:23.232992 2112 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/2c47bc3e-0247-4d47-80e3-c168262e7976/volumes/kubernetes.io~configmap/cni-sysctl-allowlist: clearQuota called, but quotas disabled Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:23.233227 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c47bc3e-0247-4d47-80e3-c168262e7976-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "2c47bc3e-0247-4d47-80e3-c168262e7976" (UID: "2c47bc3e-0247-4d47-80e3-c168262e7976"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:23.233265 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c47bc3e-0247-4d47-80e3-c168262e7976-cnibin" (OuterVolumeSpecName: "cnibin") pod "2c47bc3e-0247-4d47-80e3-c168262e7976" (UID: "2c47bc3e-0247-4d47-80e3-c168262e7976"). InnerVolumeSpecName "cnibin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:23.233288 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c47bc3e-0247-4d47-80e3-c168262e7976-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "2c47bc3e-0247-4d47-80e3-c168262e7976" (UID: "2c47bc3e-0247-4d47-80e3-c168262e7976"). InnerVolumeSpecName "tuning-conf-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:23.233310 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c47bc3e-0247-4d47-80e3-c168262e7976-system-cni-dir" (OuterVolumeSpecName: "system-cni-dir") pod "2c47bc3e-0247-4d47-80e3-c168262e7976" (UID: "2c47bc3e-0247-4d47-80e3-c168262e7976"). InnerVolumeSpecName "system-cni-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:23.233333 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c47bc3e-0247-4d47-80e3-c168262e7976-os-release" (OuterVolumeSpecName: "os-release") pod "2c47bc3e-0247-4d47-80e3-c168262e7976" (UID: "2c47bc3e-0247-4d47-80e3-c168262e7976"). InnerVolumeSpecName "os-release". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:20:23.233371 2112 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/2c47bc3e-0247-4d47-80e3-c168262e7976/volumes/kubernetes.io~configmap/cni-binary-copy: clearQuota called, but quotas disabled Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:23.233586 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c47bc3e-0247-4d47-80e3-c168262e7976-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "2c47bc3e-0247-4d47-80e3-c168262e7976" (UID: "2c47bc3e-0247-4d47-80e3-c168262e7976"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:23.256843 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c47bc3e-0247-4d47-80e3-c168262e7976-kube-api-access-hr2sj" (OuterVolumeSpecName: "kube-api-access-hr2sj") pod "2c47bc3e-0247-4d47-80e3-c168262e7976" (UID: "2c47bc3e-0247-4d47-80e3-c168262e7976"). InnerVolumeSpecName "kube-api-access-hr2sj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:23.333398 2112 reconciler.go:399] "Volume detached for volume \"kube-api-access-hr2sj\" (UniqueName: \"kubernetes.io/projected/2c47bc3e-0247-4d47-80e3-c168262e7976-kube-api-access-hr2sj\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:23.333428 2112 reconciler.go:399] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2c47bc3e-0247-4d47-80e3-c168262e7976-cni-sysctl-allowlist\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:23.333439 2112 reconciler.go:399] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2c47bc3e-0247-4d47-80e3-c168262e7976-tuning-conf-dir\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:23.333448 2112 reconciler.go:399] "Volume detached for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2c47bc3e-0247-4d47-80e3-c168262e7976-system-cni-dir\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:23.333461 2112 reconciler.go:399] "Volume detached for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/2c47bc3e-0247-4d47-80e3-c168262e7976-cnibin\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:23.333471 2112 reconciler.go:399] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2c47bc3e-0247-4d47-80e3-c168262e7976-cni-binary-copy\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:23.333480 2112 reconciler.go:399] "Volume detached for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2c47bc3e-0247-4d47-80e3-c168262e7976-os-release\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:23.748325 2112 generic.go:296] "Generic (PLEG): container finished" podID=9cd26ba5-46e4-40b5-81e6-74079153d58d containerID="04fe0d7cc7bdec48d56fb54f5485c0f26a6d06783320171b8f23144e242095d4" exitCode=0 Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:23.748375 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5hc5d" event=&{ID:9cd26ba5-46e4-40b5-81e6-74079153d58d Type:ContainerDied Data:04fe0d7cc7bdec48d56fb54f5485c0f26a6d06783320171b8f23144e242095d4} Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:23.750236 2112 generic.go:296] "Generic (PLEG): container finished" podID=2c47bc3e-0247-4d47-80e3-c168262e7976 containerID="638ba010408e61c882c6347eee020efdaa9faa54cd24013e798cc9389080ecc5" exitCode=143 Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:23.750268 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p9nj2" event=&{ID:2c47bc3e-0247-4d47-80e3-c168262e7976 Type:ContainerDied Data:638ba010408e61c882c6347eee020efdaa9faa54cd24013e798cc9389080ecc5} Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 
17:20:23.750286 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p9nj2" event=&{ID:2c47bc3e-0247-4d47-80e3-c168262e7976 Type:ContainerDied Data:9b5e394fefb22e13e7f5e1168da0a4549b32227605538c6a083eba02c051f962} Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:23.750299 2112 scope.go:115] "RemoveContainer" containerID="638ba010408e61c882c6347eee020efdaa9faa54cd24013e798cc9389080ecc5" Feb 23 17:20:23 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:23.751120461Z" level=info msg="Removing container: 638ba010408e61c882c6347eee020efdaa9faa54cd24013e798cc9389080ecc5" id=11da1122-746b-44f4-b90f-1b2eaa3b1b82 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:20:23 ip-10-0-136-68 systemd[1]: Removed slice libcontainer container kubepods-burstable-pod2c47bc3e_0247_4d47_80e3_c168262e7976.slice. Feb 23 17:20:23 ip-10-0-136-68 systemd[1]: kubepods-burstable-pod2c47bc3e_0247_4d47_80e3_c168262e7976.slice: Consumed 561ms CPU time Feb 23 17:20:23 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:23.773619492Z" level=info msg="Removed container 638ba010408e61c882c6347eee020efdaa9faa54cd24013e798cc9389080ecc5: openshift-multus/multus-additional-cni-plugins-p9nj2/kube-multus-additional-cni-plugins" id=11da1122-746b-44f4-b90f-1b2eaa3b1b82 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:23.773859 2112 scope.go:115] "RemoveContainer" containerID="70f2876e293edd1469e1953225a9c5599e6f1cf5214954150ed61bf2835faa16" Feb 23 17:20:23 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:23.774611285Z" level=info msg="Removing container: 70f2876e293edd1469e1953225a9c5599e6f1cf5214954150ed61bf2835faa16" id=0f1d601a-327e-4214-8a67-73601526b1b4 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:23.774983 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" 
pods=[openshift-multus/multus-additional-cni-plugins-p9nj2] Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:23.779848 2112 kubelet.go:2129] "SyncLoop REMOVE" source="api" pods=[openshift-multus/multus-additional-cni-plugins-p9nj2] Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:23.801851 2112 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-multus/multus-additional-cni-plugins-nqwsg] Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:23.801898 2112 topology_manager.go:205] "Topology Admit Handler" Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:20:23.801972 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2c47bc3e-0247-4d47-80e3-c168262e7976" containerName="whereabouts-cni" Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:23.801984 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c47bc3e-0247-4d47-80e3-c168262e7976" containerName="whereabouts-cni" Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:20:23.801998 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2c47bc3e-0247-4d47-80e3-c168262e7976" containerName="cni-plugins" Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:23.802006 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c47bc3e-0247-4d47-80e3-c168262e7976" containerName="cni-plugins" Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:20:23.802016 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2c47bc3e-0247-4d47-80e3-c168262e7976" containerName="whereabouts-cni-bincopy" Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:23.802024 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c47bc3e-0247-4d47-80e3-c168262e7976" containerName="whereabouts-cni-bincopy" Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:20:23.802033 2112 cpu_manager.go:395] "RemoveStaleState: removing container" 
podUID="2c47bc3e-0247-4d47-80e3-c168262e7976" containerName="bond-cni-plugin" Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:23.802040 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c47bc3e-0247-4d47-80e3-c168262e7976" containerName="bond-cni-plugin" Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:20:23.802049 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2c47bc3e-0247-4d47-80e3-c168262e7976" containerName="kube-multus-additional-cni-plugins" Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:23.802057 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c47bc3e-0247-4d47-80e3-c168262e7976" containerName="kube-multus-additional-cni-plugins" Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:20:23.802069 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2c47bc3e-0247-4d47-80e3-c168262e7976" containerName="routeoverride-cni" Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:23.802078 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c47bc3e-0247-4d47-80e3-c168262e7976" containerName="routeoverride-cni" Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:20:23.802089 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2c47bc3e-0247-4d47-80e3-c168262e7976" containerName="egress-router-binary-copy" Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:23.802096 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c47bc3e-0247-4d47-80e3-c168262e7976" containerName="egress-router-binary-copy" Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:23.802152 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="2c47bc3e-0247-4d47-80e3-c168262e7976" containerName="kube-multus-additional-cni-plugins" Feb 23 17:20:23 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-pod7f25c5a9_b9c7_4220_a892_362cf6b33878.slice. 
Feb 23 17:20:23 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:23.814732343Z" level=info msg="Removed container 70f2876e293edd1469e1953225a9c5599e6f1cf5214954150ed61bf2835faa16: openshift-multus/multus-additional-cni-plugins-p9nj2/whereabouts-cni" id=0f1d601a-327e-4214-8a67-73601526b1b4 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:23.815009 2112 scope.go:115] "RemoveContainer" containerID="535c93e83c702180e094716412b833a54504f2862ef061dfa60cb17eceda8038" Feb 23 17:20:23 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:23.815788657Z" level=info msg="Removing container: 535c93e83c702180e094716412b833a54504f2862ef061dfa60cb17eceda8038" id=bdee8eaa-1c18-4238-9adb-c8581769b812 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:20:23 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:23.870026196Z" level=info msg="Removed container 535c93e83c702180e094716412b833a54504f2862ef061dfa60cb17eceda8038: openshift-multus/multus-additional-cni-plugins-p9nj2/whereabouts-cni-bincopy" id=bdee8eaa-1c18-4238-9adb-c8581769b812 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:23.870243 2112 scope.go:115] "RemoveContainer" containerID="8602029ef780cd32039d3805efe88e1356b52419b59776806ace591393868122" Feb 23 17:20:23 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:23.871019147Z" level=info msg="Removing container: 8602029ef780cd32039d3805efe88e1356b52419b59776806ace591393868122" id=c704ef3c-2f2b-4b78-95b5-f0f4f05f81b7 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:20:23 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:23.912956477Z" level=info msg="Removed container 8602029ef780cd32039d3805efe88e1356b52419b59776806ace591393868122: openshift-multus/multus-additional-cni-plugins-p9nj2/routeoverride-cni" id=c704ef3c-2f2b-4b78-95b5-f0f4f05f81b7 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:20:23 ip-10-0-136-68 
kubenswrapper[2112]: I0223 17:20:23.913181 2112 scope.go:115] "RemoveContainer" containerID="71fc3ba23434903f12c0286d43d433518eebf6cfc55b5fcf190554dcffb955c2" Feb 23 17:20:23 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:23.914369199Z" level=info msg="Removing container: 71fc3ba23434903f12c0286d43d433518eebf6cfc55b5fcf190554dcffb955c2" id=3047897b-1a09-4640-ac1d-96fb4fa126da name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:20:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-0dece7bb88a6be7d90af37987eeaafc1d34d03da7a7323a5ad0b25de9a5f78dc-merged.mount: Succeeded. Feb 23 17:20:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-0dece7bb88a6be7d90af37987eeaafc1d34d03da7a7323a5ad0b25de9a5f78dc-merged.mount: Consumed 0 CPU time Feb 23 17:20:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-d55bc9fa65349f6022edfb455fd8f61d062af1bcd63e67b657aae6f1b53e48a0-merged.mount: Succeeded. Feb 23 17:20:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-d55bc9fa65349f6022edfb455fd8f61d062af1bcd63e67b657aae6f1b53e48a0-merged.mount: Consumed 0 CPU time Feb 23 17:20:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-4e7f55f4cf022ecbdb615ad7e51fd7a32542a7d7bc7bdeecf70ba295f03d3713-merged.mount: Succeeded. Feb 23 17:20:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-4e7f55f4cf022ecbdb615ad7e51fd7a32542a7d7bc7bdeecf70ba295f03d3713-merged.mount: Consumed 0 CPU time Feb 23 17:20:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-f33bd05dbfd830daabe7f39044871f37f9fcccd369f7055ff918b19fd9968af3-merged.mount: Succeeded. Feb 23 17:20:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-f33bd05dbfd830daabe7f39044871f37f9fcccd369f7055ff918b19fd9968af3-merged.mount: Consumed 0 CPU time Feb 23 17:20:23 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-9b5e394fefb22e13e7f5e1168da0a4549b32227605538c6a083eba02c051f962-userdata-shm.mount: Succeeded. 
Feb 23 17:20:23 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-9b5e394fefb22e13e7f5e1168da0a4549b32227605538c6a083eba02c051f962-userdata-shm.mount: Consumed 0 CPU time Feb 23 17:20:23 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-2c47bc3e\x2d0247\x2d4d47\x2d80e3\x2dc168262e7976-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhr2sj.mount: Succeeded. Feb 23 17:20:23 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-2c47bc3e\x2d0247\x2d4d47\x2d80e3\x2dc168262e7976-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhr2sj.mount: Consumed 0 CPU time Feb 23 17:20:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-2ed7cd0b2afa13fb3b0070ecb098dec7f50f9fadebcd74527e493de51f5a8173-merged.mount: Succeeded. Feb 23 17:20:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-2ed7cd0b2afa13fb3b0070ecb098dec7f50f9fadebcd74527e493de51f5a8173-merged.mount: Consumed 0 CPU time Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:23.936884 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7f25c5a9-b9c7-4220-a892-362cf6b33878-system-cni-dir\") pod \"multus-additional-cni-plugins-nqwsg\" (UID: \"7f25c5a9-b9c7-4220-a892-362cf6b33878\") " pod="openshift-multus/multus-additional-cni-plugins-nqwsg" Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:23.936937 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7f25c5a9-b9c7-4220-a892-362cf6b33878-os-release\") pod \"multus-additional-cni-plugins-nqwsg\" (UID: \"7f25c5a9-b9c7-4220-a892-362cf6b33878\") " pod="openshift-multus/multus-additional-cni-plugins-nqwsg" Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:23.937078 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/7f25c5a9-b9c7-4220-a892-362cf6b33878-cnibin\") pod \"multus-additional-cni-plugins-nqwsg\" (UID: \"7f25c5a9-b9c7-4220-a892-362cf6b33878\") " pod="openshift-multus/multus-additional-cni-plugins-nqwsg" Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:23.937116 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7f25c5a9-b9c7-4220-a892-362cf6b33878-cni-binary-copy\") pod \"multus-additional-cni-plugins-nqwsg\" (UID: \"7f25c5a9-b9c7-4220-a892-362cf6b33878\") " pod="openshift-multus/multus-additional-cni-plugins-nqwsg" Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:23.937146 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7f25c5a9-b9c7-4220-a892-362cf6b33878-tuning-conf-dir\") pod \"multus-additional-cni-plugins-nqwsg\" (UID: \"7f25c5a9-b9c7-4220-a892-362cf6b33878\") " pod="openshift-multus/multus-additional-cni-plugins-nqwsg" Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:23.937173 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7f25c5a9-b9c7-4220-a892-362cf6b33878-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-nqwsg\" (UID: \"7f25c5a9-b9c7-4220-a892-362cf6b33878\") " pod="openshift-multus/multus-additional-cni-plugins-nqwsg" Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:23.937200 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22vqh\" (UniqueName: \"kubernetes.io/projected/7f25c5a9-b9c7-4220-a892-362cf6b33878-kube-api-access-22vqh\") pod \"multus-additional-cni-plugins-nqwsg\" (UID: \"7f25c5a9-b9c7-4220-a892-362cf6b33878\") " pod="openshift-multus/multus-additional-cni-plugins-nqwsg" Feb 23 
17:20:23 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:23.956343246Z" level=info msg="Removed container 71fc3ba23434903f12c0286d43d433518eebf6cfc55b5fcf190554dcffb955c2: openshift-multus/multus-additional-cni-plugins-p9nj2/bond-cni-plugin" id=3047897b-1a09-4640-ac1d-96fb4fa126da name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:20:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:23.956537 2112 scope.go:115] "RemoveContainer" containerID="8bd18f920acedc27a0352dcf5bb627f747f8a8ed9f42d3d2e91c9a2ee722bccd" Feb 23 17:20:23 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:23.957272341Z" level=info msg="Removing container: 8bd18f920acedc27a0352dcf5bb627f747f8a8ed9f42d3d2e91c9a2ee722bccd" id=df2060e0-f413-46c2-93d9-c7bec3ac3ce0 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:20:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-3e9f2e9abc17cdf4e213cf686d417ccdcf29ef1fa36c053e4a25d6e4ca8934ea-merged.mount: Succeeded. Feb 23 17:20:23 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-3e9f2e9abc17cdf4e213cf686d417ccdcf29ef1fa36c053e4a25d6e4ca8934ea-merged.mount: Consumed 0 CPU time Feb 23 17:20:24 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:24.001261204Z" level=info msg="Removed container 8bd18f920acedc27a0352dcf5bb627f747f8a8ed9f42d3d2e91c9a2ee722bccd: openshift-multus/multus-additional-cni-plugins-p9nj2/cni-plugins" id=df2060e0-f413-46c2-93d9-c7bec3ac3ce0 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:20:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:24.001496 2112 scope.go:115] "RemoveContainer" containerID="0872c6395518f918bc296d83c99450bc8bb562af7971e28d53bff64a2d598510" Feb 23 17:20:24 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:24.002273478Z" level=info msg="Removing container: 0872c6395518f918bc296d83c99450bc8bb562af7971e28d53bff64a2d598510" id=718021da-b11a-44d2-82a1-822325d684c7 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:20:24 ip-10-0-136-68 systemd[1]: 
var-lib-containers-storage-overlay-d0ade740c3e278340eb941412e1f81322bb340dfe2e5ccdeaa354f677d07f29b-merged.mount: Succeeded. Feb 23 17:20:24 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-d0ade740c3e278340eb941412e1f81322bb340dfe2e5ccdeaa354f677d07f29b-merged.mount: Consumed 0 CPU time Feb 23 17:20:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:24.038012 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7f25c5a9-b9c7-4220-a892-362cf6b33878-cni-binary-copy\") pod \"multus-additional-cni-plugins-nqwsg\" (UID: \"7f25c5a9-b9c7-4220-a892-362cf6b33878\") " pod="openshift-multus/multus-additional-cni-plugins-nqwsg" Feb 23 17:20:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:24.038063 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7f25c5a9-b9c7-4220-a892-362cf6b33878-tuning-conf-dir\") pod \"multus-additional-cni-plugins-nqwsg\" (UID: \"7f25c5a9-b9c7-4220-a892-362cf6b33878\") " pod="openshift-multus/multus-additional-cni-plugins-nqwsg" Feb 23 17:20:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:24.038094 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7f25c5a9-b9c7-4220-a892-362cf6b33878-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-nqwsg\" (UID: \"7f25c5a9-b9c7-4220-a892-362cf6b33878\") " pod="openshift-multus/multus-additional-cni-plugins-nqwsg" Feb 23 17:20:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:24.038127 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-22vqh\" (UniqueName: \"kubernetes.io/projected/7f25c5a9-b9c7-4220-a892-362cf6b33878-kube-api-access-22vqh\") pod \"multus-additional-cni-plugins-nqwsg\" (UID: \"7f25c5a9-b9c7-4220-a892-362cf6b33878\") " pod="openshift-multus/multus-additional-cni-plugins-nqwsg" Feb 23 
17:20:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:24.038168 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7f25c5a9-b9c7-4220-a892-362cf6b33878-system-cni-dir\") pod \"multus-additional-cni-plugins-nqwsg\" (UID: \"7f25c5a9-b9c7-4220-a892-362cf6b33878\") " pod="openshift-multus/multus-additional-cni-plugins-nqwsg" Feb 23 17:20:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:24.038201 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7f25c5a9-b9c7-4220-a892-362cf6b33878-os-release\") pod \"multus-additional-cni-plugins-nqwsg\" (UID: \"7f25c5a9-b9c7-4220-a892-362cf6b33878\") " pod="openshift-multus/multus-additional-cni-plugins-nqwsg" Feb 23 17:20:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:24.038240 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7f25c5a9-b9c7-4220-a892-362cf6b33878-cnibin\") pod \"multus-additional-cni-plugins-nqwsg\" (UID: \"7f25c5a9-b9c7-4220-a892-362cf6b33878\") " pod="openshift-multus/multus-additional-cni-plugins-nqwsg" Feb 23 17:20:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:24.038311 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7f25c5a9-b9c7-4220-a892-362cf6b33878-cnibin\") pod \"multus-additional-cni-plugins-nqwsg\" (UID: \"7f25c5a9-b9c7-4220-a892-362cf6b33878\") " pod="openshift-multus/multus-additional-cni-plugins-nqwsg" Feb 23 17:20:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:24.038853 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7f25c5a9-b9c7-4220-a892-362cf6b33878-tuning-conf-dir\") pod \"multus-additional-cni-plugins-nqwsg\" (UID: \"7f25c5a9-b9c7-4220-a892-362cf6b33878\") " 
pod="openshift-multus/multus-additional-cni-plugins-nqwsg" Feb 23 17:20:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:24.038917 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7f25c5a9-b9c7-4220-a892-362cf6b33878-system-cni-dir\") pod \"multus-additional-cni-plugins-nqwsg\" (UID: \"7f25c5a9-b9c7-4220-a892-362cf6b33878\") " pod="openshift-multus/multus-additional-cni-plugins-nqwsg" Feb 23 17:20:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:24.038973 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7f25c5a9-b9c7-4220-a892-362cf6b33878-os-release\") pod \"multus-additional-cni-plugins-nqwsg\" (UID: \"7f25c5a9-b9c7-4220-a892-362cf6b33878\") " pod="openshift-multus/multus-additional-cni-plugins-nqwsg" Feb 23 17:20:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:24.039840 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7f25c5a9-b9c7-4220-a892-362cf6b33878-cni-binary-copy\") pod \"multus-additional-cni-plugins-nqwsg\" (UID: \"7f25c5a9-b9c7-4220-a892-362cf6b33878\") " pod="openshift-multus/multus-additional-cni-plugins-nqwsg" Feb 23 17:20:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:24.040142 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7f25c5a9-b9c7-4220-a892-362cf6b33878-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-nqwsg\" (UID: \"7f25c5a9-b9c7-4220-a892-362cf6b33878\") " pod="openshift-multus/multus-additional-cni-plugins-nqwsg" Feb 23 17:20:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:24.054868 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-22vqh\" (UniqueName: \"kubernetes.io/projected/7f25c5a9-b9c7-4220-a892-362cf6b33878-kube-api-access-22vqh\") pod 
\"multus-additional-cni-plugins-nqwsg\" (UID: \"7f25c5a9-b9c7-4220-a892-362cf6b33878\") " pod="openshift-multus/multus-additional-cni-plugins-nqwsg" Feb 23 17:20:24 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:24.065605903Z" level=info msg="Removed container 0872c6395518f918bc296d83c99450bc8bb562af7971e28d53bff64a2d598510: openshift-multus/multus-additional-cni-plugins-p9nj2/egress-router-binary-copy" id=718021da-b11a-44d2-82a1-822325d684c7 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:20:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:24.065858 2112 scope.go:115] "RemoveContainer" containerID="638ba010408e61c882c6347eee020efdaa9faa54cd24013e798cc9389080ecc5" Feb 23 17:20:24 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:20:24.066144 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"638ba010408e61c882c6347eee020efdaa9faa54cd24013e798cc9389080ecc5\": container with ID starting with 638ba010408e61c882c6347eee020efdaa9faa54cd24013e798cc9389080ecc5 not found: ID does not exist" containerID="638ba010408e61c882c6347eee020efdaa9faa54cd24013e798cc9389080ecc5" Feb 23 17:20:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:24.066187 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:638ba010408e61c882c6347eee020efdaa9faa54cd24013e798cc9389080ecc5} err="failed to get container status \"638ba010408e61c882c6347eee020efdaa9faa54cd24013e798cc9389080ecc5\": rpc error: code = NotFound desc = could not find container \"638ba010408e61c882c6347eee020efdaa9faa54cd24013e798cc9389080ecc5\": container with ID starting with 638ba010408e61c882c6347eee020efdaa9faa54cd24013e798cc9389080ecc5 not found: ID does not exist" Feb 23 17:20:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:24.066202 2112 scope.go:115] "RemoveContainer" containerID="70f2876e293edd1469e1953225a9c5599e6f1cf5214954150ed61bf2835faa16" Feb 23 17:20:24 ip-10-0-136-68 
kubenswrapper[2112]: E0223 17:20:24.066482 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70f2876e293edd1469e1953225a9c5599e6f1cf5214954150ed61bf2835faa16\": container with ID starting with 70f2876e293edd1469e1953225a9c5599e6f1cf5214954150ed61bf2835faa16 not found: ID does not exist" containerID="70f2876e293edd1469e1953225a9c5599e6f1cf5214954150ed61bf2835faa16" Feb 23 17:20:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:24.066511 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:70f2876e293edd1469e1953225a9c5599e6f1cf5214954150ed61bf2835faa16} err="failed to get container status \"70f2876e293edd1469e1953225a9c5599e6f1cf5214954150ed61bf2835faa16\": rpc error: code = NotFound desc = could not find container \"70f2876e293edd1469e1953225a9c5599e6f1cf5214954150ed61bf2835faa16\": container with ID starting with 70f2876e293edd1469e1953225a9c5599e6f1cf5214954150ed61bf2835faa16 not found: ID does not exist" Feb 23 17:20:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:24.066524 2112 scope.go:115] "RemoveContainer" containerID="535c93e83c702180e094716412b833a54504f2862ef061dfa60cb17eceda8038" Feb 23 17:20:24 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:20:24.066812 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"535c93e83c702180e094716412b833a54504f2862ef061dfa60cb17eceda8038\": container with ID starting with 535c93e83c702180e094716412b833a54504f2862ef061dfa60cb17eceda8038 not found: ID does not exist" containerID="535c93e83c702180e094716412b833a54504f2862ef061dfa60cb17eceda8038" Feb 23 17:20:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:24.066844 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:535c93e83c702180e094716412b833a54504f2862ef061dfa60cb17eceda8038} err="failed to get container 
status \"535c93e83c702180e094716412b833a54504f2862ef061dfa60cb17eceda8038\": rpc error: code = NotFound desc = could not find container \"535c93e83c702180e094716412b833a54504f2862ef061dfa60cb17eceda8038\": container with ID starting with 535c93e83c702180e094716412b833a54504f2862ef061dfa60cb17eceda8038 not found: ID does not exist" Feb 23 17:20:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:24.066855 2112 scope.go:115] "RemoveContainer" containerID="8602029ef780cd32039d3805efe88e1356b52419b59776806ace591393868122" Feb 23 17:20:24 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:20:24.067035 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8602029ef780cd32039d3805efe88e1356b52419b59776806ace591393868122\": container with ID starting with 8602029ef780cd32039d3805efe88e1356b52419b59776806ace591393868122 not found: ID does not exist" containerID="8602029ef780cd32039d3805efe88e1356b52419b59776806ace591393868122" Feb 23 17:20:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:24.067061 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:8602029ef780cd32039d3805efe88e1356b52419b59776806ace591393868122} err="failed to get container status \"8602029ef780cd32039d3805efe88e1356b52419b59776806ace591393868122\": rpc error: code = NotFound desc = could not find container \"8602029ef780cd32039d3805efe88e1356b52419b59776806ace591393868122\": container with ID starting with 8602029ef780cd32039d3805efe88e1356b52419b59776806ace591393868122 not found: ID does not exist" Feb 23 17:20:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:24.067072 2112 scope.go:115] "RemoveContainer" containerID="71fc3ba23434903f12c0286d43d433518eebf6cfc55b5fcf190554dcffb955c2" Feb 23 17:20:24 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:20:24.067259 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"71fc3ba23434903f12c0286d43d433518eebf6cfc55b5fcf190554dcffb955c2\": container with ID starting with 71fc3ba23434903f12c0286d43d433518eebf6cfc55b5fcf190554dcffb955c2 not found: ID does not exist" containerID="71fc3ba23434903f12c0286d43d433518eebf6cfc55b5fcf190554dcffb955c2" Feb 23 17:20:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:24.067285 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:71fc3ba23434903f12c0286d43d433518eebf6cfc55b5fcf190554dcffb955c2} err="failed to get container status \"71fc3ba23434903f12c0286d43d433518eebf6cfc55b5fcf190554dcffb955c2\": rpc error: code = NotFound desc = could not find container \"71fc3ba23434903f12c0286d43d433518eebf6cfc55b5fcf190554dcffb955c2\": container with ID starting with 71fc3ba23434903f12c0286d43d433518eebf6cfc55b5fcf190554dcffb955c2 not found: ID does not exist" Feb 23 17:20:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:24.067294 2112 scope.go:115] "RemoveContainer" containerID="8bd18f920acedc27a0352dcf5bb627f747f8a8ed9f42d3d2e91c9a2ee722bccd" Feb 23 17:20:24 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:20:24.067474 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bd18f920acedc27a0352dcf5bb627f747f8a8ed9f42d3d2e91c9a2ee722bccd\": container with ID starting with 8bd18f920acedc27a0352dcf5bb627f747f8a8ed9f42d3d2e91c9a2ee722bccd not found: ID does not exist" containerID="8bd18f920acedc27a0352dcf5bb627f747f8a8ed9f42d3d2e91c9a2ee722bccd" Feb 23 17:20:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:24.067496 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:8bd18f920acedc27a0352dcf5bb627f747f8a8ed9f42d3d2e91c9a2ee722bccd} err="failed to get container status \"8bd18f920acedc27a0352dcf5bb627f747f8a8ed9f42d3d2e91c9a2ee722bccd\": rpc error: code = NotFound desc = could not find container 
\"8bd18f920acedc27a0352dcf5bb627f747f8a8ed9f42d3d2e91c9a2ee722bccd\": container with ID starting with 8bd18f920acedc27a0352dcf5bb627f747f8a8ed9f42d3d2e91c9a2ee722bccd not found: ID does not exist" Feb 23 17:20:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:24.067503 2112 scope.go:115] "RemoveContainer" containerID="0872c6395518f918bc296d83c99450bc8bb562af7971e28d53bff64a2d598510" Feb 23 17:20:24 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:20:24.067633 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0872c6395518f918bc296d83c99450bc8bb562af7971e28d53bff64a2d598510\": container with ID starting with 0872c6395518f918bc296d83c99450bc8bb562af7971e28d53bff64a2d598510 not found: ID does not exist" containerID="0872c6395518f918bc296d83c99450bc8bb562af7971e28d53bff64a2d598510" Feb 23 17:20:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:24.067651 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:0872c6395518f918bc296d83c99450bc8bb562af7971e28d53bff64a2d598510} err="failed to get container status \"0872c6395518f918bc296d83c99450bc8bb562af7971e28d53bff64a2d598510\": rpc error: code = NotFound desc = could not find container \"0872c6395518f918bc296d83c99450bc8bb562af7971e28d53bff64a2d598510\": container with ID starting with 0872c6395518f918bc296d83c99450bc8bb562af7971e28d53bff64a2d598510 not found: ID does not exist" Feb 23 17:20:24 ip-10-0-136-68 systemd[1]: crio-264e58a1f4919a17479220f33c03324c89f2a710dbd87a3b3a01852696d2d53a.scope: Succeeded. Feb 23 17:20:24 ip-10-0-136-68 systemd[1]: crio-264e58a1f4919a17479220f33c03324c89f2a710dbd87a3b3a01852696d2d53a.scope: Consumed 627ms CPU time Feb 23 17:20:24 ip-10-0-136-68 systemd[1]: crio-conmon-264e58a1f4919a17479220f33c03324c89f2a710dbd87a3b3a01852696d2d53a.scope: Succeeded. 
Feb 23 17:20:24 ip-10-0-136-68 systemd[1]: crio-conmon-264e58a1f4919a17479220f33c03324c89f2a710dbd87a3b3a01852696d2d53a.scope: Consumed 28ms CPU time Feb 23 17:20:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:24.118699 2112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-nqwsg" Feb 23 17:20:24 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:24.119118475Z" level=info msg="Running pod sandbox: openshift-multus/multus-additional-cni-plugins-nqwsg/POD" id=dfc6c96e-0cf4-4f42-909e-7da2b70adfc8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:20:24 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:24.119175371Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:20:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:24.121135 2112 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=2c47bc3e-0247-4d47-80e3-c168262e7976 path="/var/lib/kubelet/pods/2c47bc3e-0247-4d47-80e3-c168262e7976/volumes" Feb 23 17:20:24 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:24.159790384Z" level=info msg="Stopped container 264e58a1f4919a17479220f33c03324c89f2a710dbd87a3b3a01852696d2d53a: openshift-multus/network-metrics-daemon-5hc5d/kube-rbac-proxy" id=7b68d147-bc5a-494e-a583-df2f21c5f172 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:20:24 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:24.160141498Z" level=info msg="Stopping pod sandbox: 5fbd19a020a56df6e9b45ea33c752205e9859f69f19803347be97dbf88e8a4f4" id=8159a0ae-85c6-4c52-9c0a-34d6dcad3a72 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:20:24 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:24.160335112Z" level=info msg="Got pod network &{Name:network-metrics-daemon-5hc5d Namespace:openshift-multus ID:5fbd19a020a56df6e9b45ea33c752205e9859f69f19803347be97dbf88e8a4f4 UID:9cd26ba5-46e4-40b5-81e6-74079153d58d 
NetNS:/var/run/netns/199a84df-cf71-443f-81b3-b9ff5e18be9d Networks:[{Name:multus-cni-network Ifname:eth0}] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:20:24 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:24.160451121Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-5hc5d from CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:20:24 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:24.165592280Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=dfc6c96e-0cf4-4f42-909e-7da2b70adfc8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:20:24 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:20:24.168814 2112 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f25c5a9_b9c7_4220_a892_362cf6b33878.slice/crio-5fee4f72ddb289d7c724dc134f484551f32fef41b4335e681c821de016db4948.scope WatchSource:0}: Error finding container 5fee4f72ddb289d7c724dc134f484551f32fef41b4335e681c821de016db4948: Status 404 returned error can't find the container with id 5fee4f72ddb289d7c724dc134f484551f32fef41b4335e681c821de016db4948 Feb 23 17:20:24 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:24.171442547Z" level=info msg="Ran pod sandbox 5fee4f72ddb289d7c724dc134f484551f32fef41b4335e681c821de016db4948 with infra container: openshift-multus/multus-additional-cni-plugins-nqwsg/POD" id=dfc6c96e-0cf4-4f42-909e-7da2b70adfc8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:20:24 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:24.172727491Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:34162fe37a9a1757631fc63b80357cf2a889523ada38dbb4afefb424289f3738" id=945ce79d-34c3-47b9-b664-b16414fa94f8 name=/runtime.v1.ImageService/ImageStatus Feb 23 
17:20:24 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:24.172916248Z" level=info msg="Image registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:34162fe37a9a1757631fc63b80357cf2a889523ada38dbb4afefb424289f3738 not found" id=945ce79d-34c3-47b9-b664-b16414fa94f8 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:20:24 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:24.173644533Z" level=info msg="Pulling image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:34162fe37a9a1757631fc63b80357cf2a889523ada38dbb4afefb424289f3738" id=b597415d-4bd3-4add-955b-04119e30f31e name=/runtime.v1.ImageService/PullImage Feb 23 17:20:24 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:24.174993847Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:34162fe37a9a1757631fc63b80357cf2a889523ada38dbb4afefb424289f3738\"" Feb 23 17:20:24 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00369|bridge|INFO|bridge br-int: deleted interface 5fbd19a020a56df on port 8 Feb 23 17:20:24 ip-10-0-136-68 kernel: device 5fbd19a020a56df left promiscuous mode Feb 23 17:20:24 ip-10-0-136-68 crio[2062]: 2023-02-23T17:20:24Z [verbose] Del: openshift-multus:network-metrics-daemon-5hc5d:9cd26ba5-46e4-40b5-81e6-74079153d58d:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","dns":{},"ipam":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxage":5,"logfile-maxbackups":5,"logfile-maxsize":100,"name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay"} Feb 23 17:20:24 ip-10-0-136-68 crio[2062]: I0223 17:20:24.302193 65932 ovs.go:90] Maximum command line arguments set to: 191102 Feb 23 17:20:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:24.753481 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nqwsg" event=&{ID:7f25c5a9-b9c7-4220-a892-362cf6b33878 Type:ContainerStarted Data:5fee4f72ddb289d7c724dc134f484551f32fef41b4335e681c821de016db4948} Feb 23 17:20:24 
ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:24.754810 2112 generic.go:296] "Generic (PLEG): container finished" podID=9cd26ba5-46e4-40b5-81e6-74079153d58d containerID="264e58a1f4919a17479220f33c03324c89f2a710dbd87a3b3a01852696d2d53a" exitCode=0 Feb 23 17:20:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:24.754842 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5hc5d" event=&{ID:9cd26ba5-46e4-40b5-81e6-74079153d58d Type:ContainerDied Data:264e58a1f4919a17479220f33c03324c89f2a710dbd87a3b3a01852696d2d53a} Feb 23 17:20:24 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:24.821748140Z" level=info msg="Stopped pod sandbox: 5fbd19a020a56df6e9b45ea33c752205e9859f69f19803347be97dbf88e8a4f4" id=8159a0ae-85c6-4c52-9c0a-34d6dcad3a72 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:20:24 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-e4ec726041aa39217379f50a62638a49e81169bc42849e6222442439ff740d57-merged.mount: Succeeded. Feb 23 17:20:24 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-e4ec726041aa39217379f50a62638a49e81169bc42849e6222442439ff740d57-merged.mount: Consumed 0 CPU time Feb 23 17:20:24 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-902c55c796e0c616a06b919968f15bdd9971fb1aaab702bc1ad4353e4210b432-merged.mount: Succeeded. Feb 23 17:20:24 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-902c55c796e0c616a06b919968f15bdd9971fb1aaab702bc1ad4353e4210b432-merged.mount: Consumed 0 CPU time Feb 23 17:20:24 ip-10-0-136-68 systemd[1]: run-netns-199a84df\x2dcf71\x2d443f\x2d81b3\x2db9ff5e18be9d.mount: Succeeded. Feb 23 17:20:24 ip-10-0-136-68 systemd[1]: run-netns-199a84df\x2dcf71\x2d443f\x2d81b3\x2db9ff5e18be9d.mount: Consumed 0 CPU time Feb 23 17:20:24 ip-10-0-136-68 systemd[1]: run-ipcns-199a84df\x2dcf71\x2d443f\x2d81b3\x2db9ff5e18be9d.mount: Succeeded. 
Feb 23 17:20:24 ip-10-0-136-68 systemd[1]: run-ipcns-199a84df\x2dcf71\x2d443f\x2d81b3\x2db9ff5e18be9d.mount: Consumed 0 CPU time Feb 23 17:20:24 ip-10-0-136-68 systemd[1]: run-utsns-199a84df\x2dcf71\x2d443f\x2d81b3\x2db9ff5e18be9d.mount: Succeeded. Feb 23 17:20:24 ip-10-0-136-68 systemd[1]: run-utsns-199a84df\x2dcf71\x2d443f\x2d81b3\x2db9ff5e18be9d.mount: Consumed 0 CPU time Feb 23 17:20:24 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-5fbd19a020a56df6e9b45ea33c752205e9859f69f19803347be97dbf88e8a4f4-userdata-shm.mount: Succeeded. Feb 23 17:20:24 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-5fbd19a020a56df6e9b45ea33c752205e9859f69f19803347be97dbf88e8a4f4-userdata-shm.mount: Consumed 0 CPU time Feb 23 17:20:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:24.944712 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jwlz\" (UniqueName: \"kubernetes.io/projected/9cd26ba5-46e4-40b5-81e6-74079153d58d-kube-api-access-2jwlz\") pod \"9cd26ba5-46e4-40b5-81e6-74079153d58d\" (UID: \"9cd26ba5-46e4-40b5-81e6-74079153d58d\") " Feb 23 17:20:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:24.944786 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9cd26ba5-46e4-40b5-81e6-74079153d58d-metrics-certs\") pod \"9cd26ba5-46e4-40b5-81e6-74079153d58d\" (UID: \"9cd26ba5-46e4-40b5-81e6-74079153d58d\") " Feb 23 17:20:24 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-9cd26ba5\x2d46e4\x2d40b5\x2d81e6\x2d74079153d58d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2jwlz.mount: Succeeded. 
Feb 23 17:20:24 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-9cd26ba5\x2d46e4\x2d40b5\x2d81e6\x2d74079153d58d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2jwlz.mount: Consumed 0 CPU time Feb 23 17:20:24 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-9cd26ba5\x2d46e4\x2d40b5\x2d81e6\x2d74079153d58d-volumes-kubernetes.io\x7esecret-metrics\x2dcerts.mount: Succeeded. Feb 23 17:20:24 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-9cd26ba5\x2d46e4\x2d40b5\x2d81e6\x2d74079153d58d-volumes-kubernetes.io\x7esecret-metrics\x2dcerts.mount: Consumed 0 CPU time Feb 23 17:20:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:24.958857 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cd26ba5-46e4-40b5-81e6-74079153d58d-kube-api-access-2jwlz" (OuterVolumeSpecName: "kube-api-access-2jwlz") pod "9cd26ba5-46e4-40b5-81e6-74079153d58d" (UID: "9cd26ba5-46e4-40b5-81e6-74079153d58d"). InnerVolumeSpecName "kube-api-access-2jwlz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:20:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:24.958869 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cd26ba5-46e4-40b5-81e6-74079153d58d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "9cd26ba5-46e4-40b5-81e6-74079153d58d" (UID: "9cd26ba5-46e4-40b5-81e6-74079153d58d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:20:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:25.045293 2112 reconciler.go:399] "Volume detached for volume \"kube-api-access-2jwlz\" (UniqueName: \"kubernetes.io/projected/9cd26ba5-46e4-40b5-81e6-74079153d58d-kube-api-access-2jwlz\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:20:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:25.045329 2112 reconciler.go:399] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9cd26ba5-46e4-40b5-81e6-74079153d58d-metrics-certs\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:20:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:25.649962516Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:34162fe37a9a1757631fc63b80357cf2a889523ada38dbb4afefb424289f3738\"" Feb 23 17:20:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:25.758380 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5hc5d" event=&{ID:9cd26ba5-46e4-40b5-81e6-74079153d58d Type:ContainerDied Data:5fbd19a020a56df6e9b45ea33c752205e9859f69f19803347be97dbf88e8a4f4} Feb 23 17:20:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:25.758421 2112 scope.go:115] "RemoveContainer" containerID="264e58a1f4919a17479220f33c03324c89f2a710dbd87a3b3a01852696d2d53a" Feb 23 17:20:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:25.759281454Z" level=info msg="Removing container: 264e58a1f4919a17479220f33c03324c89f2a710dbd87a3b3a01852696d2d53a" id=3f574d66-1d75-41af-8049-724f6ede4e04 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:20:25 ip-10-0-136-68 systemd[1]: Removed slice libcontainer container kubepods-burstable-pod9cd26ba5_46e4_40b5_81e6_74079153d58d.slice. 
Feb 23 17:20:25 ip-10-0-136-68 systemd[1]: kubepods-burstable-pod9cd26ba5_46e4_40b5_81e6_74079153d58d.slice: Consumed 1.982s CPU time Feb 23 17:20:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:25.780728 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-multus/network-metrics-daemon-5hc5d] Feb 23 17:20:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:25.787295 2112 kubelet.go:2129] "SyncLoop REMOVE" source="api" pods=[openshift-multus/network-metrics-daemon-5hc5d] Feb 23 17:20:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:25.790004476Z" level=info msg="Removed container 264e58a1f4919a17479220f33c03324c89f2a710dbd87a3b3a01852696d2d53a: openshift-multus/network-metrics-daemon-5hc5d/kube-rbac-proxy" id=3f574d66-1d75-41af-8049-724f6ede4e04 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:20:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:25.790284 2112 scope.go:115] "RemoveContainer" containerID="04fe0d7cc7bdec48d56fb54f5485c0f26a6d06783320171b8f23144e242095d4" Feb 23 17:20:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:25.791062914Z" level=info msg="Removing container: 04fe0d7cc7bdec48d56fb54f5485c0f26a6d06783320171b8f23144e242095d4" id=8c9221e8-22e9-4027-885b-652f667ba402 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:20:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:25.801445 2112 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-multus/network-metrics-daemon-bs7jz] Feb 23 17:20:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:25.801494 2112 topology_manager.go:205] "Topology Admit Handler" Feb 23 17:20:25 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:20:25.801572 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9cd26ba5-46e4-40b5-81e6-74079153d58d" containerName="kube-rbac-proxy" Feb 23 17:20:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:25.801583 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cd26ba5-46e4-40b5-81e6-74079153d58d" 
containerName="kube-rbac-proxy" Feb 23 17:20:25 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:20:25.801596 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9cd26ba5-46e4-40b5-81e6-74079153d58d" containerName="network-metrics-daemon" Feb 23 17:20:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:25.801603 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cd26ba5-46e4-40b5-81e6-74079153d58d" containerName="network-metrics-daemon" Feb 23 17:20:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:25.801928 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="9cd26ba5-46e4-40b5-81e6-74079153d58d" containerName="kube-rbac-proxy" Feb 23 17:20:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:25.801953 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="9cd26ba5-46e4-40b5-81e6-74079153d58d" containerName="network-metrics-daemon" Feb 23 17:20:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:25.810273 2112 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-multus/network-metrics-daemon-bs7jz] Feb 23 17:20:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:25.810714844Z" level=info msg="Removed container 04fe0d7cc7bdec48d56fb54f5485c0f26a6d06783320171b8f23144e242095d4: openshift-multus/network-metrics-daemon-5hc5d/network-metrics-daemon" id=8c9221e8-22e9-4027-885b-652f667ba402 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:20:25 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-pod93f0c5c3_9f22_4b93_a925_f621ed5e18e7.slice. 
Feb 23 17:20:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:25.951096 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgsp8\" (UniqueName: \"kubernetes.io/projected/93f0c5c3-9f22-4b93-a925-f621ed5e18e7-kube-api-access-mgsp8\") pod \"network-metrics-daemon-bs7jz\" (UID: \"93f0c5c3-9f22-4b93-a925-f621ed5e18e7\") " pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 17:20:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:25.951161 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/93f0c5c3-9f22-4b93-a925-f621ed5e18e7-metrics-certs\") pod \"network-metrics-daemon-bs7jz\" (UID: \"93f0c5c3-9f22-4b93-a925-f621ed5e18e7\") " pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 17:20:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:26.051465 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-mgsp8\" (UniqueName: \"kubernetes.io/projected/93f0c5c3-9f22-4b93-a925-f621ed5e18e7-kube-api-access-mgsp8\") pod \"network-metrics-daemon-bs7jz\" (UID: \"93f0c5c3-9f22-4b93-a925-f621ed5e18e7\") " pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 17:20:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:26.051528 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/93f0c5c3-9f22-4b93-a925-f621ed5e18e7-metrics-certs\") pod \"network-metrics-daemon-bs7jz\" (UID: \"93f0c5c3-9f22-4b93-a925-f621ed5e18e7\") " pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 17:20:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:26.053886 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/93f0c5c3-9f22-4b93-a925-f621ed5e18e7-metrics-certs\") pod \"network-metrics-daemon-bs7jz\" (UID: \"93f0c5c3-9f22-4b93-a925-f621ed5e18e7\") " 
pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 17:20:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:26.066766 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgsp8\" (UniqueName: \"kubernetes.io/projected/93f0c5c3-9f22-4b93-a925-f621ed5e18e7-kube-api-access-mgsp8\") pod \"network-metrics-daemon-bs7jz\" (UID: \"93f0c5c3-9f22-4b93-a925-f621ed5e18e7\") " pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 17:20:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:26.117792 2112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 17:20:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:26.118211 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-multus/network-metrics-daemon-5hc5d" podUID=9cd26ba5-46e4-40b5-81e6-74079153d58d containerName="kube-rbac-proxy" containerID="cri-o://264e58a1f4919a17479220f33c03324c89f2a710dbd87a3b3a01852696d2d53a" gracePeriod=1 Feb 23 17:20:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:26.118319289Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=e4802fbf-5058-484c-bf32-8cf74a8dc83d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:20:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:26.118389869Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:20:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:26.118699880Z" level=info msg="Stopping container: 264e58a1f4919a17479220f33c03324c89f2a710dbd87a3b3a01852696d2d53a (timeout: 1s)" id=59c966ba-9dae-4d13-967e-b0abbfec9538 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:20:26 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:20:26.118854 2112 remote_runtime.go:505] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"264e58a1f4919a17479220f33c03324c89f2a710dbd87a3b3a01852696d2d53a\": container with ID starting with 264e58a1f4919a17479220f33c03324c89f2a710dbd87a3b3a01852696d2d53a not found: ID does not exist" containerID="264e58a1f4919a17479220f33c03324c89f2a710dbd87a3b3a01852696d2d53a" Feb 23 17:20:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:26.118986706Z" level=info msg="Stopping pod sandbox: 5fbd19a020a56df6e9b45ea33c752205e9859f69f19803347be97dbf88e8a4f4" id=340bfdd2-150b-4bce-aa0b-810149489f0a name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:20:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:26.119023190Z" level=info msg="Stopped pod sandbox (already stopped): 5fbd19a020a56df6e9b45ea33c752205e9859f69f19803347be97dbf88e8a4f4" id=340bfdd2-150b-4bce-aa0b-810149489f0a name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:20:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:26.119719 2112 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=9cd26ba5-46e4-40b5-81e6-74079153d58d path="/var/lib/kubelet/pods/9cd26ba5-46e4-40b5-81e6-74079153d58d/volumes" Feb 23 17:20:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:26.142505666Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:e35d890abd5d4b03390bbae5fd11d7232b569cf59687eadfd086ace940a65af8 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/b7e04edc-986e-48bf-8822-18763de96831 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:20:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:26.142534247Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:20:26 ip-10-0-136-68 systemd-udevd[66018]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. 
Feb 23 17:20:26 ip-10-0-136-68 systemd-udevd[66018]: Could not generate persistent MAC address for e35d890abd5d4b0: No such file or directory Feb 23 17:20:26 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): e35d890abd5d4b0: link is not ready Feb 23 17:20:26 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready Feb 23 17:20:26 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 23 17:20:26 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): e35d890abd5d4b0: link becomes ready Feb 23 17:20:26 ip-10-0-136-68 NetworkManager[1147]: [1677172826.2948] device (e35d890abd5d4b0): carrier: link connected Feb 23 17:20:26 ip-10-0-136-68 NetworkManager[1147]: [1677172826.2952] manager: (e35d890abd5d4b0): new Veth device (/org/freedesktop/NetworkManager/Devices/72) Feb 23 17:20:26 ip-10-0-136-68 kernel: device e35d890abd5d4b0 entered promiscuous mode Feb 23 17:20:26 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00370|bridge|INFO|bridge br-int: added interface e35d890abd5d4b0 on port 31 Feb 23 17:20:26 ip-10-0-136-68 NetworkManager[1147]: [1677172826.3181] manager: (e35d890abd5d4b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73) Feb 23 17:20:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:26.387653 2112 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-multus/network-metrics-daemon-bs7jz] Feb 23 17:20:26 ip-10-0-136-68 crio[2062]: I0223 17:20:26.273080 66008 ovs.go:90] Maximum command line arguments set to: 191102 Feb 23 17:20:26 ip-10-0-136-68 crio[2062]: 2023-02-23T17:20:26Z [verbose] Add: openshift-multus:network-metrics-daemon-bs7jz:93f0c5c3-9f22-4b93-a925-f621ed5e18e7:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"e35d890abd5d4b0","mac":"e2:bf:da:99:84:c9"},{"name":"eth0","mac":"0a:58:0a:81:02:21","sandbox":"/var/run/netns/b7e04edc-986e-48bf-8822-18763de96831"}],"ips":[{"version":"4","interface":1,"address":"10.129.2.33/23","gateway":"10.129.2.1"}],"dns":{}} Feb 23 17:20:26 ip-10-0-136-68 crio[2062]: I0223 17:20:26.369964 66001 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-multus", Name:"network-metrics-daemon-bs7jz", UID:"93f0c5c3-9f22-4b93-a925-f621ed5e18e7", APIVersion:"v1", ResourceVersion:"74602", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.129.2.33/23] from ovn-kubernetes Feb 23 17:20:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:26.388429993Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:e35d890abd5d4b03390bbae5fd11d7232b569cf59687eadfd086ace940a65af8 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/b7e04edc-986e-48bf-8822-18763de96831 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:20:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:26.388551777Z" level=info msg="Checking pod openshift-multus_network-metrics-daemon-bs7jz for CNI network multus-cni-network (type=multus)" Feb 23 17:20:26 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:20:26.390468 2112 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93f0c5c3_9f22_4b93_a925_f621ed5e18e7.slice/crio-e35d890abd5d4b03390bbae5fd11d7232b569cf59687eadfd086ace940a65af8.scope WatchSource:0}: Error finding container e35d890abd5d4b03390bbae5fd11d7232b569cf59687eadfd086ace940a65af8: Status 404 returned error can't find the container with id e35d890abd5d4b03390bbae5fd11d7232b569cf59687eadfd086ace940a65af8 Feb 23 17:20:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:26.392461674Z" level=info msg="Ran pod 
sandbox e35d890abd5d4b03390bbae5fd11d7232b569cf59687eadfd086ace940a65af8 with infra container: openshift-multus/network-metrics-daemon-bs7jz/POD" id=e4802fbf-5058-484c-bf32-8cf74a8dc83d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:20:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:26.393190398Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:7088b3c347e81aa9da9dc416109ada43e35ed4ae50038a02be2c7edae6d194a2" id=ba3342ef-8e01-43ff-9c53-2bb0ea8794d1 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:20:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:26.393353598Z" level=info msg="Image registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:7088b3c347e81aa9da9dc416109ada43e35ed4ae50038a02be2c7edae6d194a2 not found" id=ba3342ef-8e01-43ff-9c53-2bb0ea8794d1 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:20:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:26.393900184Z" level=info msg="Pulling image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:7088b3c347e81aa9da9dc416109ada43e35ed4ae50038a02be2c7edae6d194a2" id=b2c72763-ee67-487f-af27-5b5a84213009 name=/runtime.v1.ImageService/PullImage Feb 23 17:20:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:26.394742215Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:7088b3c347e81aa9da9dc416109ada43e35ed4ae50038a02be2c7edae6d194a2\"" Feb 23 17:20:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:26.761314 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bs7jz" event=&{ID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Type:ContainerStarted Data:e35d890abd5d4b03390bbae5fd11d7232b569cf59687eadfd086ace940a65af8} Feb 23 17:20:27 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:27.226122711Z" level=info msg="Trying to access 
\"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:7088b3c347e81aa9da9dc416109ada43e35ed4ae50038a02be2c7edae6d194a2\"" Feb 23 17:20:32 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:32.608960 2112 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-ovn-kubernetes/ovnkube-upgrades-prepuller-nzb8w] Feb 23 17:20:32 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:32.609006 2112 topology_manager.go:205] "Topology Admit Handler" Feb 23 17:20:32 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-besteffort-podfb8cddd7_8398_4edb_b1cc_362df7469281.slice. Feb 23 17:20:32 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:32.803684 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd5tc\" (UniqueName: \"kubernetes.io/projected/fb8cddd7-8398-4edb-b1cc-362df7469281-kube-api-access-cd5tc\") pod \"ovnkube-upgrades-prepuller-nzb8w\" (UID: \"fb8cddd7-8398-4edb-b1cc-362df7469281\") " pod="openshift-ovn-kubernetes/ovnkube-upgrades-prepuller-nzb8w" Feb 23 17:20:32 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:32.903999 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-cd5tc\" (UniqueName: \"kubernetes.io/projected/fb8cddd7-8398-4edb-b1cc-362df7469281-kube-api-access-cd5tc\") pod \"ovnkube-upgrades-prepuller-nzb8w\" (UID: \"fb8cddd7-8398-4edb-b1cc-362df7469281\") " pod="openshift-ovn-kubernetes/ovnkube-upgrades-prepuller-nzb8w" Feb 23 17:20:32 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:32.930877 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-cd5tc\" (UniqueName: \"kubernetes.io/projected/fb8cddd7-8398-4edb-b1cc-362df7469281-kube-api-access-cd5tc\") pod \"ovnkube-upgrades-prepuller-nzb8w\" (UID: \"fb8cddd7-8398-4edb-b1cc-362df7469281\") " pod="openshift-ovn-kubernetes/ovnkube-upgrades-prepuller-nzb8w" Feb 23 17:20:33 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:33.222383 2112 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-upgrades-prepuller-nzb8w" Feb 23 17:20:33 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:33.222866469Z" level=info msg="Running pod sandbox: openshift-ovn-kubernetes/ovnkube-upgrades-prepuller-nzb8w/POD" id=f8de9cf8-fe8a-4774-a030-6390af8535b1 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:20:33 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:33.222913786Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:20:33 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:33.242136926Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=f8de9cf8-fe8a-4774-a030-6390af8535b1 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:20:33 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:33.246607547Z" level=info msg="Ran pod sandbox 8102b7a8644eb2b5fd95851f54f3da8b6162290188d768ae49fcf169d853631d with infra container: openshift-ovn-kubernetes/ovnkube-upgrades-prepuller-nzb8w/POD" id=f8de9cf8-fe8a-4774-a030-6390af8535b1 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:20:33 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:20:33.246680 2112 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb8cddd7_8398_4edb_b1cc_362df7469281.slice/crio-8102b7a8644eb2b5fd95851f54f3da8b6162290188d768ae49fcf169d853631d.scope WatchSource:0}: Error finding container 8102b7a8644eb2b5fd95851f54f3da8b6162290188d768ae49fcf169d853631d: Status 404 returned error can't find the container with id 8102b7a8644eb2b5fd95851f54f3da8b6162290188d768ae49fcf169d853631d Feb 23 17:20:33 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:33.247385257Z" level=info msg="Checking image status: 
registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:6667c4aac31549982ef3d098b826a0ccfa3c9542350312642d0b4270b018433a" id=04909483-d6bf-4ea2-a588-25b08524c32e name=/runtime.v1.ImageService/ImageStatus Feb 23 17:20:33 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:33.247526984Z" level=info msg="Image registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:6667c4aac31549982ef3d098b826a0ccfa3c9542350312642d0b4270b018433a not found" id=04909483-d6bf-4ea2-a588-25b08524c32e name=/runtime.v1.ImageService/ImageStatus Feb 23 17:20:33 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:33.248044085Z" level=info msg="Pulling image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:6667c4aac31549982ef3d098b826a0ccfa3c9542350312642d0b4270b018433a" id=6f59ba5d-b5e5-461b-94e6-dd67aeb5db5f name=/runtime.v1.ImageService/PullImage Feb 23 17:20:33 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:33.249077493Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:6667c4aac31549982ef3d098b826a0ccfa3c9542350312642d0b4270b018433a\"" Feb 23 17:20:33 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:33.838504 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-upgrades-prepuller-nzb8w" event=&{ID:fb8cddd7-8398-4edb-b1cc-362df7469281 Type:ContainerStarted Data:8102b7a8644eb2b5fd95851f54f3da8b6162290188d768ae49fcf169d853631d} Feb 23 17:20:34 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:34.137034980Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:6667c4aac31549982ef3d098b826a0ccfa3c9542350312642d0b4270b018433a\"" Feb 23 17:20:34 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:34.188751 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-network-diagnostics/network-check-source-6d479699bc-cppvx] Feb 23 17:20:34 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:34.189425 2112 kuberuntime_container.go:702] "Killing 
container with a grace period" pod="openshift-network-diagnostics/network-check-source-6d479699bc-cppvx" podUID=1c1afc56-9a13-4fc0-ac79-ec4ea0ebccb6 containerName="check-endpoints" containerID="cri-o://7bd5af2900c8981205156351511fd4648b43a293f16534625faca106d6ae5524" gracePeriod=30 Feb 23 17:20:34 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:34.189810418Z" level=info msg="Stopping container: 7bd5af2900c8981205156351511fd4648b43a293f16534625faca106d6ae5524 (timeout: 30s)" id=9a93d58d-56ef-4ca4-b748-061e6f33f9df name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:20:34 ip-10-0-136-68 systemd[1]: crio-7bd5af2900c8981205156351511fd4648b43a293f16534625faca106d6ae5524.scope: Succeeded. Feb 23 17:20:34 ip-10-0-136-68 systemd[1]: crio-7bd5af2900c8981205156351511fd4648b43a293f16534625faca106d6ae5524.scope: Consumed 5.826s CPU time Feb 23 17:20:34 ip-10-0-136-68 systemd[1]: crio-conmon-7bd5af2900c8981205156351511fd4648b43a293f16534625faca106d6ae5524.scope: Succeeded. Feb 23 17:20:34 ip-10-0-136-68 systemd[1]: crio-conmon-7bd5af2900c8981205156351511fd4648b43a293f16534625faca106d6ae5524.scope: Consumed 26ms CPU time Feb 23 17:20:34 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-ec33a879049aa43ff5a352d90f3e44755c06e6b75586161c06610f3f2b20c38b-merged.mount: Succeeded. 
Feb 23 17:20:34 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-ec33a879049aa43ff5a352d90f3e44755c06e6b75586161c06610f3f2b20c38b-merged.mount: Consumed 0 CPU time Feb 23 17:20:34 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:34.317041432Z" level=info msg="Stopped container 7bd5af2900c8981205156351511fd4648b43a293f16534625faca106d6ae5524: openshift-network-diagnostics/network-check-source-6d479699bc-cppvx/check-endpoints" id=9a93d58d-56ef-4ca4-b748-061e6f33f9df name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:20:34 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:34.318005240Z" level=info msg="Stopping pod sandbox: a86108175c50cc44162b88676ec8359013bb7b7016a59913656811bcd3c06bf7" id=3fee31c1-88a1-481c-b866-1097ce53c43b name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:20:34 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:34.318246239Z" level=info msg="Got pod network &{Name:network-check-source-6d479699bc-cppvx Namespace:openshift-network-diagnostics ID:a86108175c50cc44162b88676ec8359013bb7b7016a59913656811bcd3c06bf7 UID:1c1afc56-9a13-4fc0-ac79-ec4ea0ebccb6 NetNS:/var/run/netns/97c36b59-09a3-45a1-8282-495805886245 Networks:[{Name:multus-cni-network Ifname:eth0}] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:20:34 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:34.318419125Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-source-6d479699bc-cppvx from CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:20:34 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00371|bridge|INFO|bridge br-int: deleted interface a86108175c50cc4 on port 12 Feb 23 17:20:34 ip-10-0-136-68 kernel: device a86108175c50cc4 left promiscuous mode Feb 23 17:20:34 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:34.843162 2112 generic.go:296] "Generic (PLEG): container finished" podID=1c1afc56-9a13-4fc0-ac79-ec4ea0ebccb6 
containerID="7bd5af2900c8981205156351511fd4648b43a293f16534625faca106d6ae5524" exitCode=0 Feb 23 17:20:34 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:34.843200 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-6d479699bc-cppvx" event=&{ID:1c1afc56-9a13-4fc0-ac79-ec4ea0ebccb6 Type:ContainerDied Data:7bd5af2900c8981205156351511fd4648b43a293f16534625faca106d6ae5524} Feb 23 17:20:35 ip-10-0-136-68 crio[2062]: 2023-02-23T17:20:34Z [verbose] Del: openshift-network-diagnostics:network-check-source-6d479699bc-cppvx:1c1afc56-9a13-4fc0-ac79-ec4ea0ebccb6:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","dns":{},"ipam":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxage":5,"logfile-maxbackups":5,"logfile-maxsize":100,"name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay"} Feb 23 17:20:35 ip-10-0-136-68 crio[2062]: I0223 17:20:34.564478 66139 ovs.go:90] Maximum command line arguments set to: 191102 Feb 23 17:20:35 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-896fcc94f6d726f3255a5eaceaf079767dbd670288072d205f69704fa9e24ef0-merged.mount: Succeeded. Feb 23 17:20:35 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-896fcc94f6d726f3255a5eaceaf079767dbd670288072d205f69704fa9e24ef0-merged.mount: Consumed 0 CPU time Feb 23 17:20:35 ip-10-0-136-68 systemd[1]: run-utsns-97c36b59\x2d09a3\x2d45a1\x2d8282\x2d495805886245.mount: Succeeded. Feb 23 17:20:35 ip-10-0-136-68 systemd[1]: run-utsns-97c36b59\x2d09a3\x2d45a1\x2d8282\x2d495805886245.mount: Consumed 0 CPU time Feb 23 17:20:35 ip-10-0-136-68 systemd[1]: run-ipcns-97c36b59\x2d09a3\x2d45a1\x2d8282\x2d495805886245.mount: Succeeded. Feb 23 17:20:35 ip-10-0-136-68 systemd[1]: run-ipcns-97c36b59\x2d09a3\x2d45a1\x2d8282\x2d495805886245.mount: Consumed 0 CPU time Feb 23 17:20:35 ip-10-0-136-68 systemd[1]: run-netns-97c36b59\x2d09a3\x2d45a1\x2d8282\x2d495805886245.mount: Succeeded. 
Feb 23 17:20:35 ip-10-0-136-68 systemd[1]: run-netns-97c36b59\x2d09a3\x2d45a1\x2d8282\x2d495805886245.mount: Consumed 0 CPU time Feb 23 17:20:35 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-a86108175c50cc44162b88676ec8359013bb7b7016a59913656811bcd3c06bf7-userdata-shm.mount: Succeeded. Feb 23 17:20:35 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-a86108175c50cc44162b88676ec8359013bb7b7016a59913656811bcd3c06bf7-userdata-shm.mount: Consumed 0 CPU time Feb 23 17:20:35 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:35.849727263Z" level=info msg="Stopped pod sandbox: a86108175c50cc44162b88676ec8359013bb7b7016a59913656811bcd3c06bf7" id=3fee31c1-88a1-481c-b866-1097ce53c43b name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:20:36 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:36.030641 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jl9v2\" (UniqueName: \"kubernetes.io/projected/1c1afc56-9a13-4fc0-ac79-ec4ea0ebccb6-kube-api-access-jl9v2\") pod \"1c1afc56-9a13-4fc0-ac79-ec4ea0ebccb6\" (UID: \"1c1afc56-9a13-4fc0-ac79-ec4ea0ebccb6\") " Feb 23 17:20:36 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:36.043926 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c1afc56-9a13-4fc0-ac79-ec4ea0ebccb6-kube-api-access-jl9v2" (OuterVolumeSpecName: "kube-api-access-jl9v2") pod "1c1afc56-9a13-4fc0-ac79-ec4ea0ebccb6" (UID: "1c1afc56-9a13-4fc0-ac79-ec4ea0ebccb6"). InnerVolumeSpecName "kube-api-access-jl9v2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:20:36 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00372|connmgr|INFO|br-ex<->unix#1345: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:20:36 ip-10-0-136-68 systemd[1]: Removed slice libcontainer container kubepods-burstable-pod1c1afc56_9a13_4fc0_ac79_ec4ea0ebccb6.slice. 
Feb 23 17:20:36 ip-10-0-136-68 systemd[1]: kubepods-burstable-pod1c1afc56_9a13_4fc0_ac79_ec4ea0ebccb6.slice: Consumed 5.852s CPU time Feb 23 17:20:36 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:36.131473 2112 reconciler.go:399] "Volume detached for volume \"kube-api-access-jl9v2\" (UniqueName: \"kubernetes.io/projected/1c1afc56-9a13-4fc0-ac79-ec4ea0ebccb6-kube-api-access-jl9v2\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:20:36 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-1c1afc56\x2d9a13\x2d4fc0\x2dac79\x2dec4ea0ebccb6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djl9v2.mount: Succeeded. Feb 23 17:20:36 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-1c1afc56\x2d9a13\x2d4fc0\x2dac79\x2dec4ea0ebccb6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djl9v2.mount: Consumed 0 CPU time Feb 23 17:20:36 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:36.848418 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-6d479699bc-cppvx" event=&{ID:1c1afc56-9a13-4fc0-ac79-ec4ea0ebccb6 Type:ContainerDied Data:a86108175c50cc44162b88676ec8359013bb7b7016a59913656811bcd3c06bf7} Feb 23 17:20:36 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:36.848463 2112 scope.go:115] "RemoveContainer" containerID="7bd5af2900c8981205156351511fd4648b43a293f16534625faca106d6ae5524" Feb 23 17:20:36 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:36.849404525Z" level=info msg="Removing container: 7bd5af2900c8981205156351511fd4648b43a293f16534625faca106d6ae5524" id=50ea1329-425e-4fa6-97ee-57e46c26e5ba name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:20:36 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:36.871038 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-network-diagnostics/network-check-source-6d479699bc-cppvx] Feb 23 17:20:36 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:36.875439 2112 kubelet.go:2129] "SyncLoop REMOVE" source="api" 
pods=[openshift-network-diagnostics/network-check-source-6d479699bc-cppvx] Feb 23 17:20:36 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:36.881462931Z" level=info msg="Removed container 7bd5af2900c8981205156351511fd4648b43a293f16534625faca106d6ae5524: openshift-network-diagnostics/network-check-source-6d479699bc-cppvx/check-endpoints" id=50ea1329-425e-4fa6-97ee-57e46c26e5ba name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:20:36 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:36.883240097Z" level=info msg="Pulled image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:34162fe37a9a1757631fc63b80357cf2a889523ada38dbb4afefb424289f3738" id=b597415d-4bd3-4add-955b-04119e30f31e name=/runtime.v1.ImageService/PullImage Feb 23 17:20:36 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:36.883802603Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:34162fe37a9a1757631fc63b80357cf2a889523ada38dbb4afefb424289f3738" id=cebc9fbd-1e12-4a25-bc4c-1a9f73aea9ff name=/runtime.v1.ImageService/ImageStatus Feb 23 17:20:36 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:36.884826619Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:62193d6a7bd5f13f6274858bc3a171ed936272ebc5eb1116b65ceeae936c136b,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:34162fe37a9a1757631fc63b80357cf2a889523ada38dbb4afefb424289f3738],Size_:470168569,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=cebc9fbd-1e12-4a25-bc4c-1a9f73aea9ff name=/runtime.v1.ImageService/ImageStatus Feb 23 17:20:36 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:36.885378179Z" level=info msg="Creating container: openshift-multus/multus-additional-cni-plugins-nqwsg/egress-router-binary-copy" id=c2792413-42ac-40aa-a6c3-cb835317bad5 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:20:36 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:36.885456432Z" 
level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:20:36 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:36.907107 2112 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-network-diagnostics/network-check-source-5ff44f4c57-4nhbr] Feb 23 17:20:36 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:36.907153 2112 topology_manager.go:205] "Topology Admit Handler" Feb 23 17:20:36 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:20:36.907223 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1c1afc56-9a13-4fc0-ac79-ec4ea0ebccb6" containerName="check-endpoints" Feb 23 17:20:36 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:36.907234 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c1afc56-9a13-4fc0-ac79-ec4ea0ebccb6" containerName="check-endpoints" Feb 23 17:20:36 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:36.907296 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="1c1afc56-9a13-4fc0-ac79-ec4ea0ebccb6" containerName="check-endpoints" Feb 23 17:20:36 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-pod7952f7cd_30fa_4974_9514_90e64fd0405a.slice. Feb 23 17:20:36 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:36.966027 2112 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-network-diagnostics/network-check-source-5ff44f4c57-4nhbr] Feb 23 17:20:36 ip-10-0-136-68 systemd[1]: Started crio-conmon-d1b297dd1f1a6782e55a21b4092f43cbde8ed4cd1e6ad6f08e33d1233df2dfe7.scope. Feb 23 17:20:37 ip-10-0-136-68 systemd[1]: Started libcontainer container d1b297dd1f1a6782e55a21b4092f43cbde8ed4cd1e6ad6f08e33d1233df2dfe7. 
Feb 23 17:20:37 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:37.036974 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-295pj\" (UniqueName: \"kubernetes.io/projected/7952f7cd-30fa-4974-9514-90e64fd0405a-kube-api-access-295pj\") pod \"network-check-source-5ff44f4c57-4nhbr\" (UID: \"7952f7cd-30fa-4974-9514-90e64fd0405a\") " pod="openshift-network-diagnostics/network-check-source-5ff44f4c57-4nhbr" Feb 23 17:20:37 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:37.084369597Z" level=info msg="Created container d1b297dd1f1a6782e55a21b4092f43cbde8ed4cd1e6ad6f08e33d1233df2dfe7: openshift-multus/multus-additional-cni-plugins-nqwsg/egress-router-binary-copy" id=c2792413-42ac-40aa-a6c3-cb835317bad5 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:20:37 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:37.084840464Z" level=info msg="Starting container: d1b297dd1f1a6782e55a21b4092f43cbde8ed4cd1e6ad6f08e33d1233df2dfe7" id=63545a4d-c282-4d30-9808-559e9050e7d2 name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:20:37 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:37.091462563Z" level=info msg="Started container" PID=66250 containerID=d1b297dd1f1a6782e55a21b4092f43cbde8ed4cd1e6ad6f08e33d1233df2dfe7 description=openshift-multus/multus-additional-cni-plugins-nqwsg/egress-router-binary-copy id=63545a4d-c282-4d30-9808-559e9050e7d2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5fee4f72ddb289d7c724dc134f484551f32fef41b4335e681c821de016db4948 Feb 23 17:20:37 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:37.096523868Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/upgrade_72686df3-549e-41b3-a1f2-9e595782ead8\"" Feb 23 17:20:37 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:37.112492729Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 17:20:37 ip-10-0-136-68 crio[2062]: time="2023-02-23 
17:20:37.112516751Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 17:20:37 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:37.115892683Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/egress-router\"" Feb 23 17:20:37 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:37.124582534Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 17:20:37 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:37.124603854Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 17:20:37 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:37.124618896Z" level=info msg="CNI monitoring event REMOVE \"/var/lib/cni/bin/upgrade_72686df3-549e-41b3-a1f2-9e595782ead8\"" Feb 23 17:20:37 ip-10-0-136-68 systemd[1]: crio-d1b297dd1f1a6782e55a21b4092f43cbde8ed4cd1e6ad6f08e33d1233df2dfe7.scope: Succeeded. Feb 23 17:20:37 ip-10-0-136-68 systemd[1]: crio-d1b297dd1f1a6782e55a21b4092f43cbde8ed4cd1e6ad6f08e33d1233df2dfe7.scope: Consumed 48ms CPU time Feb 23 17:20:37 ip-10-0-136-68 systemd[1]: crio-conmon-d1b297dd1f1a6782e55a21b4092f43cbde8ed4cd1e6ad6f08e33d1233df2dfe7.scope: Succeeded. 
Feb 23 17:20:37 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:37.137522 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-295pj\" (UniqueName: \"kubernetes.io/projected/7952f7cd-30fa-4974-9514-90e64fd0405a-kube-api-access-295pj\") pod \"network-check-source-5ff44f4c57-4nhbr\" (UID: \"7952f7cd-30fa-4974-9514-90e64fd0405a\") " pod="openshift-network-diagnostics/network-check-source-5ff44f4c57-4nhbr" Feb 23 17:20:37 ip-10-0-136-68 systemd[1]: crio-conmon-d1b297dd1f1a6782e55a21b4092f43cbde8ed4cd1e6ad6f08e33d1233df2dfe7.scope: Consumed 25ms CPU time Feb 23 17:20:37 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:37.152009 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-295pj\" (UniqueName: \"kubernetes.io/projected/7952f7cd-30fa-4974-9514-90e64fd0405a-kube-api-access-295pj\") pod \"network-check-source-5ff44f4c57-4nhbr\" (UID: \"7952f7cd-30fa-4974-9514-90e64fd0405a\") " pod="openshift-network-diagnostics/network-check-source-5ff44f4c57-4nhbr" Feb 23 17:20:37 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:37.230396 2112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5ff44f4c57-4nhbr" Feb 23 17:20:37 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:37.230875587Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-source-5ff44f4c57-4nhbr/POD" id=84d18d88-f223-4b1a-85da-f5052da2b5ee name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:20:37 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:37.230935047Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:20:37 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:37.257798559Z" level=info msg="Got pod network &{Name:network-check-source-5ff44f4c57-4nhbr Namespace:openshift-network-diagnostics ID:d3697aea95661cdf7a48775ec994ae0f82fbec2eb65e7995c5a795c4978d6b33 UID:7952f7cd-30fa-4974-9514-90e64fd0405a NetNS:/var/run/netns/b3efef6f-5f50-4c26-a722-fd874ef1762a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:20:37 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:37.257821731Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-source-5ff44f4c57-4nhbr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:20:37 ip-10-0-136-68 systemd-udevd[66331]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. 
Feb 23 17:20:37 ip-10-0-136-68 systemd-udevd[66331]: Could not generate persistent MAC address for d3697aea95661cd: No such file or directory Feb 23 17:20:37 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): d3697aea95661cd: link is not ready Feb 23 17:20:37 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready Feb 23 17:20:37 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 23 17:20:37 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): d3697aea95661cd: link becomes ready Feb 23 17:20:37 ip-10-0-136-68 NetworkManager[1147]: [1677172837.4125] device (d3697aea95661cd): carrier: link connected Feb 23 17:20:37 ip-10-0-136-68 NetworkManager[1147]: [1677172837.4129] manager: (d3697aea95661cd): new Veth device (/org/freedesktop/NetworkManager/Devices/74) Feb 23 17:20:37 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00373|bridge|INFO|bridge br-int: added interface d3697aea95661cd on port 32 Feb 23 17:20:37 ip-10-0-136-68 NetworkManager[1147]: [1677172837.4375] manager: (d3697aea95661cd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/75) Feb 23 17:20:37 ip-10-0-136-68 kernel: device d3697aea95661cd entered promiscuous mode Feb 23 17:20:37 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:37.512187 2112 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-network-diagnostics/network-check-source-5ff44f4c57-4nhbr] Feb 23 17:20:37 ip-10-0-136-68 crio[2062]: I0223 17:20:37.387900 66321 ovs.go:90] Maximum command line arguments set to: 191102 Feb 23 17:20:37 ip-10-0-136-68 crio[2062]: 2023-02-23T17:20:37Z [verbose] Add: openshift-network-diagnostics:network-check-source-5ff44f4c57-4nhbr:7952f7cd-30fa-4974-9514-90e64fd0405a:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"d3697aea95661cd","mac":"72:f4:ec:8a:6a:e7"},{"name":"eth0","mac":"0a:58:0a:81:02:22","sandbox":"/var/run/netns/b3efef6f-5f50-4c26-a722-fd874ef1762a"}],"ips":[{"version":"4","interface":1,"address":"10.129.2.34/23","gateway":"10.129.2.1"}],"dns":{}} Feb 23 17:20:37 ip-10-0-136-68 crio[2062]: I0223 17:20:37.483505 66314 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-network-diagnostics", Name:"network-check-source-5ff44f4c57-4nhbr", UID:"7952f7cd-30fa-4974-9514-90e64fd0405a", APIVersion:"v1", ResourceVersion:"74814", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.129.2.34/23] from ovn-kubernetes Feb 23 17:20:37 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:37.514009019Z" level=info msg="Got pod network &{Name:network-check-source-5ff44f4c57-4nhbr Namespace:openshift-network-diagnostics ID:d3697aea95661cdf7a48775ec994ae0f82fbec2eb65e7995c5a795c4978d6b33 UID:7952f7cd-30fa-4974-9514-90e64fd0405a NetNS:/var/run/netns/b3efef6f-5f50-4c26-a722-fd874ef1762a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:20:37 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:37.514151860Z" level=info msg="Checking pod openshift-network-diagnostics_network-check-source-5ff44f4c57-4nhbr for CNI network multus-cni-network (type=multus)" Feb 23 17:20:37 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:20:37.516027 2112 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7952f7cd_30fa_4974_9514_90e64fd0405a.slice/crio-d3697aea95661cdf7a48775ec994ae0f82fbec2eb65e7995c5a795c4978d6b33.scope WatchSource:0}: Error finding container d3697aea95661cdf7a48775ec994ae0f82fbec2eb65e7995c5a795c4978d6b33: Status 404 returned error can't find the container with id d3697aea95661cdf7a48775ec994ae0f82fbec2eb65e7995c5a795c4978d6b33 Feb 23 17:20:37 ip-10-0-136-68 
crio[2062]: time="2023-02-23 17:20:37.517867746Z" level=info msg="Ran pod sandbox d3697aea95661cdf7a48775ec994ae0f82fbec2eb65e7995c5a795c4978d6b33 with infra container: openshift-network-diagnostics/network-check-source-5ff44f4c57-4nhbr/POD" id=84d18d88-f223-4b1a-85da-f5052da2b5ee name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:20:37 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:37.518575597Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:994b8d0cadd7c1d9675d4941a7e46e070aaeaa00b937a25905957c869e7bd22d" id=a798026a-a0af-4197-894b-6c8b1355efb8 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:20:37 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:37.518775532Z" level=info msg="Image registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:994b8d0cadd7c1d9675d4941a7e46e070aaeaa00b937a25905957c869e7bd22d not found" id=a798026a-a0af-4197-894b-6c8b1355efb8 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:20:37 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:37.519264262Z" level=info msg="Pulling image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:994b8d0cadd7c1d9675d4941a7e46e070aaeaa00b937a25905957c869e7bd22d" id=4c112dc2-41d7-4a55-b6bc-c0c8f14aa989 name=/runtime.v1.ImageService/PullImage Feb 23 17:20:37 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:37.520062810Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:994b8d0cadd7c1d9675d4941a7e46e070aaeaa00b937a25905957c869e7bd22d\"" Feb 23 17:20:37 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:37.850628 2112 generic.go:296] "Generic (PLEG): container finished" podID=7f25c5a9-b9c7-4220-a892-362cf6b33878 containerID="d1b297dd1f1a6782e55a21b4092f43cbde8ed4cd1e6ad6f08e33d1233df2dfe7" exitCode=0 Feb 23 17:20:37 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:37.850708 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nqwsg" 
event=&{ID:7f25c5a9-b9c7-4220-a892-362cf6b33878 Type:ContainerDied Data:d1b297dd1f1a6782e55a21b4092f43cbde8ed4cd1e6ad6f08e33d1233df2dfe7} Feb 23 17:20:37 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:37.854757320Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:4e4bafb1e5f5b396d308ee8a7e49c4fd8b13597134c5887db0dd9d9c2d4e0e40" id=039c8c07-843f-453c-82d0-73ccc9b46532 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:20:37 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:37.854965578Z" level=info msg="Image registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:4e4bafb1e5f5b396d308ee8a7e49c4fd8b13597134c5887db0dd9d9c2d4e0e40 not found" id=039c8c07-843f-453c-82d0-73ccc9b46532 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:20:37 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:37.858288099Z" level=info msg="Pulling image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:4e4bafb1e5f5b396d308ee8a7e49c4fd8b13597134c5887db0dd9d9c2d4e0e40" id=237ee7e1-fb35-4ef5-9ac9-04df70c27694 name=/runtime.v1.ImageService/PullImage Feb 23 17:20:37 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:37.859128536Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:4e4bafb1e5f5b396d308ee8a7e49c4fd8b13597134c5887db0dd9d9c2d4e0e40\"" Feb 23 17:20:37 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:37.859213 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5ff44f4c57-4nhbr" event=&{ID:7952f7cd-30fa-4974-9514-90e64fd0405a Type:ContainerStarted Data:d3697aea95661cdf7a48775ec994ae0f82fbec2eb65e7995c5a795c4978d6b33} Feb 23 17:20:38 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:38.119485 2112 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=1c1afc56-9a13-4fc0-ac79-ec4ea0ebccb6 path="/var/lib/kubelet/pods/1c1afc56-9a13-4fc0-ac79-ec4ea0ebccb6/volumes" Feb 23 17:20:40 ip-10-0-136-68 
crio[2062]: time="2023-02-23 17:20:40.016446574Z" level=info msg="Pulled image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:7088b3c347e81aa9da9dc416109ada43e35ed4ae50038a02be2c7edae6d194a2" id=b2c72763-ee67-487f-af27-5b5a84213009 name=/runtime.v1.ImageService/PullImage Feb 23 17:20:40 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:40.017143323Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:7088b3c347e81aa9da9dc416109ada43e35ed4ae50038a02be2c7edae6d194a2" id=3b0313fe-60e4-417f-b5bc-db55d4075169 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:20:40 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:40.018329177Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8dcd8e6511b51d17414763f5c791f8df2916740cb6be27a562eaae54bcc7d4bc,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:7088b3c347e81aa9da9dc416109ada43e35ed4ae50038a02be2c7edae6d194a2],Size_:387021117,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=3b0313fe-60e4-417f-b5bc-db55d4075169 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:20:40 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:40.018938210Z" level=info msg="Creating container: openshift-multus/network-metrics-daemon-bs7jz/network-metrics-daemon" id=05c9f00c-e5c9-4f7f-a3d1-2398ed8492b0 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:20:40 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:40.019011726Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:20:40 ip-10-0-136-68 systemd[1]: Started crio-conmon-e0c9241ec89fc1a11b24fa1ff40655527985b880f5cd8342fbc45b95c532516d.scope. Feb 23 17:20:40 ip-10-0-136-68 systemd[1]: Started libcontainer container e0c9241ec89fc1a11b24fa1ff40655527985b880f5cd8342fbc45b95c532516d. 
Feb 23 17:20:40 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:40.267730887Z" level=info msg="Created container e0c9241ec89fc1a11b24fa1ff40655527985b880f5cd8342fbc45b95c532516d: openshift-multus/network-metrics-daemon-bs7jz/network-metrics-daemon" id=05c9f00c-e5c9-4f7f-a3d1-2398ed8492b0 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:20:40 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:40.268393329Z" level=info msg="Starting container: e0c9241ec89fc1a11b24fa1ff40655527985b880f5cd8342fbc45b95c532516d" id=a8dd1064-b9c2-495b-8036-09345004a645 name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:20:40 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:40.275388518Z" level=info msg="Started container" PID=66409 containerID=e0c9241ec89fc1a11b24fa1ff40655527985b880f5cd8342fbc45b95c532516d description=openshift-multus/network-metrics-daemon-bs7jz/network-metrics-daemon id=a8dd1064-b9c2-495b-8036-09345004a645 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e35d890abd5d4b03390bbae5fd11d7232b569cf59687eadfd086ace940a65af8 Feb 23 17:20:40 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:40.286936378Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0" id=6b2bf3f5-3646-4943-bc4f-34ed6ee2e23f name=/runtime.v1.ImageService/ImageStatus Feb 23 17:20:40 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:40.287141437Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e8505ce6d56c9d2fa3ddbbc4dd2c4096db686f042ecf82eed342af6e60223854,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0],Size_:402716043,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=6b2bf3f5-3646-4943-bc4f-34ed6ee2e23f name=/runtime.v1.ImageService/ImageStatus Feb 23 17:20:40 ip-10-0-136-68 crio[2062]: 
time="2023-02-23 17:20:40.288920805Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0" id=89cbf19a-730e-459a-94b7-a78de26b1153 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:20:40 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:40.289079985Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e8505ce6d56c9d2fa3ddbbc4dd2c4096db686f042ecf82eed342af6e60223854,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0],Size_:402716043,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=89cbf19a-730e-459a-94b7-a78de26b1153 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:20:40 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:40.289741998Z" level=info msg="Creating container: openshift-multus/network-metrics-daemon-bs7jz/kube-rbac-proxy" id=07ade042-ea1f-497b-a514-f4922310e4c2 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:20:40 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:40.289837247Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:20:40 ip-10-0-136-68 systemd[1]: Started crio-conmon-fcb8b471274c0bcd2b4be8858b09ed1cba7b7ca1af8d09617c72271aba2604fb.scope. Feb 23 17:20:40 ip-10-0-136-68 systemd[1]: Started libcontainer container fcb8b471274c0bcd2b4be8858b09ed1cba7b7ca1af8d09617c72271aba2604fb. 
Feb 23 17:20:40 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:40.464596496Z" level=info msg="Created container fcb8b471274c0bcd2b4be8858b09ed1cba7b7ca1af8d09617c72271aba2604fb: openshift-multus/network-metrics-daemon-bs7jz/kube-rbac-proxy" id=07ade042-ea1f-497b-a514-f4922310e4c2 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:20:40 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:40.465029154Z" level=info msg="Starting container: fcb8b471274c0bcd2b4be8858b09ed1cba7b7ca1af8d09617c72271aba2604fb" id=46d0a010-07c8-4bcf-994c-da918fc1f03a name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:20:40 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:40.472062652Z" level=info msg="Started container" PID=66458 containerID=fcb8b471274c0bcd2b4be8858b09ed1cba7b7ca1af8d09617c72271aba2604fb description=openshift-multus/network-metrics-daemon-bs7jz/kube-rbac-proxy id=46d0a010-07c8-4bcf-994c-da918fc1f03a name=/runtime.v1.RuntimeService/StartContainer sandboxID=e35d890abd5d4b03390bbae5fd11d7232b569cf59687eadfd086ace940a65af8 Feb 23 17:20:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:40.868068 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bs7jz" event=&{ID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Type:ContainerStarted Data:fcb8b471274c0bcd2b4be8858b09ed1cba7b7ca1af8d09617c72271aba2604fb} Feb 23 17:20:40 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:40.868104 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bs7jz" event=&{ID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Type:ContainerStarted Data:e0c9241ec89fc1a11b24fa1ff40655527985b880f5cd8342fbc45b95c532516d} Feb 23 17:20:41 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:41.076266125Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:994b8d0cadd7c1d9675d4941a7e46e070aaeaa00b937a25905957c869e7bd22d\"" Feb 23 17:20:42 ip-10-0-136-68 crio[2062]: time="2023-02-23 
17:20:42.215521836Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:4e4bafb1e5f5b396d308ee8a7e49c4fd8b13597134c5887db0dd9d9c2d4e0e40\"" Feb 23 17:20:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:46.292130831Z" level=info msg="Stopping pod sandbox: 9b5e394fefb22e13e7f5e1168da0a4549b32227605538c6a083eba02c051f962" id=3c8bbb3b-654d-4d4e-8aff-ba255bec7be5 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:20:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:46.292174367Z" level=info msg="Stopped pod sandbox (already stopped): 9b5e394fefb22e13e7f5e1168da0a4549b32227605538c6a083eba02c051f962" id=3c8bbb3b-654d-4d4e-8aff-ba255bec7be5 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:20:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:46.297144209Z" level=info msg="Removing pod sandbox: 9b5e394fefb22e13e7f5e1168da0a4549b32227605538c6a083eba02c051f962" id=53d793e3-8b38-4c66-bc43-300c9e863748 name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 17:20:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:48.017302509Z" level=info msg="Removed pod sandbox: 9b5e394fefb22e13e7f5e1168da0a4549b32227605538c6a083eba02c051f962" id=53d793e3-8b38-4c66-bc43-300c9e863748 name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 17:20:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:48.017932566Z" level=info msg="Stopping pod sandbox: 5fbd19a020a56df6e9b45ea33c752205e9859f69f19803347be97dbf88e8a4f4" id=b7483c0e-15a5-442e-bc6d-43496339a351 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:20:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:48.017961303Z" level=info msg="Stopped pod sandbox (already stopped): 5fbd19a020a56df6e9b45ea33c752205e9859f69f19803347be97dbf88e8a4f4" id=b7483c0e-15a5-442e-bc6d-43496339a351 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:20:48 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:48.018167795Z" level=info msg="Removing pod sandbox: 
5fbd19a020a56df6e9b45ea33c752205e9859f69f19803347be97dbf88e8a4f4" id=55039dea-309e-4d0f-99ab-4d84aa04623f name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 17:20:51 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00374|connmgr|INFO|br-ex<->unix#1350: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:20:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:52.274791203Z" level=info msg="Removed pod sandbox: 5fbd19a020a56df6e9b45ea33c752205e9859f69f19803347be97dbf88e8a4f4" id=55039dea-309e-4d0f-99ab-4d84aa04623f name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 17:20:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:52.275377352Z" level=info msg="Stopping pod sandbox: a86108175c50cc44162b88676ec8359013bb7b7016a59913656811bcd3c06bf7" id=dae9124d-f21d-4411-a50c-5bbc93d524cd name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:20:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:52.275405665Z" level=info msg="Stopped pod sandbox (already stopped): a86108175c50cc44162b88676ec8359013bb7b7016a59913656811bcd3c06bf7" id=dae9124d-f21d-4411-a50c-5bbc93d524cd name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:20:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:52.275752094Z" level=info msg="Removing pod sandbox: a86108175c50cc44162b88676ec8359013bb7b7016a59913656811bcd3c06bf7" id=c3c3befc-bbf3-4479-b1b3-12b57c9863b3 name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 17:20:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:52.333086577Z" level=info msg="Removed pod sandbox: a86108175c50cc44162b88676ec8359013bb7b7016a59913656811bcd3c06bf7" id=c3c3befc-bbf3-4479-b1b3-12b57c9863b3 name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 17:20:52 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:20:52.338613 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a66fdc121b13f9f26ba68c3dbe58c3adde928f4b28ddbfa2c240ec341278edf0\": container with ID starting with 
a66fdc121b13f9f26ba68c3dbe58c3adde928f4b28ddbfa2c240ec341278edf0 not found: ID does not exist" containerID="a66fdc121b13f9f26ba68c3dbe58c3adde928f4b28ddbfa2c240ec341278edf0" Feb 23 17:20:52 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:52.338682 2112 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="a66fdc121b13f9f26ba68c3dbe58c3adde928f4b28ddbfa2c240ec341278edf0" err="rpc error: code = NotFound desc = could not find container \"a66fdc121b13f9f26ba68c3dbe58c3adde928f4b28ddbfa2c240ec341278edf0\": container with ID starting with a66fdc121b13f9f26ba68c3dbe58c3adde928f4b28ddbfa2c240ec341278edf0 not found: ID does not exist" Feb 23 17:20:52 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:20:52.338975 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95fe9d9951a757ba360cddabd5537cd812b718bf1041b1e98378ffc26547f2b2\": container with ID starting with 95fe9d9951a757ba360cddabd5537cd812b718bf1041b1e98378ffc26547f2b2 not found: ID does not exist" containerID="95fe9d9951a757ba360cddabd5537cd812b718bf1041b1e98378ffc26547f2b2" Feb 23 17:20:52 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:52.339005 2112 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="95fe9d9951a757ba360cddabd5537cd812b718bf1041b1e98378ffc26547f2b2" err="rpc error: code = NotFound desc = could not find container \"95fe9d9951a757ba360cddabd5537cd812b718bf1041b1e98378ffc26547f2b2\": container with ID starting with 95fe9d9951a757ba360cddabd5537cd812b718bf1041b1e98378ffc26547f2b2 not found: ID does not exist" Feb 23 17:20:52 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:20:52.339270 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"901a63c3ed3ab8ea539d1972e9b14545869b9d69cc62bd62d8fa69b3762f3583\": container with ID starting with 
901a63c3ed3ab8ea539d1972e9b14545869b9d69cc62bd62d8fa69b3762f3583 not found: ID does not exist" containerID="901a63c3ed3ab8ea539d1972e9b14545869b9d69cc62bd62d8fa69b3762f3583" Feb 23 17:20:52 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:52.339295 2112 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="901a63c3ed3ab8ea539d1972e9b14545869b9d69cc62bd62d8fa69b3762f3583" err="rpc error: code = NotFound desc = could not find container \"901a63c3ed3ab8ea539d1972e9b14545869b9d69cc62bd62d8fa69b3762f3583\": container with ID starting with 901a63c3ed3ab8ea539d1972e9b14545869b9d69cc62bd62d8fa69b3762f3583 not found: ID does not exist" Feb 23 17:20:52 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:20:52.339475 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c9afadfaefb33bc9960d887430757ab296b84b4f8f2e38a7d5a562d440648d2\": container with ID starting with 2c9afadfaefb33bc9960d887430757ab296b84b4f8f2e38a7d5a562d440648d2 not found: ID does not exist" containerID="2c9afadfaefb33bc9960d887430757ab296b84b4f8f2e38a7d5a562d440648d2" Feb 23 17:20:52 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:52.339513 2112 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="2c9afadfaefb33bc9960d887430757ab296b84b4f8f2e38a7d5a562d440648d2" err="rpc error: code = NotFound desc = could not find container \"2c9afadfaefb33bc9960d887430757ab296b84b4f8f2e38a7d5a562d440648d2\": container with ID starting with 2c9afadfaefb33bc9960d887430757ab296b84b4f8f2e38a7d5a562d440648d2 not found: ID does not exist" Feb 23 17:20:52 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:20:52.339945 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8fa7c0a41b41ced332eb765dfad5ffb7f81c2e03e6cca2e914d6a89fb30a9ac\": container with ID starting with 
b8fa7c0a41b41ced332eb765dfad5ffb7f81c2e03e6cca2e914d6a89fb30a9ac not found: ID does not exist" containerID="b8fa7c0a41b41ced332eb765dfad5ffb7f81c2e03e6cca2e914d6a89fb30a9ac" Feb 23 17:20:52 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:52.339976 2112 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="b8fa7c0a41b41ced332eb765dfad5ffb7f81c2e03e6cca2e914d6a89fb30a9ac" err="rpc error: code = NotFound desc = could not find container \"b8fa7c0a41b41ced332eb765dfad5ffb7f81c2e03e6cca2e914d6a89fb30a9ac\": container with ID starting with b8fa7c0a41b41ced332eb765dfad5ffb7f81c2e03e6cca2e914d6a89fb30a9ac not found: ID does not exist" Feb 23 17:20:52 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:20:52.340925 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cf91d5d29cd6e9f0a99332a17bfada6f9dcff672372f98a35e100910a2ac7b5\": container with ID starting with 9cf91d5d29cd6e9f0a99332a17bfada6f9dcff672372f98a35e100910a2ac7b5 not found: ID does not exist" containerID="9cf91d5d29cd6e9f0a99332a17bfada6f9dcff672372f98a35e100910a2ac7b5" Feb 23 17:20:52 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:52.340956 2112 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="9cf91d5d29cd6e9f0a99332a17bfada6f9dcff672372f98a35e100910a2ac7b5" err="rpc error: code = NotFound desc = could not find container \"9cf91d5d29cd6e9f0a99332a17bfada6f9dcff672372f98a35e100910a2ac7b5\": container with ID starting with 9cf91d5d29cd6e9f0a99332a17bfada6f9dcff672372f98a35e100910a2ac7b5 not found: ID does not exist" Feb 23 17:20:52 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:20:52.341262 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"474b4bf7722a950bdd74d5e6f36f8be012f5ee141657325ea4b4ee0cc4ae9925\": container with ID starting with 
474b4bf7722a950bdd74d5e6f36f8be012f5ee141657325ea4b4ee0cc4ae9925 not found: ID does not exist" containerID="474b4bf7722a950bdd74d5e6f36f8be012f5ee141657325ea4b4ee0cc4ae9925" Feb 23 17:20:52 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:52.341323 2112 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="474b4bf7722a950bdd74d5e6f36f8be012f5ee141657325ea4b4ee0cc4ae9925" err="rpc error: code = NotFound desc = could not find container \"474b4bf7722a950bdd74d5e6f36f8be012f5ee141657325ea4b4ee0cc4ae9925\": container with ID starting with 474b4bf7722a950bdd74d5e6f36f8be012f5ee141657325ea4b4ee0cc4ae9925 not found: ID does not exist" Feb 23 17:20:52 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:20:52.341766 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a39f7e2bf3eb17fdd0d26fc1d6c124ca21c4a6c4547787c09eccfa411e099af\": container with ID starting with 4a39f7e2bf3eb17fdd0d26fc1d6c124ca21c4a6c4547787c09eccfa411e099af not found: ID does not exist" containerID="4a39f7e2bf3eb17fdd0d26fc1d6c124ca21c4a6c4547787c09eccfa411e099af" Feb 23 17:20:52 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:52.341794 2112 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="4a39f7e2bf3eb17fdd0d26fc1d6c124ca21c4a6c4547787c09eccfa411e099af" err="rpc error: code = NotFound desc = could not find container \"4a39f7e2bf3eb17fdd0d26fc1d6c124ca21c4a6c4547787c09eccfa411e099af\": container with ID starting with 4a39f7e2bf3eb17fdd0d26fc1d6c124ca21c4a6c4547787c09eccfa411e099af not found: ID does not exist" Feb 23 17:20:52 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:20:52.342170 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc8ac0b3221bee2c93bcd633a65f084396c7fcebdd0cc982a5e812aebc66f372\": container with ID starting with 
cc8ac0b3221bee2c93bcd633a65f084396c7fcebdd0cc982a5e812aebc66f372 not found: ID does not exist" containerID="cc8ac0b3221bee2c93bcd633a65f084396c7fcebdd0cc982a5e812aebc66f372" Feb 23 17:20:52 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:52.342198 2112 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="cc8ac0b3221bee2c93bcd633a65f084396c7fcebdd0cc982a5e812aebc66f372" err="rpc error: code = NotFound desc = could not find container \"cc8ac0b3221bee2c93bcd633a65f084396c7fcebdd0cc982a5e812aebc66f372\": container with ID starting with cc8ac0b3221bee2c93bcd633a65f084396c7fcebdd0cc982a5e812aebc66f372 not found: ID does not exist" Feb 23 17:20:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:52.381298055Z" level=info msg="Pulled image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:994b8d0cadd7c1d9675d4941a7e46e070aaeaa00b937a25905957c869e7bd22d" id=4c112dc2-41d7-4a55-b6bc-c0c8f14aa989 name=/runtime.v1.ImageService/PullImage Feb 23 17:20:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:52.382046974Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:994b8d0cadd7c1d9675d4941a7e46e070aaeaa00b937a25905957c869e7bd22d" id=9777e309-d59b-44c8-95cf-75ec239adb90 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:20:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:52.383823779Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:3ce3915354bf7a03f692da38f95e1d2a6d3074e488a32c3b60aaee01fadc2994,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:994b8d0cadd7c1d9675d4941a7e46e070aaeaa00b937a25905957c869e7bd22d],Size_:516222840,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=9777e309-d59b-44c8-95cf-75ec239adb90 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:20:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:52.384442114Z" level=info msg="Creating container: 
openshift-network-diagnostics/network-check-source-5ff44f4c57-4nhbr/check-endpoints" id=9af0ac7e-08a8-49dc-b24d-8395f48d6a32 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:20:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:52.384535028Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:20:52 ip-10-0-136-68 systemd[1]: Started crio-conmon-13b50e95ba7acee28832e992c331d7564264429a4a3c5680afcc7e73be8a3d5e.scope. Feb 23 17:20:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:52.447217295Z" level=info msg="Pulled image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:6667c4aac31549982ef3d098b826a0ccfa3c9542350312642d0b4270b018433a" id=6f59ba5d-b5e5-461b-94e6-dd67aeb5db5f name=/runtime.v1.ImageService/PullImage Feb 23 17:20:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:52.448136319Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:6667c4aac31549982ef3d098b826a0ccfa3c9542350312642d0b4270b018433a" id=97e9f05c-a68f-45bf-a13d-08168e6516d9 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:20:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:52.450559171Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:d979f2d334f2f1645227fcd91eb640e4be627e8618658519ab8194bf4b104db0,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:6667c4aac31549982ef3d098b826a0ccfa3c9542350312642d0b4270b018433a],Size_:1146370016,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=97e9f05c-a68f-45bf-a13d-08168e6516d9 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:20:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:52.451246514Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-upgrades-prepuller-nzb8w/ovnkube-upgrades-prepuller" id=bcef9cc7-6d8a-4436-a188-7ff1ee5f4965 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:20:52 ip-10-0-136-68 
crio[2062]: time="2023-02-23 17:20:52.451334328Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:20:52 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:52.491050 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-multus/multus-gr76d] Feb 23 17:20:52 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:52.491239 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-multus/multus-gr76d" podUID=ffd2cee3-1bae-4941-8015-2b3ade383d85 containerName="kube-multus" containerID="cri-o://4ad960e2f8c7b03ac82c2beac81caa9736f05de6b29400c654f753cd3b0f65f5" gracePeriod=10 Feb 23 17:20:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:52.491628982Z" level=info msg="Stopping container: 4ad960e2f8c7b03ac82c2beac81caa9736f05de6b29400c654f753cd3b0f65f5 (timeout: 10s)" id=b5945641-b04d-405c-a7cb-af78c7881aa3 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:20:52 ip-10-0-136-68 systemd[1]: Started crio-conmon-02da5be1d43d55aee72a3fbeb2753a10211c68fbcf0c6c980cb0b5f8a21ab000.scope. Feb 23 17:20:52 ip-10-0-136-68 systemd[1]: Started libcontainer container 13b50e95ba7acee28832e992c331d7564264429a4a3c5680afcc7e73be8a3d5e. Feb 23 17:20:52 ip-10-0-136-68 systemd[1]: Started libcontainer container 02da5be1d43d55aee72a3fbeb2753a10211c68fbcf0c6c980cb0b5f8a21ab000. 
Feb 23 17:20:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:52.608180307Z" level=info msg="Created container 13b50e95ba7acee28832e992c331d7564264429a4a3c5680afcc7e73be8a3d5e: openshift-network-diagnostics/network-check-source-5ff44f4c57-4nhbr/check-endpoints" id=9af0ac7e-08a8-49dc-b24d-8395f48d6a32 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:20:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:52.609176310Z" level=info msg="Starting container: 13b50e95ba7acee28832e992c331d7564264429a4a3c5680afcc7e73be8a3d5e" id=e93b3ae0-884d-4844-9efa-827597588ce2 name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:20:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:52.623344436Z" level=info msg="Created container 02da5be1d43d55aee72a3fbeb2753a10211c68fbcf0c6c980cb0b5f8a21ab000: openshift-ovn-kubernetes/ovnkube-upgrades-prepuller-nzb8w/ovnkube-upgrades-prepuller" id=bcef9cc7-6d8a-4436-a188-7ff1ee5f4965 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:20:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:52.623909206Z" level=info msg="Started container" PID=66752 containerID=13b50e95ba7acee28832e992c331d7564264429a4a3c5680afcc7e73be8a3d5e description=openshift-network-diagnostics/network-check-source-5ff44f4c57-4nhbr/check-endpoints id=e93b3ae0-884d-4844-9efa-827597588ce2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d3697aea95661cdf7a48775ec994ae0f82fbec2eb65e7995c5a795c4978d6b33 Feb 23 17:20:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:52.623991272Z" level=info msg="Starting container: 02da5be1d43d55aee72a3fbeb2753a10211c68fbcf0c6c980cb0b5f8a21ab000" id=66337d91-e2e6-4658-9c6a-1e999f583b3f name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:20:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:52.649587960Z" level=info msg="Started container" PID=66765 containerID=02da5be1d43d55aee72a3fbeb2753a10211c68fbcf0c6c980cb0b5f8a21ab000 
description=openshift-ovn-kubernetes/ovnkube-upgrades-prepuller-nzb8w/ovnkube-upgrades-prepuller id=66337d91-e2e6-4658-9c6a-1e999f583b3f name=/runtime.v1.RuntimeService/StartContainer sandboxID=8102b7a8644eb2b5fd95851f54f3da8b6162290188d768ae49fcf169d853631d Feb 23 17:20:52 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:52.920967 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5ff44f4c57-4nhbr" event=&{ID:7952f7cd-30fa-4974-9514-90e64fd0405a Type:ContainerStarted Data:13b50e95ba7acee28832e992c331d7564264429a4a3c5680afcc7e73be8a3d5e} Feb 23 17:20:52 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:52.922515 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-upgrades-prepuller-nzb8w" event=&{ID:fb8cddd7-8398-4edb-b1cc-362df7469281 Type:ContainerStarted Data:02da5be1d43d55aee72a3fbeb2753a10211c68fbcf0c6c980cb0b5f8a21ab000} Feb 23 17:20:52 ip-10-0-136-68 systemd[1]: crio-4ad960e2f8c7b03ac82c2beac81caa9736f05de6b29400c654f753cd3b0f65f5.scope: Succeeded. Feb 23 17:20:52 ip-10-0-136-68 systemd[1]: crio-4ad960e2f8c7b03ac82c2beac81caa9736f05de6b29400c654f753cd3b0f65f5.scope: Consumed 26.600s CPU time Feb 23 17:20:52 ip-10-0-136-68 systemd[1]: crio-conmon-4ad960e2f8c7b03ac82c2beac81caa9736f05de6b29400c654f753cd3b0f65f5.scope: Succeeded. 
Feb 23 17:20:52 ip-10-0-136-68 systemd[1]: crio-conmon-4ad960e2f8c7b03ac82c2beac81caa9736f05de6b29400c654f753cd3b0f65f5.scope: Consumed 26ms CPU time Feb 23 17:20:53 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:53.924636 2112 generic.go:296] "Generic (PLEG): container finished" podID=ffd2cee3-1bae-4941-8015-2b3ade383d85 containerID="4ad960e2f8c7b03ac82c2beac81caa9736f05de6b29400c654f753cd3b0f65f5" exitCode=0 Feb 23 17:20:53 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:53.924714 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gr76d" event=&{ID:ffd2cee3-1bae-4941-8015-2b3ade383d85 Type:ContainerDied Data:4ad960e2f8c7b03ac82c2beac81caa9736f05de6b29400c654f753cd3b0f65f5} Feb 23 17:20:54 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-7dfb2584abc99c7b618d79011d0af78bd31ce36c2a97b5dbc5f87f6b8072e713-merged.mount: Succeeded. Feb 23 17:20:54 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-7dfb2584abc99c7b618d79011d0af78bd31ce36c2a97b5dbc5f87f6b8072e713-merged.mount: Consumed 0 CPU time Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.034694705Z" level=info msg="Stopped container 4ad960e2f8c7b03ac82c2beac81caa9736f05de6b29400c654f753cd3b0f65f5: openshift-multus/multus-gr76d/kube-multus" id=b5945641-b04d-405c-a7cb-af78c7881aa3 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.035267315Z" level=info msg="Stopping pod sandbox: cf61acb2b8c00d20091b3cda5994fcea67d4dba436161f7e798fd749cbb686f6" id=8063fe6e-44e9-4d35-aa13-5e708b03cc1d name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:20:54 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-9b7c6c89c6bf8201424c1d0991c90cbe2bb1c28c395e4ee2743d8c1ee8b5dcf1-merged.mount: Succeeded. 
Feb 23 17:20:54 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-9b7c6c89c6bf8201424c1d0991c90cbe2bb1c28c395e4ee2743d8c1ee8b5dcf1-merged.mount: Consumed 0 CPU time Feb 23 17:20:54 ip-10-0-136-68 systemd[1]: run-utsns-f2dfc3e5\x2dedf9\x2d450e\x2d9e16\x2d2685a11be99d.mount: Succeeded. Feb 23 17:20:54 ip-10-0-136-68 systemd[1]: run-utsns-f2dfc3e5\x2dedf9\x2d450e\x2d9e16\x2d2685a11be99d.mount: Consumed 0 CPU time Feb 23 17:20:54 ip-10-0-136-68 systemd[1]: run-ipcns-f2dfc3e5\x2dedf9\x2d450e\x2d9e16\x2d2685a11be99d.mount: Succeeded. Feb 23 17:20:54 ip-10-0-136-68 systemd[1]: run-ipcns-f2dfc3e5\x2dedf9\x2d450e\x2d9e16\x2d2685a11be99d.mount: Consumed 0 CPU time Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.084164732Z" level=info msg="Pulled image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:4e4bafb1e5f5b396d308ee8a7e49c4fd8b13597134c5887db0dd9d9c2d4e0e40" id=237ee7e1-fb35-4ef5-9ac9-04df70c27694 name=/runtime.v1.ImageService/PullImage Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.084864922Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:4e4bafb1e5f5b396d308ee8a7e49c4fd8b13597134c5887db0dd9d9c2d4e0e40" id=d0ed7584-5842-4574-a555-7c28503dfad1 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.086169294Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:d0d966d4fd614348a70912e15c61a36516e517195727245dbb86ba66fecb2804,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:4e4bafb1e5f5b396d308ee8a7e49c4fd8b13597134c5887db0dd9d9c2d4e0e40],Size_:602617270,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=d0ed7584-5842-4574-a555-7c28503dfad1 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.086827003Z" level=info msg="Creating container: 
openshift-multus/multus-additional-cni-plugins-nqwsg/cni-plugins" id=b268ff6d-d29a-4fbe-859b-3c120da55302 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.086907845Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:20:54 ip-10-0-136-68 systemd[1]: run-netns-f2dfc3e5\x2dedf9\x2d450e\x2d9e16\x2d2685a11be99d.mount: Succeeded. Feb 23 17:20:54 ip-10-0-136-68 systemd[1]: run-netns-f2dfc3e5\x2dedf9\x2d450e\x2d9e16\x2d2685a11be99d.mount: Consumed 0 CPU time Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.107710670Z" level=info msg="Stopped pod sandbox: cf61acb2b8c00d20091b3cda5994fcea67d4dba436161f7e798fd749cbb686f6" id=8063fe6e-44e9-4d35-aa13-5e708b03cc1d name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:20:54 ip-10-0-136-68 systemd[1]: Started crio-conmon-678975d87081778a7c85cd96e6752232a79772c550386cea5a1d095e2cf00e3b.scope. Feb 23 17:20:54 ip-10-0-136-68 systemd[1]: Started libcontainer container 678975d87081778a7c85cd96e6752232a79772c550386cea5a1d095e2cf00e3b. 
Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.273589678Z" level=info msg="Created container 678975d87081778a7c85cd96e6752232a79772c550386cea5a1d095e2cf00e3b: openshift-multus/multus-additional-cni-plugins-nqwsg/cni-plugins" id=b268ff6d-d29a-4fbe-859b-3c120da55302 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.274037099Z" level=info msg="Starting container: 678975d87081778a7c85cd96e6752232a79772c550386cea5a1d095e2cf00e3b" id=67da571c-8517-4adb-8725-3a925b78f18d name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:20:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:54.277486 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ffd2cee3-1bae-4941-8015-2b3ade383d85-os-release\") pod \"ffd2cee3-1bae-4941-8015-2b3ade383d85\" (UID: \"ffd2cee3-1bae-4941-8015-2b3ade383d85\") " Feb 23 17:20:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:54.277551 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ffd2cee3-1bae-4941-8015-2b3ade383d85-system-cni-dir\") pod \"ffd2cee3-1bae-4941-8015-2b3ade383d85\" (UID: \"ffd2cee3-1bae-4941-8015-2b3ade383d85\") " Feb 23 17:20:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:54.277559 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffd2cee3-1bae-4941-8015-2b3ade383d85-os-release" (OuterVolumeSpecName: "os-release") pod "ffd2cee3-1bae-4941-8015-2b3ade383d85" (UID: "ffd2cee3-1bae-4941-8015-2b3ade383d85"). InnerVolumeSpecName "os-release". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 17:20:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:54.277588 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4glw\" (UniqueName: \"kubernetes.io/projected/ffd2cee3-1bae-4941-8015-2b3ade383d85-kube-api-access-v4glw\") pod \"ffd2cee3-1bae-4941-8015-2b3ade383d85\" (UID: \"ffd2cee3-1bae-4941-8015-2b3ade383d85\") " Feb 23 17:20:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:54.277610 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ffd2cee3-1bae-4941-8015-2b3ade383d85-multus-cni-dir\") pod \"ffd2cee3-1bae-4941-8015-2b3ade383d85\" (UID: \"ffd2cee3-1bae-4941-8015-2b3ade383d85\") " Feb 23 17:20:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:54.277638 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ffd2cee3-1bae-4941-8015-2b3ade383d85-cni-binary-copy\") pod \"ffd2cee3-1bae-4941-8015-2b3ade383d85\" (UID: \"ffd2cee3-1bae-4941-8015-2b3ade383d85\") " Feb 23 17:20:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:54.277713 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ffd2cee3-1bae-4941-8015-2b3ade383d85-cnibin\") pod \"ffd2cee3-1bae-4941-8015-2b3ade383d85\" (UID: \"ffd2cee3-1bae-4941-8015-2b3ade383d85\") " Feb 23 17:20:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:54.277608 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffd2cee3-1bae-4941-8015-2b3ade383d85-system-cni-dir" (OuterVolumeSpecName: "system-cni-dir") pod "ffd2cee3-1bae-4941-8015-2b3ade383d85" (UID: "ffd2cee3-1bae-4941-8015-2b3ade383d85"). InnerVolumeSpecName "system-cni-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 17:20:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:54.277855 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffd2cee3-1bae-4941-8015-2b3ade383d85-multus-cni-dir" (OuterVolumeSpecName: "multus-cni-dir") pod "ffd2cee3-1bae-4941-8015-2b3ade383d85" (UID: "ffd2cee3-1bae-4941-8015-2b3ade383d85"). InnerVolumeSpecName "multus-cni-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 17:20:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:54.277903 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffd2cee3-1bae-4941-8015-2b3ade383d85-cnibin" (OuterVolumeSpecName: "cnibin") pod "ffd2cee3-1bae-4941-8015-2b3ade383d85" (UID: "ffd2cee3-1bae-4941-8015-2b3ade383d85"). InnerVolumeSpecName "cnibin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 17:20:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:54.277938 2112 reconciler.go:399] "Volume detached for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ffd2cee3-1bae-4941-8015-2b3ade383d85-system-cni-dir\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:20:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:54.277957 2112 reconciler.go:399] "Volume detached for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ffd2cee3-1bae-4941-8015-2b3ade383d85-multus-cni-dir\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:20:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:54.277973 2112 reconciler.go:399] "Volume detached for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ffd2cee3-1bae-4941-8015-2b3ade383d85-os-release\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:20:54 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:20:54.278032 2112 empty_dir.go:523] Warning: Failed to clear quota on 
/var/lib/kubelet/pods/ffd2cee3-1bae-4941-8015-2b3ade383d85/volumes/kubernetes.io~configmap/cni-binary-copy: clearQuota called, but quotas disabled Feb 23 17:20:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:54.278197 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffd2cee3-1bae-4941-8015-2b3ade383d85-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "ffd2cee3-1bae-4941-8015-2b3ade383d85" (UID: "ffd2cee3-1bae-4941-8015-2b3ade383d85"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.292130629Z" level=info msg="Started container" PID=66868 containerID=678975d87081778a7c85cd96e6752232a79772c550386cea5a1d095e2cf00e3b description=openshift-multus/multus-additional-cni-plugins-nqwsg/cni-plugins id=67da571c-8517-4adb-8725-3a925b78f18d name=/runtime.v1.RuntimeService/StartContainer sandboxID=5fee4f72ddb289d7c724dc134f484551f32fef41b4335e681c821de016db4948 Feb 23 17:20:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:54.292999 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffd2cee3-1bae-4941-8015-2b3ade383d85-kube-api-access-v4glw" (OuterVolumeSpecName: "kube-api-access-v4glw") pod "ffd2cee3-1bae-4941-8015-2b3ade383d85" (UID: "ffd2cee3-1bae-4941-8015-2b3ade383d85"). InnerVolumeSpecName "kube-api-access-v4glw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.298445101Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/upgrade_fff05065-25a8-492b-a99c-f39cbd004252\"" Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.307801503Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.307827214Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.341751839Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/bandwidth\"" Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.353513972Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.353537215Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.353553246Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/bridge\"" Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.362455554Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.362477350Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.362491605Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/dhcp\"" Feb 23 17:20:54 ip-10-0-136-68 systemd[1]: crio-678975d87081778a7c85cd96e6752232a79772c550386cea5a1d095e2cf00e3b.scope: Succeeded. 
Feb 23 17:20:54 ip-10-0-136-68 systemd[1]: crio-678975d87081778a7c85cd96e6752232a79772c550386cea5a1d095e2cf00e3b.scope: Consumed 81ms CPU time
Feb 23 17:20:54 ip-10-0-136-68 systemd[1]: crio-conmon-678975d87081778a7c85cd96e6752232a79772c550386cea5a1d095e2cf00e3b.scope: Succeeded.
Feb 23 17:20:54 ip-10-0-136-68 systemd[1]: crio-conmon-678975d87081778a7c85cd96e6752232a79772c550386cea5a1d095e2cf00e3b.scope: Consumed 24ms CPU time
Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.375794492Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.375818963Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.375833536Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/dummy\""
Feb 23 17:20:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:54.379019 2112 reconciler.go:399] "Volume detached for volume \"kube-api-access-v4glw\" (UniqueName: \"kubernetes.io/projected/ffd2cee3-1bae-4941-8015-2b3ade383d85-kube-api-access-v4glw\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:20:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:54.379050 2112 reconciler.go:399] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ffd2cee3-1bae-4941-8015-2b3ade383d85-cni-binary-copy\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:20:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:54.379068 2112 reconciler.go:399] "Volume detached for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ffd2cee3-1bae-4941-8015-2b3ade383d85-cnibin\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.385368098Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.385396235Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.385410229Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/firewall\""
Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.395936591Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.395961552Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.395977607Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/host-device\""
Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.405075007Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.405098681Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.405113251Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/host-local\""
Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.413169324Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.413193237Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.413206448Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/ipvlan\""
Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.421956886Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.421982644Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.421997192Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/loopback\""
Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.430687080Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.430721493Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.430756908Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/macvlan\""
Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.440096172Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.440137589Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.440152351Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/portmap\""
Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.448713223Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.448735220Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.448748638Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/ptp\""
Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.456329028Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.456350765Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.456364793Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/sbr\""
Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.463685899Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.463705811Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.463718876Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/static\""
Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.471345625Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.471367003Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.471379203Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/tap\""
Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.478577833Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.478595107Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.478606698Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/tuning\""
Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.486292028Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.486310106Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.486321186Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/vlan\""
Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.493591525Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.493608671Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.493619780Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/vrf\""
Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.501288003Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.501305317Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.501314711Z" level=info msg="CNI monitoring event REMOVE \"/var/lib/cni/bin/upgrade_fff05065-25a8-492b-a99c-f39cbd004252\""
Feb 23 17:20:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:54.927650 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gr76d" event=&{ID:ffd2cee3-1bae-4941-8015-2b3ade383d85 Type:ContainerDied Data:cf61acb2b8c00d20091b3cda5994fcea67d4dba436161f7e798fd749cbb686f6}
Feb 23 17:20:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:54.927713 2112 scope.go:115] "RemoveContainer" containerID="4ad960e2f8c7b03ac82c2beac81caa9736f05de6b29400c654f753cd3b0f65f5"
Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.929334814Z" level=info msg="Removing container: 4ad960e2f8c7b03ac82c2beac81caa9736f05de6b29400c654f753cd3b0f65f5" id=07312bfa-388e-4cbc-b712-bdaeceb6b0f4 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:20:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:54.929590 2112 generic.go:296] "Generic (PLEG): container finished" podID=7f25c5a9-b9c7-4220-a892-362cf6b33878 containerID="678975d87081778a7c85cd96e6752232a79772c550386cea5a1d095e2cf00e3b" exitCode=0
Feb 23 17:20:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:54.929621 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nqwsg" event=&{ID:7f25c5a9-b9c7-4220-a892-362cf6b33878 Type:ContainerDied Data:678975d87081778a7c85cd96e6752232a79772c550386cea5a1d095e2cf00e3b}
Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.930236599Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:ed68da48de609d277c0cbc6d388e49ba322c0d92e636f2d708274855abaa9c6b" id=330757a8-f333-49e8-a32e-52679dc1e168 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.930514119Z" level=info msg="Image registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:ed68da48de609d277c0cbc6d388e49ba322c0d92e636f2d708274855abaa9c6b not found" id=330757a8-f333-49e8-a32e-52679dc1e168 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.931198644Z" level=info msg="Pulling image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:ed68da48de609d277c0cbc6d388e49ba322c0d92e636f2d708274855abaa9c6b" id=7c1aae23-6b17-47e8-9cb3-15cc5ea25441 name=/runtime.v1.ImageService/PullImage
Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.932191335Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:ed68da48de609d277c0cbc6d388e49ba322c0d92e636f2d708274855abaa9c6b\""
Feb 23 17:20:54 ip-10-0-136-68 systemd[1]: Removed slice libcontainer container kubepods-burstable-podffd2cee3_1bae_4941_8015_2b3ade383d85.slice.
Feb 23 17:20:54 ip-10-0-136-68 systemd[1]: kubepods-burstable-podffd2cee3_1bae_4941_8015_2b3ade383d85.slice: Consumed 26.626s CPU time
Feb 23 17:20:54 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:54.949463396Z" level=info msg="Removed container 4ad960e2f8c7b03ac82c2beac81caa9736f05de6b29400c654f753cd3b0f65f5: openshift-multus/multus-gr76d/kube-multus" id=07312bfa-388e-4cbc-b712-bdaeceb6b0f4 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:20:54 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:54.999231 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-multus/multus-gr76d]
Feb 23 17:20:55 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:55.000007 2112 kubelet.go:2129] "SyncLoop REMOVE" source="api" pods=[openshift-multus/multus-gr76d]
Feb 23 17:20:55 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-cf61acb2b8c00d20091b3cda5994fcea67d4dba436161f7e798fd749cbb686f6-userdata-shm.mount: Succeeded.
Feb 23 17:20:55 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-cf61acb2b8c00d20091b3cda5994fcea67d4dba436161f7e798fd749cbb686f6-userdata-shm.mount: Consumed 0 CPU time
Feb 23 17:20:55 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-ffd2cee3\x2d1bae\x2d4941\x2d8015\x2d2b3ade383d85-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv4glw.mount: Succeeded.
Feb 23 17:20:55 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-ffd2cee3\x2d1bae\x2d4941\x2d8015\x2d2b3ade383d85-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv4glw.mount: Consumed 0 CPU time
Feb 23 17:20:55 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:55.045040 2112 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-multus/multus-4f66c]
Feb 23 17:20:55 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:55.045073 2112 topology_manager.go:205] "Topology Admit Handler"
Feb 23 17:20:55 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:20:55.045124 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ffd2cee3-1bae-4941-8015-2b3ade383d85" containerName="kube-multus"
Feb 23 17:20:55 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:55.045131 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffd2cee3-1bae-4941-8015-2b3ade383d85" containerName="kube-multus"
Feb 23 17:20:55 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:55.045164 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="ffd2cee3-1bae-4941-8015-2b3ade383d85" containerName="kube-multus"
Feb 23 17:20:55 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-pod9eb4a126_482c_4458_b901_e2e7a15dfd93.slice.
Feb 23 17:20:55 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:55.185839 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9eb4a126-482c-4458-b901-e2e7a15dfd93-os-release\") pod \"multus-4f66c\" (UID: \"9eb4a126-482c-4458-b901-e2e7a15dfd93\") " pod="openshift-multus/multus-4f66c"
Feb 23 17:20:55 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:55.185873 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4fbl\" (UniqueName: \"kubernetes.io/projected/9eb4a126-482c-4458-b901-e2e7a15dfd93-kube-api-access-b4fbl\") pod \"multus-4f66c\" (UID: \"9eb4a126-482c-4458-b901-e2e7a15dfd93\") " pod="openshift-multus/multus-4f66c"
Feb 23 17:20:55 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:55.185901 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9eb4a126-482c-4458-b901-e2e7a15dfd93-system-cni-dir\") pod \"multus-4f66c\" (UID: \"9eb4a126-482c-4458-b901-e2e7a15dfd93\") " pod="openshift-multus/multus-4f66c"
Feb 23 17:20:55 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:55.185970 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9eb4a126-482c-4458-b901-e2e7a15dfd93-cni-binary-copy\") pod \"multus-4f66c\" (UID: \"9eb4a126-482c-4458-b901-e2e7a15dfd93\") " pod="openshift-multus/multus-4f66c"
Feb 23 17:20:55 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:55.186004 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9eb4a126-482c-4458-b901-e2e7a15dfd93-cnibin\") pod \"multus-4f66c\" (UID: \"9eb4a126-482c-4458-b901-e2e7a15dfd93\") " pod="openshift-multus/multus-4f66c"
Feb 23 17:20:55 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:55.186029 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9eb4a126-482c-4458-b901-e2e7a15dfd93-multus-cni-dir\") pod \"multus-4f66c\" (UID: \"9eb4a126-482c-4458-b901-e2e7a15dfd93\") " pod="openshift-multus/multus-4f66c"
Feb 23 17:20:55 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:55.287249 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9eb4a126-482c-4458-b901-e2e7a15dfd93-os-release\") pod \"multus-4f66c\" (UID: \"9eb4a126-482c-4458-b901-e2e7a15dfd93\") " pod="openshift-multus/multus-4f66c"
Feb 23 17:20:55 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:55.287295 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-b4fbl\" (UniqueName: \"kubernetes.io/projected/9eb4a126-482c-4458-b901-e2e7a15dfd93-kube-api-access-b4fbl\") pod \"multus-4f66c\" (UID: \"9eb4a126-482c-4458-b901-e2e7a15dfd93\") " pod="openshift-multus/multus-4f66c"
Feb 23 17:20:55 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:55.287320 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9eb4a126-482c-4458-b901-e2e7a15dfd93-system-cni-dir\") pod \"multus-4f66c\" (UID: \"9eb4a126-482c-4458-b901-e2e7a15dfd93\") " pod="openshift-multus/multus-4f66c"
Feb 23 17:20:55 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:55.287345 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9eb4a126-482c-4458-b901-e2e7a15dfd93-cni-binary-copy\") pod \"multus-4f66c\" (UID: \"9eb4a126-482c-4458-b901-e2e7a15dfd93\") " pod="openshift-multus/multus-4f66c"
Feb 23 17:20:55 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:55.287373 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9eb4a126-482c-4458-b901-e2e7a15dfd93-cnibin\") pod \"multus-4f66c\" (UID: \"9eb4a126-482c-4458-b901-e2e7a15dfd93\") " pod="openshift-multus/multus-4f66c"
Feb 23 17:20:55 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:55.287384 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9eb4a126-482c-4458-b901-e2e7a15dfd93-os-release\") pod \"multus-4f66c\" (UID: \"9eb4a126-482c-4458-b901-e2e7a15dfd93\") " pod="openshift-multus/multus-4f66c"
Feb 23 17:20:55 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:55.287397 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9eb4a126-482c-4458-b901-e2e7a15dfd93-multus-cni-dir\") pod \"multus-4f66c\" (UID: \"9eb4a126-482c-4458-b901-e2e7a15dfd93\") " pod="openshift-multus/multus-4f66c"
Feb 23 17:20:55 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:55.287465 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9eb4a126-482c-4458-b901-e2e7a15dfd93-multus-cni-dir\") pod \"multus-4f66c\" (UID: \"9eb4a126-482c-4458-b901-e2e7a15dfd93\") " pod="openshift-multus/multus-4f66c"
Feb 23 17:20:55 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:55.287520 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9eb4a126-482c-4458-b901-e2e7a15dfd93-cnibin\") pod \"multus-4f66c\" (UID: \"9eb4a126-482c-4458-b901-e2e7a15dfd93\") " pod="openshift-multus/multus-4f66c"
Feb 23 17:20:55 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:55.287593 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9eb4a126-482c-4458-b901-e2e7a15dfd93-system-cni-dir\") pod \"multus-4f66c\" (UID: \"9eb4a126-482c-4458-b901-e2e7a15dfd93\") " pod="openshift-multus/multus-4f66c"
Feb 23 17:20:55 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:55.287891 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9eb4a126-482c-4458-b901-e2e7a15dfd93-cni-binary-copy\") pod \"multus-4f66c\" (UID: \"9eb4a126-482c-4458-b901-e2e7a15dfd93\") " pod="openshift-multus/multus-4f66c"
Feb 23 17:20:55 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:55.305808 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4fbl\" (UniqueName: \"kubernetes.io/projected/9eb4a126-482c-4458-b901-e2e7a15dfd93-kube-api-access-b4fbl\") pod \"multus-4f66c\" (UID: \"9eb4a126-482c-4458-b901-e2e7a15dfd93\") " pod="openshift-multus/multus-4f66c"
Feb 23 17:20:55 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:55.356985 2112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-4f66c"
Feb 23 17:20:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:55.357355946Z" level=info msg="Running pod sandbox: openshift-multus/multus-4f66c/POD" id=3f4e4b30-ef42-448a-b997-781fd2bf7a94 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:20:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:55.357412328Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:20:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:55.374287172Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=3f4e4b30-ef42-448a-b997-781fd2bf7a94 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:20:55 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:20:55.377626 2112 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9eb4a126_482c_4458_b901_e2e7a15dfd93.slice/crio-5c3c1df7a428c3e76b587938c521a49edd700de20559f4f3b999cebd60be312a.scope WatchSource:0}: Error finding container 5c3c1df7a428c3e76b587938c521a49edd700de20559f4f3b999cebd60be312a: Status 404 returned error can't find the container with id 5c3c1df7a428c3e76b587938c521a49edd700de20559f4f3b999cebd60be312a
Feb 23 17:20:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:55.378539655Z" level=info msg="Ran pod sandbox 5c3c1df7a428c3e76b587938c521a49edd700de20559f4f3b999cebd60be312a with infra container: openshift-multus/multus-4f66c/POD" id=3f4e4b30-ef42-448a-b997-781fd2bf7a94 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:20:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:55.379277997Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:cefa1485033e6845dde6fe49683bd21f666cbc635bd244bac19c9ed4b7647244" id=107803c2-1956-4c56-bcdd-3a14ea8d20f1 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:20:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:55.379423187Z" level=info msg="Image registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:cefa1485033e6845dde6fe49683bd21f666cbc635bd244bac19c9ed4b7647244 not found" id=107803c2-1956-4c56-bcdd-3a14ea8d20f1 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:20:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:55.379954107Z" level=info msg="Pulling image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:cefa1485033e6845dde6fe49683bd21f666cbc635bd244bac19c9ed4b7647244" id=db4c1a1b-8f49-4039-a17a-5197bdd8cbbb name=/runtime.v1.ImageService/PullImage
Feb 23 17:20:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:55.380824304Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:cefa1485033e6845dde6fe49683bd21f666cbc635bd244bac19c9ed4b7647244\""
Feb 23 17:20:55 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:55.932254 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4f66c" event=&{ID:9eb4a126-482c-4458-b901-e2e7a15dfd93 Type:ContainerStarted Data:5c3c1df7a428c3e76b587938c521a49edd700de20559f4f3b999cebd60be312a}
Feb 23 17:20:56 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:20:56.118653 2112 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=ffd2cee3-1bae-4941-8015-2b3ade383d85 path="/var/lib/kubelet/pods/ffd2cee3-1bae-4941-8015-2b3ade383d85/volumes"
Feb 23 17:20:56 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:56.334964252Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:ed68da48de609d277c0cbc6d388e49ba322c0d92e636f2d708274855abaa9c6b\""
Feb 23 17:20:56 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00375|connmgr|INFO|br-int<->unix#2: 198 flow_mods in the 53 s starting 57 s ago (102 adds, 96 deletes)
Feb 23 17:20:56 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:20:56.625157936Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:cefa1485033e6845dde6fe49683bd21f666cbc635bd244bac19c9ed4b7647244\""
Feb 23 17:20:59 ip-10-0-136-68 systemd[1]: run-runc-97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923-runc.3gP9fC.mount: Succeeded.
Feb 23 17:21:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:00.056280321Z" level=info msg="Pulled image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:ed68da48de609d277c0cbc6d388e49ba322c0d92e636f2d708274855abaa9c6b" id=7c1aae23-6b17-47e8-9cb3-15cc5ea25441 name=/runtime.v1.ImageService/PullImage
Feb 23 17:21:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:00.057020063Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:ed68da48de609d277c0cbc6d388e49ba322c0d92e636f2d708274855abaa9c6b" id=f5a8de50-fd6a-49d7-9a29-baa963df3a61 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:21:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:00.058220702Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a0594014c9ef6f20e5776c41a70a2a1b50676b13909ee1c32a87253b096f8339,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:ed68da48de609d277c0cbc6d388e49ba322c0d92e636f2d708274855abaa9c6b],Size_:354430079,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=f5a8de50-fd6a-49d7-9a29-baa963df3a61 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:21:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:00.058883560Z" level=info msg="Creating container: openshift-multus/multus-additional-cni-plugins-nqwsg/bond-cni-plugin" id=f821c043-651d-4374-ab76-bc6396079aff name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:21:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:00.058973501Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:21:00 ip-10-0-136-68 systemd[1]: Started crio-conmon-d52f324dfc8a092c21054114fedb54e965dfcdc30bfc24a7458576f7e7fd0c3c.scope.
Feb 23 17:21:00 ip-10-0-136-68 systemd[1]: Started libcontainer container d52f324dfc8a092c21054114fedb54e965dfcdc30bfc24a7458576f7e7fd0c3c.
Feb 23 17:21:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:00.253371472Z" level=info msg="Created container d52f324dfc8a092c21054114fedb54e965dfcdc30bfc24a7458576f7e7fd0c3c: openshift-multus/multus-additional-cni-plugins-nqwsg/bond-cni-plugin" id=f821c043-651d-4374-ab76-bc6396079aff name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:21:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:00.253833060Z" level=info msg="Starting container: d52f324dfc8a092c21054114fedb54e965dfcdc30bfc24a7458576f7e7fd0c3c" id=8cd7566f-914f-4e39-afab-74a6cf285d07 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 17:21:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:00.262719109Z" level=info msg="Started container" PID=67124 containerID=d52f324dfc8a092c21054114fedb54e965dfcdc30bfc24a7458576f7e7fd0c3c description=openshift-multus/multus-additional-cni-plugins-nqwsg/bond-cni-plugin id=8cd7566f-914f-4e39-afab-74a6cf285d07 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5fee4f72ddb289d7c724dc134f484551f32fef41b4335e681c821de016db4948
Feb 23 17:21:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:00.272737925Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/upgrade_ca534b33-edb0-4bc2-bcf5-1b521b5a8f61\""
Feb 23 17:21:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:00.286783350Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 17:21:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:00.286819206Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 17:21:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:00.286836969Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/bond\""
Feb 23 17:21:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:00.295721521Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 17:21:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:00.295745332Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 17:21:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:00.295756823Z" level=info msg="CNI monitoring event REMOVE \"/var/lib/cni/bin/upgrade_ca534b33-edb0-4bc2-bcf5-1b521b5a8f61\""
Feb 23 17:21:00 ip-10-0-136-68 systemd[1]: crio-d52f324dfc8a092c21054114fedb54e965dfcdc30bfc24a7458576f7e7fd0c3c.scope: Succeeded.
Feb 23 17:21:00 ip-10-0-136-68 systemd[1]: crio-d52f324dfc8a092c21054114fedb54e965dfcdc30bfc24a7458576f7e7fd0c3c.scope: Consumed 40ms CPU time
Feb 23 17:21:00 ip-10-0-136-68 systemd[1]: crio-conmon-d52f324dfc8a092c21054114fedb54e965dfcdc30bfc24a7458576f7e7fd0c3c.scope: Succeeded.
Feb 23 17:21:00 ip-10-0-136-68 systemd[1]: crio-conmon-d52f324dfc8a092c21054114fedb54e965dfcdc30bfc24a7458576f7e7fd0c3c.scope: Consumed 25ms CPU time
Feb 23 17:21:00 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:21:00.941551 2112 generic.go:296] "Generic (PLEG): container finished" podID=7f25c5a9-b9c7-4220-a892-362cf6b33878 containerID="d52f324dfc8a092c21054114fedb54e965dfcdc30bfc24a7458576f7e7fd0c3c" exitCode=0
Feb 23 17:21:00 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:21:00.941589 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nqwsg" event=&{ID:7f25c5a9-b9c7-4220-a892-362cf6b33878 Type:ContainerDied Data:d52f324dfc8a092c21054114fedb54e965dfcdc30bfc24a7458576f7e7fd0c3c}
Feb 23 17:21:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:00.942199951Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:26d23cd931fe8b128262dcabd42370499cd80727283fedaeb552ee236c5a70fd" id=aac958d2-e407-469f-8bc8-43c8596932fb name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:21:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:00.942421813Z" level=info msg="Image registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:26d23cd931fe8b128262dcabd42370499cd80727283fedaeb552ee236c5a70fd not found" id=aac958d2-e407-469f-8bc8-43c8596932fb name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:21:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:00.942872861Z" level=info msg="Pulling image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:26d23cd931fe8b128262dcabd42370499cd80727283fedaeb552ee236c5a70fd" id=534b852f-d6d6-4b3d-ae78-64efcd3ecccb name=/runtime.v1.ImageService/PullImage
Feb 23 17:21:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:00.943734488Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:26d23cd931fe8b128262dcabd42370499cd80727283fedaeb552ee236c5a70fd\""
Feb 23 17:21:01 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:01.742000639Z" level=info msg="Pulled image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:cefa1485033e6845dde6fe49683bd21f666cbc635bd244bac19c9ed4b7647244" id=db4c1a1b-8f49-4039-a17a-5197bdd8cbbb name=/runtime.v1.ImageService/PullImage
Feb 23 17:21:01 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:01.742652868Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:cefa1485033e6845dde6fe49683bd21f666cbc635bd244bac19c9ed4b7647244" id=20ea5435-8bf5-46c6-afb8-1a4513f1821a name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:21:01 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:01.743865449Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:f1e6876f2bf1f7a3094dceab6324c9f309c5929c81e15ae5c242900fb6f03188,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:cefa1485033e6845dde6fe49683bd21f666cbc635bd244bac19c9ed4b7647244],Size_:489063224,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=20ea5435-8bf5-46c6-afb8-1a4513f1821a name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:21:01 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:01.744552584Z" level=info msg="Creating container: openshift-multus/multus-4f66c/kube-multus" id=6932d753-41ff-4ac7-926b-f82e509341e7 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:21:01 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:01.744643846Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:21:01 ip-10-0-136-68 systemd[1]: Started crio-conmon-80f0eadf9b6d882a2658a5d3eef859c17a99700d3bd79623a4b7b2c758f6d075.scope.
Feb 23 17:21:01 ip-10-0-136-68 systemd[1]: Started libcontainer container 80f0eadf9b6d882a2658a5d3eef859c17a99700d3bd79623a4b7b2c758f6d075.
Feb 23 17:21:01 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:01.885539547Z" level=info msg="Created container 80f0eadf9b6d882a2658a5d3eef859c17a99700d3bd79623a4b7b2c758f6d075: openshift-multus/multus-4f66c/kube-multus" id=6932d753-41ff-4ac7-926b-f82e509341e7 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:21:01 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:01.885968044Z" level=info msg="Starting container: 80f0eadf9b6d882a2658a5d3eef859c17a99700d3bd79623a4b7b2c758f6d075" id=c2ed5e4c-5534-45dc-8efd-0838168136f4 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 17:21:01 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:01.892859770Z" level=info msg="Started container" PID=67224 containerID=80f0eadf9b6d882a2658a5d3eef859c17a99700d3bd79623a4b7b2c758f6d075 description=openshift-multus/multus-4f66c/kube-multus id=c2ed5e4c-5534-45dc-8efd-0838168136f4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5c3c1df7a428c3e76b587938c521a49edd700de20559f4f3b999cebd60be312a
Feb 23 17:21:01 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:01.899232372Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/upgrade_74e056a8-deda-4e79-8940-0e866a7efd0a\""
Feb 23 17:21:01 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:01.908404474Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 17:21:01 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:01.908429422Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 17:21:01 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:01.925407093Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/multus\""
Feb 23 17:21:01 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:01.935312407Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 17:21:01 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:01.935332836Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 17:21:01 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:01.935349509Z" level=info msg="CNI monitoring event REMOVE \"/var/lib/cni/bin/upgrade_74e056a8-deda-4e79-8940-0e866a7efd0a\""
Feb 23 17:21:01 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:21:01.945535 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4f66c" event=&{ID:9eb4a126-482c-4458-b901-e2e7a15dfd93 Type:ContainerStarted Data:80f0eadf9b6d882a2658a5d3eef859c17a99700d3bd79623a4b7b2c758f6d075}
Feb 23 17:21:02 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:02.062031248Z" level=info msg="CNI monitoring event REMOVE \"/etc/kubernetes/cni/net.d/00-multus.conf\""
Feb 23 17:21:02 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:02.073627504Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 17:21:02 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:02.073645612Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 17:21:02 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:02.073694243Z" level=info msg="CNI monitoring event CREATE \"/etc/kubernetes/cni/net.d/00-multus.conf\""
Feb 23 17:21:02 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:02.078525406Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:26d23cd931fe8b128262dcabd42370499cd80727283fedaeb552ee236c5a70fd\""
Feb 23 17:21:02 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:02.082342862Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 17:21:02 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:02.082361697Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 17:21:02 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:02.082371940Z" level=info msg="CNI monitoring event WRITE \"/etc/kubernetes/cni/net.d/00-multus.conf\""
Feb 23 17:21:02 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:02.089969545Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 17:21:02 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:02.089989469Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 17:21:02 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:02.089998588Z" level=info msg="CNI monitoring event CHMOD \"/etc/kubernetes/cni/net.d/00-multus.conf\""
Feb 23 17:21:06 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00376|connmgr|INFO|br-ex<->unix#1358: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:21:06 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:06.310263334Z" level=info msg="Pulled image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:26d23cd931fe8b128262dcabd42370499cd80727283fedaeb552ee236c5a70fd" id=534b852f-d6d6-4b3d-ae78-64efcd3ecccb name=/runtime.v1.ImageService/PullImage
Feb 23 17:21:06 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:06.311027521Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:26d23cd931fe8b128262dcabd42370499cd80727283fedaeb552ee236c5a70fd" id=f97ef47b-654d-4e8e-8697-5905d89bd7f9 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:21:06 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:06.312259989Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2078dbbf32ab796b061a8902eb123c389b4a58ab19a1103edbb8f6d0dc53d16a,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:26d23cd931fe8b128262dcabd42370499cd80727283fedaeb552ee236c5a70fd],Size_:360076247,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=f97ef47b-654d-4e8e-8697-5905d89bd7f9 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:21:06 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:06.312867796Z" level=info msg="Creating container: openshift-multus/multus-additional-cni-plugins-nqwsg/routeoverride-cni" id=bc4595da-e34d-462d-a180-44f2e9100cc9 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:21:06 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:06.312944489Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:21:06 ip-10-0-136-68 systemd[1]: Started crio-conmon-ba9507283288dc55e5e4de848809bcfb93392e83daf8b4dccde024e43dfa5e19.scope.
Feb 23 17:21:06 ip-10-0-136-68 systemd[1]: Started libcontainer container ba9507283288dc55e5e4de848809bcfb93392e83daf8b4dccde024e43dfa5e19.
Feb 23 17:21:06 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:06.507333290Z" level=info msg="Created container ba9507283288dc55e5e4de848809bcfb93392e83daf8b4dccde024e43dfa5e19: openshift-multus/multus-additional-cni-plugins-nqwsg/routeoverride-cni" id=bc4595da-e34d-462d-a180-44f2e9100cc9 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:21:06 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:06.507820976Z" level=info msg="Starting container: ba9507283288dc55e5e4de848809bcfb93392e83daf8b4dccde024e43dfa5e19" id=1b26dae4-8507-4222-8984-c0d3c4633fb1 name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:21:06 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:06.514987494Z" level=info msg="Started container" PID=67483 containerID=ba9507283288dc55e5e4de848809bcfb93392e83daf8b4dccde024e43dfa5e19 description=openshift-multus/multus-additional-cni-plugins-nqwsg/routeoverride-cni id=1b26dae4-8507-4222-8984-c0d3c4633fb1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5fee4f72ddb289d7c724dc134f484551f32fef41b4335e681c821de016db4948 Feb 23 17:21:06 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:06.520616709Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/upgrade_7400c7d7-6e41-469b-8bcb-4328bec4e038\"" Feb 23 17:21:06 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:06.530753886Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 17:21:06 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:06.530774839Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 17:21:06 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:06.530785390Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/route-override\"" Feb 23 17:21:06 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:06.538925807Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 17:21:06 ip-10-0-136-68 crio[2062]: 
time="2023-02-23 17:21:06.538943022Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 17:21:06 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:06.538952837Z" level=info msg="CNI monitoring event REMOVE \"/var/lib/cni/bin/upgrade_7400c7d7-6e41-469b-8bcb-4328bec4e038\"" Feb 23 17:21:06 ip-10-0-136-68 systemd[1]: crio-ba9507283288dc55e5e4de848809bcfb93392e83daf8b4dccde024e43dfa5e19.scope: Succeeded. Feb 23 17:21:06 ip-10-0-136-68 systemd[1]: crio-ba9507283288dc55e5e4de848809bcfb93392e83daf8b4dccde024e43dfa5e19.scope: Consumed 34ms CPU time Feb 23 17:21:06 ip-10-0-136-68 systemd[1]: crio-conmon-ba9507283288dc55e5e4de848809bcfb93392e83daf8b4dccde024e43dfa5e19.scope: Succeeded. Feb 23 17:21:06 ip-10-0-136-68 systemd[1]: crio-conmon-ba9507283288dc55e5e4de848809bcfb93392e83daf8b4dccde024e43dfa5e19.scope: Consumed 26ms CPU time Feb 23 17:21:06 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:21:06.955647 2112 generic.go:296] "Generic (PLEG): container finished" podID=7f25c5a9-b9c7-4220-a892-362cf6b33878 containerID="ba9507283288dc55e5e4de848809bcfb93392e83daf8b4dccde024e43dfa5e19" exitCode=0 Feb 23 17:21:06 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:21:06.955716 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nqwsg" event=&{ID:7f25c5a9-b9c7-4220-a892-362cf6b33878 Type:ContainerDied Data:ba9507283288dc55e5e4de848809bcfb93392e83daf8b4dccde024e43dfa5e19} Feb 23 17:21:06 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:06.956389698Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:e130337cb35cbff3060890d491ba1f7fc830dbd5685efd28040b788caf6b745c" id=2dc4ddb1-edb6-4495-a6a4-e4c0d64d7427 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:21:06 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:06.956564722Z" level=info msg="Image 
registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:e130337cb35cbff3060890d491ba1f7fc830dbd5685efd28040b788caf6b745c not found" id=2dc4ddb1-edb6-4495-a6a4-e4c0d64d7427 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:21:06 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:06.957152914Z" level=info msg="Pulling image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:e130337cb35cbff3060890d491ba1f7fc830dbd5685efd28040b788caf6b745c" id=0998bc9c-63ec-479a-8257-8d289cda949c name=/runtime.v1.ImageService/PullImage Feb 23 17:21:06 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:06.958071032Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:e130337cb35cbff3060890d491ba1f7fc830dbd5685efd28040b788caf6b745c\"" Feb 23 17:21:08 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:08.357115280Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:e130337cb35cbff3060890d491ba1f7fc830dbd5685efd28040b788caf6b745c\"" Feb 23 17:21:13 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:13.855475643Z" level=info msg="Pulled image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:e130337cb35cbff3060890d491ba1f7fc830dbd5685efd28040b788caf6b745c" id=0998bc9c-63ec-479a-8257-8d289cda949c name=/runtime.v1.ImageService/PullImage Feb 23 17:21:13 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:13.856263658Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:e130337cb35cbff3060890d491ba1f7fc830dbd5685efd28040b788caf6b745c" id=5cbd988b-1d0e-40d9-b0e6-fd9f8ab0c413 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:21:13 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:13.858433519Z" level=info msg="Image status: 
&ImageStatusResponse{Image:&Image{Id:efad93007137457139d15af51e0db65f198da2610fd21f63574f6fcab101c2cc,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:e130337cb35cbff3060890d491ba1f7fc830dbd5685efd28040b788caf6b745c],Size_:446125278,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=5cbd988b-1d0e-40d9-b0e6-fd9f8ab0c413 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:21:13 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:13.859317807Z" level=info msg="Creating container: openshift-multus/multus-additional-cni-plugins-nqwsg/whereabouts-cni-bincopy" id=f972187d-f5cd-48c7-bd95-3c2f728b198f name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:21:13 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:13.859412520Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:21:13 ip-10-0-136-68 systemd[1]: Started crio-conmon-cbd5c705b58abf755af4100a7838f6612b48c3cc2bd31a0c142d0356c420e8c4.scope. Feb 23 17:21:13 ip-10-0-136-68 systemd[1]: Started libcontainer container cbd5c705b58abf755af4100a7838f6612b48c3cc2bd31a0c142d0356c420e8c4. 
Feb 23 17:21:14 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:14.038914021Z" level=info msg="Created container cbd5c705b58abf755af4100a7838f6612b48c3cc2bd31a0c142d0356c420e8c4: openshift-multus/multus-additional-cni-plugins-nqwsg/whereabouts-cni-bincopy" id=f972187d-f5cd-48c7-bd95-3c2f728b198f name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:21:14 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:14.039284319Z" level=info msg="Starting container: cbd5c705b58abf755af4100a7838f6612b48c3cc2bd31a0c142d0356c420e8c4" id=79575c9d-96f3-456c-ad48-57f810160238 name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:21:14 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:14.046808932Z" level=info msg="Started container" PID=67639 containerID=cbd5c705b58abf755af4100a7838f6612b48c3cc2bd31a0c142d0356c420e8c4 description=openshift-multus/multus-additional-cni-plugins-nqwsg/whereabouts-cni-bincopy id=79575c9d-96f3-456c-ad48-57f810160238 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5fee4f72ddb289d7c724dc134f484551f32fef41b4335e681c821de016db4948 Feb 23 17:21:14 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:14.052792435Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/upgrade_c2601e8f-acdd-4212-b6d0-21f72cd9fd90\"" Feb 23 17:21:14 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:14.063238681Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 17:21:14 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:14.063260710Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 17:21:14 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:14.104344466Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/ip-control-loop\"" Feb 23 17:21:14 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:14.114182785Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 17:21:14 ip-10-0-136-68 
crio[2062]: time="2023-02-23 17:21:14.114203714Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 17:21:14 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:14.114213876Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/whereabouts\"" Feb 23 17:21:14 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:14.122839270Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf" Feb 23 17:21:14 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:14.122860866Z" level=info msg="Updated default CNI network name to multus-cni-network" Feb 23 17:21:14 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:14.122874953Z" level=info msg="CNI monitoring event REMOVE \"/var/lib/cni/bin/upgrade_c2601e8f-acdd-4212-b6d0-21f72cd9fd90\"" Feb 23 17:21:14 ip-10-0-136-68 systemd[1]: crio-cbd5c705b58abf755af4100a7838f6612b48c3cc2bd31a0c142d0356c420e8c4.scope: Succeeded. Feb 23 17:21:14 ip-10-0-136-68 systemd[1]: crio-cbd5c705b58abf755af4100a7838f6612b48c3cc2bd31a0c142d0356c420e8c4.scope: Consumed 83ms CPU time Feb 23 17:21:14 ip-10-0-136-68 systemd[1]: crio-conmon-cbd5c705b58abf755af4100a7838f6612b48c3cc2bd31a0c142d0356c420e8c4.scope: Succeeded. 
Feb 23 17:21:14 ip-10-0-136-68 systemd[1]: crio-conmon-cbd5c705b58abf755af4100a7838f6612b48c3cc2bd31a0c142d0356c420e8c4.scope: Consumed 24ms CPU time Feb 23 17:21:14 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:21:14.972547 2112 generic.go:296] "Generic (PLEG): container finished" podID=7f25c5a9-b9c7-4220-a892-362cf6b33878 containerID="cbd5c705b58abf755af4100a7838f6612b48c3cc2bd31a0c142d0356c420e8c4" exitCode=0 Feb 23 17:21:14 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:21:14.972589 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nqwsg" event=&{ID:7f25c5a9-b9c7-4220-a892-362cf6b33878 Type:ContainerDied Data:cbd5c705b58abf755af4100a7838f6612b48c3cc2bd31a0c142d0356c420e8c4} Feb 23 17:21:14 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:14.973189767Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:e130337cb35cbff3060890d491ba1f7fc830dbd5685efd28040b788caf6b745c" id=75884896-4875-4151-b650-d217dd2f51f9 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:21:14 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:14.973359061Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:efad93007137457139d15af51e0db65f198da2610fd21f63574f6fcab101c2cc,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:e130337cb35cbff3060890d491ba1f7fc830dbd5685efd28040b788caf6b745c],Size_:446125278,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=75884896-4875-4151-b650-d217dd2f51f9 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:21:14 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:14.974016826Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:e130337cb35cbff3060890d491ba1f7fc830dbd5685efd28040b788caf6b745c" id=18d41166-d404-4e70-8928-c5b07c441919 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:21:14 ip-10-0-136-68 crio[2062]: 
time="2023-02-23 17:21:14.974169891Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:efad93007137457139d15af51e0db65f198da2610fd21f63574f6fcab101c2cc,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:e130337cb35cbff3060890d491ba1f7fc830dbd5685efd28040b788caf6b745c],Size_:446125278,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=18d41166-d404-4e70-8928-c5b07c441919 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:21:14 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:14.974998737Z" level=info msg="Creating container: openshift-multus/multus-additional-cni-plugins-nqwsg/whereabouts-cni" id=a22f95fe-ae43-4e59-941d-b0dddbc34a37 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:21:14 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:14.975084784Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:21:15 ip-10-0-136-68 systemd[1]: Started crio-conmon-5324dadf3de087b7ec951299f8ab41197b9d82a42a278f72a9b6f9ed0455d8ef.scope. Feb 23 17:21:15 ip-10-0-136-68 systemd[1]: Started libcontainer container 5324dadf3de087b7ec951299f8ab41197b9d82a42a278f72a9b6f9ed0455d8ef. 
Feb 23 17:21:15 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:15.159489129Z" level=info msg="Created container 5324dadf3de087b7ec951299f8ab41197b9d82a42a278f72a9b6f9ed0455d8ef: openshift-multus/multus-additional-cni-plugins-nqwsg/whereabouts-cni" id=a22f95fe-ae43-4e59-941d-b0dddbc34a37 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:21:15 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:15.159897370Z" level=info msg="Starting container: 5324dadf3de087b7ec951299f8ab41197b9d82a42a278f72a9b6f9ed0455d8ef" id=20ed115f-9c46-4357-9259-7965b79b57da name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:21:15 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:15.166638086Z" level=info msg="Started container" PID=67742 containerID=5324dadf3de087b7ec951299f8ab41197b9d82a42a278f72a9b6f9ed0455d8ef description=openshift-multus/multus-additional-cni-plugins-nqwsg/whereabouts-cni id=20ed115f-9c46-4357-9259-7965b79b57da name=/runtime.v1.RuntimeService/StartContainer sandboxID=5fee4f72ddb289d7c724dc134f484551f32fef41b4335e681c821de016db4948 Feb 23 17:21:15 ip-10-0-136-68 systemd[1]: crio-5324dadf3de087b7ec951299f8ab41197b9d82a42a278f72a9b6f9ed0455d8ef.scope: Succeeded. Feb 23 17:21:15 ip-10-0-136-68 systemd[1]: crio-5324dadf3de087b7ec951299f8ab41197b9d82a42a278f72a9b6f9ed0455d8ef.scope: Consumed 33ms CPU time Feb 23 17:21:15 ip-10-0-136-68 systemd[1]: crio-conmon-5324dadf3de087b7ec951299f8ab41197b9d82a42a278f72a9b6f9ed0455d8ef.scope: Succeeded. 
Feb 23 17:21:15 ip-10-0-136-68 systemd[1]: crio-conmon-5324dadf3de087b7ec951299f8ab41197b9d82a42a278f72a9b6f9ed0455d8ef.scope: Consumed 25ms CPU time Feb 23 17:21:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:21:15.976439 2112 generic.go:296] "Generic (PLEG): container finished" podID=7f25c5a9-b9c7-4220-a892-362cf6b33878 containerID="5324dadf3de087b7ec951299f8ab41197b9d82a42a278f72a9b6f9ed0455d8ef" exitCode=0 Feb 23 17:21:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:21:15.976471 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nqwsg" event=&{ID:7f25c5a9-b9c7-4220-a892-362cf6b33878 Type:ContainerDied Data:5324dadf3de087b7ec951299f8ab41197b9d82a42a278f72a9b6f9ed0455d8ef} Feb 23 17:21:15 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:15.977084029Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:cefa1485033e6845dde6fe49683bd21f666cbc635bd244bac19c9ed4b7647244" id=c29510eb-0db7-44bc-b109-5f710318ddb9 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:21:15 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:15.977251500Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:f1e6876f2bf1f7a3094dceab6324c9f309c5929c81e15ae5c242900fb6f03188,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:cefa1485033e6845dde6fe49683bd21f666cbc635bd244bac19c9ed4b7647244],Size_:489063224,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=c29510eb-0db7-44bc-b109-5f710318ddb9 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:21:15 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:15.977816316Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:cefa1485033e6845dde6fe49683bd21f666cbc635bd244bac19c9ed4b7647244" id=982919ca-9eeb-45f7-a85a-342268c00c8b name=/runtime.v1.ImageService/ImageStatus Feb 23 17:21:15 ip-10-0-136-68 crio[2062]: 
time="2023-02-23 17:21:15.977961242Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:f1e6876f2bf1f7a3094dceab6324c9f309c5929c81e15ae5c242900fb6f03188,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:cefa1485033e6845dde6fe49683bd21f666cbc635bd244bac19c9ed4b7647244],Size_:489063224,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=982919ca-9eeb-45f7-a85a-342268c00c8b name=/runtime.v1.ImageService/ImageStatus Feb 23 17:21:15 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:15.978525280Z" level=info msg="Creating container: openshift-multus/multus-additional-cni-plugins-nqwsg/kube-multus-additional-cni-plugins" id=3497e17c-9104-450e-88d9-d9ee5d0e964e name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:21:15 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:15.978622837Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:21:16 ip-10-0-136-68 systemd[1]: Started crio-conmon-286d73ee6434ff40a3d7c8e8127cd932c84a30976f828a1f30b6cc61ca7cb77f.scope. Feb 23 17:21:16 ip-10-0-136-68 systemd[1]: Started libcontainer container 286d73ee6434ff40a3d7c8e8127cd932c84a30976f828a1f30b6cc61ca7cb77f. 
Feb 23 17:21:16 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:16.133574352Z" level=info msg="Created container 286d73ee6434ff40a3d7c8e8127cd932c84a30976f828a1f30b6cc61ca7cb77f: openshift-multus/multus-additional-cni-plugins-nqwsg/kube-multus-additional-cni-plugins" id=3497e17c-9104-450e-88d9-d9ee5d0e964e name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:21:16 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:16.133974124Z" level=info msg="Starting container: 286d73ee6434ff40a3d7c8e8127cd932c84a30976f828a1f30b6cc61ca7cb77f" id=fc743f3e-8db3-404f-8648-5f98e0664f2d name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:21:16 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:16.141577571Z" level=info msg="Started container" PID=67812 containerID=286d73ee6434ff40a3d7c8e8127cd932c84a30976f828a1f30b6cc61ca7cb77f description=openshift-multus/multus-additional-cni-plugins-nqwsg/kube-multus-additional-cni-plugins id=fc743f3e-8db3-404f-8648-5f98e0664f2d name=/runtime.v1.RuntimeService/StartContainer sandboxID=5fee4f72ddb289d7c724dc134f484551f32fef41b4335e681c821de016db4948 Feb 23 17:21:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:21:16.980651 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nqwsg" event=&{ID:7f25c5a9-b9c7-4220-a892-362cf6b33878 Type:ContainerStarted Data:286d73ee6434ff40a3d7c8e8127cd932c84a30976f828a1f30b6cc61ca7cb77f} Feb 23 17:21:21 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00377|connmgr|INFO|br-ex<->unix#1363: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:21:35 ip-10-0-136-68 systemd[1]: run-runc-97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923-runc.OPeJER.mount: Succeeded. 
Feb 23 17:21:36 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00378|connmgr|INFO|br-ex<->unix#1371: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:21:51 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00379|connmgr|INFO|br-ex<->unix#1376: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:21:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:52.344789941Z" level=info msg="Stopping pod sandbox: cf61acb2b8c00d20091b3cda5994fcea67d4dba436161f7e798fd749cbb686f6" id=64a084ce-6a50-43ab-9895-f19d20c4a654 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:21:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:52.344838658Z" level=info msg="Stopped pod sandbox (already stopped): cf61acb2b8c00d20091b3cda5994fcea67d4dba436161f7e798fd749cbb686f6" id=64a084ce-6a50-43ab-9895-f19d20c4a654 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:21:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:52.345361411Z" level=info msg="Removing pod sandbox: cf61acb2b8c00d20091b3cda5994fcea67d4dba436161f7e798fd749cbb686f6" id=a220f4e6-5d08-490b-a800-aa71e4865814 name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 17:21:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:21:52.359859799Z" level=info msg="Removed pod sandbox: cf61acb2b8c00d20091b3cda5994fcea67d4dba436161f7e798fd749cbb686f6" id=a220f4e6-5d08-490b-a800-aa71e4865814 name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 17:21:52 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:21:52.360838 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29882cfc4e7f08a2fc37c946bff27fcbd17f38017f7f183aa6268a1121e4478d\": container with ID starting with 29882cfc4e7f08a2fc37c946bff27fcbd17f38017f7f183aa6268a1121e4478d not found: ID does not exist" containerID="29882cfc4e7f08a2fc37c946bff27fcbd17f38017f7f183aa6268a1121e4478d" Feb 23 17:21:52 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:21:52.360880 2112 kuberuntime_gc.go:361] "Error getting ContainerStatus for 
containerID" containerID="29882cfc4e7f08a2fc37c946bff27fcbd17f38017f7f183aa6268a1121e4478d" err="rpc error: code = NotFound desc = could not find container \"29882cfc4e7f08a2fc37c946bff27fcbd17f38017f7f183aa6268a1121e4478d\": container with ID starting with 29882cfc4e7f08a2fc37c946bff27fcbd17f38017f7f183aa6268a1121e4478d not found: ID does not exist" Feb 23 17:21:56 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00380|connmgr|INFO|br-int<->unix#2: 38 flow_mods in the 36 s starting 59 s ago (18 adds, 20 deletes) Feb 23 17:22:06 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00381|connmgr|INFO|br-ex<->unix#1384: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:22:21 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00382|connmgr|INFO|br-ex<->unix#1389: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:22:36 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00383|connmgr|INFO|br-ex<->unix#1397: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:22:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:22:46.018982918Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be64c744b92d1b1df463d84d8277c38069f3ce4e8e95ce84d4a6ffac6dc53710" id=e94aff9f-ef1d-4a5d-bf77-4dad4107ff37 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:22:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:22:46.019181325Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:884319081b9650108dd2a638921522ef3df145e924d566325c975ca28709af4c,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be64c744b92d1b1df463d84d8277c38069f3ce4e8e95ce84d4a6ffac6dc53710],Size_:350038335,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=e94aff9f-ef1d-4a5d-bf77-4dad4107ff37 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:22:51 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00384|connmgr|INFO|br-ex<->unix#1402: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:22:56 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00385|connmgr|INFO|br-int<->unix#2: 31 flow_mods in the last 42 s 
(15 adds, 16 deletes) Feb 23 17:23:06 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00386|connmgr|INFO|br-ex<->unix#1411: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:23:21 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00387|connmgr|INFO|br-ex<->unix#1416: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:23:36 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00388|connmgr|INFO|br-ex<->unix#1424: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:23:37 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:23:37.720775 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-ovn-kubernetes/ovnkube-upgrades-prepuller-nzb8w] Feb 23 17:23:37 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:23:37.720932 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-upgrades-prepuller-nzb8w" podUID=fb8cddd7-8398-4edb-b1cc-362df7469281 containerName="ovnkube-upgrades-prepuller" containerID="cri-o://02da5be1d43d55aee72a3fbeb2753a10211c68fbcf0c6c980cb0b5f8a21ab000" gracePeriod=30 Feb 23 17:23:37 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:23:37.721244071Z" level=info msg="Stopping container: 02da5be1d43d55aee72a3fbeb2753a10211c68fbcf0c6c980cb0b5f8a21ab000 (timeout: 30s)" id=cfa8b92f-1003-4be5-bfad-ce8a7c727bca name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:23:51 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00389|connmgr|INFO|br-ex<->unix#1429: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:23:56 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00390|connmgr|INFO|br-int<->unix#2: 13 flow_mods in the 41 s starting 59 s ago (7 adds, 6 deletes) Feb 23 17:24:06 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00391|connmgr|INFO|br-ex<->unix#1437: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:24:07 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:24:07.740500407Z" level=warning msg="Stopping container 02da5be1d43d55aee72a3fbeb2753a10211c68fbcf0c6c980cb0b5f8a21ab000 with stop signal timed out: timeout reached after 30 seconds waiting for container process to exit" 
id=cfa8b92f-1003-4be5-bfad-ce8a7c727bca name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:24:07 ip-10-0-136-68 conmon[66741]: conmon 02da5be1d43d55aee72a : container 66765 exited with status 137
Feb 23 17:24:07 ip-10-0-136-68 systemd[1]: crio-conmon-02da5be1d43d55aee72a3fbeb2753a10211c68fbcf0c6c980cb0b5f8a21ab000.scope: Succeeded.
Feb 23 17:24:07 ip-10-0-136-68 systemd[1]: crio-conmon-02da5be1d43d55aee72a3fbeb2753a10211c68fbcf0c6c980cb0b5f8a21ab000.scope: Consumed 24ms CPU time
Feb 23 17:24:07 ip-10-0-136-68 systemd[1]: crio-02da5be1d43d55aee72a3fbeb2753a10211c68fbcf0c6c980cb0b5f8a21ab000.scope: Succeeded.
Feb 23 17:24:07 ip-10-0-136-68 systemd[1]: crio-02da5be1d43d55aee72a3fbeb2753a10211c68fbcf0c6c980cb0b5f8a21ab000.scope: Consumed 28ms CPU time
Feb 23 17:24:07 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-3ce1737175600049a1e04ee40348d14cbfc83761f86db770fad29b51fb508f55-merged.mount: Succeeded.
Feb 23 17:24:07 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:24:07.911716057Z" level=info msg="Stopped container 02da5be1d43d55aee72a3fbeb2753a10211c68fbcf0c6c980cb0b5f8a21ab000: openshift-ovn-kubernetes/ovnkube-upgrades-prepuller-nzb8w/ovnkube-upgrades-prepuller" id=cfa8b92f-1003-4be5-bfad-ce8a7c727bca name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:24:07 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:24:07.912166748Z" level=info msg="Stopping pod sandbox: 8102b7a8644eb2b5fd95851f54f3da8b6162290188d768ae49fcf169d853631d" id=7bf3b11f-6a92-4a96-899d-aefcb96d5806 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:24:07 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-ed1e7d226df47dcf839d5a2b11c218b73a3760578f09d7aba7899487331e6d7e-merged.mount: Succeeded.
Feb 23 17:24:07 ip-10-0-136-68 systemd[1]: run-utsns-88ed11e4\x2dbfb9\x2d42af\x2db6fb\x2dfe7295a5e580.mount: Succeeded.
Feb 23 17:24:07 ip-10-0-136-68 systemd[1]: run-ipcns-88ed11e4\x2dbfb9\x2d42af\x2db6fb\x2dfe7295a5e580.mount: Succeeded.
Feb 23 17:24:07 ip-10-0-136-68 systemd[1]: run-netns-88ed11e4\x2dbfb9\x2d42af\x2db6fb\x2dfe7295a5e580.mount: Succeeded.
Feb 23 17:24:07 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:24:07.960845773Z" level=info msg="Stopped pod sandbox: 8102b7a8644eb2b5fd95851f54f3da8b6162290188d768ae49fcf169d853631d" id=7bf3b11f-6a92-4a96-899d-aefcb96d5806 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:24:08 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:24:08.116550 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cd5tc\" (UniqueName: \"kubernetes.io/projected/fb8cddd7-8398-4edb-b1cc-362df7469281-kube-api-access-cd5tc\") pod \"fb8cddd7-8398-4edb-b1cc-362df7469281\" (UID: \"fb8cddd7-8398-4edb-b1cc-362df7469281\") "
Feb 23 17:24:08 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:24:08.131101 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb8cddd7-8398-4edb-b1cc-362df7469281-kube-api-access-cd5tc" (OuterVolumeSpecName: "kube-api-access-cd5tc") pod "fb8cddd7-8398-4edb-b1cc-362df7469281" (UID: "fb8cddd7-8398-4edb-b1cc-362df7469281"). InnerVolumeSpecName "kube-api-access-cd5tc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 17:24:08 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:24:08.217301 2112 reconciler.go:399] "Volume detached for volume \"kube-api-access-cd5tc\" (UniqueName: \"kubernetes.io/projected/fb8cddd7-8398-4edb-b1cc-362df7469281-kube-api-access-cd5tc\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:24:08 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:24:08.318998 2112 generic.go:296] "Generic (PLEG): container finished" podID=fb8cddd7-8398-4edb-b1cc-362df7469281 containerID="02da5be1d43d55aee72a3fbeb2753a10211c68fbcf0c6c980cb0b5f8a21ab000" exitCode=137
Feb 23 17:24:08 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:24:08.319040 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-upgrades-prepuller-nzb8w" event=&{ID:fb8cddd7-8398-4edb-b1cc-362df7469281 Type:ContainerDied Data:02da5be1d43d55aee72a3fbeb2753a10211c68fbcf0c6c980cb0b5f8a21ab000}
Feb 23 17:24:08 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:24:08.319069 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-upgrades-prepuller-nzb8w" event=&{ID:fb8cddd7-8398-4edb-b1cc-362df7469281 Type:ContainerDied Data:8102b7a8644eb2b5fd95851f54f3da8b6162290188d768ae49fcf169d853631d}
Feb 23 17:24:08 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:24:08.319085 2112 scope.go:115] "RemoveContainer" containerID="02da5be1d43d55aee72a3fbeb2753a10211c68fbcf0c6c980cb0b5f8a21ab000"
Feb 23 17:24:08 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:24:08.320202802Z" level=info msg="Removing container: 02da5be1d43d55aee72a3fbeb2753a10211c68fbcf0c6c980cb0b5f8a21ab000" id=120fd65a-97b6-4f25-a872-5a3bacfb7a76 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:24:08 ip-10-0-136-68 systemd[1]: Removed slice libcontainer container kubepods-besteffort-podfb8cddd7_8398_4edb_b1cc_362df7469281.slice.
Feb 23 17:24:08 ip-10-0-136-68 systemd[1]: kubepods-besteffort-podfb8cddd7_8398_4edb_b1cc_362df7469281.slice: Consumed 53ms CPU time
Feb 23 17:24:08 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:24:08.339985227Z" level=info msg="Removed container 02da5be1d43d55aee72a3fbeb2753a10211c68fbcf0c6c980cb0b5f8a21ab000: openshift-ovn-kubernetes/ovnkube-upgrades-prepuller-nzb8w/ovnkube-upgrades-prepuller" id=120fd65a-97b6-4f25-a872-5a3bacfb7a76 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:24:08 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:24:08.340137 2112 scope.go:115] "RemoveContainer" containerID="02da5be1d43d55aee72a3fbeb2753a10211c68fbcf0c6c980cb0b5f8a21ab000"
Feb 23 17:24:08 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:24:08.340372 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02da5be1d43d55aee72a3fbeb2753a10211c68fbcf0c6c980cb0b5f8a21ab000\": container with ID starting with 02da5be1d43d55aee72a3fbeb2753a10211c68fbcf0c6c980cb0b5f8a21ab000 not found: ID does not exist" containerID="02da5be1d43d55aee72a3fbeb2753a10211c68fbcf0c6c980cb0b5f8a21ab000"
Feb 23 17:24:08 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:24:08.340401 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:02da5be1d43d55aee72a3fbeb2753a10211c68fbcf0c6c980cb0b5f8a21ab000} err="failed to get container status \"02da5be1d43d55aee72a3fbeb2753a10211c68fbcf0c6c980cb0b5f8a21ab000\": rpc error: code = NotFound desc = could not find container \"02da5be1d43d55aee72a3fbeb2753a10211c68fbcf0c6c980cb0b5f8a21ab000\": container with ID starting with 02da5be1d43d55aee72a3fbeb2753a10211c68fbcf0c6c980cb0b5f8a21ab000 not found: ID does not exist"
Feb 23 17:24:08 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:24:08.342470 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-ovn-kubernetes/ovnkube-upgrades-prepuller-nzb8w]
Feb 23 17:24:08 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:24:08.354153 2112 kubelet.go:2129] "SyncLoop REMOVE" source="api" pods=[openshift-ovn-kubernetes/ovnkube-upgrades-prepuller-nzb8w]
Feb 23 17:24:08 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-8102b7a8644eb2b5fd95851f54f3da8b6162290188d768ae49fcf169d853631d-userdata-shm.mount: Succeeded.
Feb 23 17:24:08 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-fb8cddd7\x2d8398\x2d4edb\x2db1cc\x2d362df7469281-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcd5tc.mount: Succeeded.
Feb 23 17:24:10 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:24:10.120342 2112 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=fb8cddd7-8398-4edb-b1cc-362df7469281 path="/var/lib/kubelet/pods/fb8cddd7-8398-4edb-b1cc-362df7469281/volumes"
Feb 23 17:24:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:24:18.066417 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-network-diagnostics/network-check-target-b2mxx]
Feb 23 17:24:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:24:18.066587 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-network-diagnostics/network-check-target-b2mxx" podUID=5acce570-9f3b-4dab-9fed-169a4c110f8c containerName="network-check-target-container" containerID="cri-o://204a046b518e2f02bc69f9a46fbff30b4be58969f38e2794165cb52982c844ef" gracePeriod=10
Feb 23 17:24:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:24:18.066912642Z" level=info msg="Stopping container: 204a046b518e2f02bc69f9a46fbff30b4be58969f38e2794165cb52982c844ef (timeout: 10s)" id=deb8e229-fa82-4b14-b8fb-9e135ae19cf4 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:24:18 ip-10-0-136-68 conmon[4605]: conmon 204a046b518e2f02bc69 : container 4636 exited with status 2
Feb 23 17:24:18 ip-10-0-136-68 systemd[1]: crio-204a046b518e2f02bc69f9a46fbff30b4be58969f38e2794165cb52982c844ef.scope: Succeeded.
Feb 23 17:24:18 ip-10-0-136-68 systemd[1]: crio-204a046b518e2f02bc69f9a46fbff30b4be58969f38e2794165cb52982c844ef.scope: Consumed 203ms CPU time
Feb 23 17:24:18 ip-10-0-136-68 systemd[1]: crio-conmon-204a046b518e2f02bc69f9a46fbff30b4be58969f38e2794165cb52982c844ef.scope: Succeeded.
Feb 23 17:24:18 ip-10-0-136-68 systemd[1]: crio-conmon-204a046b518e2f02bc69f9a46fbff30b4be58969f38e2794165cb52982c844ef.scope: Consumed 25ms CPU time
Feb 23 17:24:18 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-caefe423df89a98d04252eca06b65c75364a163383a37112e149da466d18a315-merged.mount: Succeeded.
Feb 23 17:24:18 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-caefe423df89a98d04252eca06b65c75364a163383a37112e149da466d18a315-merged.mount: Consumed 0 CPU time
Feb 23 17:24:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:24:18.251671827Z" level=info msg="Stopped container 204a046b518e2f02bc69f9a46fbff30b4be58969f38e2794165cb52982c844ef: openshift-network-diagnostics/network-check-target-b2mxx/network-check-target-container" id=deb8e229-fa82-4b14-b8fb-9e135ae19cf4 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:24:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:24:18.252104690Z" level=info msg="Stopping pod sandbox: 5244fda541f5dc96d55eef69f3d234dc8c9aca31b74ca58c7fc4d50105b9467b" id=e8bf848f-6f81-49aa-b6ba-fd7c8c21aba2 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:24:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:24:18.252273059Z" level=info msg="Got pod network &{Name:network-check-target-b2mxx Namespace:openshift-network-diagnostics ID:5244fda541f5dc96d55eef69f3d234dc8c9aca31b74ca58c7fc4d50105b9467b UID:5acce570-9f3b-4dab-9fed-169a4c110f8c NetNS:/var/run/netns/755b1556-e233-443e-89fd-f6504a4e73db Networks:[{Name:multus-cni-network Ifname:eth0}] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 17:24:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:24:18.252378995Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-b2mxx from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 17:24:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:24:18.339379 2112 generic.go:296] "Generic (PLEG): container finished" podID=5acce570-9f3b-4dab-9fed-169a4c110f8c containerID="204a046b518e2f02bc69f9a46fbff30b4be58969f38e2794165cb52982c844ef" exitCode=2
Feb 23 17:24:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:24:18.339420 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-b2mxx" event=&{ID:5acce570-9f3b-4dab-9fed-169a4c110f8c Type:ContainerDied Data:204a046b518e2f02bc69f9a46fbff30b4be58969f38e2794165cb52982c844ef}
Feb 23 17:24:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00392|bridge|INFO|bridge br-int: deleted interface 5244fda541f5dc9 on port 10
Feb 23 17:24:18 ip-10-0-136-68 kernel: device 5244fda541f5dc9 left promiscuous mode
Feb 23 17:24:18 ip-10-0-136-68 crio[2062]: 2023-02-23T17:24:18Z [verbose] Del: openshift-network-diagnostics:network-check-target-b2mxx:5acce570-9f3b-4dab-9fed-169a4c110f8c:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","dns":{},"ipam":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxage":5,"logfile-maxbackups":5,"logfile-maxsize":100,"name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay"}
Feb 23 17:24:18 ip-10-0-136-68 crio[2062]: I0223 17:24:18.391012 70117 ovs.go:90] Maximum command line arguments set to: 191102
Feb 23 17:24:18 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-da257f8b91d11af2dc6e09f6c9ea6aed7ca74606680cacbad072ee17a1f8283e-merged.mount: Succeeded.
Feb 23 17:24:18 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-da257f8b91d11af2dc6e09f6c9ea6aed7ca74606680cacbad072ee17a1f8283e-merged.mount: Consumed 0 CPU time
Feb 23 17:24:18 ip-10-0-136-68 systemd[1]: run-utsns-755b1556\x2de233\x2d443e\x2d89fd\x2df6504a4e73db.mount: Succeeded.
Feb 23 17:24:18 ip-10-0-136-68 systemd[1]: run-utsns-755b1556\x2de233\x2d443e\x2d89fd\x2df6504a4e73db.mount: Consumed 0 CPU time
Feb 23 17:24:18 ip-10-0-136-68 systemd[1]: run-ipcns-755b1556\x2de233\x2d443e\x2d89fd\x2df6504a4e73db.mount: Succeeded.
Feb 23 17:24:18 ip-10-0-136-68 systemd[1]: run-ipcns-755b1556\x2de233\x2d443e\x2d89fd\x2df6504a4e73db.mount: Consumed 0 CPU time
Feb 23 17:24:18 ip-10-0-136-68 systemd[1]: run-netns-755b1556\x2de233\x2d443e\x2d89fd\x2df6504a4e73db.mount: Succeeded.
Feb 23 17:24:18 ip-10-0-136-68 systemd[1]: run-netns-755b1556\x2de233\x2d443e\x2d89fd\x2df6504a4e73db.mount: Consumed 0 CPU time
Feb 23 17:24:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:24:18.969702548Z" level=info msg="Stopped pod sandbox: 5244fda541f5dc96d55eef69f3d234dc8c9aca31b74ca58c7fc4d50105b9467b" id=e8bf848f-6f81-49aa-b6ba-fd7c8c21aba2 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:24:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:24:19.093551 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nhww\" (UniqueName: \"kubernetes.io/projected/5acce570-9f3b-4dab-9fed-169a4c110f8c-kube-api-access-7nhww\") pod \"5acce570-9f3b-4dab-9fed-169a4c110f8c\" (UID: \"5acce570-9f3b-4dab-9fed-169a4c110f8c\") "
Feb 23 17:24:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:24:19.107907 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5acce570-9f3b-4dab-9fed-169a4c110f8c-kube-api-access-7nhww" (OuterVolumeSpecName: "kube-api-access-7nhww") pod "5acce570-9f3b-4dab-9fed-169a4c110f8c" (UID: "5acce570-9f3b-4dab-9fed-169a4c110f8c"). InnerVolumeSpecName "kube-api-access-7nhww". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 17:24:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:24:19.194040 2112 reconciler.go:399] "Volume detached for volume \"kube-api-access-7nhww\" (UniqueName: \"kubernetes.io/projected/5acce570-9f3b-4dab-9fed-169a4c110f8c-kube-api-access-7nhww\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:24:19 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-5244fda541f5dc96d55eef69f3d234dc8c9aca31b74ca58c7fc4d50105b9467b-userdata-shm.mount: Succeeded.
Feb 23 17:24:19 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-5244fda541f5dc96d55eef69f3d234dc8c9aca31b74ca58c7fc4d50105b9467b-userdata-shm.mount: Consumed 0 CPU time
Feb 23 17:24:19 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-5acce570\x2d9f3b\x2d4dab\x2d9fed\x2d169a4c110f8c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7nhww.mount: Succeeded.
Feb 23 17:24:19 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-5acce570\x2d9f3b\x2d4dab\x2d9fed\x2d169a4c110f8c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7nhww.mount: Consumed 0 CPU time
Feb 23 17:24:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:24:19.342190 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-b2mxx" event=&{ID:5acce570-9f3b-4dab-9fed-169a4c110f8c Type:ContainerDied Data:5244fda541f5dc96d55eef69f3d234dc8c9aca31b74ca58c7fc4d50105b9467b}
Feb 23 17:24:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:24:19.342235 2112 scope.go:115] "RemoveContainer" containerID="204a046b518e2f02bc69f9a46fbff30b4be58969f38e2794165cb52982c844ef"
Feb 23 17:24:19 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:24:19.343163524Z" level=info msg="Removing container: 204a046b518e2f02bc69f9a46fbff30b4be58969f38e2794165cb52982c844ef" id=cbbbff6a-765b-4053-b9e1-77da7ff1127a name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:24:19 ip-10-0-136-68 systemd[1]: Removed slice libcontainer container kubepods-burstable-pod5acce570_9f3b_4dab_9fed_169a4c110f8c.slice.
Feb 23 17:24:19 ip-10-0-136-68 systemd[1]: kubepods-burstable-pod5acce570_9f3b_4dab_9fed_169a4c110f8c.slice: Consumed 228ms CPU time
Feb 23 17:24:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:24:19.367370 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-network-diagnostics/network-check-target-b2mxx]
Feb 23 17:24:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:24:19.370598 2112 kubelet.go:2129] "SyncLoop REMOVE" source="api" pods=[openshift-network-diagnostics/network-check-target-b2mxx]
Feb 23 17:24:19 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:24:19.377110633Z" level=info msg="Removed container 204a046b518e2f02bc69f9a46fbff30b4be58969f38e2794165cb52982c844ef: openshift-network-diagnostics/network-check-target-b2mxx/network-check-target-container" id=cbbbff6a-765b-4053-b9e1-77da7ff1127a name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:24:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:24:19.396960 2112 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-network-diagnostics/network-check-target-52ltr]
Feb 23 17:24:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:24:19.397003 2112 topology_manager.go:205] "Topology Admit Handler"
Feb 23 17:24:19 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:24:19.397079 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fb8cddd7-8398-4edb-b1cc-362df7469281" containerName="ovnkube-upgrades-prepuller"
Feb 23 17:24:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:24:19.397091 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb8cddd7-8398-4edb-b1cc-362df7469281" containerName="ovnkube-upgrades-prepuller"
Feb 23 17:24:19 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:24:19.397103 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5acce570-9f3b-4dab-9fed-169a4c110f8c" containerName="network-check-target-container"
Feb 23 17:24:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:24:19.397113 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="5acce570-9f3b-4dab-9fed-169a4c110f8c" containerName="network-check-target-container"
Feb 23 17:24:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:24:19.397163 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="fb8cddd7-8398-4edb-b1cc-362df7469281" containerName="ovnkube-upgrades-prepuller"
Feb 23 17:24:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:24:19.397175 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="5acce570-9f3b-4dab-9fed-169a4c110f8c" containerName="network-check-target-container"
Feb 23 17:24:19 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-podadcfa5f5_1c6b_415e_8e69_b72e137820e1.slice.
Feb 23 17:24:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:24:19.413990 2112 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-network-diagnostics/network-check-target-52ltr]
Feb 23 17:24:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:24:19.496981 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf689\" (UniqueName: \"kubernetes.io/projected/adcfa5f5-1c6b-415e-8e69-b72e137820e1-kube-api-access-kf689\") pod \"network-check-target-52ltr\" (UID: \"adcfa5f5-1c6b-415e-8e69-b72e137820e1\") " pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 17:24:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:24:19.597468 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-kf689\" (UniqueName: \"kubernetes.io/projected/adcfa5f5-1c6b-415e-8e69-b72e137820e1-kube-api-access-kf689\") pod \"network-check-target-52ltr\" (UID: \"adcfa5f5-1c6b-415e-8e69-b72e137820e1\") " pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 17:24:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:24:19.614306 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf689\" (UniqueName: \"kubernetes.io/projected/adcfa5f5-1c6b-415e-8e69-b72e137820e1-kube-api-access-kf689\") pod \"network-check-target-52ltr\" (UID: \"adcfa5f5-1c6b-415e-8e69-b72e137820e1\") " pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 17:24:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:24:19.712250 2112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 17:24:19 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:24:19.712790485Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=90b0a16e-c35c-46a0-8a70-f5835bcbe94b name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:24:19 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:24:19.712842196Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:24:19 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:24:19.732265271Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:aa2f6c1cfe2015e661750ad0d84d67c8d8f79e105601e0a761db6a83ba33b3f1 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/f547dd21-4ba2-4f6b-bdf0-89cefc13a119 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 17:24:19 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:24:19.732294207Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 17:24:19 ip-10-0-136-68 systemd-udevd[70191]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 23 17:24:19 ip-10-0-136-68 systemd-udevd[70191]: Could not generate persistent MAC address for aa2f6c1cfe2015e: No such file or directory
Feb 23 17:24:19 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): aa2f6c1cfe2015e: link is not ready
Feb 23 17:24:19 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
Feb 23 17:24:19 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 23 17:24:19 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): aa2f6c1cfe2015e: link becomes ready
Feb 23 17:24:19 ip-10-0-136-68 NetworkManager[1147]: [1677173059.8932] device (aa2f6c1cfe2015e): carrier: link connected
Feb 23 17:24:19 ip-10-0-136-68 NetworkManager[1147]: [1677173059.8935] manager: (aa2f6c1cfe2015e): new Veth device (/org/freedesktop/NetworkManager/Devices/76)
Feb 23 17:24:19 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00393|bridge|INFO|bridge br-int: added interface aa2f6c1cfe2015e on port 33
Feb 23 17:24:19 ip-10-0-136-68 NetworkManager[1147]: [1677173059.9198] manager: (aa2f6c1cfe2015e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/77)
Feb 23 17:24:19 ip-10-0-136-68 kernel: device aa2f6c1cfe2015e entered promiscuous mode
Feb 23 17:24:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:24:19.984844 2112 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-network-diagnostics/network-check-target-52ltr]
Feb 23 17:24:19 ip-10-0-136-68 crio[2062]: I0223 17:24:19.869019 70174 ovs.go:90] Maximum command line arguments set to: 191102
Feb 23 17:24:19 ip-10-0-136-68 crio[2062]: 2023-02-23T17:24:19Z [verbose] Add: openshift-network-diagnostics:network-check-target-52ltr:adcfa5f5-1c6b-415e-8e69-b72e137820e1:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"aa2f6c1cfe2015e","mac":"de:f0:a9:5d:9e:49"},{"name":"eth0","mac":"0a:58:0a:81:02:23","sandbox":"/var/run/netns/f547dd21-4ba2-4f6b-bdf0-89cefc13a119"}],"ips":[{"version":"4","interface":1,"address":"10.129.2.35/23","gateway":"10.129.2.1"}],"dns":{}}
Feb 23 17:24:19 ip-10-0-136-68 crio[2062]: I0223 17:24:19.965582 70159 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-network-diagnostics", Name:"network-check-target-52ltr", UID:"adcfa5f5-1c6b-415e-8e69-b72e137820e1", APIVersion:"v1", ResourceVersion:"77317", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.129.2.35/23] from ovn-kubernetes
Feb 23 17:24:19 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:24:19.986177185Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:aa2f6c1cfe2015e661750ad0d84d67c8d8f79e105601e0a761db6a83ba33b3f1 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/f547dd21-4ba2-4f6b-bdf0-89cefc13a119 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 17:24:19 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:24:19.986312669Z" level=info msg="Checking pod openshift-network-diagnostics_network-check-target-52ltr for CNI network multus-cni-network (type=multus)"
Feb 23 17:24:19 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:24:19.988725 2112 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadcfa5f5_1c6b_415e_8e69_b72e137820e1.slice/crio-aa2f6c1cfe2015e661750ad0d84d67c8d8f79e105601e0a761db6a83ba33b3f1.scope WatchSource:0}: Error finding container aa2f6c1cfe2015e661750ad0d84d67c8d8f79e105601e0a761db6a83ba33b3f1: Status 404 returned error can't find the container with id aa2f6c1cfe2015e661750ad0d84d67c8d8f79e105601e0a761db6a83ba33b3f1
Feb 23 17:24:19 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:24:19.990215748Z" level=info msg="Ran pod sandbox aa2f6c1cfe2015e661750ad0d84d67c8d8f79e105601e0a761db6a83ba33b3f1 with infra container: openshift-network-diagnostics/network-check-target-52ltr/POD" id=90b0a16e-c35c-46a0-8a70-f5835bcbe94b name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:24:19 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:24:19.991010647Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:994b8d0cadd7c1d9675d4941a7e46e070aaeaa00b937a25905957c869e7bd22d" id=db956ac8-7fff-4496-9e09-3ccc7dd40046 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:24:19 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:24:19.991179215Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:3ce3915354bf7a03f692da38f95e1d2a6d3074e488a32c3b60aaee01fadc2994,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:994b8d0cadd7c1d9675d4941a7e46e070aaeaa00b937a25905957c869e7bd22d],Size_:516222840,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=db956ac8-7fff-4496-9e09-3ccc7dd40046 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:24:19 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:24:19.991728395Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:994b8d0cadd7c1d9675d4941a7e46e070aaeaa00b937a25905957c869e7bd22d" id=21cfa301-181f-4dfa-b30a-3a1400fed18b name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:24:19 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:24:19.991875143Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:3ce3915354bf7a03f692da38f95e1d2a6d3074e488a32c3b60aaee01fadc2994,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:994b8d0cadd7c1d9675d4941a7e46e070aaeaa00b937a25905957c869e7bd22d],Size_:516222840,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=21cfa301-181f-4dfa-b30a-3a1400fed18b name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:24:19 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:24:19.992404965Z" level=info msg="Creating container: openshift-network-diagnostics/network-check-target-52ltr/network-check-target-container" id=26ed7b8e-07a2-44a4-9738-8810b6201f25 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:24:19 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:24:19.992485577Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:24:20 ip-10-0-136-68 systemd[1]: Started crio-conmon-a64d1297fb3fce5c4667c2505d997293d31ba7182298bad8c7ecdca959d438ce.scope.
Feb 23 17:24:20 ip-10-0-136-68 systemd[1]: Started libcontainer container a64d1297fb3fce5c4667c2505d997293d31ba7182298bad8c7ecdca959d438ce.
Feb 23 17:24:20 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:24:20.121209 2112 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=5acce570-9f3b-4dab-9fed-169a4c110f8c path="/var/lib/kubelet/pods/5acce570-9f3b-4dab-9fed-169a4c110f8c/volumes"
Feb 23 17:24:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:24:20.133970728Z" level=info msg="Created container a64d1297fb3fce5c4667c2505d997293d31ba7182298bad8c7ecdca959d438ce: openshift-network-diagnostics/network-check-target-52ltr/network-check-target-container" id=26ed7b8e-07a2-44a4-9738-8810b6201f25 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:24:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:24:20.134249892Z" level=info msg="Starting container: a64d1297fb3fce5c4667c2505d997293d31ba7182298bad8c7ecdca959d438ce" id=30121998-02f4-4e26-b71a-b529934e97bd name=/runtime.v1.RuntimeService/StartContainer
Feb 23 17:24:20 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:24:20.141554603Z" level=info msg="Started container" PID=70222 containerID=a64d1297fb3fce5c4667c2505d997293d31ba7182298bad8c7ecdca959d438ce description=openshift-network-diagnostics/network-check-target-52ltr/network-check-target-container id=30121998-02f4-4e26-b71a-b529934e97bd name=/runtime.v1.RuntimeService/StartContainer sandboxID=aa2f6c1cfe2015e661750ad0d84d67c8d8f79e105601e0a761db6a83ba33b3f1
Feb 23 17:24:20 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:24:20.345740 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-52ltr" event=&{ID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 Type:ContainerStarted Data:a64d1297fb3fce5c4667c2505d997293d31ba7182298bad8c7ecdca959d438ce}
Feb 23 17:24:20 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:24:20.345767 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-52ltr" event=&{ID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 Type:ContainerStarted Data:aa2f6c1cfe2015e661750ad0d84d67c8d8f79e105601e0a761db6a83ba33b3f1}
Feb 23 17:24:20 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:24:20.346288 2112 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 17:24:21 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00394|connmgr|INFO|br-ex<->unix#1442: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:24:36 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00395|connmgr|INFO|br-ex<->unix#1450: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:24:51 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00396|connmgr|INFO|br-ex<->unix#1455: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:24:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:24:52.368679668Z" level=info msg="Stopping pod sandbox: 5244fda541f5dc96d55eef69f3d234dc8c9aca31b74ca58c7fc4d50105b9467b" id=9ad2c96b-e038-4169-9e3f-24563c37acae name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:24:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:24:52.368730037Z" level=info msg="Stopped pod sandbox (already stopped): 5244fda541f5dc96d55eef69f3d234dc8c9aca31b74ca58c7fc4d50105b9467b" id=9ad2c96b-e038-4169-9e3f-24563c37acae name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:24:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:24:52.368926504Z" level=info msg="Removing pod sandbox: 5244fda541f5dc96d55eef69f3d234dc8c9aca31b74ca58c7fc4d50105b9467b" id=62c4ee61-533d-4a1d-a515-8495a3cb4aaf name=/runtime.v1.RuntimeService/RemovePodSandbox
Feb 23 17:24:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:24:52.377011314Z" level=info msg="Removed pod sandbox: 5244fda541f5dc96d55eef69f3d234dc8c9aca31b74ca58c7fc4d50105b9467b" id=62c4ee61-533d-4a1d-a515-8495a3cb4aaf name=/runtime.v1.RuntimeService/RemovePodSandbox
Feb 23 17:24:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:24:52.377253926Z" level=info msg="Stopping pod sandbox: 8102b7a8644eb2b5fd95851f54f3da8b6162290188d768ae49fcf169d853631d" id=4a78f85e-c08a-4db2-9656-b59d6971d619 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:24:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:24:52.377283221Z" level=info msg="Stopped pod sandbox (already stopped): 8102b7a8644eb2b5fd95851f54f3da8b6162290188d768ae49fcf169d853631d" id=4a78f85e-c08a-4db2-9656-b59d6971d619 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:24:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:24:52.377456139Z" level=info msg="Removing pod sandbox: 8102b7a8644eb2b5fd95851f54f3da8b6162290188d768ae49fcf169d853631d" id=773d1ae3-e6ad-49ba-bf7f-45ed98a7f02f name=/runtime.v1.RuntimeService/RemovePodSandbox
Feb 23 17:24:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:24:52.385373553Z" level=info msg="Removed pod sandbox: 8102b7a8644eb2b5fd95851f54f3da8b6162290188d768ae49fcf169d853631d" id=773d1ae3-e6ad-49ba-bf7f-45ed98a7f02f name=/runtime.v1.RuntimeService/RemovePodSandbox
Feb 23 17:24:52 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:24:52.386395 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01dde28329bb6eb1962baf04ecf2191809aca8308fa489e787809cca31606ca9\": container with ID starting with 01dde28329bb6eb1962baf04ecf2191809aca8308fa489e787809cca31606ca9 not found: ID does not exist" containerID="01dde28329bb6eb1962baf04ecf2191809aca8308fa489e787809cca31606ca9"
Feb 23 17:24:52 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:24:52.386430 2112 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="01dde28329bb6eb1962baf04ecf2191809aca8308fa489e787809cca31606ca9" err="rpc error: code = NotFound desc = could not find container \"01dde28329bb6eb1962baf04ecf2191809aca8308fa489e787809cca31606ca9\": container with ID starting with 01dde28329bb6eb1962baf04ecf2191809aca8308fa489e787809cca31606ca9 not found: ID does not exist"
Feb 23 17:24:56 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00397|connmgr|INFO|br-int<->unix#2: 78 flow_mods in the 1 s starting 38 s ago (39 adds, 39 deletes)
Feb 23 17:24:59 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:24:59.714452 2112 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 17:25:06 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00398|connmgr|INFO|br-ex<->unix#1463: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:25:21 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00399|connmgr|INFO|br-ex<->unix#1468: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:25:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:27.866564 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-ovn-kubernetes/ovnkube-node-qc5bl]
Feb 23 17:25:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:27.866789 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" podUID=409b8d00-553f-43cb-8805-64a5931be933 containerName="ovn-controller" containerID="cri-o://893866b40cc3c17e71befbe72d3e1a19f46dadde86166f57cad08fdb7de61a6b" gracePeriod=30
Feb 23 17:25:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:27.866903 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" podUID=409b8d00-553f-43cb-8805-64a5931be933 containerName="kube-rbac-proxy" containerID="cri-o://db2e36c169d560d488d04a379443d5c86c4c536baf0cebdd3a36451256ca8542" gracePeriod=30
Feb 23 17:25:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:27.866949 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" podUID=409b8d00-553f-43cb-8805-64a5931be933 containerName="ovn-acl-logging" containerID="cri-o://99bba49d493087c09db57c6a3fb9a2977323d726bf65ce6fa2eaaa146def64be" gracePeriod=30
Feb 23 17:25:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:27.866796 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" podUID=409b8d00-553f-43cb-8805-64a5931be933 containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://24b38c855efe345ede926eabafa17cb35b1055503b9551a485b22cca590c5cc1" gracePeriod=30
Feb 23 17:25:27 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:27.867429270Z" level=info msg="Stopping container: db2e36c169d560d488d04a379443d5c86c4c536baf0cebdd3a36451256ca8542 (timeout: 30s)" id=7e505f9d-b18d-47dd-bc47-c2acf07853d9 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:25:27 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:27.867471735Z" level=info msg="Stopping container: 99bba49d493087c09db57c6a3fb9a2977323d726bf65ce6fa2eaaa146def64be (timeout: 30s)" id=692bc5dc-b9d6-42c1-bf88-c191030098ac name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:25:27 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:27.867501161Z" level=info msg="Stopping container: 24b38c855efe345ede926eabafa17cb35b1055503b9551a485b22cca590c5cc1 (timeout: 30s)" id=15dd2376-40b3-4eff-bed6-1e8672fb1a22 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:25:27 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:27.867485331Z" level=info msg="Stopping container: 893866b40cc3c17e71befbe72d3e1a19f46dadde86166f57cad08fdb7de61a6b (timeout: 30s)" id=43abcd34-2f1f-4389-a154-a22b9c9d5fa7 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:25:27 ip-10-0-136-68 conmon[2431]: conmon 99bba49d493087c09db5 : container 2450 exited with status 143
Feb 23 17:25:27 ip-10-0-136-68 conmon[2431]: conmon 99bba49d493087c09db5 : stdio_input read failed Resource temporarily unavailable
Feb 23 17:25:27 ip-10-0-136-68 conmon[2431]: conmon 99bba49d493087c09db5 : stdio_input read failed Resource temporarily unavailable
Feb 23 17:25:27 ip-10-0-136-68 systemd[1]: crio-conmon-99bba49d493087c09db57c6a3fb9a2977323d726bf65ce6fa2eaaa146def64be.scope: Succeeded.
Feb 23 17:25:27 ip-10-0-136-68 systemd[1]: crio-conmon-99bba49d493087c09db57c6a3fb9a2977323d726bf65ce6fa2eaaa146def64be.scope: Consumed 24ms CPU time
Feb 23 17:25:27 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00400|connmgr|INFO|br-int<->unix#2: 5 flow_mods 28 s ago (3 adds, 2 deletes)
Feb 23 17:25:27 ip-10-0-136-68 conmon[2253]: conmon 893866b40cc3c17e71be : container 2301 exited with status 143
Feb 23 17:25:27 ip-10-0-136-68 systemd[1]: crio-conmon-893866b40cc3c17e71befbe72d3e1a19f46dadde86166f57cad08fdb7de61a6b.scope: Succeeded.
Feb 23 17:25:27 ip-10-0-136-68 systemd[1]: crio-conmon-893866b40cc3c17e71befbe72d3e1a19f46dadde86166f57cad08fdb7de61a6b.scope: Consumed 36ms CPU time
Feb 23 17:25:27 ip-10-0-136-68 systemd[1]: crio-893866b40cc3c17e71befbe72d3e1a19f46dadde86166f57cad08fdb7de61a6b.scope: Succeeded.
Feb 23 17:25:27 ip-10-0-136-68 systemd[1]: crio-893866b40cc3c17e71befbe72d3e1a19f46dadde86166f57cad08fdb7de61a6b.scope: Consumed 2.725s CPU time
Feb 23 17:25:27 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:27.911967 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" podUID=409b8d00-553f-43cb-8805-64a5931be933 containerName="ovnkube-node" containerID="cri-o://97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923" gracePeriod=30
Feb 23 17:25:27 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:27.912158778Z" level=info msg="Stopping container: 97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923 (timeout: 30s)" id=ae74dd3c-4374-4d88-9360-cf1b627745a8 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:25:27 ip-10-0-136-68 systemd[1]: crio-97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923.scope: Succeeded.
Feb 23 17:25:27 ip-10-0-136-68 systemd[1]: crio-97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923.scope: Consumed 45.587s CPU time
Feb 23 17:25:27 ip-10-0-136-68 systemd[1]: crio-conmon-97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923.scope: Succeeded.
Feb 23 17:25:27 ip-10-0-136-68 systemd[1]: crio-conmon-97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923.scope: Consumed 49ms CPU time
Feb 23 17:25:28 ip-10-0-136-68 systemd[1]: crio-99bba49d493087c09db57c6a3fb9a2977323d726bf65ce6fa2eaaa146def64be.scope: Succeeded.
Feb 23 17:25:28 ip-10-0-136-68 systemd[1]: crio-99bba49d493087c09db57c6a3fb9a2977323d726bf65ce6fa2eaaa146def64be.scope: Consumed 797ms CPU time
Feb 23 17:25:28 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-0256284f0609342bbcd63ef77c3aede0be95de24b74edd32c84aec52ed15ae23-merged.mount: Succeeded.
Feb 23 17:25:28 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-0256284f0609342bbcd63ef77c3aede0be95de24b74edd32c84aec52ed15ae23-merged.mount: Consumed 0 CPU time
Feb 23 17:25:28 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:28.046868447Z" level=info msg="Stopped container 99bba49d493087c09db57c6a3fb9a2977323d726bf65ce6fa2eaaa146def64be: openshift-ovn-kubernetes/ovnkube-node-qc5bl/ovn-acl-logging" id=692bc5dc-b9d6-42c1-bf88-c191030098ac name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:25:28 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-71eda5337bd3ef8d477b68b663064bb6fd9b72798e2404c526524054622a7239-merged.mount: Succeeded.
Feb 23 17:25:28 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-71eda5337bd3ef8d477b68b663064bb6fd9b72798e2404c526524054622a7239-merged.mount: Consumed 0 CPU time
Feb 23 17:25:28 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:28.064617912Z" level=info msg="Stopped container 893866b40cc3c17e71befbe72d3e1a19f46dadde86166f57cad08fdb7de61a6b: openshift-ovn-kubernetes/ovnkube-node-qc5bl/ovn-controller" id=43abcd34-2f1f-4389-a154-a22b9c9d5fa7 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:25:28 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-9b3e1289601ed4dd2fe1f96496ee594a0797437b22dca1a20143771eb9b0fd4d-merged.mount: Succeeded.
Feb 23 17:25:28 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-9b3e1289601ed4dd2fe1f96496ee594a0797437b22dca1a20143771eb9b0fd4d-merged.mount: Consumed 0 CPU time
Feb 23 17:25:28 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:28.086752054Z" level=info msg="Stopped container 97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923: openshift-ovn-kubernetes/ovnkube-node-qc5bl/ovnkube-node" id=ae74dd3c-4374-4d88-9360-cf1b627745a8 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:25:28 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:28.480864 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qc5bl_409b8d00-553f-43cb-8805-64a5931be933/ovn-acl-logging/1.log"
Feb 23 17:25:28 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:28.481245 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qc5bl_409b8d00-553f-43cb-8805-64a5931be933/ovn-controller/1.log"
Feb 23 17:25:28 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:28.481289 2112 generic.go:296] "Generic (PLEG): container finished" podID=409b8d00-553f-43cb-8805-64a5931be933 containerID="97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923" exitCode=0
Feb 23 17:25:28 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:28.481300 2112 generic.go:296] "Generic (PLEG): container finished" podID=409b8d00-553f-43cb-8805-64a5931be933 containerID="99bba49d493087c09db57c6a3fb9a2977323d726bf65ce6fa2eaaa146def64be" exitCode=143
Feb 23 17:25:28 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:28.481308 2112 generic.go:296] "Generic (PLEG): container finished" podID=409b8d00-553f-43cb-8805-64a5931be933 containerID="893866b40cc3c17e71befbe72d3e1a19f46dadde86166f57cad08fdb7de61a6b" exitCode=143
Feb 23 17:25:28 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:28.481328 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" event=&{ID:409b8d00-553f-43cb-8805-64a5931be933 Type:ContainerDied Data:97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923}
Feb 23 17:25:28 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:28.481349 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" event=&{ID:409b8d00-553f-43cb-8805-64a5931be933 Type:ContainerDied Data:99bba49d493087c09db57c6a3fb9a2977323d726bf65ce6fa2eaaa146def64be}
Feb 23 17:25:28 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:28.481358 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" event=&{ID:409b8d00-553f-43cb-8805-64a5931be933 Type:ContainerDied Data:893866b40cc3c17e71befbe72d3e1a19f46dadde86166f57cad08fdb7de61a6b}
Feb 23 17:25:28 ip-10-0-136-68 systemd[1]: crio-conmon-24b38c855efe345ede926eabafa17cb35b1055503b9551a485b22cca590c5cc1.scope: Succeeded.
Feb 23 17:25:28 ip-10-0-136-68 systemd[1]: crio-conmon-24b38c855efe345ede926eabafa17cb35b1055503b9551a485b22cca590c5cc1.scope: Consumed 27ms CPU time
Feb 23 17:25:28 ip-10-0-136-68 systemd[1]: crio-24b38c855efe345ede926eabafa17cb35b1055503b9551a485b22cca590c5cc1.scope: Succeeded.
Feb 23 17:25:28 ip-10-0-136-68 systemd[1]: crio-24b38c855efe345ede926eabafa17cb35b1055503b9551a485b22cca590c5cc1.scope: Consumed 438ms CPU time
Feb 23 17:25:28 ip-10-0-136-68 systemd[1]: crio-conmon-db2e36c169d560d488d04a379443d5c86c4c536baf0cebdd3a36451256ca8542.scope: Succeeded.
Feb 23 17:25:28 ip-10-0-136-68 systemd[1]: crio-conmon-db2e36c169d560d488d04a379443d5c86c4c536baf0cebdd3a36451256ca8542.scope: Consumed 26ms CPU time
Feb 23 17:25:28 ip-10-0-136-68 systemd[1]: crio-db2e36c169d560d488d04a379443d5c86c4c536baf0cebdd3a36451256ca8542.scope: Succeeded.
Feb 23 17:25:28 ip-10-0-136-68 systemd[1]: crio-db2e36c169d560d488d04a379443d5c86c4c536baf0cebdd3a36451256ca8542.scope: Consumed 421ms CPU time
Feb 23 17:25:29 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-f9a0e2940b0c3d333ebe53ebd1bed9b582f29d93de3b1e290032f94390b6ec81-merged.mount: Succeeded.
Feb 23 17:25:29 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-f9a0e2940b0c3d333ebe53ebd1bed9b582f29d93de3b1e290032f94390b6ec81-merged.mount: Consumed 0 CPU time
Feb 23 17:25:29 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:29.050865721Z" level=info msg="Stopped container db2e36c169d560d488d04a379443d5c86c4c536baf0cebdd3a36451256ca8542: openshift-ovn-kubernetes/ovnkube-node-qc5bl/kube-rbac-proxy" id=7e505f9d-b18d-47dd-bc47-c2acf07853d9 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:25:29 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-f17fe08fa0dee7d6c58125d0e8a76eb1bb84a2efb344cd6669777e14b62b58b8-merged.mount: Succeeded.
Feb 23 17:25:29 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-f17fe08fa0dee7d6c58125d0e8a76eb1bb84a2efb344cd6669777e14b62b58b8-merged.mount: Consumed 0 CPU time
Feb 23 17:25:29 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:29.080770827Z" level=info msg="Stopped container 24b38c855efe345ede926eabafa17cb35b1055503b9551a485b22cca590c5cc1: openshift-ovn-kubernetes/ovnkube-node-qc5bl/kube-rbac-proxy-ovn-metrics" id=15dd2376-40b3-4eff-bed6-1e8672fb1a22 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:25:29 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:29.081102389Z" level=info msg="Stopping pod sandbox: 15c8174dfae0ee71062752687c2084097268b2d58484a203c43cd1452f6b8586" id=6417412c-52d6-4def-928c-e3571bc1a1b6 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:25:29 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-731606ad73ccbe535edb089e86072c2adad9411169e33c0d023bfb8cdfc3ef61-merged.mount: Succeeded.
Feb 23 17:25:29 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-731606ad73ccbe535edb089e86072c2adad9411169e33c0d023bfb8cdfc3ef61-merged.mount: Consumed 0 CPU time
Feb 23 17:25:29 ip-10-0-136-68 systemd[1]: run-utsns-c92dd6cc\x2d1f02\x2d41de\x2dbebd\x2d67711ead5b4e.mount: Succeeded.
Feb 23 17:25:29 ip-10-0-136-68 systemd[1]: run-utsns-c92dd6cc\x2d1f02\x2d41de\x2dbebd\x2d67711ead5b4e.mount: Consumed 0 CPU time
Feb 23 17:25:29 ip-10-0-136-68 systemd[1]: run-ipcns-c92dd6cc\x2d1f02\x2d41de\x2dbebd\x2d67711ead5b4e.mount: Succeeded.
Feb 23 17:25:29 ip-10-0-136-68 systemd[1]: run-ipcns-c92dd6cc\x2d1f02\x2d41de\x2dbebd\x2d67711ead5b4e.mount: Consumed 0 CPU time
Feb 23 17:25:29 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:29.172731023Z" level=info msg="Stopped pod sandbox: 15c8174dfae0ee71062752687c2084097268b2d58484a203c43cd1452f6b8586" id=6417412c-52d6-4def-928c-e3571bc1a1b6 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.179123 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qc5bl_409b8d00-553f-43cb-8805-64a5931be933/ovn-acl-logging/1.log"
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.179496 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qc5bl_409b8d00-553f-43cb-8805-64a5931be933/ovn-controller/1.log"
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.343731 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-host-cni-netd\") pod \"409b8d00-553f-43cb-8805-64a5931be933\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") "
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.343767 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-host-slash\") pod \"409b8d00-553f-43cb-8805-64a5931be933\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") "
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.343791 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-node-log\") pod \"409b8d00-553f-43cb-8805-64a5931be933\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") "
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.343816 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-host-run-ovn-kubernetes\") pod \"409b8d00-553f-43cb-8805-64a5931be933\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") "
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.343820 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "409b8d00-553f-43cb-8805-64a5931be933" (UID: "409b8d00-553f-43cb-8805-64a5931be933"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.343830 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-host-slash" (OuterVolumeSpecName: "host-slash") pod "409b8d00-553f-43cb-8805-64a5931be933" (UID: "409b8d00-553f-43cb-8805-64a5931be933"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.343840 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-host-cni-bin\") pod \"409b8d00-553f-43cb-8805-64a5931be933\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") "
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.343854 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "409b8d00-553f-43cb-8805-64a5931be933" (UID: "409b8d00-553f-43cb-8805-64a5931be933"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.343858 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-node-log" (OuterVolumeSpecName: "node-log") pod "409b8d00-553f-43cb-8805-64a5931be933" (UID: "409b8d00-553f-43cb-8805-64a5931be933"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.343874 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-run-ovn\") pod \"409b8d00-553f-43cb-8805-64a5931be933\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") "
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.343874 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "409b8d00-553f-43cb-8805-64a5931be933" (UID: "409b8d00-553f-43cb-8805-64a5931be933"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.343890 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "409b8d00-553f-43cb-8805-64a5931be933" (UID: "409b8d00-553f-43cb-8805-64a5931be933"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.343900 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-var-lib-openvswitch\") pod \"409b8d00-553f-43cb-8805-64a5931be933\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") "
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.343913 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "409b8d00-553f-43cb-8805-64a5931be933" (UID: "409b8d00-553f-43cb-8805-64a5931be933"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.343919 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-host-var-lib-cni-networks-ovn-kubernetes\") pod \"409b8d00-553f-43cb-8805-64a5931be933\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") "
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.344000 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/409b8d00-553f-43cb-8805-64a5931be933-ovnkube-config\") pod \"409b8d00-553f-43cb-8805-64a5931be933\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") "
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.344030 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-systemd-units\") pod \"409b8d00-553f-43cb-8805-64a5931be933\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") "
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.344047 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "409b8d00-553f-43cb-8805-64a5931be933" (UID: "409b8d00-553f-43cb-8805-64a5931be933"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.343927 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "409b8d00-553f-43cb-8805-64a5931be933" (UID: "409b8d00-553f-43cb-8805-64a5931be933"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.344063 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-host-run-netns\") pod \"409b8d00-553f-43cb-8805-64a5931be933\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") "
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.344093 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/409b8d00-553f-43cb-8805-64a5931be933-ovn-node-metrics-cert\") pod \"409b8d00-553f-43cb-8805-64a5931be933\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") "
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.344120 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-run-openvswitch\") pod \"409b8d00-553f-43cb-8805-64a5931be933\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") "
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.344148 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"ovn-cert\" (UniqueName: \"kubernetes.io/secret/409b8d00-553f-43cb-8805-64a5931be933-ovn-cert\") pod \"409b8d00-553f-43cb-8805-64a5931be933\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") "
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.344177 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/409b8d00-553f-43cb-8805-64a5931be933-env-overrides\") pod \"409b8d00-553f-43cb-8805-64a5931be933\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") "
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:25:29.344184 2112 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/409b8d00-553f-43cb-8805-64a5931be933/volumes/kubernetes.io~configmap/ovnkube-config: clearQuota called, but quotas disabled
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.344206 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"ovn-ca\" (UniqueName: \"kubernetes.io/configmap/409b8d00-553f-43cb-8805-64a5931be933-ovn-ca\") pod \"409b8d00-553f-43cb-8805-64a5931be933\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") "
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.344236 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-log-socket\") pod \"409b8d00-553f-43cb-8805-64a5931be933\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") "
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.344264 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9xlt\" (UniqueName: \"kubernetes.io/projected/409b8d00-553f-43cb-8805-64a5931be933-kube-api-access-k9xlt\") pod \"409b8d00-553f-43cb-8805-64a5931be933\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") "
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.344289 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-etc-openvswitch\") pod \"409b8d00-553f-43cb-8805-64a5931be933\" (UID: \"409b8d00-553f-43cb-8805-64a5931be933\") "
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.344388 2112 reconciler.go:399] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-host-cni-netd\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.344405 2112 reconciler.go:399] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-host-slash\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.344420 2112 reconciler.go:399] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-node-log\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.344421 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/409b8d00-553f-43cb-8805-64a5931be933-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "409b8d00-553f-43cb-8805-64a5931be933" (UID: "409b8d00-553f-43cb-8805-64a5931be933"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.344437 2112 reconciler.go:399] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-host-run-ovn-kubernetes\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.344452 2112 reconciler.go:399] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-host-cni-bin\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.344468 2112 reconciler.go:399] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-run-ovn\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.344483 2112 reconciler.go:399] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-var-lib-openvswitch\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.344501 2112 reconciler.go:399] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-host-var-lib-cni-networks-ovn-kubernetes\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.344518 2112 reconciler.go:399] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-systemd-units\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.344515 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "409b8d00-553f-43cb-8805-64a5931be933" (UID: "409b8d00-553f-43cb-8805-64a5931be933"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:25:29.344492 2112 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/409b8d00-553f-43cb-8805-64a5931be933/volumes/kubernetes.io~configmap/env-overrides: clearQuota called, but quotas disabled
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.344645 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/409b8d00-553f-43cb-8805-64a5931be933-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "409b8d00-553f-43cb-8805-64a5931be933" (UID: "409b8d00-553f-43cb-8805-64a5931be933"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.344144 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "409b8d00-553f-43cb-8805-64a5931be933" (UID: "409b8d00-553f-43cb-8805-64a5931be933"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:25:29.345003 2112 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/409b8d00-553f-43cb-8805-64a5931be933/volumes/kubernetes.io~configmap/ovn-ca: clearQuota called, but quotas disabled
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.345241 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/409b8d00-553f-43cb-8805-64a5931be933-ovn-ca" (OuterVolumeSpecName: "ovn-ca") pod "409b8d00-553f-43cb-8805-64a5931be933" (UID: "409b8d00-553f-43cb-8805-64a5931be933"). InnerVolumeSpecName "ovn-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.345270 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "409b8d00-553f-43cb-8805-64a5931be933" (UID: "409b8d00-553f-43cb-8805-64a5931be933"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.345293 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-log-socket" (OuterVolumeSpecName: "log-socket") pod "409b8d00-553f-43cb-8805-64a5931be933" (UID: "409b8d00-553f-43cb-8805-64a5931be933"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.359939 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/409b8d00-553f-43cb-8805-64a5931be933-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "409b8d00-553f-43cb-8805-64a5931be933" (UID: "409b8d00-553f-43cb-8805-64a5931be933"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.361837 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/409b8d00-553f-43cb-8805-64a5931be933-kube-api-access-k9xlt" (OuterVolumeSpecName: "kube-api-access-k9xlt") pod "409b8d00-553f-43cb-8805-64a5931be933" (UID: "409b8d00-553f-43cb-8805-64a5931be933"). InnerVolumeSpecName "kube-api-access-k9xlt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.364808 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/409b8d00-553f-43cb-8805-64a5931be933-ovn-cert" (OuterVolumeSpecName: "ovn-cert") pod "409b8d00-553f-43cb-8805-64a5931be933" (UID: "409b8d00-553f-43cb-8805-64a5931be933"). InnerVolumeSpecName "ovn-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.445051 2112 reconciler.go:399] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/409b8d00-553f-43cb-8805-64a5931be933-ovn-node-metrics-cert\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.445075 2112 reconciler.go:399] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-host-run-netns\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.445084 2112 reconciler.go:399] "Volume detached for volume \"ovn-cert\" (UniqueName: \"kubernetes.io/secret/409b8d00-553f-43cb-8805-64a5931be933-ovn-cert\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.445094 2112 reconciler.go:399] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-run-openvswitch\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.445103 2112 reconciler.go:399] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/409b8d00-553f-43cb-8805-64a5931be933-env-overrides\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.445112 2112 reconciler.go:399] "Volume detached for volume \"ovn-ca\" (UniqueName: \"kubernetes.io/configmap/409b8d00-553f-43cb-8805-64a5931be933-ovn-ca\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.445121 2112 reconciler.go:399] "Volume
detached for volume \"kube-api-access-k9xlt\" (UniqueName: \"kubernetes.io/projected/409b8d00-553f-43cb-8805-64a5931be933-kube-api-access-k9xlt\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.445130 2112 reconciler.go:399] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-etc-openvswitch\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.445138 2112 reconciler.go:399] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/409b8d00-553f-43cb-8805-64a5931be933-log-socket\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.445146 2112 reconciler.go:399] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/409b8d00-553f-43cb-8805-64a5931be933-ovnkube-config\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.484713 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qc5bl_409b8d00-553f-43cb-8805-64a5931be933/ovn-acl-logging/1.log" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.485075 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qc5bl_409b8d00-553f-43cb-8805-64a5931be933/ovn-controller/1.log" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.485114 2112 generic.go:296] "Generic (PLEG): container finished" podID=409b8d00-553f-43cb-8805-64a5931be933 containerID="24b38c855efe345ede926eabafa17cb35b1055503b9551a485b22cca590c5cc1" exitCode=0 Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.485129 2112 generic.go:296] "Generic 
(PLEG): container finished" podID=409b8d00-553f-43cb-8805-64a5931be933 containerID="db2e36c169d560d488d04a379443d5c86c4c536baf0cebdd3a36451256ca8542" exitCode=0 Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.485156 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" event=&{ID:409b8d00-553f-43cb-8805-64a5931be933 Type:ContainerDied Data:24b38c855efe345ede926eabafa17cb35b1055503b9551a485b22cca590c5cc1} Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.485184 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" event=&{ID:409b8d00-553f-43cb-8805-64a5931be933 Type:ContainerDied Data:db2e36c169d560d488d04a379443d5c86c4c536baf0cebdd3a36451256ca8542} Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.485195 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc5bl" event=&{ID:409b8d00-553f-43cb-8805-64a5931be933 Type:ContainerDied Data:15c8174dfae0ee71062752687c2084097268b2d58484a203c43cd1452f6b8586} Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.485208 2112 scope.go:115] "RemoveContainer" containerID="97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923" Feb 23 17:25:29 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:29.486183745Z" level=info msg="Removing container: 97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923" id=eeb1fc09-06d2-4c6d-b1b7-a858b59cb306 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:25:29 ip-10-0-136-68 systemd[1]: Removed slice libcontainer container kubepods-burstable-pod409b8d00_553f_43cb_8805_64a5931be933.slice. 
Feb 23 17:25:29 ip-10-0-136-68 systemd[1]: kubepods-burstable-pod409b8d00_553f_43cb_8805_64a5931be933.slice: Consumed 50.134s CPU time Feb 23 17:25:29 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:29.509630794Z" level=info msg="Removed container 97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923: openshift-ovn-kubernetes/ovnkube-node-qc5bl/ovnkube-node" id=eeb1fc09-06d2-4c6d-b1b7-a858b59cb306 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.509873 2112 scope.go:115] "RemoveContainer" containerID="24b38c855efe345ede926eabafa17cb35b1055503b9551a485b22cca590c5cc1" Feb 23 17:25:29 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:29.511017184Z" level=info msg="Removing container: 24b38c855efe345ede926eabafa17cb35b1055503b9551a485b22cca590c5cc1" id=2f0b0abe-cf7b-4495-9c4a-d06f1aa313f4 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.512137 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-ovn-kubernetes/ovnkube-node-qc5bl] Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.519281 2112 kubelet.go:2129] "SyncLoop REMOVE" source="api" pods=[openshift-ovn-kubernetes/ovnkube-node-qc5bl] Feb 23 17:25:29 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:29.528982833Z" level=info msg="Removed container 24b38c855efe345ede926eabafa17cb35b1055503b9551a485b22cca590c5cc1: openshift-ovn-kubernetes/ovnkube-node-qc5bl/kube-rbac-proxy-ovn-metrics" id=2f0b0abe-cf7b-4495-9c4a-d06f1aa313f4 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.529121 2112 scope.go:115] "RemoveContainer" containerID="db2e36c169d560d488d04a379443d5c86c4c536baf0cebdd3a36451256ca8542" Feb 23 17:25:29 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:29.529692357Z" level=info msg="Removing container: db2e36c169d560d488d04a379443d5c86c4c536baf0cebdd3a36451256ca8542" 
id=7b925d6d-2a7c-4d9e-b167-e2e988cf7e1b name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:25:29 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:29.545470025Z" level=info msg="Removed container db2e36c169d560d488d04a379443d5c86c4c536baf0cebdd3a36451256ca8542: openshift-ovn-kubernetes/ovnkube-node-qc5bl/kube-rbac-proxy" id=7b925d6d-2a7c-4d9e-b167-e2e988cf7e1b name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.545597 2112 scope.go:115] "RemoveContainer" containerID="99bba49d493087c09db57c6a3fb9a2977323d726bf65ce6fa2eaaa146def64be" Feb 23 17:25:29 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:29.546253291Z" level=info msg="Removing container: 99bba49d493087c09db57c6a3fb9a2977323d726bf65ce6fa2eaaa146def64be" id=bfb3e6e0-222e-49f0-97d2-b420b5b6a2b2 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:25:29 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:29.561081320Z" level=info msg="Removed container 99bba49d493087c09db57c6a3fb9a2977323d726bf65ce6fa2eaaa146def64be: openshift-ovn-kubernetes/ovnkube-node-qc5bl/ovn-acl-logging" id=bfb3e6e0-222e-49f0-97d2-b420b5b6a2b2 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.561209 2112 scope.go:115] "RemoveContainer" containerID="893866b40cc3c17e71befbe72d3e1a19f46dadde86166f57cad08fdb7de61a6b" Feb 23 17:25:29 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:29.561865336Z" level=info msg="Removing container: 893866b40cc3c17e71befbe72d3e1a19f46dadde86166f57cad08fdb7de61a6b" id=cbb847cf-cbde-4f78-873e-66511a5d72b2 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.577011 2112 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-ovn-kubernetes/ovnkube-node-gzbrl] Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.577054 2112 topology_manager.go:205] "Topology Admit Handler" Feb 23 17:25:29 
ip-10-0-136-68 kubenswrapper[2112]: E0223 17:25:29.577119 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="409b8d00-553f-43cb-8805-64a5931be933" containerName="kube-rbac-proxy" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.577131 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="409b8d00-553f-43cb-8805-64a5931be933" containerName="kube-rbac-proxy" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:25:29.577144 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="409b8d00-553f-43cb-8805-64a5931be933" containerName="ovn-acl-logging" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.577152 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="409b8d00-553f-43cb-8805-64a5931be933" containerName="ovn-acl-logging" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:25:29.577165 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="409b8d00-553f-43cb-8805-64a5931be933" containerName="ovnkube-node" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.577172 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="409b8d00-553f-43cb-8805-64a5931be933" containerName="ovnkube-node" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:25:29.577183 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="409b8d00-553f-43cb-8805-64a5931be933" containerName="kube-rbac-proxy-ovn-metrics" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.577192 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="409b8d00-553f-43cb-8805-64a5931be933" containerName="kube-rbac-proxy-ovn-metrics" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:25:29.577201 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="409b8d00-553f-43cb-8805-64a5931be933" containerName="ovn-controller" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.577209 2112 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="409b8d00-553f-43cb-8805-64a5931be933" containerName="ovn-controller" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.577262 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="409b8d00-553f-43cb-8805-64a5931be933" containerName="ovn-controller" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.577274 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="409b8d00-553f-43cb-8805-64a5931be933" containerName="kube-rbac-proxy-ovn-metrics" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.577284 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="409b8d00-553f-43cb-8805-64a5931be933" containerName="kube-rbac-proxy" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.577293 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="409b8d00-553f-43cb-8805-64a5931be933" containerName="ovnkube-node" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.577302 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="409b8d00-553f-43cb-8805-64a5931be933" containerName="ovn-acl-logging" Feb 23 17:25:29 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:29.579510556Z" level=info msg="Removed container 893866b40cc3c17e71befbe72d3e1a19f46dadde86166f57cad08fdb7de61a6b: openshift-ovn-kubernetes/ovnkube-node-qc5bl/ovn-controller" id=cbb847cf-cbde-4f78-873e-66511a5d72b2 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.579653 2112 scope.go:115] "RemoveContainer" containerID="97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:25:29.579930 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923\": container with ID 
starting with 97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923 not found: ID does not exist" containerID="97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.579959 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923} err="failed to get container status \"97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923\": rpc error: code = NotFound desc = could not find container \"97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923\": container with ID starting with 97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923 not found: ID does not exist" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.579969 2112 scope.go:115] "RemoveContainer" containerID="24b38c855efe345ede926eabafa17cb35b1055503b9551a485b22cca590c5cc1" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:25:29.580204 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24b38c855efe345ede926eabafa17cb35b1055503b9551a485b22cca590c5cc1\": container with ID starting with 24b38c855efe345ede926eabafa17cb35b1055503b9551a485b22cca590c5cc1 not found: ID does not exist" containerID="24b38c855efe345ede926eabafa17cb35b1055503b9551a485b22cca590c5cc1" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.580257 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:24b38c855efe345ede926eabafa17cb35b1055503b9551a485b22cca590c5cc1} err="failed to get container status \"24b38c855efe345ede926eabafa17cb35b1055503b9551a485b22cca590c5cc1\": rpc error: code = NotFound desc = could not find container \"24b38c855efe345ede926eabafa17cb35b1055503b9551a485b22cca590c5cc1\": container with ID starting with 
24b38c855efe345ede926eabafa17cb35b1055503b9551a485b22cca590c5cc1 not found: ID does not exist" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.580269 2112 scope.go:115] "RemoveContainer" containerID="db2e36c169d560d488d04a379443d5c86c4c536baf0cebdd3a36451256ca8542" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:25:29.580511 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db2e36c169d560d488d04a379443d5c86c4c536baf0cebdd3a36451256ca8542\": container with ID starting with db2e36c169d560d488d04a379443d5c86c4c536baf0cebdd3a36451256ca8542 not found: ID does not exist" containerID="db2e36c169d560d488d04a379443d5c86c4c536baf0cebdd3a36451256ca8542" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.580545 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:db2e36c169d560d488d04a379443d5c86c4c536baf0cebdd3a36451256ca8542} err="failed to get container status \"db2e36c169d560d488d04a379443d5c86c4c536baf0cebdd3a36451256ca8542\": rpc error: code = NotFound desc = could not find container \"db2e36c169d560d488d04a379443d5c86c4c536baf0cebdd3a36451256ca8542\": container with ID starting with db2e36c169d560d488d04a379443d5c86c4c536baf0cebdd3a36451256ca8542 not found: ID does not exist" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.580556 2112 scope.go:115] "RemoveContainer" containerID="99bba49d493087c09db57c6a3fb9a2977323d726bf65ce6fa2eaaa146def64be" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:25:29.580969 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99bba49d493087c09db57c6a3fb9a2977323d726bf65ce6fa2eaaa146def64be\": container with ID starting with 99bba49d493087c09db57c6a3fb9a2977323d726bf65ce6fa2eaaa146def64be not found: ID does not exist" 
containerID="99bba49d493087c09db57c6a3fb9a2977323d726bf65ce6fa2eaaa146def64be" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.581003 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:99bba49d493087c09db57c6a3fb9a2977323d726bf65ce6fa2eaaa146def64be} err="failed to get container status \"99bba49d493087c09db57c6a3fb9a2977323d726bf65ce6fa2eaaa146def64be\": rpc error: code = NotFound desc = could not find container \"99bba49d493087c09db57c6a3fb9a2977323d726bf65ce6fa2eaaa146def64be\": container with ID starting with 99bba49d493087c09db57c6a3fb9a2977323d726bf65ce6fa2eaaa146def64be not found: ID does not exist" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.581015 2112 scope.go:115] "RemoveContainer" containerID="893866b40cc3c17e71befbe72d3e1a19f46dadde86166f57cad08fdb7de61a6b" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:25:29.581422 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"893866b40cc3c17e71befbe72d3e1a19f46dadde86166f57cad08fdb7de61a6b\": container with ID starting with 893866b40cc3c17e71befbe72d3e1a19f46dadde86166f57cad08fdb7de61a6b not found: ID does not exist" containerID="893866b40cc3c17e71befbe72d3e1a19f46dadde86166f57cad08fdb7de61a6b" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.581456 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:893866b40cc3c17e71befbe72d3e1a19f46dadde86166f57cad08fdb7de61a6b} err="failed to get container status \"893866b40cc3c17e71befbe72d3e1a19f46dadde86166f57cad08fdb7de61a6b\": rpc error: code = NotFound desc = could not find container \"893866b40cc3c17e71befbe72d3e1a19f46dadde86166f57cad08fdb7de61a6b\": container with ID starting with 893866b40cc3c17e71befbe72d3e1a19f46dadde86166f57cad08fdb7de61a6b not found: ID does not exist" Feb 23 17:25:29 ip-10-0-136-68 
kubenswrapper[2112]: I0223 17:25:29.581469 2112 scope.go:115] "RemoveContainer" containerID="97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.581800 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923} err="failed to get container status \"97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923\": rpc error: code = NotFound desc = could not find container \"97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923\": container with ID starting with 97aeec3c29bd986b757261e9dc4cde0e62d5449a31914ff2ddb23a813a2f2923 not found: ID does not exist" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.581820 2112 scope.go:115] "RemoveContainer" containerID="24b38c855efe345ede926eabafa17cb35b1055503b9551a485b22cca590c5cc1" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.582009 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:24b38c855efe345ede926eabafa17cb35b1055503b9551a485b22cca590c5cc1} err="failed to get container status \"24b38c855efe345ede926eabafa17cb35b1055503b9551a485b22cca590c5cc1\": rpc error: code = NotFound desc = could not find container \"24b38c855efe345ede926eabafa17cb35b1055503b9551a485b22cca590c5cc1\": container with ID starting with 24b38c855efe345ede926eabafa17cb35b1055503b9551a485b22cca590c5cc1 not found: ID does not exist" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.582029 2112 scope.go:115] "RemoveContainer" containerID="db2e36c169d560d488d04a379443d5c86c4c536baf0cebdd3a36451256ca8542" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.582316 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:db2e36c169d560d488d04a379443d5c86c4c536baf0cebdd3a36451256ca8542} err="failed to get container 
status \"db2e36c169d560d488d04a379443d5c86c4c536baf0cebdd3a36451256ca8542\": rpc error: code = NotFound desc = could not find container \"db2e36c169d560d488d04a379443d5c86c4c536baf0cebdd3a36451256ca8542\": container with ID starting with db2e36c169d560d488d04a379443d5c86c4c536baf0cebdd3a36451256ca8542 not found: ID does not exist" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.582335 2112 scope.go:115] "RemoveContainer" containerID="99bba49d493087c09db57c6a3fb9a2977323d726bf65ce6fa2eaaa146def64be" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.582503 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:99bba49d493087c09db57c6a3fb9a2977323d726bf65ce6fa2eaaa146def64be} err="failed to get container status \"99bba49d493087c09db57c6a3fb9a2977323d726bf65ce6fa2eaaa146def64be\": rpc error: code = NotFound desc = could not find container \"99bba49d493087c09db57c6a3fb9a2977323d726bf65ce6fa2eaaa146def64be\": container with ID starting with 99bba49d493087c09db57c6a3fb9a2977323d726bf65ce6fa2eaaa146def64be not found: ID does not exist" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.582517 2112 scope.go:115] "RemoveContainer" containerID="893866b40cc3c17e71befbe72d3e1a19f46dadde86166f57cad08fdb7de61a6b" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.582832 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:893866b40cc3c17e71befbe72d3e1a19f46dadde86166f57cad08fdb7de61a6b} err="failed to get container status \"893866b40cc3c17e71befbe72d3e1a19f46dadde86166f57cad08fdb7de61a6b\": rpc error: code = NotFound desc = could not find container \"893866b40cc3c17e71befbe72d3e1a19f46dadde86166f57cad08fdb7de61a6b\": container with ID starting with 893866b40cc3c17e71befbe72d3e1a19f46dadde86166f57cad08fdb7de61a6b not found: ID does not exist" Feb 23 17:25:29 ip-10-0-136-68 systemd[1]: Created slice libcontainer container 
kubepods-burstable-pod7da00340_9715_48ac_b144_4705de276bf5.slice. Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.746886 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-node-log\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.746917 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-log-socket\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.746935 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-run-ovn\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.746960 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7da00340-9715-48ac-b144-4705de276bf5-ovn-node-metrics-cert\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.747008 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-host-slash\") pod \"ovnkube-node-gzbrl\" (UID: 
\"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.747035 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-etc-openvswitch\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.747158 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-run-openvswitch\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.747204 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-host-cni-bin\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.747235 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9564\" (UniqueName: \"kubernetes.io/projected/7da00340-9715-48ac-b144-4705de276bf5-kube-api-access-p9564\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.747267 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/7da00340-9715-48ac-b144-4705de276bf5-env-overrides\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.747305 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.747348 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-host-run-netns\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.747369 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7da00340-9715-48ac-b144-4705de276bf5-ovnkube-config\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.747386 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-systemd-units\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.747404 2112 reconciler.go:357] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-cert\" (UniqueName: \"kubernetes.io/secret/7da00340-9715-48ac-b144-4705de276bf5-ovn-cert\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.747428 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-host-run-ovn-kubernetes\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.747472 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-ca\" (UniqueName: \"kubernetes.io/configmap/7da00340-9715-48ac-b144-4705de276bf5-ovn-ca\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.747570 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-host-cni-netd\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.747592 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-var-lib-openvswitch\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 
17:25:29.848252 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-node-log\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.848281 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-log-socket\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.848299 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-run-ovn\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.848317 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7da00340-9715-48ac-b144-4705de276bf5-ovn-node-metrics-cert\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.848343 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-host-slash\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.848371 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume 
\"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-etc-openvswitch\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.848377 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-node-log\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.848382 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-log-socket\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.848402 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-run-openvswitch\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.848433 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-p9564\" (UniqueName: \"kubernetes.io/projected/7da00340-9715-48ac-b144-4705de276bf5-kube-api-access-p9564\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.848436 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-run-ovn\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.848462 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-host-cni-bin\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.848478 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-etc-openvswitch\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.848487 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7da00340-9715-48ac-b144-4705de276bf5-env-overrides\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.848517 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.848543 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/7da00340-9715-48ac-b144-4705de276bf5-ovnkube-config\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.848570 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-host-run-netns\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.848599 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-systemd-units\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.848627 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"ovn-cert\" (UniqueName: \"kubernetes.io/secret/7da00340-9715-48ac-b144-4705de276bf5-ovn-cert\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.848673 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-host-run-ovn-kubernetes\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.848700 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"ovn-ca\" (UniqueName: 
\"kubernetes.io/configmap/7da00340-9715-48ac-b144-4705de276bf5-ovn-ca\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.848734 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-host-cni-netd\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.848761 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-var-lib-openvswitch\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.848830 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-var-lib-openvswitch\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.848837 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-run-openvswitch\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.848868 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-host-cni-bin\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.848884 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-host-run-netns\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.848902 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-systemd-units\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.849685 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7da00340-9715-48ac-b144-4705de276bf5-env-overrides\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.849750 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.848517 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-host-slash\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.849825 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"ovn-ca\" (UniqueName: \"kubernetes.io/configmap/7da00340-9715-48ac-b144-4705de276bf5-ovn-ca\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.849884 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-host-run-ovn-kubernetes\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.849929 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-host-cni-netd\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.850162 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7da00340-9715-48ac-b144-4705de276bf5-ovnkube-config\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.850846 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/7da00340-9715-48ac-b144-4705de276bf5-ovn-node-metrics-cert\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.851233 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"ovn-cert\" (UniqueName: \"kubernetes.io/secret/7da00340-9715-48ac-b144-4705de276bf5-ovn-cert\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.865973 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9564\" (UniqueName: \"kubernetes.io/projected/7da00340-9715-48ac-b144-4705de276bf5-kube-api-access-p9564\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:29.890695 2112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:25:29 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:29.891098957Z" level=info msg="Running pod sandbox: openshift-ovn-kubernetes/ovnkube-node-gzbrl/POD" id=39d14149-b6a5-4906-b931-a091745cbd69 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:25:29 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:29.891154107Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:25:29 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:29.907484329Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=39d14149-b6a5-4906-b931-a091745cbd69 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:25:29 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:25:29.910017 2112 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7da00340_9715_48ac_b144_4705de276bf5.slice/crio-569e9fe1389cead46f8e9fec1e6752ebf408d1b8e1da886591a02dbd29bff450.scope WatchSource:0}: Error finding container 569e9fe1389cead46f8e9fec1e6752ebf408d1b8e1da886591a02dbd29bff450: Status 404 returned error can't find the container with id 569e9fe1389cead46f8e9fec1e6752ebf408d1b8e1da886591a02dbd29bff450 Feb 23 17:25:29 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:29.911705500Z" level=info msg="Ran pod sandbox 569e9fe1389cead46f8e9fec1e6752ebf408d1b8e1da886591a02dbd29bff450 with infra container: openshift-ovn-kubernetes/ovnkube-node-gzbrl/POD" id=39d14149-b6a5-4906-b931-a091745cbd69 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:25:29 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:29.912357198Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:6667c4aac31549982ef3d098b826a0ccfa3c9542350312642d0b4270b018433a" 
id=70c64d52-6314-4ab2-9262-2d7818211996 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:25:29 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:29.912523492Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:d979f2d334f2f1645227fcd91eb640e4be627e8618658519ab8194bf4b104db0,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:6667c4aac31549982ef3d098b826a0ccfa3c9542350312642d0b4270b018433a],Size_:1146370016,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=70c64d52-6314-4ab2-9262-2d7818211996 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:25:29 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:29.912996289Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:6667c4aac31549982ef3d098b826a0ccfa3c9542350312642d0b4270b018433a" id=20fff99e-6db2-477e-9cfb-5a9f35afd757 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:25:29 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:29.913136869Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:d979f2d334f2f1645227fcd91eb640e4be627e8618658519ab8194bf4b104db0,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:6667c4aac31549982ef3d098b826a0ccfa3c9542350312642d0b4270b018433a],Size_:1146370016,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=20fff99e-6db2-477e-9cfb-5a9f35afd757 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:25:29 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:29.913789367Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-node-gzbrl/ovn-controller" id=7dc52018-6fa1-4ab7-9a1d-20003d474ea4 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:25:29 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:29.913875694Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:25:29 ip-10-0-136-68 systemd[1]: Started 
crio-conmon-353206216d2c9f3023dd11c197d0fa71c1d7848c22859c8729843b0358855420.scope. Feb 23 17:25:29 ip-10-0-136-68 systemd[1]: Started libcontainer container 353206216d2c9f3023dd11c197d0fa71c1d7848c22859c8729843b0358855420. Feb 23 17:25:29 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:29.981311915Z" level=info msg="Created container 353206216d2c9f3023dd11c197d0fa71c1d7848c22859c8729843b0358855420: openshift-ovn-kubernetes/ovnkube-node-gzbrl/ovn-controller" id=7dc52018-6fa1-4ab7-9a1d-20003d474ea4 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:25:29 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:29.981696410Z" level=info msg="Starting container: 353206216d2c9f3023dd11c197d0fa71c1d7848c22859c8729843b0358855420" id=69c63414-594a-4af2-9837-b64defa4f412 name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:25:29 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:29.988447215Z" level=info msg="Started container" PID=71359 containerID=353206216d2c9f3023dd11c197d0fa71c1d7848c22859c8729843b0358855420 description=openshift-ovn-kubernetes/ovnkube-node-gzbrl/ovn-controller id=69c63414-594a-4af2-9837-b64defa4f412 name=/runtime.v1.RuntimeService/StartContainer sandboxID=569e9fe1389cead46f8e9fec1e6752ebf408d1b8e1da886591a02dbd29bff450 Feb 23 17:25:29 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:29.996529471Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:6667c4aac31549982ef3d098b826a0ccfa3c9542350312642d0b4270b018433a" id=b694ec03-f908-469c-b2e8-bfc0f1d23084 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:25:29 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:29.996714974Z" level=info msg="Image status: 
&ImageStatusResponse{Image:&Image{Id:d979f2d334f2f1645227fcd91eb640e4be627e8618658519ab8194bf4b104db0,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:6667c4aac31549982ef3d098b826a0ccfa3c9542350312642d0b4270b018433a],Size_:1146370016,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=b694ec03-f908-469c-b2e8-bfc0f1d23084 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:25:29 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:29.998242381Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:6667c4aac31549982ef3d098b826a0ccfa3c9542350312642d0b4270b018433a" id=deb3904d-2132-4f45-8251-3e17451b0807 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:25:29 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:29.998385701Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:d979f2d334f2f1645227fcd91eb640e4be627e8618658519ab8194bf4b104db0,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:6667c4aac31549982ef3d098b826a0ccfa3c9542350312642d0b4270b018433a],Size_:1146370016,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=deb3904d-2132-4f45-8251-3e17451b0807 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:25:29 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:29.999108518Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-node-gzbrl/ovn-acl-logging" id=18eb5685-fb8b-42d6-a616-09ced9ffd0ac name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:25:29 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:29.999201025Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:25:30 ip-10-0-136-68 systemd[1]: Started crio-conmon-a26ea61f8758607a5749086fdba4a993f907c5f0778f5ec24dfbd2e80679f179.scope. Feb 23 17:25:30 ip-10-0-136-68 systemd[1]: run-netns-c92dd6cc\x2d1f02\x2d41de\x2dbebd\x2d67711ead5b4e.mount: Succeeded. 
Feb 23 17:25:30 ip-10-0-136-68 systemd[1]: run-netns-c92dd6cc\x2d1f02\x2d41de\x2dbebd\x2d67711ead5b4e.mount: Consumed 0 CPU time Feb 23 17:25:30 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-15c8174dfae0ee71062752687c2084097268b2d58484a203c43cd1452f6b8586-userdata-shm.mount: Succeeded. Feb 23 17:25:30 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-15c8174dfae0ee71062752687c2084097268b2d58484a203c43cd1452f6b8586-userdata-shm.mount: Consumed 0 CPU time Feb 23 17:25:30 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-409b8d00\x2d553f\x2d43cb\x2d8805\x2d64a5931be933-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk9xlt.mount: Succeeded. Feb 23 17:25:30 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-409b8d00\x2d553f\x2d43cb\x2d8805\x2d64a5931be933-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk9xlt.mount: Consumed 0 CPU time Feb 23 17:25:30 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-409b8d00\x2d553f\x2d43cb\x2d8805\x2d64a5931be933-volumes-kubernetes.io\x7esecret-ovn\x2dnode\x2dmetrics\x2dcert.mount: Succeeded. Feb 23 17:25:30 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-409b8d00\x2d553f\x2d43cb\x2d8805\x2d64a5931be933-volumes-kubernetes.io\x7esecret-ovn\x2dnode\x2dmetrics\x2dcert.mount: Consumed 0 CPU time Feb 23 17:25:30 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-409b8d00\x2d553f\x2d43cb\x2d8805\x2d64a5931be933-volumes-kubernetes.io\x7esecret-ovn\x2dcert.mount: Succeeded. Feb 23 17:25:30 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-409b8d00\x2d553f\x2d43cb\x2d8805\x2d64a5931be933-volumes-kubernetes.io\x7esecret-ovn\x2dcert.mount: Consumed 0 CPU time Feb 23 17:25:30 ip-10-0-136-68 systemd[1]: Started libcontainer container a26ea61f8758607a5749086fdba4a993f907c5f0778f5ec24dfbd2e80679f179. 
Feb 23 17:25:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:30.120623 2112 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=409b8d00-553f-43cb-8805-64a5931be933 path="/var/lib/kubelet/pods/409b8d00-553f-43cb-8805-64a5931be933/volumes" Feb 23 17:25:30 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:30.143904200Z" level=info msg="Created container a26ea61f8758607a5749086fdba4a993f907c5f0778f5ec24dfbd2e80679f179: openshift-ovn-kubernetes/ovnkube-node-gzbrl/ovn-acl-logging" id=18eb5685-fb8b-42d6-a616-09ced9ffd0ac name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:25:30 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:30.144336120Z" level=info msg="Starting container: a26ea61f8758607a5749086fdba4a993f907c5f0778f5ec24dfbd2e80679f179" id=73f62f5e-87d8-4903-9c55-9daac8e77719 name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:25:30 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:30.151195283Z" level=info msg="Started container" PID=71401 containerID=a26ea61f8758607a5749086fdba4a993f907c5f0778f5ec24dfbd2e80679f179 description=openshift-ovn-kubernetes/ovnkube-node-gzbrl/ovn-acl-logging id=73f62f5e-87d8-4903-9c55-9daac8e77719 name=/runtime.v1.RuntimeService/StartContainer sandboxID=569e9fe1389cead46f8e9fec1e6752ebf408d1b8e1da886591a02dbd29bff450 Feb 23 17:25:30 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:30.160970071Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0" id=c4bab032-db88-4055-bd8d-b27602e27a48 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:25:30 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:30.161148311Z" level=info msg="Image status: 
&ImageStatusResponse{Image:&Image{Id:e8505ce6d56c9d2fa3ddbbc4dd2c4096db686f042ecf82eed342af6e60223854,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0],Size_:402716043,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=c4bab032-db88-4055-bd8d-b27602e27a48 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:25:30 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:30.161744416Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0" id=0b3a91d1-7fa3-446e-982e-eca09379813d name=/runtime.v1.ImageService/ImageStatus Feb 23 17:25:30 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:30.161903272Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e8505ce6d56c9d2fa3ddbbc4dd2c4096db686f042ecf82eed342af6e60223854,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0],Size_:402716043,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=0b3a91d1-7fa3-446e-982e-eca09379813d name=/runtime.v1.ImageService/ImageStatus Feb 23 17:25:30 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:30.162560425Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-node-gzbrl/kube-rbac-proxy" id=f8d35156-e963-4a77-95ee-fe22f037887b name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:25:30 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:30.162649721Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:25:30 ip-10-0-136-68 systemd[1]: Started crio-conmon-435d9e7433fbd6413891a09493dbbe7f71f622194bb9590523623dc94c45a072.scope. 
Feb 23 17:25:30 ip-10-0-136-68 systemd[1]: Started libcontainer container 435d9e7433fbd6413891a09493dbbe7f71f622194bb9590523623dc94c45a072. Feb 23 17:25:30 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:30.275344797Z" level=info msg="Created container 435d9e7433fbd6413891a09493dbbe7f71f622194bb9590523623dc94c45a072: openshift-ovn-kubernetes/ovnkube-node-gzbrl/kube-rbac-proxy" id=f8d35156-e963-4a77-95ee-fe22f037887b name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:25:30 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:30.275780167Z" level=info msg="Starting container: 435d9e7433fbd6413891a09493dbbe7f71f622194bb9590523623dc94c45a072" id=0153399c-9492-46aa-a119-d3ea7645fc9e name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:25:30 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:30.282440505Z" level=info msg="Started container" PID=71447 containerID=435d9e7433fbd6413891a09493dbbe7f71f622194bb9590523623dc94c45a072 description=openshift-ovn-kubernetes/ovnkube-node-gzbrl/kube-rbac-proxy id=0153399c-9492-46aa-a119-d3ea7645fc9e name=/runtime.v1.RuntimeService/StartContainer sandboxID=569e9fe1389cead46f8e9fec1e6752ebf408d1b8e1da886591a02dbd29bff450 Feb 23 17:25:30 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:30.290303268Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0" id=96ea07e6-c8f9-4c35-8242-9f0244359b73 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:25:30 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:30.290470492Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e8505ce6d56c9d2fa3ddbbc4dd2c4096db686f042ecf82eed342af6e60223854,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0],Size_:402716043,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" 
id=96ea07e6-c8f9-4c35-8242-9f0244359b73 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:25:30 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:30.291040934Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0" id=bf16ba01-2dfb-4999-bdfd-66bee4b43cf2 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:25:30 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:30.291219585Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e8505ce6d56c9d2fa3ddbbc4dd2c4096db686f042ecf82eed342af6e60223854,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0],Size_:402716043,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=bf16ba01-2dfb-4999-bdfd-66bee4b43cf2 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:25:30 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:30.292085997Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-node-gzbrl/kube-rbac-proxy-ovn-metrics" id=53285f4a-9c6a-4465-9f94-306c72f2a524 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:25:30 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:30.292187408Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:25:30 ip-10-0-136-68 systemd[1]: Started crio-conmon-ae697e1045bb95accb4fda4c43a3bb2271dd3e84715c0fa399e0749a41344609.scope. Feb 23 17:25:30 ip-10-0-136-68 systemd[1]: Started libcontainer container ae697e1045bb95accb4fda4c43a3bb2271dd3e84715c0fa399e0749a41344609. 
Feb 23 17:25:30 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:30.409699465Z" level=info msg="Created container ae697e1045bb95accb4fda4c43a3bb2271dd3e84715c0fa399e0749a41344609: openshift-ovn-kubernetes/ovnkube-node-gzbrl/kube-rbac-proxy-ovn-metrics" id=53285f4a-9c6a-4465-9f94-306c72f2a524 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:25:30 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:30.410069648Z" level=info msg="Starting container: ae697e1045bb95accb4fda4c43a3bb2271dd3e84715c0fa399e0749a41344609" id=158ac6f1-3ae2-4218-90de-a10ec64c9c30 name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:25:30 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:30.416592932Z" level=info msg="Started container" PID=71494 containerID=ae697e1045bb95accb4fda4c43a3bb2271dd3e84715c0fa399e0749a41344609 description=openshift-ovn-kubernetes/ovnkube-node-gzbrl/kube-rbac-proxy-ovn-metrics id=158ac6f1-3ae2-4218-90de-a10ec64c9c30 name=/runtime.v1.RuntimeService/StartContainer sandboxID=569e9fe1389cead46f8e9fec1e6752ebf408d1b8e1da886591a02dbd29bff450 Feb 23 17:25:30 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:30.424260660Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:6667c4aac31549982ef3d098b826a0ccfa3c9542350312642d0b4270b018433a" id=0dfe4db8-5aad-4856-b8de-c0f4643cf0c1 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:25:30 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:30.424448297Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:d979f2d334f2f1645227fcd91eb640e4be627e8618658519ab8194bf4b104db0,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:6667c4aac31549982ef3d098b826a0ccfa3c9542350312642d0b4270b018433a],Size_:1146370016,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=0dfe4db8-5aad-4856-b8de-c0f4643cf0c1 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:25:30 ip-10-0-136-68 crio[2062]: time="2023-02-23 
17:25:30.425047347Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:6667c4aac31549982ef3d098b826a0ccfa3c9542350312642d0b4270b018433a" id=feb68fae-36cc-4dce-84a1-ab1eb43b0a9a name=/runtime.v1.ImageService/ImageStatus Feb 23 17:25:30 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:30.425206317Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:d979f2d334f2f1645227fcd91eb640e4be627e8618658519ab8194bf4b104db0,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:6667c4aac31549982ef3d098b826a0ccfa3c9542350312642d0b4270b018433a],Size_:1146370016,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=feb68fae-36cc-4dce-84a1-ab1eb43b0a9a name=/runtime.v1.ImageService/ImageStatus Feb 23 17:25:30 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:30.426261963Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-node-gzbrl/ovnkube-node" id=6f1535a2-df32-41b8-b717-6cd562bf5864 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:25:30 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:30.426364002Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:25:30 ip-10-0-136-68 systemd[1]: Started crio-conmon-9954ce136580565944733f8e22dcf8c686edec96413c6c4c5c8c32521ab25537.scope. Feb 23 17:25:30 ip-10-0-136-68 systemd[1]: Started libcontainer container 9954ce136580565944733f8e22dcf8c686edec96413c6c4c5c8c32521ab25537. 
Feb 23 17:25:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:30.488721 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" event=&{ID:7da00340-9715-48ac-b144-4705de276bf5 Type:ContainerStarted Data:ae697e1045bb95accb4fda4c43a3bb2271dd3e84715c0fa399e0749a41344609}
Feb 23 17:25:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:30.488756 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" event=&{ID:7da00340-9715-48ac-b144-4705de276bf5 Type:ContainerStarted Data:435d9e7433fbd6413891a09493dbbe7f71f622194bb9590523623dc94c45a072}
Feb 23 17:25:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:30.488792 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" event=&{ID:7da00340-9715-48ac-b144-4705de276bf5 Type:ContainerStarted Data:a26ea61f8758607a5749086fdba4a993f907c5f0778f5ec24dfbd2e80679f179}
Feb 23 17:25:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:30.488808 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" event=&{ID:7da00340-9715-48ac-b144-4705de276bf5 Type:ContainerStarted Data:353206216d2c9f3023dd11c197d0fa71c1d7848c22859c8729843b0358855420}
Feb 23 17:25:30 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:30.488822 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" event=&{ID:7da00340-9715-48ac-b144-4705de276bf5 Type:ContainerStarted Data:569e9fe1389cead46f8e9fec1e6752ebf408d1b8e1da886591a02dbd29bff450}
Feb 23 17:25:30 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:30.508284376Z" level=info msg="Created container 9954ce136580565944733f8e22dcf8c686edec96413c6c4c5c8c32521ab25537: openshift-ovn-kubernetes/ovnkube-node-gzbrl/ovnkube-node" id=6f1535a2-df32-41b8-b717-6cd562bf5864 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:25:30 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:30.508585342Z" level=info msg="Starting container: 9954ce136580565944733f8e22dcf8c686edec96413c6c4c5c8c32521ab25537" id=2e9acbbc-bb66-4457-845c-cead09aab2ab name=/runtime.v1.RuntimeService/StartContainer
Feb 23 17:25:30 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:30.515429170Z" level=info msg="Started container" PID=71544 containerID=9954ce136580565944733f8e22dcf8c686edec96413c6c4c5c8c32521ab25537 description=openshift-ovn-kubernetes/ovnkube-node-gzbrl/ovnkube-node id=2e9acbbc-bb66-4457-845c-cead09aab2ab name=/runtime.v1.RuntimeService/StartContainer sandboxID=569e9fe1389cead46f8e9fec1e6752ebf408d1b8e1da886591a02dbd29bff450
Feb 23 17:25:30 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:30.524282813Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Feb 23 17:25:30 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:30.534191980Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 17:25:30 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:30.534211358Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 17:25:30 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:30.534222356Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Feb 23 17:25:30 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:30.542790237Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 17:25:30 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:30.542821219Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 17:25:30 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:30.542853457Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Feb 23 17:25:30 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:30.554894346Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 17:25:30 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:30.554920996Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 17:25:30 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:30.554936444Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Feb 23 17:25:30 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:30.563260586Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 17:25:30 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:30.563279323Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 17:25:30 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:30.563288956Z" level=info msg="CNI monitoring event WRITE \"/var/lib/cni/bin/ovn-k8s-cni-overlay\""
Feb 23 17:25:30 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:30.570717200Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 17:25:30 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:30.570736766Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 17:25:31 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00401|connmgr|INFO|br-int<->unix#1475: 2277 flow_mods 1 s ago (2262 adds, 1 deletes, 14 modifications)
Feb 23 17:25:31 ip-10-0-136-68 systemd[1]: crio-353206216d2c9f3023dd11c197d0fa71c1d7848c22859c8729843b0358855420.scope: Succeeded.
Feb 23 17:25:31 ip-10-0-136-68 systemd[1]: crio-353206216d2c9f3023dd11c197d0fa71c1d7848c22859c8729843b0358855420.scope: Consumed 148ms CPU time
Feb 23 17:25:31 ip-10-0-136-68 systemd[1]: crio-conmon-353206216d2c9f3023dd11c197d0fa71c1d7848c22859c8729843b0358855420.scope: Succeeded.
Feb 23 17:25:31 ip-10-0-136-68 systemd[1]: crio-conmon-353206216d2c9f3023dd11c197d0fa71c1d7848c22859c8729843b0358855420.scope: Consumed 23ms CPU time
Feb 23 17:25:31 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:31.492946 2112 generic.go:296] "Generic (PLEG): container finished" podID=7da00340-9715-48ac-b144-4705de276bf5 containerID="353206216d2c9f3023dd11c197d0fa71c1d7848c22859c8729843b0358855420" exitCode=0
Feb 23 17:25:31 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:31.492988 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" event=&{ID:7da00340-9715-48ac-b144-4705de276bf5 Type:ContainerDied Data:353206216d2c9f3023dd11c197d0fa71c1d7848c22859c8729843b0358855420}
Feb 23 17:25:31 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:31.493014 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" event=&{ID:7da00340-9715-48ac-b144-4705de276bf5 Type:ContainerStarted Data:9954ce136580565944733f8e22dcf8c686edec96413c6c4c5c8c32521ab25537}
Feb 23 17:25:31 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:31.493469 2112 scope.go:115] "RemoveContainer" containerID="353206216d2c9f3023dd11c197d0fa71c1d7848c22859c8729843b0358855420"
Feb 23 17:25:31 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:31.494834 2112 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl"
Feb 23 17:25:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:31.495175416Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:6667c4aac31549982ef3d098b826a0ccfa3c9542350312642d0b4270b018433a" id=7d651432-4fd6-4c0d-b321-44ec5eae8bdb name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:25:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:31.495380440Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:d979f2d334f2f1645227fcd91eb640e4be627e8618658519ab8194bf4b104db0,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:6667c4aac31549982ef3d098b826a0ccfa3c9542350312642d0b4270b018433a],Size_:1146370016,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=7d651432-4fd6-4c0d-b321-44ec5eae8bdb name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:25:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:31.496489954Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:6667c4aac31549982ef3d098b826a0ccfa3c9542350312642d0b4270b018433a" id=6d24f5bc-8d84-470f-9424-71047feb3551 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:25:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:31.496808007Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:d979f2d334f2f1645227fcd91eb640e4be627e8618658519ab8194bf4b104db0,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:6667c4aac31549982ef3d098b826a0ccfa3c9542350312642d0b4270b018433a],Size_:1146370016,Uid:nil,Username:root,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=6d24f5bc-8d84-470f-9424-71047feb3551 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:25:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:31.497743313Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-node-gzbrl/ovn-controller" id=ace5babb-a39f-461b-9826-7143f48f9131 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:25:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:31.497874691Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:25:31 ip-10-0-136-68 systemd[1]: Started crio-conmon-b375a0312757dbb9b6138c20865d2ed306c19be2d9ecf842e7761f0c20fe5557.scope.
Feb 23 17:25:31 ip-10-0-136-68 systemd[1]: Started libcontainer container b375a0312757dbb9b6138c20865d2ed306c19be2d9ecf842e7761f0c20fe5557.
Feb 23 17:25:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:31.600510255Z" level=info msg="Created container b375a0312757dbb9b6138c20865d2ed306c19be2d9ecf842e7761f0c20fe5557: openshift-ovn-kubernetes/ovnkube-node-gzbrl/ovn-controller" id=ace5babb-a39f-461b-9826-7143f48f9131 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:25:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:31.600970038Z" level=info msg="Starting container: b375a0312757dbb9b6138c20865d2ed306c19be2d9ecf842e7761f0c20fe5557" id=be0204cc-7607-4176-a8c4-44c6fbf0bbe0 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 17:25:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:31.609162828Z" level=info msg="Started container" PID=71818 containerID=b375a0312757dbb9b6138c20865d2ed306c19be2d9ecf842e7761f0c20fe5557 description=openshift-ovn-kubernetes/ovnkube-node-gzbrl/ovn-controller id=be0204cc-7607-4176-a8c4-44c6fbf0bbe0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=569e9fe1389cead46f8e9fec1e6752ebf408d1b8e1da886591a02dbd29bff450
Feb 23 17:25:31 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00402|connmgr|INFO|br-ex<->unix#1482: 4 flow_mods in the last 0 s (4 adds)
Feb 23 17:25:32 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:32.496514 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" event=&{ID:7da00340-9715-48ac-b144-4705de276bf5 Type:ContainerStarted Data:b375a0312757dbb9b6138c20865d2ed306c19be2d9ecf842e7761f0c20fe5557}
Feb 23 17:25:38 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:38.759013079Z" level=info msg="CNI monitoring event REMOVE \"/etc/kubernetes/cni/net.d/00-multus.conf\""
Feb 23 17:25:38 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:38.769553719Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 17:25:38 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:38.769579014Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 17:25:38 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:38.769593772Z" level=info msg="CNI monitoring event CREATE \"/etc/kubernetes/cni/net.d/00-multus.conf\""
Feb 23 17:25:38 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:38.777874572Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 17:25:38 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:38.777899285Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 17:25:38 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:38.777913117Z" level=info msg="CNI monitoring event WRITE \"/etc/kubernetes/cni/net.d/00-multus.conf\""
Feb 23 17:25:38 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:38.785476845Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 17:25:38 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:38.785497101Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 17:25:38 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:38.785510385Z" level=info msg="CNI monitoring event CHMOD \"/etc/kubernetes/cni/net.d/00-multus.conf\""
Feb 23 17:25:41 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00403|connmgr|INFO|br-int<->unix#1484: 2277 flow_mods 10 s ago (2262 adds, 1 deletes, 14 modifications)
Feb 23 17:25:46 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00404|connmgr|INFO|br-ex<->unix#1488: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:25:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:52.389282502Z" level=info msg="Stopping pod sandbox: 15c8174dfae0ee71062752687c2084097268b2d58484a203c43cd1452f6b8586" id=06439d0f-95ef-4832-b8a4-7097f86ca6f3 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:25:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:52.389323176Z" level=info msg="Stopped pod sandbox (already stopped): 15c8174dfae0ee71062752687c2084097268b2d58484a203c43cd1452f6b8586" id=06439d0f-95ef-4832-b8a4-7097f86ca6f3 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:25:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:52.389535078Z" level=info msg="Removing pod sandbox: 15c8174dfae0ee71062752687c2084097268b2d58484a203c43cd1452f6b8586" id=b671a035-7a56-464b-846a-4373e0b86d47 name=/runtime.v1.RuntimeService/RemovePodSandbox
Feb 23 17:25:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:25:52.398617896Z" level=info msg="Removed pod sandbox: 15c8174dfae0ee71062752687c2084097268b2d58484a203c43cd1452f6b8586" id=b671a035-7a56-464b-846a-4373e0b86d47 name=/runtime.v1.RuntimeService/RemovePodSandbox
Feb 23 17:25:52 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:25:52.399950 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5f7cc8140aee5330785bd4d197373455bff7c22bbd7f0840b2113148bb0f33b\": container with ID starting with c5f7cc8140aee5330785bd4d197373455bff7c22bbd7f0840b2113148bb0f33b not found: ID does not exist" containerID="c5f7cc8140aee5330785bd4d197373455bff7c22bbd7f0840b2113148bb0f33b"
Feb 23 17:25:52 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:52.399986 2112 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="c5f7cc8140aee5330785bd4d197373455bff7c22bbd7f0840b2113148bb0f33b" err="rpc error: code = NotFound desc = could not find container \"c5f7cc8140aee5330785bd4d197373455bff7c22bbd7f0840b2113148bb0f33b\": container with ID starting with c5f7cc8140aee5330785bd4d197373455bff7c22bbd7f0840b2113148bb0f33b not found: ID does not exist"
Feb 23 17:25:52 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:25:52.400232 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e7a97653d824ec5b01b77eaedebcfa9718e07078a3f1d75a5ed3092cfb1f83f\": container with ID starting with 6e7a97653d824ec5b01b77eaedebcfa9718e07078a3f1d75a5ed3092cfb1f83f not found: ID does not exist" containerID="6e7a97653d824ec5b01b77eaedebcfa9718e07078a3f1d75a5ed3092cfb1f83f"
Feb 23 17:25:52 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:52.400257 2112 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="6e7a97653d824ec5b01b77eaedebcfa9718e07078a3f1d75a5ed3092cfb1f83f" err="rpc error: code = NotFound desc = could not find container \"6e7a97653d824ec5b01b77eaedebcfa9718e07078a3f1d75a5ed3092cfb1f83f\": container with ID starting with 6e7a97653d824ec5b01b77eaedebcfa9718e07078a3f1d75a5ed3092cfb1f83f not found: ID does not exist"
Feb 23 17:25:52 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:25:52.400476 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b509d8436ded4c4e37d521123b7454d90a82f068fb8c15c885117dcd35577654\": container with ID starting with b509d8436ded4c4e37d521123b7454d90a82f068fb8c15c885117dcd35577654 not found: ID does not exist" containerID="b509d8436ded4c4e37d521123b7454d90a82f068fb8c15c885117dcd35577654"
Feb 23 17:25:52 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:52.400506 2112 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="b509d8436ded4c4e37d521123b7454d90a82f068fb8c15c885117dcd35577654" err="rpc error: code = NotFound desc = could not find container \"b509d8436ded4c4e37d521123b7454d90a82f068fb8c15c885117dcd35577654\": container with ID starting with b509d8436ded4c4e37d521123b7454d90a82f068fb8c15c885117dcd35577654 not found: ID does not exist"
Feb 23 17:25:52 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:25:52.400743 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ed25ccda440b1407317cffea802770d961748c145f541fad8b075aa9818b84d\": container with ID starting with 7ed25ccda440b1407317cffea802770d961748c145f541fad8b075aa9818b84d not found: ID does not exist" containerID="7ed25ccda440b1407317cffea802770d961748c145f541fad8b075aa9818b84d"
Feb 23 17:25:52 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:52.400763 2112 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="7ed25ccda440b1407317cffea802770d961748c145f541fad8b075aa9818b84d" err="rpc error: code = NotFound desc = could not find container \"7ed25ccda440b1407317cffea802770d961748c145f541fad8b075aa9818b84d\": container with ID starting with 7ed25ccda440b1407317cffea802770d961748c145f541fad8b075aa9818b84d not found: ID does not exist"
Feb 23 17:25:52 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:25:52.400972 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"434bc76ecbe42021fc40f1c0739badd563ca7f187e1c106f228683f5b566bff3\": container with ID starting with 434bc76ecbe42021fc40f1c0739badd563ca7f187e1c106f228683f5b566bff3 not found: ID does not exist" containerID="434bc76ecbe42021fc40f1c0739badd563ca7f187e1c106f228683f5b566bff3"
Feb 23 17:25:52 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:52.400992 2112 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="434bc76ecbe42021fc40f1c0739badd563ca7f187e1c106f228683f5b566bff3" err="rpc error: code = NotFound desc = could not find container \"434bc76ecbe42021fc40f1c0739badd563ca7f187e1c106f228683f5b566bff3\": container with ID starting with 434bc76ecbe42021fc40f1c0739badd563ca7f187e1c106f228683f5b566bff3 not found: ID does not exist"
Feb 23 17:25:59 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:25:59.931208 2112 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl"
Feb 23 17:26:01 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00405|connmgr|INFO|br-ex<->unix#1494: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:26:16 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00406|connmgr|INFO|br-ex<->unix#1501: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:26:31 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00407|connmgr|INFO|br-ex<->unix#1507: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:26:46 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00408|connmgr|INFO|br-ex<->unix#1514: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:27:01 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00409|connmgr|INFO|br-ex<->unix#1519: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:27:16 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00410|connmgr|INFO|br-ex<->unix#1527: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:27:31 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00411|connmgr|INFO|br-ex<->unix#1533: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:27:37 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00412|connmgr|INFO|br-int<->unix#1484: 96 flow_mods 10 s ago (48 adds, 48 deletes)
Feb 23 17:27:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:27:46.022483023Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be64c744b92d1b1df463d84d8277c38069f3ce4e8e95ce84d4a6ffac6dc53710" id=401d6f17-4036-4851-95ad-ea24067091bc name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:27:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:27:46.022746524Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:884319081b9650108dd2a638921522ef3df145e924d566325c975ca28709af4c,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be64c744b92d1b1df463d84d8277c38069f3ce4e8e95ce84d4a6ffac6dc53710],Size_:350038335,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=401d6f17-4036-4851-95ad-ea24067091bc name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:27:46 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00413|connmgr|INFO|br-ex<->unix#1540: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:28:01 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00414|connmgr|INFO|br-ex<->unix#1545: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:28:16 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00415|connmgr|INFO|br-ex<->unix#1553: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:28:31 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00416|connmgr|INFO|br-ex<->unix#1558: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:28:46 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00417|connmgr|INFO|br-ex<->unix#1566: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:29:01 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00418|connmgr|INFO|br-ex<->unix#1571: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:29:16 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00419|connmgr|INFO|br-ex<->unix#1579: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:29:22 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00420|connmgr|INFO|br-int<->unix#1484: 983 flow_mods 10 s ago (511 adds, 364 deletes, 108 modifications)
Feb 23 17:29:31 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00421|connmgr|INFO|br-ex<->unix#1584: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:29:46 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00422|connmgr|INFO|br-ex<->unix#1592: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:30:00 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:30:00.151616 2112 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-operator-lifecycle-manager/collect-profiles-27952890-rh4pd]
Feb 23 17:30:00 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:30:00.151672 2112 topology_manager.go:205] "Topology Admit Handler"
Feb 23 17:30:00 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:30:00.158895 2112 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-operator-lifecycle-manager/collect-profiles-27952890-rh4pd]
Feb 23 17:30:00 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-pod020f3e1e_9ac7_42d0_8b15_bf2ed04169bb.slice.
Feb 23 17:30:00 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:30:00.328138 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/020f3e1e-9ac7-42d0-8b15-bf2ed04169bb-secret-volume\") pod \"collect-profiles-27952890-rh4pd\" (UID: \"020f3e1e-9ac7-42d0-8b15-bf2ed04169bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-27952890-rh4pd"
Feb 23 17:30:00 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:30:00.328189 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr5ph\" (UniqueName: \"kubernetes.io/projected/020f3e1e-9ac7-42d0-8b15-bf2ed04169bb-kube-api-access-nr5ph\") pod \"collect-profiles-27952890-rh4pd\" (UID: \"020f3e1e-9ac7-42d0-8b15-bf2ed04169bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-27952890-rh4pd"
Feb 23 17:30:00 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:30:00.328311 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/020f3e1e-9ac7-42d0-8b15-bf2ed04169bb-config-volume\") pod \"collect-profiles-27952890-rh4pd\" (UID: \"020f3e1e-9ac7-42d0-8b15-bf2ed04169bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-27952890-rh4pd"
Feb 23 17:30:00 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:30:00.429327 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/020f3e1e-9ac7-42d0-8b15-bf2ed04169bb-config-volume\") pod \"collect-profiles-27952890-rh4pd\" (UID: \"020f3e1e-9ac7-42d0-8b15-bf2ed04169bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-27952890-rh4pd"
Feb 23 17:30:00 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:30:00.429378 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/020f3e1e-9ac7-42d0-8b15-bf2ed04169bb-secret-volume\") pod \"collect-profiles-27952890-rh4pd\" (UID: \"020f3e1e-9ac7-42d0-8b15-bf2ed04169bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-27952890-rh4pd"
Feb 23 17:30:00 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:30:00.429409 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-nr5ph\" (UniqueName: \"kubernetes.io/projected/020f3e1e-9ac7-42d0-8b15-bf2ed04169bb-kube-api-access-nr5ph\") pod \"collect-profiles-27952890-rh4pd\" (UID: \"020f3e1e-9ac7-42d0-8b15-bf2ed04169bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-27952890-rh4pd"
Feb 23 17:30:00 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:30:00.430180 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/020f3e1e-9ac7-42d0-8b15-bf2ed04169bb-config-volume\") pod \"collect-profiles-27952890-rh4pd\" (UID: \"020f3e1e-9ac7-42d0-8b15-bf2ed04169bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-27952890-rh4pd"
Feb 23 17:30:00 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:30:00.431859 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/020f3e1e-9ac7-42d0-8b15-bf2ed04169bb-secret-volume\") pod \"collect-profiles-27952890-rh4pd\" (UID: \"020f3e1e-9ac7-42d0-8b15-bf2ed04169bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-27952890-rh4pd"
Feb 23 17:30:00 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:30:00.444731 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-nr5ph\" (UniqueName: \"kubernetes.io/projected/020f3e1e-9ac7-42d0-8b15-bf2ed04169bb-kube-api-access-nr5ph\") pod \"collect-profiles-27952890-rh4pd\" (UID: \"020f3e1e-9ac7-42d0-8b15-bf2ed04169bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-27952890-rh4pd"
Feb 23 17:30:00 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:30:00.465367 2112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-27952890-rh4pd"
Feb 23 17:30:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:30:00.465799337Z" level=info msg="Running pod sandbox: openshift-operator-lifecycle-manager/collect-profiles-27952890-rh4pd/POD" id=5a1d1f75-c8e8-49df-b6a8-da9380d2552c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:30:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:30:00.465859440Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:30:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:30:00.484956939Z" level=info msg="Got pod network &{Name:collect-profiles-27952890-rh4pd Namespace:openshift-operator-lifecycle-manager ID:12b4663c669677ee5acc378c9a7d89e80575759d2391e219f2a7cc7636daae5d UID:020f3e1e-9ac7-42d0-8b15-bf2ed04169bb NetNS:/var/run/netns/8a0262d3-942d-4b6f-b838-178f2f287186 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 17:30:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:30:00.484985707Z" level=info msg="Adding pod openshift-operator-lifecycle-manager_collect-profiles-27952890-rh4pd to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 17:30:00 ip-10-0-136-68 systemd-udevd[74413]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 23 17:30:00 ip-10-0-136-68 systemd-udevd[74413]: Could not generate persistent MAC address for 12b4663c669677e: No such file or directory
Feb 23 17:30:00 ip-10-0-136-68 NetworkManager[1147]: [1677173400.6409] manager: (12b4663c669677e): new Veth device (/org/freedesktop/NetworkManager/Devices/78)
Feb 23 17:30:00 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): 12b4663c669677e: link is not ready
Feb 23 17:30:00 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
Feb 23 17:30:00 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 23 17:30:00 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): 12b4663c669677e: link becomes ready
Feb 23 17:30:00 ip-10-0-136-68 NetworkManager[1147]: [1677173400.6457] device (12b4663c669677e): carrier: link connected
Feb 23 17:30:00 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00423|bridge|INFO|bridge br-int: added interface 12b4663c669677e on port 34
Feb 23 17:30:00 ip-10-0-136-68 NetworkManager[1147]: [1677173400.6664] manager: (12b4663c669677e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/79)
Feb 23 17:30:00 ip-10-0-136-68 kernel: device 12b4663c669677e entered promiscuous mode
Feb 23 17:30:00 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:30:00.732033 2112 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-operator-lifecycle-manager/collect-profiles-27952890-rh4pd]
Feb 23 17:30:00 ip-10-0-136-68 crio[2062]: I0223 17:30:00.619340 74403 ovs.go:90] Maximum command line arguments set to: 191102
Feb 23 17:30:00 ip-10-0-136-68 crio[2062]: 2023-02-23T17:30:00Z [verbose] Add: openshift-operator-lifecycle-manager:collect-profiles-27952890-rh4pd:020f3e1e-9ac7-42d0-8b15-bf2ed04169bb:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"12b4663c669677e","mac":"1e:f4:db:e9:5a:1b"},{"name":"eth0","mac":"0a:58:0a:81:02:03","sandbox":"/var/run/netns/8a0262d3-942d-4b6f-b838-178f2f287186"}],"ips":[{"version":"4","interface":1,"address":"10.129.2.3/23","gateway":"10.129.2.1"}],"dns":{}}
Feb 23 17:30:00 ip-10-0-136-68 crio[2062]: I0223 17:30:00.714607 74396 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-operator-lifecycle-manager", Name:"collect-profiles-27952890-rh4pd", UID:"020f3e1e-9ac7-42d0-8b15-bf2ed04169bb", APIVersion:"v1", ResourceVersion:"79912", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.129.2.3/23] from ovn-kubernetes
Feb 23 17:30:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:30:00.733553337Z" level=info msg="Got pod network &{Name:collect-profiles-27952890-rh4pd Namespace:openshift-operator-lifecycle-manager ID:12b4663c669677ee5acc378c9a7d89e80575759d2391e219f2a7cc7636daae5d UID:020f3e1e-9ac7-42d0-8b15-bf2ed04169bb NetNS:/var/run/netns/8a0262d3-942d-4b6f-b838-178f2f287186 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 17:30:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:30:00.733709904Z" level=info msg="Checking pod openshift-operator-lifecycle-manager_collect-profiles-27952890-rh4pd for CNI network multus-cni-network (type=multus)"
Feb 23 17:30:00 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:30:00.735931 2112 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod020f3e1e_9ac7_42d0_8b15_bf2ed04169bb.slice/crio-12b4663c669677ee5acc378c9a7d89e80575759d2391e219f2a7cc7636daae5d.scope WatchSource:0}: Error finding container 12b4663c669677ee5acc378c9a7d89e80575759d2391e219f2a7cc7636daae5d: Status 404 returned error can't find the container with id 12b4663c669677ee5acc378c9a7d89e80575759d2391e219f2a7cc7636daae5d
Feb 23 17:30:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:30:00.737483165Z" level=info msg="Ran pod sandbox 12b4663c669677ee5acc378c9a7d89e80575759d2391e219f2a7cc7636daae5d with infra container: openshift-operator-lifecycle-manager/collect-profiles-27952890-rh4pd/POD" id=5a1d1f75-c8e8-49df-b6a8-da9380d2552c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:30:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:30:00.738237808Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:283dd1441060b064aca55c9f24217199ffd2729ab7991a4c0bb2edc2489bf4ca" id=c437f76b-1b2a-4cd9-a8f1-4a1a0a9dfaa3 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:30:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:30:00.738410616Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:d18c9ec1cf4fc492de5643229404fefce6842ed44c5d14b27a69b5249995b8fa,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:283dd1441060b064aca55c9f24217199ffd2729ab7991a4c0bb2edc2489bf4ca],Size_:700773392,Uid:&Int64Value{Value:1001,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=c437f76b-1b2a-4cd9-a8f1-4a1a0a9dfaa3 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:30:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:30:00.738972397Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:283dd1441060b064aca55c9f24217199ffd2729ab7991a4c0bb2edc2489bf4ca" id=5ee9465c-9a65-414e-8767-c25df0b25aa9 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:30:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:30:00.739110570Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:d18c9ec1cf4fc492de5643229404fefce6842ed44c5d14b27a69b5249995b8fa,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:283dd1441060b064aca55c9f24217199ffd2729ab7991a4c0bb2edc2489bf4ca],Size_:700773392,Uid:&Int64Value{Value:1001,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=5ee9465c-9a65-414e-8767-c25df0b25aa9 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:30:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:30:00.739716908Z" level=info msg="Creating container: openshift-operator-lifecycle-manager/collect-profiles-27952890-rh4pd/collect-profiles" id=d0cd58a4-dc86-4d08-8714-de7626a2b498 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:30:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:30:00.739805149Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:30:00 ip-10-0-136-68 systemd[1]: Started crio-conmon-303834e78e9a09538f555b2be457494ea3a3ca3f23cc9e15e84989cf8870c6f1.scope.
Feb 23 17:30:00 ip-10-0-136-68 systemd[1]: Started libcontainer container 303834e78e9a09538f555b2be457494ea3a3ca3f23cc9e15e84989cf8870c6f1.
Feb 23 17:30:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:30:00.873314045Z" level=info msg="Created container 303834e78e9a09538f555b2be457494ea3a3ca3f23cc9e15e84989cf8870c6f1: openshift-operator-lifecycle-manager/collect-profiles-27952890-rh4pd/collect-profiles" id=d0cd58a4-dc86-4d08-8714-de7626a2b498 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:30:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:30:00.873774083Z" level=info msg="Starting container: 303834e78e9a09538f555b2be457494ea3a3ca3f23cc9e15e84989cf8870c6f1" id=2dc60e37-916d-4181-8fac-c7ed51c59490 name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:30:00 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:30:00.881573283Z" level=info msg="Started container" PID=74444 containerID=303834e78e9a09538f555b2be457494ea3a3ca3f23cc9e15e84989cf8870c6f1 description=openshift-operator-lifecycle-manager/collect-profiles-27952890-rh4pd/collect-profiles id=2dc60e37-916d-4181-8fac-c7ed51c59490 name=/runtime.v1.RuntimeService/StartContainer sandboxID=12b4663c669677ee5acc378c9a7d89e80575759d2391e219f2a7cc7636daae5d Feb 23 17:30:01 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:30:01.042500 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-27952890-rh4pd" event=&{ID:020f3e1e-9ac7-42d0-8b15-bf2ed04169bb Type:ContainerStarted Data:303834e78e9a09538f555b2be457494ea3a3ca3f23cc9e15e84989cf8870c6f1} Feb 23 17:30:01 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:30:01.042538 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-27952890-rh4pd" event=&{ID:020f3e1e-9ac7-42d0-8b15-bf2ed04169bb Type:ContainerStarted Data:12b4663c669677ee5acc378c9a7d89e80575759d2391e219f2a7cc7636daae5d} Feb 23 17:30:01 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00424|connmgr|INFO|br-ex<->unix#1598: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:30:02 ip-10-0-136-68 systemd[1]: 
crio-303834e78e9a09538f555b2be457494ea3a3ca3f23cc9e15e84989cf8870c6f1.scope: Succeeded. Feb 23 17:30:02 ip-10-0-136-68 systemd[1]: crio-303834e78e9a09538f555b2be457494ea3a3ca3f23cc9e15e84989cf8870c6f1.scope: Consumed 1.870s CPU time Feb 23 17:30:02 ip-10-0-136-68 systemd[1]: crio-conmon-303834e78e9a09538f555b2be457494ea3a3ca3f23cc9e15e84989cf8870c6f1.scope: Succeeded. Feb 23 17:30:02 ip-10-0-136-68 systemd[1]: crio-conmon-303834e78e9a09538f555b2be457494ea3a3ca3f23cc9e15e84989cf8870c6f1.scope: Consumed 25ms CPU time Feb 23 17:30:03 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:30:03.047919 2112 generic.go:296] "Generic (PLEG): container finished" podID=020f3e1e-9ac7-42d0-8b15-bf2ed04169bb containerID="303834e78e9a09538f555b2be457494ea3a3ca3f23cc9e15e84989cf8870c6f1" exitCode=0 Feb 23 17:30:03 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:30:03.047989 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-27952890-rh4pd" event=&{ID:020f3e1e-9ac7-42d0-8b15-bf2ed04169bb Type:ContainerDied Data:303834e78e9a09538f555b2be457494ea3a3ca3f23cc9e15e84989cf8870c6f1} Feb 23 17:30:04 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:30:04.049779475Z" level=info msg="Stopping pod sandbox: 12b4663c669677ee5acc378c9a7d89e80575759d2391e219f2a7cc7636daae5d" id=a5230d31-1331-4723-b684-a908d0db7f6c name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:30:04 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:30:04.050003840Z" level=info msg="Got pod network &{Name:collect-profiles-27952890-rh4pd Namespace:openshift-operator-lifecycle-manager ID:12b4663c669677ee5acc378c9a7d89e80575759d2391e219f2a7cc7636daae5d UID:020f3e1e-9ac7-42d0-8b15-bf2ed04169bb NetNS:/var/run/netns/8a0262d3-942d-4b6f-b838-178f2f287186 Networks:[{Name:multus-cni-network Ifname:eth0}] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:30:04 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:30:04.050101908Z" 
level=info msg="Deleting pod openshift-operator-lifecycle-manager_collect-profiles-27952890-rh4pd from CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:30:04 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00425|bridge|INFO|bridge br-int: deleted interface 12b4663c669677e on port 34 Feb 23 17:30:04 ip-10-0-136-68 kernel: device 12b4663c669677e left promiscuous mode Feb 23 17:30:04 ip-10-0-136-68 crio[2062]: 2023-02-23T17:30:04Z [verbose] Del: openshift-operator-lifecycle-manager:collect-profiles-27952890-rh4pd:020f3e1e-9ac7-42d0-8b15-bf2ed04169bb:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","dns":{},"ipam":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxage":5,"logfile-maxbackups":5,"logfile-maxsize":100,"name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay"} Feb 23 17:30:04 ip-10-0-136-68 crio[2062]: I0223 17:30:04.186256 74552 ovs.go:90] Maximum command line arguments set to: 191102 Feb 23 17:30:04 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-e440f32c2c744fdf0886fbc8d0d6eda638a4cf5e2bd5b2b0537d18c9b4a557e2-merged.mount: Succeeded. Feb 23 17:30:04 ip-10-0-136-68 systemd[1]: run-utsns-8a0262d3\x2d942d\x2d4b6f\x2db838\x2d178f2f287186.mount: Succeeded. Feb 23 17:30:04 ip-10-0-136-68 systemd[1]: run-ipcns-8a0262d3\x2d942d\x2d4b6f\x2db838\x2d178f2f287186.mount: Succeeded. Feb 23 17:30:04 ip-10-0-136-68 systemd[1]: run-netns-8a0262d3\x2d942d\x2d4b6f\x2db838\x2d178f2f287186.mount: Succeeded. Feb 23 17:30:04 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-12b4663c669677ee5acc378c9a7d89e80575759d2391e219f2a7cc7636daae5d-userdata-shm.mount: Succeeded. 
Feb 23 17:30:04 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:30:04.728799492Z" level=info msg="Stopped pod sandbox: 12b4663c669677ee5acc378c9a7d89e80575759d2391e219f2a7cc7636daae5d" id=a5230d31-1331-4723-b684-a908d0db7f6c name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:30:04 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:30:04.854427 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/020f3e1e-9ac7-42d0-8b15-bf2ed04169bb-secret-volume\") pod \"020f3e1e-9ac7-42d0-8b15-bf2ed04169bb\" (UID: \"020f3e1e-9ac7-42d0-8b15-bf2ed04169bb\") " Feb 23 17:30:04 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:30:04.854482 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nr5ph\" (UniqueName: \"kubernetes.io/projected/020f3e1e-9ac7-42d0-8b15-bf2ed04169bb-kube-api-access-nr5ph\") pod \"020f3e1e-9ac7-42d0-8b15-bf2ed04169bb\" (UID: \"020f3e1e-9ac7-42d0-8b15-bf2ed04169bb\") " Feb 23 17:30:04 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:30:04.854510 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/020f3e1e-9ac7-42d0-8b15-bf2ed04169bb-config-volume\") pod \"020f3e1e-9ac7-42d0-8b15-bf2ed04169bb\" (UID: \"020f3e1e-9ac7-42d0-8b15-bf2ed04169bb\") " Feb 23 17:30:04 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:30:04.854820 2112 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/020f3e1e-9ac7-42d0-8b15-bf2ed04169bb/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled Feb 23 17:30:04 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:30:04.855028 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/020f3e1e-9ac7-42d0-8b15-bf2ed04169bb-config-volume" (OuterVolumeSpecName: "config-volume") pod "020f3e1e-9ac7-42d0-8b15-bf2ed04169bb" (UID: "020f3e1e-9ac7-42d0-8b15-bf2ed04169bb"). 
InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:30:04 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:30:04.861266 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/020f3e1e-9ac7-42d0-8b15-bf2ed04169bb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "020f3e1e-9ac7-42d0-8b15-bf2ed04169bb" (UID: "020f3e1e-9ac7-42d0-8b15-bf2ed04169bb"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:30:04 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:30:04.861271 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/020f3e1e-9ac7-42d0-8b15-bf2ed04169bb-kube-api-access-nr5ph" (OuterVolumeSpecName: "kube-api-access-nr5ph") pod "020f3e1e-9ac7-42d0-8b15-bf2ed04169bb" (UID: "020f3e1e-9ac7-42d0-8b15-bf2ed04169bb"). InnerVolumeSpecName "kube-api-access-nr5ph". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:30:04 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:30:04.955269 2112 reconciler.go:399] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/020f3e1e-9ac7-42d0-8b15-bf2ed04169bb-secret-volume\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:30:04 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:30:04.955304 2112 reconciler.go:399] "Volume detached for volume \"kube-api-access-nr5ph\" (UniqueName: \"kubernetes.io/projected/020f3e1e-9ac7-42d0-8b15-bf2ed04169bb-kube-api-access-nr5ph\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:30:04 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:30:04.955318 2112 reconciler.go:399] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/020f3e1e-9ac7-42d0-8b15-bf2ed04169bb-config-volume\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:30:05 ip-10-0-136-68 kubenswrapper[2112]: 
I0223 17:30:05.052951 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-27952890-rh4pd" event=&{ID:020f3e1e-9ac7-42d0-8b15-bf2ed04169bb Type:ContainerDied Data:12b4663c669677ee5acc378c9a7d89e80575759d2391e219f2a7cc7636daae5d} Feb 23 17:30:05 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:30:05.052981 2112 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="12b4663c669677ee5acc378c9a7d89e80575759d2391e219f2a7cc7636daae5d" Feb 23 17:30:05 ip-10-0-136-68 systemd[1]: Removed slice libcontainer container kubepods-burstable-pod020f3e1e_9ac7_42d0_8b15_bf2ed04169bb.slice. Feb 23 17:30:05 ip-10-0-136-68 systemd[1]: kubepods-burstable-pod020f3e1e_9ac7_42d0_8b15_bf2ed04169bb.slice: Consumed 1.895s CPU time Feb 23 17:30:05 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-020f3e1e\x2d9ac7\x2d42d0\x2d8b15\x2dbf2ed04169bb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnr5ph.mount: Succeeded. Feb 23 17:30:05 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-020f3e1e\x2d9ac7\x2d42d0\x2d8b15\x2dbf2ed04169bb-volumes-kubernetes.io\x7esecret-secret\x2dvolume.mount: Succeeded. 
Feb 23 17:30:16 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00426|connmgr|INFO|br-ex<->unix#1605: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:30:22 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00427|connmgr|INFO|br-int<->unix#1484: 68 flow_mods in the 4 s starting 22 s ago (34 adds, 34 deletes) Feb 23 17:30:31 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00428|connmgr|INFO|br-ex<->unix#1611: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:30:46 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00429|connmgr|INFO|br-ex<->unix#1619: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:31:01 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00430|connmgr|INFO|br-ex<->unix#1624: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:31:16 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00431|connmgr|INFO|br-ex<->unix#1632: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:31:31 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00432|connmgr|INFO|br-ex<->unix#1637: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:31:46 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00433|connmgr|INFO|br-ex<->unix#1645: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:32:01 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00434|connmgr|INFO|br-ex<->unix#1650: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:32:16 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00435|connmgr|INFO|br-ex<->unix#1658: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:32:31 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00436|connmgr|INFO|br-ex<->unix#1663: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:32:43 ip-10-0-136-68 NetworkManager[1147]: [1677173563.5172] dhcp4 (br-ex): state changed new lease, address=10.0.136.68 Feb 23 17:32:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:32:46.026471869Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be64c744b92d1b1df463d84d8277c38069f3ce4e8e95ce84d4a6ffac6dc53710" id=8bc371bc-6556-4017-9562-6af2c722ee66 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:32:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:32:46.026723895Z" 
level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:884319081b9650108dd2a638921522ef3df145e924d566325c975ca28709af4c,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be64c744b92d1b1df463d84d8277c38069f3ce4e8e95ce84d4a6ffac6dc53710],Size_:350038335,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=8bc371bc-6556-4017-9562-6af2c722ee66 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:32:46 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00437|connmgr|INFO|br-ex<->unix#1671: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:32:56 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00438|connmgr|INFO|br-int<->unix#1484: 1 flow_mods 10 s ago (1 adds) Feb 23 17:33:01 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00439|connmgr|INFO|br-ex<->unix#1676: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:33:05 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00440|connmgr|INFO|br-ex<->unix#1684: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:33:05 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00441|connmgr|INFO|br-ex<->unix#1687: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:33:20 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00442|connmgr|INFO|br-ex<->unix#1690: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:33:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:33:23.975203 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-dns/node-resolver-pgc9j] Feb 23 17:33:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:33:23.975372 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-dns/node-resolver-pgc9j" podUID=507b846f-eb8a-4ca3-9d5f-e4d9f18eca32 containerName="dns-node-resolver" containerID="cri-o://1275ee2f983feb3b9931f8c7b65f8c67088dab1463b0fff60c5e6879d2da7018" gracePeriod=30 Feb 23 17:33:23 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:33:23.975736190Z" level=info msg="Stopping container: 1275ee2f983feb3b9931f8c7b65f8c67088dab1463b0fff60c5e6879d2da7018 (timeout: 30s)" id=63f8b433-c889-4fd4-a1fb-b246a7f9132e 
name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:33:24 ip-10-0-136-68 systemd[1]: crio-conmon-1275ee2f983feb3b9931f8c7b65f8c67088dab1463b0fff60c5e6879d2da7018.scope: Succeeded. Feb 23 17:33:24 ip-10-0-136-68 systemd[1]: crio-conmon-1275ee2f983feb3b9931f8c7b65f8c67088dab1463b0fff60c5e6879d2da7018.scope: Consumed 24ms CPU time Feb 23 17:33:24 ip-10-0-136-68 systemd[1]: crio-1275ee2f983feb3b9931f8c7b65f8c67088dab1463b0fff60c5e6879d2da7018.scope: Succeeded. Feb 23 17:33:24 ip-10-0-136-68 systemd[1]: crio-1275ee2f983feb3b9931f8c7b65f8c67088dab1463b0fff60c5e6879d2da7018.scope: Consumed 756ms CPU time Feb 23 17:33:24 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-c03347b5e3b2cb7a1af1d3c38151e5d9f86f1ef2dfadc119f51adb2d876659cb-merged.mount: Succeeded. Feb 23 17:33:24 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-c03347b5e3b2cb7a1af1d3c38151e5d9f86f1ef2dfadc119f51adb2d876659cb-merged.mount: Consumed 0 CPU time Feb 23 17:33:24 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:33:24.168434689Z" level=info msg="Stopped container 1275ee2f983feb3b9931f8c7b65f8c67088dab1463b0fff60c5e6879d2da7018: openshift-dns/node-resolver-pgc9j/dns-node-resolver" id=63f8b433-c889-4fd4-a1fb-b246a7f9132e name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:33:24 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:33:24.168745245Z" level=info msg="Stopping pod sandbox: e312341dbb64adc897b934a3c1de671a584600ca6cafe84039623c05362bfd68" id=a9c1eb98-bc1a-4adc-8c1a-66a0ead96c3d name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:33:24 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-58b50526d77a9cc04beb5a852dc21e35fb6edb8d5abf8ef7ccfbedfed67aef6b-merged.mount: Succeeded. 
Feb 23 17:33:24 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-58b50526d77a9cc04beb5a852dc21e35fb6edb8d5abf8ef7ccfbedfed67aef6b-merged.mount: Consumed 0 CPU time Feb 23 17:33:24 ip-10-0-136-68 systemd[1]: run-utsns-5a5a6f17\x2daf8c\x2d4138\x2d8b4b\x2d82a3f4b348b2.mount: Succeeded. Feb 23 17:33:24 ip-10-0-136-68 systemd[1]: run-utsns-5a5a6f17\x2daf8c\x2d4138\x2d8b4b\x2d82a3f4b348b2.mount: Consumed 0 CPU time Feb 23 17:33:24 ip-10-0-136-68 systemd[1]: run-ipcns-5a5a6f17\x2daf8c\x2d4138\x2d8b4b\x2d82a3f4b348b2.mount: Succeeded. Feb 23 17:33:24 ip-10-0-136-68 systemd[1]: run-ipcns-5a5a6f17\x2daf8c\x2d4138\x2d8b4b\x2d82a3f4b348b2.mount: Consumed 0 CPU time Feb 23 17:33:24 ip-10-0-136-68 systemd[1]: run-netns-5a5a6f17\x2daf8c\x2d4138\x2d8b4b\x2d82a3f4b348b2.mount: Succeeded. Feb 23 17:33:24 ip-10-0-136-68 systemd[1]: run-netns-5a5a6f17\x2daf8c\x2d4138\x2d8b4b\x2d82a3f4b348b2.mount: Consumed 0 CPU time Feb 23 17:33:24 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:33:24.266708177Z" level=info msg="Stopped pod sandbox: e312341dbb64adc897b934a3c1de671a584600ca6cafe84039623c05362bfd68" id=a9c1eb98-bc1a-4adc-8c1a-66a0ead96c3d name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:33:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:33:24.393377 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/507b846f-eb8a-4ca3-9d5f-e4d9f18eca32-hosts-file\") pod \"507b846f-eb8a-4ca3-9d5f-e4d9f18eca32\" (UID: \"507b846f-eb8a-4ca3-9d5f-e4d9f18eca32\") " Feb 23 17:33:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:33:24.393427 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74mgq\" (UniqueName: \"kubernetes.io/projected/507b846f-eb8a-4ca3-9d5f-e4d9f18eca32-kube-api-access-74mgq\") pod \"507b846f-eb8a-4ca3-9d5f-e4d9f18eca32\" (UID: \"507b846f-eb8a-4ca3-9d5f-e4d9f18eca32\") " Feb 23 17:33:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:33:24.393456 2112 
operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/507b846f-eb8a-4ca3-9d5f-e4d9f18eca32-hosts-file" (OuterVolumeSpecName: "hosts-file") pod "507b846f-eb8a-4ca3-9d5f-e4d9f18eca32" (UID: "507b846f-eb8a-4ca3-9d5f-e4d9f18eca32"). InnerVolumeSpecName "hosts-file". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 17:33:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:33:24.393538 2112 reconciler.go:399] "Volume detached for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/507b846f-eb8a-4ca3-9d5f-e4d9f18eca32-hosts-file\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:33:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:33:24.410818 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/507b846f-eb8a-4ca3-9d5f-e4d9f18eca32-kube-api-access-74mgq" (OuterVolumeSpecName: "kube-api-access-74mgq") pod "507b846f-eb8a-4ca3-9d5f-e4d9f18eca32" (UID: "507b846f-eb8a-4ca3-9d5f-e4d9f18eca32"). InnerVolumeSpecName "kube-api-access-74mgq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:33:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:33:24.494058 2112 reconciler.go:399] "Volume detached for volume \"kube-api-access-74mgq\" (UniqueName: \"kubernetes.io/projected/507b846f-eb8a-4ca3-9d5f-e4d9f18eca32-kube-api-access-74mgq\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:33:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:33:24.558329 2112 generic.go:296] "Generic (PLEG): container finished" podID=507b846f-eb8a-4ca3-9d5f-e4d9f18eca32 containerID="1275ee2f983feb3b9931f8c7b65f8c67088dab1463b0fff60c5e6879d2da7018" exitCode=0 Feb 23 17:33:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:33:24.558365 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-pgc9j" event=&{ID:507b846f-eb8a-4ca3-9d5f-e4d9f18eca32 Type:ContainerDied Data:1275ee2f983feb3b9931f8c7b65f8c67088dab1463b0fff60c5e6879d2da7018} Feb 23 17:33:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:33:24.558404 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-pgc9j" event=&{ID:507b846f-eb8a-4ca3-9d5f-e4d9f18eca32 Type:ContainerDied Data:e312341dbb64adc897b934a3c1de671a584600ca6cafe84039623c05362bfd68} Feb 23 17:33:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:33:24.558423 2112 scope.go:115] "RemoveContainer" containerID="1275ee2f983feb3b9931f8c7b65f8c67088dab1463b0fff60c5e6879d2da7018" Feb 23 17:33:24 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:33:24.560862564Z" level=info msg="Removing container: 1275ee2f983feb3b9931f8c7b65f8c67088dab1463b0fff60c5e6879d2da7018" id=3ffa75c2-efb1-4aaf-895f-c65f3ea1ad2a name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:33:24 ip-10-0-136-68 systemd[1]: Removed slice libcontainer container kubepods-burstable-pod507b846f_eb8a_4ca3_9d5f_e4d9f18eca32.slice. 
Feb 23 17:33:24 ip-10-0-136-68 systemd[1]: kubepods-burstable-pod507b846f_eb8a_4ca3_9d5f_e4d9f18eca32.slice: Consumed 780ms CPU time Feb 23 17:33:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:33:24.577507 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-dns/node-resolver-pgc9j] Feb 23 17:33:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:33:24.583266 2112 kubelet.go:2129] "SyncLoop REMOVE" source="api" pods=[openshift-dns/node-resolver-pgc9j] Feb 23 17:33:24 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:33:24.593945702Z" level=info msg="Removed container 1275ee2f983feb3b9931f8c7b65f8c67088dab1463b0fff60c5e6879d2da7018: openshift-dns/node-resolver-pgc9j/dns-node-resolver" id=3ffa75c2-efb1-4aaf-895f-c65f3ea1ad2a name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:33:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:33:24.594343 2112 scope.go:115] "RemoveContainer" containerID="1275ee2f983feb3b9931f8c7b65f8c67088dab1463b0fff60c5e6879d2da7018" Feb 23 17:33:24 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:33:24.594690 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1275ee2f983feb3b9931f8c7b65f8c67088dab1463b0fff60c5e6879d2da7018\": container with ID starting with 1275ee2f983feb3b9931f8c7b65f8c67088dab1463b0fff60c5e6879d2da7018 not found: ID does not exist" containerID="1275ee2f983feb3b9931f8c7b65f8c67088dab1463b0fff60c5e6879d2da7018" Feb 23 17:33:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:33:24.594737 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:1275ee2f983feb3b9931f8c7b65f8c67088dab1463b0fff60c5e6879d2da7018} err="failed to get container status \"1275ee2f983feb3b9931f8c7b65f8c67088dab1463b0fff60c5e6879d2da7018\": rpc error: code = NotFound desc = could not find container \"1275ee2f983feb3b9931f8c7b65f8c67088dab1463b0fff60c5e6879d2da7018\": container with ID starting with 
1275ee2f983feb3b9931f8c7b65f8c67088dab1463b0fff60c5e6879d2da7018 not found: ID does not exist" Feb 23 17:33:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:33:24.597448 2112 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-dns/node-resolver-hstcm] Feb 23 17:33:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:33:24.597490 2112 topology_manager.go:205] "Topology Admit Handler" Feb 23 17:33:24 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:33:24.597558 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="507b846f-eb8a-4ca3-9d5f-e4d9f18eca32" containerName="dns-node-resolver" Feb 23 17:33:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:33:24.597570 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="507b846f-eb8a-4ca3-9d5f-e4d9f18eca32" containerName="dns-node-resolver" Feb 23 17:33:24 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:33:24.597582 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="020f3e1e-9ac7-42d0-8b15-bf2ed04169bb" containerName="collect-profiles" Feb 23 17:33:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:33:24.597590 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="020f3e1e-9ac7-42d0-8b15-bf2ed04169bb" containerName="collect-profiles" Feb 23 17:33:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:33:24.597652 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="507b846f-eb8a-4ca3-9d5f-e4d9f18eca32" containerName="dns-node-resolver" Feb 23 17:33:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:33:24.597684 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="020f3e1e-9ac7-42d0-8b15-bf2ed04169bb" containerName="collect-profiles" Feb 23 17:33:24 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-pod0268b68d_53b2_454a_a03b_37bd38d269bc.slice. 
Feb 23 17:33:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:33:24.796088 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/0268b68d-53b2-454a-a03b-37bd38d269bc-hosts-file\") pod \"node-resolver-hstcm\" (UID: \"0268b68d-53b2-454a-a03b-37bd38d269bc\") " pod="openshift-dns/node-resolver-hstcm" Feb 23 17:33:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:33:24.796144 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvgqb\" (UniqueName: \"kubernetes.io/projected/0268b68d-53b2-454a-a03b-37bd38d269bc-kube-api-access-qvgqb\") pod \"node-resolver-hstcm\" (UID: \"0268b68d-53b2-454a-a03b-37bd38d269bc\") " pod="openshift-dns/node-resolver-hstcm" Feb 23 17:33:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:33:24.897112 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/0268b68d-53b2-454a-a03b-37bd38d269bc-hosts-file\") pod \"node-resolver-hstcm\" (UID: \"0268b68d-53b2-454a-a03b-37bd38d269bc\") " pod="openshift-dns/node-resolver-hstcm" Feb 23 17:33:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:33:24.897173 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-qvgqb\" (UniqueName: \"kubernetes.io/projected/0268b68d-53b2-454a-a03b-37bd38d269bc-kube-api-access-qvgqb\") pod \"node-resolver-hstcm\" (UID: \"0268b68d-53b2-454a-a03b-37bd38d269bc\") " pod="openshift-dns/node-resolver-hstcm" Feb 23 17:33:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:33:24.897226 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/0268b68d-53b2-454a-a03b-37bd38d269bc-hosts-file\") pod \"node-resolver-hstcm\" (UID: \"0268b68d-53b2-454a-a03b-37bd38d269bc\") " pod="openshift-dns/node-resolver-hstcm" Feb 23 17:33:24 ip-10-0-136-68 kubenswrapper[2112]: I0223 
17:33:24.917542 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvgqb\" (UniqueName: \"kubernetes.io/projected/0268b68d-53b2-454a-a03b-37bd38d269bc-kube-api-access-qvgqb\") pod \"node-resolver-hstcm\" (UID: \"0268b68d-53b2-454a-a03b-37bd38d269bc\") " pod="openshift-dns/node-resolver-hstcm"
Feb 23 17:33:25 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-e312341dbb64adc897b934a3c1de671a584600ca6cafe84039623c05362bfd68-userdata-shm.mount: Succeeded.
Feb 23 17:33:25 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-e312341dbb64adc897b934a3c1de671a584600ca6cafe84039623c05362bfd68-userdata-shm.mount: Consumed 0 CPU time
Feb 23 17:33:25 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-507b846f\x2deb8a\x2d4ca3\x2d9d5f\x2de4d9f18eca32-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d74mgq.mount: Succeeded.
Feb 23 17:33:25 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-507b846f\x2deb8a\x2d4ca3\x2d9d5f\x2de4d9f18eca32-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d74mgq.mount: Consumed 0 CPU time
Feb 23 17:33:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:33:25.210031 2112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-hstcm"
Feb 23 17:33:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:33:25.210543907Z" level=info msg="Running pod sandbox: openshift-dns/node-resolver-hstcm/POD" id=a8f86935-d7ca-4129-93ee-c0d34cf34e47 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:33:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:33:25.210602324Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:33:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:33:25.228745101Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=a8f86935-d7ca-4129-93ee-c0d34cf34e47 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:33:25 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:33:25.231945 2112 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0268b68d_53b2_454a_a03b_37bd38d269bc.slice/crio-11e3e168f4f38c1e32d79a7f932ab94223a43d4ad5ead9f127fefc7903f7f874.scope WatchSource:0}: Error finding container 11e3e168f4f38c1e32d79a7f932ab94223a43d4ad5ead9f127fefc7903f7f874: Status 404 returned error can't find the container with id 11e3e168f4f38c1e32d79a7f932ab94223a43d4ad5ead9f127fefc7903f7f874
Feb 23 17:33:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:33:25.233177640Z" level=info msg="Ran pod sandbox 11e3e168f4f38c1e32d79a7f932ab94223a43d4ad5ead9f127fefc7903f7f874 with infra container: openshift-dns/node-resolver-hstcm/POD" id=a8f86935-d7ca-4129-93ee-c0d34cf34e47 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:33:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:33:25.234078757Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:bfc3811ba51a1d11a1595b4f008e8ee48227599885efb4494696582d1e464db2" id=7d297090-acc4-443e-821f-883e87096990 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:33:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:33:25.234244018Z" level=info msg="Image registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:bfc3811ba51a1d11a1595b4f008e8ee48227599885efb4494696582d1e464db2 not found" id=7d297090-acc4-443e-821f-883e87096990 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:33:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:33:25.234636 2112 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 23 17:33:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:33:25.234944242Z" level=info msg="Pulling image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:bfc3811ba51a1d11a1595b4f008e8ee48227599885efb4494696582d1e464db2" id=d3f93cae-7571-411d-a566-823e7c5edc38 name=/runtime.v1.ImageService/PullImage
Feb 23 17:33:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:33:25.339525649Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:bfc3811ba51a1d11a1595b4f008e8ee48227599885efb4494696582d1e464db2\""
Feb 23 17:33:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:33:25.561390 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-hstcm" event=&{ID:0268b68d-53b2-454a-a03b-37bd38d269bc Type:ContainerStarted Data:11e3e168f4f38c1e32d79a7f932ab94223a43d4ad5ead9f127fefc7903f7f874}
Feb 23 17:33:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:33:26.117386534Z" level=info msg="Stopping pod sandbox: e312341dbb64adc897b934a3c1de671a584600ca6cafe84039623c05362bfd68" id=4b49f707-7369-4e0e-a039-d2466ec29c2d name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:33:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:33:26.117430938Z" level=info msg="Stopped pod sandbox (already stopped): e312341dbb64adc897b934a3c1de671a584600ca6cafe84039623c05362bfd68" id=4b49f707-7369-4e0e-a039-d2466ec29c2d name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:33:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:33:26.118735 2112 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=507b846f-eb8a-4ca3-9d5f-e4d9f18eca32 path="/var/lib/kubelet/pods/507b846f-eb8a-4ca3-9d5f-e4d9f18eca32/volumes"
Feb 23 17:33:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:33:26.957480722Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:bfc3811ba51a1d11a1595b4f008e8ee48227599885efb4494696582d1e464db2\""
Feb 23 17:33:29 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:33:29.166071111Z" level=info msg="Pulled image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:bfc3811ba51a1d11a1595b4f008e8ee48227599885efb4494696582d1e464db2" id=d3f93cae-7571-411d-a566-823e7c5edc38 name=/runtime.v1.ImageService/PullImage
Feb 23 17:33:29 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:33:29.166632311Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:bfc3811ba51a1d11a1595b4f008e8ee48227599885efb4494696582d1e464db2" id=a016d926-c34c-4769-ae7b-80d3b76a7d04 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:33:29 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:33:29.167858005Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:1e42f0d82119151973b0cb36d0d109fca6e5a46c8410cba8eaa2a9867c1cc9ab,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:bfc3811ba51a1d11a1595b4f008e8ee48227599885efb4494696582d1e464db2],Size_:492745678,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=a016d926-c34c-4769-ae7b-80d3b76a7d04 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:33:29 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:33:29.168618383Z" level=info msg="Creating container: openshift-dns/node-resolver-hstcm/dns-node-resolver" id=aaa00ec9-6ec1-4241-9bea-8cc3c52f4367 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:33:29 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:33:29.168727563Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:33:29 ip-10-0-136-68 systemd[1]: Started crio-conmon-007741281cc8da8036b590cfb863b8015dae086fa900c8cb587da3648dc8780c.scope.
Feb 23 17:33:29 ip-10-0-136-68 systemd[1]: Started libcontainer container 007741281cc8da8036b590cfb863b8015dae086fa900c8cb587da3648dc8780c.
Feb 23 17:33:29 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:33:29.322379207Z" level=info msg="Created container 007741281cc8da8036b590cfb863b8015dae086fa900c8cb587da3648dc8780c: openshift-dns/node-resolver-hstcm/dns-node-resolver" id=aaa00ec9-6ec1-4241-9bea-8cc3c52f4367 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:33:29 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:33:29.322810825Z" level=info msg="Starting container: 007741281cc8da8036b590cfb863b8015dae086fa900c8cb587da3648dc8780c" id=7f5c0c77-4732-490a-b710-c0fa1fd21c20 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 17:33:29 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:33:29.330047415Z" level=info msg="Started container" PID=76470 containerID=007741281cc8da8036b590cfb863b8015dae086fa900c8cb587da3648dc8780c description=openshift-dns/node-resolver-hstcm/dns-node-resolver id=7f5c0c77-4732-490a-b710-c0fa1fd21c20 name=/runtime.v1.RuntimeService/StartContainer sandboxID=11e3e168f4f38c1e32d79a7f932ab94223a43d4ad5ead9f127fefc7903f7f874
Feb 23 17:33:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:33:29.571049 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-hstcm" event=&{ID:0268b68d-53b2-454a-a03b-37bd38d269bc Type:ContainerStarted Data:007741281cc8da8036b590cfb863b8015dae086fa900c8cb587da3648dc8780c}
Feb 23 17:33:35 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00443|connmgr|INFO|br-ex<->unix#1700: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:33:50 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00444|connmgr|INFO|br-ex<->unix#1703: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:33:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:33:52.425512938Z" level=info msg="Stopping pod sandbox: e312341dbb64adc897b934a3c1de671a584600ca6cafe84039623c05362bfd68" id=e3963405-f61e-4b4d-a5be-7b65c38fdea0 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:33:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:33:52.425558161Z" level=info msg="Stopped pod sandbox (already stopped): e312341dbb64adc897b934a3c1de671a584600ca6cafe84039623c05362bfd68" id=e3963405-f61e-4b4d-a5be-7b65c38fdea0 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:33:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:33:52.425800964Z" level=info msg="Removing pod sandbox: e312341dbb64adc897b934a3c1de671a584600ca6cafe84039623c05362bfd68" id=69e1eac1-bd90-4a5e-8449-1b3541217165 name=/runtime.v1.RuntimeService/RemovePodSandbox
Feb 23 17:33:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:33:52.434052595Z" level=info msg="Removed pod sandbox: e312341dbb64adc897b934a3c1de671a584600ca6cafe84039623c05362bfd68" id=69e1eac1-bd90-4a5e-8449-1b3541217165 name=/runtime.v1.RuntimeService/RemovePodSandbox
Feb 23 17:33:52 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:33:52.435067 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbee715502369174db42d97b2506ad563352efe30d5cb9c9ccd42d69d480bda2\": container with ID starting with dbee715502369174db42d97b2506ad563352efe30d5cb9c9ccd42d69d480bda2 not found: ID does not exist" containerID="dbee715502369174db42d97b2506ad563352efe30d5cb9c9ccd42d69d480bda2"
Feb 23 17:33:52 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:33:52.435098 2112 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="dbee715502369174db42d97b2506ad563352efe30d5cb9c9ccd42d69d480bda2" err="rpc error: code = NotFound desc = could not find container \"dbee715502369174db42d97b2506ad563352efe30d5cb9c9ccd42d69d480bda2\": container with ID starting with dbee715502369174db42d97b2506ad563352efe30d5cb9c9ccd42d69d480bda2 not found: ID does not exist"
Feb 23 17:33:55 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:33:55.089355 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-dns/dns-default-h4ftg]
Feb 23 17:33:55 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:33:55.089568 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-dns/dns-default-h4ftg" podUID=c072a683-1031-40cb-a1bc-1dac71bca46b containerName="dns" containerID="cri-o://2badf436a3d373d56e3424a33e20c905dc6d2e06be3ab3526672b28202f77dc8" gracePeriod=30
Feb 23 17:33:55 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:33:55.089601 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-dns/dns-default-h4ftg" podUID=c072a683-1031-40cb-a1bc-1dac71bca46b containerName="kube-rbac-proxy" containerID="cri-o://0d048668d6ed4fec51f73d95330a6df3b517ae95afe1dc32e0a7e5966a3badf7" gracePeriod=30
Feb 23 17:33:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:33:55.089869129Z" level=info msg="Stopping container: 0d048668d6ed4fec51f73d95330a6df3b517ae95afe1dc32e0a7e5966a3badf7 (timeout: 30s)" id=96206912-fab4-44be-a8b5-6091ba8e3b31 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:33:55 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:33:55.089895890Z" level=info msg="Stopping container: 2badf436a3d373d56e3424a33e20c905dc6d2e06be3ab3526672b28202f77dc8 (timeout: 30s)" id=7344dfaf-0231-4126-b954-2264c3d01f2c name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:33:55 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00445|connmgr|INFO|br-ex<->unix#1707: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:33:55 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00446|connmgr|INFO|br-ex<->unix#1710: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:33:56 ip-10-0-136-68 systemd[1]: crio-0d048668d6ed4fec51f73d95330a6df3b517ae95afe1dc32e0a7e5966a3badf7.scope: Succeeded.
Feb 23 17:33:56 ip-10-0-136-68 systemd[1]: crio-0d048668d6ed4fec51f73d95330a6df3b517ae95afe1dc32e0a7e5966a3badf7.scope: Consumed 438ms CPU time
Feb 23 17:33:56 ip-10-0-136-68 systemd[1]: crio-conmon-0d048668d6ed4fec51f73d95330a6df3b517ae95afe1dc32e0a7e5966a3badf7.scope: Succeeded.
Feb 23 17:33:56 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00447|connmgr|INFO|br-int<->unix#1484: 88 flow_mods in the 54 s starting 55 s ago (43 adds, 32 deletes, 13 modifications)
Feb 23 17:33:56 ip-10-0-136-68 systemd[1]: crio-conmon-0d048668d6ed4fec51f73d95330a6df3b517ae95afe1dc32e0a7e5966a3badf7.scope: Consumed 25ms CPU time
Feb 23 17:33:56 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-7c616537f75055fdc741231462bb0d577e7129066d605e4df7607c6d20d4d492-merged.mount: Succeeded.
Feb 23 17:33:56 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-7c616537f75055fdc741231462bb0d577e7129066d605e4df7607c6d20d4d492-merged.mount: Consumed 0 CPU time
Feb 23 17:33:56 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:33:56.275769430Z" level=info msg="Stopped container 0d048668d6ed4fec51f73d95330a6df3b517ae95afe1dc32e0a7e5966a3badf7: openshift-dns/dns-default-h4ftg/kube-rbac-proxy" id=96206912-fab4-44be-a8b5-6091ba8e3b31 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:33:56 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:33:56.628874 2112 generic.go:296] "Generic (PLEG): container finished" podID=c072a683-1031-40cb-a1bc-1dac71bca46b containerID="0d048668d6ed4fec51f73d95330a6df3b517ae95afe1dc32e0a7e5966a3badf7" exitCode=0
Feb 23 17:33:56 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:33:56.628906 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-h4ftg" event=&{ID:c072a683-1031-40cb-a1bc-1dac71bca46b Type:ContainerDied Data:0d048668d6ed4fec51f73d95330a6df3b517ae95afe1dc32e0a7e5966a3badf7}
Feb 23 17:33:57 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:33:57.551402 2112 patch_prober.go:29] interesting pod/dns-default-h4ftg container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.129.2.6:8181/ready\": dial tcp 10.129.2.6:8181: connect: connection refused" start-of-body=
Feb 23 17:33:57 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:33:57.551460 2112 prober.go:114] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-h4ftg" podUID=c072a683-1031-40cb-a1bc-1dac71bca46b containerName="dns" probeResult=failure output="Get \"http://10.129.2.6:8181/ready\": dial tcp 10.129.2.6:8181: connect: connection refused"
Feb 23 17:34:00 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:34:00.551382 2112 patch_prober.go:29] interesting pod/dns-default-h4ftg container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.129.2.6:8181/ready\": dial tcp 10.129.2.6:8181: connect: connection refused" start-of-body=
Feb 23 17:34:00 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:34:00.551437 2112 prober.go:114] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-h4ftg" podUID=c072a683-1031-40cb-a1bc-1dac71bca46b containerName="dns" probeResult=failure output="Get \"http://10.129.2.6:8181/ready\": dial tcp 10.129.2.6:8181: connect: connection refused"
Feb 23 17:34:03 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:34:03.551474 2112 patch_prober.go:29] interesting pod/dns-default-h4ftg container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.129.2.6:8181/ready\": dial tcp 10.129.2.6:8181: connect: connection refused" start-of-body=
Feb 23 17:34:03 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:34:03.551532 2112 prober.go:114] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-h4ftg" podUID=c072a683-1031-40cb-a1bc-1dac71bca46b containerName="dns" probeResult=failure output="Get \"http://10.129.2.6:8181/ready\": dial tcp 10.129.2.6:8181: connect: connection refused"
Feb 23 17:34:03 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:34:03.551602 2112 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-h4ftg"
Feb 23 17:34:06 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:34:06.550840 2112 patch_prober.go:29] interesting pod/dns-default-h4ftg container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.129.2.6:8181/ready\": dial tcp 10.129.2.6:8181: connect: connection refused" start-of-body=
Feb 23 17:34:06 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:34:06.550896 2112 prober.go:114] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-h4ftg" podUID=c072a683-1031-40cb-a1bc-1dac71bca46b containerName="dns" probeResult=failure output="Get \"http://10.129.2.6:8181/ready\": dial tcp 10.129.2.6:8181: connect: connection refused"
Feb 23 17:34:09 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:34:09.551397 2112 patch_prober.go:29] interesting pod/dns-default-h4ftg container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.129.2.6:8181/ready\": dial tcp 10.129.2.6:8181: connect: connection refused" start-of-body=
Feb 23 17:34:09 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:34:09.551460 2112 prober.go:114] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-h4ftg" podUID=c072a683-1031-40cb-a1bc-1dac71bca46b containerName="dns" probeResult=failure output="Get \"http://10.129.2.6:8181/ready\": dial tcp 10.129.2.6:8181: connect: connection refused"
Feb 23 17:34:10 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00448|connmgr|INFO|br-ex<->unix#1719: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:34:12 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:34:12.551569 2112 patch_prober.go:29] interesting pod/dns-default-h4ftg container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.129.2.6:8181/ready\": dial tcp 10.129.2.6:8181: connect: connection refused" start-of-body=
Feb 23 17:34:12 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:34:12.551630 2112 prober.go:114] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-h4ftg" podUID=c072a683-1031-40cb-a1bc-1dac71bca46b containerName="dns" probeResult=failure output="Get \"http://10.129.2.6:8181/ready\": dial tcp 10.129.2.6:8181: connect: connection refused"
Feb 23 17:34:15 ip-10-0-136-68 systemd[1]: crio-2badf436a3d373d56e3424a33e20c905dc6d2e06be3ab3526672b28202f77dc8.scope: Succeeded.
Feb 23 17:34:15 ip-10-0-136-68 systemd[1]: crio-2badf436a3d373d56e3424a33e20c905dc6d2e06be3ab3526672b28202f77dc8.scope: Consumed 5.809s CPU time
Feb 23 17:34:15 ip-10-0-136-68 systemd[1]: crio-conmon-2badf436a3d373d56e3424a33e20c905dc6d2e06be3ab3526672b28202f77dc8.scope: Succeeded.
Feb 23 17:34:15 ip-10-0-136-68 systemd[1]: crio-conmon-2badf436a3d373d56e3424a33e20c905dc6d2e06be3ab3526672b28202f77dc8.scope: Consumed 24ms CPU time
Feb 23 17:34:15 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-abb8f056ccdfd27038dd03a2647d651798716e0776185da5f1efe6e79ea9cbe7-merged.mount: Succeeded.
Feb 23 17:34:15 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-abb8f056ccdfd27038dd03a2647d651798716e0776185da5f1efe6e79ea9cbe7-merged.mount: Consumed 0 CPU time
Feb 23 17:34:15 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:34:15.264862151Z" level=info msg="Stopped container 2badf436a3d373d56e3424a33e20c905dc6d2e06be3ab3526672b28202f77dc8: openshift-dns/dns-default-h4ftg/dns" id=7344dfaf-0231-4126-b954-2264c3d01f2c name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:34:15 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:34:15.265303821Z" level=info msg="Stopping pod sandbox: cb072e675296b6a74623a0a4aa40ec554a26d6b1b7c32048297353c0abb8cb6e" id=e1f86e1f-9359-4fcd-8e35-3ccee64667e4 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:34:15 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:34:15.265505587Z" level=info msg="Got pod network &{Name:dns-default-h4ftg Namespace:openshift-dns ID:cb072e675296b6a74623a0a4aa40ec554a26d6b1b7c32048297353c0abb8cb6e UID:c072a683-1031-40cb-a1bc-1dac71bca46b NetNS:/var/run/netns/9dd40d86-4a83-40d4-b955-0eaf699cad8c Networks:[{Name:multus-cni-network Ifname:eth0}] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 17:34:15 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:34:15.265617695Z" level=info msg="Deleting pod openshift-dns_dns-default-h4ftg from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 17:34:15 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00449|bridge|INFO|bridge br-int: deleted interface cb072e675296b6a on port 11
Feb 23 17:34:15 ip-10-0-136-68 kernel: device cb072e675296b6a left promiscuous mode
Feb 23 17:34:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:34:15.671113 2112 generic.go:296] "Generic (PLEG): container finished" podID=c072a683-1031-40cb-a1bc-1dac71bca46b containerID="2badf436a3d373d56e3424a33e20c905dc6d2e06be3ab3526672b28202f77dc8" exitCode=0
Feb 23 17:34:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:34:15.671155 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-h4ftg" event=&{ID:c072a683-1031-40cb-a1bc-1dac71bca46b Type:ContainerDied Data:2badf436a3d373d56e3424a33e20c905dc6d2e06be3ab3526672b28202f77dc8}
Feb 23 17:34:15 ip-10-0-136-68 crio[2062]: 2023-02-23T17:34:15Z [verbose] Del: openshift-dns:dns-default-h4ftg:c072a683-1031-40cb-a1bc-1dac71bca46b:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","dns":{},"ipam":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxage":5,"logfile-maxbackups":5,"logfile-maxsize":100,"name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay"}
Feb 23 17:34:15 ip-10-0-136-68 crio[2062]: I0223 17:34:15.398044 77017 ovs.go:90] Maximum command line arguments set to: 191102
Feb 23 17:34:15 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-be0b36b0aa087f9d0733921e02ee85f62014718e5acce41b2455d2d9959a137b-merged.mount: Succeeded.
Feb 23 17:34:15 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-be0b36b0aa087f9d0733921e02ee85f62014718e5acce41b2455d2d9959a137b-merged.mount: Consumed 0 CPU time
Feb 23 17:34:15 ip-10-0-136-68 systemd[1]: run-utsns-9dd40d86\x2d4a83\x2d40d4\x2db955\x2d0eaf699cad8c.mount: Succeeded.
Feb 23 17:34:15 ip-10-0-136-68 systemd[1]: run-utsns-9dd40d86\x2d4a83\x2d40d4\x2db955\x2d0eaf699cad8c.mount: Consumed 0 CPU time
Feb 23 17:34:15 ip-10-0-136-68 systemd[1]: run-ipcns-9dd40d86\x2d4a83\x2d40d4\x2db955\x2d0eaf699cad8c.mount: Succeeded.
Feb 23 17:34:15 ip-10-0-136-68 systemd[1]: run-ipcns-9dd40d86\x2d4a83\x2d40d4\x2db955\x2d0eaf699cad8c.mount: Consumed 0 CPU time
Feb 23 17:34:15 ip-10-0-136-68 systemd[1]: run-netns-9dd40d86\x2d4a83\x2d40d4\x2db955\x2d0eaf699cad8c.mount: Succeeded.
Feb 23 17:34:15 ip-10-0-136-68 systemd[1]: run-netns-9dd40d86\x2d4a83\x2d40d4\x2db955\x2d0eaf699cad8c.mount: Consumed 0 CPU time
Feb 23 17:34:15 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:34:15.925725483Z" level=info msg="Stopped pod sandbox: cb072e675296b6a74623a0a4aa40ec554a26d6b1b7c32048297353c0abb8cb6e" id=e1f86e1f-9359-4fcd-8e35-3ccee64667e4 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:34:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:34:16.040578 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2zwz\" (UniqueName: \"kubernetes.io/projected/c072a683-1031-40cb-a1bc-1dac71bca46b-kube-api-access-w2zwz\") pod \"c072a683-1031-40cb-a1bc-1dac71bca46b\" (UID: \"c072a683-1031-40cb-a1bc-1dac71bca46b\") "
Feb 23 17:34:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:34:16.040613 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c072a683-1031-40cb-a1bc-1dac71bca46b-config-volume\") pod \"c072a683-1031-40cb-a1bc-1dac71bca46b\" (UID: \"c072a683-1031-40cb-a1bc-1dac71bca46b\") "
Feb 23 17:34:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:34:16.040640 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c072a683-1031-40cb-a1bc-1dac71bca46b-metrics-tls\") pod \"c072a683-1031-40cb-a1bc-1dac71bca46b\" (UID: \"c072a683-1031-40cb-a1bc-1dac71bca46b\") "
Feb 23 17:34:16 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:34:16.040961 2112 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/c072a683-1031-40cb-a1bc-1dac71bca46b/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
Feb 23 17:34:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:34:16.041197 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c072a683-1031-40cb-a1bc-1dac71bca46b-config-volume" (OuterVolumeSpecName: "config-volume") pod "c072a683-1031-40cb-a1bc-1dac71bca46b" (UID: "c072a683-1031-40cb-a1bc-1dac71bca46b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 17:34:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:34:16.054903 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c072a683-1031-40cb-a1bc-1dac71bca46b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "c072a683-1031-40cb-a1bc-1dac71bca46b" (UID: "c072a683-1031-40cb-a1bc-1dac71bca46b"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:34:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:34:16.055839 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c072a683-1031-40cb-a1bc-1dac71bca46b-kube-api-access-w2zwz" (OuterVolumeSpecName: "kube-api-access-w2zwz") pod "c072a683-1031-40cb-a1bc-1dac71bca46b" (UID: "c072a683-1031-40cb-a1bc-1dac71bca46b"). InnerVolumeSpecName "kube-api-access-w2zwz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 17:34:16 ip-10-0-136-68 systemd[1]: Removed slice libcontainer container kubepods-burstable-podc072a683_1031_40cb_a1bc_1dac71bca46b.slice.
Feb 23 17:34:16 ip-10-0-136-68 systemd[1]: kubepods-burstable-podc072a683_1031_40cb_a1bc_1dac71bca46b.slice: Consumed 6.298s CPU time
Feb 23 17:34:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:34:16.141292 2112 reconciler.go:399] "Volume detached for volume \"kube-api-access-w2zwz\" (UniqueName: \"kubernetes.io/projected/c072a683-1031-40cb-a1bc-1dac71bca46b-kube-api-access-w2zwz\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:34:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:34:16.141346 2112 reconciler.go:399] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c072a683-1031-40cb-a1bc-1dac71bca46b-config-volume\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:34:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:34:16.141362 2112 reconciler.go:399] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c072a683-1031-40cb-a1bc-1dac71bca46b-metrics-tls\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:34:16 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-cb072e675296b6a74623a0a4aa40ec554a26d6b1b7c32048297353c0abb8cb6e-userdata-shm.mount: Succeeded.
Feb 23 17:34:16 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-cb072e675296b6a74623a0a4aa40ec554a26d6b1b7c32048297353c0abb8cb6e-userdata-shm.mount: Consumed 0 CPU time
Feb 23 17:34:16 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-c072a683\x2d1031\x2d40cb\x2da1bc\x2d1dac71bca46b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw2zwz.mount: Succeeded.
Feb 23 17:34:16 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-c072a683\x2d1031\x2d40cb\x2da1bc\x2d1dac71bca46b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw2zwz.mount: Consumed 0 CPU time
Feb 23 17:34:16 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-c072a683\x2d1031\x2d40cb\x2da1bc\x2d1dac71bca46b-volumes-kubernetes.io\x7esecret-metrics\x2dtls.mount: Succeeded.
Feb 23 17:34:16 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-c072a683\x2d1031\x2d40cb\x2da1bc\x2d1dac71bca46b-volumes-kubernetes.io\x7esecret-metrics\x2dtls.mount: Consumed 0 CPU time
Feb 23 17:34:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:34:16.674238 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-h4ftg" event=&{ID:c072a683-1031-40cb-a1bc-1dac71bca46b Type:ContainerDied Data:cb072e675296b6a74623a0a4aa40ec554a26d6b1b7c32048297353c0abb8cb6e}
Feb 23 17:34:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:34:16.674280 2112 scope.go:115] "RemoveContainer" containerID="0d048668d6ed4fec51f73d95330a6df3b517ae95afe1dc32e0a7e5966a3badf7"
Feb 23 17:34:16 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:34:16.675305427Z" level=info msg="Removing container: 0d048668d6ed4fec51f73d95330a6df3b517ae95afe1dc32e0a7e5966a3badf7" id=8c4f5b0a-3792-469c-91c5-683f462d0ffe name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:34:16 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:34:16.695230983Z" level=info msg="Removed container 0d048668d6ed4fec51f73d95330a6df3b517ae95afe1dc32e0a7e5966a3badf7: openshift-dns/dns-default-h4ftg/kube-rbac-proxy" id=8c4f5b0a-3792-469c-91c5-683f462d0ffe name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:34:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:34:16.695595 2112 scope.go:115] "RemoveContainer" containerID="2badf436a3d373d56e3424a33e20c905dc6d2e06be3ab3526672b28202f77dc8"
Feb 23 17:34:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:34:16.696257 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-dns/dns-default-h4ftg]
Feb 23 17:34:16 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:34:16.696454544Z" level=info msg="Removing container: 2badf436a3d373d56e3424a33e20c905dc6d2e06be3ab3526672b28202f77dc8" id=2d7702bb-08c0-4534-a7d0-1309fb718512 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:34:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:34:16.701114 2112 kubelet.go:2129] "SyncLoop REMOVE" source="api" pods=[openshift-dns/dns-default-h4ftg]
Feb 23 17:34:16 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:34:16.725995408Z" level=info msg="Removed container 2badf436a3d373d56e3424a33e20c905dc6d2e06be3ab3526672b28202f77dc8: openshift-dns/dns-default-h4ftg/dns" id=2d7702bb-08c0-4534-a7d0-1309fb718512 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:34:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:34:16.736913 2112 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-dns/dns-default-657v4]
Feb 23 17:34:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:34:16.736994 2112 topology_manager.go:205] "Topology Admit Handler"
Feb 23 17:34:16 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:34:16.737198 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c072a683-1031-40cb-a1bc-1dac71bca46b" containerName="dns"
Feb 23 17:34:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:34:16.737251 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="c072a683-1031-40cb-a1bc-1dac71bca46b" containerName="dns"
Feb 23 17:34:16 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:34:16.737265 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c072a683-1031-40cb-a1bc-1dac71bca46b" containerName="kube-rbac-proxy"
Feb 23 17:34:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:34:16.737273 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="c072a683-1031-40cb-a1bc-1dac71bca46b" containerName="kube-rbac-proxy"
Feb 23 17:34:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:34:16.737332 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="c072a683-1031-40cb-a1bc-1dac71bca46b" containerName="dns"
Feb 23 17:34:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:34:16.737347 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="c072a683-1031-40cb-a1bc-1dac71bca46b" containerName="kube-rbac-proxy"
Feb 23 17:34:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:34:16.745075 2112 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-dns/dns-default-657v4]
Feb 23 17:34:16 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-pod757b7544_c265_49ce_a1f0_22cca4bf919f.slice.
Feb 23 17:34:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:34:16.844502 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/757b7544-c265-49ce-a1f0-22cca4bf919f-metrics-tls\") pod \"dns-default-657v4\" (UID: \"757b7544-c265-49ce-a1f0-22cca4bf919f\") " pod="openshift-dns/dns-default-657v4"
Feb 23 17:34:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:34:16.844547 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z9qm\" (UniqueName: \"kubernetes.io/projected/757b7544-c265-49ce-a1f0-22cca4bf919f-kube-api-access-4z9qm\") pod \"dns-default-657v4\" (UID: \"757b7544-c265-49ce-a1f0-22cca4bf919f\") " pod="openshift-dns/dns-default-657v4"
Feb 23 17:34:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:34:16.844732 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/757b7544-c265-49ce-a1f0-22cca4bf919f-config-volume\") pod \"dns-default-657v4\" (UID: \"757b7544-c265-49ce-a1f0-22cca4bf919f\") " pod="openshift-dns/dns-default-657v4"
Feb 23 17:34:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:34:16.945291 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/757b7544-c265-49ce-a1f0-22cca4bf919f-config-volume\") pod \"dns-default-657v4\" (UID: \"757b7544-c265-49ce-a1f0-22cca4bf919f\") " pod="openshift-dns/dns-default-657v4"
Feb 23 17:34:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:34:16.945328 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/757b7544-c265-49ce-a1f0-22cca4bf919f-metrics-tls\") pod \"dns-default-657v4\" (UID: \"757b7544-c265-49ce-a1f0-22cca4bf919f\") " pod="openshift-dns/dns-default-657v4"
Feb 23 17:34:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:34:16.945357 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-4z9qm\" (UniqueName: \"kubernetes.io/projected/757b7544-c265-49ce-a1f0-22cca4bf919f-kube-api-access-4z9qm\") pod \"dns-default-657v4\" (UID: \"757b7544-c265-49ce-a1f0-22cca4bf919f\") " pod="openshift-dns/dns-default-657v4"
Feb 23 17:34:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:34:16.946174 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/757b7544-c265-49ce-a1f0-22cca4bf919f-config-volume\") pod \"dns-default-657v4\" (UID: \"757b7544-c265-49ce-a1f0-22cca4bf919f\") " pod="openshift-dns/dns-default-657v4"
Feb 23 17:34:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:34:16.947872 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/757b7544-c265-49ce-a1f0-22cca4bf919f-metrics-tls\") pod \"dns-default-657v4\" (UID: \"757b7544-c265-49ce-a1f0-22cca4bf919f\") " pod="openshift-dns/dns-default-657v4"
Feb 23 17:34:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:34:16.963422 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z9qm\" (UniqueName: \"kubernetes.io/projected/757b7544-c265-49ce-a1f0-22cca4bf919f-kube-api-access-4z9qm\") pod \"dns-default-657v4\" (UID: \"757b7544-c265-49ce-a1f0-22cca4bf919f\") " pod="openshift-dns/dns-default-657v4"
Feb 23 17:34:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:34:17.051142 2112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-657v4"
Feb 23 17:34:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:34:17.051647207Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=076cd0d2-fdf0-4d7d-9a9a-2ed41cdf3e8f name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:34:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:34:17.051725698Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:34:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:34:17.071495727Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:9ac9106efc7becfc75e53c6f3bbb75c9b475700866d36b78720d1821a3f54d87 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/68c19172-c7c8-4a6b-880c-e79152a16a50 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 17:34:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:34:17.071525300Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 17:34:17 ip-10-0-136-68 systemd-udevd[77094]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 23 17:34:17 ip-10-0-136-68 systemd-udevd[77094]: Could not generate persistent MAC address for 9ac9106efc7becf: No such file or directory
Feb 23 17:34:17 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): 9ac9106efc7becf: link is not ready
Feb 23 17:34:17 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
Feb 23 17:34:17 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 23 17:34:17 ip-10-0-136-68 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): 9ac9106efc7becf: link becomes ready
Feb 23 17:34:17 ip-10-0-136-68 NetworkManager[1147]: [1677173657.2209] device (9ac9106efc7becf): carrier: link connected
Feb 23 17:34:17 ip-10-0-136-68 NetworkManager[1147]: [1677173657.2229] manager: (9ac9106efc7becf): new Veth device (/org/freedesktop/NetworkManager/Devices/80)
Feb 23 17:34:17 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00450|bridge|INFO|bridge br-int: added interface 9ac9106efc7becf on port 35
Feb 23 17:34:17 ip-10-0-136-68 NetworkManager[1147]: [1677173657.2462] manager: (9ac9106efc7becf): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/81)
Feb 23 17:34:17 ip-10-0-136-68 kernel: device 9ac9106efc7becf entered promiscuous mode
Feb 23 17:34:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:34:17.314547 2112 kubelet.go:2126] "SyncLoop UPDATE" source="api" pods=[openshift-dns/dns-default-657v4]
Feb 23 17:34:17 ip-10-0-136-68 crio[2062]: I0223 17:34:17.200634 77083 ovs.go:90] Maximum command line arguments set to: 191102
Feb 23 17:34:17 ip-10-0-136-68 crio[2062]: 2023-02-23T17:34:17Z [verbose] Add: openshift-dns:dns-default-657v4:757b7544-c265-49ce-a1f0-22cca4bf919f:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"9ac9106efc7becf","mac":"9e:63:d8:32:7a:04"},{"name":"eth0","mac":"0a:58:0a:81:02:04","sandbox":"/var/run/netns/68c19172-c7c8-4a6b-880c-e79152a16a50"}],"ips":[{"version":"4","interface":1,"address":"10.129.2.4/23","gateway":"10.129.2.1"}],"dns":{}}
Feb 23 17:34:17
ip-10-0-136-68 crio[2062]: I0223 17:34:17.297126 77076 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-dns", Name:"dns-default-657v4", UID:"757b7544-c265-49ce-a1f0-22cca4bf919f", APIVersion:"v1", ResourceVersion:"82038", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.129.2.4/23] from ovn-kubernetes Feb 23 17:34:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:34:17.316000180Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:9ac9106efc7becfc75e53c6f3bbb75c9b475700866d36b78720d1821a3f54d87 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/68c19172-c7c8-4a6b-880c-e79152a16a50 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:34:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:34:17.316134953Z" level=info msg="Checking pod openshift-dns_dns-default-657v4 for CNI network multus-cni-network (type=multus)" Feb 23 17:34:17 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:34:17.319133 2112 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod757b7544_c265_49ce_a1f0_22cca4bf919f.slice/crio-9ac9106efc7becfc75e53c6f3bbb75c9b475700866d36b78720d1821a3f54d87.scope WatchSource:0}: Error finding container 9ac9106efc7becfc75e53c6f3bbb75c9b475700866d36b78720d1821a3f54d87: Status 404 returned error can't find the container with id 9ac9106efc7becfc75e53c6f3bbb75c9b475700866d36b78720d1821a3f54d87 Feb 23 17:34:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:34:17.321969787Z" level=info msg="Ran pod sandbox 9ac9106efc7becfc75e53c6f3bbb75c9b475700866d36b78720d1821a3f54d87 with infra container: openshift-dns/dns-default-657v4/POD" id=076cd0d2-fdf0-4d7d-9a9a-2ed41cdf3e8f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:34:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:34:17.322739591Z" level=info msg="Checking image status: 
registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:f8fcc51da90bcdc9708708ef2d3daa1a9d147c4897dd7d51296b61d8b1d353a5" id=fbdca0a2-b99d-492a-b66f-6c8871f08bfd name=/runtime.v1.ImageService/ImageStatus Feb 23 17:34:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:34:17.322936963Z" level=info msg="Image registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:f8fcc51da90bcdc9708708ef2d3daa1a9d147c4897dd7d51296b61d8b1d353a5 not found" id=fbdca0a2-b99d-492a-b66f-6c8871f08bfd name=/runtime.v1.ImageService/ImageStatus Feb 23 17:34:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:34:17.323459548Z" level=info msg="Pulling image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:f8fcc51da90bcdc9708708ef2d3daa1a9d147c4897dd7d51296b61d8b1d353a5" id=2f039bfe-c57c-4e3e-9b2a-6cdf6a3cc9d1 name=/runtime.v1.ImageService/PullImage Feb 23 17:34:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:34:17.325829231Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:f8fcc51da90bcdc9708708ef2d3daa1a9d147c4897dd7d51296b61d8b1d353a5\"" Feb 23 17:34:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:34:17.677273 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-657v4" event=&{ID:757b7544-c265-49ce-a1f0-22cca4bf919f Type:ContainerStarted Data:9ac9106efc7becfc75e53c6f3bbb75c9b475700866d36b78720d1821a3f54d87} Feb 23 17:34:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:34:18.120045 2112 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=c072a683-1031-40cb-a1bc-1dac71bca46b path="/var/lib/kubelet/pods/c072a683-1031-40cb-a1bc-1dac71bca46b/volumes" Feb 23 17:34:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:34:18.402313749Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:f8fcc51da90bcdc9708708ef2d3daa1a9d147c4897dd7d51296b61d8b1d353a5\"" Feb 23 17:34:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:34:18.550944 2112 
patch_prober.go:29] interesting pod/dns-default-h4ftg container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.129.2.6:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 17:34:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:34:18.551018 2112 prober.go:114] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-h4ftg" podUID=c072a683-1031-40cb-a1bc-1dac71bca46b containerName="dns" probeResult=failure output="Get \"http://10.129.2.6:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 23 17:34:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:34:22.345976882Z" level=info msg="Pulled image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:f8fcc51da90bcdc9708708ef2d3daa1a9d147c4897dd7d51296b61d8b1d353a5" id=2f039bfe-c57c-4e3e-9b2a-6cdf6a3cc9d1 name=/runtime.v1.ImageService/PullImage Feb 23 17:34:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:34:22.346708369Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:f8fcc51da90bcdc9708708ef2d3daa1a9d147c4897dd7d51296b61d8b1d353a5" id=d32b2fd4-9245-4d72-93c4-59b2fe17266d name=/runtime.v1.ImageService/ImageStatus Feb 23 17:34:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:34:22.347899644Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:b2febb2f351a614ed9fc603a45be3f55cf7bc928586546713ac7534d4a14008c,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:f8fcc51da90bcdc9708708ef2d3daa1a9d147c4897dd7d51296b61d8b1d353a5],Size_:419173685,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=d32b2fd4-9245-4d72-93c4-59b2fe17266d name=/runtime.v1.ImageService/ImageStatus Feb 23 17:34:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:34:22.348548408Z" level=info msg="Creating container: openshift-dns/dns-default-657v4/dns" 
id=8499d6b0-5c8b-48e0-abcd-da30eea6f468 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:34:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:34:22.348648397Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:34:22 ip-10-0-136-68 systemd[1]: Started crio-conmon-9dbee011c01639cee1b040ab978b0011e0b77e60af6b347fd119d607386548a3.scope. Feb 23 17:34:22 ip-10-0-136-68 systemd[1]: Started libcontainer container 9dbee011c01639cee1b040ab978b0011e0b77e60af6b347fd119d607386548a3. Feb 23 17:34:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:34:22.537403203Z" level=info msg="Created container 9dbee011c01639cee1b040ab978b0011e0b77e60af6b347fd119d607386548a3: openshift-dns/dns-default-657v4/dns" id=8499d6b0-5c8b-48e0-abcd-da30eea6f468 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:34:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:34:22.538030184Z" level=info msg="Starting container: 9dbee011c01639cee1b040ab978b0011e0b77e60af6b347fd119d607386548a3" id=a411e0ad-9e5b-4149-aeb0-9f5598f92a85 name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:34:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:34:22.545336556Z" level=info msg="Started container" PID=77176 containerID=9dbee011c01639cee1b040ab978b0011e0b77e60af6b347fd119d607386548a3 description=openshift-dns/dns-default-657v4/dns id=a411e0ad-9e5b-4149-aeb0-9f5598f92a85 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9ac9106efc7becfc75e53c6f3bbb75c9b475700866d36b78720d1821a3f54d87 Feb 23 17:34:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:34:22.566640734Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0" id=97181580-617f-4dfa-83e6-55713c4deffe name=/runtime.v1.ImageService/ImageStatus Feb 23 17:34:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:34:22.566864117Z" level=info msg="Image status: 
&ImageStatusResponse{Image:&Image{Id:e8505ce6d56c9d2fa3ddbbc4dd2c4096db686f042ecf82eed342af6e60223854,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0],Size_:402716043,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=97181580-617f-4dfa-83e6-55713c4deffe name=/runtime.v1.ImageService/ImageStatus Feb 23 17:34:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:34:22.568341086Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0" id=84786c20-504c-4408-8827-c8a67a4f6019 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:34:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:34:22.568497514Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e8505ce6d56c9d2fa3ddbbc4dd2c4096db686f042ecf82eed342af6e60223854,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0],Size_:402716043,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=84786c20-504c-4408-8827-c8a67a4f6019 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:34:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:34:22.569273828Z" level=info msg="Creating container: openshift-dns/dns-default-657v4/kube-rbac-proxy" id=65231b4c-118f-4060-8540-62f558052156 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:34:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:34:22.569371514Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:34:22 ip-10-0-136-68 systemd[1]: Started crio-conmon-63b57363566114a13a277fe7e02f0b73760020e70f03d394ec8b0229ba369422.scope. 
Feb 23 17:34:22 ip-10-0-136-68 systemd[1]: Started libcontainer container 63b57363566114a13a277fe7e02f0b73760020e70f03d394ec8b0229ba369422. Feb 23 17:34:22 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:34:22.688564 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-657v4" event=&{ID:757b7544-c265-49ce-a1f0-22cca4bf919f Type:ContainerStarted Data:9dbee011c01639cee1b040ab978b0011e0b77e60af6b347fd119d607386548a3} Feb 23 17:34:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:34:22.762201330Z" level=info msg="Created container 63b57363566114a13a277fe7e02f0b73760020e70f03d394ec8b0229ba369422: openshift-dns/dns-default-657v4/kube-rbac-proxy" id=65231b4c-118f-4060-8540-62f558052156 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:34:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:34:22.762801509Z" level=info msg="Starting container: 63b57363566114a13a277fe7e02f0b73760020e70f03d394ec8b0229ba369422" id=b1086572-c563-4956-bf13-0b03e3438c66 name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:34:22 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:34:22.781469130Z" level=info msg="Started container" PID=77221 containerID=63b57363566114a13a277fe7e02f0b73760020e70f03d394ec8b0229ba369422 description=openshift-dns/dns-default-657v4/kube-rbac-proxy id=b1086572-c563-4956-bf13-0b03e3438c66 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9ac9106efc7becfc75e53c6f3bbb75c9b475700866d36b78720d1821a3f54d87 Feb 23 17:34:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:34:23.691403 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-657v4" event=&{ID:757b7544-c265-49ce-a1f0-22cca4bf919f Type:ContainerStarted Data:63b57363566114a13a277fe7e02f0b73760020e70f03d394ec8b0229ba369422} Feb 23 17:34:23 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:34:23.691514 2112 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-657v4" Feb 23 17:34:25 ip-10-0-136-68 
ovs-vswitchd[1105]: ovs|00451|connmgr|INFO|br-ex<->unix#1723: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:34:32 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:34:32.052945 2112 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-657v4" Feb 23 17:34:32 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00452|connmgr|INFO|br-ex<->unix#1732: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:34:32 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00453|connmgr|INFO|br-ex<->unix#1735: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:34:47 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00454|connmgr|INFO|br-ex<->unix#1738: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:34:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:34:52.437320393Z" level=info msg="Stopping pod sandbox: cb072e675296b6a74623a0a4aa40ec554a26d6b1b7c32048297353c0abb8cb6e" id=f0eb3469-eeea-401f-9b0d-5343b0a2bd16 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:34:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:34:52.437374879Z" level=info msg="Stopped pod sandbox (already stopped): cb072e675296b6a74623a0a4aa40ec554a26d6b1b7c32048297353c0abb8cb6e" id=f0eb3469-eeea-401f-9b0d-5343b0a2bd16 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:34:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:34:52.437580782Z" level=info msg="Removing pod sandbox: cb072e675296b6a74623a0a4aa40ec554a26d6b1b7c32048297353c0abb8cb6e" id=e666220e-fc4e-472c-b30c-62988dfb3053 name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 17:34:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:34:52.446565340Z" level=info msg="Removed pod sandbox: cb072e675296b6a74623a0a4aa40ec554a26d6b1b7c32048297353c0abb8cb6e" id=e666220e-fc4e-472c-b30c-62988dfb3053 name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 17:34:52 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:34:52.447587 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"2ae211733b9d38f87c856cc5d2197f3933006872bbf3d858ad27eb7b5cc2415c\": container with ID starting with 2ae211733b9d38f87c856cc5d2197f3933006872bbf3d858ad27eb7b5cc2415c not found: ID does not exist" containerID="2ae211733b9d38f87c856cc5d2197f3933006872bbf3d858ad27eb7b5cc2415c" Feb 23 17:34:52 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:34:52.447625 2112 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="2ae211733b9d38f87c856cc5d2197f3933006872bbf3d858ad27eb7b5cc2415c" err="rpc error: code = NotFound desc = could not find container \"2ae211733b9d38f87c856cc5d2197f3933006872bbf3d858ad27eb7b5cc2415c\": container with ID starting with 2ae211733b9d38f87c856cc5d2197f3933006872bbf3d858ad27eb7b5cc2415c not found: ID does not exist" Feb 23 17:34:52 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:34:52.447928 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfc013878334f77e66b5ee2a9bd1b1d89d15e37cbfe933e68e4481f20f438f64\": container with ID starting with bfc013878334f77e66b5ee2a9bd1b1d89d15e37cbfe933e68e4481f20f438f64 not found: ID does not exist" containerID="bfc013878334f77e66b5ee2a9bd1b1d89d15e37cbfe933e68e4481f20f438f64" Feb 23 17:34:52 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:34:52.447950 2112 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="bfc013878334f77e66b5ee2a9bd1b1d89d15e37cbfe933e68e4481f20f438f64" err="rpc error: code = NotFound desc = could not find container \"bfc013878334f77e66b5ee2a9bd1b1d89d15e37cbfe933e68e4481f20f438f64\": container with ID starting with bfc013878334f77e66b5ee2a9bd1b1d89d15e37cbfe933e68e4481f20f438f64 not found: ID does not exist" Feb 23 17:34:56 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00455|connmgr|INFO|br-int<->unix#1484: 121 flow_mods in the 37 s starting 40 s ago (54 adds, 66 deletes, 1 modifications) Feb 23 17:35:02 ip-10-0-136-68 ovs-vswitchd[1105]: 
ovs|00456|connmgr|INFO|br-ex<->unix#1748: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:35:17 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00457|connmgr|INFO|br-ex<->unix#1751: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:35:32 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00458|connmgr|INFO|br-ex<->unix#1762: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:35:47 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00459|connmgr|INFO|br-ex<->unix#1765: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:35:56 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00460|connmgr|INFO|br-int<->unix#1484: 26 flow_mods in the 36 s starting 47 s ago (13 adds, 13 deletes) Feb 23 17:36:02 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00461|connmgr|INFO|br-ex<->unix#1775: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:36:17 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00462|connmgr|INFO|br-ex<->unix#1778: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:36:32 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00463|connmgr|INFO|br-ex<->unix#1788: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:36:47 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00464|connmgr|INFO|br-ex<->unix#1791: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:36:56 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00465|connmgr|INFO|br-int<->unix#1484: 16 flow_mods in the 36 s starting 49 s ago (8 adds, 8 deletes) Feb 23 17:37:02 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00466|connmgr|INFO|br-ex<->unix#1801: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:37:17 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00467|connmgr|INFO|br-ex<->unix#1804: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:37:32 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00468|connmgr|INFO|br-ex<->unix#1814: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:37:36 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00469|connmgr|INFO|br-ex<->unix#1817: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:37:36 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00470|connmgr|INFO|br-ex<->unix#1820: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:37:36 ip-10-0-136-68 
ovs-vswitchd[1105]: ovs|00471|connmgr|INFO|br-ex<->unix#1823: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:37:36 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00472|connmgr|INFO|br-ex<->unix#1826: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:37:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:37:46.030538349Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be64c744b92d1b1df463d84d8277c38069f3ce4e8e95ce84d4a6ffac6dc53710" id=400c7f74-c5f6-47e9-9655-ae117d4a98df name=/runtime.v1.ImageService/ImageStatus Feb 23 17:37:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:37:46.030798702Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:884319081b9650108dd2a638921522ef3df145e924d566325c975ca28709af4c,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be64c744b92d1b1df463d84d8277c38069f3ce4e8e95ce84d4a6ffac6dc53710],Size_:350038335,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=400c7f74-c5f6-47e9-9655-ae117d4a98df name=/runtime.v1.ImageService/ImageStatus Feb 23 17:37:51 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00473|connmgr|INFO|br-ex<->unix#1829: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:37:56 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00474|connmgr|INFO|br-int<->unix#1484: 8 flow_mods in the 47 s starting 57 s ago (4 adds, 4 deletes) Feb 23 17:38:06 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00475|connmgr|INFO|br-ex<->unix#1839: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:38:21 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00476|connmgr|INFO|br-ex<->unix#1842: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:38:36 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00477|connmgr|INFO|br-ex<->unix#1852: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:38:49 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:49.848073 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-machine-config-operator/machine-config-daemon-d5wlc] Feb 23 17:38:49 ip-10-0-136-68 
kubenswrapper[2112]: I0223 17:38:49.848264 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d5wlc" podUID=b97e7fe5-fe52-4769-bb52-fc233e05c05e containerName="machine-config-daemon" containerID="cri-o://66d294f1bd164afef67a512f0c741a2a0fdca6f95539ae28a8abb059b9814065" gracePeriod=600 Feb 23 17:38:49 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:49.848391 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d5wlc" podUID=b97e7fe5-fe52-4769-bb52-fc233e05c05e containerName="oauth-proxy" containerID="cri-o://b0d136d5989001adcb824f34cd619701913fd6b05efeeab47452aa99551fd887" gracePeriod=600 Feb 23 17:38:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:38:49.848684121Z" level=info msg="Stopping container: 66d294f1bd164afef67a512f0c741a2a0fdca6f95539ae28a8abb059b9814065 (timeout: 600s)" id=de9c8958-b0b0-4cb6-90e6-f7e72d334bae name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:38:49 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:38:49.848700896Z" level=info msg="Stopping container: b0d136d5989001adcb824f34cd619701913fd6b05efeeab47452aa99551fd887 (timeout: 600s)" id=99f0daf4-b4a6-4f6a-bd7a-74879db03456 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:38:49 ip-10-0-136-68 conmon[2463]: conmon b0d136d5989001adcb82 : container 2488 exited with status 143 Feb 23 17:38:49 ip-10-0-136-68 systemd[1]: crio-conmon-b0d136d5989001adcb824f34cd619701913fd6b05efeeab47452aa99551fd887.scope: Succeeded. Feb 23 17:38:49 ip-10-0-136-68 systemd[1]: crio-conmon-b0d136d5989001adcb824f34cd619701913fd6b05efeeab47452aa99551fd887.scope: Consumed 25ms CPU time Feb 23 17:38:49 ip-10-0-136-68 systemd[1]: crio-b0d136d5989001adcb824f34cd619701913fd6b05efeeab47452aa99551fd887.scope: Succeeded. 
Feb 23 17:38:49 ip-10-0-136-68 systemd[1]: crio-b0d136d5989001adcb824f34cd619701913fd6b05efeeab47452aa99551fd887.scope: Consumed 1.184s CPU time Feb 23 17:38:49 ip-10-0-136-68 systemd[1]: crio-conmon-66d294f1bd164afef67a512f0c741a2a0fdca6f95539ae28a8abb059b9814065.scope: Succeeded. Feb 23 17:38:49 ip-10-0-136-68 systemd[1]: crio-conmon-66d294f1bd164afef67a512f0c741a2a0fdca6f95539ae28a8abb059b9814065.scope: Consumed 25ms CPU time Feb 23 17:38:49 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00478|connmgr|INFO|br-ex<->unix#1855: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:38:49 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00479|connmgr|INFO|br-ex<->unix#1858: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:38:50 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-25229a5295065bbc598aaff60bf0cb1ead1c259dec1503253d8bf391a13b95bb-merged.mount: Succeeded. Feb 23 17:38:50 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-25229a5295065bbc598aaff60bf0cb1ead1c259dec1503253d8bf391a13b95bb-merged.mount: Consumed 0 CPU time Feb 23 17:38:50 ip-10-0-136-68 systemd[1]: crio-66d294f1bd164afef67a512f0c741a2a0fdca6f95539ae28a8abb059b9814065.scope: Succeeded. Feb 23 17:38:50 ip-10-0-136-68 systemd[1]: crio-66d294f1bd164afef67a512f0c741a2a0fdca6f95539ae28a8abb059b9814065.scope: Consumed 1.887s CPU time Feb 23 17:38:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:38:50.027849458Z" level=info msg="Stopped container b0d136d5989001adcb824f34cd619701913fd6b05efeeab47452aa99551fd887: openshift-machine-config-operator/machine-config-daemon-d5wlc/oauth-proxy" id=99f0daf4-b4a6-4f6a-bd7a-74879db03456 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:38:50 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-8f219fb3e632398e49ca305047721ed68b67dbdf3ae7b7f7b4ded5905bbc003f-merged.mount: Succeeded. 
Feb 23 17:38:50 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-8f219fb3e632398e49ca305047721ed68b67dbdf3ae7b7f7b4ded5905bbc003f-merged.mount: Consumed 0 CPU time Feb 23 17:38:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:38:50.046147911Z" level=info msg="Stopped container 66d294f1bd164afef67a512f0c741a2a0fdca6f95539ae28a8abb059b9814065: openshift-machine-config-operator/machine-config-daemon-d5wlc/machine-config-daemon" id=de9c8958-b0b0-4cb6-90e6-f7e72d334bae name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:38:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:38:50.046505731Z" level=info msg="Stopping pod sandbox: 948cce2c73b0d657a2a521a1c008eb170053d4110ed7e8bd0838b09026647809" id=1a0c2eca-c2ab-4c60-873b-729b3dae5681 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:38:50 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-f725e728c0ec269d2cc7b89f860ff7001e4c6f0a67fbcff7f2dd0ef87ac4ab25-merged.mount: Succeeded. Feb 23 17:38:50 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-f725e728c0ec269d2cc7b89f860ff7001e4c6f0a67fbcff7f2dd0ef87ac4ab25-merged.mount: Consumed 0 CPU time Feb 23 17:38:50 ip-10-0-136-68 systemd[1]: run-utsns-7bede96e\x2d8124\x2d429b\x2d8d3d\x2db223763195c1.mount: Succeeded. Feb 23 17:38:50 ip-10-0-136-68 systemd[1]: run-utsns-7bede96e\x2d8124\x2d429b\x2d8d3d\x2db223763195c1.mount: Consumed 0 CPU time Feb 23 17:38:50 ip-10-0-136-68 systemd[1]: run-ipcns-7bede96e\x2d8124\x2d429b\x2d8d3d\x2db223763195c1.mount: Succeeded. 
Feb 23 17:38:50 ip-10-0-136-68 systemd[1]: run-ipcns-7bede96e\x2d8124\x2d429b\x2d8d3d\x2db223763195c1.mount: Consumed 0 CPU time
Feb 23 17:38:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:38:50.138709609Z" level=info msg="Stopped pod sandbox: 948cce2c73b0d657a2a521a1c008eb170053d4110ed7e8bd0838b09026647809" id=1a0c2eca-c2ab-4c60-873b-729b3dae5681 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:38:50 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:50.268383 2112 generic.go:296] "Generic (PLEG): container finished" podID=b97e7fe5-fe52-4769-bb52-fc233e05c05e containerID="b0d136d5989001adcb824f34cd619701913fd6b05efeeab47452aa99551fd887" exitCode=143
Feb 23 17:38:50 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:50.268410 2112 generic.go:296] "Generic (PLEG): container finished" podID=b97e7fe5-fe52-4769-bb52-fc233e05c05e containerID="66d294f1bd164afef67a512f0c741a2a0fdca6f95539ae28a8abb059b9814065" exitCode=0
Feb 23 17:38:50 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:50.268437 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d5wlc" event=&{ID:b97e7fe5-fe52-4769-bb52-fc233e05c05e Type:ContainerDied Data:b0d136d5989001adcb824f34cd619701913fd6b05efeeab47452aa99551fd887}
Feb 23 17:38:50 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:50.268461 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d5wlc" event=&{ID:b97e7fe5-fe52-4769-bb52-fc233e05c05e Type:ContainerDied Data:66d294f1bd164afef67a512f0c741a2a0fdca6f95539ae28a8abb059b9814065}
Feb 23 17:38:50 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:50.268473 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d5wlc" event=&{ID:b97e7fe5-fe52-4769-bb52-fc233e05c05e Type:ContainerDied Data:948cce2c73b0d657a2a521a1c008eb170053d4110ed7e8bd0838b09026647809}
Feb 23 17:38:50 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:50.268489 2112 scope.go:115] "RemoveContainer" containerID="b0d136d5989001adcb824f34cd619701913fd6b05efeeab47452aa99551fd887"
Feb 23 17:38:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:38:50.269209080Z" level=info msg="Removing container: b0d136d5989001adcb824f34cd619701913fd6b05efeeab47452aa99551fd887" id=993667fa-2d36-457c-953d-05ecdbb75d9e name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:38:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:38:50.288800718Z" level=info msg="Removed container b0d136d5989001adcb824f34cd619701913fd6b05efeeab47452aa99551fd887: openshift-machine-config-operator/machine-config-daemon-d5wlc/oauth-proxy" id=993667fa-2d36-457c-953d-05ecdbb75d9e name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:38:50 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:50.288942 2112 scope.go:115] "RemoveContainer" containerID="66d294f1bd164afef67a512f0c741a2a0fdca6f95539ae28a8abb059b9814065"
Feb 23 17:38:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:38:50.289492766Z" level=info msg="Removing container: 66d294f1bd164afef67a512f0c741a2a0fdca6f95539ae28a8abb059b9814065" id=ef59a83e-20bf-4cb6-aebf-6cc7d0351500 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:38:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:38:50.307413121Z" level=info msg="Removed container 66d294f1bd164afef67a512f0c741a2a0fdca6f95539ae28a8abb059b9814065: openshift-machine-config-operator/machine-config-daemon-d5wlc/machine-config-daemon" id=ef59a83e-20bf-4cb6-aebf-6cc7d0351500 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:38:50 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:50.307620 2112 scope.go:115] "RemoveContainer" containerID="b0d136d5989001adcb824f34cd619701913fd6b05efeeab47452aa99551fd887"
Feb 23 17:38:50 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:38:50.307912 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0d136d5989001adcb824f34cd619701913fd6b05efeeab47452aa99551fd887\": container with ID starting with b0d136d5989001adcb824f34cd619701913fd6b05efeeab47452aa99551fd887 not found: ID does not exist" containerID="b0d136d5989001adcb824f34cd619701913fd6b05efeeab47452aa99551fd887"
Feb 23 17:38:50 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:50.307952 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:b0d136d5989001adcb824f34cd619701913fd6b05efeeab47452aa99551fd887} err="failed to get container status \"b0d136d5989001adcb824f34cd619701913fd6b05efeeab47452aa99551fd887\": rpc error: code = NotFound desc = could not find container \"b0d136d5989001adcb824f34cd619701913fd6b05efeeab47452aa99551fd887\": container with ID starting with b0d136d5989001adcb824f34cd619701913fd6b05efeeab47452aa99551fd887 not found: ID does not exist"
Feb 23 17:38:50 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:50.307965 2112 scope.go:115] "RemoveContainer" containerID="66d294f1bd164afef67a512f0c741a2a0fdca6f95539ae28a8abb059b9814065"
Feb 23 17:38:50 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:38:50.308151 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66d294f1bd164afef67a512f0c741a2a0fdca6f95539ae28a8abb059b9814065\": container with ID starting with 66d294f1bd164afef67a512f0c741a2a0fdca6f95539ae28a8abb059b9814065 not found: ID does not exist" containerID="66d294f1bd164afef67a512f0c741a2a0fdca6f95539ae28a8abb059b9814065"
Feb 23 17:38:50 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:50.308176 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:66d294f1bd164afef67a512f0c741a2a0fdca6f95539ae28a8abb059b9814065} err="failed to get container status \"66d294f1bd164afef67a512f0c741a2a0fdca6f95539ae28a8abb059b9814065\": rpc error: code = NotFound desc = could not find container \"66d294f1bd164afef67a512f0c741a2a0fdca6f95539ae28a8abb059b9814065\": container with ID starting with 66d294f1bd164afef67a512f0c741a2a0fdca6f95539ae28a8abb059b9814065 not found: ID does not exist"
Feb 23 17:38:50 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:50.308200 2112 scope.go:115] "RemoveContainer" containerID="b0d136d5989001adcb824f34cd619701913fd6b05efeeab47452aa99551fd887"
Feb 23 17:38:50 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:50.308340 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:b0d136d5989001adcb824f34cd619701913fd6b05efeeab47452aa99551fd887} err="failed to get container status \"b0d136d5989001adcb824f34cd619701913fd6b05efeeab47452aa99551fd887\": rpc error: code = NotFound desc = could not find container \"b0d136d5989001adcb824f34cd619701913fd6b05efeeab47452aa99551fd887\": container with ID starting with b0d136d5989001adcb824f34cd619701913fd6b05efeeab47452aa99551fd887 not found: ID does not exist"
Feb 23 17:38:50 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:50.308356 2112 scope.go:115] "RemoveContainer" containerID="66d294f1bd164afef67a512f0c741a2a0fdca6f95539ae28a8abb059b9814065"
Feb 23 17:38:50 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:50.308560 2112 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:66d294f1bd164afef67a512f0c741a2a0fdca6f95539ae28a8abb059b9814065} err="failed to get container status \"66d294f1bd164afef67a512f0c741a2a0fdca6f95539ae28a8abb059b9814065\": rpc error: code = NotFound desc = could not find container \"66d294f1bd164afef67a512f0c741a2a0fdca6f95539ae28a8abb059b9814065\": container with ID starting with 66d294f1bd164afef67a512f0c741a2a0fdca6f95539ae28a8abb059b9814065 not found: ID does not exist"
Feb 23 17:38:50 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:50.339997 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m29j2\" (UniqueName: \"kubernetes.io/projected/b97e7fe5-fe52-4769-bb52-fc233e05c05e-kube-api-access-m29j2\") pod \"b97e7fe5-fe52-4769-bb52-fc233e05c05e\" (UID: \"b97e7fe5-fe52-4769-bb52-fc233e05c05e\") "
Feb 23 17:38:50 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:50.340083 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b97e7fe5-fe52-4769-bb52-fc233e05c05e-rootfs\") pod \"b97e7fe5-fe52-4769-bb52-fc233e05c05e\" (UID: \"b97e7fe5-fe52-4769-bb52-fc233e05c05e\") "
Feb 23 17:38:50 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:50.340109 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cookie-secret\" (UniqueName: \"kubernetes.io/secret/b97e7fe5-fe52-4769-bb52-fc233e05c05e-cookie-secret\") pod \"b97e7fe5-fe52-4769-bb52-fc233e05c05e\" (UID: \"b97e7fe5-fe52-4769-bb52-fc233e05c05e\") "
Feb 23 17:38:50 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:50.340134 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b97e7fe5-fe52-4769-bb52-fc233e05c05e-proxy-tls\") pod \"b97e7fe5-fe52-4769-bb52-fc233e05c05e\" (UID: \"b97e7fe5-fe52-4769-bb52-fc233e05c05e\") "
Feb 23 17:38:50 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:50.340141 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b97e7fe5-fe52-4769-bb52-fc233e05c05e-rootfs" (OuterVolumeSpecName: "rootfs") pod "b97e7fe5-fe52-4769-bb52-fc233e05c05e" (UID: "b97e7fe5-fe52-4769-bb52-fc233e05c05e"). InnerVolumeSpecName "rootfs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 17:38:50 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:50.350856 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b97e7fe5-fe52-4769-bb52-fc233e05c05e-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b97e7fe5-fe52-4769-bb52-fc233e05c05e" (UID: "b97e7fe5-fe52-4769-bb52-fc233e05c05e"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:38:50 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:50.350880 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b97e7fe5-fe52-4769-bb52-fc233e05c05e-kube-api-access-m29j2" (OuterVolumeSpecName: "kube-api-access-m29j2") pod "b97e7fe5-fe52-4769-bb52-fc233e05c05e" (UID: "b97e7fe5-fe52-4769-bb52-fc233e05c05e"). InnerVolumeSpecName "kube-api-access-m29j2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 17:38:50 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:50.358793 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b97e7fe5-fe52-4769-bb52-fc233e05c05e-cookie-secret" (OuterVolumeSpecName: "cookie-secret") pod "b97e7fe5-fe52-4769-bb52-fc233e05c05e" (UID: "b97e7fe5-fe52-4769-bb52-fc233e05c05e"). InnerVolumeSpecName "cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:38:50 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:50.441331 2112 reconciler.go:399] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b97e7fe5-fe52-4769-bb52-fc233e05c05e-proxy-tls\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:38:50 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:50.441368 2112 reconciler.go:399] "Volume detached for volume \"kube-api-access-m29j2\" (UniqueName: \"kubernetes.io/projected/b97e7fe5-fe52-4769-bb52-fc233e05c05e-kube-api-access-m29j2\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:38:50 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:50.441382 2112 reconciler.go:399] "Volume detached for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b97e7fe5-fe52-4769-bb52-fc233e05c05e-rootfs\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:38:50 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:50.441394 2112 reconciler.go:399] "Volume detached for volume \"cookie-secret\" (UniqueName: \"kubernetes.io/secret/b97e7fe5-fe52-4769-bb52-fc233e05c05e-cookie-secret\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:38:50 ip-10-0-136-68 systemd[1]: Removed slice libcontainer container kubepods-burstable-podb97e7fe5_fe52_4769_bb52_fc233e05c05e.slice.
Feb 23 17:38:50 ip-10-0-136-68 systemd[1]: kubepods-burstable-podb97e7fe5_fe52_4769_bb52_fc233e05c05e.slice: Consumed 3.122s CPU time
Feb 23 17:38:50 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:50.588209 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-machine-config-operator/machine-config-daemon-d5wlc]
Feb 23 17:38:50 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:50.591811 2112 kubelet.go:2129] "SyncLoop REMOVE" source="api" pods=[openshift-machine-config-operator/machine-config-daemon-d5wlc]
Feb 23 17:38:50 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:50.615208 2112 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-machine-config-operator/machine-config-daemon-2fx68]
Feb 23 17:38:50 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:50.615256 2112 topology_manager.go:205] "Topology Admit Handler"
Feb 23 17:38:50 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:38:50.615315 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b97e7fe5-fe52-4769-bb52-fc233e05c05e" containerName="machine-config-daemon"
Feb 23 17:38:50 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:50.615325 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="b97e7fe5-fe52-4769-bb52-fc233e05c05e" containerName="machine-config-daemon"
Feb 23 17:38:50 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:38:50.615335 2112 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b97e7fe5-fe52-4769-bb52-fc233e05c05e" containerName="oauth-proxy"
Feb 23 17:38:50 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:50.615340 2112 state_mem.go:107] "Deleted CPUSet assignment" podUID="b97e7fe5-fe52-4769-bb52-fc233e05c05e" containerName="oauth-proxy"
Feb 23 17:38:50 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:50.615376 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="b97e7fe5-fe52-4769-bb52-fc233e05c05e" containerName="oauth-proxy"
Feb 23 17:38:50 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:50.615384 2112 memory_manager.go:345] "RemoveStaleState removing state" podUID="b97e7fe5-fe52-4769-bb52-fc233e05c05e" containerName="machine-config-daemon"
Feb 23 17:38:50 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-podff7777c7_a1dc_413e_8da1_c4ba07527037.slice.
Feb 23 17:38:50 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:50.742997 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scnpz\" (UniqueName: \"kubernetes.io/projected/ff7777c7-a1dc-413e-8da1-c4ba07527037-kube-api-access-scnpz\") pod \"machine-config-daemon-2fx68\" (UID: \"ff7777c7-a1dc-413e-8da1-c4ba07527037\") " pod="openshift-machine-config-operator/machine-config-daemon-2fx68"
Feb 23 17:38:50 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:50.743199 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ff7777c7-a1dc-413e-8da1-c4ba07527037-rootfs\") pod \"machine-config-daemon-2fx68\" (UID: \"ff7777c7-a1dc-413e-8da1-c4ba07527037\") " pod="openshift-machine-config-operator/machine-config-daemon-2fx68"
Feb 23 17:38:50 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:50.743249 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cookie-secret\" (UniqueName: \"kubernetes.io/secret/ff7777c7-a1dc-413e-8da1-c4ba07527037-cookie-secret\") pod \"machine-config-daemon-2fx68\" (UID: \"ff7777c7-a1dc-413e-8da1-c4ba07527037\") " pod="openshift-machine-config-operator/machine-config-daemon-2fx68"
Feb 23 17:38:50 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:50.743279 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ff7777c7-a1dc-413e-8da1-c4ba07527037-proxy-tls\") pod \"machine-config-daemon-2fx68\" (UID: \"ff7777c7-a1dc-413e-8da1-c4ba07527037\") " pod="openshift-machine-config-operator/machine-config-daemon-2fx68"
Feb 23 17:38:50 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:50.844467 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-scnpz\" (UniqueName: \"kubernetes.io/projected/ff7777c7-a1dc-413e-8da1-c4ba07527037-kube-api-access-scnpz\") pod \"machine-config-daemon-2fx68\" (UID: \"ff7777c7-a1dc-413e-8da1-c4ba07527037\") " pod="openshift-machine-config-operator/machine-config-daemon-2fx68"
Feb 23 17:38:50 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:50.844525 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ff7777c7-a1dc-413e-8da1-c4ba07527037-rootfs\") pod \"machine-config-daemon-2fx68\" (UID: \"ff7777c7-a1dc-413e-8da1-c4ba07527037\") " pod="openshift-machine-config-operator/machine-config-daemon-2fx68"
Feb 23 17:38:50 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:50.844554 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"cookie-secret\" (UniqueName: \"kubernetes.io/secret/ff7777c7-a1dc-413e-8da1-c4ba07527037-cookie-secret\") pod \"machine-config-daemon-2fx68\" (UID: \"ff7777c7-a1dc-413e-8da1-c4ba07527037\") " pod="openshift-machine-config-operator/machine-config-daemon-2fx68"
Feb 23 17:38:50 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:50.844594 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ff7777c7-a1dc-413e-8da1-c4ba07527037-proxy-tls\") pod \"machine-config-daemon-2fx68\" (UID: \"ff7777c7-a1dc-413e-8da1-c4ba07527037\") " pod="openshift-machine-config-operator/machine-config-daemon-2fx68"
Feb 23 17:38:50 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:50.844727 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ff7777c7-a1dc-413e-8da1-c4ba07527037-rootfs\") pod \"machine-config-daemon-2fx68\" (UID: \"ff7777c7-a1dc-413e-8da1-c4ba07527037\") " pod="openshift-machine-config-operator/machine-config-daemon-2fx68"
Feb 23 17:38:50 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:50.847003 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ff7777c7-a1dc-413e-8da1-c4ba07527037-proxy-tls\") pod \"machine-config-daemon-2fx68\" (UID: \"ff7777c7-a1dc-413e-8da1-c4ba07527037\") " pod="openshift-machine-config-operator/machine-config-daemon-2fx68"
Feb 23 17:38:50 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:50.847096 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"cookie-secret\" (UniqueName: \"kubernetes.io/secret/ff7777c7-a1dc-413e-8da1-c4ba07527037-cookie-secret\") pod \"machine-config-daemon-2fx68\" (UID: \"ff7777c7-a1dc-413e-8da1-c4ba07527037\") " pod="openshift-machine-config-operator/machine-config-daemon-2fx68"
Feb 23 17:38:50 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:50.865011 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-scnpz\" (UniqueName: \"kubernetes.io/projected/ff7777c7-a1dc-413e-8da1-c4ba07527037-kube-api-access-scnpz\") pod \"machine-config-daemon-2fx68\" (UID: \"ff7777c7-a1dc-413e-8da1-c4ba07527037\") " pod="openshift-machine-config-operator/machine-config-daemon-2fx68"
Feb 23 17:38:50 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:50.929003 2112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-2fx68"
Feb 23 17:38:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:38:50.929440056Z" level=info msg="Running pod sandbox: openshift-machine-config-operator/machine-config-daemon-2fx68/POD" id=d551e35f-3c7d-4ae0-b839-02bdbff186bd name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:38:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:38:50.929503058Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:38:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:38:50.947197019Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=d551e35f-3c7d-4ae0-b839-02bdbff186bd name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:38:50 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:38:50.949838 2112 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff7777c7_a1dc_413e_8da1_c4ba07527037.slice/crio-422c8976e5316750bfcd5cc5a0d60f7a3fcd666801a04dc9000f94c8dd85d5d5.scope WatchSource:0}: Error finding container 422c8976e5316750bfcd5cc5a0d60f7a3fcd666801a04dc9000f94c8dd85d5d5: Status 404 returned error can't find the container with id 422c8976e5316750bfcd5cc5a0d60f7a3fcd666801a04dc9000f94c8dd85d5d5
Feb 23 17:38:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:38:50.951108736Z" level=info msg="Ran pod sandbox 422c8976e5316750bfcd5cc5a0d60f7a3fcd666801a04dc9000f94c8dd85d5d5 with infra container: openshift-machine-config-operator/machine-config-daemon-2fx68/POD" id=d551e35f-3c7d-4ae0-b839-02bdbff186bd name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:38:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:38:50.951872998Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3b92fdc24d17c7bf6e946d497f870368cd592fbd8f660e5718271a0b49fc8e96" id=94027a3b-a53d-4d66-96a3-c826f9fee367 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:38:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:38:50.952012074Z" level=info msg="Image registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3b92fdc24d17c7bf6e946d497f870368cd592fbd8f660e5718271a0b49fc8e96 not found" id=94027a3b-a53d-4d66-96a3-c826f9fee367 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:38:50 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:50.952316 2112 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 23 17:38:50 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:38:50.952534467Z" level=info msg="Pulling image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3b92fdc24d17c7bf6e946d497f870368cd592fbd8f660e5718271a0b49fc8e96" id=e97e299c-6f46-40d8-922e-f382ce223b03 name=/runtime.v1.ImageService/PullImage
Feb 23 17:38:51 ip-10-0-136-68 systemd[1]: run-netns-7bede96e\x2d8124\x2d429b\x2d8d3d\x2db223763195c1.mount: Succeeded.
Feb 23 17:38:51 ip-10-0-136-68 systemd[1]: run-netns-7bede96e\x2d8124\x2d429b\x2d8d3d\x2db223763195c1.mount: Consumed 0 CPU time
Feb 23 17:38:51 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-948cce2c73b0d657a2a521a1c008eb170053d4110ed7e8bd0838b09026647809-userdata-shm.mount: Succeeded.
Feb 23 17:38:51 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-948cce2c73b0d657a2a521a1c008eb170053d4110ed7e8bd0838b09026647809-userdata-shm.mount: Consumed 0 CPU time
Feb 23 17:38:51 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-b97e7fe5\x2dfe52\x2d4769\x2dbb52\x2dfc233e05c05e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dm29j2.mount: Succeeded.
Feb 23 17:38:51 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-b97e7fe5\x2dfe52\x2d4769\x2dbb52\x2dfc233e05c05e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dm29j2.mount: Consumed 0 CPU time
Feb 23 17:38:51 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-b97e7fe5\x2dfe52\x2d4769\x2dbb52\x2dfc233e05c05e-volumes-kubernetes.io\x7esecret-cookie\x2dsecret.mount: Succeeded.
Feb 23 17:38:51 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-b97e7fe5\x2dfe52\x2d4769\x2dbb52\x2dfc233e05c05e-volumes-kubernetes.io\x7esecret-cookie\x2dsecret.mount: Consumed 0 CPU time
Feb 23 17:38:51 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-b97e7fe5\x2dfe52\x2d4769\x2dbb52\x2dfc233e05c05e-volumes-kubernetes.io\x7esecret-proxy\x2dtls.mount: Succeeded.
Feb 23 17:38:51 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-b97e7fe5\x2dfe52\x2d4769\x2dbb52\x2dfc233e05c05e-volumes-kubernetes.io\x7esecret-proxy\x2dtls.mount: Consumed 0 CPU time
Feb 23 17:38:51 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:38:51.036952487Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3b92fdc24d17c7bf6e946d497f870368cd592fbd8f660e5718271a0b49fc8e96\""
Feb 23 17:38:51 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:51.271198 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2fx68" event=&{ID:ff7777c7-a1dc-413e-8da1-c4ba07527037 Type:ContainerStarted Data:422c8976e5316750bfcd5cc5a0d60f7a3fcd666801a04dc9000f94c8dd85d5d5}
Feb 23 17:38:51 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:38:51.826006069Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3b92fdc24d17c7bf6e946d497f870368cd592fbd8f660e5718271a0b49fc8e96\""
Feb 23 17:38:52 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:52.120104 2112 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=b97e7fe5-fe52-4769-bb52-fc233e05c05e path="/var/lib/kubelet/pods/b97e7fe5-fe52-4769-bb52-fc233e05c05e/volumes"
Feb 23 17:38:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:38:52.459475975Z" level=info msg="Stopping pod sandbox: 948cce2c73b0d657a2a521a1c008eb170053d4110ed7e8bd0838b09026647809" id=79e66008-86c5-4535-83e0-6f106e059c9f name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:38:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:38:52.459513285Z" level=info msg="Stopped pod sandbox (already stopped): 948cce2c73b0d657a2a521a1c008eb170053d4110ed7e8bd0838b09026647809" id=79e66008-86c5-4535-83e0-6f106e059c9f name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:38:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:38:52.459785199Z" level=info msg="Removing pod sandbox: 948cce2c73b0d657a2a521a1c008eb170053d4110ed7e8bd0838b09026647809" id=66681789-ac62-411a-b434-e79b5ddb9209 name=/runtime.v1.RuntimeService/RemovePodSandbox
Feb 23 17:38:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:38:52.468494910Z" level=info msg="Removed pod sandbox: 948cce2c73b0d657a2a521a1c008eb170053d4110ed7e8bd0838b09026647809" id=66681789-ac62-411a-b434-e79b5ddb9209 name=/runtime.v1.RuntimeService/RemovePodSandbox
Feb 23 17:38:52 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:38:52.469694 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69f9cd415e4e9bca49ba6ec5503ea9fe66e652868e3a2bde2c26833f37e1b7c3\": container with ID starting with 69f9cd415e4e9bca49ba6ec5503ea9fe66e652868e3a2bde2c26833f37e1b7c3 not found: ID does not exist" containerID="69f9cd415e4e9bca49ba6ec5503ea9fe66e652868e3a2bde2c26833f37e1b7c3"
Feb 23 17:38:52 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:52.469725 2112 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="69f9cd415e4e9bca49ba6ec5503ea9fe66e652868e3a2bde2c26833f37e1b7c3" err="rpc error: code = NotFound desc = could not find container \"69f9cd415e4e9bca49ba6ec5503ea9fe66e652868e3a2bde2c26833f37e1b7c3\": container with ID starting with 69f9cd415e4e9bca49ba6ec5503ea9fe66e652868e3a2bde2c26833f37e1b7c3 not found: ID does not exist"
Feb 23 17:38:52 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:38:52.469941 2112 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"532060bd7464ded47ac6c220512d7ce160f4d1930ce612a7fec9bd1c797c50c4\": container with ID starting with 532060bd7464ded47ac6c220512d7ce160f4d1930ce612a7fec9bd1c797c50c4 not found: ID does not exist" containerID="532060bd7464ded47ac6c220512d7ce160f4d1930ce612a7fec9bd1c797c50c4"
Feb 23 17:38:52 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:52.469960 2112 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="532060bd7464ded47ac6c220512d7ce160f4d1930ce612a7fec9bd1c797c50c4" err="rpc error: code = NotFound desc = could not find container \"532060bd7464ded47ac6c220512d7ce160f4d1930ce612a7fec9bd1c797c50c4\": container with ID starting with 532060bd7464ded47ac6c220512d7ce160f4d1930ce612a7fec9bd1c797c50c4 not found: ID does not exist"
Feb 23 17:38:56 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:38:56.042700500Z" level=info msg="Pulled image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3b92fdc24d17c7bf6e946d497f870368cd592fbd8f660e5718271a0b49fc8e96" id=e97e299c-6f46-40d8-922e-f382ce223b03 name=/runtime.v1.ImageService/PullImage
Feb 23 17:38:56 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:38:56.043456568Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3b92fdc24d17c7bf6e946d497f870368cd592fbd8f660e5718271a0b49fc8e96" id=25ae434d-d47d-4f29-b620-e864b4d5a1d4 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:38:56 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:38:56.044634522Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8b7f57210bc9d9819a65365f893f7ec8fdaf17b52ffa1d38172094a5a6fe4c7d,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3b92fdc24d17c7bf6e946d497f870368cd592fbd8f660e5718271a0b49fc8e96],Size_:540802166,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=25ae434d-d47d-4f29-b620-e864b4d5a1d4 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:38:56 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:38:56.045238998Z" level=info msg="Creating container: openshift-machine-config-operator/machine-config-daemon-2fx68/machine-config-daemon" id=083c599b-7f2d-42ab-89d2-b4ad7f475345 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:38:56 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:38:56.045316896Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:38:56 ip-10-0-136-68 systemd[1]: Started crio-conmon-42b96157d17f74f25f71d7357da12ff08910e955866411f5c7c62aebd3027126.scope.
Feb 23 17:38:56 ip-10-0-136-68 systemd[1]: Started libcontainer container 42b96157d17f74f25f71d7357da12ff08910e955866411f5c7c62aebd3027126.
Feb 23 17:38:56 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:38:56.157879499Z" level=info msg="Created container 42b96157d17f74f25f71d7357da12ff08910e955866411f5c7c62aebd3027126: openshift-machine-config-operator/machine-config-daemon-2fx68/machine-config-daemon" id=083c599b-7f2d-42ab-89d2-b4ad7f475345 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:38:56 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:38:56.158251157Z" level=info msg="Starting container: 42b96157d17f74f25f71d7357da12ff08910e955866411f5c7c62aebd3027126" id=4c0ebe7b-6446-4c09-9e3c-4a03afe1e891 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 17:38:56 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:38:56.166169189Z" level=info msg="Started container" PID=79932 containerID=42b96157d17f74f25f71d7357da12ff08910e955866411f5c7c62aebd3027126 description=openshift-machine-config-operator/machine-config-daemon-2fx68/machine-config-daemon id=4c0ebe7b-6446-4c09-9e3c-4a03afe1e891 name=/runtime.v1.RuntimeService/StartContainer sandboxID=422c8976e5316750bfcd5cc5a0d60f7a3fcd666801a04dc9000f94c8dd85d5d5
Feb 23 17:38:56 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:38:56.178382456Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:87f1812b55e14f104d3ab49c06711f5ba37e94490717470e42c12749fe3c90ad" id=8097a3ac-c5be-4ad7-8e7d-e0ce94e754c1 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:38:56 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:38:56.178545852Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c1d577960d1c46e90165da215c04054d71634cb8701ebd504e510368ee7bd65,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:87f1812b55e14f104d3ab49c06711f5ba37e94490717470e42c12749fe3c90ad],Size_:366055841,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=8097a3ac-c5be-4ad7-8e7d-e0ce94e754c1 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:38:56 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:38:56.179793382Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:87f1812b55e14f104d3ab49c06711f5ba37e94490717470e42c12749fe3c90ad" id=94bdea46-57f4-4a10-9375-80d23c605f2b name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:38:56 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:38:56.179967227Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c1d577960d1c46e90165da215c04054d71634cb8701ebd504e510368ee7bd65,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:87f1812b55e14f104d3ab49c06711f5ba37e94490717470e42c12749fe3c90ad],Size_:366055841,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=94bdea46-57f4-4a10-9375-80d23c605f2b name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:38:56 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:38:56.180537297Z" level=info msg="Creating container: openshift-machine-config-operator/machine-config-daemon-2fx68/oauth-proxy" id=b6918f87-d524-4994-ae42-04b3b20608b4 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:38:56 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:38:56.180628624Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:38:56 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00480|connmgr|INFO|br-int<->unix#1484: 93 flow_mods in the 20 s starting 27 s ago (25 adds, 26 deletes, 42 modifications)
Feb 23 17:38:56 ip-10-0-136-68 systemd[1]: Started crio-conmon-9cea77039ffc8d9c43645470962f53d22ee1af2d02223560bc1ed876ee04e300.scope.
Feb 23 17:38:56 ip-10-0-136-68 systemd[1]: Starting rpm-ostree System Management Daemon...
Feb 23 17:38:56 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:56.284340 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2fx68" event=&{ID:ff7777c7-a1dc-413e-8da1-c4ba07527037 Type:ContainerStarted Data:42b96157d17f74f25f71d7357da12ff08910e955866411f5c7c62aebd3027126}
Feb 23 17:38:56 ip-10-0-136-68 rpm-ostree[79979]: Reading config file '/etc/rpm-ostreed.conf'
Feb 23 17:38:56 ip-10-0-136-68 rpm-ostree[79979]: In idle state; will auto-exit in 60 seconds
Feb 23 17:38:56 ip-10-0-136-68 systemd[1]: Started rpm-ostree System Management Daemon.
Feb 23 17:38:56 ip-10-0-136-68 systemd[1]: Started libcontainer container 9cea77039ffc8d9c43645470962f53d22ee1af2d02223560bc1ed876ee04e300.
Feb 23 17:38:56 ip-10-0-136-68 rpm-ostree[79979]: client(id:cli dbus:1.586 unit:crio-42b96157d17f74f25f71d7357da12ff08910e955866411f5c7c62aebd3027126.scope uid:0) added; new total=1
Feb 23 17:38:56 ip-10-0-136-68 rpm-ostree[79979]: client(id:cli dbus:1.586 unit:crio-42b96157d17f74f25f71d7357da12ff08910e955866411f5c7c62aebd3027126.scope uid:0) vanished; remaining=0
Feb 23 17:38:56 ip-10-0-136-68 rpm-ostree[79979]: In idle state; will auto-exit in 61 seconds
Feb 23 17:38:56 ip-10-0-136-68 root[79995]: machine-config-daemon[79932]: Starting to manage node: ip-10-0-136-68.us-west-2.compute.internal
Feb 23 17:38:56 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:38:56.394199865Z" level=info msg="Created container 9cea77039ffc8d9c43645470962f53d22ee1af2d02223560bc1ed876ee04e300: openshift-machine-config-operator/machine-config-daemon-2fx68/oauth-proxy" id=b6918f87-d524-4994-ae42-04b3b20608b4 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:38:56 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:38:56.395112263Z" level=info msg="Starting container: 9cea77039ffc8d9c43645470962f53d22ee1af2d02223560bc1ed876ee04e300" id=90d7ca1f-28d5-40d8-ad12-54c6433d00db name=/runtime.v1.RuntimeService/StartContainer
Feb 23 17:38:56 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:38:56.404049017Z" level=info msg="Started container" PID=79989 containerID=9cea77039ffc8d9c43645470962f53d22ee1af2d02223560bc1ed876ee04e300 description=openshift-machine-config-operator/machine-config-daemon-2fx68/oauth-proxy id=90d7ca1f-28d5-40d8-ad12-54c6433d00db name=/runtime.v1.RuntimeService/StartContainer sandboxID=422c8976e5316750bfcd5cc5a0d60f7a3fcd666801a04dc9000f94c8dd85d5d5
Feb 23 17:38:56 ip-10-0-136-68 rpm-ostree[79979]: client(id:machine-config-operator dbus:1.587 unit:crio-42b96157d17f74f25f71d7357da12ff08910e955866411f5c7c62aebd3027126.scope uid:0) added; new total=1
Feb 23 17:38:56 ip-10-0-136-68 rpm-ostree[79979]: client(id:machine-config-operator dbus:1.587 unit:crio-42b96157d17f74f25f71d7357da12ff08910e955866411f5c7c62aebd3027126.scope uid:0) vanished; remaining=0
Feb 23 17:38:56 ip-10-0-136-68 rpm-ostree[79979]: In idle state; will auto-exit in 60 seconds
Feb 23 17:38:57 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:38:57.287757 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2fx68" event=&{ID:ff7777c7-a1dc-413e-8da1-c4ba07527037 Type:ContainerStarted Data:9cea77039ffc8d9c43645470962f53d22ee1af2d02223560bc1ed876ee04e300}
Feb 23 17:38:57 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00481|connmgr|INFO|br-ex<->unix#1862: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:38:57 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00482|connmgr|INFO|br-ex<->unix#1865: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:38:57 ip-10-0-136-68 rpm-ostree[79979]: client(id:machine-config-operator dbus:1.588 unit:crio-42b96157d17f74f25f71d7357da12ff08910e955866411f5c7c62aebd3027126.scope uid:0) added; new total=1
Feb 23 17:38:58 ip-10-0-136-68 rpm-ostree[79979]: Locked sysroot
Feb 23 17:38:58 ip-10-0-136-68 rpm-ostree[79979]: Initiated txn Cleanup for client(id:machine-config-operator dbus:1.588 unit:crio-42b96157d17f74f25f71d7357da12ff08910e955866411f5c7c62aebd3027126.scope uid:0): /org/projectatomic/rpmostree1/rhcos
Feb 23 17:38:58 ip-10-0-136-68 rpm-ostree[79979]: Process [pid: 80044 uid: 0 unit: crio-42b96157d17f74f25f71d7357da12ff08910e955866411f5c7c62aebd3027126.scope] connected to transaction progress
Feb 23 17:38:58 ip-10-0-136-68 rpm-ostree[79979]: Txn Cleanup on /org/projectatomic/rpmostree1/rhcos successful
Feb 23 17:38:58 ip-10-0-136-68 rpm-ostree[79979]: Unlocked sysroot
Feb 23 17:38:58 ip-10-0-136-68 rpm-ostree[79979]: Process [pid: 80044 uid: 0 unit: crio-42b96157d17f74f25f71d7357da12ff08910e955866411f5c7c62aebd3027126.scope] disconnected from transaction progress
Feb 23 17:38:58 ip-10-0-136-68 rpm-ostree[79979]: client(id:machine-config-operator dbus:1.588 unit:crio-42b96157d17f74f25f71d7357da12ff08910e955866411f5c7c62aebd3027126.scope uid:0) vanished; remaining=0
Feb 23 17:38:58 ip-10-0-136-68 rpm-ostree[79979]: In idle state; will auto-exit in 61 seconds
Feb 23 17:38:58 ip-10-0-136-68 kernel: EXT4-fs (nvme0n1p3): re-mounted.
Opts: Feb 23 17:38:59 ip-10-0-136-68 rpm-ostree[79979]: client(id:machine-config-operator dbus:1.589 unit:crio-42b96157d17f74f25f71d7357da12ff08910e955866411f5c7c62aebd3027126.scope uid:0) added; new total=1 Feb 23 17:38:59 ip-10-0-136-68 rpm-ostree[79979]: client(id:machine-config-operator dbus:1.589 unit:crio-42b96157d17f74f25f71d7357da12ff08910e955866411f5c7c62aebd3027126.scope uid:0) vanished; remaining=0 Feb 23 17:38:59 ip-10-0-136-68 rpm-ostree[79979]: In idle state; will auto-exit in 60 seconds Feb 23 17:38:59 ip-10-0-136-68 root[80075]: machine-config-daemon[79932]: Validated on-disk state Feb 23 17:39:12 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00483|connmgr|INFO|br-ex<->unix#1874: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:39:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:39:25.314860 2112 kubelet.go:2119] "SyncLoop ADD" source="api" pods=[openshift-debug-gwh9j/ip-10-0-136-68us-west-2computeinternal-debug] Feb 23 17:39:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:39:25.314916 2112 topology_manager.go:205] "Topology Admit Handler" Feb 23 17:39:25 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-besteffort-poda8dcf077_9e75_42c1_8989_b5e8a05f8712.slice. 
Feb 23 17:39:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:39:25.476947 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a8dcf077-9e75-42c1-8989-b5e8a05f8712-host\") pod \"ip-10-0-136-68us-west-2computeinternal-debug\" (UID: \"a8dcf077-9e75-42c1-8989-b5e8a05f8712\") " pod="openshift-debug-gwh9j/ip-10-0-136-68us-west-2computeinternal-debug" Feb 23 17:39:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:39:25.476996 2112 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4kq8\" (UniqueName: \"kubernetes.io/projected/a8dcf077-9e75-42c1-8989-b5e8a05f8712-kube-api-access-f4kq8\") pod \"ip-10-0-136-68us-west-2computeinternal-debug\" (UID: \"a8dcf077-9e75-42c1-8989-b5e8a05f8712\") " pod="openshift-debug-gwh9j/ip-10-0-136-68us-west-2computeinternal-debug" Feb 23 17:39:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:39:25.577510 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kube-api-access-f4kq8\" (UniqueName: \"kubernetes.io/projected/a8dcf077-9e75-42c1-8989-b5e8a05f8712-kube-api-access-f4kq8\") pod \"ip-10-0-136-68us-west-2computeinternal-debug\" (UID: \"a8dcf077-9e75-42c1-8989-b5e8a05f8712\") " pod="openshift-debug-gwh9j/ip-10-0-136-68us-west-2computeinternal-debug" Feb 23 17:39:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:39:25.577584 2112 reconciler.go:269] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a8dcf077-9e75-42c1-8989-b5e8a05f8712-host\") pod \"ip-10-0-136-68us-west-2computeinternal-debug\" (UID: \"a8dcf077-9e75-42c1-8989-b5e8a05f8712\") " pod="openshift-debug-gwh9j/ip-10-0-136-68us-west-2computeinternal-debug" Feb 23 17:39:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:39:25.577703 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/a8dcf077-9e75-42c1-8989-b5e8a05f8712-host\") pod \"ip-10-0-136-68us-west-2computeinternal-debug\" (UID: \"a8dcf077-9e75-42c1-8989-b5e8a05f8712\") " pod="openshift-debug-gwh9j/ip-10-0-136-68us-west-2computeinternal-debug" Feb 23 17:39:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:39:25.592800 2112 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4kq8\" (UniqueName: \"kubernetes.io/projected/a8dcf077-9e75-42c1-8989-b5e8a05f8712-kube-api-access-f4kq8\") pod \"ip-10-0-136-68us-west-2computeinternal-debug\" (UID: \"a8dcf077-9e75-42c1-8989-b5e8a05f8712\") " pod="openshift-debug-gwh9j/ip-10-0-136-68us-west-2computeinternal-debug" Feb 23 17:39:25 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:39:25.628425 2112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-debug-gwh9j/ip-10-0-136-68us-west-2computeinternal-debug" Feb 23 17:39:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:39:25.628875414Z" level=info msg="Running pod sandbox: openshift-debug-gwh9j/ip-10-0-136-68us-west-2computeinternal-debug/POD" id=90a4ca56-1e95-42f7-925b-5ad9d4dc674d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:39:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:39:25.628936315Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:39:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:39:25.644463175Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=90a4ca56-1e95-42f7-925b-5ad9d4dc674d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:39:25 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:39:25.648329 2112 manager.go:1174] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8dcf077_9e75_42c1_8989_b5e8a05f8712.slice/crio-b6f91b255cbfdc22a0605b75c29a9650ca3a1c61d888f5e29751f80e27f40651.scope WatchSource:0}: Error finding container b6f91b255cbfdc22a0605b75c29a9650ca3a1c61d888f5e29751f80e27f40651: Status 404 returned error can't find the container with id b6f91b255cbfdc22a0605b75c29a9650ca3a1c61d888f5e29751f80e27f40651 Feb 23 17:39:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:39:25.650807300Z" level=info msg="Ran pod sandbox b6f91b255cbfdc22a0605b75c29a9650ca3a1c61d888f5e29751f80e27f40651 with infra container: openshift-debug-gwh9j/ip-10-0-136-68us-west-2computeinternal-debug/POD" id=90a4ca56-1e95-42f7-925b-5ad9d4dc674d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:39:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:39:25.651615733Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:a4b875c6248aece334139248c2da87359b6ec3961d701b6643ace31db1b51d12" id=377f3f2b-341e-4868-8cc0-7e6db7aa8204 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:39:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:39:25.651921065Z" level=info msg="Image registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:a4b875c6248aece334139248c2da87359b6ec3961d701b6643ace31db1b51d12 not found" id=377f3f2b-341e-4868-8cc0-7e6db7aa8204 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:39:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:39:25.652301956Z" level=info msg="Pulling image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:a4b875c6248aece334139248c2da87359b6ec3961d701b6643ace31db1b51d12" id=1ab7ef27-cbbe-4147-8fa6-f60fddd6582b name=/runtime.v1.ImageService/PullImage Feb 23 17:39:25 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:39:25.718972881Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:a4b875c6248aece334139248c2da87359b6ec3961d701b6643ace31db1b51d12\"" Feb 
23 17:39:26 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:39:26.341637 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-debug-gwh9j/ip-10-0-136-68us-west-2computeinternal-debug" event=&{ID:a8dcf077-9e75-42c1-8989-b5e8a05f8712 Type:ContainerStarted Data:b6f91b255cbfdc22a0605b75c29a9650ca3a1c61d888f5e29751f80e27f40651} Feb 23 17:39:26 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:39:26.864059574Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:a4b875c6248aece334139248c2da87359b6ec3961d701b6643ace31db1b51d12\"" Feb 23 17:39:27 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00484|connmgr|INFO|br-ex<->unix#1878: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:39:33 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:39:33.805181294Z" level=info msg="Pulled image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:a4b875c6248aece334139248c2da87359b6ec3961d701b6643ace31db1b51d12" id=1ab7ef27-cbbe-4147-8fa6-f60fddd6582b name=/runtime.v1.ImageService/PullImage Feb 23 17:39:33 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:39:33.805994548Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:a4b875c6248aece334139248c2da87359b6ec3961d701b6643ace31db1b51d12" id=c2083d56-30f8-4ac7-82c5-dfb4e2527549 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:39:33 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:39:33.807461801Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:25563a58e011c8f5e5ce0ad0855a11a739335cfafef29c46935ce1be3de8dd03,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:a4b875c6248aece334139248c2da87359b6ec3961d701b6643ace31db1b51d12],Size_:792105820,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=c2083d56-30f8-4ac7-82c5-dfb4e2527549 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:39:33 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:39:33.808127894Z" 
level=info msg="Creating container: openshift-debug-gwh9j/ip-10-0-136-68us-west-2computeinternal-debug/container-00" id=866f9016-d76f-4f3e-b8a8-033d332ebdb9 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:39:33 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:39:33.808218933Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:39:33 ip-10-0-136-68 systemd[1]: Started crio-conmon-0015022f29391366b4f82cdec02e894dbfe74de2e9597cd86462bf2830ed7b81.scope. Feb 23 17:39:33 ip-10-0-136-68 systemd[1]: Started libcontainer container 0015022f29391366b4f82cdec02e894dbfe74de2e9597cd86462bf2830ed7b81. Feb 23 17:39:33 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:39:33.917125918Z" level=info msg="Created container 0015022f29391366b4f82cdec02e894dbfe74de2e9597cd86462bf2830ed7b81: openshift-debug-gwh9j/ip-10-0-136-68us-west-2computeinternal-debug/container-00" id=866f9016-d76f-4f3e-b8a8-033d332ebdb9 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:39:33 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:39:33.917553770Z" level=info msg="Starting container: 0015022f29391366b4f82cdec02e894dbfe74de2e9597cd86462bf2830ed7b81" id=40ea9a1d-e299-4814-b141-510e86fd731f name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:39:33 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:39:33.938002897Z" level=info msg="Started container" PID=80456 containerID=0015022f29391366b4f82cdec02e894dbfe74de2e9597cd86462bf2830ed7b81 description=openshift-debug-gwh9j/ip-10-0-136-68us-west-2computeinternal-debug/container-00 id=40ea9a1d-e299-4814-b141-510e86fd731f name=/runtime.v1.RuntimeService/StartContainer sandboxID=b6f91b255cbfdc22a0605b75c29a9650ca3a1c61d888f5e29751f80e27f40651 Feb 23 17:39:33 ip-10-0-136-68 rpm-ostree[79979]: client(id:cli dbus:1.593 unit:crio-0015022f29391366b4f82cdec02e894dbfe74de2e9597cd86462bf2830ed7b81.scope uid:0) added; new total=1 Feb 23 17:39:33 ip-10-0-136-68 rpm-ostree[79979]: client(id:cli dbus:1.593 
unit:crio-0015022f29391366b4f82cdec02e894dbfe74de2e9597cd86462bf2830ed7b81.scope uid:0) vanished; remaining=0 Feb 23 17:39:33 ip-10-0-136-68 rpm-ostree[79979]: In idle state; will auto-exit in 64 seconds Feb 23 17:39:33 ip-10-0-136-68 systemd[1]: crio-0015022f29391366b4f82cdec02e894dbfe74de2e9597cd86462bf2830ed7b81.scope: Succeeded. Feb 23 17:39:33 ip-10-0-136-68 systemd[1]: crio-0015022f29391366b4f82cdec02e894dbfe74de2e9597cd86462bf2830ed7b81.scope: Consumed 44ms CPU time Feb 23 17:39:33 ip-10-0-136-68 systemd[1]: crio-conmon-0015022f29391366b4f82cdec02e894dbfe74de2e9597cd86462bf2830ed7b81.scope: Succeeded. Feb 23 17:39:33 ip-10-0-136-68 systemd[1]: crio-conmon-0015022f29391366b4f82cdec02e894dbfe74de2e9597cd86462bf2830ed7b81.scope: Consumed 24ms CPU time Feb 23 17:39:34 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:39:34.363478 2112 generic.go:296] "Generic (PLEG): container finished" podID=a8dcf077-9e75-42c1-8989-b5e8a05f8712 containerID="0015022f29391366b4f82cdec02e894dbfe74de2e9597cd86462bf2830ed7b81" exitCode=0 Feb 23 17:39:34 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:39:34.363521 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-debug-gwh9j/ip-10-0-136-68us-west-2computeinternal-debug" event=&{ID:a8dcf077-9e75-42c1-8989-b5e8a05f8712 Type:ContainerDied Data:0015022f29391366b4f82cdec02e894dbfe74de2e9597cd86462bf2830ed7b81} Feb 23 17:39:34 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:39:34.621216 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-debug-gwh9j/ip-10-0-136-68us-west-2computeinternal-debug] Feb 23 17:39:34 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:39:34.623502 2112 kubelet.go:2129] "SyncLoop REMOVE" source="api" pods=[openshift-debug-gwh9j/ip-10-0-136-68us-west-2computeinternal-debug] Feb 23 17:39:35 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:39:35.365570724Z" level=info msg="Stopping pod sandbox: b6f91b255cbfdc22a0605b75c29a9650ca3a1c61d888f5e29751f80e27f40651" id=3dcea690-7575-4ee4-9cbd-692169c8abfa 
name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:39:35 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-ad87ed7fbe13842b139d53e998b8140bb77b6dbc38f4afcbe3e23ab1e1a8545c-merged.mount: Succeeded. Feb 23 17:39:35 ip-10-0-136-68 systemd[1]: run-utsns-e9c7ff3f\x2dfeef\x2d40cc\x2d8d2b\x2dc9634bddece0.mount: Succeeded. Feb 23 17:39:35 ip-10-0-136-68 systemd[1]: run-ipcns-e9c7ff3f\x2dfeef\x2d40cc\x2d8d2b\x2dc9634bddece0.mount: Succeeded. Feb 23 17:39:35 ip-10-0-136-68 systemd[1]: run-netns-e9c7ff3f\x2dfeef\x2d40cc\x2d8d2b\x2dc9634bddece0.mount: Succeeded. Feb 23 17:39:35 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:39:35.413725285Z" level=info msg="Stopped pod sandbox: b6f91b255cbfdc22a0605b75c29a9650ca3a1c61d888f5e29751f80e27f40651" id=3dcea690-7575-4ee4-9cbd-692169c8abfa name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:39:35 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:39:35.547043 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4kq8\" (UniqueName: \"kubernetes.io/projected/a8dcf077-9e75-42c1-8989-b5e8a05f8712-kube-api-access-f4kq8\") pod \"a8dcf077-9e75-42c1-8989-b5e8a05f8712\" (UID: \"a8dcf077-9e75-42c1-8989-b5e8a05f8712\") " Feb 23 17:39:35 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:39:35.547103 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a8dcf077-9e75-42c1-8989-b5e8a05f8712-host\") pod \"a8dcf077-9e75-42c1-8989-b5e8a05f8712\" (UID: \"a8dcf077-9e75-42c1-8989-b5e8a05f8712\") " Feb 23 17:39:35 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:39:35.547217 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8dcf077-9e75-42c1-8989-b5e8a05f8712-host" (OuterVolumeSpecName: "host") pod "a8dcf077-9e75-42c1-8989-b5e8a05f8712" (UID: "a8dcf077-9e75-42c1-8989-b5e8a05f8712"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 17:39:35 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-a8dcf077\x2d9e75\x2d42c1\x2d8989\x2db5e8a05f8712-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2df4kq8.mount: Succeeded. Feb 23 17:39:35 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:39:35.555067 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8dcf077-9e75-42c1-8989-b5e8a05f8712-kube-api-access-f4kq8" (OuterVolumeSpecName: "kube-api-access-f4kq8") pod "a8dcf077-9e75-42c1-8989-b5e8a05f8712" (UID: "a8dcf077-9e75-42c1-8989-b5e8a05f8712"). InnerVolumeSpecName "kube-api-access-f4kq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:39:35 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:39:35.647506 2112 reconciler.go:399] "Volume detached for volume \"kube-api-access-f4kq8\" (UniqueName: \"kubernetes.io/projected/a8dcf077-9e75-42c1-8989-b5e8a05f8712-kube-api-access-f4kq8\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:39:35 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:39:35.647536 2112 reconciler.go:399] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a8dcf077-9e75-42c1-8989-b5e8a05f8712-host\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:39:36 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:39:36.120744 2112 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=a8dcf077-9e75-42c1-8989-b5e8a05f8712 path="/var/lib/kubelet/pods/a8dcf077-9e75-42c1-8989-b5e8a05f8712/volumes" Feb 23 17:39:36 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:39:36.121028 2112 kubelet_getters.go:306] "Path does not exist" path="/var/lib/kubelet/pods/a8dcf077-9e75-42c1-8989-b5e8a05f8712/volumes" Feb 23 17:39:36 ip-10-0-136-68 systemd[1]: Removed slice libcontainer container kubepods-besteffort-poda8dcf077_9e75_42c1_8989_b5e8a05f8712.slice. 
Feb 23 17:39:36 ip-10-0-136-68 systemd[1]: kubepods-besteffort-poda8dcf077_9e75_42c1_8989_b5e8a05f8712.slice: Consumed 69ms CPU time Feb 23 17:39:36 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:39:36.368240 2112 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="b6f91b255cbfdc22a0605b75c29a9650ca3a1c61d888f5e29751f80e27f40651" Feb 23 17:39:36 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:39:36.368743 2112 status_manager.go:652] "Status for pod is up-to-date; skipping" podUID=a8dcf077-9e75-42c1-8989-b5e8a05f8712 Feb 23 17:39:36 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:39:36.371050 2112 status_manager.go:652] "Status for pod is up-to-date; skipping" podUID=a8dcf077-9e75-42c1-8989-b5e8a05f8712 Feb 23 17:39:42 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00485|connmgr|INFO|br-ex<->unix#1887: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:39:56 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00486|connmgr|INFO|br-int<->unix#1484: 95 flow_mods in the 28 s starting 58 s ago (27 adds, 26 deletes, 42 modifications) Feb 23 17:39:57 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00487|connmgr|INFO|br-ex<->unix#1891: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:40:10 ip-10-0-136-68 rpm-ostree[79979]: client(id:machine-config-operator dbus:1.594 unit:crio-42b96157d17f74f25f71d7357da12ff08910e955866411f5c7c62aebd3027126.scope uid:0) added; new total=1 Feb 23 17:40:10 ip-10-0-136-68 rpm-ostree[79979]: client(id:machine-config-operator dbus:1.594 unit:crio-42b96157d17f74f25f71d7357da12ff08910e955866411f5c7c62aebd3027126.scope uid:0) vanished; remaining=0 Feb 23 17:40:10 ip-10-0-136-68 rpm-ostree[79979]: In idle state; will auto-exit in 60 seconds Feb 23 17:40:11 ip-10-0-136-68 root[80894]: machine-config-daemon[79932]: Starting update from rendered-worker-897f2f3c67d20d57713bd47f68251b36 to rendered-worker-1e56871b9de773bcdc692bfcd148a34a: &{osUpdate:true kargs:true fips:false passwd:false files:true units:true kernelType:false extensions:false} Feb 23 17:40:11 
ip-10-0-136-68 root[80895]: machine-config-daemon[79932]: Update prepared; requesting cordon and drain via annotation to controller Feb 23 17:40:12 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00488|connmgr|INFO|br-ex<->unix#1900: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:40:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:15.038386 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-monitoring/kube-state-metrics-8d585644b-dckcc] Feb 23 17:40:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:15.038619 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/kube-state-metrics-8d585644b-dckcc" podUID=4961f202-10a7-460b-8e62-ce7b7dbb8806 containerName="kube-state-metrics" containerID="cri-o://b7acd6c812a55fc3ab3146988c9369e690b314651c361ef6c6ff01d3d175f472" gracePeriod=30 Feb 23 17:40:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:15.038863 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/kube-state-metrics-8d585644b-dckcc" podUID=4961f202-10a7-460b-8e62-ce7b7dbb8806 containerName="kube-rbac-proxy-self" containerID="cri-o://359a6f1fe61e3722f81bba3adcb849c0f9fb6b9f2876735a50f0d8272d645322" gracePeriod=30 Feb 23 17:40:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:15.038925 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/kube-state-metrics-8d585644b-dckcc" podUID=4961f202-10a7-460b-8e62-ce7b7dbb8806 containerName="kube-rbac-proxy-main" containerID="cri-o://1fbcb5d0be8f137c33fd65ae51ff0104762ac4cd8f6e9e64f372bfb97f038f7d" gracePeriod=30 Feb 23 17:40:15 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:15.039236135Z" level=info msg="Stopping container: b7acd6c812a55fc3ab3146988c9369e690b314651c361ef6c6ff01d3d175f472 (timeout: 30s)" id=519dfb28-d365-4c68-b4f5-e4adf13ad7a3 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:40:15 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:15.039237227Z" 
level=info msg="Stopping container: 1fbcb5d0be8f137c33fd65ae51ff0104762ac4cd8f6e9e64f372bfb97f038f7d (timeout: 30s)" id=2f546842-ddb4-4fbb-8784-6cbd20fb988f name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:40:15 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:15.039250886Z" level=info msg="Stopping container: 359a6f1fe61e3722f81bba3adcb849c0f9fb6b9f2876735a50f0d8272d645322 (timeout: 30s)" id=28a844c1-644d-4efe-bb14-4fe0baec2194 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:40:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:15.051529 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-operator-lifecycle-manager/collect-profiles-27952875-qw7rw] Feb 23 17:40:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:15.052518 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-monitoring/openshift-state-metrics-7df79db5c7-2clx8] Feb 23 17:40:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:15.052836 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/openshift-state-metrics-7df79db5c7-2clx8" podUID=aadb02e0-de11-41e9-9dc0-106e1d0fc545 containerName="kube-rbac-proxy-main" containerID="cri-o://f90c0ac09fa2b841c60cad6b187e9c061e245aa16aa90e30e8276641fad26ba4" gracePeriod=30 Feb 23 17:40:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:15.052906 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/openshift-state-metrics-7df79db5c7-2clx8" podUID=aadb02e0-de11-41e9-9dc0-106e1d0fc545 containerName="openshift-state-metrics" containerID="cri-o://329b52729ecbc1d70f36f97051984466ea00d33a05ba330cca8d0ef02aeaa83c" gracePeriod=30 Feb 23 17:40:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:15.052991 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/openshift-state-metrics-7df79db5c7-2clx8" podUID=aadb02e0-de11-41e9-9dc0-106e1d0fc545 containerName="kube-rbac-proxy-self" 
containerID="cri-o://d50b5cde1b096262a82aecc0358953f52a2290611f4f0a0dafe882b6090c0156" gracePeriod=30 Feb 23 17:40:15 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:15.053243768Z" level=info msg="Stopping container: d50b5cde1b096262a82aecc0358953f52a2290611f4f0a0dafe882b6090c0156 (timeout: 30s)" id=119c36e4-e971-41ec-b600-fdbbc22cfa36 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:40:15 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:15.053382408Z" level=info msg="Stopping container: f90c0ac09fa2b841c60cad6b187e9c061e245aa16aa90e30e8276641fad26ba4 (timeout: 30s)" id=c5eb55e9-faa4-4e28-bdfd-c5665b190cf5 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:40:15 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:15.053443675Z" level=info msg="Stopping container: 329b52729ecbc1d70f36f97051984466ea00d33a05ba330cca8d0ef02aeaa83c (timeout: 30s)" id=4d50bdf4-b688-4515-a3db-3d185239db41 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:40:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:15.053548 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n] Feb 23 17:40:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:15.053734 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n" podUID=0a5a348d-9766-4727-93ec-147703d44b68 containerName="telemeter-client" containerID="cri-o://ba7fdd14efd0a1bc6f3cdabf5734ffbff59544ea577ee2b427673408fc7e04d6" gracePeriod=30 Feb 23 17:40:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:15.053983 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n" podUID=0a5a348d-9766-4727-93ec-147703d44b68 containerName="kube-rbac-proxy" containerID="cri-o://707cf8b2ba0348a632e68f119e5e2ab8bfae79a6a595b1ad26638fb0dec903d6" gracePeriod=30 Feb 23 17:40:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:15.054045 2112 
kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n" podUID=0a5a348d-9766-4727-93ec-147703d44b68 containerName="reload" containerID="cri-o://e01522fac3ca6ed1026bb56690190f33c4afdfe1e9dd982e1a8d7093c486dc24" gracePeriod=30 Feb 23 17:40:15 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:15.054313006Z" level=info msg="Stopping container: e01522fac3ca6ed1026bb56690190f33c4afdfe1e9dd982e1a8d7093c486dc24 (timeout: 30s)" id=f048a592-21a1-41ca-967a-457809809772 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:40:15 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:15.054374581Z" level=info msg="Stopping container: ba7fdd14efd0a1bc6f3cdabf5734ffbff59544ea577ee2b427673408fc7e04d6 (timeout: 30s)" id=0103f557-56ef-43da-83e2-fd638e9673b2 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:40:15 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:15.054525987Z" level=info msg="Stopping container: 707cf8b2ba0348a632e68f119e5e2ab8bfae79a6a595b1ad26638fb0dec903d6 (timeout: 30s)" id=566bf4d7-cf71-4e92-9863-6e0d2f452d6a name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:40:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:15.065637 2112 kubelet.go:2129] "SyncLoop REMOVE" source="api" pods=[openshift-operator-lifecycle-manager/collect-profiles-27952875-qw7rw] Feb 23 17:40:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:15.066706 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-operator-lifecycle-manager/collect-profiles-27952890-rh4pd] Feb 23 17:40:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:15.068306 2112 kubelet.go:2129] "SyncLoop REMOVE" source="api" pods=[openshift-operator-lifecycle-manager/collect-profiles-27952890-rh4pd] Feb 23 17:40:15 ip-10-0-136-68 conmon[57726]: conmon b7acd6c812a55fc3ab31 : container 57771 exited with status 2 Feb 23 17:40:15 ip-10-0-136-68 systemd[1]: 
crio-conmon-b7acd6c812a55fc3ab3146988c9369e690b314651c361ef6c6ff01d3d175f472.scope: Succeeded. Feb 23 17:40:15 ip-10-0-136-68 systemd[1]: crio-conmon-b7acd6c812a55fc3ab3146988c9369e690b314651c361ef6c6ff01d3d175f472.scope: Consumed 24ms CPU time Feb 23 17:40:15 ip-10-0-136-68 systemd[1]: crio-b7acd6c812a55fc3ab3146988c9369e690b314651c361ef6c6ff01d3d175f472.scope: Succeeded. Feb 23 17:40:15 ip-10-0-136-68 conmon[58277]: conmon 329b52729ecbc1d70f36 : container 58291 exited with status 2 Feb 23 17:40:15 ip-10-0-136-68 systemd[1]: crio-b7acd6c812a55fc3ab3146988c9369e690b314651c361ef6c6ff01d3d175f472.scope: Consumed 4.009s CPU time Feb 23 17:40:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:15.088801 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-network-diagnostics/network-check-source-5ff44f4c57-4nhbr] Feb 23 17:40:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:15.091807 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-ingress/router-default-c776d6877-hc4dc] Feb 23 17:40:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:15.091953 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-monitoring/alertmanager-main-1] Feb 23 17:40:15 ip-10-0-136-68 systemd[1]: crio-329b52729ecbc1d70f36f97051984466ea00d33a05ba330cca8d0ef02aeaa83c.scope: Succeeded. Feb 23 17:40:15 ip-10-0-136-68 systemd[1]: crio-329b52729ecbc1d70f36f97051984466ea00d33a05ba330cca8d0ef02aeaa83c.scope: Consumed 262ms CPU time Feb 23 17:40:15 ip-10-0-136-68 systemd[1]: crio-conmon-329b52729ecbc1d70f36f97051984466ea00d33a05ba330cca8d0ef02aeaa83c.scope: Succeeded. 
Feb 23 17:40:15 ip-10-0-136-68 systemd[1]: crio-conmon-329b52729ecbc1d70f36f97051984466ea00d33a05ba330cca8d0ef02aeaa83c.scope: Consumed 27ms CPU time
Feb 23 17:40:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:15.111062 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-network-diagnostics/network-check-source-5ff44f4c57-4nhbr" podUID=7952f7cd-30fa-4974-9514-90e64fd0405a containerName="check-endpoints" containerID="cri-o://13b50e95ba7acee28832e992c331d7564264429a4a3c5680afcc7e73be8a3d5e" gracePeriod=30
Feb 23 17:40:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:15.112861 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-console/downloads-6778bfc749-9tkv8]
Feb 23 17:40:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:15.112901 2112 kubelet.go:2129] "SyncLoop REMOVE" source="api" pods=[openshift-console/downloads-6778bfc749-9tkv8]
Feb 23 17:40:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:15.113286 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-console/downloads-6778bfc749-9tkv8" podUID=fc25d4db-ca44-4b9b-b5f1-c0bed3abd500 containerName="download-server" containerID="cri-o://f61bfedd3366631c6437a0ba7abfedacbc4b136b27a24b01b61888781896b508" gracePeriod=1
Feb 23 17:40:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:15.113500 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-1" podUID=39a23baf-fee4-4b3a-839f-6c0452a117b2 containerName="config-reloader" containerID="cri-o://b2d6cdee2d1101309cebbd135553153f312137c295e3020ca67a1c92fe7b6190" gracePeriod=120
Feb 23 17:40:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:15.113614 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-ingress/router-default-c776d6877-hc4dc" podUID=4b453ab9-1ce4-45a1-b69d-c289991008f1 containerName="router" containerID="cri-o://cf4309e350b3417824d6ab6e5cdc58f589564a603d6449dc009f1b959274dc73" gracePeriod=3600
Feb 23 17:40:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:15.113847 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-1" podUID=39a23baf-fee4-4b3a-839f-6c0452a117b2 containerName="alertmanager" containerID="cri-o://2e4abab8d13d7e39e5418d4f58e3fdae38b48dfa958876304cb146184b957ab6" gracePeriod=120
Feb 23 17:40:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:15.113910 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-1" podUID=39a23baf-fee4-4b3a-839f-6c0452a117b2 containerName="prom-label-proxy" containerID="cri-o://d0c87217a99f38154ecebed6eef6b64e725ed2930f8ecb95abb393ca96869e97" gracePeriod=120
Feb 23 17:40:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:15.114062 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-1" podUID=39a23baf-fee4-4b3a-839f-6c0452a117b2 containerName="kube-rbac-proxy-metric" containerID="cri-o://a4ea8facc874dcc785d2ac1735b2e0ec11eedc23a9c4dd37f2ea9b1138f8a1b4" gracePeriod=120
Feb 23 17:40:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:15.114115 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-1" podUID=39a23baf-fee4-4b3a-839f-6c0452a117b2 containerName="kube-rbac-proxy" containerID="cri-o://1054b7333022434798fabbdf1ce9bf51a379e239c0e46fb4d949c68c98bbe551" gracePeriod=120
Feb 23 17:40:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:15.114160 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-1" podUID=39a23baf-fee4-4b3a-839f-6c0452a117b2 containerName="alertmanager-proxy" containerID="cri-o://8f5b322a10577d8ccff9095740b8d9e58de49e07103a658f20525af08232f8f9" gracePeriod=120
Feb 23 17:40:15 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:15.114868896Z" level=info msg="Stopping container: 8f5b322a10577d8ccff9095740b8d9e58de49e07103a658f20525af08232f8f9 (timeout: 120s)" id=b4c99523-a759-4241-bca9-6f25136f530f name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:40:15 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:15.115024188Z" level=info msg="Stopping container: 13b50e95ba7acee28832e992c331d7564264429a4a3c5680afcc7e73be8a3d5e (timeout: 30s)" id=fc330238-bdf6-45d2-8721-3683b982c968 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:40:15 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:15.115124914Z" level=info msg="Stopping container: f61bfedd3366631c6437a0ba7abfedacbc4b136b27a24b01b61888781896b508 (timeout: 1s)" id=7e4017a1-083f-406e-9047-7ff6e7b3cd23 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:40:15 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:15.115240179Z" level=info msg="Stopping container: b2d6cdee2d1101309cebbd135553153f312137c295e3020ca67a1c92fe7b6190 (timeout: 120s)" id=b9e772e3-e3af-40ac-be42-1325fafe8de6 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:40:15 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:15.115324103Z" level=info msg="Stopping container: cf4309e350b3417824d6ab6e5cdc58f589564a603d6449dc009f1b959274dc73 (timeout: 3600s)" id=acb35fb9-d1a5-4773-bb20-02ca0b930b3b name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:40:15 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:15.115411060Z" level=info msg="Stopping container: 2e4abab8d13d7e39e5418d4f58e3fdae38b48dfa958876304cb146184b957ab6 (timeout: 120s)" id=7cb07948-5d15-4e4e-93bc-d6d67dbdddad name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:40:15 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:15.115498710Z" level=info msg="Stopping container: d0c87217a99f38154ecebed6eef6b64e725ed2930f8ecb95abb393ca96869e97 (timeout: 120s)" id=b66019f4-6a81-44a5-86fc-4770d4d41df8 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:40:15 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:15.115585460Z" level=info msg="Stopping container: a4ea8facc874dcc785d2ac1735b2e0ec11eedc23a9c4dd37f2ea9b1138f8a1b4 (timeout: 120s)" id=a7899e42-8908-4fda-a8f7-065414b16988 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:40:15 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:15.115763856Z" level=info msg="Stopping container: 1054b7333022434798fabbdf1ce9bf51a379e239c0e46fb4d949c68c98bbe551 (timeout: 120s)" id=d456512b-791a-4dd8-8a53-e603beea02ca name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:40:15 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00489|connmgr|INFO|br-ex<->unix#1903: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:40:15 ip-10-0-136-68 conmon[57690]: conmon ba7fdd14efd0a1bc6f3c : container 57744 exited with status 2
Feb 23 17:40:15 ip-10-0-136-68 systemd[1]: crio-conmon-ba7fdd14efd0a1bc6f3cdabf5734ffbff59544ea577ee2b427673408fc7e04d6.scope: Succeeded.
Feb 23 17:40:15 ip-10-0-136-68 conmon[57897]: conmon e01522fac3ca6ed1026b : container 57912 exited with status 2
Feb 23 17:40:15 ip-10-0-136-68 systemd[1]: crio-conmon-ba7fdd14efd0a1bc6f3cdabf5734ffbff59544ea577ee2b427673408fc7e04d6.scope: Consumed 28ms CPU time
Feb 23 17:40:15 ip-10-0-136-68 systemd[1]: crio-ba7fdd14efd0a1bc6f3cdabf5734ffbff59544ea577ee2b427673408fc7e04d6.scope: Succeeded.
Feb 23 17:40:15 ip-10-0-136-68 systemd[1]: crio-ba7fdd14efd0a1bc6f3cdabf5734ffbff59544ea577ee2b427673408fc7e04d6.scope: Consumed 419ms CPU time
Feb 23 17:40:15 ip-10-0-136-68 systemd[1]: crio-conmon-e01522fac3ca6ed1026bb56690190f33c4afdfe1e9dd982e1a8d7093c486dc24.scope: Succeeded.
Feb 23 17:40:15 ip-10-0-136-68 systemd[1]: crio-conmon-e01522fac3ca6ed1026bb56690190f33c4afdfe1e9dd982e1a8d7093c486dc24.scope: Consumed 27ms CPU time
Feb 23 17:40:15 ip-10-0-136-68 systemd[1]: crio-e01522fac3ca6ed1026bb56690190f33c4afdfe1e9dd982e1a8d7093c486dc24.scope: Succeeded.
Feb 23 17:40:15 ip-10-0-136-68 systemd[1]: crio-e01522fac3ca6ed1026bb56690190f33c4afdfe1e9dd982e1a8d7093c486dc24.scope: Consumed 76ms CPU time
Feb 23 17:40:15 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00490|connmgr|INFO|br-ex<->unix#1906: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:40:15 ip-10-0-136-68 conmon[58727]: conmon 8f5b322a10577d8ccff9 : container 58740 exited with status 2
Feb 23 17:40:15 ip-10-0-136-68 systemd[1]: crio-8f5b322a10577d8ccff9095740b8d9e58de49e07103a658f20525af08232f8f9.scope: Succeeded.
Feb 23 17:40:15 ip-10-0-136-68 systemd[1]: crio-8f5b322a10577d8ccff9095740b8d9e58de49e07103a658f20525af08232f8f9.scope: Consumed 2.821s CPU time
Feb 23 17:40:15 ip-10-0-136-68 systemd[1]: crio-conmon-8f5b322a10577d8ccff9095740b8d9e58de49e07103a658f20525af08232f8f9.scope: Succeeded.
Feb 23 17:40:15 ip-10-0-136-68 systemd[1]: crio-conmon-8f5b322a10577d8ccff9095740b8d9e58de49e07103a658f20525af08232f8f9.scope: Consumed 25ms CPU time
Feb 23 17:40:15 ip-10-0-136-68 systemd[1]: crio-13b50e95ba7acee28832e992c331d7564264429a4a3c5680afcc7e73be8a3d5e.scope: Succeeded.
Feb 23 17:40:15 ip-10-0-136-68 systemd[1]: crio-13b50e95ba7acee28832e992c331d7564264429a4a3c5680afcc7e73be8a3d5e.scope: Consumed 2.219s CPU time
Feb 23 17:40:15 ip-10-0-136-68 systemd[1]: crio-conmon-13b50e95ba7acee28832e992c331d7564264429a4a3c5680afcc7e73be8a3d5e.scope: Succeeded.
Feb 23 17:40:15 ip-10-0-136-68 systemd[1]: crio-conmon-13b50e95ba7acee28832e992c331d7564264429a4a3c5680afcc7e73be8a3d5e.scope: Consumed 30ms CPU time
Feb 23 17:40:15 ip-10-0-136-68 systemd[1]: crio-f61bfedd3366631c6437a0ba7abfedacbc4b136b27a24b01b61888781896b508.scope: Succeeded.
Feb 23 17:40:15 ip-10-0-136-68 systemd[1]: crio-f61bfedd3366631c6437a0ba7abfedacbc4b136b27a24b01b61888781896b508.scope: Consumed 2.673s CPU time
Feb 23 17:40:15 ip-10-0-136-68 systemd[1]: crio-conmon-f61bfedd3366631c6437a0ba7abfedacbc4b136b27a24b01b61888781896b508.scope: Succeeded.
Feb 23 17:40:15 ip-10-0-136-68 systemd[1]: crio-conmon-f61bfedd3366631c6437a0ba7abfedacbc4b136b27a24b01b61888781896b508.scope: Consumed 26ms CPU time
Feb 23 17:40:15 ip-10-0-136-68 conmon[58368]: conmon b2d6cdee2d1101309ceb : container 58380 exited with status 2
Feb 23 17:40:15 ip-10-0-136-68 systemd[1]: crio-conmon-b2d6cdee2d1101309cebbd135553153f312137c295e3020ca67a1c92fe7b6190.scope: Succeeded.
Feb 23 17:40:15 ip-10-0-136-68 systemd[1]: crio-conmon-b2d6cdee2d1101309cebbd135553153f312137c295e3020ca67a1c92fe7b6190.scope: Consumed 25ms CPU time
Feb 23 17:40:15 ip-10-0-136-68 systemd[1]: crio-b2d6cdee2d1101309cebbd135553153f312137c295e3020ca67a1c92fe7b6190.scope: Succeeded.
Feb 23 17:40:15 ip-10-0-136-68 systemd[1]: crio-b2d6cdee2d1101309cebbd135553153f312137c295e3020ca67a1c92fe7b6190.scope: Consumed 68ms CPU time
Feb 23 17:40:15 ip-10-0-136-68 systemd[1]: crio-d0c87217a99f38154ecebed6eef6b64e725ed2930f8ecb95abb393ca96869e97.scope: Succeeded.
Feb 23 17:40:15 ip-10-0-136-68 systemd[1]: crio-d0c87217a99f38154ecebed6eef6b64e725ed2930f8ecb95abb393ca96869e97.scope: Consumed 27ms CPU time
Feb 23 17:40:15 ip-10-0-136-68 systemd[1]: crio-1054b7333022434798fabbdf1ce9bf51a379e239c0e46fb4d949c68c98bbe551.scope: Succeeded.
Feb 23 17:40:15 ip-10-0-136-68 systemd[1]: crio-1054b7333022434798fabbdf1ce9bf51a379e239c0e46fb4d949c68c98bbe551.scope: Consumed 123ms CPU time
Feb 23 17:40:15 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-4c23f5fe9a42d8e69769aa0a056df56b7334d586e8088ca53e69a5af31ed0123-merged.mount: Succeeded.
Feb 23 17:40:15 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-4c23f5fe9a42d8e69769aa0a056df56b7334d586e8088ca53e69a5af31ed0123-merged.mount: Consumed 0 CPU time
Feb 23 17:40:15 ip-10-0-136-68 systemd[1]: crio-conmon-d0c87217a99f38154ecebed6eef6b64e725ed2930f8ecb95abb393ca96869e97.scope: Succeeded.
Feb 23 17:40:15 ip-10-0-136-68 systemd[1]: crio-conmon-d0c87217a99f38154ecebed6eef6b64e725ed2930f8ecb95abb393ca96869e97.scope: Consumed 24ms CPU time
Feb 23 17:40:15 ip-10-0-136-68 systemd[1]: crio-conmon-1054b7333022434798fabbdf1ce9bf51a379e239c0e46fb4d949c68c98bbe551.scope: Succeeded.
Feb 23 17:40:15 ip-10-0-136-68 systemd[1]: crio-conmon-1054b7333022434798fabbdf1ce9bf51a379e239c0e46fb4d949c68c98bbe551.scope: Consumed 26ms CPU time
Feb 23 17:40:15 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:15.278900783Z" level=info msg="Stopped container 329b52729ecbc1d70f36f97051984466ea00d33a05ba330cca8d0ef02aeaa83c: openshift-monitoring/openshift-state-metrics-7df79db5c7-2clx8/openshift-state-metrics" id=4d50bdf4-b688-4515-a3db-3d185239db41 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:40:15 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00491|connmgr|INFO|br-ex<->unix#1909: 6 flow_mods in the last 0 s (2 adds, 4 deletes)
Feb 23 17:40:15 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00492|connmgr|INFO|br-ex<->unix#1912: 6 flow_mods in the last 0 s (6 adds)
Feb 23 17:40:15 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-ab9391d82d73d5b38bbea9d3377a00702d034786c833f057d8a4308e93832299-merged.mount: Succeeded.
Feb 23 17:40:15 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-ab9391d82d73d5b38bbea9d3377a00702d034786c833f057d8a4308e93832299-merged.mount: Consumed 0 CPU time
Feb 23 17:40:15 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00493|connmgr|INFO|br-ex<->unix#1915: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:40:15 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-0a71a9df713b1f1dccccb828a5ea546f10c8440b167c0bf6ca92f3a988aaef16-merged.mount: Succeeded.
Feb 23 17:40:15 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-0a71a9df713b1f1dccccb828a5ea546f10c8440b167c0bf6ca92f3a988aaef16-merged.mount: Consumed 0 CPU time
Feb 23 17:40:15 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:15.414976257Z" level=info msg="Stopped container 13b50e95ba7acee28832e992c331d7564264429a4a3c5680afcc7e73be8a3d5e: openshift-network-diagnostics/network-check-source-5ff44f4c57-4nhbr/check-endpoints" id=fc330238-bdf6-45d2-8721-3683b982c968 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:40:15 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00494|connmgr|INFO|br-ex<->unix#1918: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:40:15 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:15.420951519Z" level=info msg="Stopping pod sandbox: d3697aea95661cdf7a48775ec994ae0f82fbec2eb65e7995c5a795c4978d6b33" id=b44f52c9-717c-4da8-a05b-01b23746671d name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:40:15 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:15.421220243Z" level=info msg="Got pod network &{Name:network-check-source-5ff44f4c57-4nhbr Namespace:openshift-network-diagnostics ID:d3697aea95661cdf7a48775ec994ae0f82fbec2eb65e7995c5a795c4978d6b33 UID:7952f7cd-30fa-4974-9514-90e64fd0405a NetNS:/var/run/netns/b3efef6f-5f50-4c26-a722-fd874ef1762a Networks:[{Name:multus-cni-network Ifname:eth0}] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 17:40:15 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:15.421325503Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-source-5ff44f4c57-4nhbr from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 17:40:15 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:15.426352782Z" level=info msg="Stopped container ba7fdd14efd0a1bc6f3cdabf5734ffbff59544ea577ee2b427673408fc7e04d6: openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n/telemeter-client" id=0103f557-56ef-43da-83e2-fd638e9673b2 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:40:15 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-a4e295928f6afdbf96789be664af31d8793b301d67e428ea173459a222021826-merged.mount: Succeeded.
Feb 23 17:40:15 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-a4e295928f6afdbf96789be664af31d8793b301d67e428ea173459a222021826-merged.mount: Consumed 0 CPU time
Feb 23 17:40:15 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:15.442325077Z" level=info msg="Stopped container b7acd6c812a55fc3ab3146988c9369e690b314651c361ef6c6ff01d3d175f472: openshift-monitoring/kube-state-metrics-8d585644b-dckcc/kube-state-metrics" id=519dfb28-d365-4c68-b4f5-e4adf13ad7a3 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:40:15 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-bb749969aa0152e2e5a546ed3993f54d65500b5d3b2f33dcc3ddc263a391fd2c-merged.mount: Succeeded.
Feb 23 17:40:15 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-bb749969aa0152e2e5a546ed3993f54d65500b5d3b2f33dcc3ddc263a391fd2c-merged.mount: Consumed 0 CPU time
Feb 23 17:40:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:15.460363 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-1_39a23baf-fee4-4b3a-839f-6c0452a117b2/alertmanager-proxy/0.log"
Feb 23 17:40:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:15.460733 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-1_39a23baf-fee4-4b3a-839f-6c0452a117b2/config-reloader/0.log"
Feb 23 17:40:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:15.461000 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-1_39a23baf-fee4-4b3a-839f-6c0452a117b2/alertmanager/0.log"
Feb 23 17:40:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:15.461026 2112 generic.go:296] "Generic (PLEG): container finished" podID=39a23baf-fee4-4b3a-839f-6c0452a117b2 containerID="d0c87217a99f38154ecebed6eef6b64e725ed2930f8ecb95abb393ca96869e97" exitCode=0
Feb 23 17:40:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:15.461036 2112 generic.go:296] "Generic (PLEG): container finished" podID=39a23baf-fee4-4b3a-839f-6c0452a117b2 containerID="1054b7333022434798fabbdf1ce9bf51a379e239c0e46fb4d949c68c98bbe551" exitCode=0
Feb 23 17:40:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:15.461044 2112 generic.go:296] "Generic (PLEG): container finished" podID=39a23baf-fee4-4b3a-839f-6c0452a117b2 containerID="8f5b322a10577d8ccff9095740b8d9e58de49e07103a658f20525af08232f8f9" exitCode=2
Feb 23 17:40:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:15.461053 2112 generic.go:296] "Generic (PLEG): container finished" podID=39a23baf-fee4-4b3a-839f-6c0452a117b2 containerID="b2d6cdee2d1101309cebbd135553153f312137c295e3020ca67a1c92fe7b6190" exitCode=2
Feb 23 17:40:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:15.461087 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:39a23baf-fee4-4b3a-839f-6c0452a117b2 Type:ContainerDied Data:d0c87217a99f38154ecebed6eef6b64e725ed2930f8ecb95abb393ca96869e97}
Feb 23 17:40:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:15.461104 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:39a23baf-fee4-4b3a-839f-6c0452a117b2 Type:ContainerDied Data:1054b7333022434798fabbdf1ce9bf51a379e239c0e46fb4d949c68c98bbe551}
Feb 23 17:40:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:15.461113 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:39a23baf-fee4-4b3a-839f-6c0452a117b2 Type:ContainerDied Data:8f5b322a10577d8ccff9095740b8d9e58de49e07103a658f20525af08232f8f9}
Feb 23 17:40:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:15.461123 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:39a23baf-fee4-4b3a-839f-6c0452a117b2 Type:ContainerDied Data:b2d6cdee2d1101309cebbd135553153f312137c295e3020ca67a1c92fe7b6190}
Feb 23 17:40:15 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:15.461431451Z" level=info msg="Stopped container 8f5b322a10577d8ccff9095740b8d9e58de49e07103a658f20525af08232f8f9: openshift-monitoring/alertmanager-main-1/alertmanager-proxy" id=b4c99523-a759-4241-bca9-6f25136f530f name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:40:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:15.462535 2112 generic.go:296] "Generic (PLEG): container finished" podID=aadb02e0-de11-41e9-9dc0-106e1d0fc545 containerID="329b52729ecbc1d70f36f97051984466ea00d33a05ba330cca8d0ef02aeaa83c" exitCode=2
Feb 23 17:40:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:15.462570 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-7df79db5c7-2clx8" event=&{ID:aadb02e0-de11-41e9-9dc0-106e1d0fc545 Type:ContainerDied Data:329b52729ecbc1d70f36f97051984466ea00d33a05ba330cca8d0ef02aeaa83c}
Feb 23 17:40:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:15.463403 2112 generic.go:296] "Generic (PLEG): container finished" podID=7952f7cd-30fa-4974-9514-90e64fd0405a containerID="13b50e95ba7acee28832e992c331d7564264429a4a3c5680afcc7e73be8a3d5e" exitCode=0
Feb 23 17:40:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:15.463427 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5ff44f4c57-4nhbr" event=&{ID:7952f7cd-30fa-4974-9514-90e64fd0405a Type:ContainerDied Data:13b50e95ba7acee28832e992c331d7564264429a4a3c5680afcc7e73be8a3d5e}
Feb 23 17:40:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:15.464779 2112 generic.go:296] "Generic (PLEG): container finished" podID=0a5a348d-9766-4727-93ec-147703d44b68 containerID="e01522fac3ca6ed1026bb56690190f33c4afdfe1e9dd982e1a8d7093c486dc24" exitCode=2
Feb 23 17:40:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:15.464803 2112 generic.go:296] "Generic (PLEG): container finished" podID=0a5a348d-9766-4727-93ec-147703d44b68 containerID="ba7fdd14efd0a1bc6f3cdabf5734ffbff59544ea577ee2b427673408fc7e04d6" exitCode=2
Feb 23 17:40:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:15.464845 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n" event=&{ID:0a5a348d-9766-4727-93ec-147703d44b68 Type:ContainerDied Data:e01522fac3ca6ed1026bb56690190f33c4afdfe1e9dd982e1a8d7093c486dc24}
Feb 23 17:40:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:15.464870 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n" event=&{ID:0a5a348d-9766-4727-93ec-147703d44b68 Type:ContainerDied Data:ba7fdd14efd0a1bc6f3cdabf5734ffbff59544ea577ee2b427673408fc7e04d6}
Feb 23 17:40:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:15.465589 2112 generic.go:296] "Generic (PLEG): container finished" podID=fc25d4db-ca44-4b9b-b5f1-c0bed3abd500 containerID="f61bfedd3366631c6437a0ba7abfedacbc4b136b27a24b01b61888781896b508" exitCode=0
Feb 23 17:40:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:15.466736 2112 generic.go:296] "Generic (PLEG): container finished" podID=4961f202-10a7-460b-8e62-ce7b7dbb8806 containerID="b7acd6c812a55fc3ab3146988c9369e690b314651c361ef6c6ff01d3d175f472" exitCode=2
Feb 23 17:40:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:15.466762 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-8d585644b-dckcc" event=&{ID:4961f202-10a7-460b-8e62-ce7b7dbb8806 Type:ContainerDied Data:b7acd6c812a55fc3ab3146988c9369e690b314651c361ef6c6ff01d3d175f472}
Feb 23 17:40:15 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:15.477618511Z" level=info msg="Stopped container e01522fac3ca6ed1026bb56690190f33c4afdfe1e9dd982e1a8d7093c486dc24: openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n/reload" id=f048a592-21a1-41ca-967a-457809809772 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:40:15 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:15.495960467Z" level=info msg="Stopped container f61bfedd3366631c6437a0ba7abfedacbc4b136b27a24b01b61888781896b508: openshift-console/downloads-6778bfc749-9tkv8/download-server" id=7e4017a1-083f-406e-9047-7ff6e7b3cd23 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:40:15 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:15.496254890Z" level=info msg="Stopping pod sandbox: d11bfb33540b6412e1f36575ffa3f01a4537106b914968f1a1f25710d90aa161" id=3de01a36-c6d6-46bb-af89-1ee65b96f19c name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:40:15 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:15.496440942Z" level=info msg="Got pod network &{Name:downloads-6778bfc749-9tkv8 Namespace:openshift-console ID:d11bfb33540b6412e1f36575ffa3f01a4537106b914968f1a1f25710d90aa161 UID:fc25d4db-ca44-4b9b-b5f1-c0bed3abd500 NetNS:/var/run/netns/fae7580b-c4e0-4b92-b218-6b2956db817e Networks:[{Name:multus-cni-network Ifname:eth0}] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 17:40:15 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:15.496546882Z" level=info msg="Deleting pod openshift-console_downloads-6778bfc749-9tkv8 from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 17:40:15 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:15.509874207Z" level=info msg="Stopped container b2d6cdee2d1101309cebbd135553153f312137c295e3020ca67a1c92fe7b6190: openshift-monitoring/alertmanager-main-1/config-reloader" id=b9e772e3-e3af-40ac-be42-1325fafe8de6 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:40:15 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:15.525277440Z" level=info msg="Stopped container d0c87217a99f38154ecebed6eef6b64e725ed2930f8ecb95abb393ca96869e97: openshift-monitoring/alertmanager-main-1/prom-label-proxy" id=b66019f4-6a81-44a5-86fc-4770d4d41df8 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:40:15 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:15.544895428Z" level=info msg="Stopped container 1054b7333022434798fabbdf1ce9bf51a379e239c0e46fb4d949c68c98bbe551: openshift-monitoring/alertmanager-main-1/kube-rbac-proxy" id=d456512b-791a-4dd8-8a53-e603beea02ca name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:40:15 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00495|bridge|INFO|bridge br-int: deleted interface d3697aea95661cd on port 32
Feb 23 17:40:15 ip-10-0-136-68 kernel: device d3697aea95661cd left promiscuous mode
Feb 23 17:40:15 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:15.717814 2112 kubelet_node_status.go:590] "Recording event message for node" node="ip-10-0-136-68.us-west-2.compute.internal" event="NodeNotSchedulable"
Feb 23 17:40:15 ip-10-0-136-68 systemd[1]: crio-2e4abab8d13d7e39e5418d4f58e3fdae38b48dfa958876304cb146184b957ab6.scope: Succeeded.
Feb 23 17:40:15 ip-10-0-136-68 systemd[1]: crio-2e4abab8d13d7e39e5418d4f58e3fdae38b48dfa958876304cb146184b957ab6.scope: Consumed 2.186s CPU time
Feb 23 17:40:15 ip-10-0-136-68 systemd[1]: crio-conmon-2e4abab8d13d7e39e5418d4f58e3fdae38b48dfa958876304cb146184b957ab6.scope: Succeeded.
Feb 23 17:40:15 ip-10-0-136-68 systemd[1]: crio-conmon-2e4abab8d13d7e39e5418d4f58e3fdae38b48dfa958876304cb146184b957ab6.scope: Consumed 25ms CPU time
Feb 23 17:40:15 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00496|bridge|INFO|bridge br-int: deleted interface d11bfb33540b641 on port 30
Feb 23 17:40:15 ip-10-0-136-68 kernel: device d11bfb33540b641 left promiscuous mode
Feb 23 17:40:15 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:15.990053557Z" level=info msg="Stopped container 2e4abab8d13d7e39e5418d4f58e3fdae38b48dfa958876304cb146184b957ab6: openshift-monitoring/alertmanager-main-1/alertmanager" id=7cb07948-5d15-4e4e-93bc-d6d67dbdddad name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:40:16 ip-10-0-136-68 crio[2062]: 2023-02-23T17:40:15Z [verbose] Del: openshift-network-diagnostics:network-check-source-5ff44f4c57-4nhbr:7952f7cd-30fa-4974-9514-90e64fd0405a:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","dns":{},"ipam":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxage":5,"logfile-maxbackups":5,"logfile-maxsize":100,"name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay"}
Feb 23 17:40:16 ip-10-0-136-68 crio[2062]: I0223 17:40:15.579138 81343 ovs.go:90] Maximum command line arguments set to: 191102
Feb 23 17:40:16 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:16.120040778Z" level=info msg="Stopped pod sandbox: d3697aea95661cdf7a48775ec994ae0f82fbec2eb65e7995c5a795c4978d6b33" id=b44f52c9-717c-4da8-a05b-01b23746671d name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:40:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:16.125944 2112 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=020f3e1e-9ac7-42d0-8b15-bf2ed04169bb path="/var/lib/kubelet/pods/020f3e1e-9ac7-42d0-8b15-bf2ed04169bb/volumes"
Feb 23 17:40:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:16.127252 2112 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=9677e3d7-a54b-481d-b0af-680e529ee92d path="/var/lib/kubelet/pods/9677e3d7-a54b-481d-b0af-680e529ee92d/volumes"
Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: crio-conmon-359a6f1fe61e3722f81bba3adcb849c0f9fb6b9f2876735a50f0d8272d645322.scope: Succeeded.
Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: crio-conmon-359a6f1fe61e3722f81bba3adcb849c0f9fb6b9f2876735a50f0d8272d645322.scope: Consumed 27ms CPU time
Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: crio-1fbcb5d0be8f137c33fd65ae51ff0104762ac4cd8f6e9e64f372bfb97f038f7d.scope: Succeeded.
Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: crio-1fbcb5d0be8f137c33fd65ae51ff0104762ac4cd8f6e9e64f372bfb97f038f7d.scope: Consumed 1.736s CPU time
Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: crio-conmon-1fbcb5d0be8f137c33fd65ae51ff0104762ac4cd8f6e9e64f372bfb97f038f7d.scope: Succeeded.
Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: crio-conmon-1fbcb5d0be8f137c33fd65ae51ff0104762ac4cd8f6e9e64f372bfb97f038f7d.scope: Consumed 26ms CPU time
Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: crio-d50b5cde1b096262a82aecc0358953f52a2290611f4f0a0dafe882b6090c0156.scope: Succeeded.
Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: crio-d50b5cde1b096262a82aecc0358953f52a2290611f4f0a0dafe882b6090c0156.scope: Consumed 490ms CPU time
Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: crio-707cf8b2ba0348a632e68f119e5e2ab8bfae79a6a595b1ad26638fb0dec903d6.scope: Succeeded.
Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: crio-707cf8b2ba0348a632e68f119e5e2ab8bfae79a6a595b1ad26638fb0dec903d6.scope: Consumed 548ms CPU time
Feb 23 17:40:16 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:16.222179356Z" level=info msg="Stopped container 359a6f1fe61e3722f81bba3adcb849c0f9fb6b9f2876735a50f0d8272d645322: openshift-monitoring/kube-state-metrics-8d585644b-dckcc/kube-rbac-proxy-self" id=28a844c1-644d-4efe-bb14-4fe0baec2194 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: crio-f90c0ac09fa2b841c60cad6b187e9c061e245aa16aa90e30e8276641fad26ba4.scope: Succeeded.
Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: crio-f90c0ac09fa2b841c60cad6b187e9c061e245aa16aa90e30e8276641fad26ba4.scope: Consumed 476ms CPU time
Feb 23 17:40:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:16.232133 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-295pj\" (UniqueName: \"kubernetes.io/projected/7952f7cd-30fa-4974-9514-90e64fd0405a-kube-api-access-295pj\") pod \"7952f7cd-30fa-4974-9514-90e64fd0405a\" (UID: \"7952f7cd-30fa-4974-9514-90e64fd0405a\") "
Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: crio-conmon-f90c0ac09fa2b841c60cad6b187e9c061e245aa16aa90e30e8276641fad26ba4.scope: Succeeded.
Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: crio-conmon-f90c0ac09fa2b841c60cad6b187e9c061e245aa16aa90e30e8276641fad26ba4.scope: Consumed 25ms CPU time
Feb 23 17:40:16 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:16.237383366Z" level=info msg="Stopped container 1fbcb5d0be8f137c33fd65ae51ff0104762ac4cd8f6e9e64f372bfb97f038f7d: openshift-monitoring/kube-state-metrics-8d585644b-dckcc/kube-rbac-proxy-main" id=2f546842-ddb4-4fbb-8784-6cbd20fb988f name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:40:16 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:16.237917169Z" level=info msg="Stopping pod sandbox: a935a237826dd4a0c291a0d7b4b2d6924e94cc35bd3969e76b0f75cf79426a1c" id=fe9d958d-a43b-407b-b044-d70619d7dfe1 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:40:16 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:16.238235369Z" level=info msg="Got pod network &{Name:kube-state-metrics-8d585644b-dckcc Namespace:openshift-monitoring ID:a935a237826dd4a0c291a0d7b4b2d6924e94cc35bd3969e76b0f75cf79426a1c UID:4961f202-10a7-460b-8e62-ce7b7dbb8806 NetNS:/var/run/netns/f629175d-e7a3-4679-aa09-96e74c78cb04 Networks:[{Name:multus-cni-network Ifname:eth0}] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 17:40:16 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:16.238409627Z" level=info msg="Deleting pod openshift-monitoring_kube-state-metrics-8d585644b-dckcc from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 17:40:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:16.245197 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7952f7cd-30fa-4974-9514-90e64fd0405a-kube-api-access-295pj" (OuterVolumeSpecName: "kube-api-access-295pj") pod "7952f7cd-30fa-4974-9514-90e64fd0405a" (UID: "7952f7cd-30fa-4974-9514-90e64fd0405a"). InnerVolumeSpecName "kube-api-access-295pj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: crio-conmon-d50b5cde1b096262a82aecc0358953f52a2290611f4f0a0dafe882b6090c0156.scope: Succeeded.
Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: crio-conmon-d50b5cde1b096262a82aecc0358953f52a2290611f4f0a0dafe882b6090c0156.scope: Consumed 30ms CPU time
Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: crio-359a6f1fe61e3722f81bba3adcb849c0f9fb6b9f2876735a50f0d8272d645322.scope: Succeeded.
Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: crio-359a6f1fe61e3722f81bba3adcb849c0f9fb6b9f2876735a50f0d8272d645322.scope: Consumed 511ms CPU time
Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-11045a0aa9edb13c865f2fd16e206fb2c097e556638bf8761e7ba0aa36cfc628-merged.mount: Succeeded.
Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-11045a0aa9edb13c865f2fd16e206fb2c097e556638bf8761e7ba0aa36cfc628-merged.mount: Consumed 0 CPU time
Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: run-netns-b3efef6f\x2d5f50\x2d4c26\x2da722\x2dfd874ef1762a.mount: Succeeded.
Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: run-netns-b3efef6f\x2d5f50\x2d4c26\x2da722\x2dfd874ef1762a.mount: Consumed 0 CPU time
Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: run-ipcns-b3efef6f\x2d5f50\x2d4c26\x2da722\x2dfd874ef1762a.mount: Succeeded.
Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: run-ipcns-b3efef6f\x2d5f50\x2d4c26\x2da722\x2dfd874ef1762a.mount: Consumed 0 CPU time Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: run-utsns-b3efef6f\x2d5f50\x2d4c26\x2da722\x2dfd874ef1762a.mount: Succeeded. Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: run-utsns-b3efef6f\x2d5f50\x2d4c26\x2da722\x2dfd874ef1762a.mount: Consumed 0 CPU time Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-d3697aea95661cdf7a48775ec994ae0f82fbec2eb65e7995c5a795c4978d6b33-userdata-shm.mount: Succeeded. Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-d3697aea95661cdf7a48775ec994ae0f82fbec2eb65e7995c5a795c4978d6b33-userdata-shm.mount: Consumed 0 CPU time Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-7952f7cd\x2d30fa\x2d4974\x2d9514\x2d90e64fd0405a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d295pj.mount: Succeeded. Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-7952f7cd\x2d30fa\x2d4974\x2d9514\x2d90e64fd0405a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d295pj.mount: Consumed 0 CPU time Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-0bcc651f862d7e68f69e83b1fe81a311bef78057370b6c611938ee7c62d14641-merged.mount: Succeeded. Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-0bcc651f862d7e68f69e83b1fe81a311bef78057370b6c611938ee7c62d14641-merged.mount: Consumed 0 CPU time Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-08b977b73dc96f5700750fd43cac5ac52e09da5c814db2f97222dc7a815333ea-merged.mount: Succeeded. 
Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-08b977b73dc96f5700750fd43cac5ac52e09da5c814db2f97222dc7a815333ea-merged.mount: Consumed 0 CPU time Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-932911bc39a62ce41aeacc466ad28174cd33b910646d35b30a67706e76cb394f-merged.mount: Succeeded. Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-932911bc39a62ce41aeacc466ad28174cd33b910646d35b30a67706e76cb394f-merged.mount: Consumed 0 CPU time Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-7b8aeab2c778bbd659e57fb49e18c552a71cf19248e564cb967a365817cb8315-merged.mount: Succeeded. Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-7b8aeab2c778bbd659e57fb49e18c552a71cf19248e564cb967a365817cb8315-merged.mount: Consumed 0 CPU time Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-1f92e9d51667347a9f17eb908c0fdaba3ae7d986042c945deac4b5a7564b51f8-merged.mount: Succeeded. Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-1f92e9d51667347a9f17eb908c0fdaba3ae7d986042c945deac4b5a7564b51f8-merged.mount: Consumed 0 CPU time Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-1562e34f04206f5633fc1a8cc1f9b88f63ab6fa90649c6c493e16c1a1dc2a6be-merged.mount: Succeeded. Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-1562e34f04206f5633fc1a8cc1f9b88f63ab6fa90649c6c493e16c1a1dc2a6be-merged.mount: Consumed 0 CPU time Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-77280e6d041e5a53d218c86b3bd70f9f5c29ecb51ef40cbf8faedff3fefd21b6-merged.mount: Succeeded. 
Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-77280e6d041e5a53d218c86b3bd70f9f5c29ecb51ef40cbf8faedff3fefd21b6-merged.mount: Consumed 0 CPU time Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-3cb900835dcdcc43fa80e4e92a4a1fb935da6572b91294e8a32167a71598fb7a-merged.mount: Succeeded. Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-3cb900835dcdcc43fa80e4e92a4a1fb935da6572b91294e8a32167a71598fb7a-merged.mount: Consumed 0 CPU time Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-28e814f3e6eb0806357eb7ee4eae0f680ff4308fa8f5334ffeaf9c43ecdb2401-merged.mount: Succeeded. Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-28e814f3e6eb0806357eb7ee4eae0f680ff4308fa8f5334ffeaf9c43ecdb2401-merged.mount: Consumed 0 CPU time Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-d8b8b4c9e33257db6ec8b48454c63d14f9a3df293511e17bf4e0553ef4884892-merged.mount: Succeeded. Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-d8b8b4c9e33257db6ec8b48454c63d14f9a3df293511e17bf4e0553ef4884892-merged.mount: Consumed 0 CPU time Feb 23 17:40:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:16.332610 2112 reconciler.go:399] "Volume detached for volume \"kube-api-access-295pj\" (UniqueName: \"kubernetes.io/projected/7952f7cd-30fa-4974-9514-90e64fd0405a-kube-api-access-295pj\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: crio-a4ea8facc874dcc785d2ac1735b2e0ec11eedc23a9c4dd37f2ea9b1138f8a1b4.scope: Succeeded. Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: crio-a4ea8facc874dcc785d2ac1735b2e0ec11eedc23a9c4dd37f2ea9b1138f8a1b4.scope: Consumed 495ms CPU time Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: crio-conmon-a4ea8facc874dcc785d2ac1735b2e0ec11eedc23a9c4dd37f2ea9b1138f8a1b4.scope: Succeeded. 
Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: crio-conmon-a4ea8facc874dcc785d2ac1735b2e0ec11eedc23a9c4dd37f2ea9b1138f8a1b4.scope: Consumed 23ms CPU time Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-8277fa461ddbadbfb31944fab8bb2ffcd091224f05aaec3667476ae6cf8d5cd2-merged.mount: Succeeded. Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-8277fa461ddbadbfb31944fab8bb2ffcd091224f05aaec3667476ae6cf8d5cd2-merged.mount: Consumed 0 CPU time Feb 23 17:40:16 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:16.408176996Z" level=info msg="Stopped container a4ea8facc874dcc785d2ac1735b2e0ec11eedc23a9c4dd37f2ea9b1138f8a1b4: openshift-monitoring/alertmanager-main-1/kube-rbac-proxy-metric" id=a7899e42-8908-4fda-a8f7-065414b16988 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:40:16 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:16.408611854Z" level=info msg="Stopping pod sandbox: 201d0ba9a16d3f244f86f2085033ad6085fe53d6850b186d42c49024054b3842" id=dbfd8fe3-de7a-4639-ba61-c46aeb139185 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:40:16 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:16.408876565Z" level=info msg="Got pod network &{Name:alertmanager-main-1 Namespace:openshift-monitoring ID:201d0ba9a16d3f244f86f2085033ad6085fe53d6850b186d42c49024054b3842 UID:39a23baf-fee4-4b3a-839f-6c0452a117b2 NetNS:/var/run/netns/8f093b03-dba4-401e-9302-36e7bb0b2da3 Networks:[{Name:multus-cni-network Ifname:eth0}] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:40:16 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:16.409017125Z" level=info msg="Deleting pod openshift-monitoring_alertmanager-main-1 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:40:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:16.474349 2112 generic.go:296] "Generic (PLEG): container finished" podID=aadb02e0-de11-41e9-9dc0-106e1d0fc545 
containerID="d50b5cde1b096262a82aecc0358953f52a2290611f4f0a0dafe882b6090c0156" exitCode=0 Feb 23 17:40:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:16.474379 2112 generic.go:296] "Generic (PLEG): container finished" podID=aadb02e0-de11-41e9-9dc0-106e1d0fc545 containerID="f90c0ac09fa2b841c60cad6b187e9c061e245aa16aa90e30e8276641fad26ba4" exitCode=0 Feb 23 17:40:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:16.474432 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-7df79db5c7-2clx8" event=&{ID:aadb02e0-de11-41e9-9dc0-106e1d0fc545 Type:ContainerDied Data:d50b5cde1b096262a82aecc0358953f52a2290611f4f0a0dafe882b6090c0156} Feb 23 17:40:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:16.474457 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-7df79db5c7-2clx8" event=&{ID:aadb02e0-de11-41e9-9dc0-106e1d0fc545 Type:ContainerDied Data:f90c0ac09fa2b841c60cad6b187e9c061e245aa16aa90e30e8276641fad26ba4} Feb 23 17:40:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:16.477013 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-1_39a23baf-fee4-4b3a-839f-6c0452a117b2/alertmanager-proxy/0.log" Feb 23 17:40:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:16.477341 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-1_39a23baf-fee4-4b3a-839f-6c0452a117b2/config-reloader/0.log" Feb 23 17:40:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:16.478966 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-1_39a23baf-fee4-4b3a-839f-6c0452a117b2/alertmanager/0.log" Feb 23 17:40:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:16.479006 2112 generic.go:296] "Generic (PLEG): container finished" podID=39a23baf-fee4-4b3a-839f-6c0452a117b2 
containerID="2e4abab8d13d7e39e5418d4f58e3fdae38b48dfa958876304cb146184b957ab6" exitCode=0 Feb 23 17:40:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:16.479021 2112 generic.go:296] "Generic (PLEG): container finished" podID=39a23baf-fee4-4b3a-839f-6c0452a117b2 containerID="a4ea8facc874dcc785d2ac1735b2e0ec11eedc23a9c4dd37f2ea9b1138f8a1b4" exitCode=0 Feb 23 17:40:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:16.479070 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:39a23baf-fee4-4b3a-839f-6c0452a117b2 Type:ContainerDied Data:2e4abab8d13d7e39e5418d4f58e3fdae38b48dfa958876304cb146184b957ab6} Feb 23 17:40:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:16.479089 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:39a23baf-fee4-4b3a-839f-6c0452a117b2 Type:ContainerDied Data:a4ea8facc874dcc785d2ac1735b2e0ec11eedc23a9c4dd37f2ea9b1138f8a1b4} Feb 23 17:40:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:16.479108 2112 scope.go:115] "RemoveContainer" containerID="6bb5a5ead71a329476ae68f1d149309024efa4071bcb27a62913e5b8ab90675c" Feb 23 17:40:16 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:16.480135222Z" level=info msg="Removing container: 6bb5a5ead71a329476ae68f1d149309024efa4071bcb27a62913e5b8ab90675c" id=fd10f0e3-3ce6-455b-ada3-468fa3d2f703 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:40:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:16.483336 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5ff44f4c57-4nhbr" event=&{ID:7952f7cd-30fa-4974-9514-90e64fd0405a Type:ContainerDied Data:d3697aea95661cdf7a48775ec994ae0f82fbec2eb65e7995c5a795c4978d6b33} Feb 23 17:40:16 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:16.488627921Z" level=info msg="Stopped container f90c0ac09fa2b841c60cad6b187e9c061e245aa16aa90e30e8276641fad26ba4: 
openshift-monitoring/openshift-state-metrics-7df79db5c7-2clx8/kube-rbac-proxy-main" id=c5eb55e9-faa4-4e28-bdfd-c5665b190cf5 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:40:16 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:16.489004168Z" level=info msg="Stopped container d50b5cde1b096262a82aecc0358953f52a2290611f4f0a0dafe882b6090c0156: openshift-monitoring/openshift-state-metrics-7df79db5c7-2clx8/kube-rbac-proxy-self" id=119c36e4-e971-41ec-b600-fdbbc22cfa36 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: crio-conmon-707cf8b2ba0348a632e68f119e5e2ab8bfae79a6a595b1ad26638fb0dec903d6.scope: Succeeded. Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: crio-conmon-707cf8b2ba0348a632e68f119e5e2ab8bfae79a6a595b1ad26638fb0dec903d6.scope: Consumed 30ms CPU time Feb 23 17:40:16 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:16.502109837Z" level=info msg="Stopping pod sandbox: e14bdaaa341fca1606f9a0c3b7c37d9baae6f7be7df0452c453184abfdc328f8" id=8cf7765b-383e-4657-b6bd-310db509ca1e name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:40:16 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:16.502978568Z" level=info msg="Got pod network &{Name:openshift-state-metrics-7df79db5c7-2clx8 Namespace:openshift-monitoring ID:e14bdaaa341fca1606f9a0c3b7c37d9baae6f7be7df0452c453184abfdc328f8 UID:aadb02e0-de11-41e9-9dc0-106e1d0fc545 NetNS:/var/run/netns/94ffd883-9806-4091-b7f1-c2e08049ae3b Networks:[{Name:multus-cni-network Ifname:eth0}] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:40:16 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:16.503134330Z" level=info msg="Deleting pod openshift-monitoring_openshift-state-metrics-7df79db5c7-2clx8 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: Removed slice libcontainer container kubepods-burstable-pod7952f7cd_30fa_4974_9514_90e64fd0405a.slice. 
Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: kubepods-burstable-pod7952f7cd_30fa_4974_9514_90e64fd0405a.slice: Consumed 2.249s CPU time Feb 23 17:40:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:16.513650 2112 generic.go:296] "Generic (PLEG): container finished" podID=4961f202-10a7-460b-8e62-ce7b7dbb8806 containerID="359a6f1fe61e3722f81bba3adcb849c0f9fb6b9f2876735a50f0d8272d645322" exitCode=0 Feb 23 17:40:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:16.513800 2112 generic.go:296] "Generic (PLEG): container finished" podID=4961f202-10a7-460b-8e62-ce7b7dbb8806 containerID="1fbcb5d0be8f137c33fd65ae51ff0104762ac4cd8f6e9e64f372bfb97f038f7d" exitCode=0 Feb 23 17:40:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:16.513825 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-8d585644b-dckcc" event=&{ID:4961f202-10a7-460b-8e62-ce7b7dbb8806 Type:ContainerDied Data:359a6f1fe61e3722f81bba3adcb849c0f9fb6b9f2876735a50f0d8272d645322} Feb 23 17:40:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:16.513847 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-8d585644b-dckcc" event=&{ID:4961f202-10a7-460b-8e62-ce7b7dbb8806 Type:ContainerDied Data:1fbcb5d0be8f137c33fd65ae51ff0104762ac4cd8f6e9e64f372bfb97f038f7d} Feb 23 17:40:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:16.516378 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-network-diagnostics/network-check-source-5ff44f4c57-4nhbr] Feb 23 17:40:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:16.523033 2112 kubelet.go:2129] "SyncLoop REMOVE" source="api" pods=[openshift-network-diagnostics/network-check-source-5ff44f4c57-4nhbr] Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-dc3343a58106bbb86290379d45743d2f9ce3cb1b57abfe82972a22c6b4fd4997-merged.mount: Succeeded. 
Feb 23 17:40:16 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-dc3343a58106bbb86290379d45743d2f9ce3cb1b57abfe82972a22c6b4fd4997-merged.mount: Consumed 0 CPU time Feb 23 17:40:16 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:16.588036203Z" level=info msg="Removed container 6bb5a5ead71a329476ae68f1d149309024efa4071bcb27a62913e5b8ab90675c: openshift-monitoring/alertmanager-main-1/alertmanager" id=fd10f0e3-3ce6-455b-ada3-468fa3d2f703 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:40:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:16.588309 2112 scope.go:115] "RemoveContainer" containerID="13b50e95ba7acee28832e992c331d7564264429a4a3c5680afcc7e73be8a3d5e" Feb 23 17:40:16 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:16.589280390Z" level=info msg="Removing container: 13b50e95ba7acee28832e992c331d7564264429a4a3c5680afcc7e73be8a3d5e" id=2f297421-4460-4f35-8fac-bfd4d4f6d838 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:40:16 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00497|bridge|INFO|bridge br-int: deleted interface a935a237826dd4a on port 25 Feb 23 17:40:16 ip-10-0-136-68 kernel: device a935a237826dd4a left promiscuous mode Feb 23 17:40:16 ip-10-0-136-68 crio[2062]: 2023-02-23T17:40:15Z [error] SetNetworkStatus: failed to query the pod downloads-6778bfc749-9tkv8 in out of cluster comm: pods "downloads-6778bfc749-9tkv8" not found Feb 23 17:40:16 ip-10-0-136-68 crio[2062]: 2023-02-23T17:40:15Z [error] Multus: error unsetting the networks status: SetNetworkStatus: failed to query the pod downloads-6778bfc749-9tkv8 in out of cluster comm: pods "downloads-6778bfc749-9tkv8" not found Feb 23 17:40:16 ip-10-0-136-68 crio[2062]: 2023-02-23T17:40:15Z [verbose] Del: openshift-console:downloads-6778bfc749-9tkv8:unknownUID:ovn-kubernetes:eth0 
{"cniVersion":"0.4.0","dns":{},"ipam":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxage":5,"logfile-maxbackups":5,"logfile-maxsize":100,"name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay"} Feb 23 17:40:16 ip-10-0-136-68 crio[2062]: I0223 17:40:15.651165 81357 ovs.go:90] Maximum command line arguments set to: 191102 Feb 23 17:40:16 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:16.753039389Z" level=info msg="Removed container 13b50e95ba7acee28832e992c331d7564264429a4a3c5680afcc7e73be8a3d5e: openshift-network-diagnostics/network-check-source-5ff44f4c57-4nhbr/check-endpoints" id=2f297421-4460-4f35-8fac-bfd4d4f6d838 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:40:16 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:16.832715813Z" level=info msg="Stopped pod sandbox: d11bfb33540b6412e1f36575ffa3f01a4537106b914968f1a1f25710d90aa161" id=3de01a36-c6d6-46bb-af89-1ee65b96f19c name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:40:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:16.941857 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4md7q\" (UniqueName: \"kubernetes.io/projected/fc25d4db-ca44-4b9b-b5f1-c0bed3abd500-kube-api-access-4md7q\") pod \"fc25d4db-ca44-4b9b-b5f1-c0bed3abd500\" (UID: \"fc25d4db-ca44-4b9b-b5f1-c0bed3abd500\") " Feb 23 17:40:16 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:16.955897 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc25d4db-ca44-4b9b-b5f1-c0bed3abd500-kube-api-access-4md7q" (OuterVolumeSpecName: "kube-api-access-4md7q") pod "fc25d4db-ca44-4b9b-b5f1-c0bed3abd500" (UID: "fc25d4db-ca44-4b9b-b5f1-c0bed3abd500"). InnerVolumeSpecName "kube-api-access-4md7q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:40:16 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:16.972412567Z" level=info msg="Stopped container 707cf8b2ba0348a632e68f119e5e2ab8bfae79a6a595b1ad26638fb0dec903d6: openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n/kube-rbac-proxy" id=566bf4d7-cf71-4e92-9863-6e0d2f452d6a name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:40:16 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:16.972793414Z" level=info msg="Stopping pod sandbox: 6cf6d0bcdfd8248c38237257eece74b58cb7a23a7a19d8609c2b0ab53a64a4cf" id=93f8c443-ba2d-49ea-b052-b5fb391ee85f name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:40:16 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:16.973018619Z" level=info msg="Got pod network &{Name:telemeter-client-5df7cd6cd7-cpr6n Namespace:openshift-monitoring ID:6cf6d0bcdfd8248c38237257eece74b58cb7a23a7a19d8609c2b0ab53a64a4cf UID:0a5a348d-9766-4727-93ec-147703d44b68 NetNS:/var/run/netns/404137e4-2392-4ff4-9680-48f7cffed564 Networks:[{Name:multus-cni-network Ifname:eth0}] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:40:16 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:16.973163634Z" level=info msg="Deleting pod openshift-monitoring_telemeter-client-5df7cd6cd7-cpr6n from CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:40:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:17.042973 2112 reconciler.go:399] "Volume detached for volume \"kube-api-access-4md7q\" (UniqueName: \"kubernetes.io/projected/fc25d4db-ca44-4b9b-b5f1-c0bed3abd500-kube-api-access-4md7q\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:40:17 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00498|bridge|INFO|bridge br-int: deleted interface 201d0ba9a16d3f2 on port 27 Feb 23 17:40:17 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00499|bridge|INFO|bridge br-int: deleted interface e14bdaaa341fca1 on port 24 Feb 23 17:40:17 
ip-10-0-136-68 crio[2062]: 2023-02-23T17:40:16Z [verbose] Del: openshift-monitoring:kube-state-metrics-8d585644b-dckcc:4961f202-10a7-460b-8e62-ce7b7dbb8806:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","dns":{},"ipam":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxage":5,"logfile-maxbackups":5,"logfile-maxsize":100,"name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay"} Feb 23 17:40:17 ip-10-0-136-68 crio[2062]: I0223 17:40:16.453063 81527 ovs.go:90] Maximum command line arguments set to: 191102 Feb 23 17:40:17 ip-10-0-136-68 kernel: device 201d0ba9a16d3f2 left promiscuous mode Feb 23 17:40:17 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-02e824a376bd516c6aa91bdfa4154fc15fd68dc2ca9d5d107ad59fadc51d2b48-merged.mount: Succeeded. Feb 23 17:40:17 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-02e824a376bd516c6aa91bdfa4154fc15fd68dc2ca9d5d107ad59fadc51d2b48-merged.mount: Consumed 0 CPU time Feb 23 17:40:17 ip-10-0-136-68 systemd[1]: run-netns-fae7580b\x2dc4e0\x2d4b92\x2db218\x2d6b2956db817e.mount: Succeeded. Feb 23 17:40:17 ip-10-0-136-68 systemd[1]: run-netns-fae7580b\x2dc4e0\x2d4b92\x2db218\x2d6b2956db817e.mount: Consumed 0 CPU time Feb 23 17:40:17 ip-10-0-136-68 systemd[1]: run-ipcns-fae7580b\x2dc4e0\x2d4b92\x2db218\x2d6b2956db817e.mount: Succeeded. Feb 23 17:40:17 ip-10-0-136-68 systemd[1]: run-ipcns-fae7580b\x2dc4e0\x2d4b92\x2db218\x2d6b2956db817e.mount: Consumed 0 CPU time Feb 23 17:40:17 ip-10-0-136-68 systemd[1]: run-utsns-fae7580b\x2dc4e0\x2d4b92\x2db218\x2d6b2956db817e.mount: Succeeded. Feb 23 17:40:17 ip-10-0-136-68 systemd[1]: run-utsns-fae7580b\x2dc4e0\x2d4b92\x2db218\x2d6b2956db817e.mount: Consumed 0 CPU time Feb 23 17:40:17 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-d11bfb33540b6412e1f36575ffa3f01a4537106b914968f1a1f25710d90aa161-userdata-shm.mount: Succeeded. 
Feb 23 17:40:17 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-d11bfb33540b6412e1f36575ffa3f01a4537106b914968f1a1f25710d90aa161-userdata-shm.mount: Consumed 0 CPU time Feb 23 17:40:17 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-fc25d4db\x2dca44\x2d4b9b\x2db5f1\x2dc0bed3abd500-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4md7q.mount: Succeeded. Feb 23 17:40:17 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-fc25d4db\x2dca44\x2d4b9b\x2db5f1\x2dc0bed3abd500-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4md7q.mount: Consumed 0 CPU time Feb 23 17:40:17 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-76575330c9a5936c122a522763ee221cc056aed35ed79d372f10596be65d6ad0-merged.mount: Succeeded. Feb 23 17:40:17 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-76575330c9a5936c122a522763ee221cc056aed35ed79d372f10596be65d6ad0-merged.mount: Consumed 0 CPU time Feb 23 17:40:17 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-f63cb90b25c541de658f937beb033bcf9b6bcdcc3125282198a8b50145075c35-merged.mount: Succeeded. Feb 23 17:40:17 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-f63cb90b25c541de658f937beb033bcf9b6bcdcc3125282198a8b50145075c35-merged.mount: Consumed 0 CPU time Feb 23 17:40:17 ip-10-0-136-68 systemd[1]: run-utsns-f629175d\x2de7a3\x2d4679\x2daa09\x2d96e74c78cb04.mount: Succeeded. Feb 23 17:40:17 ip-10-0-136-68 systemd[1]: run-utsns-f629175d\x2de7a3\x2d4679\x2daa09\x2d96e74c78cb04.mount: Consumed 0 CPU time Feb 23 17:40:17 ip-10-0-136-68 systemd[1]: run-ipcns-f629175d\x2de7a3\x2d4679\x2daa09\x2d96e74c78cb04.mount: Succeeded. Feb 23 17:40:17 ip-10-0-136-68 systemd[1]: run-ipcns-f629175d\x2de7a3\x2d4679\x2daa09\x2d96e74c78cb04.mount: Consumed 0 CPU time Feb 23 17:40:17 ip-10-0-136-68 systemd[1]: run-netns-f629175d\x2de7a3\x2d4679\x2daa09\x2d96e74c78cb04.mount: Succeeded. 
Feb 23 17:40:17 ip-10-0-136-68 systemd[1]: run-netns-f629175d\x2de7a3\x2d4679\x2daa09\x2d96e74c78cb04.mount: Consumed 0 CPU time Feb 23 17:40:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:17.302735940Z" level=info msg="Stopped pod sandbox: a935a237826dd4a0c291a0d7b4b2d6924e94cc35bd3969e76b0f75cf79426a1c" id=fe9d958d-a43b-407b-b044-d70619d7dfe1 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:40:17 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-a935a237826dd4a0c291a0d7b4b2d6924e94cc35bd3969e76b0f75cf79426a1c-userdata-shm.mount: Succeeded. Feb 23 17:40:17 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-a935a237826dd4a0c291a0d7b4b2d6924e94cc35bd3969e76b0f75cf79426a1c-userdata-shm.mount: Consumed 0 CPU time Feb 23 17:40:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:17.344411 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t274d\" (UniqueName: \"kubernetes.io/projected/4961f202-10a7-460b-8e62-ce7b7dbb8806-kube-api-access-t274d\") pod \"4961f202-10a7-460b-8e62-ce7b7dbb8806\" (UID: \"4961f202-10a7-460b-8e62-ce7b7dbb8806\") " Feb 23 17:40:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:17.344456 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/4961f202-10a7-460b-8e62-ce7b7dbb8806-kube-state-metrics-tls\") pod \"4961f202-10a7-460b-8e62-ce7b7dbb8806\" (UID: \"4961f202-10a7-460b-8e62-ce7b7dbb8806\") " Feb 23 17:40:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:17.344481 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4961f202-10a7-460b-8e62-ce7b7dbb8806-metrics-client-ca\") pod \"4961f202-10a7-460b-8e62-ce7b7dbb8806\" (UID: \"4961f202-10a7-460b-8e62-ce7b7dbb8806\") " Feb 23 17:40:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:17.344525 2112 reconciler.go:211] 
"operationExecutor.UnmountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/4961f202-10a7-460b-8e62-ce7b7dbb8806-volume-directive-shadow\") pod \"4961f202-10a7-460b-8e62-ce7b7dbb8806\" (UID: \"4961f202-10a7-460b-8e62-ce7b7dbb8806\") " Feb 23 17:40:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:17.344575 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/4961f202-10a7-460b-8e62-ce7b7dbb8806-kube-state-metrics-kube-rbac-proxy-config\") pod \"4961f202-10a7-460b-8e62-ce7b7dbb8806\" (UID: \"4961f202-10a7-460b-8e62-ce7b7dbb8806\") " Feb 23 17:40:17 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:40:17.344744 2112 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/4961f202-10a7-460b-8e62-ce7b7dbb8806/volumes/kubernetes.io~configmap/metrics-client-ca: clearQuota called, but quotas disabled Feb 23 17:40:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:17.344997 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4961f202-10a7-460b-8e62-ce7b7dbb8806-metrics-client-ca" (OuterVolumeSpecName: "metrics-client-ca") pod "4961f202-10a7-460b-8e62-ce7b7dbb8806" (UID: "4961f202-10a7-460b-8e62-ce7b7dbb8806"). InnerVolumeSpecName "metrics-client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:40:17 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:40:17.345060 2112 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/4961f202-10a7-460b-8e62-ce7b7dbb8806/volumes/kubernetes.io~empty-dir/volume-directive-shadow: clearQuota called, but quotas disabled Feb 23 17:40:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:17.345088 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4961f202-10a7-460b-8e62-ce7b7dbb8806-volume-directive-shadow" (OuterVolumeSpecName: "volume-directive-shadow") pod "4961f202-10a7-460b-8e62-ce7b7dbb8806" (UID: "4961f202-10a7-460b-8e62-ce7b7dbb8806"). InnerVolumeSpecName "volume-directive-shadow". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 17:40:17 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-4961f202\x2d10a7\x2d460b\x2d8e62\x2dce7b7dbb8806-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt274d.mount: Succeeded. Feb 23 17:40:17 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-4961f202\x2d10a7\x2d460b\x2d8e62\x2dce7b7dbb8806-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt274d.mount: Consumed 0 CPU time Feb 23 17:40:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:17.361056 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4961f202-10a7-460b-8e62-ce7b7dbb8806-kube-state-metrics-tls" (OuterVolumeSpecName: "kube-state-metrics-tls") pod "4961f202-10a7-460b-8e62-ce7b7dbb8806" (UID: "4961f202-10a7-460b-8e62-ce7b7dbb8806"). InnerVolumeSpecName "kube-state-metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:40:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:17.363862 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4961f202-10a7-460b-8e62-ce7b7dbb8806-kube-api-access-t274d" (OuterVolumeSpecName: "kube-api-access-t274d") pod "4961f202-10a7-460b-8e62-ce7b7dbb8806" (UID: "4961f202-10a7-460b-8e62-ce7b7dbb8806"). InnerVolumeSpecName "kube-api-access-t274d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:40:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:17.367849 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4961f202-10a7-460b-8e62-ce7b7dbb8806-kube-state-metrics-kube-rbac-proxy-config" (OuterVolumeSpecName: "kube-state-metrics-kube-rbac-proxy-config") pod "4961f202-10a7-460b-8e62-ce7b7dbb8806" (UID: "4961f202-10a7-460b-8e62-ce7b7dbb8806"). InnerVolumeSpecName "kube-state-metrics-kube-rbac-proxy-config". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:40:17 ip-10-0-136-68 kernel: device e14bdaaa341fca1 left promiscuous mode
Feb 23 17:40:17 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00500|bridge|INFO|bridge br-int: deleted interface 6cf6d0bcdfd8248 on port 23
Feb 23 17:40:17 ip-10-0-136-68 kernel: device 6cf6d0bcdfd8248 left promiscuous mode
Feb 23 17:40:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:17.445882 2112 reconciler.go:399] "Volume detached for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/4961f202-10a7-460b-8e62-ce7b7dbb8806-volume-directive-shadow\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:40:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:17.445919 2112 reconciler.go:399] "Volume detached for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/4961f202-10a7-460b-8e62-ce7b7dbb8806-kube-state-metrics-kube-rbac-proxy-config\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:40:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:17.445936 2112 reconciler.go:399] "Volume detached for volume \"kube-api-access-t274d\" (UniqueName: \"kubernetes.io/projected/4961f202-10a7-460b-8e62-ce7b7dbb8806-kube-api-access-t274d\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:40:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:17.445952 2112 reconciler.go:399] "Volume detached for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/4961f202-10a7-460b-8e62-ce7b7dbb8806-kube-state-metrics-tls\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:40:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:17.445968 2112 reconciler.go:399] "Volume detached for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4961f202-10a7-460b-8e62-ce7b7dbb8806-metrics-client-ca\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:40:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:17.518082 2112 generic.go:296] "Generic (PLEG): container finished" podID=0a5a348d-9766-4727-93ec-147703d44b68 containerID="707cf8b2ba0348a632e68f119e5e2ab8bfae79a6a595b1ad26638fb0dec903d6" exitCode=0
Feb 23 17:40:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:17.518139 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n" event=&{ID:0a5a348d-9766-4727-93ec-147703d44b68 Type:ContainerDied Data:707cf8b2ba0348a632e68f119e5e2ab8bfae79a6a595b1ad26638fb0dec903d6}
Feb 23 17:40:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:17.520006 2112 scope.go:115] "RemoveContainer" containerID="f61bfedd3366631c6437a0ba7abfedacbc4b136b27a24b01b61888781896b508"
Feb 23 17:40:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:17.526733 2112 status_manager.go:652] "Status for pod is up-to-date; skipping" podUID=fc25d4db-ca44-4b9b-b5f1-c0bed3abd500
Feb 23 17:40:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:17.528110112Z" level=info msg="Removing container: f61bfedd3366631c6437a0ba7abfedacbc4b136b27a24b01b61888781896b508" id=8c81e7bd-a947-4812-b85f-265af60c5592 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:40:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:17.529568 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-8d585644b-dckcc" event=&{ID:4961f202-10a7-460b-8e62-ce7b7dbb8806 Type:ContainerDied Data:a935a237826dd4a0c291a0d7b4b2d6924e94cc35bd3969e76b0f75cf79426a1c}
Feb 23 17:40:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:17.537459 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-1_39a23baf-fee4-4b3a-839f-6c0452a117b2/alertmanager-proxy/0.log"
Feb 23 17:40:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:17.537824 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-1_39a23baf-fee4-4b3a-839f-6c0452a117b2/config-reloader/0.log"
Feb 23 17:40:17 ip-10-0-136-68 systemd[1]: Removed slice libcontainer container kubepods-burstable-pod4961f202_10a7_460b_8e62_ce7b7dbb8806.slice.
Feb 23 17:40:17 ip-10-0-136-68 systemd[1]: kubepods-burstable-pod4961f202_10a7_460b_8e62_ce7b7dbb8806.slice: Consumed 6.336s CPU time
Feb 23 17:40:17 ip-10-0-136-68 systemd[1]: Removed slice libcontainer container kubepods-burstable-podfc25d4db_ca44_4b9b_b5f1_c0bed3abd500.slice.
Feb 23 17:40:17 ip-10-0-136-68 systemd[1]: kubepods-burstable-podfc25d4db_ca44_4b9b_b5f1_c0bed3abd500.slice: Consumed 2.699s CPU time
Feb 23 17:40:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:17.630719 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-monitoring/kube-state-metrics-8d585644b-dckcc]
Feb 23 17:40:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:17.682550 2112 kubelet.go:2129] "SyncLoop REMOVE" source="api" pods=[openshift-monitoring/kube-state-metrics-8d585644b-dckcc]
Feb 23 17:40:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:17.684947 2112 status_manager.go:652] "Status for pod is up-to-date; skipping" podUID=fc25d4db-ca44-4b9b-b5f1-c0bed3abd500
Feb 23 17:40:17 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:17.745156008Z" level=warning msg="Found defunct process with PID 62857 (haproxy)"
Feb 23 17:40:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:18.087432156Z" level=info msg="Removed container f61bfedd3366631c6437a0ba7abfedacbc4b136b27a24b01b61888781896b508: openshift-console/downloads-6778bfc749-9tkv8/download-server" id=8c81e7bd-a947-4812-b85f-265af60c5592 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.087784 2112 scope.go:115] "RemoveContainer" containerID="359a6f1fe61e3722f81bba3adcb849c0f9fb6b9f2876735a50f0d8272d645322"
Feb 23 17:40:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:18.088606775Z" level=info msg="Removing container: 359a6f1fe61e3722f81bba3adcb849c0f9fb6b9f2876735a50f0d8272d645322" id=600210d5-6923-4121-b3e5-ef33359c6a29 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:40:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:18.110880893Z" level=info msg="Removed container 359a6f1fe61e3722f81bba3adcb849c0f9fb6b9f2876735a50f0d8272d645322: openshift-monitoring/kube-state-metrics-8d585644b-dckcc/kube-rbac-proxy-self" id=600210d5-6923-4121-b3e5-ef33359c6a29 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.111203 2112 scope.go:115] "RemoveContainer" containerID="1fbcb5d0be8f137c33fd65ae51ff0104762ac4cd8f6e9e64f372bfb97f038f7d"
Feb 23 17:40:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:18.112044537Z" level=info msg="Removing container: 1fbcb5d0be8f137c33fd65ae51ff0104762ac4cd8f6e9e64f372bfb97f038f7d" id=82c41373-41c2-42d9-9db0-ad7186720877 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.119073 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/kube-state-metrics-8d585644b-dckcc" podUID=4961f202-10a7-460b-8e62-ce7b7dbb8806 containerName="kube-rbac-proxy-main" containerID="cri-o://1fbcb5d0be8f137c33fd65ae51ff0104762ac4cd8f6e9e64f372bfb97f038f7d" gracePeriod=1
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.119734 2112 kuberuntime_container.go:702] "Killing container with a grace period" pod="openshift-monitoring/kube-state-metrics-8d585644b-dckcc" podUID=4961f202-10a7-460b-8e62-ce7b7dbb8806 containerName="kube-rbac-proxy-self" containerID="cri-o://359a6f1fe61e3722f81bba3adcb849c0f9fb6b9f2876735a50f0d8272d645322" gracePeriod=1
Feb 23 17:40:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:18.120069351Z" level=info msg="Stopping container: 359a6f1fe61e3722f81bba3adcb849c0f9fb6b9f2876735a50f0d8272d645322 (timeout: 1s)" id=e8ad21c5-fc0b-4d84-b419-c24678d4beef name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:40:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:18.120271652Z" level=info msg="Stopping container: 1fbcb5d0be8f137c33fd65ae51ff0104762ac4cd8f6e9e64f372bfb97f038f7d (timeout: 1s)" id=651b2b45-2b23-4fa4-b93f-6b2d7140ae15 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:40:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:18.120304580Z" level=info msg="Stopping pod sandbox: d11bfb33540b6412e1f36575ffa3f01a4537106b914968f1a1f25710d90aa161" id=103ee1eb-664c-4b32-a803-09ef21c69165 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:40:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:18.120320889Z" level=info msg="Stopped pod sandbox (already stopped): d11bfb33540b6412e1f36575ffa3f01a4537106b914968f1a1f25710d90aa161" id=103ee1eb-664c-4b32-a803-09ef21c69165 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:40:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:18.120351873Z" level=info msg="Stopping pod sandbox: d3697aea95661cdf7a48775ec994ae0f82fbec2eb65e7995c5a795c4978d6b33" id=7940dd55-b01f-4448-9e70-4455a58451e6 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:40:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:18.120367489Z" level=info msg="Stopped pod sandbox (already stopped): d3697aea95661cdf7a48775ec994ae0f82fbec2eb65e7995c5a795c4978d6b33" id=7940dd55-b01f-4448-9e70-4455a58451e6 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:40:18.121804 2112 remote_runtime.go:505] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"359a6f1fe61e3722f81bba3adcb849c0f9fb6b9f2876735a50f0d8272d645322\": container with ID starting with 359a6f1fe61e3722f81bba3adcb849c0f9fb6b9f2876735a50f0d8272d645322 not found: ID does not exist" containerID="359a6f1fe61e3722f81bba3adcb849c0f9fb6b9f2876735a50f0d8272d645322"
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.123211 2112 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=4961f202-10a7-460b-8e62-ce7b7dbb8806 path="/var/lib/kubelet/pods/4961f202-10a7-460b-8e62-ce7b7dbb8806/volumes"
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.124711 2112 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=7952f7cd-30fa-4974-9514-90e64fd0405a path="/var/lib/kubelet/pods/7952f7cd-30fa-4974-9514-90e64fd0405a/volumes"
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.125125 2112 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=fc25d4db-ca44-4b9b-b5f1-c0bed3abd500 path="/var/lib/kubelet/pods/fc25d4db-ca44-4b9b-b5f1-c0bed3abd500/volumes"
Feb 23 17:40:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:18.135823271Z" level=info msg="Stopped container 1fbcb5d0be8f137c33fd65ae51ff0104762ac4cd8f6e9e64f372bfb97f038f7d: openshift-monitoring/kube-state-metrics-8d585644b-dckcc/kube-rbac-proxy-main" id=651b2b45-2b23-4fa4-b93f-6b2d7140ae15 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:40:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:18.136090213Z" level=info msg="Stopping pod sandbox: a935a237826dd4a0c291a0d7b4b2d6924e94cc35bd3969e76b0f75cf79426a1c" id=600ca427-d717-4af8-bfa4-fc86f100d213 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:40:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:18.136122033Z" level=info msg="Stopped pod sandbox (already stopped): a935a237826dd4a0c291a0d7b4b2d6924e94cc35bd3969e76b0f75cf79426a1c" id=600ca427-d717-4af8-bfa4-fc86f100d213 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:40:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:18.138200318Z" level=info msg="Removed container 1fbcb5d0be8f137c33fd65ae51ff0104762ac4cd8f6e9e64f372bfb97f038f7d: openshift-monitoring/kube-state-metrics-8d585644b-dckcc/kube-rbac-proxy-main" id=82c41373-41c2-42d9-9db0-ad7186720877 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.138515 2112 scope.go:115] "RemoveContainer" containerID="b7acd6c812a55fc3ab3146988c9369e690b314651c361ef6c6ff01d3d175f472"
Feb 23 17:40:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:18.139268020Z" level=info msg="Removing container: b7acd6c812a55fc3ab3146988c9369e690b314651c361ef6c6ff01d3d175f472" id=13b4f38c-45ff-4ed9-9286-8d26a1a32932 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:40:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:18.163092696Z" level=info msg="Removed container b7acd6c812a55fc3ab3146988c9369e690b314651c361ef6c6ff01d3d175f472: openshift-monitoring/kube-state-metrics-8d585644b-dckcc/kube-state-metrics" id=13b4f38c-45ff-4ed9-9286-8d26a1a32932 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:40:18 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-4961f202\x2d10a7\x2d460b\x2d8e62\x2dce7b7dbb8806-volumes-kubernetes.io\x7esecret-kube\x2dstate\x2dmetrics\x2dtls.mount: Succeeded.
Feb 23 17:40:18 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-4961f202\x2d10a7\x2d460b\x2d8e62\x2dce7b7dbb8806-volumes-kubernetes.io\x7esecret-kube\x2dstate\x2dmetrics\x2dtls.mount: Consumed 0 CPU time
Feb 23 17:40:18 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-4961f202\x2d10a7\x2d460b\x2d8e62\x2dce7b7dbb8806-volumes-kubernetes.io\x7esecret-kube\x2dstate\x2dmetrics\x2dkube\x2drbac\x2dproxy\x2dconfig.mount: Succeeded.
Feb 23 17:40:18 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-4961f202\x2d10a7\x2d460b\x2d8e62\x2dce7b7dbb8806-volumes-kubernetes.io\x7esecret-kube\x2dstate\x2dmetrics\x2dkube\x2drbac\x2dproxy\x2dconfig.mount: Consumed 0 CPU time
Feb 23 17:40:18 ip-10-0-136-68 crio[2062]: 2023-02-23T17:40:16Z [verbose] Del: openshift-monitoring:alertmanager-main-1:39a23baf-fee4-4b3a-839f-6c0452a117b2:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","dns":{},"ipam":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxage":5,"logfile-maxbackups":5,"logfile-maxsize":100,"name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay"}
Feb 23 17:40:18 ip-10-0-136-68 crio[2062]: I0223 17:40:16.789743 81580 ovs.go:90] Maximum command line arguments set to: 191102
Feb 23 17:40:18 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-db4bbf1fad6f7092d9401e9177784dc1ee02df16a6d8de14c1e91cb41f4dd7a6-merged.mount: Succeeded.
Feb 23 17:40:18 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-db4bbf1fad6f7092d9401e9177784dc1ee02df16a6d8de14c1e91cb41f4dd7a6-merged.mount: Consumed 0 CPU time
Feb 23 17:40:18 ip-10-0-136-68 crio[2062]: 2023-02-23T17:40:16Z [verbose] Del: openshift-monitoring:openshift-state-metrics-7df79db5c7-2clx8:aadb02e0-de11-41e9-9dc0-106e1d0fc545:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","dns":{},"ipam":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxage":5,"logfile-maxbackups":5,"logfile-maxsize":100,"name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay"}
Feb 23 17:40:18 ip-10-0-136-68 crio[2062]: I0223 17:40:16.784839 81613 ovs.go:90] Maximum command line arguments set to: 191102
Feb 23 17:40:18 ip-10-0-136-68 systemd[1]: run-utsns-8f093b03\x2ddba4\x2d401e\x2d9302\x2d36e7bb0b2da3.mount: Succeeded.
Feb 23 17:40:18 ip-10-0-136-68 systemd[1]: run-utsns-8f093b03\x2ddba4\x2d401e\x2d9302\x2d36e7bb0b2da3.mount: Consumed 0 CPU time
Feb 23 17:40:18 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-586a3c7afc18e2237962d7a441364adc33f3aaee9567222ec51f40341d43557f-merged.mount: Succeeded.
Feb 23 17:40:18 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-586a3c7afc18e2237962d7a441364adc33f3aaee9567222ec51f40341d43557f-merged.mount: Consumed 0 CPU time
Feb 23 17:40:18 ip-10-0-136-68 systemd[1]: run-ipcns-8f093b03\x2ddba4\x2d401e\x2d9302\x2d36e7bb0b2da3.mount: Succeeded.
Feb 23 17:40:18 ip-10-0-136-68 systemd[1]: run-ipcns-8f093b03\x2ddba4\x2d401e\x2d9302\x2d36e7bb0b2da3.mount: Consumed 0 CPU time
Feb 23 17:40:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:18.344729426Z" level=info msg="Stopped pod sandbox: 201d0ba9a16d3f244f86f2085033ad6085fe53d6850b186d42c49024054b3842" id=dbfd8fe3-de7a-4639-ba61-c46aeb139185 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.352946 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-1_39a23baf-fee4-4b3a-839f-6c0452a117b2/alertmanager-proxy/0.log"
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.353249 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-1_39a23baf-fee4-4b3a-839f-6c0452a117b2/config-reloader/0.log"
Feb 23 17:40:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:18.363717424Z" level=info msg="Stopped pod sandbox: e14bdaaa341fca1606f9a0c3b7c37d9baae6f7be7df0452c453184abfdc328f8" id=8cf7765b-383e-4657-b6bd-310db509ca1e name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00001|ovs_rcu(urcu5)|WARN|blocked 1000 ms waiting for main to quiesce
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.463725 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/39a23baf-fee4-4b3a-839f-6c0452a117b2-alertmanager-main-db\") pod \"39a23baf-fee4-4b3a-839f-6c0452a117b2\" (UID: \"39a23baf-fee4-4b3a-839f-6c0452a117b2\") "
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.463783 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/aadb02e0-de11-41e9-9dc0-106e1d0fc545-openshift-state-metrics-kube-rbac-proxy-config\") pod \"aadb02e0-de11-41e9-9dc0-106e1d0fc545\" (UID: \"aadb02e0-de11-41e9-9dc0-106e1d0fc545\") "
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.463811 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/aadb02e0-de11-41e9-9dc0-106e1d0fc545-metrics-client-ca\") pod \"aadb02e0-de11-41e9-9dc0-106e1d0fc545\" (UID: \"aadb02e0-de11-41e9-9dc0-106e1d0fc545\") "
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.463836 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/aadb02e0-de11-41e9-9dc0-106e1d0fc545-openshift-state-metrics-tls\") pod \"aadb02e0-de11-41e9-9dc0-106e1d0fc545\" (UID: \"aadb02e0-de11-41e9-9dc0-106e1d0fc545\") "
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.463862 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/39a23baf-fee4-4b3a-839f-6c0452a117b2-metrics-client-ca\") pod \"39a23baf-fee4-4b3a-839f-6c0452a117b2\" (UID: \"39a23baf-fee4-4b3a-839f-6c0452a117b2\") "
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.463893 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-main-proxy\" (UniqueName: \"kubernetes.io/secret/39a23baf-fee4-4b3a-839f-6c0452a117b2-secret-alertmanager-main-proxy\") pod \"39a23baf-fee4-4b3a-839f-6c0452a117b2\" (UID: \"39a23baf-fee4-4b3a-839f-6c0452a117b2\") "
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.463923 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/39a23baf-fee4-4b3a-839f-6c0452a117b2-config-out\") pod \"39a23baf-fee4-4b3a-839f-6c0452a117b2\" (UID: \"39a23baf-fee4-4b3a-839f-6c0452a117b2\") "
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.463949 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/39a23baf-fee4-4b3a-839f-6c0452a117b2-secret-alertmanager-kube-rbac-proxy\") pod \"39a23baf-fee4-4b3a-839f-6c0452a117b2\" (UID: \"39a23baf-fee4-4b3a-839f-6c0452a117b2\") "
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.463980 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/39a23baf-fee4-4b3a-839f-6c0452a117b2-web-config\") pod \"39a23baf-fee4-4b3a-839f-6c0452a117b2\" (UID: \"39a23baf-fee4-4b3a-839f-6c0452a117b2\") "
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.464009 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/39a23baf-fee4-4b3a-839f-6c0452a117b2-tls-assets\") pod \"39a23baf-fee4-4b3a-839f-6c0452a117b2\" (UID: \"39a23baf-fee4-4b3a-839f-6c0452a117b2\") "
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.464040 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39a23baf-fee4-4b3a-839f-6c0452a117b2-alertmanager-trusted-ca-bundle\") pod \"39a23baf-fee4-4b3a-839f-6c0452a117b2\" (UID: \"39a23baf-fee4-4b3a-839f-6c0452a117b2\") "
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.464070 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/39a23baf-fee4-4b3a-839f-6c0452a117b2-config-volume\") pod \"39a23baf-fee4-4b3a-839f-6c0452a117b2\" (UID: \"39a23baf-fee4-4b3a-839f-6c0452a117b2\") "
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.464106 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/39a23baf-fee4-4b3a-839f-6c0452a117b2-secret-alertmanager-kube-rbac-proxy-metric\") pod \"39a23baf-fee4-4b3a-839f-6c0452a117b2\" (UID: \"39a23baf-fee4-4b3a-839f-6c0452a117b2\") "
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.464136 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlv6w\" (UniqueName: \"kubernetes.io/projected/39a23baf-fee4-4b3a-839f-6c0452a117b2-kube-api-access-mlv6w\") pod \"39a23baf-fee4-4b3a-839f-6c0452a117b2\" (UID: \"39a23baf-fee4-4b3a-839f-6c0452a117b2\") "
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.464166 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/39a23baf-fee4-4b3a-839f-6c0452a117b2-secret-alertmanager-main-tls\") pod \"39a23baf-fee4-4b3a-839f-6c0452a117b2\" (UID: \"39a23baf-fee4-4b3a-839f-6c0452a117b2\") "
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.464196 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tz7zg\" (UniqueName: \"kubernetes.io/projected/aadb02e0-de11-41e9-9dc0-106e1d0fc545-kube-api-access-tz7zg\") pod \"aadb02e0-de11-41e9-9dc0-106e1d0fc545\" (UID: \"aadb02e0-de11-41e9-9dc0-106e1d0fc545\") "
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:40:18.464824 2112 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/39a23baf-fee4-4b3a-839f-6c0452a117b2/volumes/kubernetes.io~empty-dir/alertmanager-main-db: clearQuota called, but quotas disabled
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.464935 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39a23baf-fee4-4b3a-839f-6c0452a117b2-alertmanager-main-db" (OuterVolumeSpecName: "alertmanager-main-db") pod "39a23baf-fee4-4b3a-839f-6c0452a117b2" (UID: "39a23baf-fee4-4b3a-839f-6c0452a117b2"). InnerVolumeSpecName "alertmanager-main-db". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:40:18.466543 2112 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/aadb02e0-de11-41e9-9dc0-106e1d0fc545/volumes/kubernetes.io~configmap/metrics-client-ca: clearQuota called, but quotas disabled
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:40:18.466993 2112 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/39a23baf-fee4-4b3a-839f-6c0452a117b2/volumes/kubernetes.io~configmap/alertmanager-trusted-ca-bundle: clearQuota called, but quotas disabled
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.467277 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aadb02e0-de11-41e9-9dc0-106e1d0fc545-metrics-client-ca" (OuterVolumeSpecName: "metrics-client-ca") pod "aadb02e0-de11-41e9-9dc0-106e1d0fc545" (UID: "aadb02e0-de11-41e9-9dc0-106e1d0fc545"). InnerVolumeSpecName "metrics-client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:40:18.467457 2112 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/39a23baf-fee4-4b3a-839f-6c0452a117b2/volumes/kubernetes.io~configmap/metrics-client-ca: clearQuota called, but quotas disabled
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.467588 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39a23baf-fee4-4b3a-839f-6c0452a117b2-alertmanager-trusted-ca-bundle" (OuterVolumeSpecName: "alertmanager-trusted-ca-bundle") pod "39a23baf-fee4-4b3a-839f-6c0452a117b2" (UID: "39a23baf-fee4-4b3a-839f-6c0452a117b2"). InnerVolumeSpecName "alertmanager-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.467893 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39a23baf-fee4-4b3a-839f-6c0452a117b2-metrics-client-ca" (OuterVolumeSpecName: "metrics-client-ca") pod "39a23baf-fee4-4b3a-839f-6c0452a117b2" (UID: "39a23baf-fee4-4b3a-839f-6c0452a117b2"). InnerVolumeSpecName "metrics-client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.475997 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aadb02e0-de11-41e9-9dc0-106e1d0fc545-kube-api-access-tz7zg" (OuterVolumeSpecName: "kube-api-access-tz7zg") pod "aadb02e0-de11-41e9-9dc0-106e1d0fc545" (UID: "aadb02e0-de11-41e9-9dc0-106e1d0fc545"). InnerVolumeSpecName "kube-api-access-tz7zg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.478897 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aadb02e0-de11-41e9-9dc0-106e1d0fc545-openshift-state-metrics-kube-rbac-proxy-config" (OuterVolumeSpecName: "openshift-state-metrics-kube-rbac-proxy-config") pod "aadb02e0-de11-41e9-9dc0-106e1d0fc545" (UID: "aadb02e0-de11-41e9-9dc0-106e1d0fc545"). InnerVolumeSpecName "openshift-state-metrics-kube-rbac-proxy-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.480940 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39a23baf-fee4-4b3a-839f-6c0452a117b2-kube-api-access-mlv6w" (OuterVolumeSpecName: "kube-api-access-mlv6w") pod "39a23baf-fee4-4b3a-839f-6c0452a117b2" (UID: "39a23baf-fee4-4b3a-839f-6c0452a117b2"). InnerVolumeSpecName "kube-api-access-mlv6w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.483857 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39a23baf-fee4-4b3a-839f-6c0452a117b2-secret-alertmanager-main-tls" (OuterVolumeSpecName: "secret-alertmanager-main-tls") pod "39a23baf-fee4-4b3a-839f-6c0452a117b2" (UID: "39a23baf-fee4-4b3a-839f-6c0452a117b2"). InnerVolumeSpecName "secret-alertmanager-main-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.485841 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39a23baf-fee4-4b3a-839f-6c0452a117b2-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "39a23baf-fee4-4b3a-839f-6c0452a117b2" (UID: "39a23baf-fee4-4b3a-839f-6c0452a117b2"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.486959 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aadb02e0-de11-41e9-9dc0-106e1d0fc545-openshift-state-metrics-tls" (OuterVolumeSpecName: "openshift-state-metrics-tls") pod "aadb02e0-de11-41e9-9dc0-106e1d0fc545" (UID: "aadb02e0-de11-41e9-9dc0-106e1d0fc545"). InnerVolumeSpecName "openshift-state-metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.488859 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39a23baf-fee4-4b3a-839f-6c0452a117b2-config-out" (OuterVolumeSpecName: "config-out") pod "39a23baf-fee4-4b3a-839f-6c0452a117b2" (UID: "39a23baf-fee4-4b3a-839f-6c0452a117b2"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.488859 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39a23baf-fee4-4b3a-839f-6c0452a117b2-secret-alertmanager-kube-rbac-proxy-metric" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy-metric") pod "39a23baf-fee4-4b3a-839f-6c0452a117b2" (UID: "39a23baf-fee4-4b3a-839f-6c0452a117b2"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy-metric". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.493875 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39a23baf-fee4-4b3a-839f-6c0452a117b2-secret-alertmanager-kube-rbac-proxy" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy") pod "39a23baf-fee4-4b3a-839f-6c0452a117b2" (UID: "39a23baf-fee4-4b3a-839f-6c0452a117b2"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.498843 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39a23baf-fee4-4b3a-839f-6c0452a117b2-config-volume" (OuterVolumeSpecName: "config-volume") pod "39a23baf-fee4-4b3a-839f-6c0452a117b2" (UID: "39a23baf-fee4-4b3a-839f-6c0452a117b2"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.504848 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39a23baf-fee4-4b3a-839f-6c0452a117b2-secret-alertmanager-main-proxy" (OuterVolumeSpecName: "secret-alertmanager-main-proxy") pod "39a23baf-fee4-4b3a-839f-6c0452a117b2" (UID: "39a23baf-fee4-4b3a-839f-6c0452a117b2"). InnerVolumeSpecName "secret-alertmanager-main-proxy". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.509809 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39a23baf-fee4-4b3a-839f-6c0452a117b2-web-config" (OuterVolumeSpecName: "web-config") pod "39a23baf-fee4-4b3a-839f-6c0452a117b2" (UID: "39a23baf-fee4-4b3a-839f-6c0452a117b2"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.540843 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-7df79db5c7-2clx8" event=&{ID:aadb02e0-de11-41e9-9dc0-106e1d0fc545 Type:ContainerDied Data:e14bdaaa341fca1606f9a0c3b7c37d9baae6f7be7df0452c453184abfdc328f8}
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.540876 2112 scope.go:115] "RemoveContainer" containerID="329b52729ecbc1d70f36f97051984466ea00d33a05ba330cca8d0ef02aeaa83c"
Feb 23 17:40:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:18.542082640Z" level=info msg="Removing container: 329b52729ecbc1d70f36f97051984466ea00d33a05ba330cca8d0ef02aeaa83c" id=98976016-c909-4e79-99a0-edf6b6622c18 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.546935 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-1_39a23baf-fee4-4b3a-839f-6c0452a117b2/alertmanager-proxy/0.log"
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.548039 2112 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-1_39a23baf-fee4-4b3a-839f-6c0452a117b2/config-reloader/0.log"
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.548104 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event=&{ID:39a23baf-fee4-4b3a-839f-6c0452a117b2 Type:ContainerDied Data:201d0ba9a16d3f244f86f2085033ad6085fe53d6850b186d42c49024054b3842}
Feb 23 17:40:18 ip-10-0-136-68 systemd[1]: Removed slice libcontainer container kubepods-burstable-podaadb02e0_de11_41e9_9dc0_106e1d0fc545.slice.
Feb 23 17:40:18 ip-10-0-136-68 systemd[1]: kubepods-burstable-podaadb02e0_de11_41e9_9dc0_106e1d0fc545.slice: Consumed 1.313s CPU time
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.564598 2112 reconciler.go:399] "Volume detached for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/39a23baf-fee4-4b3a-839f-6c0452a117b2-metrics-client-ca\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.564631 2112 reconciler.go:399] "Volume detached for volume \"secret-alertmanager-main-proxy\" (UniqueName: \"kubernetes.io/secret/39a23baf-fee4-4b3a-839f-6c0452a117b2-secret-alertmanager-main-proxy\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.564647 2112 reconciler.go:399] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/39a23baf-fee4-4b3a-839f-6c0452a117b2-config-out\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.564769 2112 reconciler.go:399] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/39a23baf-fee4-4b3a-839f-6c0452a117b2-secret-alertmanager-kube-rbac-proxy\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.564785 2112 reconciler.go:399] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/39a23baf-fee4-4b3a-839f-6c0452a117b2-web-config\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.564797 2112 reconciler.go:399] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/39a23baf-fee4-4b3a-839f-6c0452a117b2-tls-assets\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.564814 2112 reconciler.go:399] "Volume detached for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39a23baf-fee4-4b3a-839f-6c0452a117b2-alertmanager-trusted-ca-bundle\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.564827 2112 reconciler.go:399] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/39a23baf-fee4-4b3a-839f-6c0452a117b2-config-volume\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.564843 2112 reconciler.go:399] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/39a23baf-fee4-4b3a-839f-6c0452a117b2-secret-alertmanager-kube-rbac-proxy-metric\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.564859 2112 reconciler.go:399] "Volume detached for volume \"kube-api-access-mlv6w\" (UniqueName: \"kubernetes.io/projected/39a23baf-fee4-4b3a-839f-6c0452a117b2-kube-api-access-mlv6w\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.564874 2112 reconciler.go:399] "Volume detached for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/39a23baf-fee4-4b3a-839f-6c0452a117b2-secret-alertmanager-main-tls\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.564890 2112 reconciler.go:399] "Volume detached for volume \"kube-api-access-tz7zg\" (UniqueName: \"kubernetes.io/projected/aadb02e0-de11-41e9-9dc0-106e1d0fc545-kube-api-access-tz7zg\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.564904 2112 reconciler.go:399] "Volume detached for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/39a23baf-fee4-4b3a-839f-6c0452a117b2-alertmanager-main-db\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.564921 2112 reconciler.go:399] "Volume detached for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/aadb02e0-de11-41e9-9dc0-106e1d0fc545-openshift-state-metrics-kube-rbac-proxy-config\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.564936 2112 reconciler.go:399] "Volume detached for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/aadb02e0-de11-41e9-9dc0-106e1d0fc545-metrics-client-ca\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.564952 2112 reconciler.go:399] "Volume detached for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/aadb02e0-de11-41e9-9dc0-106e1d0fc545-openshift-state-metrics-tls\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 17:40:18 ip-10-0-136-68 systemd[1]: Removed slice libcontainer container kubepods-burstable-pod39a23baf_fee4_4b3a_839f_6c0452a117b2.slice.
Feb 23 17:40:18 ip-10-0-136-68 systemd[1]: kubepods-burstable-pod39a23baf_fee4_4b3a_839f_6c0452a117b2.slice: Consumed 6.008s CPU time Feb 23 17:40:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:18.575603235Z" level=info msg="Removed container 329b52729ecbc1d70f36f97051984466ea00d33a05ba330cca8d0ef02aeaa83c: openshift-monitoring/openshift-state-metrics-7df79db5c7-2clx8/openshift-state-metrics" id=98976016-c909-4e79-99a0-edf6b6622c18 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.576465 2112 scope.go:115] "RemoveContainer" containerID="d50b5cde1b096262a82aecc0358953f52a2290611f4f0a0dafe882b6090c0156" Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00501|timeval|WARN|Unreasonably long 1176ms poll interval (2ms user, 2ms system) Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00502|timeval|WARN|context switches: 40 voluntary, 9 involuntary Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00503|coverage|INFO|Event coverage, avg rate over last: 5 seconds, last minute, last hour, hash=4256c5ac: Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00504|coverage|INFO|bridge_reconfigure 0.6/sec 0.050/sec 0.0322/sec total: 189 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00505|coverage|INFO|ofproto_flush 0.0/sec 0.000/sec 0.0000/sec total: 3 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00506|coverage|INFO|ofproto_packet_out 0.8/sec 0.883/sec 0.4697/sec total: 1798 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00507|coverage|INFO|ofproto_recv_openflow 55.0/sec 7.550/sec 5.4381/sec total: 24191 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00508|coverage|INFO|ofproto_update_port 0.4/sec 0.033/sec 0.0197/sec total: 160 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00509|coverage|INFO|rev_reconfigure 0.4/sec 0.033/sec 0.0094/sec total: 55 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00510|coverage|INFO|rev_port_toggled 0.0/sec 
0.000/sec 0.0006/sec total: 13 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00511|coverage|INFO|rev_flow_table 2.0/sec 0.317/sec 0.2819/sec total: 1143 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00512|coverage|INFO|rev_mac_learning 0.0/sec 0.017/sec 0.0175/sec total: 87 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00513|coverage|INFO|handler_duplicate_upcall 0.0/sec 0.700/sec 1.1253/sec total: 4737 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00514|coverage|INFO|upcall_ukey_replace 0.0/sec 0.000/sec 0.0039/sec total: 17 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00515|coverage|INFO|xlate_actions 859.6/sec 122.617/sec 131.1750/sec total: 531733 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00516|coverage|INFO|ccmap_expand 0.0/sec 0.000/sec 0.0083/sec total: 125 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00517|coverage|INFO|ccmap_shrink 0.8/sec 0.067/sec 0.0469/sec total: 169 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00518|coverage|INFO|cmap_expand 5.4/sec 0.450/sec 0.4642/sec total: 3609 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00519|coverage|INFO|cmap_shrink 6.2/sec 0.517/sec 0.4836/sec total: 3043 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00520|coverage|INFO|dpif_port_add 0.0/sec 0.000/sec 0.0044/sec total: 35 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00521|coverage|INFO|dpif_port_del 0.8/sec 0.067/sec 0.0100/sec total: 48 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00522|coverage|INFO|dpif_flow_flush 0.0/sec 0.000/sec 0.0000/sec total: 3 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00523|coverage|INFO|dpif_flow_get 0.0/sec 0.000/sec 0.0000/sec total: 24 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00524|coverage|INFO|dpif_flow_put 25.4/sec 9.167/sec 12.8833/sec total: 54355 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00525|coverage|INFO|dpif_flow_del 13.2/sec 7.867/sec 11.7861/sec total: 49353 
Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00526|coverage|INFO|dpif_execute 11.0/sec 10.400/sec 14.9461/sec total: 62643 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00527|coverage|INFO|dpif_execute_with_help 1.0/sec 1.050/sec 0.6894/sec total: 2731 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00528|coverage|INFO|dpif_meter_set 0.0/sec 0.000/sec 0.0047/sec total: 22 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00529|coverage|INFO|dpif_meter_del 0.0/sec 0.000/sec 0.0044/sec total: 17 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00530|coverage|INFO|flow_extract 10.6/sec 9.367/sec 14.2731/sec total: 60037 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00531|coverage|INFO|miniflow_malloc 96.0/sec 10.433/sec 7.6131/sec total: 36062 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00532|coverage|INFO|hindex_pathological 0.0/sec 0.000/sec 0.0000/sec total: 16 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00533|coverage|INFO|hindex_expand 0.0/sec 0.000/sec 0.0000/sec total: 10 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00534|coverage|INFO|hmap_pathological 2.8/sec 0.400/sec 0.3994/sec total: 1596 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00535|coverage|INFO|hmap_expand 248.0/sec 78.000/sec 61.8869/sec total: 253852 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00536|coverage|INFO|mac_learning_learned 0.0/sec 0.000/sec 0.0000/sec total: 4 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00537|coverage|INFO|mac_learning_expired 0.0/sec 0.000/sec 0.0000/sec total: 2 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00538|coverage|INFO|mac_learning_moved 0.0/sec 0.033/sec 0.0356/sec total: 154 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00539|coverage|INFO|netdev_get_stats 0.0/sec 4.600/sec 4.5711/sec total: 18629 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00540|coverage|INFO|txn_unchanged 0.8/sec 0.133/sec 0.1303/sec total: 624 Feb 23 
17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00541|coverage|INFO|txn_incomplete 0.4/sec 0.233/sec 0.2178/sec total: 949 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00542|coverage|INFO|txn_success 0.2/sec 0.200/sec 0.2031/sec total: 857 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00543|coverage|INFO|poll_create_node 1497.0/sec 455.350/sec 376.5133/sec total: 1540273 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00544|coverage|INFO|poll_zero_timeout 25.4/sec 10.267/sec 14.2872/sec total: 60227 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00545|coverage|INFO|rconn_queued 35.4/sec 6.133/sec 3.5703/sec total: 14333 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00546|coverage|INFO|rconn_sent 35.4/sec 6.133/sec 3.5703/sec total: 14333 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00547|coverage|INFO|seq_change 2463.6/sec 1244.267/sec 1214.1933/sec total: 4982961 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00548|coverage|INFO|pstream_open 0.0/sec 0.000/sec 0.0000/sec total: 9 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00549|coverage|INFO|stream_open 0.0/sec 0.000/sec 0.0000/sec total: 1 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00550|coverage|INFO|unixctl_received 0.0/sec 0.100/sec 0.1025/sec total: 417 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00551|coverage|INFO|unixctl_replied 0.0/sec 0.100/sec 0.1025/sec total: 417 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00552|coverage|INFO|util_xalloc 29376.2/sec 5626.150/sec 4154.2022/sec total: 17088383 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00553|coverage|INFO|vconn_received 58.6/sec 8.183/sec 5.8100/sec total: 25693 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00554|coverage|INFO|vconn_sent 45.0/sec 7.617/sec 4.3689/sec total: 17587 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00555|coverage|INFO|netdev_set_policing 0.0/sec 0.000/sec 0.0089/sec total: 74 Feb 23 17:40:18 
ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00556|coverage|INFO|netdev_get_ifindex 3.0/sec 0.317/sec 0.2036/sec total: 1054 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00557|coverage|INFO|netdev_set_hwaddr 0.0/sec 0.000/sec 0.0000/sec total: 3 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00558|coverage|INFO|netdev_get_ethtool 0.0/sec 0.000/sec 0.0178/sec total: 126 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00559|coverage|INFO|netdev_set_ethtool 0.0/sec 0.000/sec 0.0044/sec total: 31 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00560|coverage|INFO|netlink_received 100.0/sec 38.800/sec 47.8122/sec total: 203052 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00561|coverage|INFO|netlink_recv_jumbo 6.8/sec 5.817/sec 6.4739/sec total: 26616 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00562|coverage|INFO|netlink_sent 116.8/sec 44.633/sec 56.2708/sec total: 238287 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00563|coverage|INFO|nln_changed 1.6/sec 0.133/sec 0.0928/sec total: 598 Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00564|coverage|INFO|90 events never hit Feb 23 17:40:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:18.610778990Z" level=info msg="Removing container: d50b5cde1b096262a82aecc0358953f52a2290611f4f0a0dafe882b6090c0156" id=2dab68c6-a40a-46fa-a331-2a9b9f244fe4 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.647067 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-monitoring/openshift-state-metrics-7df79db5c7-2clx8] Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.653193 2112 kubelet.go:2129] "SyncLoop REMOVE" source="api" pods=[openshift-monitoring/openshift-state-metrics-7df79db5c7-2clx8] Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00565|connmgr|INFO|br-ex<->unix#1921: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:40:18 ip-10-0-136-68 crio[2062]: 
2023-02-23T17:40:17Z [verbose] Del: openshift-monitoring:telemeter-client-5df7cd6cd7-cpr6n:0a5a348d-9766-4727-93ec-147703d44b68:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","dns":{},"ipam":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxage":5,"logfile-maxbackups":5,"logfile-maxsize":100,"name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay"} Feb 23 17:40:18 ip-10-0-136-68 crio[2062]: I0223 17:40:17.185019 81648 ovs.go:90] Maximum command line arguments set to: 191102 Feb 23 17:40:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:18.693711961Z" level=info msg="Removed container d50b5cde1b096262a82aecc0358953f52a2290611f4f0a0dafe882b6090c0156: openshift-monitoring/openshift-state-metrics-7df79db5c7-2clx8/kube-rbac-proxy-self" id=2dab68c6-a40a-46fa-a331-2a9b9f244fe4 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.695213 2112 scope.go:115] "RemoveContainer" containerID="f90c0ac09fa2b841c60cad6b187e9c061e245aa16aa90e30e8276641fad26ba4" Feb 23 17:40:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:18.699747054Z" level=info msg="Removing container: f90c0ac09fa2b841c60cad6b187e9c061e245aa16aa90e30e8276641fad26ba4" id=06be9027-d54d-48a6-8a57-c1713df493dd name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00566|connmgr|INFO|br-ex<->unix#1924: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: E0223 17:40:18.730885 2112 cadvisor_stats_provider.go:457] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39a23baf_fee4_4b3a_839f_6c0452a117b2.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39a23baf_fee4_4b3a_839f_6c0452a117b2.slice/crio-b2d6cdee2d1101309cebbd135553153f312137c295e3020ca67a1c92fe7b6190.scope\": 
RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39a23baf_fee4_4b3a_839f_6c0452a117b2.slice/crio-d0c87217a99f38154ecebed6eef6b64e725ed2930f8ecb95abb393ca96869e97.scope\": RecentStats: unable to find data in memory cache]" Feb 23 17:40:18 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00567|connmgr|INFO|br-ex<->unix#1927: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:40:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:18.748256410Z" level=info msg="Removed container f90c0ac09fa2b841c60cad6b187e9c061e245aa16aa90e30e8276641fad26ba4: openshift-monitoring/openshift-state-metrics-7df79db5c7-2clx8/kube-rbac-proxy-main" id=06be9027-d54d-48a6-8a57-c1713df493dd name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.748573 2112 scope.go:115] "RemoveContainer" containerID="2e4abab8d13d7e39e5418d4f58e3fdae38b48dfa958876304cb146184b957ab6" Feb 23 17:40:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:18.749554021Z" level=info msg="Removing container: 2e4abab8d13d7e39e5418d4f58e3fdae38b48dfa958876304cb146184b957ab6" id=c27d7904-0dd4-4c59-8b95-0e7d3ef28f58 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.756790 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-monitoring/alertmanager-main-1] Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.764090 2112 kubelet.go:2129] "SyncLoop REMOVE" source="api" pods=[openshift-monitoring/alertmanager-main-1] Feb 23 17:40:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:18.774719765Z" level=info msg="Removed container 2e4abab8d13d7e39e5418d4f58e3fdae38b48dfa958876304cb146184b957ab6: openshift-monitoring/alertmanager-main-1/alertmanager" id=c27d7904-0dd4-4c59-8b95-0e7d3ef28f58 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.774937 2112 
scope.go:115] "RemoveContainer" containerID="d0c87217a99f38154ecebed6eef6b64e725ed2930f8ecb95abb393ca96869e97" Feb 23 17:40:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:18.775712801Z" level=info msg="Stopped pod sandbox: 6cf6d0bcdfd8248c38237257eece74b58cb7a23a7a19d8609c2b0ab53a64a4cf" id=93f8c443-ba2d-49ea-b052-b5fb391ee85f name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:40:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:18.775747231Z" level=info msg="Removing container: d0c87217a99f38154ecebed6eef6b64e725ed2930f8ecb95abb393ca96869e97" id=9245a555-44f6-4116-a2cd-91e9ed7a0b0c name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:40:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:18.796430259Z" level=info msg="Removed container d0c87217a99f38154ecebed6eef6b64e725ed2930f8ecb95abb393ca96869e97: openshift-monitoring/alertmanager-main-1/prom-label-proxy" id=9245a555-44f6-4116-a2cd-91e9ed7a0b0c name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.796632 2112 scope.go:115] "RemoveContainer" containerID="a4ea8facc874dcc785d2ac1735b2e0ec11eedc23a9c4dd37f2ea9b1138f8a1b4" Feb 23 17:40:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:18.797251252Z" level=info msg="Removing container: a4ea8facc874dcc785d2ac1735b2e0ec11eedc23a9c4dd37f2ea9b1138f8a1b4" id=5cf65f1b-4ab6-4e22-973b-2889ac8d9846 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:40:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:18.813393690Z" level=info msg="Removed container a4ea8facc874dcc785d2ac1735b2e0ec11eedc23a9c4dd37f2ea9b1138f8a1b4: openshift-monitoring/alertmanager-main-1/kube-rbac-proxy-metric" id=5cf65f1b-4ab6-4e22-973b-2889ac8d9846 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.813581 2112 scope.go:115] "RemoveContainer" containerID="1054b7333022434798fabbdf1ce9bf51a379e239c0e46fb4d949c68c98bbe551" Feb 23 17:40:18 ip-10-0-136-68 
crio[2062]: time="2023-02-23 17:40:18.821670965Z" level=info msg="Removing container: 1054b7333022434798fabbdf1ce9bf51a379e239c0e46fb4d949c68c98bbe551" id=64bd173c-1a4e-4673-8577-333dd1ac4da8 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:40:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:18.840746017Z" level=info msg="Removed container 1054b7333022434798fabbdf1ce9bf51a379e239c0e46fb4d949c68c98bbe551: openshift-monitoring/alertmanager-main-1/kube-rbac-proxy" id=64bd173c-1a4e-4673-8577-333dd1ac4da8 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.841027 2112 scope.go:115] "RemoveContainer" containerID="8f5b322a10577d8ccff9095740b8d9e58de49e07103a658f20525af08232f8f9" Feb 23 17:40:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:18.842029885Z" level=info msg="Removing container: 8f5b322a10577d8ccff9095740b8d9e58de49e07103a658f20525af08232f8f9" id=0622b162-6d13-421a-8b0f-0f85756aa839 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:40:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:18.858353183Z" level=info msg="Removed container 8f5b322a10577d8ccff9095740b8d9e58de49e07103a658f20525af08232f8f9: openshift-monitoring/alertmanager-main-1/alertmanager-proxy" id=0622b162-6d13-421a-8b0f-0f85756aa839 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.858555 2112 scope.go:115] "RemoveContainer" containerID="b2d6cdee2d1101309cebbd135553153f312137c295e3020ca67a1c92fe7b6190" Feb 23 17:40:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:18.859229210Z" level=info msg="Removing container: b2d6cdee2d1101309cebbd135553153f312137c295e3020ca67a1c92fe7b6190" id=73be5301-ade3-4a2c-ab33-d06485c94a4d name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.873019 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume 
\"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0a5a348d-9766-4727-93ec-147703d44b68-secret-telemeter-client-kube-rbac-proxy-config\") pod \"0a5a348d-9766-4727-93ec-147703d44b68\" (UID: \"0a5a348d-9766-4727-93ec-147703d44b68\") " Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.873064 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/0a5a348d-9766-4727-93ec-147703d44b68-telemeter-client-tls\") pod \"0a5a348d-9766-4727-93ec-147703d44b68\" (UID: \"0a5a348d-9766-4727-93ec-147703d44b68\") " Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.873089 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a5a348d-9766-4727-93ec-147703d44b68-serving-certs-ca-bundle\") pod \"0a5a348d-9766-4727-93ec-147703d44b68\" (UID: \"0a5a348d-9766-4727-93ec-147703d44b68\") " Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.873123 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrfsx\" (UniqueName: \"kubernetes.io/projected/0a5a348d-9766-4727-93ec-147703d44b68-kube-api-access-lrfsx\") pod \"0a5a348d-9766-4727-93ec-147703d44b68\" (UID: \"0a5a348d-9766-4727-93ec-147703d44b68\") " Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.873141 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/0a5a348d-9766-4727-93ec-147703d44b68-secret-telemeter-client\") pod \"0a5a348d-9766-4727-93ec-147703d44b68\" (UID: \"0a5a348d-9766-4727-93ec-147703d44b68\") " Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.873159 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"metrics-client-ca\" (UniqueName: 
\"kubernetes.io/configmap/0a5a348d-9766-4727-93ec-147703d44b68-metrics-client-ca\") pod \"0a5a348d-9766-4727-93ec-147703d44b68\" (UID: \"0a5a348d-9766-4727-93ec-147703d44b68\") " Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.873178 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a5a348d-9766-4727-93ec-147703d44b68-telemeter-trusted-ca-bundle\") pod \"0a5a348d-9766-4727-93ec-147703d44b68\" (UID: \"0a5a348d-9766-4727-93ec-147703d44b68\") " Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:40:18.873397 2112 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/0a5a348d-9766-4727-93ec-147703d44b68/volumes/kubernetes.io~configmap/telemeter-trusted-ca-bundle: clearQuota called, but quotas disabled Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.873581 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a5a348d-9766-4727-93ec-147703d44b68-telemeter-trusted-ca-bundle" (OuterVolumeSpecName: "telemeter-trusted-ca-bundle") pod "0a5a348d-9766-4727-93ec-147703d44b68" (UID: "0a5a348d-9766-4727-93ec-147703d44b68"). InnerVolumeSpecName "telemeter-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:40:18.874230 2112 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/0a5a348d-9766-4727-93ec-147703d44b68/volumes/kubernetes.io~configmap/metrics-client-ca: clearQuota called, but quotas disabled Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.874425 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a5a348d-9766-4727-93ec-147703d44b68-metrics-client-ca" (OuterVolumeSpecName: "metrics-client-ca") pod "0a5a348d-9766-4727-93ec-147703d44b68" (UID: "0a5a348d-9766-4727-93ec-147703d44b68"). 
InnerVolumeSpecName "metrics-client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:40:18.874522 2112 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/0a5a348d-9766-4727-93ec-147703d44b68/volumes/kubernetes.io~configmap/serving-certs-ca-bundle: clearQuota called, but quotas disabled Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.874780 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a5a348d-9766-4727-93ec-147703d44b68-serving-certs-ca-bundle" (OuterVolumeSpecName: "serving-certs-ca-bundle") pod "0a5a348d-9766-4727-93ec-147703d44b68" (UID: "0a5a348d-9766-4727-93ec-147703d44b68"). InnerVolumeSpecName "serving-certs-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:40:18 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:18.882126745Z" level=info msg="Removed container b2d6cdee2d1101309cebbd135553153f312137c295e3020ca67a1c92fe7b6190: openshift-monitoring/alertmanager-main-1/config-reloader" id=73be5301-ade3-4a2c-ab33-d06485c94a4d name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.888829 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a5a348d-9766-4727-93ec-147703d44b68-secret-telemeter-client-kube-rbac-proxy-config" (OuterVolumeSpecName: "secret-telemeter-client-kube-rbac-proxy-config") pod "0a5a348d-9766-4727-93ec-147703d44b68" (UID: "0a5a348d-9766-4727-93ec-147703d44b68"). InnerVolumeSpecName "secret-telemeter-client-kube-rbac-proxy-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.890849 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a5a348d-9766-4727-93ec-147703d44b68-kube-api-access-lrfsx" (OuterVolumeSpecName: "kube-api-access-lrfsx") pod "0a5a348d-9766-4727-93ec-147703d44b68" (UID: "0a5a348d-9766-4727-93ec-147703d44b68"). InnerVolumeSpecName "kube-api-access-lrfsx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.894809 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a5a348d-9766-4727-93ec-147703d44b68-telemeter-client-tls" (OuterVolumeSpecName: "telemeter-client-tls") pod "0a5a348d-9766-4727-93ec-147703d44b68" (UID: "0a5a348d-9766-4727-93ec-147703d44b68"). InnerVolumeSpecName "telemeter-client-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.899807 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a5a348d-9766-4727-93ec-147703d44b68-secret-telemeter-client" (OuterVolumeSpecName: "secret-telemeter-client") pod "0a5a348d-9766-4727-93ec-147703d44b68" (UID: "0a5a348d-9766-4727-93ec-147703d44b68"). InnerVolumeSpecName "secret-telemeter-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.974012 2112 reconciler.go:399] "Volume detached for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0a5a348d-9766-4727-93ec-147703d44b68-secret-telemeter-client-kube-rbac-proxy-config\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.974061 2112 reconciler.go:399] "Volume detached for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/0a5a348d-9766-4727-93ec-147703d44b68-telemeter-client-tls\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.974079 2112 reconciler.go:399] "Volume detached for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a5a348d-9766-4727-93ec-147703d44b68-serving-certs-ca-bundle\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.974095 2112 reconciler.go:399] "Volume detached for volume \"kube-api-access-lrfsx\" (UniqueName: \"kubernetes.io/projected/0a5a348d-9766-4727-93ec-147703d44b68-kube-api-access-lrfsx\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.974113 2112 reconciler.go:399] "Volume detached for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/0a5a348d-9766-4727-93ec-147703d44b68-secret-telemeter-client\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.974129 2112 reconciler.go:399] "Volume detached for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0a5a348d-9766-4727-93ec-147703d44b68-metrics-client-ca\") on node 
\"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:40:18 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:18.974144 2112 reconciler.go:399] "Volume detached for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a5a348d-9766-4727-93ec-147703d44b68-telemeter-trusted-ca-bundle\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:40:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:19.135993 2112 patch_prober.go:29] interesting pod/router-default-c776d6877-hc4dc container/router namespace/openshift-ingress: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]backend-proxy-http ok Feb 23 17:40:19 ip-10-0-136-68 kubenswrapper[2112]: [+]has-synced ok Feb 23 17:40:19 ip-10-0-136-68 kubenswrapper[2112]: [-]process-running failed: reason withheld Feb 23 17:40:19 ip-10-0-136-68 kubenswrapper[2112]: healthz check failed Feb 23 17:40:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:19.136053 2112 prober.go:114] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-c776d6877-hc4dc" podUID=4b453ab9-1ce4-45a1-b69d-c289991008f1 containerName="router" probeResult=failure output="HTTP probe failed with statuscode: 500" Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-39a23baf\x2dfee4\x2d4b3a\x2d839f\x2d6c0452a117b2-volume\x2dsubpaths-web\x2dconfig-alertmanager-9.mount: Succeeded. Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-39a23baf\x2dfee4\x2d4b3a\x2d839f\x2d6c0452a117b2-volume\x2dsubpaths-web\x2dconfig-alertmanager-9.mount: Consumed 0 CPU time Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: run-netns-8f093b03\x2ddba4\x2d401e\x2d9302\x2d36e7bb0b2da3.mount: Succeeded. 
Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: run-netns-8f093b03\x2ddba4\x2d401e\x2d9302\x2d36e7bb0b2da3.mount: Consumed 0 CPU time Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-201d0ba9a16d3f244f86f2085033ad6085fe53d6850b186d42c49024054b3842-userdata-shm.mount: Succeeded. Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-201d0ba9a16d3f244f86f2085033ad6085fe53d6850b186d42c49024054b3842-userdata-shm.mount: Consumed 0 CPU time Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-39a23baf\x2dfee4\x2d4b3a\x2d839f\x2d6c0452a117b2-volumes-kubernetes.io\x7esecret-secret\x2dalertmanager\x2dkube\x2drbac\x2dproxy.mount: Succeeded. Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-39a23baf\x2dfee4\x2d4b3a\x2d839f\x2d6c0452a117b2-volumes-kubernetes.io\x7esecret-secret\x2dalertmanager\x2dkube\x2drbac\x2dproxy.mount: Consumed 0 CPU time Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-39a23baf\x2dfee4\x2d4b3a\x2d839f\x2d6c0452a117b2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmlv6w.mount: Succeeded. Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-39a23baf\x2dfee4\x2d4b3a\x2d839f\x2d6c0452a117b2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmlv6w.mount: Consumed 0 CPU time Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-39a23baf\x2dfee4\x2d4b3a\x2d839f\x2d6c0452a117b2-volumes-kubernetes.io\x7eprojected-tls\x2dassets.mount: Succeeded. Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-39a23baf\x2dfee4\x2d4b3a\x2d839f\x2d6c0452a117b2-volumes-kubernetes.io\x7eprojected-tls\x2dassets.mount: Consumed 0 CPU time Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-39a23baf\x2dfee4\x2d4b3a\x2d839f\x2d6c0452a117b2-volumes-kubernetes.io\x7esecret-config\x2dvolume.mount: Succeeded. 
Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-39a23baf\x2dfee4\x2d4b3a\x2d839f\x2d6c0452a117b2-volumes-kubernetes.io\x7esecret-config\x2dvolume.mount: Consumed 0 CPU time Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-39a23baf\x2dfee4\x2d4b3a\x2d839f\x2d6c0452a117b2-volumes-kubernetes.io\x7eempty\x2ddir-config\x2dout.mount: Succeeded. Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-39a23baf\x2dfee4\x2d4b3a\x2d839f\x2d6c0452a117b2-volumes-kubernetes.io\x7eempty\x2ddir-config\x2dout.mount: Consumed 0 CPU time Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-39a23baf\x2dfee4\x2d4b3a\x2d839f\x2d6c0452a117b2-volumes-kubernetes.io\x7esecret-secret\x2dalertmanager\x2dmain\x2dtls.mount: Succeeded. Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-39a23baf\x2dfee4\x2d4b3a\x2d839f\x2d6c0452a117b2-volumes-kubernetes.io\x7esecret-secret\x2dalertmanager\x2dmain\x2dtls.mount: Consumed 0 CPU time Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-39a23baf\x2dfee4\x2d4b3a\x2d839f\x2d6c0452a117b2-volumes-kubernetes.io\x7esecret-secret\x2dalertmanager\x2dkube\x2drbac\x2dproxy\x2dmetric.mount: Succeeded. Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-39a23baf\x2dfee4\x2d4b3a\x2d839f\x2d6c0452a117b2-volumes-kubernetes.io\x7esecret-secret\x2dalertmanager\x2dkube\x2drbac\x2dproxy\x2dmetric.mount: Consumed 0 CPU time Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-39a23baf\x2dfee4\x2d4b3a\x2d839f\x2d6c0452a117b2-volumes-kubernetes.io\x7esecret-web\x2dconfig.mount: Succeeded. Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-39a23baf\x2dfee4\x2d4b3a\x2d839f\x2d6c0452a117b2-volumes-kubernetes.io\x7esecret-web\x2dconfig.mount: Consumed 0 CPU time Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-39a23baf\x2dfee4\x2d4b3a\x2d839f\x2d6c0452a117b2-volumes-kubernetes.io\x7esecret-secret\x2dalertmanager\x2dmain\x2dproxy.mount: Succeeded. 
Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-39a23baf\x2dfee4\x2d4b3a\x2d839f\x2d6c0452a117b2-volumes-kubernetes.io\x7esecret-secret\x2dalertmanager\x2dmain\x2dproxy.mount: Consumed 0 CPU time Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-6d69ebaf8b0e742267fb7447beb2297957fdd99ca49d761adbc7fb2e5b5746f8-merged.mount: Succeeded. Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-6d69ebaf8b0e742267fb7447beb2297957fdd99ca49d761adbc7fb2e5b5746f8-merged.mount: Consumed 0 CPU time Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: run-netns-94ffd883\x2d9806\x2d4091\x2db7f1\x2dc2e08049ae3b.mount: Succeeded. Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: run-netns-94ffd883\x2d9806\x2d4091\x2db7f1\x2dc2e08049ae3b.mount: Consumed 0 CPU time Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: run-ipcns-94ffd883\x2d9806\x2d4091\x2db7f1\x2dc2e08049ae3b.mount: Succeeded. Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: run-ipcns-94ffd883\x2d9806\x2d4091\x2db7f1\x2dc2e08049ae3b.mount: Consumed 0 CPU time Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: run-utsns-94ffd883\x2d9806\x2d4091\x2db7f1\x2dc2e08049ae3b.mount: Succeeded. Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: run-utsns-94ffd883\x2d9806\x2d4091\x2db7f1\x2dc2e08049ae3b.mount: Consumed 0 CPU time Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-e14bdaaa341fca1606f9a0c3b7c37d9baae6f7be7df0452c453184abfdc328f8-userdata-shm.mount: Succeeded. Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-e14bdaaa341fca1606f9a0c3b7c37d9baae6f7be7df0452c453184abfdc328f8-userdata-shm.mount: Consumed 0 CPU time Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: run-netns-404137e4\x2d2392\x2d4ff4\x2d9680\x2d48f7cffed564.mount: Succeeded. 
Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: run-netns-404137e4\x2d2392\x2d4ff4\x2d9680\x2d48f7cffed564.mount: Consumed 0 CPU time Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: run-ipcns-404137e4\x2d2392\x2d4ff4\x2d9680\x2d48f7cffed564.mount: Succeeded. Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: run-ipcns-404137e4\x2d2392\x2d4ff4\x2d9680\x2d48f7cffed564.mount: Consumed 0 CPU time Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: run-utsns-404137e4\x2d2392\x2d4ff4\x2d9680\x2d48f7cffed564.mount: Succeeded. Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: run-utsns-404137e4\x2d2392\x2d4ff4\x2d9680\x2d48f7cffed564.mount: Consumed 0 CPU time Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-6cf6d0bcdfd8248c38237257eece74b58cb7a23a7a19d8609c2b0ab53a64a4cf-userdata-shm.mount: Succeeded. Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-6cf6d0bcdfd8248c38237257eece74b58cb7a23a7a19d8609c2b0ab53a64a4cf-userdata-shm.mount: Consumed 0 CPU time Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-0a5a348d\x2d9766\x2d4727\x2d93ec\x2d147703d44b68-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlrfsx.mount: Succeeded. Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-0a5a348d\x2d9766\x2d4727\x2d93ec\x2d147703d44b68-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlrfsx.mount: Consumed 0 CPU time Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-0a5a348d\x2d9766\x2d4727\x2d93ec\x2d147703d44b68-volumes-kubernetes.io\x7esecret-secret\x2dtelemeter\x2dclient.mount: Succeeded. 
Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-0a5a348d\x2d9766\x2d4727\x2d93ec\x2d147703d44b68-volumes-kubernetes.io\x7esecret-secret\x2dtelemeter\x2dclient.mount: Consumed 0 CPU time Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-0a5a348d\x2d9766\x2d4727\x2d93ec\x2d147703d44b68-volumes-kubernetes.io\x7esecret-secret\x2dtelemeter\x2dclient\x2dkube\x2drbac\x2dproxy\x2dconfig.mount: Succeeded. Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-0a5a348d\x2d9766\x2d4727\x2d93ec\x2d147703d44b68-volumes-kubernetes.io\x7esecret-secret\x2dtelemeter\x2dclient\x2dkube\x2drbac\x2dproxy\x2dconfig.mount: Consumed 0 CPU time Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-0a5a348d\x2d9766\x2d4727\x2d93ec\x2d147703d44b68-volumes-kubernetes.io\x7esecret-telemeter\x2dclient\x2dtls.mount: Succeeded. Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-0a5a348d\x2d9766\x2d4727\x2d93ec\x2d147703d44b68-volumes-kubernetes.io\x7esecret-telemeter\x2dclient\x2dtls.mount: Consumed 0 CPU time Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-aadb02e0\x2dde11\x2d41e9\x2d9dc0\x2d106e1d0fc545-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtz7zg.mount: Succeeded. Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-aadb02e0\x2dde11\x2d41e9\x2d9dc0\x2d106e1d0fc545-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtz7zg.mount: Consumed 0 CPU time Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-aadb02e0\x2dde11\x2d41e9\x2d9dc0\x2d106e1d0fc545-volumes-kubernetes.io\x7esecret-openshift\x2dstate\x2dmetrics\x2dkube\x2drbac\x2dproxy\x2dconfig.mount: Succeeded. 
Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-aadb02e0\x2dde11\x2d41e9\x2d9dc0\x2d106e1d0fc545-volumes-kubernetes.io\x7esecret-openshift\x2dstate\x2dmetrics\x2dkube\x2drbac\x2dproxy\x2dconfig.mount: Consumed 0 CPU time Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-aadb02e0\x2dde11\x2d41e9\x2d9dc0\x2d106e1d0fc545-volumes-kubernetes.io\x7esecret-openshift\x2dstate\x2dmetrics\x2dtls.mount: Succeeded. Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-aadb02e0\x2dde11\x2d41e9\x2d9dc0\x2d106e1d0fc545-volumes-kubernetes.io\x7esecret-openshift\x2dstate\x2dmetrics\x2dtls.mount: Consumed 0 CPU time Feb 23 17:40:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:19.562925 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n" event=&{ID:0a5a348d-9766-4727-93ec-147703d44b68 Type:ContainerDied Data:6cf6d0bcdfd8248c38237257eece74b58cb7a23a7a19d8609c2b0ab53a64a4cf} Feb 23 17:40:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:19.562968 2112 scope.go:115] "RemoveContainer" containerID="707cf8b2ba0348a632e68f119e5e2ab8bfae79a6a595b1ad26638fb0dec903d6" Feb 23 17:40:19 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:19.564483867Z" level=info msg="Removing container: 707cf8b2ba0348a632e68f119e5e2ab8bfae79a6a595b1ad26638fb0dec903d6" id=81ee98ec-f394-48fe-9f2b-f0cd04127793 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: Removed slice libcontainer container kubepods-burstable-pod0a5a348d_9766_4727_93ec_147703d44b68.slice. 
Feb 23 17:40:19 ip-10-0-136-68 systemd[1]: kubepods-burstable-pod0a5a348d_9766_4727_93ec_147703d44b68.slice: Consumed 1.129s CPU time Feb 23 17:40:19 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:19.590311141Z" level=info msg="Removed container 707cf8b2ba0348a632e68f119e5e2ab8bfae79a6a595b1ad26638fb0dec903d6: openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n/kube-rbac-proxy" id=81ee98ec-f394-48fe-9f2b-f0cd04127793 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:40:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:19.590543 2112 scope.go:115] "RemoveContainer" containerID="e01522fac3ca6ed1026bb56690190f33c4afdfe1e9dd982e1a8d7093c486dc24" Feb 23 17:40:19 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:19.595725187Z" level=info msg="Removing container: e01522fac3ca6ed1026bb56690190f33c4afdfe1e9dd982e1a8d7093c486dc24" id=47d4d179-19c5-489e-8f6b-a5570ed8f303 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:40:19 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:19.628036611Z" level=info msg="Removed container e01522fac3ca6ed1026bb56690190f33c4afdfe1e9dd982e1a8d7093c486dc24: openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n/reload" id=47d4d179-19c5-489e-8f6b-a5570ed8f303 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:40:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:19.628267 2112 scope.go:115] "RemoveContainer" containerID="ba7fdd14efd0a1bc6f3cdabf5734ffbff59544ea577ee2b427673408fc7e04d6" Feb 23 17:40:19 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:19.629014320Z" level=info msg="Removing container: ba7fdd14efd0a1bc6f3cdabf5734ffbff59544ea577ee2b427673408fc7e04d6" id=f17a15cd-a018-45b9-ba76-d2770807407d name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:40:19 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:19.648815822Z" level=info msg="Removed container ba7fdd14efd0a1bc6f3cdabf5734ffbff59544ea577ee2b427673408fc7e04d6: openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n/telemeter-client" 
id=f17a15cd-a018-45b9-ba76-d2770807407d name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:40:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:19.690740 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n] Feb 23 17:40:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:19.701453 2112 kubelet.go:2129] "SyncLoop REMOVE" source="api" pods=[openshift-monitoring/telemeter-client-5df7cd6cd7-cpr6n] Feb 23 17:40:19 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00568|connmgr|INFO|br-ex<->unix#1930: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:40:19 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00569|connmgr|INFO|br-ex<->unix#1933: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:40:20 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:20.120391 2112 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=0a5a348d-9766-4727-93ec-147703d44b68 path="/var/lib/kubelet/pods/0a5a348d-9766-4727-93ec-147703d44b68/volumes" Feb 23 17:40:20 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:20.121320 2112 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=39a23baf-fee4-4b3a-839f-6c0452a117b2 path="/var/lib/kubelet/pods/39a23baf-fee4-4b3a-839f-6c0452a117b2/volumes" Feb 23 17:40:20 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:20.122319 2112 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=aadb02e0-de11-41e9-9dc0-106e1d0fc545 path="/var/lib/kubelet/pods/aadb02e0-de11-41e9-9dc0-106e1d0fc545/volumes" Feb 23 17:40:23 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00570|connmgr|INFO|br-ex<->unix#1937: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:40:23 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00571|connmgr|INFO|br-ex<->unix#1940: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:40:25 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00572|connmgr|INFO|br-ex<->unix#1943: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:40:25 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00573|connmgr|INFO|br-ex<->unix#1946: 2 flow_mods 
in the last 0 s (2 adds) Feb 23 17:40:25 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00574|connmgr|INFO|br-ex<->unix#1949: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:40:25 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00575|connmgr|INFO|br-ex<->unix#1952: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:40:26 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00576|connmgr|INFO|br-ex<->unix#1955: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:40:26 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00577|connmgr|INFO|br-ex<->unix#1958: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:40:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:29.135689 2112 patch_prober.go:29] interesting pod/router-default-c776d6877-hc4dc container/router namespace/openshift-ingress: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]backend-proxy-http ok Feb 23 17:40:29 ip-10-0-136-68 kubenswrapper[2112]: [+]has-synced ok Feb 23 17:40:29 ip-10-0-136-68 kubenswrapper[2112]: [-]process-running failed: reason withheld Feb 23 17:40:29 ip-10-0-136-68 kubenswrapper[2112]: healthz check failed Feb 23 17:40:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:29.135737 2112 prober.go:114] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-c776d6877-hc4dc" podUID=4b453ab9-1ce4-45a1-b69d-c289991008f1 containerName="router" probeResult=failure output="HTTP probe failed with statuscode: 500" Feb 23 17:40:39 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:39.136163 2112 patch_prober.go:29] interesting pod/router-default-c776d6877-hc4dc container/router namespace/openshift-ingress: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]backend-proxy-http ok Feb 23 17:40:39 ip-10-0-136-68 kubenswrapper[2112]: [+]has-synced ok Feb 23 17:40:39 ip-10-0-136-68 kubenswrapper[2112]: [-]process-running failed: reason withheld Feb 23 17:40:39 ip-10-0-136-68 kubenswrapper[2112]: healthz check failed Feb 23 17:40:39 ip-10-0-136-68 
kubenswrapper[2112]: I0223 17:40:39.136220 2112 prober.go:114] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-c776d6877-hc4dc" podUID=4b453ab9-1ce4-45a1-b69d-c289991008f1 containerName="router" probeResult=failure output="HTTP probe failed with statuscode: 500" Feb 23 17:40:39 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:39.136283 2112 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-c776d6877-hc4dc" Feb 23 17:40:41 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00578|connmgr|INFO|br-ex<->unix#1968: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:40:49 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:49.135507 2112 patch_prober.go:29] interesting pod/router-default-c776d6877-hc4dc container/router namespace/openshift-ingress: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]backend-proxy-http ok Feb 23 17:40:49 ip-10-0-136-68 kubenswrapper[2112]: [+]has-synced ok Feb 23 17:40:49 ip-10-0-136-68 kubenswrapper[2112]: [-]process-running failed: reason withheld Feb 23 17:40:49 ip-10-0-136-68 kubenswrapper[2112]: healthz check failed Feb 23 17:40:49 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:49.135605 2112 prober.go:114] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-c776d6877-hc4dc" podUID=4b453ab9-1ce4-45a1-b69d-c289991008f1 containerName="router" probeResult=failure output="HTTP probe failed with statuscode: 500" Feb 23 17:40:52 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:52.475462 2112 scope.go:115] "RemoveContainer" containerID="303834e78e9a09538f555b2be457494ea3a3ca3f23cc9e15e84989cf8870c6f1" Feb 23 17:40:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:52.476277233Z" level=info msg="Removing container: 303834e78e9a09538f555b2be457494ea3a3ca3f23cc9e15e84989cf8870c6f1" id=22893025-5d92-46db-abb0-5fe1b13d3c41 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:40:52 ip-10-0-136-68 systemd[1]: 
var-lib-containers-storage-overlay-5ef628f01e4e277ee0206944d46065320048ee75c96e708da3fb30f35b15a7ac-merged.mount: Succeeded. Feb 23 17:40:52 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-5ef628f01e4e277ee0206944d46065320048ee75c96e708da3fb30f35b15a7ac-merged.mount: Consumed 0 CPU time Feb 23 17:40:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:52.524550308Z" level=info msg="Removed container 303834e78e9a09538f555b2be457494ea3a3ca3f23cc9e15e84989cf8870c6f1: openshift-operator-lifecycle-manager/collect-profiles-27952890-rh4pd/collect-profiles" id=22893025-5d92-46db-abb0-5fe1b13d3c41 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:40:52 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:52.524837 2112 scope.go:115] "RemoveContainer" containerID="f02d4b3d8160195247cb12e88628ee384fcd59e6d49d4e20a6ea0ab2962d862a" Feb 23 17:40:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:52.525598830Z" level=info msg="Removing container: f02d4b3d8160195247cb12e88628ee384fcd59e6d49d4e20a6ea0ab2962d862a" id=03b96afd-2767-421e-b6a9-f4d866ba2cf5 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:40:52 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-ef080f8ef3ffb6981ac883e951697b4961773ae74f3731ad7800a29deda1e647-merged.mount: Succeeded. 
Feb 23 17:40:52 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-ef080f8ef3ffb6981ac883e951697b4961773ae74f3731ad7800a29deda1e647-merged.mount: Consumed 0 CPU time Feb 23 17:40:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:52.571415135Z" level=info msg="Removed container f02d4b3d8160195247cb12e88628ee384fcd59e6d49d4e20a6ea0ab2962d862a: openshift-operator-lifecycle-manager/collect-profiles-27952875-qw7rw/collect-profiles" id=03b96afd-2767-421e-b6a9-f4d866ba2cf5 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:40:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:52.572696327Z" level=info msg="Stopping pod sandbox: 201d0ba9a16d3f244f86f2085033ad6085fe53d6850b186d42c49024054b3842" id=fc35522d-b846-4bdb-9117-3b29f54c6bcc name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:40:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:52.572730253Z" level=info msg="Stopped pod sandbox (already stopped): 201d0ba9a16d3f244f86f2085033ad6085fe53d6850b186d42c49024054b3842" id=fc35522d-b846-4bdb-9117-3b29f54c6bcc name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:40:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:52.572926874Z" level=info msg="Removing pod sandbox: 201d0ba9a16d3f244f86f2085033ad6085fe53d6850b186d42c49024054b3842" id=28ce97a3-bd62-4af6-89a8-35b6ad97412d name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 17:40:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:52.581147818Z" level=info msg="Removed pod sandbox: 201d0ba9a16d3f244f86f2085033ad6085fe53d6850b186d42c49024054b3842" id=28ce97a3-bd62-4af6-89a8-35b6ad97412d name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 17:40:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:52.581380456Z" level=info msg="Stopping pod sandbox: a935a237826dd4a0c291a0d7b4b2d6924e94cc35bd3969e76b0f75cf79426a1c" id=f98130cd-3902-41d9-94ed-8ac36ff65e89 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:40:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:52.581414967Z" 
level=info msg="Stopped pod sandbox (already stopped): a935a237826dd4a0c291a0d7b4b2d6924e94cc35bd3969e76b0f75cf79426a1c" id=f98130cd-3902-41d9-94ed-8ac36ff65e89 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:40:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:52.581602152Z" level=info msg="Removing pod sandbox: a935a237826dd4a0c291a0d7b4b2d6924e94cc35bd3969e76b0f75cf79426a1c" id=6578d6f1-7cfe-4b7d-b64b-52d7c001fc8d name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 17:40:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:52.589870496Z" level=info msg="Removed pod sandbox: a935a237826dd4a0c291a0d7b4b2d6924e94cc35bd3969e76b0f75cf79426a1c" id=6578d6f1-7cfe-4b7d-b64b-52d7c001fc8d name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 17:40:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:52.590108906Z" level=info msg="Stopping pod sandbox: 12b4663c669677ee5acc378c9a7d89e80575759d2391e219f2a7cc7636daae5d" id=403c3ebf-c0df-4151-b6bd-c4378ef10816 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:40:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:52.590133408Z" level=info msg="Stopped pod sandbox (already stopped): 12b4663c669677ee5acc378c9a7d89e80575759d2391e219f2a7cc7636daae5d" id=403c3ebf-c0df-4151-b6bd-c4378ef10816 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:40:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:52.590314651Z" level=info msg="Removing pod sandbox: 12b4663c669677ee5acc378c9a7d89e80575759d2391e219f2a7cc7636daae5d" id=8d95a9ad-741a-4fb2-b873-08a538f3c051 name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 17:40:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:52.599527257Z" level=info msg="Removed pod sandbox: 12b4663c669677ee5acc378c9a7d89e80575759d2391e219f2a7cc7636daae5d" id=8d95a9ad-741a-4fb2-b873-08a538f3c051 name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 17:40:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:52.599826018Z" level=info msg="Stopping pod sandbox: 
76b4c6af287a0c9916b553c28dcc45f25062a2f251bac86f961fd51c1b4860e3" id=906b59b3-2393-4328-837b-2ac79078beb3 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:40:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:52.599854358Z" level=info msg="Stopped pod sandbox (already stopped): 76b4c6af287a0c9916b553c28dcc45f25062a2f251bac86f961fd51c1b4860e3" id=906b59b3-2393-4328-837b-2ac79078beb3 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:40:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:52.600081243Z" level=info msg="Removing pod sandbox: 76b4c6af287a0c9916b553c28dcc45f25062a2f251bac86f961fd51c1b4860e3" id=47bca5f3-2429-4484-b7ce-207e463739d9 name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 17:40:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:52.607978647Z" level=info msg="Removed pod sandbox: 76b4c6af287a0c9916b553c28dcc45f25062a2f251bac86f961fd51c1b4860e3" id=47bca5f3-2429-4484-b7ce-207e463739d9 name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 17:40:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:52.608176151Z" level=info msg="Stopping pod sandbox: d11bfb33540b6412e1f36575ffa3f01a4537106b914968f1a1f25710d90aa161" id=6cf3d6ef-43f1-4c6c-828e-6b43a55ea78c name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:40:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:52.608200111Z" level=info msg="Stopped pod sandbox (already stopped): d11bfb33540b6412e1f36575ffa3f01a4537106b914968f1a1f25710d90aa161" id=6cf3d6ef-43f1-4c6c-828e-6b43a55ea78c name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:40:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:52.608371489Z" level=info msg="Removing pod sandbox: d11bfb33540b6412e1f36575ffa3f01a4537106b914968f1a1f25710d90aa161" id=c59832b1-e2ee-48d2-8d81-546aa276444b name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 17:40:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:52.616864159Z" level=info msg="Removed pod sandbox: 
d11bfb33540b6412e1f36575ffa3f01a4537106b914968f1a1f25710d90aa161" id=c59832b1-e2ee-48d2-8d81-546aa276444b name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 17:40:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:52.617105500Z" level=info msg="Stopping pod sandbox: e14bdaaa341fca1606f9a0c3b7c37d9baae6f7be7df0452c453184abfdc328f8" id=ca498e50-12a2-4a59-8209-d88afd615dec name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:40:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:52.617130909Z" level=info msg="Stopped pod sandbox (already stopped): e14bdaaa341fca1606f9a0c3b7c37d9baae6f7be7df0452c453184abfdc328f8" id=ca498e50-12a2-4a59-8209-d88afd615dec name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:40:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:52.617299150Z" level=info msg="Removing pod sandbox: e14bdaaa341fca1606f9a0c3b7c37d9baae6f7be7df0452c453184abfdc328f8" id=0c3095a5-c126-4216-844a-0666c3520f2c name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 17:40:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:52.624693823Z" level=info msg="Removed pod sandbox: e14bdaaa341fca1606f9a0c3b7c37d9baae6f7be7df0452c453184abfdc328f8" id=0c3095a5-c126-4216-844a-0666c3520f2c name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 17:40:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:52.624953900Z" level=info msg="Stopping pod sandbox: d3697aea95661cdf7a48775ec994ae0f82fbec2eb65e7995c5a795c4978d6b33" id=5cca5851-61dd-4c28-ba75-0379dbb4e7aa name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:40:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:52.624998439Z" level=info msg="Stopped pod sandbox (already stopped): d3697aea95661cdf7a48775ec994ae0f82fbec2eb65e7995c5a795c4978d6b33" id=5cca5851-61dd-4c28-ba75-0379dbb4e7aa name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:40:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:52.625224841Z" level=info msg="Removing pod sandbox: 
d3697aea95661cdf7a48775ec994ae0f82fbec2eb65e7995c5a795c4978d6b33" id=01d434ac-cd99-453e-95fe-8a2c6988bb7e name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 17:40:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:52.633596386Z" level=info msg="Removed pod sandbox: d3697aea95661cdf7a48775ec994ae0f82fbec2eb65e7995c5a795c4978d6b33" id=01d434ac-cd99-453e-95fe-8a2c6988bb7e name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 17:40:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:52.633841422Z" level=info msg="Stopping pod sandbox: 6cf6d0bcdfd8248c38237257eece74b58cb7a23a7a19d8609c2b0ab53a64a4cf" id=765a6e78-38ef-4135-ac68-908fbabf70f8 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:40:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:52.633867079Z" level=info msg="Stopped pod sandbox (already stopped): 6cf6d0bcdfd8248c38237257eece74b58cb7a23a7a19d8609c2b0ab53a64a4cf" id=765a6e78-38ef-4135-ac68-908fbabf70f8 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:40:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:52.634073848Z" level=info msg="Removing pod sandbox: 6cf6d0bcdfd8248c38237257eece74b58cb7a23a7a19d8609c2b0ab53a64a4cf" id=a3495a2d-9283-4396-8fac-07de278ab6a4 name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 17:40:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:40:52.642860231Z" level=info msg="Removed pod sandbox: 6cf6d0bcdfd8248c38237257eece74b58cb7a23a7a19d8609c2b0ab53a64a4cf" id=a3495a2d-9283-4396-8fac-07de278ab6a4 name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 17:40:56 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00579|connmgr|INFO|br-ex<->unix#1972: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:40:56 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00580|connmgr|INFO|br-int<->unix#1484: 896 flow_mods in the 40 s starting 41 s ago (235 adds, 419 deletes, 242 modifications) Feb 23 17:40:59 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:59.135711 2112 patch_prober.go:29] interesting pod/router-default-c776d6877-hc4dc 
container/router namespace/openshift-ingress: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]backend-proxy-http ok Feb 23 17:40:59 ip-10-0-136-68 kubenswrapper[2112]: [+]has-synced ok Feb 23 17:40:59 ip-10-0-136-68 kubenswrapper[2112]: [-]process-running failed: reason withheld Feb 23 17:40:59 ip-10-0-136-68 kubenswrapper[2112]: healthz check failed Feb 23 17:40:59 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:40:59.135780 2112 prober.go:114] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-c776d6877-hc4dc" podUID=4b453ab9-1ce4-45a1-b69d-c289991008f1 containerName="router" probeResult=failure output="HTTP probe failed with statuscode: 500" Feb 23 17:41:09 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:41:09.136000 2112 patch_prober.go:29] interesting pod/router-default-c776d6877-hc4dc container/router namespace/openshift-ingress: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Feb 23 17:41:09 ip-10-0-136-68 kubenswrapper[2112]: [+]has-synced ok Feb 23 17:41:09 ip-10-0-136-68 kubenswrapper[2112]: [-]process-running failed: reason withheld Feb 23 17:41:09 ip-10-0-136-68 kubenswrapper[2112]: healthz check failed Feb 23 17:41:09 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:41:09.136058 2112 prober.go:114] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-c776d6877-hc4dc" podUID=4b453ab9-1ce4-45a1-b69d-c289991008f1 containerName="router" probeResult=failure output="HTTP probe failed with statuscode: 500" Feb 23 17:41:09 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:41:09.747293998Z" level=warning msg="Found defunct process with PID 81827 (haproxy)" Feb 23 17:41:11 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00581|connmgr|INFO|br-ex<->unix#1981: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:41:11 ip-10-0-136-68 rpm-ostree[79979]: In idle state; will auto-exit in 62 seconds Feb 23 17:41:11 
ip-10-0-136-68 systemd[1]: rpm-ostreed.service: Succeeded. Feb 23 17:41:11 ip-10-0-136-68 systemd[1]: rpm-ostreed.service: Consumed 167ms CPU time Feb 23 17:41:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:41:19.135365 2112 patch_prober.go:29] interesting pod/router-default-c776d6877-hc4dc container/router namespace/openshift-ingress: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Feb 23 17:41:19 ip-10-0-136-68 kubenswrapper[2112]: [+]has-synced ok Feb 23 17:41:19 ip-10-0-136-68 kubenswrapper[2112]: [-]process-running failed: reason withheld Feb 23 17:41:19 ip-10-0-136-68 kubenswrapper[2112]: healthz check failed Feb 23 17:41:19 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:41:19.135416 2112 prober.go:114] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-c776d6877-hc4dc" podUID=4b453ab9-1ce4-45a1-b69d-c289991008f1 containerName="router" probeResult=failure output="HTTP probe failed with statuscode: 500" Feb 23 17:41:26 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00582|connmgr|INFO|br-ex<->unix#1985: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:41:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:41:29.135926 2112 patch_prober.go:29] interesting pod/router-default-c776d6877-hc4dc container/router namespace/openshift-ingress: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Feb 23 17:41:29 ip-10-0-136-68 kubenswrapper[2112]: [+]has-synced ok Feb 23 17:41:29 ip-10-0-136-68 kubenswrapper[2112]: [-]process-running failed: reason withheld Feb 23 17:41:29 ip-10-0-136-68 kubenswrapper[2112]: healthz check failed Feb 23 17:41:29 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:41:29.135989 2112 prober.go:114] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-c776d6877-hc4dc" podUID=4b453ab9-1ce4-45a1-b69d-c289991008f1 containerName="router" 
probeResult=failure output="HTTP probe failed with statuscode: 500" Feb 23 17:41:29 ip-10-0-136-68 systemd[1]: run-runc-9954ce136580565944733f8e22dcf8c686edec96413c6c4c5c8c32521ab25537-runc.ZKwWSu.mount: Succeeded. Feb 23 17:41:31 ip-10-0-136-68 systemd[1]: crio-cf4309e350b3417824d6ab6e5cdc58f589564a603d6449dc009f1b959274dc73.scope: Succeeded. Feb 23 17:41:31 ip-10-0-136-68 systemd[1]: crio-cf4309e350b3417824d6ab6e5cdc58f589564a603d6449dc009f1b959274dc73.scope: Consumed 8.016s CPU time Feb 23 17:41:31 ip-10-0-136-68 systemd[1]: crio-conmon-cf4309e350b3417824d6ab6e5cdc58f589564a603d6449dc009f1b959274dc73.scope: Succeeded. Feb 23 17:41:31 ip-10-0-136-68 systemd[1]: crio-conmon-cf4309e350b3417824d6ab6e5cdc58f589564a603d6449dc009f1b959274dc73.scope: Consumed 28ms CPU time Feb 23 17:41:31 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-5dc62a1969d17e1f49d9fe6e0e25dcce3ab33cb0352f0f6d3be8758fe6753ee9-merged.mount: Succeeded. Feb 23 17:41:31 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-5dc62a1969d17e1f49d9fe6e0e25dcce3ab33cb0352f0f6d3be8758fe6753ee9-merged.mount: Consumed 0 CPU time Feb 23 17:41:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:41:31.473729138Z" level=info msg="Stopped container cf4309e350b3417824d6ab6e5cdc58f589564a603d6449dc009f1b959274dc73: openshift-ingress/router-default-c776d6877-hc4dc/router" id=acb35fb9-d1a5-4773-bb20-02ca0b930b3b name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:41:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:41:31.474216976Z" level=info msg="Stopping pod sandbox: 13573c504bb25b1cde2dc0ff3b4dfa8179a4e57808537dec9fe9e87db117424f" id=8eb03185-64ec-4993-866b-c74ce398fe33 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:41:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:41:31.474422516Z" level=info msg="Got pod network &{Name:router-default-c776d6877-hc4dc Namespace:openshift-ingress ID:13573c504bb25b1cde2dc0ff3b4dfa8179a4e57808537dec9fe9e87db117424f 
UID:4b453ab9-1ce4-45a1-b69d-c289991008f1 NetNS:/var/run/netns/78fd1792-7a62-4ba0-a338-116add7a24cd Networks:[{Name:multus-cni-network Ifname:eth0}] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:41:31 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:41:31.474556835Z" level=info msg="Deleting pod openshift-ingress_router-default-c776d6877-hc4dc from CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:41:31 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00583|bridge|INFO|bridge br-int: deleted interface 13573c504bb25b1 on port 20 Feb 23 17:41:31 ip-10-0-136-68 kernel: device 13573c504bb25b1 left promiscuous mode Feb 23 17:41:31 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:41:31.696951 2112 generic.go:296] "Generic (PLEG): container finished" podID=4b453ab9-1ce4-45a1-b69d-c289991008f1 containerID="cf4309e350b3417824d6ab6e5cdc58f589564a603d6449dc009f1b959274dc73" exitCode=0 Feb 23 17:41:31 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:41:31.696991 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-c776d6877-hc4dc" event=&{ID:4b453ab9-1ce4-45a1-b69d-c289991008f1 Type:ContainerDied Data:cf4309e350b3417824d6ab6e5cdc58f589564a603d6449dc009f1b959274dc73} Feb 23 17:41:32 ip-10-0-136-68 crio[2062]: 2023-02-23T17:41:31Z [verbose] Del: openshift-ingress:router-default-c776d6877-hc4dc:4b453ab9-1ce4-45a1-b69d-c289991008f1:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","dns":{},"ipam":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxage":5,"logfile-maxbackups":5,"logfile-maxsize":100,"name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay"} Feb 23 17:41:32 ip-10-0-136-68 crio[2062]: I0223 17:41:31.611205 82619 ovs.go:90] Maximum command line arguments set to: 191102 Feb 23 17:41:32 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-d3282d66e77a3d3caad02ba3e2c021f4f51a12f390aef8faaf0f8a6527538804-merged.mount: Succeeded. 
Feb 23 17:41:32 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-d3282d66e77a3d3caad02ba3e2c021f4f51a12f390aef8faaf0f8a6527538804-merged.mount: Consumed 0 CPU time Feb 23 17:41:32 ip-10-0-136-68 systemd[1]: run-utsns-78fd1792\x2d7a62\x2d4ba0\x2da338\x2d116add7a24cd.mount: Succeeded. Feb 23 17:41:32 ip-10-0-136-68 systemd[1]: run-utsns-78fd1792\x2d7a62\x2d4ba0\x2da338\x2d116add7a24cd.mount: Consumed 0 CPU time Feb 23 17:41:32 ip-10-0-136-68 systemd[1]: run-ipcns-78fd1792\x2d7a62\x2d4ba0\x2da338\x2d116add7a24cd.mount: Succeeded. Feb 23 17:41:32 ip-10-0-136-68 systemd[1]: run-ipcns-78fd1792\x2d7a62\x2d4ba0\x2da338\x2d116add7a24cd.mount: Consumed 0 CPU time Feb 23 17:41:32 ip-10-0-136-68 systemd[1]: run-netns-78fd1792\x2d7a62\x2d4ba0\x2da338\x2d116add7a24cd.mount: Succeeded. Feb 23 17:41:32 ip-10-0-136-68 systemd[1]: run-netns-78fd1792\x2d7a62\x2d4ba0\x2da338\x2d116add7a24cd.mount: Consumed 0 CPU time Feb 23 17:41:32 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:41:32.220720174Z" level=info msg="Stopped pod sandbox: 13573c504bb25b1cde2dc0ff3b4dfa8179a4e57808537dec9fe9e87db117424f" id=8eb03185-64ec-4993-866b-c74ce398fe33 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:41:32 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:41:32.418882 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpxwd\" (UniqueName: \"kubernetes.io/projected/4b453ab9-1ce4-45a1-b69d-c289991008f1-kube-api-access-wpxwd\") pod \"4b453ab9-1ce4-45a1-b69d-c289991008f1\" (UID: \"4b453ab9-1ce4-45a1-b69d-c289991008f1\") " Feb 23 17:41:32 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:41:32.418946 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b453ab9-1ce4-45a1-b69d-c289991008f1-service-ca-bundle\") pod \"4b453ab9-1ce4-45a1-b69d-c289991008f1\" (UID: \"4b453ab9-1ce4-45a1-b69d-c289991008f1\") " Feb 23 17:41:32 ip-10-0-136-68 kubenswrapper[2112]: I0223 
17:41:32.418977 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4b453ab9-1ce4-45a1-b69d-c289991008f1-default-certificate\") pod \"4b453ab9-1ce4-45a1-b69d-c289991008f1\" (UID: \"4b453ab9-1ce4-45a1-b69d-c289991008f1\") " Feb 23 17:41:32 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:41:32.419005 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4b453ab9-1ce4-45a1-b69d-c289991008f1-stats-auth\") pod \"4b453ab9-1ce4-45a1-b69d-c289991008f1\" (UID: \"4b453ab9-1ce4-45a1-b69d-c289991008f1\") " Feb 23 17:41:32 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:41:32.419027 2112 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4b453ab9-1ce4-45a1-b69d-c289991008f1-metrics-certs\") pod \"4b453ab9-1ce4-45a1-b69d-c289991008f1\" (UID: \"4b453ab9-1ce4-45a1-b69d-c289991008f1\") " Feb 23 17:41:32 ip-10-0-136-68 kubenswrapper[2112]: W0223 17:41:32.419722 2112 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/4b453ab9-1ce4-45a1-b69d-c289991008f1/volumes/kubernetes.io~configmap/service-ca-bundle: clearQuota called, but quotas disabled Feb 23 17:41:32 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:41:32.420225 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b453ab9-1ce4-45a1-b69d-c289991008f1-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "4b453ab9-1ce4-45a1-b69d-c289991008f1" (UID: "4b453ab9-1ce4-45a1-b69d-c289991008f1"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 17:41:32 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:41:32.428911 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b453ab9-1ce4-45a1-b69d-c289991008f1-kube-api-access-wpxwd" (OuterVolumeSpecName: "kube-api-access-wpxwd") pod "4b453ab9-1ce4-45a1-b69d-c289991008f1" (UID: "4b453ab9-1ce4-45a1-b69d-c289991008f1"). InnerVolumeSpecName "kube-api-access-wpxwd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 17:41:32 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:41:32.428959 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b453ab9-1ce4-45a1-b69d-c289991008f1-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "4b453ab9-1ce4-45a1-b69d-c289991008f1" (UID: "4b453ab9-1ce4-45a1-b69d-c289991008f1"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:41:32 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:41:32.433901 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b453ab9-1ce4-45a1-b69d-c289991008f1-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "4b453ab9-1ce4-45a1-b69d-c289991008f1" (UID: "4b453ab9-1ce4-45a1-b69d-c289991008f1"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:41:32 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:41:32.439855 2112 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b453ab9-1ce4-45a1-b69d-c289991008f1-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "4b453ab9-1ce4-45a1-b69d-c289991008f1" (UID: "4b453ab9-1ce4-45a1-b69d-c289991008f1"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 17:41:32 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-13573c504bb25b1cde2dc0ff3b4dfa8179a4e57808537dec9fe9e87db117424f-userdata-shm.mount: Succeeded. Feb 23 17:41:32 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-13573c504bb25b1cde2dc0ff3b4dfa8179a4e57808537dec9fe9e87db117424f-userdata-shm.mount: Consumed 0 CPU time Feb 23 17:41:32 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-4b453ab9\x2d1ce4\x2d45a1\x2db69d\x2dc289991008f1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwpxwd.mount: Succeeded. Feb 23 17:41:32 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-4b453ab9\x2d1ce4\x2d45a1\x2db69d\x2dc289991008f1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwpxwd.mount: Consumed 0 CPU time Feb 23 17:41:32 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-4b453ab9\x2d1ce4\x2d45a1\x2db69d\x2dc289991008f1-volumes-kubernetes.io\x7esecret-metrics\x2dcerts.mount: Succeeded. Feb 23 17:41:32 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-4b453ab9\x2d1ce4\x2d45a1\x2db69d\x2dc289991008f1-volumes-kubernetes.io\x7esecret-metrics\x2dcerts.mount: Consumed 0 CPU time Feb 23 17:41:32 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-4b453ab9\x2d1ce4\x2d45a1\x2db69d\x2dc289991008f1-volumes-kubernetes.io\x7esecret-default\x2dcertificate.mount: Succeeded. Feb 23 17:41:32 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-4b453ab9\x2d1ce4\x2d45a1\x2db69d\x2dc289991008f1-volumes-kubernetes.io\x7esecret-default\x2dcertificate.mount: Consumed 0 CPU time Feb 23 17:41:32 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-4b453ab9\x2d1ce4\x2d45a1\x2db69d\x2dc289991008f1-volumes-kubernetes.io\x7esecret-stats\x2dauth.mount: Succeeded. 
Feb 23 17:41:32 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-4b453ab9\x2d1ce4\x2d45a1\x2db69d\x2dc289991008f1-volumes-kubernetes.io\x7esecret-stats\x2dauth.mount: Consumed 0 CPU time Feb 23 17:41:32 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:41:32.519287 2112 reconciler.go:399] "Volume detached for volume \"kube-api-access-wpxwd\" (UniqueName: \"kubernetes.io/projected/4b453ab9-1ce4-45a1-b69d-c289991008f1-kube-api-access-wpxwd\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:41:32 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:41:32.519320 2112 reconciler.go:399] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b453ab9-1ce4-45a1-b69d-c289991008f1-service-ca-bundle\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:41:32 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:41:32.519330 2112 reconciler.go:399] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4b453ab9-1ce4-45a1-b69d-c289991008f1-default-certificate\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:41:32 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:41:32.519339 2112 reconciler.go:399] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4b453ab9-1ce4-45a1-b69d-c289991008f1-stats-auth\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:41:32 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:41:32.519347 2112 reconciler.go:399] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4b453ab9-1ce4-45a1-b69d-c289991008f1-metrics-certs\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\"" Feb 23 17:41:32 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:41:32.700482 2112 kubelet.go:2157] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-c776d6877-hc4dc" event=&{ID:4b453ab9-1ce4-45a1-b69d-c289991008f1 Type:ContainerDied 
Data:13573c504bb25b1cde2dc0ff3b4dfa8179a4e57808537dec9fe9e87db117424f} Feb 23 17:41:32 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:41:32.700568 2112 scope.go:115] "RemoveContainer" containerID="cf4309e350b3417824d6ab6e5cdc58f589564a603d6449dc009f1b959274dc73" Feb 23 17:41:32 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:41:32.702329449Z" level=info msg="Removing container: cf4309e350b3417824d6ab6e5cdc58f589564a603d6449dc009f1b959274dc73" id=1cfce1b2-dca4-4405-becd-fbc79d46c4d2 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:41:32 ip-10-0-136-68 systemd[1]: Removed slice libcontainer container kubepods-burstable-pod4b453ab9_1ce4_45a1_b69d_c289991008f1.slice. Feb 23 17:41:32 ip-10-0-136-68 systemd[1]: kubepods-burstable-pod4b453ab9_1ce4_45a1_b69d_c289991008f1.slice: Consumed 8.044s CPU time Feb 23 17:41:32 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:41:32.725503 2112 kubelet.go:2135] "SyncLoop DELETE" source="api" pods=[openshift-ingress/router-default-c776d6877-hc4dc] Feb 23 17:41:32 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:41:32.729579 2112 kubelet.go:2129] "SyncLoop REMOVE" source="api" pods=[openshift-ingress/router-default-c776d6877-hc4dc] Feb 23 17:41:32 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:41:32.740338050Z" level=info msg="Removed container cf4309e350b3417824d6ab6e5cdc58f589564a603d6449dc009f1b959274dc73: openshift-ingress/router-default-c776d6877-hc4dc/router" id=1cfce1b2-dca4-4405-becd-fbc79d46c4d2 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:41:34 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:41:34.117883903Z" level=info msg="Stopping pod sandbox: 13573c504bb25b1cde2dc0ff3b4dfa8179a4e57808537dec9fe9e87db117424f" id=9ab48cee-b406-4448-83b1-c3963268be64 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:41:34 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:41:34.117934915Z" level=info msg="Stopped pod sandbox (already stopped): 13573c504bb25b1cde2dc0ff3b4dfa8179a4e57808537dec9fe9e87db117424f" 
id=9ab48cee-b406-4448-83b1-c3963268be64 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:41:34 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:41:34.119095 2112 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=4b453ab9-1ce4-45a1-b69d-c289991008f1 path="/var/lib/kubelet/pods/4b453ab9-1ce4-45a1-b69d-c289991008f1/volumes" Feb 23 17:41:41 ip-10-0-136-68 root[82731]: machine-config-daemon[79932]: drain complete Feb 23 17:41:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:41:41.051533 2112 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="can't remove non-existent inotify watch for: /etc/kubernetes/kubelet-ca.crt" Feb 23 17:41:41 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:41:41.051783 2112 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 23 17:41:41 ip-10-0-136-68 systemd[1]: Reloading. Feb 23 17:41:41 ip-10-0-136-68 coreos-platform-chrony: /run/coreos-platform-chrony.conf already exists; skipping Feb 23 17:41:41 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00584|connmgr|INFO|br-ex<->unix#1994: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:41:41 ip-10-0-136-68 systemd[1]: /usr/lib/systemd/system/bootupd.service:22: Unknown lvalue 'ProtectHostname' in section 'Service' Feb 23 17:41:41 ip-10-0-136-68 systemd[1]: Reloading. Feb 23 17:41:41 ip-10-0-136-68 coreos-platform-chrony: /run/coreos-platform-chrony.conf already exists; skipping Feb 23 17:41:41 ip-10-0-136-68 systemd[1]: /usr/lib/systemd/system/bootupd.service:22: Unknown lvalue 'ProtectHostname' in section 'Service' Feb 23 17:41:41 ip-10-0-136-68 systemd[1]: kubelet.service: Current command vanished from the unit file, execution of the command list won't be resumed. Feb 23 17:41:41 ip-10-0-136-68 systemd[1]: Reloading. 
Feb 23 17:41:41 ip-10-0-136-68 coreos-platform-chrony: /run/coreos-platform-chrony.conf already exists; skipping Feb 23 17:41:41 ip-10-0-136-68 systemd[1]: /usr/lib/systemd/system/bootupd.service:22: Unknown lvalue 'ProtectHostname' in section 'Service' Feb 23 17:41:41 ip-10-0-136-68 systemd[1]: Reloading. Feb 23 17:41:41 ip-10-0-136-68 coreos-platform-chrony: /run/coreos-platform-chrony.conf already exists; skipping Feb 23 17:41:41 ip-10-0-136-68 systemd[1]: /usr/lib/systemd/system/bootupd.service:22: Unknown lvalue 'ProtectHostname' in section 'Service' Feb 23 17:41:42 ip-10-0-136-68 systemd[1]: Reloading. Feb 23 17:41:42 ip-10-0-136-68 coreos-platform-chrony: /run/coreos-platform-chrony.conf already exists; skipping Feb 23 17:41:42 ip-10-0-136-68 systemd[1]: /usr/lib/systemd/system/bootupd.service:22: Unknown lvalue 'ProtectHostname' in section 'Service' Feb 23 17:41:51 ip-10-0-136-68 systemd[1]: Starting rpm-ostree System Management Daemon... Feb 23 17:41:51 ip-10-0-136-68 rpm-ostree[83055]: Reading config file '/etc/rpm-ostreed.conf' Feb 23 17:41:51 ip-10-0-136-68 rpm-ostree[83055]: In idle state; will auto-exit in 61 seconds Feb 23 17:41:51 ip-10-0-136-68 systemd[1]: Started rpm-ostree System Management Daemon. 
Feb 23 17:41:51 ip-10-0-136-68 rpm-ostree[83055]: client(id:machine-config-operator dbus:1.638 unit:crio-42b96157d17f74f25f71d7357da12ff08910e955866411f5c7c62aebd3027126.scope uid:0) added; new total=1 Feb 23 17:41:51 ip-10-0-136-68 rpm-ostree[83055]: Locked sysroot Feb 23 17:41:51 ip-10-0-136-68 rpm-ostree[83055]: Initiated txn Rebase for client(id:machine-config-operator dbus:1.638 unit:crio-42b96157d17f74f25f71d7357da12ff08910e955866411f5c7c62aebd3027126.scope uid:0): /org/projectatomic/rpmostree1/rhcos Feb 23 17:41:51 ip-10-0-136-68 rpm-ostree[83055]: Process [pid: 83051 uid: 0 unit: crio-42b96157d17f74f25f71d7357da12ff08910e955866411f5c7c62aebd3027126.scope] connected to transaction progress Feb 23 17:41:51 ip-10-0-136-68 kernel: EXT4-fs (nvme0n1p3): re-mounted. Opts: Feb 23 17:41:51 ip-10-0-136-68 rpm-ostree[83055]: Fetching ostree-unverified-image:docker://registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:171f6809871ab783c4c8996143a0466892bce38153750a4d2d61125d943cdff5 Feb 23 17:41:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:41:52.646649730Z" level=info msg="Stopping pod sandbox: 13573c504bb25b1cde2dc0ff3b4dfa8179a4e57808537dec9fe9e87db117424f" id=b1ef96f6-d2d6-4e9e-b2f9-0deaf33d15ae name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:41:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:41:52.646712964Z" level=info msg="Stopped pod sandbox (already stopped): 13573c504bb25b1cde2dc0ff3b4dfa8179a4e57808537dec9fe9e87db117424f" id=b1ef96f6-d2d6-4e9e-b2f9-0deaf33d15ae name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 17:41:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:41:52.647267203Z" level=info msg="Removing pod sandbox: 13573c504bb25b1cde2dc0ff3b4dfa8179a4e57808537dec9fe9e87db117424f" id=0a0848a8-9627-4679-aef2-1a98a7539d31 name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 17:41:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:41:52.654866099Z" level=info msg="Removed pod sandbox: 
13573c504bb25b1cde2dc0ff3b4dfa8179a4e57808537dec9fe9e87db117424f" id=0a0848a8-9627-4679-aef2-1a98a7539d31 name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 17:41:53 ip-10-0-136-68 rpm-ostree[83055]: layers stored: 1 needed: 50 (1.0 GB) Feb 23 17:41:56 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00585|connmgr|INFO|br-ex<->unix#1998: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:41:56 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00586|connmgr|INFO|br-int<->unix#1484: 40 flow_mods in the 30 s starting 52 s ago (5 adds, 35 deletes) Feb 23 17:42:11 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00587|connmgr|INFO|br-ex<->unix#2007: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:42:26 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00588|connmgr|INFO|br-ex<->unix#2011: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:42:27 ip-10-0-136-68 kernel: SELinux: Context system_u:object_r:systemd_network_generator_exec_t:s0 is not valid (left unmapped). Feb 23 17:42:27 ip-10-0-136-68 kernel: SELinux: Context system_u:object_r:systemd_socket_proxyd_exec_t:s0 is not valid (left unmapped). Feb 23 17:42:31 ip-10-0-136-68 kernel: SELinux: Context system_u:object_r:glusterd_exec_t:s0 is not valid (left unmapped). Feb 23 17:42:41 ip-10-0-136-68 kernel: SELinux: Context system_u:object_r:NetworkManager_dispatcher_exec_t:s0 is not valid (left unmapped). Feb 23 17:42:41 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00589|connmgr|INFO|br-ex<->unix#2020: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:42:42 ip-10-0-136-68 kernel: SELinux: Context system_u:object_r:rpmdb_exec_t:s0 is not valid (left unmapped). Feb 23 17:42:42 ip-10-0-136-68 kernel: SELinux: Context system_u:object_r:NetworkManager_dispatcher_chronyc_script_t:s0 is not valid (left unmapped). Feb 23 17:42:42 ip-10-0-136-68 kernel: SELinux: Context system_u:object_r:NetworkManager_dispatcher_cloud_script_t:s0 is not valid (left unmapped). 
Feb 23 17:42:42 ip-10-0-136-68 kernel: SELinux: Context system_u:object_r:NetworkManager_dispatcher_iscsid_script_t:s0 is not valid (left unmapped). Feb 23 17:42:42 ip-10-0-136-68 kernel: SELinux: Context system_u:object_r:NetworkManager_dispatcher_script_t:s0 is not valid (left unmapped). Feb 23 17:42:42 ip-10-0-136-68 kernel: SELinux: Context system_u:object_r:stalld_unit_file_t:s0 is not valid (left unmapped). Feb 23 17:42:43 ip-10-0-136-68 kernel: SELinux: Context system_u:object_r:stalld_exec_t:s0 is not valid (left unmapped). Feb 23 17:42:43 ip-10-0-136-68 kernel: SELinux: Context system_u:object_r:dbusd_unit_file_t:s0 is not valid (left unmapped). Feb 23 17:42:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:42:46.033963396Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be64c744b92d1b1df463d84d8277c38069f3ce4e8e95ce84d4a6ffac6dc53710" id=85c71107-d7f0-4bcb-a44d-605109631189 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:42:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:42:46.034159398Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:884319081b9650108dd2a638921522ef3df145e924d566325c975ca28709af4c,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be64c744b92d1b1df463d84d8277c38069f3ce4e8e95ce84d4a6ffac6dc53710],Size_:350038335,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=85c71107-d7f0-4bcb-a44d-605109631189 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:42:48 ip-10-0-136-68 rpm-ostree[83055]: Librepo version: 1.14.2 with CURL_GLOBAL_ACK_EINTR support (libcurl/7.61.1 OpenSSL/1.1.1k zlib/1.2.11 brotli/1.0.6 libidn2/2.2.0 libpsl/0.20.2 (+libidn2/2.2.0) libssh/0.9.6/openssl/zlib nghttp2/1.33.0) Feb 23 17:42:48 ip-10-0-136-68 rpm-ostree[83055]: warning: Found SQLITE rpmdb.sqlite database while attempting bdb backend: using sqlite backend. 
Feb 23 17:42:48 ip-10-0-136-68 rpm-ostree[83612]: warning: Found SQLITE rpmdb.sqlite database while attempting bdb backend: using sqlite backend. Feb 23 17:42:50 ip-10-0-136-68 rpm-ostree[83055]: Preparing pkg txn; enabled repos: ['coreos-extensions'] solvables: 58 Feb 23 17:42:52 ip-10-0-136-68 rpm-ostree[83055]: Imported 4 pkgs Feb 23 17:42:53 ip-10-0-136-68 rpm-ostree[83055]: Executed %post for kernel-rt-core in 134 ms Feb 23 17:42:56 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00590|connmgr|INFO|br-ex<->unix#2024: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:42:58 ip-10-0-136-68 rpm-ostree[83055]: Executed %post for kernel-rt-modules in 5311 ms Feb 23 17:43:04 ip-10-0-136-68 rpm-ostree[83055]: Executed %post for kernel-rt-modules-extra in 5463 ms Feb 23 17:43:09 ip-10-0-136-68 rpm-ostree[83055]: Executed %post for kernel-rt-kvm in 5473 ms Feb 23 17:43:09 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[83962]: cp: cannot create regular file '/boot/vmlinuz-5.14.0-266.rt14.266.el9.x86_64': No such file or directory Feb 23 17:43:09 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[83965]: cp: cannot create regular file '/boot/System.map-5.14.0-266.rt14.266.el9.x86_64': No such file or directory Feb 23 17:43:09 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[83968]: cp: cannot create regular file '/boot/config-5.14.0-266.rt14.266.el9.x86_64': No such file or directory Feb 23 17:43:09 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[83972]: cp: cannot create regular file '/boot/.vmlinuz-5.14.0-266.rt14.266.el9.x86_64.hmac': No such file or directory Feb 23 17:43:09 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[83975]: ln: failed to create symbolic link '/boot/symvers-5.14.0-266.rt14.266.el9.x86_64.gz': No such file or directory Feb 23 17:43:09 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[83992]: grub2-probe: error: failed to get canonical path of `tmpfs'. 
Feb 23 17:43:09 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[83991]: No path or device is specified. Feb 23 17:43:09 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[83991]: Usage: grub2-probe [OPTION...] [OPTION]... [PATH|DEVICE] Feb 23 17:43:09 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[83991]: Try 'grub2-probe --help' or 'grub2-probe --usage' for more information. Feb 23 17:43:09 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[83993]: grub2-mkrelpath: error: failed to get canonical path of `/boot/vmlinuz-5.14.0-266.rt14.266.el9.x86_64'. Feb 23 17:43:09 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[83994]: dirname: missing operand Feb 23 17:43:09 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[83994]: Try 'dirname --help' for more information. Feb 23 17:43:11 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00591|connmgr|INFO|br-ex<->unix#2033: 2 flow_mods in the last 0 s (2 adds) Feb 23 17:43:15 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[84058]: /usr/bin/dracut: line 1054: /sys/module/firmware_class/parameters/path: No such file or directory Feb 23 17:43:16 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[84599]: mknod: /var/tmp/dracut.A1YWTE/initramfs/dev/null: Operation not permitted Feb 23 17:43:16 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[84600]: mknod: /var/tmp/dracut.A1YWTE/initramfs/dev/kmsg: Operation not permitted Feb 23 17:43:16 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[84601]: mknod: /var/tmp/dracut.A1YWTE/initramfs/dev/console: Operation not permitted Feb 23 17:43:16 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[84602]: mknod: /var/tmp/dracut.A1YWTE/initramfs/dev/random: Operation not permitted Feb 23 17:43:16 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[84603]: mknod: /var/tmp/dracut.A1YWTE/initramfs/dev/urandom: Operation not permitted Feb 23 17:43:17 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[84798]: rpm-ostree-systemctl: Ignored non-preset command: -q --root 
/var/tmp/dracut.A1YWTE/initramfs add-wants emergency.target systemd-vconsole-setup.service Feb 23 17:43:17 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[84799]: rpm-ostree-systemctl: Ignored non-preset command: -q --root /var/tmp/dracut.A1YWTE/initramfs add-wants rescue.target systemd-vconsole-setup.service Feb 23 17:43:17 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[84806]: rpm-ostree-systemctl: Ignored non-preset command: -q --root /var/tmp/dracut.A1YWTE/initramfs add-wants systemd-ask-password-console.service systemd-vconsole-setup.service Feb 23 17:43:17 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[84808]: rpm-ostree-systemctl: Ignored non-preset command: -q --root /var/tmp/dracut.A1YWTE/initramfs set-default multi-user.target Feb 23 17:43:17 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[84842]: mknod: /var/tmp/dracut.A1YWTE/initramfs/dev/random: Operation not permitted Feb 23 17:43:17 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[84040]: dracut: Cannot create /dev/random Feb 23 17:43:17 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[84040]: dracut: To create an initramfs with fips support, dracut has to run as root Feb 23 17:43:18 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[84946]: rpm-ostree-systemctl: Ignored non-preset command: -q --root /var/tmp/dracut.A1YWTE/initramfs set-default initrd.target Feb 23 17:43:19 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[85178]: rpm-ostree-systemctl: Ignored non-preset command: -q --root /var/tmp/dracut.A1YWTE/initramfs enable dbus-broker.service Feb 23 17:43:20 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[85346]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.A1YWTE/initramfs add-requires ignition-files.service rhcos-afterburn-checkin.service Feb 23 17:43:21 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[85366]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.A1YWTE/initramfs add-requires 
ignition-complete.target coreos-post-ignition-checks.service
Feb 23 17:43:21 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[85373]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.A1YWTE/initramfs add-requires ignition-complete.target coreos-teardown-initramfs.service
Feb 23 17:43:21 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[85379]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.A1YWTE/initramfs add-requires ignition-diskful.target coreos-gpt-setup.service
Feb 23 17:43:21 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[85383]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.A1YWTE/initramfs add-requires ignition-complete.target coreos-kargs-reboot.service
Feb 23 17:43:21 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[85388]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.A1YWTE/initramfs add-requires ignition-diskful.target coreos-boot-edit.service
Feb 23 17:43:21 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[85392]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.A1YWTE/initramfs add-requires ignition-diskful.target coreos-ignition-unique-boot.service
Feb 23 17:43:21 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[85397]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.A1YWTE/initramfs add-requires ignition-diskful.target coreos-unique-boot.service
Feb 23 17:43:21 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[85401]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.A1YWTE/initramfs add-requires ignition-complete.target coreos-ignition-setup-user.service
Feb 23 17:43:21 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[85425]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.A1YWTE/initramfs add-requires initrd-switch-root.target coreos-live-unmount-tmpfs-var.service
Feb 23 17:43:21 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[85429]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.A1YWTE/initramfs add-requires initrd-root-fs.target coreos-livepxe-rootfs.service
Feb 23 17:43:21 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[85433]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.A1YWTE/initramfs add-requires ignition-complete.target coreos-live-clear-sssd-cache.service
Feb 23 17:43:21 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[85438]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.A1YWTE/initramfs add-requires default.target coreos-liveiso-persist-osmet.service
Feb 23 17:43:21 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[85443]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.A1YWTE/initramfs add-requires default.target coreos-livepxe-persist-osmet.service
Feb 23 17:43:21 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[85454]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.A1YWTE/initramfs add-requires initrd.target coreos-propagate-multipath-conf.service
Feb 23 17:43:21 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00592|connmgr|INFO|br-int<->unix#1484: 1 flow_mods 10 s ago (1 adds)
Feb 23 17:43:21 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[85477]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.A1YWTE/initramfs add-requires ignition-complete.target coreos-enable-network.service
Feb 23 17:43:21 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[85482]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.A1YWTE/initramfs add-requires ignition-complete.target coreos-copy-firstboot-network.service
Feb 23 17:43:21 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[85518]: rpm-ostree-systemctl: Ignored non-preset command: -q --root /var/tmp/dracut.A1YWTE/initramfs enable nm-initrd.service
Feb 23 17:43:22 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[85574]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.A1YWTE/initramfs add-requires ignition-complete.target ignition-ostree-mount-var.service
Feb 23 17:43:22 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[85579]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.A1YWTE/initramfs add-requires ignition-complete.target ignition-ostree-populate-var.service
Feb 23 17:43:22 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[85592]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.A1YWTE/initramfs add-requires ignition-complete.target ignition-ostree-transposefs-detect.service
Feb 23 17:43:22 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[85595]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.A1YWTE/initramfs add-requires ignition-complete.target ignition-ostree-transposefs-save.service
Feb 23 17:43:22 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[85598]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.A1YWTE/initramfs add-requires ignition-complete.target ignition-ostree-transposefs-restore.service
Feb 23 17:43:22 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[85603]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.A1YWTE/initramfs add-requires ignition-diskful.target ignition-ostree-mount-firstboot-sysroot.service
Feb 23 17:43:22 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[85606]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.A1YWTE/initramfs add-requires ignition-diskful.target ignition-ostree-uuid-boot.service
Feb 23 17:43:22 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[85612]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.A1YWTE/initramfs add-requires ignition-diskful.target ignition-ostree-uuid-root.service
Feb 23 17:43:22 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[85618]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.A1YWTE/initramfs add-requires ignition-diskful-subsequent.target ignition-ostree-mount-subsequent-sysroot.service
Feb 23 17:43:22 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[85625]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.A1YWTE/initramfs add-requires ignition-complete.target ignition-ostree-growfs.service
Feb 23 17:43:22 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[85630]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.A1YWTE/initramfs add-requires ignition-complete.target ignition-ostree-check-rootfs-size.service
Feb 23 17:43:23 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[85797]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.A1YWTE/initramfs add-requires ignition-diskful.target rhcos-fips.service
Feb 23 17:43:23 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[85798]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.A1YWTE/initramfs add-requires ignition-diskful.target rhcos-fips-finish.service
Feb 23 17:43:23 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[85806]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.A1YWTE/initramfs add-requires ignition-complete.target rhcos-fail-boot-for-legacy-luks-config.service
Feb 23 17:43:23 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[85880]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.A1YWTE/initramfs add-requires sysinit.target coreos-check-kernel.service
Feb 23 17:43:23 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[85907]: rpm-ostree-systemctl: Ignored non-preset command: -q --root /var/tmp/dracut.A1YWTE/initramfs add-wants cryptsetup.target clevis-luks-askpass.path
Feb 23 17:43:23 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[85949]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.A1YWTE/initramfs add-requires initrd.target coreos-touch-run-agetty.service
Feb 23 17:43:26 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00593|connmgr|INFO|br-ex<->unix#2037: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:43:32 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[86966]: rpm-ostree-systemctl: Ignored non-preset command: -q --root /var/tmp/dracut.A1YWTE/initramfs enable multipathd-configure.service
Feb 23 17:43:32 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[86970]: rpm-ostree-systemctl: Ignored non-preset command: -q --root /var/tmp/dracut.A1YWTE/initramfs enable multipathd.service
Feb 23 17:43:35 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[87440]: rpm-ostree-systemctl: Ignored non-preset command: -q --root /var/tmp/dracut.A1YWTE/initramfs add-wants initrd.target dracut-cmdline.service
Feb 23 17:43:35 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[87444]: rpm-ostree-systemctl: Ignored non-preset command: -q --root /var/tmp/dracut.A1YWTE/initramfs add-wants initrd.target dracut-cmdline-ask.service
Feb 23 17:43:35 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[87448]: rpm-ostree-systemctl: Ignored non-preset command: -q --root /var/tmp/dracut.A1YWTE/initramfs add-wants initrd.target dracut-initqueue.service
Feb 23 17:43:35 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[87453]: rpm-ostree-systemctl: Ignored non-preset command: -q --root /var/tmp/dracut.A1YWTE/initramfs add-wants initrd.target dracut-mount.service
Feb 23 17:43:35 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[87456]: rpm-ostree-systemctl: Ignored non-preset command: -q --root /var/tmp/dracut.A1YWTE/initramfs add-wants initrd.target dracut-pre-mount.service
Feb 23 17:43:35 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[87459]: rpm-ostree-systemctl: Ignored non-preset command: -q --root /var/tmp/dracut.A1YWTE/initramfs add-wants initrd.target dracut-pre-pivot.service
Feb 23 17:43:35 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[87463]: rpm-ostree-systemctl: Ignored non-preset command: -q --root /var/tmp/dracut.A1YWTE/initramfs add-wants initrd.target dracut-pre-trigger.service
Feb 23 17:43:35 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[87467]: rpm-ostree-systemctl: Ignored non-preset command: -q --root /var/tmp/dracut.A1YWTE/initramfs add-wants initrd.target dracut-pre-udev.service
Feb 23 17:43:35 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[87536]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.A1YWTE/initramfs add-wants emergency.target ignition-virtio-dump-journal.service
Feb 23 17:43:41 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00594|connmgr|INFO|br-ex<->unix#2046: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:43:56 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00595|connmgr|INFO|br-ex<->unix#2050: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:44:11 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00596|connmgr|INFO|br-ex<->unix#2059: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:44:21 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00597|connmgr|INFO|br-int<->unix#1484: 93 flow_mods in the last 58 s (23 adds, 28 deletes, 42 modifications)
Feb 23 17:44:26 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00598|connmgr|INFO|br-ex<->unix#2063: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:44:41 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00599|connmgr|INFO|br-ex<->unix#2072: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:44:45 ip-10-0-136-68 rpm-ostree[83055]: Executed %posttrans for kernel-rt-core in 95837 ms
Feb 23 17:44:45 ip-10-0-136-68 rpm-ostree[83055]: Executed %posttrans for kernel-rt-modules in 107 ms
Feb 23 17:44:45 ip-10-0-136-68 rpm-ostree[83055]: Executed %transfiletriggerin(shared-mime-info) for usr/share/mime in 110 ms; 824 matched files
Feb 23 17:44:45 ip-10-0-136-68 rpm-ostree[83055]: No files matched %transfiletriggerin(usr/share/info) for info
Feb 23 17:44:45 ip-10-0-136-68 rpm-ostree[83055]: No files matched %transfiletriggerin(usr/lib64/gio/modules) for glib2
Feb 23 17:44:45 ip-10-0-136-68 rpm-ostree[83055]: No files matched %transfiletriggerin(usr/share/glib-2.0/schemas) for glib2
Feb 23 17:44:45 ip-10-0-136-68 rpm-ostree[83055]: No files matched %transfiletriggerin(lib) for glibc-common
Feb 23 17:44:45 ip-10-0-136-68 rpm-ostree[83055]: No files matched %transfiletriggerin(lib64) for glibc-common
Feb 23 17:44:45 ip-10-0-136-68 rpm-ostree[83055]: Executed %transfiletriggerin(glibc-common) for lib, lib64, usr/lib, usr/lib64 in 228 ms; 14702 matched files
Feb 23 17:44:45 ip-10-0-136-68 rpm-ostree[83055]: Executed %transfiletriggerin(systemd-udev) for usr/lib/udev/hwdb.d in 100 ms; 29 matched files
Feb 23 17:44:46 ip-10-0-136-68 rpm-ostree[83055]: Executed %transfiletriggerin(systemd-udev) for usr/lib/udev/rules.d in 101 ms; 79 matched files
Feb 23 17:44:46 ip-10-0-136-68 rpm-ostree[83055]: sanitycheck(/usr/bin/true) successful
Feb 23 17:44:46 ip-10-0-136-68 rpm-ostree[83055]: Regenerating rpmdb for target
Feb 23 17:44:56 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00600|connmgr|INFO|br-ex<->unix#2076: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:44:59 ip-10-0-136-68 rpm-ostree[100459]: /usr/bin/dracut: line 1054: /sys/module/firmware_class/parameters/path: No such file or directory
Feb 23 17:44:59 ip-10-0-136-68 rpm-ostree[100441]: dracut: Executing: /usr/bin/dracut --reproducible -v --add ostree --tmpdir=/tmp/dracut -f /tmp/initramfs.img --no-hostonly --kver 5.14.0-266.rt14.266.el9.x86_64
Feb 23 17:44:59 ip-10-0-136-68 rpm-ostree[100441]: dracut: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Feb 23 17:44:59 ip-10-0-136-68 rpm-ostree[100441]: dracut: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Feb 23 17:44:59 ip-10-0-136-68 rpm-ostree[100441]: dracut: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Feb 23 17:44:59 ip-10-0-136-68 rpm-ostree[100441]: dracut: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Feb 23 17:44:59 ip-10-0-136-68 rpm-ostree[100441]: dracut: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Feb 23 17:44:59 ip-10-0-136-68 rpm-ostree[100441]: dracut: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Feb 23 17:44:59 ip-10-0-136-68 rpm-ostree[100441]: dracut: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Feb 23 17:44:59 ip-10-0-136-68 rpm-ostree[100441]: dracut: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Feb 23 17:44:59 ip-10-0-136-68 rpm-ostree[100940]: mknod: /tmp/dracut/dracut.KXG0dg/initramfs/dev/null: Operation not permitted
Feb 23 17:44:59 ip-10-0-136-68 rpm-ostree[100941]: mknod: /tmp/dracut/dracut.KXG0dg/initramfs/dev/kmsg: Operation not permitted
Feb 23 17:44:59 ip-10-0-136-68 rpm-ostree[100942]: mknod: /tmp/dracut/dracut.KXG0dg/initramfs/dev/console: Operation not permitted
Feb 23 17:44:59 ip-10-0-136-68 rpm-ostree[100943]: mknod: /tmp/dracut/dracut.KXG0dg/initramfs/dev/random: Operation not permitted
Feb 23 17:44:59 ip-10-0-136-68 rpm-ostree[100944]: mknod: /tmp/dracut/dracut.KXG0dg/initramfs/dev/urandom: Operation not permitted
Feb 23 17:44:59 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: bash ***
Feb 23 17:44:59 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: systemd ***
Feb 23 17:44:59 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: fips ***
Feb 23 17:44:59 ip-10-0-136-68 rpm-ostree[101187]: mknod: /tmp/dracut/dracut.KXG0dg/initramfs/dev/random: Operation not permitted
Feb 23 17:44:59 ip-10-0-136-68 rpm-ostree[100441]: dracut: Cannot create /dev/random
Feb 23 17:44:59 ip-10-0-136-68 rpm-ostree[100441]: dracut: To create an initramfs with fips support, dracut has to run as root
Feb 23 17:45:00 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: scsi-rules ***
Feb 23 17:45:00 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: systemd-initrd ***
Feb 23 17:45:00 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: systemd-sysusers ***
Feb 23 17:45:00 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: modsign ***
Feb 23 17:45:00 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: rdma ***
Feb 23 17:45:00 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: dbus-broker ***
Feb 23 17:45:01 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: dbus ***
Feb 23 17:45:01 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: coreos-sysctl ***
Feb 23 17:45:01 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: i18n ***
Feb 23 17:45:01 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: ignition-godebug ***
Feb 23 17:45:01 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: azure-udev-rules ***
Feb 23 17:45:01 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: rhcos-need-network-manager ***
Feb 23 17:45:01 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: afterburn ***
Feb 23 17:45:01 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: ignition ***
Feb 23 17:45:01 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: rhcos-afterburn-checkin ***
Feb 23 17:45:01 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: coreos-ignition ***
Feb 23 17:45:01 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: coreos-live ***
Feb 23 17:45:01 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: coreos-multipath ***
Feb 23 17:45:01 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: coreos-network ***
Feb 23 17:45:01 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: network-manager ***
Feb 23 17:45:01 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: ignition-conf ***
Feb 23 17:45:01 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: ignition-ostree ***
Feb 23 17:45:01 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: network ***
Feb 23 17:45:01 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: rhcos-fde ***
Feb 23 17:45:01 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: rhcos-fips ***
Feb 23 17:45:01 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: rhcos-check-luks-syntax ***
Feb 23 17:45:02 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: ifcfg ***
Feb 23 17:45:02 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: url-lib ***
Feb 23 17:45:02 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: coreos-kernel ***
Feb 23 17:45:02 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: rdcore ***
Feb 23 17:45:02 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: rhcos-mke2fs ***
Feb 23 17:45:02 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: rhcos-tuned-bits ***
Feb 23 17:45:02 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: clevis ***
Feb 23 17:45:02 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: clevis-pin-null ***
Feb 23 17:45:02 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: clevis-pin-sss ***
Feb 23 17:45:02 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: clevis-pin-tang ***
Feb 23 17:45:02 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: clevis-pin-tpm2 ***
Feb 23 17:45:02 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: coreos-agetty-workaround ***
Feb 23 17:45:02 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: crypt ***
Feb 23 17:45:02 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: dm ***
Feb 23 17:45:02 ip-10-0-136-68 rpm-ostree[100441]: dracut: Skipping udev rule: 64-device-mapper.rules
Feb 23 17:45:02 ip-10-0-136-68 rpm-ostree[100441]: dracut: Skipping udev rule: 60-persistent-storage-dm.rules
Feb 23 17:45:02 ip-10-0-136-68 rpm-ostree[100441]: dracut: Skipping udev rule: 55-dm.rules
Feb 23 17:45:02 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: kernel-modules ***
Feb 23 17:45:06 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: kernel-modules-extra ***
Feb 23 17:45:06 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: kernel-network-modules ***
Feb 23 17:45:07 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: mdraid ***
Feb 23 17:45:07 ip-10-0-136-68 rpm-ostree[100441]: dracut: Skipping udev rule: 64-md-raid.rules
Feb 23 17:45:07 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: multipath ***
Feb 23 17:45:08 ip-10-0-136-68 rpm-ostree[100441]: dracut: Skipping udev rule: 40-multipath.rules
Feb 23 17:45:08 ip-10-0-136-68 rpm-ostree[100441]: dracut: Skipping udev rule: 56-multipath.rules
Feb 23 17:45:08 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: nvdimm ***
Feb 23 17:45:08 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: qemu ***
Feb 23 17:45:08 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: qemu-net ***
Feb 23 17:45:08 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: cifs ***
Feb 23 17:45:08 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: lunmask ***
Feb 23 17:45:08 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: nvmf ***
Feb 23 17:45:08 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: resume ***
Feb 23 17:45:08 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: rootfs-block ***
Feb 23 17:45:08 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: terminfo ***
Feb 23 17:45:08 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: udev-rules ***
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[100441]: dracut: Skipping udev rule: 91-permissions.rules
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[100441]: dracut: Skipping udev rule: 80-drivers-modprobe.rules
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: virtiofs ***
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: walinuxagent ***
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: dracut-systemd ***
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: ostree ***
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: usrmount ***
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: base ***
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: emergency-shell-setup ***
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: fs-lib ***
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: journal-conf ***
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: microcode_ctl-fw_dir_override ***
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[100441]: dracut: microcode_ctl module: mangling fw_dir
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[100441]: dracut: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[103803]: intel: model '', path ' intel-ucode/*', kvers ''
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[100441]: dracut: microcode_ctl: intel: caveats check for kernel version "5.14.0-266.rt14.266.el9.x86_64" passed, adding "/usr/share/microcode_ctl/ucode_with_caveats/intel" to fw_dir variable
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[100441]: dracut: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[103821]: intel-06-2d-07: model 'GenuineIntel 06-2d-07', path ' intel-ucode/06-2d-07', kvers ''
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[103826]: Dependency check for required intel: calling check_caveat 'intel' '1' match_model=0
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[103826]: intel: model '', path ' intel-ucode/*', kvers ''
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[103826]: Dependency check for required intel succeeded: result=0
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[100441]: dracut: microcode_ctl: intel-06-2d-07: caveats check for kernel version "5.14.0-266.rt14.266.el9.x86_64" passed, adding "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07" to fw_dir variable
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[100441]: dracut: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[103844]: intel-06-4e-03: model 'GenuineIntel 06-4e-03', path ' intel-ucode/06-4e-03', kvers ''
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[103849]: Dependency check for required intel: calling check_caveat 'intel' '1' match_model=0
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[103849]: intel: model '', path ' intel-ucode/*', kvers ''
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[103849]: Dependency check for required intel succeeded: result=0
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[103844]: intel-06-4e-03: caveat is disabled in configuration
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[100441]: dracut: microcode_ctl: kernel version "5.14.0-266.rt14.266.el9.x86_64" failed early load check for "intel-06-4e-03", skipping
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[100441]: dracut: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[103864]: intel-06-4f-01: model 'GenuineIntel 06-4f-01', path ' intel-ucode/06-4f-01', kvers ' 4.17.0 3.10.0-894 3.10.0-862.6.1 3.10.0-693.35.1 3.10.0-514.52.1 3.10.0-327.70.1 2.6.32-754.1.1 2.6.32-573.58.1 2.6.32-504.71.1 2.6.32-431.90.1 2.6.32-358.90.1'
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[103869]: Dependency check for required intel: calling check_caveat 'intel' '1' match_model=0
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[103869]: intel: model '', path ' intel-ucode/*', kvers ''
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[103869]: Dependency check for required intel succeeded: result=0
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[103864]: intel-06-4f-01: caveat is disabled in configuration
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[100441]: dracut: microcode_ctl: kernel version "5.14.0-266.rt14.266.el9.x86_64" failed early load check for "intel-06-4f-01", skipping
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[100441]: dracut: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[103884]: intel-06-55-04: model 'GenuineIntel 06-55-04', path ' intel-ucode/06-55-04', kvers ''
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[103889]: Dependency check for required intel: calling check_caveat 'intel' '1' match_model=0
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[103889]: intel: model '', path ' intel-ucode/*', kvers ''
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[103889]: Dependency check for required intel succeeded: result=0
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[100441]: dracut: microcode_ctl: intel-06-55-04: caveats check for kernel version "5.14.0-266.rt14.266.el9.x86_64" passed, adding "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04" to fw_dir variable
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[100441]: dracut: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[103907]: intel-06-5e-03: model 'GenuineIntel 06-5e-03', path ' intel-ucode/06-5e-03', kvers ''
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[103912]: Dependency check for required intel: calling check_caveat 'intel' '1' match_model=0
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[103912]: intel: model '', path ' intel-ucode/*', kvers ''
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[103912]: Dependency check for required intel succeeded: result=0
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[100441]: dracut: microcode_ctl: intel-06-5e-03: caveats check for kernel version "5.14.0-266.rt14.266.el9.x86_64" passed, adding "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03" to fw_dir variable
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[100441]: dracut: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[103930]: intel-06-8c-01: model 'GenuineIntel 06-8c-01', path ' intel-ucode/06-8c-01', kvers ''
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[103935]: Dependency check for required intel: calling check_caveat 'intel' '1' match_model=0
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[103935]: intel: model '', path ' intel-ucode/*', kvers ''
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[103935]: Dependency check for required intel succeeded: result=0
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[100441]: dracut: microcode_ctl: intel-06-8c-01: caveats check for kernel version "5.14.0-266.rt14.266.el9.x86_64" passed, adding "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01" to fw_dir variable
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[100441]: dracut: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[103953]: intel-06-8e-9e-0x-0xca: model '', path ' intel-ucode/*', kvers ''
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[103958]: Dependency check for required intel: calling check_caveat 'intel' '1' match_model=0
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[103958]: intel: model '', path ' intel-ucode/*', kvers ''
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[103958]: Dependency check for required intel succeeded: result=0
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[103953]: intel-06-8e-9e-0x-0xca: caveat is disabled in configuration
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[100441]: dracut: microcode_ctl: kernel version "5.14.0-266.rt14.266.el9.x86_64" failed early load check for "intel-06-8e-9e-0x-0xca", skipping
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[100441]: dracut: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[103973]: intel-06-8e-9e-0x-dell: model '', path ' intel-ucode/*', kvers ''
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[103978]: Dependency check for required intel: calling check_caveat 'intel' '1' match_model=0
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[103978]: intel: model '', path ' intel-ucode/*', kvers ''
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[103978]: Dependency check for required intel succeeded: result=0
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[100441]: dracut: microcode_ctl: intel-06-8e-9e-0x-dell: caveats check for kernel version "5.14.0-266.rt14.266.el9.x86_64" passed, adding "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell" to fw_dir variable
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[100441]: dracut: microcode_ctl: final fw_dir: "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell /usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01 /usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03 /usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04 /usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07 /usr/share/microcode_ctl/ucode_with_caveats/intel /lib/firmware/updates/5.14.0-266.rt14.266.el9.x86_64 /lib/firmware/updates /lib/firmware/5.14.0-266.rt14.266.el9.x86_64 /lib/firmware"
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including module: shutdown ***
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Including modules done ***
Feb 23 17:45:09 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Installing kernel module dependencies ***
Feb 23 17:45:11 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00601|connmgr|INFO|br-ex<->unix#2085: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:45:11 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Installing kernel module dependencies done ***
Feb 23 17:45:11 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Resolving executable dependencies ***
Feb 23 17:45:14 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Resolving executable dependencies done ***
Feb 23 17:45:14 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Hardlinking files ***
Feb 23 17:45:14 ip-10-0-136-68 rpm-ostree[106232]: dracut: Mode: real
Feb 23 17:45:14 ip-10-0-136-68 rpm-ostree[106232]: dracut: Files: 2200
Feb 23 17:45:14 ip-10-0-136-68 rpm-ostree[106232]: dracut: Linked: 6 files
Feb 23 17:45:14 ip-10-0-136-68 rpm-ostree[106232]: dracut: Compared: 0 xattrs
Feb 23 17:45:14 ip-10-0-136-68 rpm-ostree[106232]: dracut: Compared: 606 files
Feb 23 17:45:14 ip-10-0-136-68 rpm-ostree[106232]: dracut: Saved: 1.05 MiB
Feb 23 17:45:14 ip-10-0-136-68 rpm-ostree[106232]: dracut: Duration: 0.013041 seconds
Feb 23 17:45:14 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Hardlinking files done ***
Feb 23 17:45:14 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Generating early-microcode cpio image ***
Feb 23 17:45:14 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Constructing AuthenticAMD.bin ***
Feb 23 17:45:14 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Constructing GenuineIntel.bin ***
Feb 23 17:45:14 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Constructing GenuineIntel.bin ***
Feb 23 17:45:14 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Constructing GenuineIntel.bin ***
Feb 23 17:45:14 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Constructing GenuineIntel.bin ***
Feb 23 17:45:14 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Constructing GenuineIntel.bin ***
Feb 23 17:45:14 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Constructing GenuineIntel.bin ***
Feb 23 17:45:14 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Constructing GenuineIntel.bin ***
Feb 23 17:45:14 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Store current command line parameters ***
Feb 23 17:45:14 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Creating image file '/tmp/initramfs.img' ***
Feb 23 17:45:21 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00602|connmgr|INFO|br-int<->unix#1484: 5 flow_mods in the 49 s starting 57 s ago (2 adds, 3 deletes)
Feb 23 17:45:26 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00603|connmgr|INFO|br-ex<->unix#2089: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:45:41 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00604|connmgr|INFO|br-ex<->unix#2099: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:45:52 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:45:52.664889 2112 scope.go:115] "RemoveContainer" containerID="0015022f29391366b4f82cdec02e894dbfe74de2e9597cd86462bf2830ed7b81"
Feb 23 17:45:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:45:52.665916542Z" level=info msg="Removing container: 0015022f29391366b4f82cdec02e894dbfe74de2e9597cd86462bf2830ed7b81" id=e780230f-8ce1-4ebf-bb66-d79eadf809d0 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:45:52 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-93cb3c277b7c2c771b78a052e249109c6bd229b3c89548e3a6ad1cc324100cd2-merged.mount: Succeeded.
Feb 23 17:45:52 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-93cb3c277b7c2c771b78a052e249109c6bd229b3c89548e3a6ad1cc324100cd2-merged.mount: Consumed 0 CPU time
Feb 23 17:45:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:45:52.711728805Z" level=info msg="Removed container 0015022f29391366b4f82cdec02e894dbfe74de2e9597cd86462bf2830ed7b81: openshift-debug-gwh9j/ip-10-0-136-68us-west-2computeinternal-debug/container-00" id=e780230f-8ce1-4ebf-bb66-d79eadf809d0 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:45:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:45:52.712951648Z" level=info msg="Stopping pod sandbox: b6f91b255cbfdc22a0605b75c29a9650ca3a1c61d888f5e29751f80e27f40651" id=42e438aa-3e69-42b5-b205-ed28dcf43f98 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:45:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:45:52.712986530Z" level=info msg="Stopped pod sandbox (already stopped): b6f91b255cbfdc22a0605b75c29a9650ca3a1c61d888f5e29751f80e27f40651" id=42e438aa-3e69-42b5-b205-ed28dcf43f98 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 17:45:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:45:52.713223105Z" level=info msg="Removing pod sandbox: b6f91b255cbfdc22a0605b75c29a9650ca3a1c61d888f5e29751f80e27f40651" id=a2b82794-5ae8-4935-9b15-3a07240a5fb4 name=/runtime.v1.RuntimeService/RemovePodSandbox
Feb 23 17:45:52 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:45:52.720594402Z" level=info msg="Removed pod sandbox: b6f91b255cbfdc22a0605b75c29a9650ca3a1c61d888f5e29751f80e27f40651" id=a2b82794-5ae8-4935-9b15-3a07240a5fb4 name=/runtime.v1.RuntimeService/RemovePodSandbox
Feb 23 17:45:56 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00605|connmgr|INFO|br-ex<->unix#2103: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:45:57 ip-10-0-136-68 rpm-ostree[100441]: dracut: *** Creating initramfs image file '/tmp/initramfs.img' done ***
Feb 23 17:45:57 ip-10-0-136-68 rpm-ostree[83055]: warning: Found SQLITE rpmdb.sqlite database while attempting bdb backend: using sqlite backend.
Feb 23 17:45:59 ip-10-0-136-68 rpm-ostree[83055]: Wrote commit: cbb1434bd95244cc194a601e83cc07f573f4704cfc6840b355ab0d84a72d06ef; New objects: meta:42 content:9 totaling 133.6 MB)
Feb 23 17:45:59 ip-10-0-136-68 kernel: SELinux: Context system_u:object_r:NetworkManager_dispatcher_console_script_t:s0 is not valid (left unmapped).
Feb 23 17:46:00 ip-10-0-136-68 rpm-ostree[83055]: note: Deploying commit cbb1434bd95244cc194a601e83cc07f573f4704cfc6840b355ab0d84a72d06ef which contains content in /var/lib that will be ignored.
Feb 23 17:46:00 ip-10-0-136-68 systemd[1]: Started OSTree Finalize Staged Deployment.
Feb 23 17:46:00 ip-10-0-136-68 rpm-ostree[83055]: Created new deployment /ostree/deploy/rhcos/deploy/cbb1434bd95244cc194a601e83cc07f573f4704cfc6840b355ab0d84a72d06ef.0
Feb 23 17:46:00 ip-10-0-136-68 rpm-ostree[83055]: warning: Found SQLITE rpmdb.sqlite database while attempting bdb backend: using sqlite backend.
Feb 23 17:46:00 ip-10-0-136-68 rpm-ostree[83055]: Pruned container image layers: 0
Feb 23 17:46:01 ip-10-0-136-68 rpm-ostree[83055]: Pruned container image layers: 50
Feb 23 17:46:01 ip-10-0-136-68 rpm-ostree[83055]: Txn Rebase on /org/projectatomic/rpmostree1/rhcos successful
Feb 23 17:46:01 ip-10-0-136-68 rpm-ostree[83055]: failed to query container image base metadata: Missing base image ref
Feb 23 17:46:01 ip-10-0-136-68 rpm-ostree[83055]: failed to query container image base metadata: Missing base image ref
Feb 23 17:46:01 ip-10-0-136-68 rpm-ostree[83055]: Unlocked sysroot
Feb 23 17:46:01 ip-10-0-136-68 rpm-ostree[83055]: Process [pid: 83051 uid: 0 unit: crio-42b96157d17f74f25f71d7357da12ff08910e955866411f5c7c62aebd3027126.scope] disconnected from transaction progress
Feb 23 17:46:01 ip-10-0-136-68 rpm-ostree[83055]: failed to query container image base metadata: Missing base image ref
Feb 23 17:46:01 ip-10-0-136-68 rpm-ostree[83055]: client(id:machine-config-operator dbus:1.638 unit:crio-42b96157d17f74f25f71d7357da12ff08910e955866411f5c7c62aebd3027126.scope uid:0) vanished; remaining=0
Feb 23 17:46:01 ip-10-0-136-68 rpm-ostree[83055]: In idle state; will auto-exit in 60 seconds
Feb 23 17:46:01 ip-10-0-136-68 root[115880]: machine-config-daemon[79932]: Running rpm-ostree [kargs --append=systemd.unified_cgroup_hierarchy=0 --append=systemd.legacy_systemd_cgroup_controller=1]
Feb 23 17:46:01 ip-10-0-136-68 rpm-ostree[83055]: client(id:machine-config-operator dbus:1.641 unit:crio-42b96157d17f74f25f71d7357da12ff08910e955866411f5c7c62aebd3027126.scope uid:0) added; new total=1
Feb 23 17:46:01 ip-10-0-136-68 rpm-ostree[83055]: failed to query container image base metadata: Missing base image ref
Feb 23 17:46:01 ip-10-0-136-68 rpm-ostree[83055]: Locked sysroot
Feb 23 17:46:01 ip-10-0-136-68 rpm-ostree[83055]: Initiated txn KernelArgs for client(id:machine-config-operator dbus:1.641 unit:crio-42b96157d17f74f25f71d7357da12ff08910e955866411f5c7c62aebd3027126.scope uid:0): /org/projectatomic/rpmostree1/rhcos
Feb 23 17:46:01 ip-10-0-136-68 rpm-ostree[83055]: Process [pid: 115881 uid: 0 unit: crio-42b96157d17f74f25f71d7357da12ff08910e955866411f5c7c62aebd3027126.scope] connected to transaction progress
Feb 23 17:46:01 ip-10-0-136-68 rpm-ostree[83055]: warning: Found SQLITE rpmdb.sqlite database while attempting bdb backend: using sqlite backend.
Feb 23 17:46:02 ip-10-0-136-68 rpm-ostree[115892]: warning: Found SQLITE rpmdb.sqlite database while attempting bdb backend: using sqlite backend.
Feb 23 17:46:04 ip-10-0-136-68 rpm-ostree[83055]: Preparing pkg txn; enabled repos: [] solvables: 0
Feb 23 17:46:04 ip-10-0-136-68 rpm-ostree[83055]: Executed %post for kernel-rt-core in 126 ms
Feb 23 17:46:10 ip-10-0-136-68 rpm-ostree[83055]: Executed %post for kernel-rt-modules in 5329 ms
Feb 23 17:46:11 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00606|connmgr|INFO|br-ex<->unix#2112: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:46:15 ip-10-0-136-68 rpm-ostree[83055]: Executed %post for kernel-rt-modules-extra in 5470 ms
Feb 23 17:46:21 ip-10-0-136-68 rpm-ostree[83055]: Executed %post for kernel-rt-kvm in 5454 ms
Feb 23 17:46:21 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[116220]: cp: cannot create regular file '/boot/vmlinuz-5.14.0-266.rt14.266.el9.x86_64': No such file or directory
Feb 23 17:46:21 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[116223]: cp: cannot create regular file '/boot/System.map-5.14.0-266.rt14.266.el9.x86_64': No such file or directory
Feb 23 17:46:21 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[116226]: cp: cannot create regular file '/boot/config-5.14.0-266.rt14.266.el9.x86_64': No such file or directory
Feb 23 17:46:21 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[116229]: cp: cannot create regular file '/boot/.vmlinuz-5.14.0-266.rt14.266.el9.x86_64.hmac': No such file or directory
Feb 23 17:46:21 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[116232]: ln: failed to create symbolic link '/boot/symvers-5.14.0-266.rt14.266.el9.x86_64.gz': No such file or directory
Feb 23 17:46:21 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[116249]: grub2-probe: error: failed to get canonical path of `tmpfs'.
Feb 23 17:46:21 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[116248]: No path or device is specified.
Feb 23 17:46:21 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[116248]: Usage: grub2-probe [OPTION...] [OPTION]... [PATH|DEVICE]
Feb 23 17:46:21 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[116248]: Try 'grub2-probe --help' or 'grub2-probe --usage' for more information.
Feb 23 17:46:21 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[116250]: grub2-mkrelpath: error: failed to get canonical path of `/boot/vmlinuz-5.14.0-266.rt14.266.el9.x86_64'.
Feb 23 17:46:21 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[116251]: dirname: missing operand
Feb 23 17:46:21 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[116251]: Try 'dirname --help' for more information.
Feb 23 17:46:26 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00607|connmgr|INFO|br-ex<->unix#2116: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:46:26 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[116318]: /usr/bin/dracut: line 1054: /sys/module/firmware_class/parameters/path: No such file or directory
Feb 23 17:46:28 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[116865]: mknod: /var/tmp/dracut.TqTYMd/initramfs/dev/null: Operation not permitted
Feb 23 17:46:28 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[116866]: mknod: /var/tmp/dracut.TqTYMd/initramfs/dev/kmsg: Operation not permitted
Feb 23 17:46:28 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[116867]: mknod: /var/tmp/dracut.TqTYMd/initramfs/dev/console: Operation not permitted
Feb 23 17:46:28 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[116868]: mknod: /var/tmp/dracut.TqTYMd/initramfs/dev/random: Operation not permitted
Feb 23 17:46:28 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[116869]: mknod: /var/tmp/dracut.TqTYMd/initramfs/dev/urandom: Operation not permitted
Feb 23 17:46:29 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[117064]: rpm-ostree-systemctl: Ignored non-preset command: -q --root /var/tmp/dracut.TqTYMd/initramfs add-wants emergency.target systemd-vconsole-setup.service
Feb 23 17:46:29 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[117066]: rpm-ostree-systemctl: Ignored non-preset command: -q --root /var/tmp/dracut.TqTYMd/initramfs add-wants rescue.target systemd-vconsole-setup.service
Feb 23 17:46:29 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[117067]: rpm-ostree-systemctl: Ignored non-preset command: -q --root /var/tmp/dracut.TqTYMd/initramfs add-wants systemd-ask-password-console.service systemd-vconsole-setup.service
Feb 23 17:46:29 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[117071]: rpm-ostree-systemctl: Ignored non-preset command: -q --root /var/tmp/dracut.TqTYMd/initramfs set-default multi-user.target
Feb 23 17:46:29 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[117105]: mknod: /var/tmp/dracut.TqTYMd/initramfs/dev/random: Operation not permitted
Feb 23 17:46:29 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[116300]: dracut: Cannot create /dev/random
Feb 23 17:46:29 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[116300]: dracut: To create an initramfs with fips support, dracut has to run as root
Feb 23 17:46:29 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[117218]: rpm-ostree-systemctl: Ignored non-preset command: -q --root /var/tmp/dracut.TqTYMd/initramfs set-default initrd.target
Feb 23 17:46:31 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[117475]: rpm-ostree-systemctl: Ignored non-preset command: -q --root /var/tmp/dracut.TqTYMd/initramfs enable dbus-broker.service
Feb 23 17:46:32 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[117655]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.TqTYMd/initramfs add-requires ignition-files.service rhcos-afterburn-checkin.service
Feb 23 17:46:32 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[117679]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.TqTYMd/initramfs add-requires ignition-complete.target coreos-post-ignition-checks.service
Feb 23 17:46:32 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[117684]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.TqTYMd/initramfs add-requires ignition-complete.target coreos-teardown-initramfs.service
Feb 23 17:46:32 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[117693]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.TqTYMd/initramfs add-requires ignition-diskful.target coreos-gpt-setup.service
Feb 23 17:46:32 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[117698]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.TqTYMd/initramfs add-requires ignition-complete.target coreos-kargs-reboot.service
Feb 23 17:46:32 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[117703]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.TqTYMd/initramfs add-requires ignition-diskful.target coreos-boot-edit.service
Feb 23 17:46:32 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[117707]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.TqTYMd/initramfs add-requires ignition-diskful.target coreos-ignition-unique-boot.service
Feb 23 17:46:32 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[117710]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.TqTYMd/initramfs add-requires ignition-diskful.target coreos-unique-boot.service
Feb 23 17:46:32 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[117713]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.TqTYMd/initramfs add-requires ignition-complete.target coreos-ignition-setup-user.service
Feb 23 17:46:32 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[117735]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.TqTYMd/initramfs add-requires initrd-switch-root.target coreos-live-unmount-tmpfs-var.service
Feb 23 17:46:32 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[117740]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.TqTYMd/initramfs add-requires initrd-root-fs.target coreos-livepxe-rootfs.service
Feb 23 17:46:32 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[117743]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.TqTYMd/initramfs add-requires ignition-complete.target coreos-live-clear-sssd-cache.service
Feb 23 17:46:32 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[117746]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.TqTYMd/initramfs add-requires default.target coreos-liveiso-persist-osmet.service
Feb 23 17:46:32 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[117749]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.TqTYMd/initramfs add-requires default.target coreos-livepxe-persist-osmet.service
Feb 23 17:46:32 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[117759]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.TqTYMd/initramfs add-requires initrd.target coreos-propagate-multipath-conf.service
Feb 23 17:46:32 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[117773]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.TqTYMd/initramfs add-requires ignition-complete.target coreos-enable-network.service
Feb 23 17:46:32 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[117778]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.TqTYMd/initramfs add-requires ignition-complete.target coreos-copy-firstboot-network.service
Feb 23 17:46:32 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[117816]: rpm-ostree-systemctl: Ignored non-preset command: -q --root /var/tmp/dracut.TqTYMd/initramfs enable nm-initrd.service
Feb 23 17:46:33 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[117879]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.TqTYMd/initramfs add-requires ignition-complete.target ignition-ostree-mount-var.service
Feb 23 17:46:33 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[117885]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.TqTYMd/initramfs add-requires ignition-complete.target ignition-ostree-populate-var.service
Feb 23 17:46:33 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[117897]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.TqTYMd/initramfs add-requires ignition-complete.target ignition-ostree-transposefs-detect.service
Feb 23 17:46:33 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[117903]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.TqTYMd/initramfs add-requires ignition-complete.target ignition-ostree-transposefs-save.service
Feb 23 17:46:33 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[117907]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.TqTYMd/initramfs add-requires ignition-complete.target ignition-ostree-transposefs-restore.service
Feb 23 17:46:33 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[117910]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.TqTYMd/initramfs add-requires ignition-diskful.target ignition-ostree-mount-firstboot-sysroot.service
Feb 23 17:46:33 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[117913]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.TqTYMd/initramfs add-requires ignition-diskful.target ignition-ostree-uuid-boot.service
Feb 23 17:46:33 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[117917]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.TqTYMd/initramfs add-requires ignition-diskful.target ignition-ostree-uuid-root.service
Feb 23 17:46:33 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[117922]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.TqTYMd/initramfs add-requires ignition-diskful-subsequent.target ignition-ostree-mount-subsequent-sysroot.service
Feb 23 17:46:33 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[117929]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.TqTYMd/initramfs add-requires ignition-complete.target ignition-ostree-growfs.service
Feb 23 17:46:33 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[117934]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.TqTYMd/initramfs add-requires ignition-complete.target ignition-ostree-check-rootfs-size.service
Feb 23 17:46:34 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[118098]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.TqTYMd/initramfs add-requires ignition-diskful.target rhcos-fips.service
Feb 23 17:46:34 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[118099]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.TqTYMd/initramfs add-requires ignition-diskful.target rhcos-fips-finish.service
Feb 23 17:46:34 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[118108]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.TqTYMd/initramfs add-requires ignition-complete.target rhcos-fail-boot-for-legacy-luks-config.service
Feb 23 17:46:34 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[118176]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.TqTYMd/initramfs add-requires sysinit.target coreos-check-kernel.service
Feb 23 17:46:34 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[118201]: rpm-ostree-systemctl: Ignored non-preset command: -q --root /var/tmp/dracut.TqTYMd/initramfs add-wants cryptsetup.target clevis-luks-askpass.path
Feb 23 17:46:34 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[118235]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.TqTYMd/initramfs add-requires initrd.target coreos-touch-run-agetty.service
Feb 23 17:46:41 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00608|connmgr|INFO|br-ex<->unix#2125: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:46:43 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[119204]: rpm-ostree-systemctl: Ignored non-preset command: -q --root /var/tmp/dracut.TqTYMd/initramfs enable multipathd-configure.service
Feb 23 17:46:43 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[119207]: rpm-ostree-systemctl: Ignored non-preset command: -q --root /var/tmp/dracut.TqTYMd/initramfs enable multipathd.service
Feb 23 17:46:46 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[119683]: rpm-ostree-systemctl: Ignored non-preset command: -q --root /var/tmp/dracut.TqTYMd/initramfs add-wants initrd.target dracut-cmdline.service
Feb 23 17:46:46 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[119687]: rpm-ostree-systemctl: Ignored non-preset command: -q --root /var/tmp/dracut.TqTYMd/initramfs add-wants initrd.target dracut-cmdline-ask.service
Feb 23 17:46:46 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[119690]: rpm-ostree-systemctl: Ignored non-preset command: -q --root /var/tmp/dracut.TqTYMd/initramfs add-wants initrd.target dracut-initqueue.service
Feb 23 17:46:46 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[119694]: rpm-ostree-systemctl: Ignored non-preset command: -q --root /var/tmp/dracut.TqTYMd/initramfs add-wants initrd.target dracut-mount.service
Feb 23 17:46:46 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[119698]: rpm-ostree-systemctl: Ignored non-preset command: -q --root /var/tmp/dracut.TqTYMd/initramfs add-wants initrd.target dracut-pre-mount.service
Feb 23 17:46:46 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[119701]: rpm-ostree-systemctl: Ignored non-preset command: -q --root /var/tmp/dracut.TqTYMd/initramfs add-wants initrd.target dracut-pre-pivot.service
Feb 23 17:46:46 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[119704]: rpm-ostree-systemctl: Ignored non-preset command: -q --root /var/tmp/dracut.TqTYMd/initramfs add-wants initrd.target dracut-pre-trigger.service
Feb 23 17:46:46 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[119707]: rpm-ostree-systemctl: Ignored non-preset command: -q --root /var/tmp/dracut.TqTYMd/initramfs add-wants initrd.target dracut-pre-udev.service
Feb 23 17:46:46 ip-10-0-136-68 rpm-ostree(kernel-rt-core.posttrans)[119769]: rpm-ostree-systemctl: Ignored non-preset command: -q --root=/var/tmp/dracut.TqTYMd/initramfs add-wants emergency.target ignition-virtio-dump-journal.service
Feb 23 17:46:56 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00609|connmgr|INFO|br-ex<->unix#2129: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:47:11 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00610|connmgr|INFO|br-ex<->unix#2138: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:47:16 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00611|connmgr|INFO|br-int<->unix#1484: 22 flow_mods in the 1 s starting 10 s ago (6 adds, 4 deletes, 12 modifications)
Feb 23 17:47:26 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00612|connmgr|INFO|br-ex<->unix#2142: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:47:31 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00613|connmgr|INFO|br-ex<->unix#2146: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:47:31 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00614|connmgr|INFO|br-ex<->unix#2149: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:47:31 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00615|connmgr|INFO|br-ex<->unix#2152: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:47:31 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00616|connmgr|INFO|br-ex<->unix#2155: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:47:32 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00617|connmgr|INFO|br-ex<->unix#2163: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:47:32 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00618|connmgr|INFO|br-ex<->unix#2166: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:47:32 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00619|connmgr|INFO|br-ex<->unix#2169: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:47:32 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00620|connmgr|INFO|br-ex<->unix#2172: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:47:33 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00621|connmgr|INFO|br-ex<->unix#2175: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:47:33 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00622|connmgr|INFO|br-ex<->unix#2178: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:47:33 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00623|connmgr|INFO|br-ex<->unix#2181: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:47:33 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00624|connmgr|INFO|br-ex<->unix#2184: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:47:34 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00625|connmgr|INFO|br-ex<->unix#2187: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:47:34 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00626|connmgr|INFO|br-ex<->unix#2190: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:47:34 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00627|connmgr|INFO|br-ex<->unix#2193: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:47:34 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00628|connmgr|INFO|br-ex<->unix#2196: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:47:35 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00629|connmgr|INFO|br-ex<->unix#2199: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:47:35 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00630|connmgr|INFO|br-ex<->unix#2202: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:47:37 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00631|connmgr|INFO|br-ex<->unix#2205: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:47:37 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00632|connmgr|INFO|br-ex<->unix#2208: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:47:38 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00633|connmgr|INFO|br-ex<->unix#2211: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:47:38 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00634|connmgr|INFO|br-ex<->unix#2214: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:47:38 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00635|connmgr|INFO|br-ex<->unix#2217: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:47:38 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00636|connmgr|INFO|br-ex<->unix#2220: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:47:38 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00637|connmgr|INFO|br-ex<->unix#2223: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:47:38 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00638|connmgr|INFO|br-ex<->unix#2226: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:47:39 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00639|connmgr|INFO|br-ex<->unix#2229: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:47:39 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00640|connmgr|INFO|br-ex<->unix#2232: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:47:39 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00641|connmgr|INFO|br-ex<->unix#2235: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:47:39 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00642|connmgr|INFO|br-ex<->unix#2238: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:47:39 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00643|connmgr|INFO|br-ex<->unix#2241: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:47:39 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00644|connmgr|INFO|br-ex<->unix#2244: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:47:39 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00645|connmgr|INFO|br-ex<->unix#2247: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:47:39 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00646|connmgr|INFO|br-ex<->unix#2250: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:47:40 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00647|connmgr|INFO|br-ex<->unix#2253: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:47:40 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00648|connmgr|INFO|br-ex<->unix#2256: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:47:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:47:46.037589231Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be64c744b92d1b1df463d84d8277c38069f3ce4e8e95ce84d4a6ffac6dc53710" id=e07d1451-09e3-4e15-a421-8d459e29e0b0 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:47:46 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:47:46.037828533Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:884319081b9650108dd2a638921522ef3df145e924d566325c975ca28709af4c,RepoTags:[],RepoDigests:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be64c744b92d1b1df463d84d8277c38069f3ce4e8e95ce84d4a6ffac6dc53710],Size_:350038335,Uid:nil,Username:,Spec:nil,Pinned:false,},Info:map[string]string{},}" id=e07d1451-09e3-4e15-a421-8d459e29e0b0 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:47:55 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00649|connmgr|INFO|br-ex<->unix#2260: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:47:56 ip-10-0-136-68 rpm-ostree[83055]: Executed %posttrans for kernel-rt-core in 95566 ms
Feb 23 17:47:56 ip-10-0-136-68 rpm-ostree[83055]: Executed %posttrans for kernel-rt-modules in 138 ms
Feb 23 17:47:56 ip-10-0-136-68 rpm-ostree[83055]: Executed %transfiletriggerin(shared-mime-info) for usr/share/mime in 158 ms; 824 matched files
Feb 23 17:47:56 ip-10-0-136-68 rpm-ostree[83055]: No files matched %transfiletriggerin(usr/share/info) for info
Feb 23 17:47:56 ip-10-0-136-68 rpm-ostree[83055]: No files matched %transfiletriggerin(usr/lib64/gio/modules) for glib2
Feb 23 17:47:56 ip-10-0-136-68 rpm-ostree[83055]: No files matched %transfiletriggerin(usr/share/glib-2.0/schemas) for glib2
Feb 23 17:47:56 ip-10-0-136-68 rpm-ostree[83055]: No files matched %transfiletriggerin(lib) for glibc-common
Feb 23 17:47:56 ip-10-0-136-68 rpm-ostree[83055]: No files matched %transfiletriggerin(lib64) for glibc-common
Feb 23 17:47:57 ip-10-0-136-68 rpm-ostree[83055]: Executed %transfiletriggerin(glibc-common) for lib, lib64, usr/lib, usr/lib64 in 217 ms; 14702 matched files
Feb 23 17:47:57 ip-10-0-136-68 rpm-ostree[83055]: Executed %transfiletriggerin(systemd-udev) for usr/lib/udev/hwdb.d in 149 ms; 29 matched files
Feb 23 17:47:57 ip-10-0-136-68 rpm-ostree[83055]: Executed %transfiletriggerin(systemd-udev) for usr/lib/udev/rules.d in 98 ms; 79 matched files
Feb 23 17:47:57 ip-10-0-136-68 rpm-ostree[83055]: sanitycheck(/usr/bin/true) successful
Feb 23 17:47:58 ip-10-0-136-68 rpm-ostree[83055]: Regenerating rpmdb for target
Feb 23 17:48:10 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00650|connmgr|INFO|br-ex<->unix#2269: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:48:10 ip-10-0-136-68 rpm-ostree[132691]: /usr/bin/dracut: line 1054: /sys/module/firmware_class/parameters/path: No such file or directory
Feb 23 17:48:10 ip-10-0-136-68 rpm-ostree[132673]: dracut: Executing: /usr/bin/dracut --reproducible -v --add ostree --tmpdir=/tmp/dracut -f /tmp/initramfs.img --no-hostonly --kver 5.14.0-266.rt14.266.el9.x86_64
Feb 23 17:48:10 ip-10-0-136-68 rpm-ostree[132673]: dracut: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Feb 23 17:48:10 ip-10-0-136-68 rpm-ostree[132673]: dracut: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Feb 23 17:48:10 ip-10-0-136-68 rpm-ostree[132673]: dracut: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Feb 23 17:48:10 ip-10-0-136-68 rpm-ostree[132673]: dracut: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Feb 23 17:48:10 ip-10-0-136-68 rpm-ostree[132673]: dracut: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Feb 23 17:48:10 ip-10-0-136-68 rpm-ostree[132673]: dracut: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Feb 23 17:48:10 ip-10-0-136-68 rpm-ostree[132673]: dracut: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Feb 23 17:48:10 ip-10-0-136-68 rpm-ostree[132673]: dracut: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Feb 23 17:48:11 ip-10-0-136-68 rpm-ostree[133172]: mknod: /tmp/dracut/dracut.v2HQ0T/initramfs/dev/null: Operation not permitted
Feb 23 17:48:11 ip-10-0-136-68 rpm-ostree[133173]: mknod: /tmp/dracut/dracut.v2HQ0T/initramfs/dev/kmsg: Operation not permitted
Feb 23 17:48:11 ip-10-0-136-68 rpm-ostree[133174]: mknod: /tmp/dracut/dracut.v2HQ0T/initramfs/dev/console: Operation not permitted
Feb 23 17:48:11 ip-10-0-136-68 rpm-ostree[133175]: mknod: /tmp/dracut/dracut.v2HQ0T/initramfs/dev/random: Operation not permitted
Feb 23 17:48:11 ip-10-0-136-68 rpm-ostree[133176]: mknod: /tmp/dracut/dracut.v2HQ0T/initramfs/dev/urandom: Operation not permitted
Feb 23 17:48:11 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: bash ***
Feb 23 17:48:11 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: systemd ***
Feb 23 17:48:11 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: fips ***
Feb 23 17:48:11 ip-10-0-136-68 rpm-ostree[133401]: mknod: /tmp/dracut/dracut.v2HQ0T/initramfs/dev/random: Operation not permitted
Feb 23 17:48:11 ip-10-0-136-68 rpm-ostree[132673]: dracut: Cannot create /dev/random
Feb 23 17:48:11 ip-10-0-136-68 rpm-ostree[132673]: dracut: To create an initramfs with fips support, dracut has to run as root
Feb 23 17:48:11 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: scsi-rules ***
Feb 23 17:48:11 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: systemd-initrd ***
Feb 23 17:48:11 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: systemd-sysusers ***
Feb 23 17:48:11 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: modsign ***
Feb 23 17:48:11 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: rdma ***
Feb 23 17:48:12 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: dbus-broker ***
Feb 23 17:48:12 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: dbus ***
Feb 23 17:48:12 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: coreos-sysctl ***
Feb 23 17:48:12 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: i18n ***
Feb 23 17:48:12 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: ignition-godebug ***
Feb 23 17:48:12 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: azure-udev-rules ***
Feb 23 17:48:12 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: rhcos-need-network-manager ***
Feb 23 17:48:12 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: afterburn ***
Feb 23 17:48:12 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: ignition ***
Feb 23 17:48:12 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: rhcos-afterburn-checkin ***
Feb 23 17:48:12 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: coreos-ignition ***
Feb 23 17:48:12 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: coreos-live ***
Feb 23 17:48:12 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: coreos-multipath ***
Feb 23 17:48:12 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: coreos-network ***
Feb 23 17:48:12 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: network-manager ***
Feb 23 17:48:12 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: ignition-conf ***
Feb 23 17:48:12 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: ignition-ostree ***
Feb 23 17:48:13 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: network ***
Feb 23 17:48:13 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: rhcos-fde ***
Feb 23 17:48:13 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: rhcos-fips ***
Feb 23 17:48:13 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: rhcos-check-luks-syntax ***
Feb 23 17:48:13 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: ifcfg ***
Feb 23 17:48:13 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: url-lib ***
Feb 23 17:48:13 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: coreos-kernel ***
Feb 23 17:48:13 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: rdcore ***
Feb 23 17:48:13 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: rhcos-mke2fs ***
Feb 23 17:48:13 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: rhcos-tuned-bits ***
Feb 23 17:48:13 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: clevis ***
Feb 23 17:48:13 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: clevis-pin-null ***
Feb 23 17:48:13 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: clevis-pin-sss ***
Feb 23 17:48:13 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: clevis-pin-tang ***
Feb 23 17:48:13 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: clevis-pin-tpm2 ***
Feb 23 17:48:13 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: coreos-agetty-workaround ***
Feb 23 17:48:13 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: crypt ***
Feb 23 17:48:13 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: dm ***
Feb 23 17:48:13 ip-10-0-136-68 rpm-ostree[132673]: dracut: Skipping udev rule: 64-device-mapper.rules
Feb 23 17:48:13 ip-10-0-136-68 rpm-ostree[132673]: dracut: Skipping udev rule: 60-persistent-storage-dm.rules
Feb 23 17:48:13 ip-10-0-136-68 rpm-ostree[132673]: dracut: Skipping udev rule: 55-dm.rules
Feb 23 17:48:13 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: kernel-modules ***
Feb 23 17:48:16 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00651|connmgr|INFO|br-int<->unix#1484: 878 flow_mods in the 54 s starting 58 s ago (256 adds, 262 deletes, 360 modifications)
Feb 23 17:48:18 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: kernel-modules-extra ***
Feb 23 17:48:18 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: kernel-network-modules ***
Feb 23
17:48:19 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: mdraid *** Feb 23 17:48:19 ip-10-0-136-68 rpm-ostree[132673]: dracut: Skipping udev rule: 64-md-raid.rules Feb 23 17:48:19 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: multipath *** Feb 23 17:48:19 ip-10-0-136-68 rpm-ostree[132673]: dracut: Skipping udev rule: 40-multipath.rules Feb 23 17:48:19 ip-10-0-136-68 rpm-ostree[132673]: dracut: Skipping udev rule: 56-multipath.rules Feb 23 17:48:19 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: nvdimm *** Feb 23 17:48:19 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: qemu *** Feb 23 17:48:19 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: qemu-net *** Feb 23 17:48:19 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: cifs *** Feb 23 17:48:19 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: lunmask *** Feb 23 17:48:19 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: nvmf *** Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: resume *** Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: rootfs-block *** Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: terminfo *** Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: udev-rules *** Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[132673]: dracut: Skipping udev rule: 91-permissions.rules Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[132673]: dracut: Skipping udev rule: 80-drivers-modprobe.rules Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: virtiofs *** Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: walinuxagent *** Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: dracut-systemd *** Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: ostree *** Feb 23 
17:48:20 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: usrmount *** Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: base *** Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: emergency-shell-setup *** Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: fs-lib *** Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: journal-conf *** Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: microcode_ctl-fw_dir_override *** Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[132673]: dracut: microcode_ctl module: mangling fw_dir Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[132673]: dracut: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel"... Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[135972]: intel: model '', path ' intel-ucode/*', kvers '' Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[132673]: dracut: microcode_ctl: intel: caveats check for kernel version "5.14.0-266.rt14.266.el9.x86_64" passed, adding "/usr/share/microcode_ctl/ucode_with_caveats/intel" to fw_dir variable Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[132673]: dracut: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"... 
Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[135990]: intel-06-2d-07: model 'GenuineIntel 06-2d-07', path ' intel-ucode/06-2d-07', kvers ''
Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[135995]: Dependency check for required intel: calling check_caveat 'intel' '1' match_model=0
Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[135995]: intel: model '', path ' intel-ucode/*', kvers ''
Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[135995]: Dependency check for required intel succeeded: result=0
Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[132673]: dracut: microcode_ctl: intel-06-2d-07: caveats check for kernel version "5.14.0-266.rt14.266.el9.x86_64" passed, adding "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07" to fw_dir variable
Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[132673]: dracut: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[136013]: intel-06-4e-03: model 'GenuineIntel 06-4e-03', path ' intel-ucode/06-4e-03', kvers ''
Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[136018]: Dependency check for required intel: calling check_caveat 'intel' '1' match_model=0
Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[136018]: intel: model '', path ' intel-ucode/*', kvers ''
Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[136018]: Dependency check for required intel succeeded: result=0
Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[136013]: intel-06-4e-03: caveat is disabled in configuration
Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[132673]: dracut: microcode_ctl: kernel version "5.14.0-266.rt14.266.el9.x86_64" failed early load check for "intel-06-4e-03", skipping
Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[132673]: dracut: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[136033]: intel-06-4f-01: model 'GenuineIntel 06-4f-01', path ' intel-ucode/06-4f-01', kvers ' 4.17.0 3.10.0-894 3.10.0-862.6.1 3.10.0-693.35.1 3.10.0-514.52.1 3.10.0-327.70.1 2.6.32-754.1.1 2.6.32-573.58.1 2.6.32-504.71.1 2.6.32-431.90.1 2.6.32-358.90.1'
Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[136041]: Dependency check for required intel: calling check_caveat 'intel' '1' match_model=0
Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[136041]: intel: model '', path ' intel-ucode/*', kvers ''
Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[136041]: Dependency check for required intel succeeded: result=0
Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[136033]: intel-06-4f-01: caveat is disabled in configuration
Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[132673]: dracut: microcode_ctl: kernel version "5.14.0-266.rt14.266.el9.x86_64" failed early load check for "intel-06-4f-01", skipping
Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[132673]: dracut: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[136060]: intel-06-55-04: model 'GenuineIntel 06-55-04', path ' intel-ucode/06-55-04', kvers ''
Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[136065]: Dependency check for required intel: calling check_caveat 'intel' '1' match_model=0
Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[136065]: intel: model '', path ' intel-ucode/*', kvers ''
Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[136065]: Dependency check for required intel succeeded: result=0
Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[132673]: dracut: microcode_ctl: intel-06-55-04: caveats check for kernel version "5.14.0-266.rt14.266.el9.x86_64" passed, adding "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04" to fw_dir variable
Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[132673]: dracut: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[136083]: intel-06-5e-03: model 'GenuineIntel 06-5e-03', path ' intel-ucode/06-5e-03', kvers ''
Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[136088]: Dependency check for required intel: calling check_caveat 'intel' '1' match_model=0
Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[136088]: intel: model '', path ' intel-ucode/*', kvers ''
Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[136088]: Dependency check for required intel succeeded: result=0
Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[132673]: dracut: microcode_ctl: intel-06-5e-03: caveats check for kernel version "5.14.0-266.rt14.266.el9.x86_64" passed, adding "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03" to fw_dir variable
Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[132673]: dracut: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[136106]: intel-06-8c-01: model 'GenuineIntel 06-8c-01', path ' intel-ucode/06-8c-01', kvers ''
Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[136111]: Dependency check for required intel: calling check_caveat 'intel' '1' match_model=0
Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[136111]: intel: model '', path ' intel-ucode/*', kvers ''
Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[136111]: Dependency check for required intel succeeded: result=0
Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[132673]: dracut: microcode_ctl: intel-06-8c-01: caveats check for kernel version "5.14.0-266.rt14.266.el9.x86_64" passed, adding "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01" to fw_dir variable
Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[132673]: dracut: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[136129]: intel-06-8e-9e-0x-0xca: model '', path ' intel-ucode/*', kvers ''
Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[136134]: Dependency check for required intel: calling check_caveat 'intel' '1' match_model=0
Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[136134]: intel: model '', path ' intel-ucode/*', kvers ''
Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[136134]: Dependency check for required intel succeeded: result=0
Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[136129]: intel-06-8e-9e-0x-0xca: caveat is disabled in configuration
Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[132673]: dracut: microcode_ctl: kernel version "5.14.0-266.rt14.266.el9.x86_64" failed early load check for "intel-06-8e-9e-0x-0xca", skipping
Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[132673]: dracut: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[136149]: intel-06-8e-9e-0x-dell: model '', path ' intel-ucode/*', kvers ''
Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[136154]: Dependency check for required intel: calling check_caveat 'intel' '1' match_model=0
Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[136154]: intel: model '', path ' intel-ucode/*', kvers ''
Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[136154]: Dependency check for required intel succeeded: result=0
Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[132673]: dracut: microcode_ctl: intel-06-8e-9e-0x-dell: caveats check for kernel version "5.14.0-266.rt14.266.el9.x86_64" passed, adding "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell" to fw_dir variable
Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[132673]: dracut: microcode_ctl: final fw_dir: "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell /usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01 /usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03 /usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04 /usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07 /usr/share/microcode_ctl/ucode_with_caveats/intel /lib/firmware/updates/5.14.0-266.rt14.266.el9.x86_64 /lib/firmware/updates /lib/firmware/5.14.0-266.rt14.266.el9.x86_64 /lib/firmware"
Feb 23 17:48:20 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including module: shutdown ***
Feb 23 17:48:21 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Including modules done ***
Feb 23 17:48:21 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Installing kernel module dependencies ***
Feb 23 17:48:23 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Installing kernel module dependencies done ***
Feb 23 17:48:23 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Resolving executable dependencies ***
Feb 23 17:48:25 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00652|connmgr|INFO|br-ex<->unix#2273: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:48:25 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Resolving executable dependencies done ***
Feb 23 17:48:25 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Hardlinking files ***
Feb 23 17:48:25 ip-10-0-136-68 rpm-ostree[138411]: dracut: Mode: real
Feb 23 17:48:25 ip-10-0-136-68 rpm-ostree[138411]: dracut: Files: 2200
Feb 23 17:48:25 ip-10-0-136-68 rpm-ostree[138411]: dracut: Linked: 6 files
Feb 23 17:48:25 ip-10-0-136-68 rpm-ostree[138411]: dracut: Compared: 0 xattrs
Feb 23 17:48:25 ip-10-0-136-68 rpm-ostree[138411]: dracut: Compared: 606 files
Feb 23 17:48:25 ip-10-0-136-68 rpm-ostree[138411]: dracut: Saved: 1.05 MiB
Feb 23 17:48:25 ip-10-0-136-68 rpm-ostree[138411]: dracut: Duration: 0.012941 seconds
Feb 23 17:48:25 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Hardlinking files done ***
Feb 23 17:48:25 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Generating early-microcode cpio image ***
Feb 23 17:48:25 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Constructing AuthenticAMD.bin ***
Feb 23 17:48:25 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Constructing GenuineIntel.bin ***
Feb 23 17:48:25 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Constructing GenuineIntel.bin ***
Feb 23 17:48:25 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Constructing GenuineIntel.bin ***
Feb 23 17:48:25 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Constructing GenuineIntel.bin ***
Feb 23 17:48:25 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Constructing GenuineIntel.bin ***
Feb 23 17:48:25 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Constructing GenuineIntel.bin ***
Feb 23 17:48:25 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Constructing GenuineIntel.bin ***
Feb 23 17:48:25 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Store current command line parameters ***
Feb 23 17:48:25 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Creating image file '/tmp/initramfs.img' ***
Feb 23 17:48:40 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00653|connmgr|INFO|br-ex<->unix#2282: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:48:55 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00654|connmgr|INFO|br-ex<->unix#2286: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:49:09 ip-10-0-136-68 rpm-ostree[132673]: dracut: *** Creating initramfs image file '/tmp/initramfs.img' done ***
Feb 23 17:49:09 ip-10-0-136-68 rpm-ostree[83055]: warning: Found SQLITE rpmdb.sqlite database while attempting bdb backend: using sqlite backend.
Feb 23 17:49:10 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00655|connmgr|INFO|br-ex<->unix#2295: 2 flow_mods in the last 0 s (2 adds)
Feb 23 17:49:10 ip-10-0-136-68 rpm-ostree[83055]: Wrote commit: b679e409fd4d0b9b8b4cecc99237ef45283828a8e0fbf38395fc6bc6e27ea0ab; New objects: meta:5 content:1 totaling 48.6 MB)
Feb 23 17:49:11 ip-10-0-136-68 rpm-ostree[83055]: note: Deploying commit b679e409fd4d0b9b8b4cecc99237ef45283828a8e0fbf38395fc6bc6e27ea0ab which contains content in /var/lib that will be ignored.
Feb 23 17:49:12 ip-10-0-136-68 rpm-ostree[83055]: failed to query container image base metadata: Missing base image ref
Feb 23 17:49:12 ip-10-0-136-68 rpm-ostree[83055]: failed to query container image base metadata: Missing base image ref
Feb 23 17:49:12 ip-10-0-136-68 rpm-ostree[83055]: Created new deployment /ostree/deploy/rhcos/deploy/b679e409fd4d0b9b8b4cecc99237ef45283828a8e0fbf38395fc6bc6e27ea0ab.0
Feb 23 17:49:12 ip-10-0-136-68 rpm-ostree[83055]: warning: Found SQLITE rpmdb.sqlite database while attempting bdb backend: using sqlite backend.
Feb 23 17:49:12 ip-10-0-136-68 rpm-ostree[83055]: Pruned container image layers: 0
Feb 23 17:49:13 ip-10-0-136-68 rpm-ostree[83055]: Txn KernelArgs on /org/projectatomic/rpmostree1/rhcos successful
Feb 23 17:49:13 ip-10-0-136-68 rpm-ostree[83055]: failed to query container image base metadata: Missing base image ref
Feb 23 17:49:13 ip-10-0-136-68 rpm-ostree[83055]: failed to query container image base metadata: Missing base image ref
Feb 23 17:49:13 ip-10-0-136-68 rpm-ostree[83055]: Unlocked sysroot
Feb 23 17:49:13 ip-10-0-136-68 rpm-ostree[83055]: Process [pid: 115881 uid: 0 unit: crio-42b96157d17f74f25f71d7357da12ff08910e955866411f5c7c62aebd3027126.scope] disconnected from transaction progress
Feb 23 17:49:13 ip-10-0-136-68 rpm-ostree[83055]: failed to query container image base metadata: Missing base image ref
Feb 23 17:49:13 ip-10-0-136-68 rpm-ostree[83055]: client(id:machine-config-operator dbus:1.641 unit:crio-42b96157d17f74f25f71d7357da12ff08910e955866411f5c7c62aebd3027126.scope uid:0) vanished; remaining=0
Feb 23 17:49:13 ip-10-0-136-68 rpm-ostree[83055]: In idle state; will auto-exit in 62 seconds
Feb 23 17:49:13 ip-10-0-136-68 root[148048]: machine-config-daemon[79932]: Initiating switch from kernel realtime to realtime
Feb 23 17:49:13 ip-10-0-136-68 root[148049]: machine-config-daemon[79932]: Updating rt-kernel packages on host: ["update"]
Feb 23 17:49:13 ip-10-0-136-68 rpm-ostree[83055]: client(id:machine-config-operator dbus:1.642 unit:crio-42b96157d17f74f25f71d7357da12ff08910e955866411f5c7c62aebd3027126.scope uid:0) added; new total=1
Feb 23 17:49:13 ip-10-0-136-68 rpm-ostree[83055]: failed to query container image base metadata: Missing base image ref
Feb 23 17:49:13 ip-10-0-136-68 rpm-ostree[83055]: Locked sysroot
Feb 23 17:49:13 ip-10-0-136-68 rpm-ostree[83055]: Initiated txn Upgrade for client(id:machine-config-operator dbus:1.642 unit:crio-42b96157d17f74f25f71d7357da12ff08910e955866411f5c7c62aebd3027126.scope uid:0): /org/projectatomic/rpmostree1/rhcos
Feb 23 17:49:13 ip-10-0-136-68 rpm-ostree[83055]: Process [pid: 148050 uid: 0 unit: crio-42b96157d17f74f25f71d7357da12ff08910e955866411f5c7c62aebd3027126.scope] connected to transaction progress
Feb 23 17:49:13 ip-10-0-136-68 rpm-ostree[83055]: Fetching ostree-unverified-image:docker://registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:171f6809871ab783c4c8996143a0466892bce38153750a4d2d61125d943cdff5
Feb 23 17:49:14 ip-10-0-136-68 rpm-ostree[83055]: warning: Found SQLITE rpmdb.sqlite database while attempting bdb backend: using sqlite backend.
Feb 23 17:49:15 ip-10-0-136-68 rpm-ostree[148083]: warning: Found SQLITE rpmdb.sqlite database while attempting bdb backend: using sqlite backend.
Feb 23 17:49:16 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00656|connmgr|INFO|br-int<->unix#1484: 105 flow_mods in the 31 s starting 53 s ago (33 adds, 24 deletes, 48 modifications)
Feb 23 17:49:17 ip-10-0-136-68 rpm-ostree[83055]: Preparing pkg txn; enabled repos: ['coreos-extensions'] solvables: 58
Feb 23 17:49:17 ip-10-0-136-68 rpm-ostree[83055]: Txn Upgrade on /org/projectatomic/rpmostree1/rhcos successful
Feb 23 17:49:17 ip-10-0-136-68 rpm-ostree[83055]: Unlocked sysroot
Feb 23 17:49:17 ip-10-0-136-68 rpm-ostree[83055]: Process [pid: 148050 uid: 0 unit: crio-42b96157d17f74f25f71d7357da12ff08910e955866411f5c7c62aebd3027126.scope] disconnected from transaction progress
Feb 23 17:49:17 ip-10-0-136-68 rpm-ostree[83055]: failed to query container image base metadata: Missing base image ref
Feb 23 17:49:17 ip-10-0-136-68 rpm-ostree[83055]: client(id:machine-config-operator dbus:1.642 unit:crio-42b96157d17f74f25f71d7357da12ff08910e955866411f5c7c62aebd3027126.scope uid:0) vanished; remaining=0
Feb 23 17:49:17 ip-10-0-136-68 rpm-ostree[83055]: In idle state; will auto-exit in 64 seconds
Feb 23 17:49:17 ip-10-0-136-68 logger[148100]: rendered-worker-1e56871b9de773bcdc692bfcd148a34a
Feb 23 17:49:17 ip-10-0-136-68 root[148102]: machine-config-daemon[79932]: Rebooting node
Feb 23 17:49:17 ip-10-0-136-68 root[148103]: machine-config-daemon[79932]: initiating reboot: Node will reboot into config rendered-worker-1e56871b9de773bcdc692bfcd148a34a
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Started machine-config-daemon: Node will reboot into config rendered-worker-1e56871b9de773bcdc692bfcd148a34a.
Feb 23 17:49:17 ip-10-0-136-68 root[148106]: machine-config-daemon[79932]: reboot successful
Feb 23 17:49:17 ip-10-0-136-68 systemd-logind[1014]: System is rebooting.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: machine-config-daemon-reboot.service: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopped machine-config-daemon: Node will reboot into config rendered-worker-1e56871b9de773bcdc692bfcd148a34a.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: machine-config-daemon-reboot.service: Consumed 8ms CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: coreos-update-ca-trust.service: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopped Run update-ca-trust.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: coreos-update-ca-trust.service: Consumed 0 CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopping libcontainer container 80f0eadf9b6d882a2658a5d3eef859c17a99700d3bd79623a4b7b2c758f6d075.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopping libcontainer container 9954ce136580565944733f8e22dcf8c686edec96413c6c4c5c8c32521ab25537.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopping libcontainer container e0c9241ec89fc1a11b24fa1ff40655527985b880f5cd8342fbc45b95c532516d.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopping libcontainer container cbe2fd6f73c4cff587d5e0418a0dc9544393d0a17e1ce1d722e172a858d79237.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopping libcontainer container 402127c227490abbf7a0bbba8e9de5f35184e1dc73a7f98d49399a24df2e655a.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopping libcontainer container f1bf5d592147da92da4879a3130c4232bf4edaaa3fc399ec80102a403045abd4.
Feb 23 17:49:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:49:17.302158 2112 plugin_watcher.go:215] "Removing socket path from desired state cache" path="/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock"
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopping libcontainer container 435d9e7433fbd6413891a09493dbbe7f71f622194bb9590523623dc94c45a072.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopping libcontainer container a64d1297fb3fce5c4667c2505d997293d31ba7182298bad8c7ecdca959d438ce.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopping libcontainer container 75f18bca37f8e743da6c3022284c8822bdd2d851b5c090bd184e132e62bfbb06.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopped target Timers.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: systemd-tmpfiles-clean.timer: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopped Daily Cleanup of Temporary Directories.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopping NFS status monitor for NFSv2/3 locking....
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopping libcontainer container 131dab6756898b0693be374d713456c96e51c0780c357e88c0d78bb07ba2e394.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopping rpm-ostree System Management Daemon...
Feb 23 17:49:17 ip-10-0-136-68 conmon[61997]: conmon 131dab6756898b0693be : container 62018 exited with status 143
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopping libcontainer container 7a5b3c6af2511fc3bd1c0e75f8f2e6a03ff01d4057f07085043b19c30d740de4.
Feb 23 17:49:17 ip-10-0-136-68 conmon[61997]: conmon 131dab6756898b0693be : stdio_input read failed Resource temporarily unavailable
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopping libcontainer container 42b96157d17f74f25f71d7357da12ff08910e955866411f5c7c62aebd3027126.
Feb 23 17:49:17 ip-10-0-136-68 conmon[61997]: conmon 131dab6756898b0693be : stdio_input read failed Resource temporarily unavailable
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Removed slice system-sshd\x2dkeygen.slice.
Feb 23 17:49:17 ip-10-0-136-68 conmon[65260]: conmon 402127c227490abbf7a0 : container 65272 exited with status 2
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: system-sshd\x2dkeygen.slice: Consumed 0 CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: unbound-anchor.timer: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopped daily update of the root trust anchor for DNSSEC.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopping libcontainer container ae697e1045bb95accb4fda4c43a3bb2271dd3e84715c0fa399e0749a41344609.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopped target Graphical Interface.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopped target RPC Port Mapper.
Feb 23 17:49:17 ip-10-0-136-68 conmon[70209]: conmon a64d1297fb3fce5c4667 : container 70222 exited with status 2
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopping Restore /run/initramfs on shutdown...
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: afterburn.service: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopped Afterburn (Metadata).
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: afterburn.service: Consumed 0 CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: logrotate.timer: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 conmon[57693]: conmon 75f18bca37f8e743da6c : container 57750 exited with status 2
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopped Daily rotation of log files.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopping libcontainer container 50d3d89aba11348010102fd0ae0c1d0fb4a40d62816189af3b8f9448589c11bb.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopping libcontainer container b375a0312757dbb9b6138c20865d2ed306c19be2d9ecf842e7761f0c20fe5557.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopping libcontainer container 9cea77039ffc8d9c43645470962f53d22ee1af2d02223560bc1ed876ee04e300.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopping libcontainer container fd68cf1fa283ceb74c541d1c987b44f858cdc0db2e4c735d347809c327dd6732.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopping libcontainer container fcb8b471274c0bcd2b4be8858b09ed1cba7b7ca1af8d09617c72271aba2604fb.
Feb 23 17:49:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:49:17.337447 2112 plugin_watcher.go:215] "Removing socket path from desired state cache" path="/var/lib/kubelet/plugins_registry/csi.sharedresource.openshift.io-reg.sock"
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopping libcontainer container 9dbee011c01639cee1b040ab978b0011e0b77e60af6b347fd119d607386548a3.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopping libcontainer container 286d73ee6434ff40a3d7c8e8127cd932c84a30976f828a1f30b6cc61ca7cb77f.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopping libcontainer container 0f9728bfc2501ec855679fbc064366a5231068ef656020ff8bd463f77e83a3eb.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopping libcontainer container edecf6e098beccc213a477b14d47a5a2bfd84277dbfd83838f39657dbd428f18.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: lvm2-lvmpolld.socket: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Closed LVM2 poll daemon socket.
Feb 23 17:49:17 ip-10-0-136-68 conmon[60169]: conmon 7a5b3c6af2511fc3bd1c : container 60182 exited with status 143
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: lvm2-lvmpolld.socket: Consumed 0 CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopping libcontainer container 007741281cc8da8036b590cfb863b8015dae086fa900c8cb587da3648dc8780c.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopping libcontainer container a26ea61f8758607a5749086fdba4a993f907c5f0778f5ec24dfbd2e80679f179.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopped target Remote Encrypted Volumes.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopping libcontainer container 63b57363566114a13a277fe7e02f0b73760020e70f03d394ec8b0229ba369422.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopped target Multi-User System.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: console-login-helper-messages-gensnippet-ssh-keys.service: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 chronyd[912]: chronyd exiting
Feb 23 17:49:17 ip-10-0-136-68 kubenswrapper[2112]: I0223 17:49:17.362214 2112 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopped Generate SSH keys snippet for display via console-login-helper-messages.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: console-login-helper-messages-gensnippet-ssh-keys.service: Consumed 0 CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopped target Synchronize afterburn-sshkeys@.service template instances.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopping NTP client/server...
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopped target Login Prompts.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopping Serial Getty on ttyS0...
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopping Getty on tty1...
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopping Login Service...
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopping OpenSSH server daemon...
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopping Kubernetes Kubelet...
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopping irqbalance daemon...
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopping Authorization Manager...
Feb 23 17:49:17 ip-10-0-136-68 sshd[1151]: Received signal 15; terminating.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: irqbalance.service: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopped irqbalance daemon.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: irqbalance.service: Consumed 105ms CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: sshd.service: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopped OpenSSH server daemon.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: sshd.service: Consumed 10ms CPU time
Feb 23 17:49:17 ip-10-0-136-68 conmon[71791]: conmon b375a0312757dbb9b613 : container 71818 exited with status 143
Feb 23 17:49:17 ip-10-0-136-68 conmon[67800]: conmon 286d73ee6434ff40a3d7 : container 67812 exited with status 143
Feb 23 17:49:17 ip-10-0-136-68 conmon[65481]: conmon edecf6e098beccc213a4 : container 65493 exited with status 2
Feb 23 17:49:17 ip-10-0-136-68 conmon[71389]: conmon a26ea61f8758607a5749 : container 71401 exited with status 143
Feb 23 17:49:17 ip-10-0-136-68 conmon[79968]: conmon 9cea77039ffc8d9c4364 : container 79989 exited with status 143
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: getty@tty1.service: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopped Getty on tty1.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: getty@tty1.service: Consumed 8.877s CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: systemd-logind.service: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopped Login Service.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: systemd-logind.service: Consumed 263ms CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: serial-getty@ttyS0.service: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopped Serial Getty on ttyS0.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: serial-getty@ttyS0.service: Consumed 1.362s CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: rpc-statd.service: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopped NFS status monitor for NFSv2/3 locking..
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: rpc-statd.service: Consumed 27ms CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: kubelet.service: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopped Kubernetes Kubelet.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: kubelet.service: Consumed 4min 13.138s CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: polkit.service: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopped Authorization Manager.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: polkit.service: Consumed 182ms CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: rpm-ostreed.service: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopped rpm-ostree System Management Daemon.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: rpm-ostreed.service: Consumed 9min 10.902s CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: dracut-shutdown.service: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopped Restore /run/initramfs on shutdown.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: dracut-shutdown.service: Consumed 2ms CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-9954ce136580565944733f8e22dcf8c686edec96413c6c4c5c8c32521ab25537.scope: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopped libcontainer container 9954ce136580565944733f8e22dcf8c686edec96413c6c4c5c8c32521ab25537.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-9954ce136580565944733f8e22dcf8c686edec96413c6c4c5c8c32521ab25537.scope: Consumed 21.585s CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-80f0eadf9b6d882a2658a5d3eef859c17a99700d3bd79623a4b7b2c758f6d075.scope: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopped libcontainer container 80f0eadf9b6d882a2658a5d3eef859c17a99700d3bd79623a4b7b2c758f6d075.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-80f0eadf9b6d882a2658a5d3eef859c17a99700d3bd79623a4b7b2c758f6d075.scope: Consumed 16.199s CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-conmon-9954ce136580565944733f8e22dcf8c686edec96413c6c4c5c8c32521ab25537.scope: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-conmon-9954ce136580565944733f8e22dcf8c686edec96413c6c4c5c8c32521ab25537.scope: Consumed 38ms CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-402127c227490abbf7a0bbba8e9de5f35184e1dc73a7f98d49399a24df2e655a.scope: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopped libcontainer container 402127c227490abbf7a0bbba8e9de5f35184e1dc73a7f98d49399a24df2e655a.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-402127c227490abbf7a0bbba8e9de5f35184e1dc73a7f98d49399a24df2e655a.scope: Consumed 227ms CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-conmon-80f0eadf9b6d882a2658a5d3eef859c17a99700d3bd79623a4b7b2c758f6d075.scope: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-conmon-80f0eadf9b6d882a2658a5d3eef859c17a99700d3bd79623a4b7b2c758f6d075.scope: Consumed 25ms CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-e0c9241ec89fc1a11b24fa1ff40655527985b880f5cd8342fbc45b95c532516d.scope: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopped libcontainer container e0c9241ec89fc1a11b24fa1ff40655527985b880f5cd8342fbc45b95c532516d.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-e0c9241ec89fc1a11b24fa1ff40655527985b880f5cd8342fbc45b95c532516d.scope: Consumed 792ms CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-conmon-402127c227490abbf7a0bbba8e9de5f35184e1dc73a7f98d49399a24df2e655a.scope: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-conmon-402127c227490abbf7a0bbba8e9de5f35184e1dc73a7f98d49399a24df2e655a.scope: Consumed 22ms CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-42b96157d17f74f25f71d7357da12ff08910e955866411f5c7c62aebd3027126.scope: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopped libcontainer container 42b96157d17f74f25f71d7357da12ff08910e955866411f5c7c62aebd3027126.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-42b96157d17f74f25f71d7357da12ff08910e955866411f5c7c62aebd3027126.scope: Consumed 6.429s CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-7a5b3c6af2511fc3bd1c0e75f8f2e6a03ff01d4057f07085043b19c30d740de4.scope: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopped libcontainer container 7a5b3c6af2511fc3bd1c0e75f8f2e6a03ff01d4057f07085043b19c30d740de4.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-7a5b3c6af2511fc3bd1c0e75f8f2e6a03ff01d4057f07085043b19c30d740de4.scope: Consumed 3.933s CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-conmon-a64d1297fb3fce5c4667c2505d997293d31ba7182298bad8c7ecdca959d438ce.scope: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-conmon-a64d1297fb3fce5c4667c2505d997293d31ba7182298bad8c7ecdca959d438ce.scope: Consumed 24ms CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-conmon-7a5b3c6af2511fc3bd1c0e75f8f2e6a03ff01d4057f07085043b19c30d740de4.scope: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-conmon-7a5b3c6af2511fc3bd1c0e75f8f2e6a03ff01d4057f07085043b19c30d740de4.scope: Consumed 24ms CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-conmon-131dab6756898b0693be374d713456c96e51c0780c357e88c0d78bb07ba2e394.scope: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-conmon-131dab6756898b0693be374d713456c96e51c0780c357e88c0d78bb07ba2e394.scope: Consumed 28ms CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-conmon-75f18bca37f8e743da6c3022284c8822bdd2d851b5c090bd184e132e62bfbb06.scope: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-conmon-75f18bca37f8e743da6c3022284c8822bdd2d851b5c090bd184e132e62bfbb06.scope: Consumed 28ms CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-conmon-50d3d89aba11348010102fd0ae0c1d0fb4a40d62816189af3b8f9448589c11bb.scope: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-conmon-50d3d89aba11348010102fd0ae0c1d0fb4a40d62816189af3b8f9448589c11bb.scope: Consumed 25ms CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-75f18bca37f8e743da6c3022284c8822bdd2d851b5c090bd184e132e62bfbb06.scope: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopped libcontainer container 75f18bca37f8e743da6c3022284c8822bdd2d851b5c090bd184e132e62bfbb06.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-75f18bca37f8e743da6c3022284c8822bdd2d851b5c090bd184e132e62bfbb06.scope: Consumed 205ms CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-conmon-42b96157d17f74f25f71d7357da12ff08910e955866411f5c7c62aebd3027126.scope: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-conmon-42b96157d17f74f25f71d7357da12ff08910e955866411f5c7c62aebd3027126.scope: Consumed 39ms CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-50d3d89aba11348010102fd0ae0c1d0fb4a40d62816189af3b8f9448589c11bb.scope: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopped libcontainer container 50d3d89aba11348010102fd0ae0c1d0fb4a40d62816189af3b8f9448589c11bb.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-50d3d89aba11348010102fd0ae0c1d0fb4a40d62816189af3b8f9448589c11bb.scope: Consumed 62ms CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-conmon-f1bf5d592147da92da4879a3130c4232bf4edaaa3fc399ec80102a403045abd4.scope: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-conmon-f1bf5d592147da92da4879a3130c4232bf4edaaa3fc399ec80102a403045abd4.scope: Consumed 23ms CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-f1bf5d592147da92da4879a3130c4232bf4edaaa3fc399ec80102a403045abd4.scope: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopped libcontainer container f1bf5d592147da92da4879a3130c4232bf4edaaa3fc399ec80102a403045abd4.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-f1bf5d592147da92da4879a3130c4232bf4edaaa3fc399ec80102a403045abd4.scope: Consumed 49ms CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-a64d1297fb3fce5c4667c2505d997293d31ba7182298bad8c7ecdca959d438ce.scope: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopped libcontainer container a64d1297fb3fce5c4667c2505d997293d31ba7182298bad8c7ecdca959d438ce.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-a64d1297fb3fce5c4667c2505d997293d31ba7182298bad8c7ecdca959d438ce.scope: Consumed 108ms CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-conmon-e0c9241ec89fc1a11b24fa1ff40655527985b880f5cd8342fbc45b95c532516d.scope: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-conmon-e0c9241ec89fc1a11b24fa1ff40655527985b880f5cd8342fbc45b95c532516d.scope: Consumed 28ms CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-286d73ee6434ff40a3d7c8e8127cd932c84a30976f828a1f30b6cc61ca7cb77f.scope: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopped libcontainer container 286d73ee6434ff40a3d7c8e8127cd932c84a30976f828a1f30b6cc61ca7cb77f.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-286d73ee6434ff40a3d7c8e8127cd932c84a30976f828a1f30b6cc61ca7cb77f.scope: Consumed 22ms CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-b375a0312757dbb9b6138c20865d2ed306c19be2d9ecf842e7761f0c20fe5557.scope: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopped libcontainer container b375a0312757dbb9b6138c20865d2ed306c19be2d9ecf842e7761f0c20fe5557.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-b375a0312757dbb9b6138c20865d2ed306c19be2d9ecf842e7761f0c20fe5557.scope: Consumed 1.482s CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-0f9728bfc2501ec855679fbc064366a5231068ef656020ff8bd463f77e83a3eb.scope: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopped libcontainer container 0f9728bfc2501ec855679fbc064366a5231068ef656020ff8bd463f77e83a3eb.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-0f9728bfc2501ec855679fbc064366a5231068ef656020ff8bd463f77e83a3eb.scope: Consumed 617ms CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-conmon-edecf6e098beccc213a477b14d47a5a2bfd84277dbfd83838f39657dbd428f18.scope: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-conmon-edecf6e098beccc213a477b14d47a5a2bfd84277dbfd83838f39657dbd428f18.scope: Consumed 25ms CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: chronyd.service: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopped NTP client/server.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: chronyd.service: Consumed 135ms CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-conmon-b375a0312757dbb9b6138c20865d2ed306c19be2d9ecf842e7761f0c20fe5557.scope: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-conmon-b375a0312757dbb9b6138c20865d2ed306c19be2d9ecf842e7761f0c20fe5557.scope: Consumed 27ms CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-007741281cc8da8036b590cfb863b8015dae086fa900c8cb587da3648dc8780c.scope: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopped libcontainer container 007741281cc8da8036b590cfb863b8015dae086fa900c8cb587da3648dc8780c.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-007741281cc8da8036b590cfb863b8015dae086fa900c8cb587da3648dc8780c.scope: Consumed 236ms CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-conmon-9cea77039ffc8d9c43645470962f53d22ee1af2d02223560bc1ed876ee04e300.scope: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-conmon-9cea77039ffc8d9c43645470962f53d22ee1af2d02223560bc1ed876ee04e300.scope: Consumed 26ms CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-conmon-286d73ee6434ff40a3d7c8e8127cd932c84a30976f828a1f30b6cc61ca7cb77f.scope: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-conmon-286d73ee6434ff40a3d7c8e8127cd932c84a30976f828a1f30b6cc61ca7cb77f.scope: Consumed 25ms CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-9cea77039ffc8d9c43645470962f53d22ee1af2d02223560bc1ed876ee04e300.scope: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopped libcontainer container 9cea77039ffc8d9c43645470962f53d22ee1af2d02223560bc1ed876ee04e300.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-9cea77039ffc8d9c43645470962f53d22ee1af2d02223560bc1ed876ee04e300.scope: Consumed 268ms CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-conmon-0f9728bfc2501ec855679fbc064366a5231068ef656020ff8bd463f77e83a3eb.scope: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-conmon-0f9728bfc2501ec855679fbc064366a5231068ef656020ff8bd463f77e83a3eb.scope: Consumed 31ms CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-conmon-a26ea61f8758607a5749086fdba4a993f907c5f0778f5ec24dfbd2e80679f179.scope: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-conmon-a26ea61f8758607a5749086fdba4a993f907c5f0778f5ec24dfbd2e80679f179.scope: Consumed 24ms CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-a26ea61f8758607a5749086fdba4a993f907c5f0778f5ec24dfbd2e80679f179.scope: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopped libcontainer container a26ea61f8758607a5749086fdba4a993f907c5f0778f5ec24dfbd2e80679f179.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-a26ea61f8758607a5749086fdba4a993f907c5f0778f5ec24dfbd2e80679f179.scope: Consumed 373ms CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-conmon-007741281cc8da8036b590cfb863b8015dae086fa900c8cb587da3648dc8780c.scope: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-conmon-007741281cc8da8036b590cfb863b8015dae086fa900c8cb587da3648dc8780c.scope: Consumed 23ms CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-edecf6e098beccc213a477b14d47a5a2bfd84277dbfd83838f39657dbd428f18.scope: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopped libcontainer container edecf6e098beccc213a477b14d47a5a2bfd84277dbfd83838f39657dbd428f18.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-edecf6e098beccc213a477b14d47a5a2bfd84277dbfd83838f39657dbd428f18.scope: Consumed 273ms CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-131dab6756898b0693be374d713456c96e51c0780c357e88c0d78bb07ba2e394.scope: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopped libcontainer container 131dab6756898b0693be374d713456c96e51c0780c357e88c0d78bb07ba2e394.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: crio-131dab6756898b0693be374d713456c96e51c0780c357e88c0d78bb07ba2e394.scope: Consumed 2.190s CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopped target Host and Network Name Lookups.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Removed slice system-serial\x2dgetty.slice.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: system-serial\x2dgetty.slice: Consumed 1.362s CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Removed slice system-getty.slice.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: system-getty.slice: Consumed 8.877s CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopping Permit User Sessions...
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopped target sshd-keygen.target.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: systemd-user-sessions.service: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopped Permit User Sessions.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: systemd-user-sessions.service: Consumed 8ms CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopped target User and Group Name Lookups.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopped target Remote File Systems.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: coreos-ignition-write-issues.service: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopped Create Ignition Status Issue Files.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: coreos-ignition-write-issues.service: Consumed 0 CPU time
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopping System Security Services Daemon...
Feb 23 17:49:17 ip-10-0-136-68 sssd_nss[997]: Shutting down (status = 0)
Feb 23 17:49:17 ip-10-0-136-68 sssd_be[976]: Shutting down (status = 0)
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: sssd.service: Succeeded.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: Stopped System Security Services Daemon.
Feb 23 17:49:17 ip-10-0-136-68 systemd[1]: sssd.service: Consumed 309ms CPU time
Feb 23 17:49:18 ip-10-0-136-68 systemd[1]: crio-conmon-ae697e1045bb95accb4fda4c43a3bb2271dd3e84715c0fa399e0749a41344609.scope: Succeeded.
Feb 23 17:49:18 ip-10-0-136-68 systemd[1]: crio-conmon-ae697e1045bb95accb4fda4c43a3bb2271dd3e84715c0fa399e0749a41344609.scope: Consumed 21ms CPU time
Feb 23 17:49:18 ip-10-0-136-68 systemd[1]: crio-ae697e1045bb95accb4fda4c43a3bb2271dd3e84715c0fa399e0749a41344609.scope: Succeeded.
Feb 23 17:49:18 ip-10-0-136-68 systemd[1]: Stopped libcontainer container ae697e1045bb95accb4fda4c43a3bb2271dd3e84715c0fa399e0749a41344609.
Feb 23 17:49:18 ip-10-0-136-68 systemd[1]: crio-ae697e1045bb95accb4fda4c43a3bb2271dd3e84715c0fa399e0749a41344609.scope: Consumed 268ms CPU time
Feb 23 17:49:18 ip-10-0-136-68 systemd[1]: crio-conmon-435d9e7433fbd6413891a09493dbbe7f71f622194bb9590523623dc94c45a072.scope: Succeeded.
Feb 23 17:49:18 ip-10-0-136-68 systemd[1]: crio-conmon-435d9e7433fbd6413891a09493dbbe7f71f622194bb9590523623dc94c45a072.scope: Consumed 23ms CPU time
Feb 23 17:49:18 ip-10-0-136-68 systemd[1]: crio-fcb8b471274c0bcd2b4be8858b09ed1cba7b7ca1af8d09617c72271aba2604fb.scope: Succeeded.
Feb 23 17:49:18 ip-10-0-136-68 systemd[1]: Stopped libcontainer container fcb8b471274c0bcd2b4be8858b09ed1cba7b7ca1af8d09617c72271aba2604fb.
Feb 23 17:49:18 ip-10-0-136-68 systemd[1]: crio-fcb8b471274c0bcd2b4be8858b09ed1cba7b7ca1af8d09617c72271aba2604fb.scope: Consumed 476ms CPU time
Feb 23 17:49:18 ip-10-0-136-68 systemd[1]: crio-435d9e7433fbd6413891a09493dbbe7f71f622194bb9590523623dc94c45a072.scope: Succeeded.
Feb 23 17:49:18 ip-10-0-136-68 systemd[1]: Stopped libcontainer container 435d9e7433fbd6413891a09493dbbe7f71f622194bb9590523623dc94c45a072.
Feb 23 17:49:18 ip-10-0-136-68 systemd[1]: crio-435d9e7433fbd6413891a09493dbbe7f71f622194bb9590523623dc94c45a072.scope: Consumed 256ms CPU time
Feb 23 17:49:18 ip-10-0-136-68 systemd[1]: crio-conmon-fcb8b471274c0bcd2b4be8858b09ed1cba7b7ca1af8d09617c72271aba2604fb.scope: Succeeded.
Feb 23 17:49:18 ip-10-0-136-68 systemd[1]: crio-conmon-fcb8b471274c0bcd2b4be8858b09ed1cba7b7ca1af8d09617c72271aba2604fb.scope: Consumed 24ms CPU time
Feb 23 17:49:18 ip-10-0-136-68 systemd[1]: crio-conmon-63b57363566114a13a277fe7e02f0b73760020e70f03d394ec8b0229ba369422.scope: Succeeded.
Feb 23 17:49:18 ip-10-0-136-68 systemd[1]: crio-conmon-63b57363566114a13a277fe7e02f0b73760020e70f03d394ec8b0229ba369422.scope: Consumed 26ms CPU time
Feb 23 17:49:18 ip-10-0-136-68 systemd[1]: crio-63b57363566114a13a277fe7e02f0b73760020e70f03d394ec8b0229ba369422.scope: Succeeded.
Feb 23 17:49:18 ip-10-0-136-68 systemd[1]: Stopped libcontainer container 63b57363566114a13a277fe7e02f0b73760020e70f03d394ec8b0229ba369422.
Feb 23 17:49:18 ip-10-0-136-68 systemd[1]: crio-63b57363566114a13a277fe7e02f0b73760020e70f03d394ec8b0229ba369422.scope: Consumed 191ms CPU time
Feb 23 17:49:18 ip-10-0-136-68 systemd[1]: crio-conmon-fd68cf1fa283ceb74c541d1c987b44f858cdc0db2e4c735d347809c327dd6732.scope: Succeeded.
Feb 23 17:49:18 ip-10-0-136-68 systemd[1]: crio-conmon-fd68cf1fa283ceb74c541d1c987b44f858cdc0db2e4c735d347809c327dd6732.scope: Consumed 23ms CPU time
Feb 23 17:49:18 ip-10-0-136-68 systemd[1]: crio-fd68cf1fa283ceb74c541d1c987b44f858cdc0db2e4c735d347809c327dd6732.scope: Succeeded.
Feb 23 17:49:18 ip-10-0-136-68 systemd[1]: Stopped libcontainer container fd68cf1fa283ceb74c541d1c987b44f858cdc0db2e4c735d347809c327dd6732.
Feb 23 17:49:18 ip-10-0-136-68 systemd[1]: crio-fd68cf1fa283ceb74c541d1c987b44f858cdc0db2e4c735d347809c327dd6732.scope: Consumed 819ms CPU time
Feb 23 17:49:37 ip-10-0-136-68 systemd[1]: crio-conmon-9dbee011c01639cee1b040ab978b0011e0b77e60af6b347fd119d607386548a3.scope: Succeeded.
Feb 23 17:49:37 ip-10-0-136-68 systemd[1]: crio-conmon-9dbee011c01639cee1b040ab978b0011e0b77e60af6b347fd119d607386548a3.scope: Consumed 27ms CPU time
Feb 23 17:49:37 ip-10-0-136-68 systemd[1]: crio-9dbee011c01639cee1b040ab978b0011e0b77e60af6b347fd119d607386548a3.scope: Succeeded.
Feb 23 17:49:37 ip-10-0-136-68 systemd[1]: Stopped libcontainer container 9dbee011c01639cee1b040ab978b0011e0b77e60af6b347fd119d607386548a3.
Feb 23 17:49:37 ip-10-0-136-68 systemd[1]: crio-9dbee011c01639cee1b040ab978b0011e0b77e60af6b347fd119d607386548a3.scope: Consumed 1.415s CPU time
Feb 23 17:49:47 ip-10-0-136-68 systemd[1]: crio-cbe2fd6f73c4cff587d5e0418a0dc9544393d0a17e1ce1d722e172a858d79237.scope: Stopping timed out. Killing.
Feb 23 17:49:47 ip-10-0-136-68 systemd[1]: crio-cbe2fd6f73c4cff587d5e0418a0dc9544393d0a17e1ce1d722e172a858d79237.scope: Killing process 61826 (csi-driver-shar) with signal SIGKILL.
Feb 23 17:49:47 ip-10-0-136-68 conmon[61814]: conmon cbe2fd6f73c4cff587d5 : container 61826 exited with status 137
Feb 23 17:49:47 ip-10-0-136-68 systemd[1]: crio-conmon-cbe2fd6f73c4cff587d5e0418a0dc9544393d0a17e1ce1d722e172a858d79237.scope: Succeeded.
Feb 23 17:49:47 ip-10-0-136-68 systemd[1]: crio-conmon-cbe2fd6f73c4cff587d5e0418a0dc9544393d0a17e1ce1d722e172a858d79237.scope: Consumed 27ms CPU time
Feb 23 17:49:47 ip-10-0-136-68 systemd[1]: crio-cbe2fd6f73c4cff587d5e0418a0dc9544393d0a17e1ce1d722e172a858d79237.scope: Failed with result 'timeout'.
Feb 23 17:49:47 ip-10-0-136-68 systemd[1]: Stopped libcontainer container cbe2fd6f73c4cff587d5e0418a0dc9544393d0a17e1ce1d722e172a858d79237.
Feb 23 17:49:47 ip-10-0-136-68 systemd[1]: crio-cbe2fd6f73c4cff587d5e0418a0dc9544393d0a17e1ce1d722e172a858d79237.scope: Consumed 497ms CPU time
Feb 23 17:49:47 ip-10-0-136-68 systemd[1]: Stopping Container Runtime Interface for OCI (CRI-O)...
Feb 23 17:49:47 ip-10-0-136-68 crio[2062]: time="2023-02-23 17:49:47.410266910Z" level=error msg="Failed to update container state for cbe2fd6f73c4cff587d5e0418a0dc9544393d0a17e1ce1d722e172a858d79237: `/usr/bin/runc --root /run/runc --systemd-cgroup state cbe2fd6f73c4cff587d5e0418a0dc9544393d0a17e1ce1d722e172a858d79237` failed: : signal: terminated"
Feb 23 17:49:47 ip-10-0-136-68 systemd[1]: crio.service: Succeeded.
Feb 23 17:49:47 ip-10-0-136-68 systemd[1]: Stopped Container Runtime Interface for OCI (CRI-O).
Feb 23 17:49:47 ip-10-0-136-68 systemd[1]: crio.service: Consumed 3min 35.748s CPU time
Feb 23 17:49:47 ip-10-0-136-68 systemd[1]: kubelet-auto-node-size.service: Succeeded.
Feb 23 17:49:47 ip-10-0-136-68 systemd[1]: Stopped Dynamically sets the system reserved for the kubelet.
Feb 23 17:49:47 ip-10-0-136-68 systemd[1]: kubelet-auto-node-size.service: Consumed 0 CPU time
Feb 23 17:49:47 ip-10-0-136-68 systemd[1]: Stopped target Network is Online.
Feb 23 17:49:47 ip-10-0-136-68 systemd[1]: Stopped target Network.
Feb 23 17:49:47 ip-10-0-136-68 systemd[1]: node-valid-hostname.service: Succeeded.
Feb 23 17:49:47 ip-10-0-136-68 systemd[1]: Stopped Wait for a non-localhost hostname.
Feb 23 17:49:47 ip-10-0-136-68 systemd[1]: node-valid-hostname.service: Consumed 0 CPU time
Feb 23 17:49:47 ip-10-0-136-68 systemd[1]: NetworkManager-wait-online.service: Succeeded.
Feb 23 17:49:47 ip-10-0-136-68 systemd[1]: Stopped Network Manager Wait Online.
Feb 23 17:49:47 ip-10-0-136-68 systemd[1]: NetworkManager-wait-online.service: Consumed 0 CPU time
Feb 23 17:49:47 ip-10-0-136-68 systemd[1]: Stopping Network Manager...
Feb 23 17:49:47 ip-10-0-136-68 NetworkManager[1147]: [1677174587.4924] caught SIGTERM, shutting down normally.
Feb 23 17:49:47 ip-10-0-136-68 NetworkManager[1147]: [1677174587.4938] device (ens5): releasing ovs interface ens5
Feb 23 17:49:47 ip-10-0-136-68 NetworkManager[1147]: [1677174587.4957] device (br-ex): state change: activated -> deactivating (reason 'unmanaged', sys-iface-state: 'managed')
Feb 23 17:49:47 ip-10-0-136-68 dbus-daemon[903]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service' requested by ':1.6' (uid=0 pid=1147 comm="/usr/sbin/NetworkManager --no-daemon " label="system_u:system_r:NetworkManager_t:s0")
Feb 23 17:49:47 ip-10-0-136-68 dbus-daemon[903]: [system] Activation via systemd failed for unit 'dbus-org.freedesktop.nm-dispatcher.service': Refusing activation, D-Bus is shutting down.
Feb 23 17:49:47 ip-10-0-136-68 NetworkManager[1147]: [1677174587.4965] dispatcher: (27) failed: Refusing activation, D-Bus is shutting down.
Feb 23 17:49:47 ip-10-0-136-68 NetworkManager[1147]: [1677174587.4969] device (br-ex): state change: deactivating -> unmanaged (reason 'removed', sys-iface-state: 'managed')
Feb 23 17:49:47 ip-10-0-136-68 dbus-daemon[903]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service' requested by ':1.6' (uid=0 pid=1147 comm="/usr/sbin/NetworkManager --no-daemon " label="system_u:system_r:NetworkManager_t:s0")
Feb 23 17:49:47 ip-10-0-136-68 NetworkManager[1147]: [1677174587.4977] device (ens5): state change: activated -> deactivating (reason 'unmanaged', sys-iface-state: 'managed')
Feb 23 17:49:47 ip-10-0-136-68 dbus-daemon[903]: [system] Activation via systemd failed for unit 'dbus-org.freedesktop.nm-dispatcher.service': Refusing activation, D-Bus is shutting down.
Feb 23 17:49:47 ip-10-0-136-68 dbus-daemon[903]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service' requested by ':1.6' (uid=0 pid=1147 comm="/usr/sbin/NetworkManager --no-daemon " label="system_u:system_r:NetworkManager_t:s0")
Feb 23 17:49:47 ip-10-0-136-68 dbus-daemon[903]: [system] Activation via systemd failed for unit 'dbus-org.freedesktop.nm-dispatcher.service': Refusing activation, D-Bus is shutting down.
Feb 23 17:49:47 ip-10-0-136-68 NetworkManager[1147]: [1677174587.4983] dispatcher: (29) failed: Refusing activation, D-Bus is shutting down.
Feb 23 17:49:47 ip-10-0-136-68 NetworkManager[1147]: [1677174587.4984] device (ens5): state change: deactivating -> unmanaged (reason 'removed', sys-iface-state: 'managed')
Feb 23 17:49:47 ip-10-0-136-68 dbus-daemon[903]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service' requested by ':1.6' (uid=0 pid=1147 comm="/usr/sbin/NetworkManager --no-daemon " label="system_u:system_r:NetworkManager_t:s0")
Feb 23 17:49:47 ip-10-0-136-68 dbus-daemon[903]: [system] Activation via systemd failed for unit 'dbus-org.freedesktop.nm-dispatcher.service': Refusing activation, D-Bus is shutting down.
Feb 23 17:49:47 ip-10-0-136-68 NetworkManager[1147]: [1677174587.4991] device (br-ex): state change: activated -> deactivating (reason 'unmanaged', sys-iface-state: 'managed')
Feb 23 17:49:47 ip-10-0-136-68 dbus-daemon[903]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service' requested by ':1.6' (uid=0 pid=1147 comm="/usr/sbin/NetworkManager --no-daemon " label="system_u:system_r:NetworkManager_t:s0")
Feb 23 17:49:47 ip-10-0-136-68 dbus-daemon[903]: [system] Activation via systemd failed for unit 'dbus-org.freedesktop.nm-dispatcher.service': Refusing activation, D-Bus is shutting down.
Feb 23 17:49:47 ip-10-0-136-68 NetworkManager[1147]: [1677174587.4998] dispatcher: (31) failed: Refusing activation, D-Bus is shutting down.
Feb 23 17:49:47 ip-10-0-136-68 NetworkManager[1147]: [1677174587.4998] device (br-ex): state change: deactivating -> unmanaged (reason 'removed', sys-iface-state: 'managed')
Feb 23 17:49:47 ip-10-0-136-68 NetworkManager[1147]: [1677174587.4999] device (br-ex): releasing ovs interface br-ex
Feb 23 17:49:47 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00657|bridge|INFO|bridge br-ex: deleted interface br-ex on port 65534
Feb 23 17:49:47 ip-10-0-136-68 NetworkManager[1147]: [1677174587.5021] dhcp4 (br-ex): canceled DHCP transaction
Feb 23 17:49:47 ip-10-0-136-68 NetworkManager[1147]: [1677174587.5022] dhcp4 (br-ex): activation: beginning transaction (timeout in 45 seconds)
Feb 23 17:49:47 ip-10-0-136-68 NetworkManager[1147]: [1677174587.5022] dhcp4 (br-ex): state changed no lease
Feb 23 17:49:47 ip-10-0-136-68 kernel: device br-ex left promiscuous mode
Feb 23 17:49:47 ip-10-0-136-68 dbus-daemon[903]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service' requested by ':1.6' (uid=0 pid=1147 comm="/usr/sbin/NetworkManager --no-daemon " label="system_u:system_r:NetworkManager_t:s0")
Feb 23 17:49:47 ip-10-0-136-68 NetworkManager[1147]: [1677174587.7050] device (br-ex): state change: activated -> deactivating (reason 'unmanaged', sys-iface-state: 'managed')
Feb 23 17:49:47 ip-10-0-136-68 dbus-daemon[903]: [system] Activation via systemd failed for unit 'dbus-org.freedesktop.nm-dispatcher.service': Refusing activation, D-Bus is shutting down.
Feb 23 17:49:47 ip-10-0-136-68 NetworkManager[1147]: [1677174587.7058] dispatcher: (33) failed: Refusing activation, D-Bus is shutting down.
Feb 23 17:49:47 ip-10-0-136-68 NetworkManager[1147]: [1677174587.7059] manager: NetworkManager state is now CONNECTED_LOCAL
Feb 23 17:49:47 ip-10-0-136-68 NetworkManager[1147]: [1677174587.7060] device (br-ex): state change: deactivating -> unmanaged (reason 'removed', sys-iface-state: 'managed')
Feb 23 17:49:47 ip-10-0-136-68 dbus-daemon[903]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service' requested by ':1.6' (uid=0 pid=1147 comm="/usr/sbin/NetworkManager --no-daemon " label="system_u:system_r:NetworkManager_t:s0")
Feb 23 17:49:47 ip-10-0-136-68 dbus-daemon[903]: [system] Activation via systemd failed for unit 'dbus-org.freedesktop.nm-dispatcher.service': Refusing activation, D-Bus is shutting down.
Feb 23 17:49:47 ip-10-0-136-68 NetworkManager[1147]: [1677174587.7063] policy: set-hostname: set hostname to 'localhost.localdomain' (no hostname found)
Feb 23 17:49:47 ip-10-0-136-68 dbus-daemon[903]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.6' (uid=0 pid=1147 comm="/usr/sbin/NetworkManager --no-daemon " label="system_u:system_r:NetworkManager_t:s0")
Feb 23 17:49:47 ip-10-0-136-68 dbus-daemon[903]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service' requested by ':1.6' (uid=0 pid=1147 comm="/usr/sbin/NetworkManager --no-daemon " label="system_u:system_r:NetworkManager_t:s0")
Feb 23 17:49:47 ip-10-0-136-68 dbus-daemon[903]: [system] Activation via systemd failed for unit 'dbus-org.freedesktop.hostname1.service': Refusing activation, D-Bus is shutting down.
Feb 23 17:49:47 ip-10-0-136-68 dbus-daemon[903]: [system] Activation via systemd failed for unit 'dbus-org.freedesktop.nm-dispatcher.service': Refusing activation, D-Bus is shutting down.
Feb 23 17:49:47 ip-10-0-136-68 NetworkManager[1147]: [1677174587.7120] exiting (success)
Feb 23 17:49:47 ip-10-0-136-68 systemd[1]: NetworkManager.service: Succeeded.
Feb 23 17:49:47 ip-10-0-136-68 systemd[1]: Stopped Network Manager.
Feb 23 17:49:47 ip-10-0-136-68 systemd[1]: NetworkManager.service: Consumed 756ms CPU time
Feb 23 17:49:47 ip-10-0-136-68 systemd[1]: Stopping D-Bus System Message Bus...
Feb 23 17:49:47 ip-10-0-136-68 systemd[1]: Stopping Open vSwitch...
Feb 23 17:49:47 ip-10-0-136-68 systemd[1]: openvswitch.service: Succeeded.
Feb 23 17:49:47 ip-10-0-136-68 systemd[1]: Stopped Open vSwitch.
Feb 23 17:49:47 ip-10-0-136-68 systemd[1]: openvswitch.service: Consumed 1ms CPU time
Feb 23 17:49:47 ip-10-0-136-68 systemd[1]: dbus.service: Succeeded.
Feb 23 17:49:47 ip-10-0-136-68 systemd[1]: Stopped D-Bus System Message Bus.
Feb 23 17:49:47 ip-10-0-136-68 systemd[1]: dbus.service: Consumed 1.654s CPU time
Feb 23 17:49:47 ip-10-0-136-68 systemd[1]: Stopping Open vSwitch Forwarding Unit...
Feb 23 17:49:47 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00658|bridge|INFO|bridge br-ex: deleted interface ens5 on port 1
Feb 23 17:49:47 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00659|bridge|INFO|bridge br-ex: deleted interface patch-br-ex_ip-10-0-136-68.us-west-2.compute.internal-to-br-int on port 2
Feb 23 17:49:47 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00660|ofproto_dpif_rid|ERR|recirc_id 13 left allocated when ofproto (br-ex) is destructed
Feb 23 17:49:47 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00661|bridge|INFO|bridge br-int: deleted interface ovn-5a9c4f-0 on port 2
Feb 23 17:49:47 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00662|bridge|INFO|bridge br-int: deleted interface patch-br-int-to-br-ex_ip-10-0-136-68.us-west-2.compute.internal on port 6
Feb 23 17:49:47 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00663|bridge|INFO|bridge br-int: deleted interface ovn-72cfee-0 on port 7
Feb 23 17:49:47 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00664|bridge|INFO|bridge br-int: deleted interface ovn-7dfb31-0 on port 1
Feb 23 17:49:47 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00665|bridge|INFO|bridge br-int: deleted interface ovn-k8s-mp0 on port 5
Feb 23 17:49:47 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00666|bridge|INFO|bridge br-int: deleted interface e35d890abd5d4b0 on port 31
Feb 23 17:49:47 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00667|bridge|INFO|bridge br-int: deleted interface 904f3beae60de67 on port 29
Feb 23 17:49:47 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00668|bridge|INFO|bridge br-int: deleted interface aa2f6c1cfe2015e on port 33
Feb 23 17:49:47 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00669|bridge|INFO|bridge br-int: deleted interface 13a3543931af50f on port 26
Feb 23 17:49:47 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00670|bridge|INFO|bridge br-int: deleted interface ovn-061a07-0 on port 3
Feb 23 17:49:47 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00671|bridge|INFO|bridge br-int: deleted interface ovn-b823f7-0 on port 4
Feb 23 17:49:47 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00672|bridge|INFO|bridge br-int: deleted interface 9ac9106efc7becf on port 35
Feb 23 17:49:47 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00673|bridge|INFO|bridge br-int: deleted interface br-int on port 65534
Feb 23 17:49:47 ip-10-0-136-68 ovs-ctl[148581]: Exiting ovs-vswitchd (1105).
Feb 23 17:49:47 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00674|ofproto_dpif_rid|ERR|recirc_id 14632 left allocated when ofproto (br-int) is destructed
Feb 23 17:49:47 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00675|ofproto_dpif_rid|ERR|recirc_id 13831 left allocated when ofproto (br-int) is destructed
Feb 23 17:49:47 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00676|ofproto_dpif_rid|ERR|recirc_id 10169 left allocated when ofproto (br-int) is destructed
Feb 23 17:49:47 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00677|ofproto_dpif_rid|ERR|recirc_id 14637 left allocated when ofproto (br-int) is destructed
Feb 23 17:49:47 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00678|ofproto_dpif_rid|ERR|recirc_id 10158 left allocated when ofproto (br-int) is destructed
Feb 23 17:49:47 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00679|ofproto_dpif_rid|ERR|recirc_id 8745 left allocated when ofproto (br-int) is destructed
Feb 23 17:49:47 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00680|ofproto_dpif_rid|ERR|recirc_id 14630 left allocated when ofproto (br-int) is destructed
Feb 23 17:49:47 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00681|ofproto_dpif_rid|ERR|recirc_id 14634 left allocated when ofproto (br-int) is destructed
Feb 23 17:49:47 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00682|ofproto_dpif_rid|ERR|recirc_id 14633 left allocated when ofproto (br-int) is destructed
Feb 23 17:49:47 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00683|ofproto_dpif_rid|ERR|recirc_id 14636 left allocated when ofproto (br-int) is destructed
Feb 23 17:49:47 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00684|ofproto_dpif_rid|ERR|recirc_id 8747 left allocated when ofproto (br-int) is destructed
Feb 23 17:49:47 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00685|ofproto_dpif_rid|ERR|recirc_id 14631 left allocated when ofproto (br-int) is destructed
Feb 23 17:49:47 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00686|ofproto_dpif_rid|ERR|recirc_id 10157 left allocated when ofproto (br-int) is destructed
Feb 23 17:49:47 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00687|ofproto_dpif_rid|ERR|recirc_id 14624 left allocated when ofproto (br-int) is destructed
Feb 23 17:49:47 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00688|ofproto_dpif_rid|ERR|recirc_id 14635 left allocated when ofproto (br-int) is destructed
Feb 23 17:49:47 ip-10-0-136-68 ovs-vswitchd[1105]: ovs|00689|ofproto_dpif_rid|ERR|recirc_id 14638 left allocated when ofproto (br-int) is destructed
Feb 23 17:49:47 ip-10-0-136-68 systemd[1]: ovs-vswitchd.service: Succeeded.
Feb 23 17:49:47 ip-10-0-136-68 systemd[1]: Stopped Open vSwitch Forwarding Unit.
Feb 23 17:49:47 ip-10-0-136-68 systemd[1]: ovs-vswitchd.service: Consumed 43.986s CPU time
Feb 23 17:49:47 ip-10-0-136-68 systemd[1]: ovs-delete-transient-ports.service: Succeeded.
Feb 23 17:49:47 ip-10-0-136-68 systemd[1]: Stopped Open vSwitch Delete Transient Ports.
Feb 23 17:49:47 ip-10-0-136-68 systemd[1]: ovs-delete-transient-ports.service: Consumed 0 CPU time
Feb 23 17:49:47 ip-10-0-136-68 systemd[1]: Stopping Open vSwitch Database Unit...
Feb 23 17:49:47 ip-10-0-136-68 ovs-ctl[148601]: Exiting ovsdb-server (1025).
Feb 23 17:49:48 ip-10-0-136-68 systemd[1]: ovsdb-server.service: Succeeded.
Feb 23 17:49:48 ip-10-0-136-68 systemd[1]: Stopped Open vSwitch Database Unit.
Feb 23 17:49:48 ip-10-0-136-68 systemd[1]: ovsdb-server.service: Consumed 2.678s CPU time
Feb 23 17:49:48 ip-10-0-136-68 systemd[1]: Stopped target Basic System.
Feb 23 17:49:48 ip-10-0-136-68 systemd[1]: Stopping OSTree Finalize Staged Deployment...
Feb 23 17:49:48 ip-10-0-136-68 systemd[1]: Stopped target Paths.
Feb 23 17:49:48 ip-10-0-136-68 systemd[1]: console-login-helper-messages-issuegen.path: Succeeded.
Feb 23 17:49:48 ip-10-0-136-68 systemd[1]: Stopped Monitor console-login-helper-messages runtime issue snippets directory for changes.
Feb 23 17:49:48 ip-10-0-136-68 systemd[1]: Stopped target Sockets.
Feb 23 17:49:48 ip-10-0-136-68 systemd[1]: bootupd.socket: Succeeded.
Feb 23 17:49:48 ip-10-0-136-68 systemd[1]: Closed bootupd.socket.
Feb 23 17:49:48 ip-10-0-136-68 systemd[1]: bootupd.socket: Consumed 0 CPU time
Feb 23 17:49:48 ip-10-0-136-68 systemd[1]: dbus.socket: Succeeded.
Feb 23 17:49:48 ip-10-0-136-68 systemd[1]: Closed D-Bus System Message Bus Socket.
Feb 23 17:49:48 ip-10-0-136-68 systemd[1]: dbus.socket: Consumed 0 CPU time
Feb 23 17:49:48 ip-10-0-136-68 systemd[1]: Stopped target Slices.
Feb 23 17:49:48 ip-10-0-136-68 systemd[1]: Removed slice User and Session Slice.
Feb 23 17:49:48 ip-10-0-136-68 systemd[1]: user.slice: Consumed 0 CPU time
Feb 23 17:49:48 ip-10-0-136-68 systemd[1]: Stopped target Network (Pre).
Feb 23 17:49:48 ip-10-0-136-68 ostree[148621]: Finalizing staged deployment
Feb 23 17:49:48 ip-10-0-136-68 kernel: EXT4-fs (nvme0n1p3): re-mounted. Opts:
Feb 23 17:49:50 ip-10-0-136-68 ostree[148621]: Copying /etc changes: 14 modified, 0 removed, 207 added
Feb 23 17:49:50 ip-10-0-136-68 ostree[148621]: Copying /etc changes: 14 modified, 0 removed, 207 added
Feb 23 17:49:50 ip-10-0-136-68 ostree[148627]: The --rebuild-if-modules-changed option is deprecated. Use --refresh instead.
Feb 23 17:49:53 ip-10-0-136-68 ostree[148621]: Bootloader updated; bootconfig swap: yes; bootversion: boot.0.1, deployment count change: 1
Feb 23 17:49:53 ip-10-0-136-68 ostree[148621]: Bootloader updated; bootconfig swap: yes; bootversion: boot.0.1, deployment count change: 1
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: ostree-finalize-staged.service: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Stopped OSTree Finalize Staged Deployment.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: ostree-finalize-staged.service: Consumed 2.068s CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: ostree-finalize-staged.path: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Stopped OSTree Monitor Staged Deployment.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Stopped target System Initialization.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: systemd-sysctl.service: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Stopped Apply Kernel Variables.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: systemd-sysctl.service: Consumed 0 CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: systemd-modules-load.service: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Stopped Load Kernel Modules.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: systemd-modules-load.service: Consumed 0 CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: systemd-update-done.service: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Stopped Update is Completed.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: systemd-update-done.service: Consumed 0 CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: ldconfig.service: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Stopped Rebuild Dynamic Linker Cache.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: ldconfig.service: Consumed 0 CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: systemd-hwdb-update.service: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Stopped Rebuild Hardware Database.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: systemd-hwdb-update.service: Consumed 0 CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: coreos-printk-quiet.service: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Stopped CoreOS: Set printk To Level 4 (warn).
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: coreos-printk-quiet.service: Consumed 0 CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Stopped target Local Encrypted Volumes.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Stopped target Local Encrypted Volumes (Pre).
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: systemd-ask-password-console.path: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: systemd-ask-password-wall.path: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Stopped Forward Password Requests to Wall Directory Watch.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Stopping Update UTMP about System Boot/Shutdown...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: systemd-journal-catalog-update.service: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Stopped Rebuild Journal Catalog.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: systemd-journal-catalog-update.service: Consumed 0 CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Stopping Load/Save Random Seed...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: systemd-update-utmp.service: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Stopped Update UTMP about System Boot/Shutdown.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: systemd-update-utmp.service: Consumed 4ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Stopping Security Auditing Service...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: systemd-random-seed.service: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Stopped Load/Save Random Seed.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: systemd-random-seed.service: Consumed 3ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 auditd[867]: The audit daemon is exiting.
Feb 23 17:49:53 ip-10-0-136-68 kernel: audit: type=1305 audit(1677174593.185:175): op=set audit_pid=0 old=867 auid=4294967295 ses=4294967295 subj=system_u:system_r:auditd_t:s0 res=1
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: auditd.service: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Stopped Security Auditing Service.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: auditd.service: Consumed 43ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: systemd-tmpfiles-setup.service: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Stopped Create Volatile Files and Directories.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: systemd-tmpfiles-setup.service: Consumed 0 CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Stopped target Local File Systems.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/5a33cf07b3a79ac11a8ca210ad41be3decbe8a6849232adc61d39933929ba053/merged...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/d7f3e4621eee43ec986b2534f04945a874aa4bea76c903632a5e1de6f5023703/merged...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /run/containers/storage/overlay-containers/c4bc7ffd38dd7829e130588911e2a9b76dd4055df2ffe9584b0f97767744e3ab/userdata/shm...
Feb 23 17:49:53 ip-10-0-136-68 kernel: audit: type=1130 audit(1677174593.196:176): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=auditd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 23 17:49:53 ip-10-0-136-68 kernel: audit: type=1131 audit(1677174593.196:177): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=auditd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 23 17:49:53 ip-10-0-136-68 kernel: audit: type=1130 audit(1677174593.198:178): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 23 17:49:53 ip-10-0-136-68 kernel: audit: type=1131 audit(1677174593.198:179): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/7f45169c12580d717d1776e332e3dbc5306806b9532f2cf6da904bde7d47ac08/merged...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /run/utsns/b7e04edc-986e-48bf-8822-18763de96831...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/0fcc1003725e7dc3c435e97e8fed76c3f18bb844c30258f0e49a5cc46e88fe1b/merged...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/6d71b68c91808284b56b90cd80a70babf2af72f7da30d8687bdb6b91cca7940e/merged...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /run/containers/storage/overlay-containers/19a2bd7297b4ef4216e524d278c8c42f3017a2f792625499abc6e4bc17f836a5/userdata/shm...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/25512beccfcdc3b1a633a6ace2afa85967ee9b584166c33d4d2212cdcf695573/merged...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/7c387b69f2cfa61d752fe16fd07b55b79c44c6f2ab927e83078448e34cce6bd7/merged...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/cdfe283fac828e2748dc43a9e7f5ff3fb40501433c2c8609d3cc2d173b70750e/merged...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/kubelet/pods/93f0c5c3-9f22-4b93-a925-f621ed5e18e7/volumes/kubernetes.io~projected/kube-api-access-mgsp8...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /run/utsns/4383980e-ee96-45fd-8b0e-55d1e1a5408f...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/31d0faabc52cbc44d8d2f37b7e222e1ef19f6b30c137f717b010db8d13edd131/merged...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /run/ipcns/10b629ec-6fd9-4a7a-bdf3-191b484df0a5...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting CoreOS Dynamic Mount for /boot...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /run/containers/storage/overlay-containers/5c3c1df7a428c3e76b587938c521a49edd700de20559f4f3b999cebd60be312a/userdata/shm...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /run/ipcns/4173af40-9a8f-40dd-9e34-587a95d5903e...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/3964388ae93eca6225d62e05deebce0cebef777a93e4c3659fe050d32721d268/merged...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/kubelet/pods/a5ccef55-3f5c-4ffc-82f9-586324e62a37/volume-subpaths/etc/tuned/3...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/24833329af9d33c55efc4797f375feb25d5922aa128d320d036b2b36fdfe13fb/merged...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/kubelet/pods/bd2da6fb-b383-40fe-a3ad-b6436a02985b/volumes/kubernetes.io~projected/kube-api-access-cxsmb...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /run/ipcns/2ca3da91-c233-4b87-902c-88de41d8c9db...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/28f1f9d3323e82fe4f7dd75119e256d506b5684f429465e7d87c5dba5e78ae38/merged...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/e538f9e0910b3c520b8b01e828ff552bf5d13e2c95631c8ada0f4aa9b57a4c76/merged...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /run/containers/storage/overlay-containers/e35d890abd5d4b03390bbae5fd11d7232b569cf59687eadfd086ace940a65af8/userdata/shm...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /run/containers/storage/overlay-containers/13a3543931af50fb11a2d5bb79aa8a25ddd5f8fc251c224ac455e5c9a9d0605d/userdata/shm...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/kubelet/pods/a5ccef55-3f5c-4ffc-82f9-586324e62a37/volume-subpaths/etc/tuned/2...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /run/utsns/67115877-1e45-4be1-ab56-dfcafa2c613e...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/kubelet/pods/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77/volumes/kubernetes.io~secret/shared-resource-csi-driver-node-metrics-serving-cert...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/kubelet/pods/757b7544-c265-49ce-a1f0-22cca4bf919f/volumes/kubernetes.io~secret/metrics-tls...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /run/containers/storage/overlay-containers/11e3e168f4f38c1e32d79a7f932ab94223a43d4ad5ead9f127fefc7903f7f874/userdata/shm...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/kubelet/pods/3e3e7655-5c60-4995-9a23-b32843026a6e/volumes/kubernetes.io~secret/node-exporter-tls...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /run/netns/b7e04edc-986e-48bf-8822-18763de96831...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/0f214f089d8cc895947c72f46863360102cef81e472d325427a4809755af1f61/merged...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/4a9336240651a0a8e9e2aa8a0a5a2378df10b49506f19178fd8826ff69f47ae4/merged...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /run/utsns/f547dd21-4ba2-4f6b-bdf0-89cefc13a119...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /run/containers/storage/overlay-containers/aa2f6c1cfe2015e661750ad0d84d67c8d8f79e105601e0a761db6a83ba33b3f1/userdata/shm...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/kubelet/pods/93f0c5c3-9f22-4b93-a925-f621ed5e18e7/volumes/kubernetes.io~secret/metrics-certs...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/kubelet/pods/3e3e7655-5c60-4995-9a23-b32843026a6e/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/07f577410927212ed2749dad2143baef8336cee380444696241ffb3477ded814/merged...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/7442d3041d20149841bd69f497f0379892affe34a102098f24e5f0ec4fcd8c75/merged...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /run/netns/2ca3da91-c233-4b87-902c-88de41d8c9db...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/kubelet/pods/0976617f-18ed-4a73-a7d8-ac54cf69ab93/volumes/kubernetes.io~projected/kube-api-access-r6xs2...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /run/containers/storage/overlay-containers/904f3beae60de67c16a5cc959f8d904ee658bf99c4f43531f437e17dffa58753/userdata/shm...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /run/containers/storage/overlay-containers/569e9fe1389cead46f8e9fec1e6752ebf408d1b8e1da886591a02dbd29bff450/userdata/shm...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /run/netns/68c19172-c7c8-4a6b-880c-e79152a16a50...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /run/netns/032b2b6e-e8bd-477c-be7b-99b07a9ca111...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /etc...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/kubelet/pods/a5ccef55-3f5c-4ffc-82f9-586324e62a37/volume-subpaths/etc/tuned/1...
Feb 23 17:49:53 ip-10-0-136-68 umount[148745]: umount: /etc: target is busy.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /run/utsns/4683a5dd-6f28-4f95-b6df-7f103be2d0f8...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /run/utsns/68c19172-c7c8-4a6b-880c-e79152a16a50...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /run/netns/4173af40-9a8f-40dd-9e34-587a95d5903e...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/kubelet/pods/3e3e7655-5c60-4995-9a23-b32843026a6e/volumes/kubernetes.io~projected/kube-api-access-p2x89...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /run/ipcns/4383980e-ee96-45fd-8b0e-55d1e1a5408f...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /run/ipcns/f547dd21-4ba2-4f6b-bdf0-89cefc13a119...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/68ec7f1aae1140517d5d7d6a0cdba38159c6f0f7496f602d42b1dfb5e88db417/merged...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /run/netns/652c23fc-94b7-440c-89cf-bc2999359623...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/92a1c1788a499b998b31f0f526f176eb2e52ecce75221c71b4c074b348e5a677/merged...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /run/containers/storage/overlay-containers/9ac9106efc7becfc75e53c6f3bbb75c9b475700866d36b78720d1821a3f54d87/userdata/shm...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/kubelet/pods/ff7777c7-a1dc-413e-8da1-c4ba07527037/volumes/kubernetes.io~secret/proxy-tls...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/kubelet/pods/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77/volumes/kubernetes.io~projected/kube-api-access-bs8fq...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /run/containers/storage/overlay-containers/422c8976e5316750bfcd5cc5a0d60f7a3fcd666801a04dc9000f94c8dd85d5d5/userdata/shm...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/70ad915a829732075b36bb02674c21d0ea63d492660ea0f38374a6f1400102c0/merged...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/kubelet/pods/9eb4a126-482c-4458-b901-e2e7a15dfd93/volumes/kubernetes.io~projected/kube-api-access-b4fbl...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting Temporary Directory (/tmp)...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /run/netns/4d082a50-4c8a-4970-bade-95bf44983bd3...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/kubelet/pods/ff7777c7-a1dc-413e-8da1-c4ba07527037/volumes/kubernetes.io~projected/kube-api-access-scnpz...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /run/ipcns/4683a5dd-6f28-4f95-b6df-7f103be2d0f8...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/4bfac8b1af556bdac8bbaf06ff68c34fe8940661b6fa1ae20110aa34bef10e0c/merged...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/e00be3053de2076113871a96fc08c21607c9d30685a883b9e7c21bd67f1dd6af/merged...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/8c0d70c1a4c567b783f3294918316e2610fa4bb6320bba5ebd9a7e16a726bb74/merged...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /run/containers/storage/overlay-containers/cd0b87e63117453924469fa23a8876f62c240ad1b21efb83dea15bc99dfc3092/userdata/shm...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /run/containers/storage/overlay-containers/5fee4f72ddb289d7c724dc134f484551f32fef41b4335e681c821de016db4948/userdata/shm...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /run/utsns/032b2b6e-e8bd-477c-be7b-99b07a9ca111...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /run/utsns/878b4be7-9dda-4a34-a051-3b53bc09a6dc...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/ec7265f344718951a2377c7a972d55b2a1cc9e6d71dec99ca7a7cdcc72fca39a/merged...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /run/utsns/10b629ec-6fd9-4a7a-bdf3-191b484df0a5...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /run/ipcns/652c23fc-94b7-440c-89cf-bc2999359623...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/kubelet/pods/a5ccef55-3f5c-4ffc-82f9-586324e62a37/volume-subpaths/etc/tuned/5...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/e8af7f7ca7adf96b62ff7718f8ba9039469c70a8aeb376394e181fef4f0bea33/merged...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/35a798577b4d33152c7fa3d26a30c0d615b6c74062327e54085b58e45f04d581/merged...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /run/ipcns/68c19172-c7c8-4a6b-880c-e79152a16a50...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/kubelet/pods/7da00340-9715-48ac-b144-4705de276bf5/volumes/kubernetes.io~secret/ovn-cert...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /run/netns/4383980e-ee96-45fd-8b0e-55d1e1a5408f...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /run/ipcns/032b2b6e-e8bd-477c-be7b-99b07a9ca111...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/kubelet/pods/7f25c5a9-b9c7-4220-a892-362cf6b33878/volumes/kubernetes.io~projected/kube-api-access-22vqh...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/kubelet/pods/0268b68d-53b2-454a-a03b-37bd38d269bc/volumes/kubernetes.io~projected/kube-api-access-qvgqb...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/f9245b10e0b7c5bb3b3dddfd21b6e04c5b888405df5b72bd15c45c5bcbdd5c79/merged...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/kubelet/pods/7da00340-9715-48ac-b144-4705de276bf5/volumes/kubernetes.io~secret/ovn-node-metrics-cert...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/51964eb18e6022941b975967a69f22726a338f987432578e85470ce3bbd8520c/merged...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/082cb0c7fea0c6d0cc826b0bf27632416e177d08cafd122e16754f87bbdaf497/merged...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /run/utsns/652c23fc-94b7-440c-89cf-bc2999359623...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/e64acdcf723c5cfdef5178991b4a986acaafa6c7cad791601039d9ff2e098319/merged...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/kubelet/pods/ff7777c7-a1dc-413e-8da1-c4ba07527037/volumes/kubernetes.io~secret/cookie-secret...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/kubelet/pods/7da00340-9715-48ac-b144-4705de276bf5/volumes/kubernetes.io~projected/kube-api-access-p9564...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/11b370a4897f403b2c91efb4db4171bc1e24e229865f7db2274785ed667e41c5/merged...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/4f5255453886cb376007a1a813df04b2cb4a0db60f186bfd9be387a4ccaaeb20/merged...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /run/netns/4683a5dd-6f28-4f95-b6df-7f103be2d0f8...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/ef10a721dc4af6cb8bf5d214a02f55da275e74b6cff1f125bb09c3ca7ecdb8bf/merged...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/314c032c9a03088cca1e4c8e791e392952dad34af8aeb20f3f7a7cae615408d0/merged...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /run/utsns/4173af40-9a8f-40dd-9e34-587a95d5903e...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /run/utsns/2ca3da91-c233-4b87-902c-88de41d8c9db...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/f7de530ab1cbc8bd6261697fc3ad7764454b1b28bbe5b65aecbcba3461b1dcd9/merged...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/kubelet/pods/a5ccef55-3f5c-4ffc-82f9-586324e62a37/volumes/kubernetes.io~projected/kube-api-access-tntbd...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /run/netns/10b629ec-6fd9-4a7a-bdf3-191b484df0a5...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/13348bbd4163d6d47dd340a767cb6054ce6e53f74e432c41d36d831c6128365a/merged...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/1134f8bb68d50a8677494fa5e41eaf9ad1ebdcb94153d0230cf5b75a8a67cd3b/merged...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /run/netns/f547dd21-4ba2-4f6b-bdf0-89cefc13a119...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /run/ipcns/69dd2878-f624-4440-97e8-7ece7e4437b1...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/155ce189bbf3f0b6f9b74c22404f2cf7b99dd8c488c5b73dd442b30c9b4d055c/merged...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/f4dbeb9b22c8b9aeac27867e75136d58f9fad4b07333585b2f80e945be857ef4/merged...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /run/netns/67115877-1e45-4be1-ab56-dfcafa2c613e...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/e0aa36fed1afa7be489c71ab5015e989ce0777ecb5588a4a0c48a3c52cbe5e1c/merged...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/032748921edd35d4e60c50f02120c6c0cef605d91ac2c90181c8fcdee1ab5bf4/merged...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/kubelet/pods/e0abac93-3e79-4a32-8375-5ef1a2e59687/volumes/kubernetes.io~projected/kube-api-access-t77mc...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/7e0d6454bb22460756b397db5fe9879fb86db21b4abf902058188eb118f32ed3/merged...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /run/utsns/4d082a50-4c8a-4970-bade-95bf44983bd3...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/kubelet/pods/757b7544-c265-49ce-a1f0-22cca4bf919f/volumes/kubernetes.io~projected/kube-api-access-4z9qm...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/3cfdf0cde1f31aa89f98706599fc3d0b6658b6ad3a92ee0b72a0c6948e31bbcb/merged...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/kubelet/pods/a5ccef55-3f5c-4ffc-82f9-586324e62a37/volume-subpaths/etc/tuned/4...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/c517f388a13f54195edcc58e570413397d849c1bc7ce5854e53248ad91f4a2c7/merged...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /run/netns/69dd2878-f624-4440-97e8-7ece7e4437b1...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/eca24fddaa1156fd3c6922c8c6070a3affe73a52e0ac62860c77cba51fee37f9/merged...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/kubelet/pods/adcfa5f5-1c6b-415e-8e69-b72e137820e1/volumes/kubernetes.io~projected/kube-api-access-kf689...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /run/netns/878b4be7-9dda-4a34-a051-3b53bc09a6dc...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /run/ipcns/878b4be7-9dda-4a34-a051-3b53bc09a6dc...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /run/ipcns/4d082a50-4c8a-4970-bade-95bf44983bd3...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/13c79e72eecd069f9d9612b393f109eedd52ca47a714d3a46a3e5c4710a975d4/merged...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /run/ipcns/67115877-1e45-4be1-ab56-dfcafa2c613e...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /run/ipcns/b7e04edc-986e-48bf-8822-18763de96831...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /run/utsns/69dd2878-f624-4440-97e8-7ece7e4437b1...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay/17467ed7ee1b229ab5ad4401a84eb29dd5bf65ec449196ff000cdd5797aa0c34/merged...
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: systemd-journal-flush.service: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Stopped Flush Journal to Persistent Storage.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: systemd-journal-flush.service: Consumed 0 CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: ostree-remount.service: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Stopped OSTree Remount OS/ Bind Mounts.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: ostree-remount.service: Consumed 0 CPU time
Feb 23 17:49:53 ip-10-0-136-68 kernel: audit: type=1130 audit(1677174593.724:180): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 23 17:49:53 ip-10-0-136-68 kernel: audit: type=1131 audit(1677174593.724:181): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 23 17:49:53 ip-10-0-136-68 kernel: audit: type=1130 audit(1677174593.726:182): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=ostree-remount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 23 17:49:53 ip-10-0-136-68 kernel: audit: type=1131 audit(1677174593.726:183): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=ostree-remount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-5a33cf07b3a79ac11a8ca210ad41be3decbe8a6849232adc61d39933929ba053-merged.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/5a33cf07b3a79ac11a8ca210ad41be3decbe8a6849232adc61d39933929ba053/merged.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-5a33cf07b3a79ac11a8ca210ad41be3decbe8a6849232adc61d39933929ba053-merged.mount: Consumed 8ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-d7f3e4621eee43ec986b2534f04945a874aa4bea76c903632a5e1de6f5023703-merged.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/d7f3e4621eee43ec986b2534f04945a874aa4bea76c903632a5e1de6f5023703/merged.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-d7f3e4621eee43ec986b2534f04945a874aa4bea76c903632a5e1de6f5023703-merged.mount: Consumed 8ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-c4bc7ffd38dd7829e130588911e2a9b76dd4055df2ffe9584b0f97767744e3ab-userdata-shm.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /run/containers/storage/overlay-containers/c4bc7ffd38dd7829e130588911e2a9b76dd4055df2ffe9584b0f97767744e3ab/userdata/shm.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-c4bc7ffd38dd7829e130588911e2a9b76dd4055df2ffe9584b0f97767744e3ab-userdata-shm.mount: Consumed 3ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-7f45169c12580d717d1776e332e3dbc5306806b9532f2cf6da904bde7d47ac08-merged.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/7f45169c12580d717d1776e332e3dbc5306806b9532f2cf6da904bde7d47ac08/merged.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-7f45169c12580d717d1776e332e3dbc5306806b9532f2cf6da904bde7d47ac08-merged.mount: Consumed 3ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-utsns-b7e04edc\x2d986e\x2d48bf\x2d8822\x2d18763de96831.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /run/utsns/b7e04edc-986e-48bf-8822-18763de96831.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-utsns-b7e04edc\x2d986e\x2d48bf\x2d8822\x2d18763de96831.mount: Consumed 4ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-0fcc1003725e7dc3c435e97e8fed76c3f18bb844c30258f0e49a5cc46e88fe1b-merged.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/0fcc1003725e7dc3c435e97e8fed76c3f18bb844c30258f0e49a5cc46e88fe1b/merged.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-0fcc1003725e7dc3c435e97e8fed76c3f18bb844c30258f0e49a5cc46e88fe1b-merged.mount: Consumed 4ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-6d71b68c91808284b56b90cd80a70babf2af72f7da30d8687bdb6b91cca7940e-merged.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/6d71b68c91808284b56b90cd80a70babf2af72f7da30d8687bdb6b91cca7940e/merged.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-6d71b68c91808284b56b90cd80a70babf2af72f7da30d8687bdb6b91cca7940e-merged.mount: Consumed 7ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-19a2bd7297b4ef4216e524d278c8c42f3017a2f792625499abc6e4bc17f836a5-userdata-shm.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /run/containers/storage/overlay-containers/19a2bd7297b4ef4216e524d278c8c42f3017a2f792625499abc6e4bc17f836a5/userdata/shm.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-19a2bd7297b4ef4216e524d278c8c42f3017a2f792625499abc6e4bc17f836a5-userdata-shm.mount: Consumed 4ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-25512beccfcdc3b1a633a6ace2afa85967ee9b584166c33d4d2212cdcf695573-merged.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/25512beccfcdc3b1a633a6ace2afa85967ee9b584166c33d4d2212cdcf695573/merged.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-25512beccfcdc3b1a633a6ace2afa85967ee9b584166c33d4d2212cdcf695573-merged.mount: Consumed 8ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-7c387b69f2cfa61d752fe16fd07b55b79c44c6f2ab927e83078448e34cce6bd7-merged.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/7c387b69f2cfa61d752fe16fd07b55b79c44c6f2ab927e83078448e34cce6bd7/merged.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-7c387b69f2cfa61d752fe16fd07b55b79c44c6f2ab927e83078448e34cce6bd7-merged.mount: Consumed 4ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-cdfe283fac828e2748dc43a9e7f5ff3fb40501433c2c8609d3cc2d173b70750e-merged.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/cdfe283fac828e2748dc43a9e7f5ff3fb40501433c2c8609d3cc2d173b70750e/merged.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-cdfe283fac828e2748dc43a9e7f5ff3fb40501433c2c8609d3cc2d173b70750e-merged.mount: Consumed 7ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-93f0c5c3\x2d9f22\x2d4b93\x2da925\x2df621ed5e18e7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmgsp8.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/kubelet/pods/93f0c5c3-9f22-4b93-a925-f621ed5e18e7/volumes/kubernetes.io~projected/kube-api-access-mgsp8.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-93f0c5c3\x2d9f22\x2d4b93\x2da925\x2df621ed5e18e7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmgsp8.mount: Consumed 3ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-utsns-4383980e\x2dee96\x2d45fd\x2d8b0e\x2d55d1e1a5408f.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /run/utsns/4383980e-ee96-45fd-8b0e-55d1e1a5408f.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-utsns-4383980e\x2dee96\x2d45fd\x2d8b0e\x2d55d1e1a5408f.mount: Consumed 3ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-31d0faabc52cbc44d8d2f37b7e222e1ef19f6b30c137f717b010db8d13edd131-merged.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/31d0faabc52cbc44d8d2f37b7e222e1ef19f6b30c137f717b010db8d13edd131/merged.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-31d0faabc52cbc44d8d2f37b7e222e1ef19f6b30c137f717b010db8d13edd131-merged.mount: Consumed 3ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-ipcns-10b629ec\x2d6fd9\x2d4a7a\x2dbdf3\x2d191b484df0a5.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /run/ipcns/10b629ec-6fd9-4a7a-bdf3-191b484df0a5.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-ipcns-10b629ec\x2d6fd9\x2d4a7a\x2dbdf3\x2d191b484df0a5.mount: Consumed 3ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: boot.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted CoreOS Dynamic Mount for /boot.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: boot.mount: Consumed 57ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-5c3c1df7a428c3e76b587938c521a49edd700de20559f4f3b999cebd60be312a-userdata-shm.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /run/containers/storage/overlay-containers/5c3c1df7a428c3e76b587938c521a49edd700de20559f4f3b999cebd60be312a/userdata/shm.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-5c3c1df7a428c3e76b587938c521a49edd700de20559f4f3b999cebd60be312a-userdata-shm.mount: Consumed 3ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-ipcns-4173af40\x2d9a8f\x2d40dd\x2d9e34\x2d587a95d5903e.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /run/ipcns/4173af40-9a8f-40dd-9e34-587a95d5903e.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-ipcns-4173af40\x2d9a8f\x2d40dd\x2d9e34\x2d587a95d5903e.mount: Consumed 3ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-3964388ae93eca6225d62e05deebce0cebef777a93e4c3659fe050d32721d268-merged.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/3964388ae93eca6225d62e05deebce0cebef777a93e4c3659fe050d32721d268/merged.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-3964388ae93eca6225d62e05deebce0cebef777a93e4c3659fe050d32721d268-merged.mount: Consumed 3ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-a5ccef55\x2d3f5c\x2d4ffc\x2d82f9\x2d586324e62a37-volume\x2dsubpaths-etc-tuned-3.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/kubelet/pods/a5ccef55-3f5c-4ffc-82f9-586324e62a37/volume-subpaths/etc/tuned/3.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-a5ccef55\x2d3f5c\x2d4ffc\x2d82f9\x2d586324e62a37-volume\x2dsubpaths-etc-tuned-3.mount: Consumed 2ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-24833329af9d33c55efc4797f375feb25d5922aa128d320d036b2b36fdfe13fb-merged.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/24833329af9d33c55efc4797f375feb25d5922aa128d320d036b2b36fdfe13fb/merged.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-24833329af9d33c55efc4797f375feb25d5922aa128d320d036b2b36fdfe13fb-merged.mount: Consumed 4ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-bd2da6fb\x2db383\x2d40fe\x2da3ad\x2db6436a02985b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcxsmb.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/kubelet/pods/bd2da6fb-b383-40fe-a3ad-b6436a02985b/volumes/kubernetes.io~projected/kube-api-access-cxsmb.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-bd2da6fb\x2db383\x2d40fe\x2da3ad\x2db6436a02985b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcxsmb.mount: Consumed 3ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-ipcns-2ca3da91\x2dc233\x2d4b87\x2d902c\x2d88de41d8c9db.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /run/ipcns/2ca3da91-c233-4b87-902c-88de41d8c9db.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-ipcns-2ca3da91\x2dc233\x2d4b87\x2d902c\x2d88de41d8c9db.mount: Consumed 3ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-28f1f9d3323e82fe4f7dd75119e256d506b5684f429465e7d87c5dba5e78ae38-merged.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/28f1f9d3323e82fe4f7dd75119e256d506b5684f429465e7d87c5dba5e78ae38/merged.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-28f1f9d3323e82fe4f7dd75119e256d506b5684f429465e7d87c5dba5e78ae38-merged.mount: Consumed 3ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-e538f9e0910b3c520b8b01e828ff552bf5d13e2c95631c8ada0f4aa9b57a4c76-merged.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/e538f9e0910b3c520b8b01e828ff552bf5d13e2c95631c8ada0f4aa9b57a4c76/merged.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-e538f9e0910b3c520b8b01e828ff552bf5d13e2c95631c8ada0f4aa9b57a4c76-merged.mount: Consumed 6ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-e35d890abd5d4b03390bbae5fd11d7232b569cf59687eadfd086ace940a65af8-userdata-shm.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /run/containers/storage/overlay-containers/e35d890abd5d4b03390bbae5fd11d7232b569cf59687eadfd086ace940a65af8/userdata/shm.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-e35d890abd5d4b03390bbae5fd11d7232b569cf59687eadfd086ace940a65af8-userdata-shm.mount: Consumed 3ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-13a3543931af50fb11a2d5bb79aa8a25ddd5f8fc251c224ac455e5c9a9d0605d-userdata-shm.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /run/containers/storage/overlay-containers/13a3543931af50fb11a2d5bb79aa8a25ddd5f8fc251c224ac455e5c9a9d0605d/userdata/shm.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-13a3543931af50fb11a2d5bb79aa8a25ddd5f8fc251c224ac455e5c9a9d0605d-userdata-shm.mount: Consumed 3ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-a5ccef55\x2d3f5c\x2d4ffc\x2d82f9\x2d586324e62a37-volume\x2dsubpaths-etc-tuned-2.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/kubelet/pods/a5ccef55-3f5c-4ffc-82f9-586324e62a37/volume-subpaths/etc/tuned/2.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-a5ccef55\x2d3f5c\x2d4ffc\x2d82f9\x2d586324e62a37-volume\x2dsubpaths-etc-tuned-2.mount: Consumed 2ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-utsns-67115877\x2d1e45\x2d4be1\x2dab56\x2ddfcafa2c613e.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /run/utsns/67115877-1e45-4be1-ab56-dfcafa2c613e.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-utsns-67115877\x2d1e45\x2d4be1\x2dab56\x2ddfcafa2c613e.mount: Consumed 3ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-46cf33e4\x2dfc3b\x2d4f7a\x2db0ab\x2ddc2cbc5a5e77-volumes-kubernetes.io\x7esecret-shared\x2dresource\x2dcsi\x2ddriver\x2dnode\x2dmetrics\x2dserving\x2dcert.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/kubelet/pods/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77/volumes/kubernetes.io~secret/shared-resource-csi-driver-node-metrics-serving-cert.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-46cf33e4\x2dfc3b\x2d4f7a\x2db0ab\x2ddc2cbc5a5e77-volumes-kubernetes.io\x7esecret-shared\x2dresource\x2dcsi\x2ddriver\x2dnode\x2dmetrics\x2dserving\x2dcert.mount: Consumed 3ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-757b7544\x2dc265\x2d49ce\x2da1f0\x2d22cca4bf919f-volumes-kubernetes.io\x7esecret-metrics\x2dtls.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/kubelet/pods/757b7544-c265-49ce-a1f0-22cca4bf919f/volumes/kubernetes.io~secret/metrics-tls.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-757b7544\x2dc265\x2d49ce\x2da1f0\x2d22cca4bf919f-volumes-kubernetes.io\x7esecret-metrics\x2dtls.mount: Consumed 3ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-11e3e168f4f38c1e32d79a7f932ab94223a43d4ad5ead9f127fefc7903f7f874-userdata-shm.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /run/containers/storage/overlay-containers/11e3e168f4f38c1e32d79a7f932ab94223a43d4ad5ead9f127fefc7903f7f874/userdata/shm.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-11e3e168f4f38c1e32d79a7f932ab94223a43d4ad5ead9f127fefc7903f7f874-userdata-shm.mount: Consumed 3ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-3e3e7655\x2d5c60\x2d4995\x2d9a23\x2db32843026a6e-volumes-kubernetes.io\x7esecret-node\x2dexporter\x2dtls.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/kubelet/pods/3e3e7655-5c60-4995-9a23-b32843026a6e/volumes/kubernetes.io~secret/node-exporter-tls.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-3e3e7655\x2d5c60\x2d4995\x2d9a23\x2db32843026a6e-volumes-kubernetes.io\x7esecret-node\x2dexporter\x2dtls.mount: Consumed 3ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-netns-b7e04edc\x2d986e\x2d48bf\x2d8822\x2d18763de96831.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /run/netns/b7e04edc-986e-48bf-8822-18763de96831.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-netns-b7e04edc\x2d986e\x2d48bf\x2d8822\x2d18763de96831.mount: Consumed 3ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-0f214f089d8cc895947c72f46863360102cef81e472d325427a4809755af1f61-merged.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/0f214f089d8cc895947c72f46863360102cef81e472d325427a4809755af1f61/merged.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-0f214f089d8cc895947c72f46863360102cef81e472d325427a4809755af1f61-merged.mount: Consumed 3ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-4a9336240651a0a8e9e2aa8a0a5a2378df10b49506f19178fd8826ff69f47ae4-merged.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/4a9336240651a0a8e9e2aa8a0a5a2378df10b49506f19178fd8826ff69f47ae4/merged.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-4a9336240651a0a8e9e2aa8a0a5a2378df10b49506f19178fd8826ff69f47ae4-merged.mount: Consumed 3ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-utsns-f547dd21\x2d4ba2\x2d4f6b\x2dbdf0\x2d89cefc13a119.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /run/utsns/f547dd21-4ba2-4f6b-bdf0-89cefc13a119.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-utsns-f547dd21\x2d4ba2\x2d4f6b\x2dbdf0\x2d89cefc13a119.mount: Consumed 3ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-aa2f6c1cfe2015e661750ad0d84d67c8d8f79e105601e0a761db6a83ba33b3f1-userdata-shm.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /run/containers/storage/overlay-containers/aa2f6c1cfe2015e661750ad0d84d67c8d8f79e105601e0a761db6a83ba33b3f1/userdata/shm.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-aa2f6c1cfe2015e661750ad0d84d67c8d8f79e105601e0a761db6a83ba33b3f1-userdata-shm.mount: Consumed 3ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-93f0c5c3\x2d9f22\x2d4b93\x2da925\x2df621ed5e18e7-volumes-kubernetes.io\x7esecret-metrics\x2dcerts.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/kubelet/pods/93f0c5c3-9f22-4b93-a925-f621ed5e18e7/volumes/kubernetes.io~secret/metrics-certs.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-93f0c5c3\x2d9f22\x2d4b93\x2da925\x2df621ed5e18e7-volumes-kubernetes.io\x7esecret-metrics\x2dcerts.mount: Consumed 3ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-3e3e7655\x2d5c60\x2d4995\x2d9a23\x2db32843026a6e-volumes-kubernetes.io\x7esecret-node\x2dexporter\x2dkube\x2drbac\x2dproxy\x2dconfig.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/kubelet/pods/3e3e7655-5c60-4995-9a23-b32843026a6e/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-3e3e7655\x2d5c60\x2d4995\x2d9a23\x2db32843026a6e-volumes-kubernetes.io\x7esecret-node\x2dexporter\x2dkube\x2drbac\x2dproxy\x2dconfig.mount: Consumed 2ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-07f577410927212ed2749dad2143baef8336cee380444696241ffb3477ded814-merged.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/07f577410927212ed2749dad2143baef8336cee380444696241ffb3477ded814/merged.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-07f577410927212ed2749dad2143baef8336cee380444696241ffb3477ded814-merged.mount: Consumed 3ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-7442d3041d20149841bd69f497f0379892affe34a102098f24e5f0ec4fcd8c75-merged.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/7442d3041d20149841bd69f497f0379892affe34a102098f24e5f0ec4fcd8c75/merged.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-7442d3041d20149841bd69f497f0379892affe34a102098f24e5f0ec4fcd8c75-merged.mount: Consumed 3ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-netns-2ca3da91\x2dc233\x2d4b87\x2d902c\x2d88de41d8c9db.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /run/netns/2ca3da91-c233-4b87-902c-88de41d8c9db.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-netns-2ca3da91\x2dc233\x2d4b87\x2d902c\x2d88de41d8c9db.mount: Consumed 3ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-0976617f\x2d18ed\x2d4a73\x2da7d8\x2dac54cf69ab93-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr6xs2.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/kubelet/pods/0976617f-18ed-4a73-a7d8-ac54cf69ab93/volumes/kubernetes.io~projected/kube-api-access-r6xs2.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-0976617f\x2d18ed\x2d4a73\x2da7d8\x2dac54cf69ab93-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr6xs2.mount: Consumed 3ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-904f3beae60de67c16a5cc959f8d904ee658bf99c4f43531f437e17dffa58753-userdata-shm.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /run/containers/storage/overlay-containers/904f3beae60de67c16a5cc959f8d904ee658bf99c4f43531f437e17dffa58753/userdata/shm.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-904f3beae60de67c16a5cc959f8d904ee658bf99c4f43531f437e17dffa58753-userdata-shm.mount: Consumed 2ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-569e9fe1389cead46f8e9fec1e6752ebf408d1b8e1da886591a02dbd29bff450-userdata-shm.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /run/containers/storage/overlay-containers/569e9fe1389cead46f8e9fec1e6752ebf408d1b8e1da886591a02dbd29bff450/userdata/shm.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-569e9fe1389cead46f8e9fec1e6752ebf408d1b8e1da886591a02dbd29bff450-userdata-shm.mount: Consumed 3ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-netns-68c19172\x2dc7c8\x2d4a6b\x2d880c\x2de79152a16a50.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /run/netns/68c19172-c7c8-4a6b-880c-e79152a16a50.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-netns-68c19172\x2dc7c8\x2d4a6b\x2d880c\x2de79152a16a50.mount: Consumed 2ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-netns-032b2b6e\x2de8bd\x2d477c\x2dbe7b\x2d99b07a9ca111.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /run/netns/032b2b6e-e8bd-477c-be7b-99b07a9ca111.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-netns-032b2b6e\x2de8bd\x2d477c\x2dbe7b\x2d99b07a9ca111.mount: Consumed 3ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: etc.mount: Mount process exited, code=exited status=32
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Failed unmounting /etc.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-a5ccef55\x2d3f5c\x2d4ffc\x2d82f9\x2d586324e62a37-volume\x2dsubpaths-etc-tuned-1.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/kubelet/pods/a5ccef55-3f5c-4ffc-82f9-586324e62a37/volume-subpaths/etc/tuned/1.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-a5ccef55\x2d3f5c\x2d4ffc\x2d82f9\x2d586324e62a37-volume\x2dsubpaths-etc-tuned-1.mount: Consumed 3ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-utsns-4683a5dd\x2d6f28\x2d4f95\x2db6df\x2d7f103be2d0f8.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /run/utsns/4683a5dd-6f28-4f95-b6df-7f103be2d0f8.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-utsns-4683a5dd\x2d6f28\x2d4f95\x2db6df\x2d7f103be2d0f8.mount: Consumed 3ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-utsns-68c19172\x2dc7c8\x2d4a6b\x2d880c\x2de79152a16a50.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /run/utsns/68c19172-c7c8-4a6b-880c-e79152a16a50.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-utsns-68c19172\x2dc7c8\x2d4a6b\x2d880c\x2de79152a16a50.mount: Consumed 2ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-netns-4173af40\x2d9a8f\x2d40dd\x2d9e34\x2d587a95d5903e.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /run/netns/4173af40-9a8f-40dd-9e34-587a95d5903e.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-netns-4173af40\x2d9a8f\x2d40dd\x2d9e34\x2d587a95d5903e.mount: Consumed 3ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-3e3e7655\x2d5c60\x2d4995\x2d9a23\x2db32843026a6e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp2x89.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/kubelet/pods/3e3e7655-5c60-4995-9a23-b32843026a6e/volumes/kubernetes.io~projected/kube-api-access-p2x89.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-3e3e7655\x2d5c60\x2d4995\x2d9a23\x2db32843026a6e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp2x89.mount: Consumed 3ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-ipcns-4383980e\x2dee96\x2d45fd\x2d8b0e\x2d55d1e1a5408f.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /run/ipcns/4383980e-ee96-45fd-8b0e-55d1e1a5408f.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-ipcns-4383980e\x2dee96\x2d45fd\x2d8b0e\x2d55d1e1a5408f.mount: Consumed 2ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-ipcns-f547dd21\x2d4ba2\x2d4f6b\x2dbdf0\x2d89cefc13a119.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /run/ipcns/f547dd21-4ba2-4f6b-bdf0-89cefc13a119.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-ipcns-f547dd21\x2d4ba2\x2d4f6b\x2dbdf0\x2d89cefc13a119.mount: Consumed 3ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-68ec7f1aae1140517d5d7d6a0cdba38159c6f0f7496f602d42b1dfb5e88db417-merged.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/68ec7f1aae1140517d5d7d6a0cdba38159c6f0f7496f602d42b1dfb5e88db417/merged.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-68ec7f1aae1140517d5d7d6a0cdba38159c6f0f7496f602d42b1dfb5e88db417-merged.mount: Consumed 6ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-netns-652c23fc\x2d94b7\x2d440c\x2d89cf\x2dbc2999359623.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /run/netns/652c23fc-94b7-440c-89cf-bc2999359623.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-netns-652c23fc\x2d94b7\x2d440c\x2d89cf\x2dbc2999359623.mount: Consumed 3ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-92a1c1788a499b998b31f0f526f176eb2e52ecce75221c71b4c074b348e5a677-merged.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/92a1c1788a499b998b31f0f526f176eb2e52ecce75221c71b4c074b348e5a677/merged.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-92a1c1788a499b998b31f0f526f176eb2e52ecce75221c71b4c074b348e5a677-merged.mount: Consumed 6ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-9ac9106efc7becfc75e53c6f3bbb75c9b475700866d36b78720d1821a3f54d87-userdata-shm.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /run/containers/storage/overlay-containers/9ac9106efc7becfc75e53c6f3bbb75c9b475700866d36b78720d1821a3f54d87/userdata/shm.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-9ac9106efc7becfc75e53c6f3bbb75c9b475700866d36b78720d1821a3f54d87-userdata-shm.mount: Consumed 3ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-ff7777c7\x2da1dc\x2d413e\x2d8da1\x2dc4ba07527037-volumes-kubernetes.io\x7esecret-proxy\x2dtls.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/kubelet/pods/ff7777c7-a1dc-413e-8da1-c4ba07527037/volumes/kubernetes.io~secret/proxy-tls.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-ff7777c7\x2da1dc\x2d413e\x2d8da1\x2dc4ba07527037-volumes-kubernetes.io\x7esecret-proxy\x2dtls.mount: Consumed 3ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-46cf33e4\x2dfc3b\x2d4f7a\x2db0ab\x2ddc2cbc5a5e77-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbs8fq.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/kubelet/pods/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77/volumes/kubernetes.io~projected/kube-api-access-bs8fq.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-46cf33e4\x2dfc3b\x2d4f7a\x2db0ab\x2ddc2cbc5a5e77-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbs8fq.mount: Consumed 3ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-422c8976e5316750bfcd5cc5a0d60f7a3fcd666801a04dc9000f94c8dd85d5d5-userdata-shm.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /run/containers/storage/overlay-containers/422c8976e5316750bfcd5cc5a0d60f7a3fcd666801a04dc9000f94c8dd85d5d5/userdata/shm.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-422c8976e5316750bfcd5cc5a0d60f7a3fcd666801a04dc9000f94c8dd85d5d5-userdata-shm.mount: Consumed 3ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-70ad915a829732075b36bb02674c21d0ea63d492660ea0f38374a6f1400102c0-merged.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/70ad915a829732075b36bb02674c21d0ea63d492660ea0f38374a6f1400102c0/merged.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-70ad915a829732075b36bb02674c21d0ea63d492660ea0f38374a6f1400102c0-merged.mount: Consumed 3ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-9eb4a126\x2d482c\x2d4458\x2db901\x2de2e7a15dfd93-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db4fbl.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/kubelet/pods/9eb4a126-482c-4458-b901-e2e7a15dfd93/volumes/kubernetes.io~projected/kube-api-access-b4fbl.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-9eb4a126\x2d482c\x2d4458\x2db901\x2de2e7a15dfd93-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db4fbl.mount: Consumed 3ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: tmp.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted Temporary Directory (/tmp).
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: tmp.mount: Consumed 6ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-netns-4d082a50\x2d4c8a\x2d4970\x2dbade\x2d95bf44983bd3.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /run/netns/4d082a50-4c8a-4970-bade-95bf44983bd3.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-netns-4d082a50\x2d4c8a\x2d4970\x2dbade\x2d95bf44983bd3.mount: Consumed 3ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-ff7777c7\x2da1dc\x2d413e\x2d8da1\x2dc4ba07527037-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dscnpz.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/kubelet/pods/ff7777c7-a1dc-413e-8da1-c4ba07527037/volumes/kubernetes.io~projected/kube-api-access-scnpz.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-ff7777c7\x2da1dc\x2d413e\x2d8da1\x2dc4ba07527037-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dscnpz.mount: Consumed 2ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-ipcns-4683a5dd\x2d6f28\x2d4f95\x2db6df\x2d7f103be2d0f8.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /run/ipcns/4683a5dd-6f28-4f95-b6df-7f103be2d0f8.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-ipcns-4683a5dd\x2d6f28\x2d4f95\x2db6df\x2d7f103be2d0f8.mount: Consumed 2ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-4bfac8b1af556bdac8bbaf06ff68c34fe8940661b6fa1ae20110aa34bef10e0c-merged.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/4bfac8b1af556bdac8bbaf06ff68c34fe8940661b6fa1ae20110aa34bef10e0c/merged.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-4bfac8b1af556bdac8bbaf06ff68c34fe8940661b6fa1ae20110aa34bef10e0c-merged.mount: Consumed 3ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-e00be3053de2076113871a96fc08c21607c9d30685a883b9e7c21bd67f1dd6af-merged.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/e00be3053de2076113871a96fc08c21607c9d30685a883b9e7c21bd67f1dd6af/merged.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-e00be3053de2076113871a96fc08c21607c9d30685a883b9e7c21bd67f1dd6af-merged.mount: Consumed 3ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-8c0d70c1a4c567b783f3294918316e2610fa4bb6320bba5ebd9a7e16a726bb74-merged.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/8c0d70c1a4c567b783f3294918316e2610fa4bb6320bba5ebd9a7e16a726bb74/merged.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-8c0d70c1a4c567b783f3294918316e2610fa4bb6320bba5ebd9a7e16a726bb74-merged.mount: Consumed 2ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-cd0b87e63117453924469fa23a8876f62c240ad1b21efb83dea15bc99dfc3092-userdata-shm.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /run/containers/storage/overlay-containers/cd0b87e63117453924469fa23a8876f62c240ad1b21efb83dea15bc99dfc3092/userdata/shm.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-cd0b87e63117453924469fa23a8876f62c240ad1b21efb83dea15bc99dfc3092-userdata-shm.mount: Consumed 3ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-5fee4f72ddb289d7c724dc134f484551f32fef41b4335e681c821de016db4948-userdata-shm.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /run/containers/storage/overlay-containers/5fee4f72ddb289d7c724dc134f484551f32fef41b4335e681c821de016db4948/userdata/shm.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-5fee4f72ddb289d7c724dc134f484551f32fef41b4335e681c821de016db4948-userdata-shm.mount: Consumed 3ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-utsns-032b2b6e\x2de8bd\x2d477c\x2dbe7b\x2d99b07a9ca111.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /run/utsns/032b2b6e-e8bd-477c-be7b-99b07a9ca111.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-utsns-032b2b6e\x2de8bd\x2d477c\x2dbe7b\x2d99b07a9ca111.mount: Consumed 2ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-utsns-878b4be7\x2d9dda\x2d4a34\x2da051\x2d3b53bc09a6dc.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /run/utsns/878b4be7-9dda-4a34-a051-3b53bc09a6dc.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-utsns-878b4be7\x2d9dda\x2d4a34\x2da051\x2d3b53bc09a6dc.mount: Consumed 2ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-ec7265f344718951a2377c7a972d55b2a1cc9e6d71dec99ca7a7cdcc72fca39a-merged.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/ec7265f344718951a2377c7a972d55b2a1cc9e6d71dec99ca7a7cdcc72fca39a/merged.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-ec7265f344718951a2377c7a972d55b2a1cc9e6d71dec99ca7a7cdcc72fca39a-merged.mount: Consumed 4ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-utsns-10b629ec\x2d6fd9\x2d4a7a\x2dbdf3\x2d191b484df0a5.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /run/utsns/10b629ec-6fd9-4a7a-bdf3-191b484df0a5.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-utsns-10b629ec\x2d6fd9\x2d4a7a\x2dbdf3\x2d191b484df0a5.mount: Consumed 3ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-ipcns-652c23fc\x2d94b7\x2d440c\x2d89cf\x2dbc2999359623.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /run/ipcns/652c23fc-94b7-440c-89cf-bc2999359623.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-ipcns-652c23fc\x2d94b7\x2d440c\x2d89cf\x2dbc2999359623.mount: Consumed 3ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-a5ccef55\x2d3f5c\x2d4ffc\x2d82f9\x2d586324e62a37-volume\x2dsubpaths-etc-tuned-5.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/kubelet/pods/a5ccef55-3f5c-4ffc-82f9-586324e62a37/volume-subpaths/etc/tuned/5.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-a5ccef55\x2d3f5c\x2d4ffc\x2d82f9\x2d586324e62a37-volume\x2dsubpaths-etc-tuned-5.mount: Consumed 2ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-e8af7f7ca7adf96b62ff7718f8ba9039469c70a8aeb376394e181fef4f0bea33-merged.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/e8af7f7ca7adf96b62ff7718f8ba9039469c70a8aeb376394e181fef4f0bea33/merged.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-e8af7f7ca7adf96b62ff7718f8ba9039469c70a8aeb376394e181fef4f0bea33-merged.mount: Consumed 6ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-35a798577b4d33152c7fa3d26a30c0d615b6c74062327e54085b58e45f04d581-merged.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/35a798577b4d33152c7fa3d26a30c0d615b6c74062327e54085b58e45f04d581/merged.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-35a798577b4d33152c7fa3d26a30c0d615b6c74062327e54085b58e45f04d581-merged.mount: Consumed 2ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-ipcns-68c19172\x2dc7c8\x2d4a6b\x2d880c\x2de79152a16a50.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /run/ipcns/68c19172-c7c8-4a6b-880c-e79152a16a50.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-ipcns-68c19172\x2dc7c8\x2d4a6b\x2d880c\x2de79152a16a50.mount: Consumed 3ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-7da00340\x2d9715\x2d48ac\x2db144\x2d4705de276bf5-volumes-kubernetes.io\x7esecret-ovn\x2dcert.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/kubelet/pods/7da00340-9715-48ac-b144-4705de276bf5/volumes/kubernetes.io~secret/ovn-cert.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-7da00340\x2d9715\x2d48ac\x2db144\x2d4705de276bf5-volumes-kubernetes.io\x7esecret-ovn\x2dcert.mount: Consumed 2ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-netns-4383980e\x2dee96\x2d45fd\x2d8b0e\x2d55d1e1a5408f.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /run/netns/4383980e-ee96-45fd-8b0e-55d1e1a5408f.
Feb 23 17:49:53 ip-10-0-136-68 kernel: device e35d890abd5d4b0 left promiscuous mode
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-netns-4383980e\x2dee96\x2d45fd\x2d8b0e\x2d55d1e1a5408f.mount: Consumed 2ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-ipcns-032b2b6e\x2de8bd\x2d477c\x2dbe7b\x2d99b07a9ca111.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /run/ipcns/032b2b6e-e8bd-477c-be7b-99b07a9ca111.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: run-ipcns-032b2b6e\x2de8bd\x2d477c\x2dbe7b\x2d99b07a9ca111.mount: Consumed 2ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-7f25c5a9\x2db9c7\x2d4220\x2da892\x2d362cf6b33878-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d22vqh.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/kubelet/pods/7f25c5a9-b9c7-4220-a892-362cf6b33878/volumes/kubernetes.io~projected/kube-api-access-22vqh.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-7f25c5a9\x2db9c7\x2d4220\x2da892\x2d362cf6b33878-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d22vqh.mount: Consumed 3ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-0268b68d\x2d53b2\x2d454a\x2da03b\x2d37bd38d269bc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqvgqb.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/kubelet/pods/0268b68d-53b2-454a-a03b-37bd38d269bc/volumes/kubernetes.io~projected/kube-api-access-qvgqb.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-0268b68d\x2d53b2\x2d454a\x2da03b\x2d37bd38d269bc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqvgqb.mount: Consumed 3ms CPU time
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-f9245b10e0b7c5bb3b3dddfd21b6e04c5b888405df5b72bd15c45c5bcbdd5c79-merged.mount: Succeeded.
Feb 23 17:49:53 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/f9245b10e0b7c5bb3b3dddfd21b6e04c5b888405df5b72bd15c45c5bcbdd5c79/merged.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-f9245b10e0b7c5bb3b3dddfd21b6e04c5b888405df5b72bd15c45c5bcbdd5c79-merged.mount: Consumed 3ms CPU time
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-7da00340\x2d9715\x2d48ac\x2db144\x2d4705de276bf5-volumes-kubernetes.io\x7esecret-ovn\x2dnode\x2dmetrics\x2dcert.mount: Succeeded.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/kubelet/pods/7da00340-9715-48ac-b144-4705de276bf5/volumes/kubernetes.io~secret/ovn-node-metrics-cert.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-7da00340\x2d9715\x2d48ac\x2db144\x2d4705de276bf5-volumes-kubernetes.io\x7esecret-ovn\x2dnode\x2dmetrics\x2dcert.mount: Consumed 2ms CPU time
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-51964eb18e6022941b975967a69f22726a338f987432578e85470ce3bbd8520c-merged.mount: Succeeded.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/51964eb18e6022941b975967a69f22726a338f987432578e85470ce3bbd8520c/merged.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-51964eb18e6022941b975967a69f22726a338f987432578e85470ce3bbd8520c-merged.mount: Consumed 6ms CPU time
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-082cb0c7fea0c6d0cc826b0bf27632416e177d08cafd122e16754f87bbdaf497-merged.mount: Succeeded.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/082cb0c7fea0c6d0cc826b0bf27632416e177d08cafd122e16754f87bbdaf497/merged.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-082cb0c7fea0c6d0cc826b0bf27632416e177d08cafd122e16754f87bbdaf497-merged.mount: Consumed 2ms CPU time
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: run-utsns-652c23fc\x2d94b7\x2d440c\x2d89cf\x2dbc2999359623.mount: Succeeded.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Unmounted /run/utsns/652c23fc-94b7-440c-89cf-bc2999359623.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: run-utsns-652c23fc\x2d94b7\x2d440c\x2d89cf\x2dbc2999359623.mount: Consumed 2ms CPU time
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-e64acdcf723c5cfdef5178991b4a986acaafa6c7cad791601039d9ff2e098319-merged.mount: Succeeded.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/e64acdcf723c5cfdef5178991b4a986acaafa6c7cad791601039d9ff2e098319/merged.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-e64acdcf723c5cfdef5178991b4a986acaafa6c7cad791601039d9ff2e098319-merged.mount: Consumed 5ms CPU time
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-ff7777c7\x2da1dc\x2d413e\x2d8da1\x2dc4ba07527037-volumes-kubernetes.io\x7esecret-cookie\x2dsecret.mount: Succeeded.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/kubelet/pods/ff7777c7-a1dc-413e-8da1-c4ba07527037/volumes/kubernetes.io~secret/cookie-secret.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-ff7777c7\x2da1dc\x2d413e\x2d8da1\x2dc4ba07527037-volumes-kubernetes.io\x7esecret-cookie\x2dsecret.mount: Consumed 2ms CPU time
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-7da00340\x2d9715\x2d48ac\x2db144\x2d4705de276bf5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp9564.mount: Succeeded.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/kubelet/pods/7da00340-9715-48ac-b144-4705de276bf5/volumes/kubernetes.io~projected/kube-api-access-p9564.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-7da00340\x2d9715\x2d48ac\x2db144\x2d4705de276bf5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp9564.mount: Consumed 2ms CPU time
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-11b370a4897f403b2c91efb4db4171bc1e24e229865f7db2274785ed667e41c5-merged.mount: Succeeded.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/11b370a4897f403b2c91efb4db4171bc1e24e229865f7db2274785ed667e41c5/merged.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-11b370a4897f403b2c91efb4db4171bc1e24e229865f7db2274785ed667e41c5-merged.mount: Consumed 5ms CPU time
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-4f5255453886cb376007a1a813df04b2cb4a0db60f186bfd9be387a4ccaaeb20-merged.mount: Succeeded.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/4f5255453886cb376007a1a813df04b2cb4a0db60f186bfd9be387a4ccaaeb20/merged.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-4f5255453886cb376007a1a813df04b2cb4a0db60f186bfd9be387a4ccaaeb20-merged.mount: Consumed 6ms CPU time
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: run-netns-4683a5dd\x2d6f28\x2d4f95\x2db6df\x2d7f103be2d0f8.mount: Succeeded.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Unmounted /run/netns/4683a5dd-6f28-4f95-b6df-7f103be2d0f8.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: run-netns-4683a5dd\x2d6f28\x2d4f95\x2db6df\x2d7f103be2d0f8.mount: Consumed 2ms CPU time
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-ef10a721dc4af6cb8bf5d214a02f55da275e74b6cff1f125bb09c3ca7ecdb8bf-merged.mount: Succeeded.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/ef10a721dc4af6cb8bf5d214a02f55da275e74b6cff1f125bb09c3ca7ecdb8bf/merged.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-ef10a721dc4af6cb8bf5d214a02f55da275e74b6cff1f125bb09c3ca7ecdb8bf-merged.mount: Consumed 5ms CPU time
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-314c032c9a03088cca1e4c8e791e392952dad34af8aeb20f3f7a7cae615408d0-merged.mount: Succeeded.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/314c032c9a03088cca1e4c8e791e392952dad34af8aeb20f3f7a7cae615408d0/merged.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-314c032c9a03088cca1e4c8e791e392952dad34af8aeb20f3f7a7cae615408d0-merged.mount: Consumed 3ms CPU time
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: run-utsns-4173af40\x2d9a8f\x2d40dd\x2d9e34\x2d587a95d5903e.mount: Succeeded.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Unmounted /run/utsns/4173af40-9a8f-40dd-9e34-587a95d5903e.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: run-utsns-4173af40\x2d9a8f\x2d40dd\x2d9e34\x2d587a95d5903e.mount: Consumed 2ms CPU time
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: run-utsns-2ca3da91\x2dc233\x2d4b87\x2d902c\x2d88de41d8c9db.mount: Succeeded.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Unmounted /run/utsns/2ca3da91-c233-4b87-902c-88de41d8c9db.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: run-utsns-2ca3da91\x2dc233\x2d4b87\x2d902c\x2d88de41d8c9db.mount: Consumed 2ms CPU time
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-f7de530ab1cbc8bd6261697fc3ad7764454b1b28bbe5b65aecbcba3461b1dcd9-merged.mount: Succeeded.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/f7de530ab1cbc8bd6261697fc3ad7764454b1b28bbe5b65aecbcba3461b1dcd9/merged.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-f7de530ab1cbc8bd6261697fc3ad7764454b1b28bbe5b65aecbcba3461b1dcd9-merged.mount: Consumed 2ms CPU time
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-a5ccef55\x2d3f5c\x2d4ffc\x2d82f9\x2d586324e62a37-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtntbd.mount: Succeeded.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/kubelet/pods/a5ccef55-3f5c-4ffc-82f9-586324e62a37/volumes/kubernetes.io~projected/kube-api-access-tntbd.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-a5ccef55\x2d3f5c\x2d4ffc\x2d82f9\x2d586324e62a37-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtntbd.mount: Consumed 2ms CPU time
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: run-netns-10b629ec\x2d6fd9\x2d4a7a\x2dbdf3\x2d191b484df0a5.mount: Succeeded.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Unmounted /run/netns/10b629ec-6fd9-4a7a-bdf3-191b484df0a5.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: run-netns-10b629ec\x2d6fd9\x2d4a7a\x2dbdf3\x2d191b484df0a5.mount: Consumed 3ms CPU time
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-13348bbd4163d6d47dd340a767cb6054ce6e53f74e432c41d36d831c6128365a-merged.mount: Succeeded.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/13348bbd4163d6d47dd340a767cb6054ce6e53f74e432c41d36d831c6128365a/merged.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-13348bbd4163d6d47dd340a767cb6054ce6e53f74e432c41d36d831c6128365a-merged.mount: Consumed 3ms CPU time
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-1134f8bb68d50a8677494fa5e41eaf9ad1ebdcb94153d0230cf5b75a8a67cd3b-merged.mount: Succeeded.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/1134f8bb68d50a8677494fa5e41eaf9ad1ebdcb94153d0230cf5b75a8a67cd3b/merged.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-1134f8bb68d50a8677494fa5e41eaf9ad1ebdcb94153d0230cf5b75a8a67cd3b-merged.mount: Consumed 2ms CPU time
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: run-netns-f547dd21\x2d4ba2\x2d4f6b\x2dbdf0\x2d89cefc13a119.mount: Succeeded.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Unmounted /run/netns/f547dd21-4ba2-4f6b-bdf0-89cefc13a119.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: run-netns-f547dd21\x2d4ba2\x2d4f6b\x2dbdf0\x2d89cefc13a119.mount: Consumed 2ms CPU time
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: run-ipcns-69dd2878\x2df624\x2d4440\x2d97e8\x2d7ece7e4437b1.mount: Succeeded.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Unmounted /run/ipcns/69dd2878-f624-4440-97e8-7ece7e4437b1.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: run-ipcns-69dd2878\x2df624\x2d4440\x2d97e8\x2d7ece7e4437b1.mount: Consumed 2ms CPU time
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-155ce189bbf3f0b6f9b74c22404f2cf7b99dd8c488c5b73dd442b30c9b4d055c-merged.mount: Succeeded.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/155ce189bbf3f0b6f9b74c22404f2cf7b99dd8c488c5b73dd442b30c9b4d055c/merged.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-155ce189bbf3f0b6f9b74c22404f2cf7b99dd8c488c5b73dd442b30c9b4d055c-merged.mount: Consumed 6ms CPU time
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-f4dbeb9b22c8b9aeac27867e75136d58f9fad4b07333585b2f80e945be857ef4-merged.mount: Succeeded.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/f4dbeb9b22c8b9aeac27867e75136d58f9fad4b07333585b2f80e945be857ef4/merged.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-f4dbeb9b22c8b9aeac27867e75136d58f9fad4b07333585b2f80e945be857ef4-merged.mount: Consumed 7ms CPU time
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: run-netns-67115877\x2d1e45\x2d4be1\x2dab56\x2ddfcafa2c613e.mount: Succeeded.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Unmounted /run/netns/67115877-1e45-4be1-ab56-dfcafa2c613e.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: run-netns-67115877\x2d1e45\x2d4be1\x2dab56\x2ddfcafa2c613e.mount: Consumed 2ms CPU time
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-e0aa36fed1afa7be489c71ab5015e989ce0777ecb5588a4a0c48a3c52cbe5e1c-merged.mount: Succeeded.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/e0aa36fed1afa7be489c71ab5015e989ce0777ecb5588a4a0c48a3c52cbe5e1c/merged.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-e0aa36fed1afa7be489c71ab5015e989ce0777ecb5588a4a0c48a3c52cbe5e1c-merged.mount: Consumed 6ms CPU time
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-032748921edd35d4e60c50f02120c6c0cef605d91ac2c90181c8fcdee1ab5bf4-merged.mount: Succeeded.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/032748921edd35d4e60c50f02120c6c0cef605d91ac2c90181c8fcdee1ab5bf4/merged.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-032748921edd35d4e60c50f02120c6c0cef605d91ac2c90181c8fcdee1ab5bf4-merged.mount: Consumed 2ms CPU time
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-e0abac93\x2d3e79\x2d4a32\x2d8375\x2d5ef1a2e59687-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt77mc.mount: Succeeded.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/kubelet/pods/e0abac93-3e79-4a32-8375-5ef1a2e59687/volumes/kubernetes.io~projected/kube-api-access-t77mc.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-e0abac93\x2d3e79\x2d4a32\x2d8375\x2d5ef1a2e59687-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt77mc.mount: Consumed 2ms CPU time
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-7e0d6454bb22460756b397db5fe9879fb86db21b4abf902058188eb118f32ed3-merged.mount: Succeeded.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/7e0d6454bb22460756b397db5fe9879fb86db21b4abf902058188eb118f32ed3/merged.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-7e0d6454bb22460756b397db5fe9879fb86db21b4abf902058188eb118f32ed3-merged.mount: Consumed 2ms CPU time
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: run-utsns-4d082a50\x2d4c8a\x2d4970\x2dbade\x2d95bf44983bd3.mount: Succeeded.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Unmounted /run/utsns/4d082a50-4c8a-4970-bade-95bf44983bd3.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: run-utsns-4d082a50\x2d4c8a\x2d4970\x2dbade\x2d95bf44983bd3.mount: Consumed 2ms CPU time
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-757b7544\x2dc265\x2d49ce\x2da1f0\x2d22cca4bf919f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4z9qm.mount: Succeeded.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/kubelet/pods/757b7544-c265-49ce-a1f0-22cca4bf919f/volumes/kubernetes.io~projected/kube-api-access-4z9qm.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-757b7544\x2dc265\x2d49ce\x2da1f0\x2d22cca4bf919f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4z9qm.mount: Consumed 2ms CPU time
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-3cfdf0cde1f31aa89f98706599fc3d0b6658b6ad3a92ee0b72a0c6948e31bbcb-merged.mount: Succeeded.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/3cfdf0cde1f31aa89f98706599fc3d0b6658b6ad3a92ee0b72a0c6948e31bbcb/merged.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-3cfdf0cde1f31aa89f98706599fc3d0b6658b6ad3a92ee0b72a0c6948e31bbcb-merged.mount: Consumed 6ms CPU time
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-a5ccef55\x2d3f5c\x2d4ffc\x2d82f9\x2d586324e62a37-volume\x2dsubpaths-etc-tuned-4.mount: Succeeded.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/kubelet/pods/a5ccef55-3f5c-4ffc-82f9-586324e62a37/volume-subpaths/etc/tuned/4.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-a5ccef55\x2d3f5c\x2d4ffc\x2d82f9\x2d586324e62a37-volume\x2dsubpaths-etc-tuned-4.mount: Consumed 2ms CPU time
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-c517f388a13f54195edcc58e570413397d849c1bc7ce5854e53248ad91f4a2c7-merged.mount: Succeeded.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/c517f388a13f54195edcc58e570413397d849c1bc7ce5854e53248ad91f4a2c7/merged.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-c517f388a13f54195edcc58e570413397d849c1bc7ce5854e53248ad91f4a2c7-merged.mount: Consumed 2ms CPU time
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: run-netns-69dd2878\x2df624\x2d4440\x2d97e8\x2d7ece7e4437b1.mount: Succeeded.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Unmounted /run/netns/69dd2878-f624-4440-97e8-7ece7e4437b1.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: run-netns-69dd2878\x2df624\x2d4440\x2d97e8\x2d7ece7e4437b1.mount: Consumed 2ms CPU time
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-eca24fddaa1156fd3c6922c8c6070a3affe73a52e0ac62860c77cba51fee37f9-merged.mount: Succeeded.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/eca24fddaa1156fd3c6922c8c6070a3affe73a52e0ac62860c77cba51fee37f9/merged.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-eca24fddaa1156fd3c6922c8c6070a3affe73a52e0ac62860c77cba51fee37f9-merged.mount: Consumed 3ms CPU time
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-adcfa5f5\x2d1c6b\x2d415e\x2d8e69\x2db72e137820e1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkf689.mount: Succeeded.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/kubelet/pods/adcfa5f5-1c6b-415e-8e69-b72e137820e1/volumes/kubernetes.io~projected/kube-api-access-kf689.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-adcfa5f5\x2d1c6b\x2d415e\x2d8e69\x2db72e137820e1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkf689.mount: Consumed 2ms CPU time
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: run-netns-878b4be7\x2d9dda\x2d4a34\x2da051\x2d3b53bc09a6dc.mount: Succeeded.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Unmounted /run/netns/878b4be7-9dda-4a34-a051-3b53bc09a6dc.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: run-netns-878b4be7\x2d9dda\x2d4a34\x2da051\x2d3b53bc09a6dc.mount: Consumed 2ms CPU time
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: run-ipcns-878b4be7\x2d9dda\x2d4a34\x2da051\x2d3b53bc09a6dc.mount: Succeeded.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Unmounted /run/ipcns/878b4be7-9dda-4a34-a051-3b53bc09a6dc.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: run-ipcns-878b4be7\x2d9dda\x2d4a34\x2da051\x2d3b53bc09a6dc.mount: Consumed 2ms CPU time
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: run-ipcns-4d082a50\x2d4c8a\x2d4970\x2dbade\x2d95bf44983bd3.mount: Succeeded.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Unmounted /run/ipcns/4d082a50-4c8a-4970-bade-95bf44983bd3.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: run-ipcns-4d082a50\x2d4c8a\x2d4970\x2dbade\x2d95bf44983bd3.mount: Consumed 2ms CPU time
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-13c79e72eecd069f9d9612b393f109eedd52ca47a714d3a46a3e5c4710a975d4-merged.mount: Succeeded.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/13c79e72eecd069f9d9612b393f109eedd52ca47a714d3a46a3e5c4710a975d4/merged.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-13c79e72eecd069f9d9612b393f109eedd52ca47a714d3a46a3e5c4710a975d4-merged.mount: Consumed 6ms CPU time
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: run-ipcns-67115877\x2d1e45\x2d4be1\x2dab56\x2ddfcafa2c613e.mount: Succeeded.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Unmounted /run/ipcns/67115877-1e45-4be1-ab56-dfcafa2c613e.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: run-ipcns-67115877\x2d1e45\x2d4be1\x2dab56\x2ddfcafa2c613e.mount: Consumed 2ms CPU time
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: run-ipcns-b7e04edc\x2d986e\x2d48bf\x2d8822\x2d18763de96831.mount: Succeeded.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Unmounted /run/ipcns/b7e04edc-986e-48bf-8822-18763de96831.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: run-ipcns-b7e04edc\x2d986e\x2d48bf\x2d8822\x2d18763de96831.mount: Consumed 2ms CPU time
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: run-utsns-69dd2878\x2df624\x2d4440\x2d97e8\x2d7ece7e4437b1.mount: Succeeded.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Unmounted /run/utsns/69dd2878-f624-4440-97e8-7ece7e4437b1.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: run-utsns-69dd2878\x2df624\x2d4440\x2d97e8\x2d7ece7e4437b1.mount: Consumed 2ms CPU time
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-17467ed7ee1b229ab5ad4401a84eb29dd5bf65ec449196ff000cdd5797aa0c34-merged.mount: Succeeded.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay/17467ed7ee1b229ab5ad4401a84eb29dd5bf65ec449196ff000cdd5797aa0c34/merged.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-17467ed7ee1b229ab5ad4401a84eb29dd5bf65ec449196ff000cdd5797aa0c34-merged.mount: Consumed 2ms CPU time
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: systemd-fsck@dev-disk-by\x2duuid-54e5ab65\x2dff73\x2d4a26\x2d8c44\x2d2a9765abf45f.service: Succeeded.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Stopped File System Check on /dev/disk/by-uuid/54e5ab65-ff73-4a26-8c44-2a9765abf45f.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: systemd-fsck@dev-disk-by\x2duuid-54e5ab65\x2dff73\x2d4a26\x2d8c44\x2d2a9765abf45f.service: Consumed 0 CPU time
Feb 23 17:49:54 ip-10-0-136-68 kernel: audit: type=1130 audit(1677174594.166:184): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2duuid-54e5ab65\x2dff73\x2d4a26\x2d8c44\x2d2a9765abf45f comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Removed slice system-systemd\x2dfsck.slice.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: system-systemd\x2dfsck.slice: Consumed 12ms CPU time
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Stopped target Swap.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Unmounting /var/lib/containers/storage/overlay...
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Unmounted /var/lib/containers/storage/overlay.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay.mount: Consumed 2ms CPU time
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Unmounting /var...
Feb 23 17:49:54 ip-10-0-136-68 umount[148928]: umount: /var: target is busy.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: var.mount: Mount process exited, code=exited status=32
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Failed unmounting /var.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Unmounting sysroot-ostree-deploy-rhcos-var.mount...
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Unmounting sysroot.mount...
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: sysroot.mount: Succeeded.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Unmounted sysroot.mount.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: sysroot.mount: Consumed 1ms CPU time
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: sysroot-ostree-deploy-rhcos-var.mount: Succeeded.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Unmounted sysroot-ostree-deploy-rhcos-var.mount.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: sysroot-ostree-deploy-rhcos-var.mount: Consumed 1ms CPU time
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Reached target Unmount All Filesystems.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Stopped target Local File Systems (Pre).
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Stopping Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: systemd-tmpfiles-setup-dev.service: Succeeded.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Stopped Create Static Device Nodes in /dev.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: systemd-tmpfiles-setup-dev.service: Consumed 0 CPU time
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: systemd-sysusers.service: Succeeded.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Stopped Create System Users.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: systemd-sysusers.service: Consumed 0 CPU time
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: lvm2-monitor.service: Succeeded.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Stopped Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: lvm2-monitor.service: Consumed 14ms CPU time
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Reached target Shutdown.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Reached target Final Step.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: systemd-reboot.service: Succeeded.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Started Reboot.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Reached target Reboot.
Feb 23 17:49:54 ip-10-0-136-68 systemd[1]: Shutting down.
Feb 23 17:49:54 ip-10-0-136-68 systemd-shutdown[1]: Syncing filesystems and block devices.
Feb 23 17:49:54 ip-10-0-136-68 systemd-journald[755]: Journal stopped
-- Boot 7e69ac5f095f4a9bb24c6e8366d55bca --
Feb 23 17:50:06 localhost kernel: Linux version 5.14.0-266.rt14.266.el9.x86_64 (mockbuild@x86-06.stream.rdu2.redhat.com) (gcc (GCC) 11.3.1 20221121 (Red Hat 11.3.1-4), GNU ld version 2.35.2-37.el9) #1 SMP PREEMPT_RT Wed Feb 15 03:31:56 UTC 2023
Feb 23 17:50:06 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Feb 23 17:50:06 localhost kernel: Command line: BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-368e32e4125ee712802e93c0d759a9e076516d95a7c2d319cdf7620c8d30cd10/vmlinuz-5.14.0-266.rt14.266.el9.x86_64 ostree=/ostree/boot.0/rhcos/368e32e4125ee712802e93c0d759a9e076516d95a7c2d319cdf7620c8d30cd10/0 ignition.platform.id=aws console=tty0 console=ttyS0,115200n8 root=UUID=c83680a9-dcc4-4413-a0a5-4681b35c650a rw rootflags=prjquota boot=UUID=54e5ab65-ff73-4a26-8c44-2a9765abf45f systemd.unified_cgroup_hierarchy=0 systemd.legacy_systemd_cgroup_controller=1
Feb 23 17:50:06 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 23 17:50:06 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 23 17:50:06 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 23 17:50:06 localhost kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Feb 23 17:50:06 localhost kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Feb 23 17:50:06 localhost kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Feb 23 17:50:06 localhost kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Feb 23 17:50:06 localhost kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 23 17:50:06 localhost kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Feb 23 17:50:06 localhost kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Feb 23 17:50:06 localhost kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Feb 23 17:50:06 localhost kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8
Feb 23 17:50:06 localhost kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format.
Feb 23 17:50:06 localhost kernel: signal: max sigframe size: 3632
Feb 23 17:50:06 localhost kernel: BIOS-provided physical RAM map:
Feb 23 17:50:06 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 23 17:50:06 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 23 17:50:06 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 23 17:50:06 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffe8fff] usable
Feb 23 17:50:06 localhost kernel: BIOS-e820: [mem 0x00000000bffe9000-0x00000000bfffffff] reserved
Feb 23 17:50:06 localhost kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Feb 23 17:50:06 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 23 17:50:06 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000042effffff] usable
Feb 23 17:50:06 localhost kernel: BIOS-e820: [mem 0x000000042f000000-0x000000043fffffff] reserved
Feb 23 17:50:06 localhost kernel: NX (Execute Disable) protection: active
Feb 23 17:50:06 localhost kernel: SMBIOS 2.7 present.
Feb 23 17:50:06 localhost kernel: DMI: Amazon EC2 m6i.xlarge/, BIOS 1.0 10/16/2017
Feb 23 17:50:06 localhost kernel: Hypervisor detected: KVM
Feb 23 17:50:06 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 23 17:50:06 localhost kernel: kvm-clock: using sched offset of 7344805562 cycles
Feb 23 17:50:06 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 23 17:50:06 localhost kernel: tsc: Detected 2899.998 MHz processor
Feb 23 17:50:06 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 23 17:50:06 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 23 17:50:06 localhost kernel: last_pfn = 0x42f000 max_arch_pfn = 0x400000000
Feb 23 17:50:06 localhost kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 23 17:50:06 localhost kernel: last_pfn = 0xbffe9 max_arch_pfn = 0x400000000
Feb 23 17:50:06 localhost kernel: Using GB pages for direct mapping
Feb 23 17:50:06 localhost kernel: RAMDISK: [mem 0x2e442000-0x33218fff]
Feb 23 17:50:06 localhost kernel: ACPI: Early table checksum verification disabled
Feb 23 17:50:06 localhost kernel: ACPI: RSDP 0x00000000000F8F00 000014 (v00 AMAZON)
Feb 23 17:50:06 localhost kernel: ACPI: RSDT 0x00000000BFFEE180 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Feb 23 17:50:06 localhost kernel: ACPI: WAET 0x00000000BFFEFFC0 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Feb 23 17:50:06 localhost kernel: ACPI: SLIT 0x00000000BFFEFF40 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 23 17:50:06 localhost kernel: ACPI: APIC 0x00000000BFFEFE80 000086 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 23 17:50:06 localhost kernel: ACPI: SRAT 0x00000000BFFEFDC0 0000C0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Feb 23 17:50:06 localhost kernel: ACPI: FACP 0x00000000BFFEFC80 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 23 17:50:06 localhost kernel: ACPI: DSDT 0x00000000BFFEEAC0 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Feb 23 17:50:06 localhost kernel: ACPI: FACS 0x00000000000F8EC0 000040
Feb 23 17:50:06 localhost kernel: ACPI: HPET 0x00000000BFFEFC40 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Feb 23 17:50:06 localhost kernel: ACPI: SSDT 0x00000000BFFEE280 00081F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Feb 23 17:50:06 localhost kernel: ACPI: SSDT 0x00000000BFFEE200 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Feb 23 17:50:06 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffeffc0-0xbffeffe7]
Feb 23 17:50:06 localhost kernel: ACPI: Reserving SLIT table memory at [mem 0xbffeff40-0xbffeffab]
Feb 23 17:50:06 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffefe80-0xbffeff05]
Feb 23 17:50:06 localhost kernel: ACPI: Reserving SRAT table memory at [mem 0xbffefdc0-0xbffefe7f]
Feb 23 17:50:06 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffefc80-0xbffefd93]
Feb 23 17:50:06 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffeeac0-0xbffefc19]
Feb 23 17:50:06 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xf8ec0-0xf8eff]
Feb 23 17:50:06 localhost kernel: ACPI: Reserving HPET table memory at [mem 0xbffefc40-0xbffefc77]
Feb 23 17:50:06 localhost kernel: ACPI: Reserving SSDT table memory at [mem 0xbffee280-0xbffeea9e]
Feb 23 17:50:06 localhost kernel: ACPI: Reserving SSDT table memory at [mem 0xbffee200-0xbffee27e]
Feb 23 17:50:06 localhost kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 23 17:50:06 localhost kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 23 17:50:06 localhost kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Feb 23 17:50:06 localhost kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Feb 23 17:50:06 localhost kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0xbfffffff]
Feb 23 17:50:06 localhost kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x43fffffff]
Feb 23 17:50:06 localhost kernel: NUMA: Initialized distance table, cnt=1
Feb 23 17:50:06 localhost kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x42effffff] -> [mem 0x00000000-0x42effffff]
Feb 23 17:50:06 localhost kernel: NODE_DATA(0) allocated [mem 0x42efd4000-0x42effefff]
Feb 23 17:50:06 localhost kernel: Zone ranges:
Feb 23 17:50:06 localhost kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 23 17:50:06 localhost kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 23 17:50:06 localhost kernel: Normal [mem 0x0000000100000000-0x000000042effffff]
Feb 23 17:50:06 localhost kernel: Device empty
Feb 23 17:50:06 localhost kernel: Movable zone start for each node
Feb 23 17:50:06 localhost kernel: Early memory node ranges
Feb 23 17:50:06 localhost kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 23 17:50:06 localhost kernel: node 0: [mem 0x0000000000100000-0x00000000bffe8fff]
Feb 23 17:50:06 localhost kernel: node 0: [mem 0x0000000100000000-0x000000042effffff]
Feb 23 17:50:06 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000042effffff]
Feb 23 17:50:06 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 23 17:50:06 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 23 17:50:06 localhost kernel: On node 0, zone Normal: 23 pages in unavailable ranges
Feb 23 17:50:06 localhost kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Feb 23 17:50:06 localhost kernel: ACPI: PM-Timer IO Port: 0xb008
Feb 23 17:50:06 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 23 17:50:06 localhost kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Feb 23 17:50:06 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 23 17:50:06 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 23 17:50:06 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 23 17:50:06 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 23 17:50:06 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 23 17:50:06 localhost kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 23 17:50:06 localhost kernel: TSC deadline timer available
Feb 23 17:50:06 localhost kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Feb 23 17:50:06 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Feb 23 17:50:06 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Feb 23 17:50:06 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Feb 23 17:50:06 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Feb 23 17:50:06 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffe9000-0xbfffffff]
Feb 23 17:50:06 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xdfffffff]
Feb 23 17:50:06 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xe0000000-0xe03fffff]
Feb 23 17:50:06 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xe0400000-0xfffbffff]
Feb 23 17:50:06 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Feb 23 17:50:06 localhost kernel: [mem 0xc0000000-0xdfffffff] available for PCI devices
Feb 23 17:50:06 localhost kernel: Booting paravirtualized kernel on KVM
Feb 23 17:50:06 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 23 17:50:06 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Feb 23 17:50:06 localhost kernel: percpu: Embedded 55 pages/cpu s188416 r8192 d28672 u524288
Feb 23 17:50:06 localhost kernel: pcpu-alloc: s188416 r8192 d28672 u524288 alloc=1*2097152
Feb 23 17:50:06 localhost kernel: pcpu-alloc: [0] 0 1 2 3
Feb 23 17:50:06 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Feb 23 17:50:06 localhost kernel: Fallback order for Node 0: 0
Feb 23 17:50:06 localhost kernel: Built 1 zonelists, mobility grouping on. Total pages: 4059945
Feb 23 17:50:06 localhost kernel: Policy zone: Normal
Feb 23 17:50:06 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-368e32e4125ee712802e93c0d759a9e076516d95a7c2d319cdf7620c8d30cd10/vmlinuz-5.14.0-266.rt14.266.el9.x86_64 ostree=/ostree/boot.0/rhcos/368e32e4125ee712802e93c0d759a9e076516d95a7c2d319cdf7620c8d30cd10/0 ignition.platform.id=aws console=tty0 console=ttyS0,115200n8 root=UUID=c83680a9-dcc4-4413-a0a5-4681b35c650a rw rootflags=prjquota boot=UUID=54e5ab65-ff73-4a26-8c44-2a9765abf45f systemd.unified_cgroup_hierarchy=0 systemd.legacy_systemd_cgroup_controller=1
Feb 23 17:50:06 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-368e32e4125ee712802e93c0d759a9e076516d95a7c2d319cdf7620c8d30cd10/vmlinuz-5.14.0-266.rt14.266.el9.x86_64 ostree=/ostree/boot.0/rhcos/368e32e4125ee712802e93c0d759a9e076516d95a7c2d319cdf7620c8d30cd10/0 boot=UUID=54e5ab65-ff73-4a26-8c44-2a9765abf45f", will be passed to user space.
Feb 23 17:50:06 localhost kernel: Dentry cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Feb 23 17:50:06 localhost kernel: Inode-cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb 23 17:50:06 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 23 17:50:06 localhost kernel: software IO TLB: area num 4.
Feb 23 17:50:06 localhost kernel: Memory: 3130180K/16498204K available (14342K kernel code, 5613K rwdata, 10060K rodata, 2784K init, 5432K bss, 477568K reserved, 0K cma-reserved)
Feb 23 17:50:06 localhost kernel: random: get_random_u64 called from kmem_cache_open+0x20/0x2b0 with crng_init=0
Feb 23 17:50:06 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 23 17:50:06 localhost kernel: ftrace: allocating 44349 entries in 174 pages
Feb 23 17:50:06 localhost kernel: ftrace: allocated 174 pages with 5 groups
Feb 23 17:50:06 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 23 17:50:06 localhost kernel: rcu: RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=4.
Feb 23 17:50:06 localhost kernel: rcu: RCU priority boosting: priority 1 delay 500 ms.
Feb 23 17:50:06 localhost kernel: rcu: RCU_SOFTIRQ processing moved to rcuc kthreads.
Feb 23 17:50:06 localhost kernel: No expedited grace period (rcu_normal_after_boot).
Feb 23 17:50:06 localhost kernel: Trampoline variant of Tasks RCU enabled.
Feb 23 17:50:06 localhost kernel: Rude variant of Tasks RCU enabled.
Feb 23 17:50:06 localhost kernel: Tracing variant of Tasks RCU enabled.
Feb 23 17:50:06 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 23 17:50:06 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 23 17:50:06 localhost kernel: NR_IRQS: 524544, nr_irqs: 456, preallocated irqs: 16
Feb 23 17:50:06 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 23 17:50:06 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Feb 23 17:50:06 localhost kernel: random: crng init done (trusting CPU's manufacturer)
Feb 23 17:50:06 localhost kernel: Console: colour VGA+ 80x25
Feb 23 17:50:06 localhost kernel: printk: console [tty0] enabled
Feb 23 17:50:06 localhost kernel: printk: console [ttyS0] enabled
Feb 23 17:50:06 localhost kernel: ACPI: Core revision 20211217
Feb 23 17:50:06 localhost kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Feb 23 17:50:06 localhost kernel: APIC: Switch to symmetric I/O mode setup
Feb 23 17:50:06 localhost kernel: x2apic enabled
Feb 23 17:50:06 localhost kernel: Switched APIC routing to physical x2apic.
Feb 23 17:50:06 localhost kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x29cd4133323, max_idle_ns: 440795296220 ns
Feb 23 17:50:06 localhost kernel: Calibrating delay loop (skipped) preset value.. 5799.99 BogoMIPS (lpj=2899998)
Feb 23 17:50:06 localhost kernel: pid_max: default: 32768 minimum: 301
Feb 23 17:50:06 localhost kernel: LSM: Security Framework initializing
Feb 23 17:50:06 localhost kernel: Yama: becoming mindful.
Feb 23 17:50:06 localhost kernel: SELinux: Initializing.
Feb 23 17:50:06 localhost kernel: LSM support for eBPF active
Feb 23 17:50:06 localhost kernel: Mount-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 23 17:50:06 localhost kernel: Mountpoint-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 23 17:50:06 localhost kernel: x86/tme: enabled by BIOS
Feb 23 17:50:06 localhost kernel: x86/mktme: No known encryption algorithm is supported: 0x0
Feb 23 17:50:06 localhost kernel: x86/mktme: disabled by BIOS
Feb 23 17:50:06 localhost kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Feb 23 17:50:06 localhost kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Feb 23 17:50:06 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 23 17:50:06 localhost kernel: Spectre V2 : Mitigation: Enhanced IBRS
Feb 23 17:50:06 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 23 17:50:06 localhost kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Feb 23 17:50:06 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 23 17:50:06 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 23 17:50:06 localhost kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 23 17:50:06 localhost kernel: Freeing SMP alternatives memory: 36K
Feb 23 17:50:06 localhost kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1235
Feb 23 17:50:06 localhost kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8375C CPU @ 2.90GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Feb 23 17:50:06 localhost kernel: cblist_init_generic: Setting adjustable number of callback queues.
Feb 23 17:50:06 localhost kernel: cblist_init_generic: Setting shift to 2 and lim to 1.
Feb 23 17:50:06 localhost kernel: cblist_init_generic: Setting shift to 2 and lim to 1.
Feb 23 17:50:06 localhost kernel: cblist_init_generic: Setting shift to 2 and lim to 1.
Feb 23 17:50:06 localhost kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Feb 23 17:50:06 localhost kernel: printk: console [ttyS0] printing thread started
Feb 23 17:50:06 localhost kernel: rcu: Hierarchical SRCU implementation.
Feb 23 17:50:06 localhost kernel: rcu: Max phase no-delay instances is 400.
Feb 23 17:50:06 localhost kernel: printk: console [tty0] printing thread started
Feb 23 17:50:06 localhost kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 23 17:50:06 localhost kernel: smp: Bringing up secondary CPUs ...
Feb 23 17:50:06 localhost kernel: x86: Booting SMP configuration:
Feb 23 17:50:06 localhost kernel: .... node #0, CPUs: #1 #2
Feb 23 17:50:06 localhost kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 23 17:50:06 localhost kernel: #3
Feb 23 17:50:06 localhost kernel: smp: Brought up 1 node, 4 CPUs
Feb 23 17:50:06 localhost kernel: smpboot: Max logical packages: 1
Feb 23 17:50:06 localhost kernel: smpboot: Total of 4 processors activated (23199.98 BogoMIPS)
Feb 23 17:50:06 localhost kernel: node 0 deferred pages initialised in 17ms
Feb 23 17:50:06 localhost kernel: devtmpfs: initialized
Feb 23 17:50:06 localhost kernel: x86/mm: Memory block size: 128MB
Feb 23 17:50:06 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 23 17:50:06 localhost kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 23 17:50:06 localhost kernel: pinctrl core: initialized pinctrl subsystem
Feb 23 17:50:06 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 23 17:50:06 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL pool for atomic allocations
Feb 23 17:50:06 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 23 17:50:06 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 23 17:50:06 localhost kernel: audit: initializing netlink subsys (disabled)
Feb 23 17:50:06 localhost kernel: audit: type=2000 audit(1677174604.229:1): state=initialized audit_enabled=0 res=1
Feb 23 17:50:06 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Feb 23 17:50:06 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 23 17:50:06 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 23 17:50:06 localhost kernel: cpuidle: using governor menu
Feb 23 17:50:06 localhost kernel: HugeTLB: can optimize 4095 vmemmap pages for hugepages-1048576kB
Feb 23 17:50:06 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 23 17:50:06 localhost kernel: PCI: Using configuration type 1 for base access
Feb 23 17:50:06 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 23 17:50:06 localhost kernel: HugeTLB: can optimize 7 vmemmap pages for hugepages-2048kB
Feb 23 17:50:06 localhost kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 23 17:50:06 localhost kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 23 17:50:06 localhost kernel: cryptd: max_cpu_qlen set to 1000
Feb 23 17:50:06 localhost kernel: ACPI: Added _OSI(Module Device)
Feb 23 17:50:06 localhost kernel: ACPI: Added _OSI(Processor Device)
Feb 23 17:50:06 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 23 17:50:06 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 23 17:50:06 localhost kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 23 17:50:06 localhost kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 23 17:50:06 localhost kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 23 17:50:06 localhost kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Feb 23 17:50:06 localhost kernel: ACPI: Interpreter enabled
Feb 23 17:50:06 localhost kernel: ACPI: PM: (supports S0 S4 S5)
Feb 23 17:50:06 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Feb 23 17:50:06 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 23 17:50:06 localhost kernel: PCI: Using E820 reservations for host bridge windows
Feb 23 17:50:06 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 23 17:50:06 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 23 17:50:06 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI EDR HPX-Type3]
Feb 23 17:50:06 localhost kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Feb 23 17:50:06 localhost kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Feb 23 17:50:06 localhost kernel: acpiphp: Slot [3] registered
Feb 23 17:50:06 localhost kernel: acpiphp: Slot [4] registered
Feb 23 17:50:06 localhost kernel: acpiphp: Slot [5] registered
Feb 23 17:50:06 localhost kernel: acpiphp: Slot [6] registered
Feb 23 17:50:06 localhost kernel: acpiphp: Slot [7] registered
Feb 23 17:50:06 localhost kernel: acpiphp: Slot [8] registered
Feb 23 17:50:06 localhost kernel: acpiphp: Slot [9] registered
Feb 23 17:50:06 localhost kernel: acpiphp: Slot [10] registered
Feb 23 17:50:06 localhost kernel: acpiphp: Slot [11] registered
Feb 23 17:50:06 localhost kernel: acpiphp: Slot [12] registered
Feb 23 17:50:06 localhost kernel: acpiphp: Slot [13] registered
Feb 23 17:50:06 localhost kernel: acpiphp: Slot [14] registered
Feb 23 17:50:06 localhost kernel: acpiphp: Slot [15] registered
Feb 23 17:50:06 localhost kernel: acpiphp: Slot [16] registered
Feb 23 17:50:06 localhost kernel: acpiphp: Slot [17] registered
Feb 23 17:50:06 localhost kernel: acpiphp: Slot [18] registered
Feb 23 17:50:06 localhost kernel: acpiphp: Slot [19] registered
Feb 23 17:50:06 localhost kernel: acpiphp: Slot [20] registered
Feb 23 17:50:06 localhost kernel: acpiphp: Slot [21] registered
Feb 23 17:50:06 localhost kernel: acpiphp: Slot [22] registered
Feb 23 17:50:06 localhost kernel: acpiphp: Slot [23] registered
Feb 23 17:50:06 localhost kernel: acpiphp: Slot [24] registered
Feb 23 17:50:06 localhost kernel: acpiphp: Slot [25] registered
Feb 23 17:50:06 localhost kernel: acpiphp: Slot [26] registered
Feb 23 17:50:06 localhost kernel: acpiphp: Slot [27] registered
Feb 23 17:50:06 localhost kernel: acpiphp: Slot [28] registered
Feb 23 17:50:06 localhost kernel: acpiphp: Slot [29] registered
Feb 23 17:50:06 localhost kernel: acpiphp: Slot [30] registered
Feb 23 17:50:06 localhost kernel: acpiphp: Slot [31] registered
Feb 23 17:50:06 localhost kernel: PCI host bridge to bus 0000:00
Feb 23 17:50:06 localhost kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 23 17:50:06 localhost kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 23 17:50:06 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 23 17:50:06 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Feb 23 17:50:06 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x440000000-0x20043fffffff window]
Feb 23 17:50:06 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 23 17:50:06 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 23 17:50:06 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 23 17:50:06 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Feb 23 17:50:06 localhost kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Feb 23 17:50:06 localhost kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Feb 23 17:50:06 localhost kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Feb 23 17:50:06 localhost kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Feb 23 17:50:06 localhost kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Feb 23 17:50:06 localhost kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Feb 23 17:50:06 localhost kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Feb 23 17:50:06 localhost kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Feb 23 17:50:06 localhost kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Feb 23 17:50:06 localhost kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Feb 23 17:50:06 localhost kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Feb 23 17:50:06 localhost kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 23 17:50:06 localhost kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 23 17:50:06 localhost kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Feb 23 17:50:06 localhost kernel: pci 0000:00:04.0: enabling Extended Tags
Feb 23 17:50:06 localhost kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 23 17:50:06 localhost kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf5fff]
Feb 23 17:50:06 localhost kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf6000-0xfebf7fff]
Feb 23 17:50:06 localhost kernel: pci 0000:00:05.0: reg 0x18: [mem 0xfe800000-0xfe87ffff pref]
Feb 23 17:50:06 localhost kernel: pci 0000:00:05.0: enabling Extended Tags
Feb 23 17:50:06 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 23 17:50:06 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 23 17:50:06 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 23 17:50:06 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 23 17:50:06 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 23 17:50:06 localhost kernel: iommu: Default domain type: Translated
Feb 23 17:50:06 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 23 17:50:06 localhost kernel: SCSI subsystem initialized
Feb 23 17:50:06 localhost kernel: ACPI: bus type USB registered
Feb 23 17:50:06 localhost kernel: usbcore: registered new interface driver usbfs
Feb 23 17:50:06 localhost kernel: usbcore: registered new interface driver hub
Feb 23 17:50:06 localhost kernel: usbcore: registered new device driver usb
Feb 23 17:50:06 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 23 17:50:06 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 23 17:50:06 localhost kernel: PTP clock support registered
Feb 23 17:50:06 localhost kernel: EDAC MC: Ver: 3.0.0
Feb 23 17:50:06 localhost kernel: NetLabel: Initializing
Feb 23 17:50:06 localhost kernel: NetLabel: domain hash size = 128
Feb 23 17:50:06 localhost kernel: NetLabel: protocols = UNLABELED CIPSOv4 CALIPSO
Feb 23 17:50:06 localhost kernel: NetLabel: unlabeled traffic allowed by default
Feb 23 17:50:06 localhost kernel: PCI: Using ACPI for IRQ routing
Feb 23 17:50:06 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 23 17:50:06 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 23 17:50:06 localhost kernel: e820: reserve RAM buffer [mem 0xbffe9000-0xbfffffff]
Feb 23 17:50:06 localhost kernel: e820: reserve RAM buffer [mem 0x42f000000-0x42fffffff]
Feb 23 17:50:06 localhost kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Feb 23 17:50:06 localhost kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Feb 23 17:50:06 localhost kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 23 17:50:06 localhost kernel: vgaarb: loaded
Feb 23 17:50:06 localhost kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Feb 23 17:50:06 localhost kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Feb 23 17:50:06 localhost kernel: clocksource: Switched to clocksource kvm-clock
Feb 23 17:50:06 localhost kernel: VFS: Disk quotas dquot_6.6.0
Feb 23 17:50:06 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 23 17:50:06 localhost kernel: pnp: PnP ACPI init
Feb 23 17:50:06 localhost kernel: pnp: PnP ACPI: found 5 devices
Feb 23 17:50:06 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 23 17:50:06 localhost kernel: NET: Registered PF_INET protocol family
Feb 23 17:50:06 localhost kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 23 17:50:06 localhost kernel: tcp_listen_portaddr_hash hash table entries: 8192 (order: 6, 327680 bytes, linear)
Feb 23 17:50:06 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 23 17:50:06 localhost kernel: TCP established hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 23 17:50:06 localhost kernel: TCP bind hash table entries: 65536 (order: 9, 2621440 bytes, linear)
Feb 23 17:50:06 localhost kernel: TCP: Hash tables configured (established 131072 bind 65536)
Feb 23 17:50:06 localhost kernel: MPTCP token hash table entries: 16384 (order: 7, 917504 bytes, linear)
Feb 23 17:50:06 localhost kernel: UDP hash table entries: 8192 (order: 7, 786432 bytes, linear)
Feb 23 17:50:06 localhost kernel: UDP-Lite hash table entries: 8192 (order: 7, 786432 bytes, linear)
Feb 23 17:50:06 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 23 17:50:06 localhost kernel: NET: Registered PF_XDP protocol family
Feb 23 17:50:06 localhost kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 23 17:50:06 localhost kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 23 17:50:06 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 23 17:50:06 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Feb 23 17:50:06 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x440000000-0x20043fffffff window]
Feb 23 17:50:06 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 23 17:50:06 localhost kernel: PCI: CLS 0 bytes, default 64
Feb 23 17:50:06 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 23 17:50:06 localhost kernel: software IO TLB: mapped [mem 0x00000000bbfe9000-0x00000000bffe9000] (64MB)
Feb 23 17:50:06 localhost kernel: ACPI: bus type thunderbolt registered
Feb 23 17:50:06 localhost kernel: Trying to unpack rootfs image as initramfs...
Feb 23 17:50:06 localhost kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x29cd4133323, max_idle_ns: 440795296220 ns
Feb 23 17:50:06 localhost kernel: clocksource: Switched to clocksource tsc
Feb 23 17:50:06 localhost kernel: Initialise system trusted keyrings
Feb 23 17:50:06 localhost kernel: Key type blacklist registered
Feb 23 17:50:06 localhost kernel: workingset: timestamp_bits=36 max_order=22 bucket_order=0
Feb 23 17:50:06 localhost kernel: zbud: loaded
Feb 23 17:50:06 localhost kernel: integrity: Platform Keyring initialized
Feb 23 17:50:06 localhost kernel: NET: Registered PF_ALG protocol family
Feb 23 17:50:06 localhost kernel: xor: automatically using best checksumming function avx
Feb 23 17:50:06 localhost kernel: Key type asymmetric registered
Feb 23 17:50:06 localhost kernel: Asymmetric key parser 'x509' registered
Feb 23 17:50:06 localhost kernel: Running certificate verification selftests
Feb 23 17:50:06 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Feb 23 17:50:06 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Feb 23 17:50:06 localhost kernel: io scheduler mq-deadline registered
Feb 23 17:50:06 localhost kernel: io scheduler kyber registered
Feb 23 17:50:06 localhost kernel: io scheduler bfq registered
Feb 23 17:50:06 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Feb 23 17:50:06 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Feb 23 17:50:06 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Feb 23 17:50:06 localhost kernel: ACPI: button: Power Button [PWRF]
Feb 23 17:50:06 localhost kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input1
Feb 23 17:50:06 localhost kernel: ACPI: button: Sleep Button [SLPF]
Feb 23 17:50:06 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 23 17:50:06 localhost kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 23 17:50:06 localhost kernel: Non-volatile memory driver v1.3
Feb 23 17:50:06 localhost kernel: rdac: device handler registered
Feb 23 17:50:06 localhost kernel: hp_sw: device handler registered
Feb 23 17:50:06 localhost kernel: emc: device handler registered
Feb 23 17:50:06 localhost kernel: alua: device handler registered
Feb 23 17:50:06 localhost kernel: libphy: Fixed MDIO Bus: probed
Feb 23 17:50:06 localhost kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
Feb 23 17:50:06 localhost kernel: ehci-pci: EHCI PCI platform driver
Feb 23 17:50:06 localhost kernel: ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
Feb 23 17:50:06 localhost kernel: ohci-pci: OHCI PCI platform driver
Feb 23 17:50:06 localhost kernel: uhci_hcd: USB Universal Host Controller Interface driver
Feb 23 17:50:06 localhost kernel: usbcore: registered new interface driver usbserial_generic
Feb 23 17:50:06 localhost kernel: usbserial: USB Serial support registered for generic
Feb 23 17:50:06 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 23 17:50:06 localhost kernel: i8042: Warning: Keylock active
Feb 23 17:50:06 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 23 17:50:06 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 23 17:50:06 localhost kernel: mousedev: PS/2 mouse device common for all mice
Feb 23 17:50:06 localhost kernel: rtc_cmos 00:00: RTC can wake from S4
Feb 23 17:50:06 localhost kernel: rtc_cmos 00:00: registered as rtc0
Feb 23 17:50:06 localhost kernel: rtc_cmos 00:00: setting system clock to 2023-02-23T17:50:05 UTC (1677174605)
Feb 23 17:50:06 localhost kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Feb 23 17:50:06 localhost kernel: intel_pstate: P-states controlled by the platform
Feb 23 17:50:06 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 23 17:50:06 localhost kernel: usbcore: registered new interface driver usbhid
Feb 23 17:50:06 localhost kernel: usbhid: USB HID core driver
Feb 23 17:50:06 localhost kernel: drop_monitor: Initializing network drop monitor service
Feb 23 17:50:06 localhost kernel: Initializing XFRM netlink socket
Feb 23 17:50:06 localhost kernel: NET: Registered PF_INET6 protocol family
Feb 23 17:50:06 localhost kernel: Segment Routing with IPv6
Feb 23 17:50:06 localhost kernel: NET: Registered PF_PACKET protocol family
Feb 23 17:50:06 localhost kernel: mpls_gso: MPLS GSO support
Feb 23 17:50:06 localhost kernel: IPI shorthand broadcast: enabled
Feb 23 17:50:06 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Feb 23 17:50:06 localhost kernel: AES CTR mode by8 optimization enabled
Feb 23 17:50:06 localhost kernel: sched_clock: Marking stable (413425513, 912743560)->(1591636488, -265467415)
Feb 23 17:50:06 localhost kernel: registered taskstats version 1
Feb 23 17:50:06 localhost kernel: Loading compiled-in X.509 certificates
Feb 23 17:50:06 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 54494cc1e4faf1b952d457c5809968c1c68fe198'
Feb 23 17:50:06 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Feb 23 17:50:06 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Feb 23 17:50:06 localhost kernel: zswap: loaded using pool lzo/zbud
Feb 23 17:50:06 localhost kernel: page_owner is disabled
Feb 23 17:50:06 localhost kernel: Key type big_key registered
Feb 23 17:50:06 localhost kernel: Freeing initrd memory: 79708K
Feb 23 17:50:06 localhost kernel: Key type encrypted registered
Feb 23 17:50:06 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 23 17:50:06 localhost kernel: Loading compiled-in module X.509 certificates
Feb 23 17:50:06 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 54494cc1e4faf1b952d457c5809968c1c68fe198'
Feb 23 17:50:06 localhost kernel: ima: Allocated hash algorithm: sha256
Feb 23 17:50:06 localhost kernel: ima: No architecture policies found
Feb 23 17:50:06 localhost kernel: evm: Initialising EVM extended attributes:
Feb 23 17:50:06 localhost kernel: evm: security.selinux
Feb 23 17:50:06 localhost kernel: evm: security.SMACK64 (disabled)
Feb 23 17:50:06 localhost kernel: evm: security.SMACK64EXEC (disabled)
Feb 23 17:50:06 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Feb 23 17:50:06 localhost kernel: evm: security.SMACK64MMAP (disabled)
Feb 23 17:50:06 localhost kernel: evm: security.apparmor (disabled)
Feb 23 17:50:06 localhost kernel: evm: security.ima
Feb 23 17:50:06 localhost kernel: evm: security.capability
Feb 23 17:50:06 localhost kernel: evm: HMAC attrs: 0x1
Feb 23 17:50:06 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input2
Feb 23 17:50:06 localhost kernel: Freeing unused decrypted memory: 2036K
Feb 23 17:50:06 localhost kernel: Freeing unused kernel image (initmem) memory: 2784K
Feb 23 17:50:06 localhost kernel: Write protecting the kernel read-only data: 26624k
Feb 23 17:50:06 localhost kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 23 17:50:06 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 180K
Feb 23 17:50:06 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Feb 23 17:50:06 localhost kernel: Run /init as init process
Feb 23 17:50:06 localhost kernel: with arguments:
Feb 23 17:50:06 localhost kernel: /init
Feb 23 17:50:06 localhost kernel: with environment:
Feb 23 17:50:06 localhost kernel: HOME=/
Feb 23 17:50:06 localhost kernel: TERM=linux
Feb 23 17:50:06 localhost kernel: BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-368e32e4125ee712802e93c0d759a9e076516d95a7c2d319cdf7620c8d30cd10/vmlinuz-5.14.0-266.rt14.266.el9.x86_64
Feb 23 17:50:06 localhost kernel: ostree=/ostree/boot.0/rhcos/368e32e4125ee712802e93c0d759a9e076516d95a7c2d319cdf7620c8d30cd10/0
Feb 23 17:50:06 localhost kernel: boot=UUID=54e5ab65-ff73-4a26-8c44-2a9765abf45f
Feb 23 17:50:06 localhost systemd-journald[340]: Journal started
Feb 23 17:50:06 localhost systemd-journald[340]: Runtime Journal (/run/log/journal/ec2d456b0a3e28d0eb2f198315e90643) is 8.0M, max 314.6M, 306.6M free.
Feb 23 17:50:06 localhost systemd-modules-load[341]: Inserted module 'fuse'
Feb 23 17:50:06 localhost systemd-journald[340]: Missed 36 kernel messages
Feb 23 17:50:06 localhost kernel: fuse: init (API version 7.36)
Feb 23 17:50:06 localhost systemd-modules-load[341]: Module 'msr' is built in
Feb 23 17:50:06 localhost systemd-sysusers[342]: Creating group 'nobody' with GID 65534.
Feb 23 17:50:06 localhost systemd[1]: Finished CoreOS: Touch /run/agetty.reload.
Feb 23 17:50:06 localhost systemd[1]: Finished Create List of Static Device Nodes.
Feb 23 17:50:06 localhost systemd[1]: Finished Load Kernel Modules.
Feb 23 17:50:06 localhost systemd[1]: Finished Setup Virtual Console.
Feb 23 17:50:06 localhost systemd[1]: Starting dracut ask for additional cmdline parameters...
Feb 23 17:50:06 localhost systemd[1]: Starting Apply Kernel Variables...
Feb 23 17:50:06 localhost systemd-sysusers[342]: Creating group 'sgx' with GID 999.
Feb 23 17:50:06 localhost systemd-sysusers[342]: Creating group 'users' with GID 100.
Feb 23 17:50:06 localhost systemd-sysusers[342]: Creating group 'root' with GID 998.
Feb 23 17:50:06 localhost systemd-sysusers[342]: Creating group 'dbus' with GID 81.
Feb 23 17:50:06 localhost systemd-sysusers[342]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Feb 23 17:50:06 localhost systemd[1]: Finished Create System Users.
Feb 23 17:50:06 localhost systemd[1]: Finished dracut ask for additional cmdline parameters.
Feb 23 17:50:06 localhost systemd[1]: Finished Apply Kernel Variables.
Feb 23 17:50:06 localhost systemd[1]: Starting dracut cmdline hook...
Feb 23 17:50:06 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Feb 23 17:50:06 localhost dracut-cmdline[366]: dracut-413.92.202302171914-0 dracut-057-21.git20230214.el9
Feb 23 17:50:06 localhost systemd[1]: Starting Create Volatile Files and Directories...
Feb 23 17:50:06 localhost dracut-cmdline[366]: Using kernel command line parameters: rd.driver.pre=dm_multipath BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-368e32e4125ee712802e93c0d759a9e076516d95a7c2d319cdf7620c8d30cd10/vmlinuz-5.14.0-266.rt14.266.el9.x86_64 ostree=/ostree/boot.0/rhcos/368e32e4125ee712802e93c0d759a9e076516d95a7c2d319cdf7620c8d30cd10/0 ignition.platform.id=aws console=tty0 console=ttyS0,115200n8 root=UUID=c83680a9-dcc4-4413-a0a5-4681b35c650a rw rootflags=prjquota boot=UUID=54e5ab65-ff73-4a26-8c44-2a9765abf45f systemd.unified_cgroup_hierarchy=0 systemd.legacy_systemd_cgroup_controller=1
Feb 23 17:50:06 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Feb 23 17:50:06 localhost systemd[1]: Finished Create Volatile Files and Directories.
Feb 23 17:50:06 localhost systemd-journald[340]: Missed 24 kernel messages
Feb 23 17:50:06 localhost kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4
Feb 23 17:50:06 localhost systemd[1]: Finished dracut cmdline hook.
Feb 23 17:50:06 localhost systemd[1]: Starting dracut pre-udev hook...
Feb 23 17:50:06 localhost systemd-journald[340]: Missed 2 kernel messages
Feb 23 17:50:06 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 23 17:50:06 localhost kernel: device-mapper: uevent: version 1.0.3
Feb 23 17:50:06 localhost kernel: device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com
Feb 23 17:50:06 localhost systemd[1]: Finished dracut pre-udev hook.
Feb 23 17:50:06 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Feb 23 17:50:06 localhost systemd-udevd[521]: Using default interface naming scheme 'rhel-9.0'.
Feb 23 17:50:06 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Feb 23 17:50:06 localhost systemd[1]: Starting dracut pre-trigger hook...
Feb 23 17:50:06 localhost dracut-pre-trigger[528]: rd.md=0: removing MD RAID activation
Feb 23 17:50:06 localhost systemd[1]: Finished dracut pre-trigger hook.
Feb 23 17:50:06 localhost systemd[1]: Starting Coldplug All udev Devices...
Feb 23 17:50:06 localhost systemd[1]: sys-module-fuse.device: Failed to enqueue SYSTEMD_WANTS= job, ignoring: Unit sys-fs-fuse-connections.mount not found.
Feb 23 17:50:06 localhost systemd[1]: Finished Coldplug All udev Devices.
Feb 23 17:50:06 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Feb 23 17:50:06 localhost systemd[1]: Reached target Network.
Feb 23 17:50:06 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Feb 23 17:50:06 localhost systemd[1]: Starting dracut initqueue hook...
Feb 23 17:50:06 localhost systemd[1]: Starting Wait for udev To Complete Device Initialization...
Feb 23 17:50:06 localhost udevadm[592]: systemd-udev-settle.service is deprecated. Please fix multipathd-configure.service not to pull it in.
Feb 23 17:50:07 localhost systemd-journald[340]: Missed 16 kernel messages
Feb 23 17:50:07 localhost kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 23 17:50:07 localhost kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 23 17:50:07 localhost kernel: nvme nvme0: pci function 0000:00:04.0
Feb 23 17:50:07 localhost kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 02:ea:92:f9:d3:f3
Feb 23 17:50:07 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 23 17:50:07 localhost kernel: ena 0000:00:05.0 ens5: renamed from eth0
Feb 23 17:50:07 localhost kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 23 17:50:07 localhost kernel: nvme0n1: p1 p2 p3 p4
Feb 23 17:50:07 localhost systemd[1]: Found device Amazon Elastic Block Store root.
Feb 23 17:50:07 localhost systemd[1]: Found device Amazon Elastic Block Store root.
Feb 23 17:50:07 localhost systemd[1]: Finished Wait for udev To Complete Device Initialization.
Feb 23 17:50:07 localhost systemd[1]: Finished dracut initqueue hook.
Feb 23 17:50:07 localhost systemd[1]: Reached target Initrd Root Device.
Feb 23 17:50:07 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Feb 23 17:50:07 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Feb 23 17:50:07 localhost systemd[1]: Reached target Remote File Systems.
Feb 23 17:50:07 localhost systemd[1]: Starting dracut pre-mount hook...
Feb 23 17:50:07 localhost systemd[1]: CoreOS: Mount /sysroot (Subsequent Boot) was skipped because of an unmet condition check (ConditionKernelCommandLine=!root).
Feb 23 17:50:07 localhost systemd[1]: Device-Mapper Multipath Default Configuration was skipped because of an unmet condition check (ConditionKernelCommandLine=rd.multipath=default).
Feb 23 17:50:07 localhost systemd[1]: Starting Device-Mapper Multipath Device Controller...
Feb 23 17:50:07 localhost systemd[1]: Finished dracut pre-mount hook.
Feb 23 17:50:07 localhost multipathd[646]: --------start up--------
Feb 23 17:50:07 localhost multipathd[646]: read /etc/multipath.conf
Feb 23 17:50:07 localhost multipathd[646]: /etc/multipath.conf does not exist, blacklisting all devices.
Feb 23 17:50:07 localhost multipathd[646]: You can run "/sbin/mpathconf --enable" to create
Feb 23 17:50:07 localhost multipathd[646]: /etc/multipath.conf. See man mpathconf(8) for more details
Feb 23 17:50:07 localhost multipathd[646]: path checkers start up
Feb 23 17:50:07 localhost multipathd[646]: /etc/multipath.conf does not exist, blacklisting all devices.
Feb 23 17:50:07 localhost multipathd[646]: You can run "/sbin/mpathconf --enable" to create
Feb 23 17:50:07 localhost multipathd[646]: /etc/multipath.conf. See man mpathconf(8) for more details
Feb 23 17:50:07 localhost systemd[1]: Started Device-Mapper Multipath Device Controller.
Feb 23 17:50:07 localhost systemd[1]: Reached target Preparation for Local File Systems.
Feb 23 17:50:07 localhost systemd[1]: Reached target Local File Systems.
Feb 23 17:50:07 localhost systemd[1]: Reached target System Initialization.
Feb 23 17:50:07 localhost systemd[1]: Reached target Basic System.
Feb 23 17:50:07 localhost systemd[1]: System is tainted: cgroupsv1
Feb 23 17:50:07 localhost systemd[1]: Acquire Live PXE rootfs Image was skipped because of an unmet condition check (ConditionPathExists=/run/ostree-live).
Feb 23 17:50:07 localhost systemd[1]: Persist Osmet Files (PXE) was skipped because of an unmet condition check (ConditionPathExists=/run/ostree-live).
Feb 23 17:50:07 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/c83680a9-dcc4-4413-a0a5-4681b35c650a...
Feb 23 17:50:07 localhost systemd-fsck[662]: /usr/sbin/fsck.xfs: XFS file system.
Feb 23 17:50:07 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/c83680a9-dcc4-4413-a0a5-4681b35c650a.
Feb 23 17:50:07 localhost systemd[1]: Mounting /sysroot...
Feb 23 17:50:07 localhost systemd-journald[340]: Missed 34 kernel messages
Feb 23 17:50:07 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Feb 23 17:50:07 localhost kernel: XFS (nvme0n1p4): Mounting V5 Filesystem
Feb 23 17:50:07 localhost kernel: XFS (nvme0n1p4): Ending clean mount
Feb 23 17:50:07 localhost kernel: xfs filesystem being mounted at /sysroot supports timestamps until 2038 (0x7fffffff)
Feb 23 17:50:07 localhost systemd[1]: Mounted /sysroot.
Feb 23 17:50:07 localhost systemd[1]: Starting OSTree Prepare OS/...
Feb 23 17:50:07 localhost ostree-prepare-root[677]: preparing sysroot at /sysroot
Feb 23 17:50:07 localhost ostree-prepare-root[677]: Resolved OSTree target to: /sysroot/ostree/deploy/rhcos/deploy/b679e409fd4d0b9b8b4cecc99237ef45283828a8e0fbf38395fc6bc6e27ea0ab.0
Feb 23 17:50:07 localhost ostree-prepare-root[677]: filesystem at /sysroot currently writable: 1
Feb 23 17:50:07 localhost ostree-prepare-root[677]: sysroot.readonly configuration value: 1
Feb 23 17:50:07 localhost systemd-journald[340]: Missed 6 kernel messages
Feb 23 17:50:07 localhost kernel: xfs filesystem being remounted at /sysroot/ostree/deploy/rhcos/deploy/b679e409fd4d0b9b8b4cecc99237ef45283828a8e0fbf38395fc6bc6e27ea0ab.0/etc supports timestamps until 2038 (0x7fffffff)
Feb 23 17:50:07 localhost kernel: xfs filesystem being remounted at /sysroot/ostree/deploy/rhcos/var supports timestamps until 2038 (0x7fffffff)
Feb 23 17:50:07 localhost systemd[1]: Finished OSTree Prepare OS/.
Feb 23 17:50:07 localhost systemd[1]: Reached target Initrd Root File System.
Feb 23 17:50:07 localhost systemd[1]: CoreOS Propagate Multipath Configuration was skipped because of an unmet condition check (ConditionKernelCommandLine=rd.multipath=default).
Feb 23 17:50:07 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Feb 23 17:50:08 localhost multipathd[646]: exit (signal)
Feb 23 17:50:08 localhost multipathd[646]: --------shut down-------
Feb 23 17:50:08 localhost systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Feb 23 17:50:08 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 23 17:50:08 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Feb 23 17:50:08 localhost systemd[1]: Reached target Initrd File Systems.
Feb 23 17:50:08 localhost systemd[1]: Reached target Initrd Default Target.
Feb 23 17:50:08 localhost systemd[1]: dracut mount hook was skipped because no trigger condition checks were met.
Feb 23 17:50:08 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Feb 23 17:50:08 localhost systemd[1]: multipathd.service: Deactivated successfully.
Feb 23 17:50:08 localhost systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Feb 23 17:50:08 localhost dracut-pre-pivot[709]: 2.972697 | /etc/multipath.conf does not exist, blacklisting all devices.
Feb 23 17:50:08 localhost dracut-pre-pivot[709]: 2.972716 | You can run "/sbin/mpathconf --enable" to create
Feb 23 17:50:08 localhost dracut-pre-pivot[709]: 2.972720 | /etc/multipath.conf. See man mpathconf(8) for more details
Feb 23 17:50:08 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Feb 23 17:50:08 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Feb 23 17:50:08 localhost systemd[1]: Stopped target Network.
Feb 23 17:50:08 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Feb 23 17:50:08 localhost systemd[1]: Stopped target Timer Units.
Feb 23 17:50:08 localhost systemd[1]: dbus.socket: Deactivated successfully.
Feb 23 17:50:08 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Feb 23 17:50:08 localhost systemd[1]: Unmount Live /var if Persistent /var Is Configured was skipped because of an unmet condition check (ConditionPathExists=/run/ostree-live).
Feb 23 17:50:08 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 23 17:50:08 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Feb 23 17:50:08 localhost systemd[1]: Stopped target Initrd Default Target.
Feb 23 17:50:08 localhost systemd[1]: Stopped target Basic System.
Feb 23 17:50:08 localhost systemd[1]: Stopped target Subsequent (Not Ignition) boot complete.
Feb 23 17:50:08 localhost systemd[1]: Stopped target Ignition Subsequent Boot Disk Setup.
Feb 23 17:50:08 localhost systemd[1]: Stopped target Initrd Root Device.
Feb 23 17:50:08 localhost systemd[1]: Stopped target Initrd /usr File System.
Feb 23 17:50:08 localhost systemd[1]: Stopped target Path Units.
Feb 23 17:50:08 localhost systemd[1]: Stopped target Remote File Systems.
Feb 23 17:50:08 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Feb 23 17:50:08 localhost systemd[1]: Stopped target Slice Units.
Feb 23 17:50:08 localhost systemd[1]: Stopped target Socket Units.
Feb 23 17:50:08 localhost systemd[1]: Stopped target System Initialization.
Feb 23 17:50:08 localhost systemd[1]: Stopped target Local File Systems.
Feb 23 17:50:08 localhost systemd[1]: Stopped target Preparation for Local File Systems.
Feb 23 17:50:08 localhost systemd[1]: Stopped target Swaps.
Feb 23 17:50:08 localhost systemd[1]: coreos-touch-run-agetty.service: Deactivated successfully.
Feb 23 17:50:08 localhost systemd[1]: Stopped CoreOS: Touch /run/agetty.reload.
Feb 23 17:50:08 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 23 17:50:08 localhost systemd[1]: Stopped dracut pre-mount hook.
Feb 23 17:50:08 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Feb 23 17:50:08 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 23 17:50:08 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Feb 23 17:50:08 localhost systemd[1]: Stopped target Local Encrypted Volumes (Pre).
Feb 23 17:50:08 localhost systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 23 17:50:08 localhost systemd[1]: Stopped Forward Password Requests to Clevis Directory Watch.
Feb 23 17:50:08 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 23 17:50:08 localhost systemd[1]: Stopped dracut initqueue hook.
Feb 23 17:50:08 localhost systemd[1]: Acquire Live PXE rootfs Image was skipped because of an unmet condition check (ConditionPathExists=/run/ostree-live).
Feb 23 17:50:08 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 23 17:50:08 localhost systemd[1]: Stopped Apply Kernel Variables.
Feb 23 17:50:08 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 23 17:50:08 localhost systemd[1]: Stopped Load Kernel Modules.
Feb 23 17:50:08 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 23 17:50:08 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Feb 23 17:50:08 localhost systemd[1]: systemd-udev-settle.service: Deactivated successfully.
Feb 23 17:50:08 localhost systemd[1]: Stopped Wait for udev To Complete Device Initialization.
Feb 23 17:50:08 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 23 17:50:08 localhost systemd[1]: Stopped Coldplug All udev Devices.
Feb 23 17:50:08 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 23 17:50:08 localhost systemd[1]: Stopped dracut pre-trigger hook.
Feb 23 17:50:08 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Feb 23 17:50:08 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 23 17:50:08 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Feb 23 17:50:08 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 23 17:50:08 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Feb 23 17:50:08 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 23 17:50:08 localhost systemd[1]: Closed udev Control Socket.
Feb 23 17:50:08 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 23 17:50:08 localhost systemd[1]: Closed udev Kernel Socket.
Feb 23 17:50:08 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 23 17:50:08 localhost systemd[1]: Stopped dracut pre-udev hook.
Feb 23 17:50:08 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 23 17:50:08 localhost systemd[1]: Stopped dracut cmdline hook.
Feb 23 17:50:08 localhost systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 23 17:50:08 localhost systemd[1]: Stopped dracut ask for additional cmdline parameters.
Feb 23 17:50:08 localhost systemd[1]: Starting Cleanup udev Database...
Feb 23 17:50:08 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 23 17:50:08 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Feb 23 17:50:08 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 23 17:50:08 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Feb 23 17:50:08 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Feb 23 17:50:08 localhost systemd[1]: Stopped Create System Users.
Feb 23 17:50:08 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 23 17:50:08 localhost systemd[1]: Stopped Setup Virtual Console.
Feb 23 17:50:08 localhost systemd[1]: sysroot-ostree-deploy-rhcos-deploy-b679e409fd4d0b9b8b4cecc99237ef45283828a8e0fbf38395fc6bc6e27ea0ab.0-etc.mount: Deactivated successfully.
Feb 23 17:50:08 localhost systemd[1]: sysroot-ostree-deploy-rhcos-deploy-b679e409fd4d0b9b8b4cecc99237ef45283828a8e0fbf38395fc6bc6e27ea0ab.0.mount: Deactivated successfully.
Feb 23 17:50:08 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Feb 23 17:50:08 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Feb 23 17:50:08 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 23 17:50:08 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Feb 23 17:50:08 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 23 17:50:08 localhost systemd[1]: Finished Cleanup udev Database.
Feb 23 17:50:08 localhost systemd[1]: Reached target Switch Root.
Feb 23 17:50:08 localhost systemd[1]: Starting Switch Root...
Feb 23 17:50:08 localhost systemd[1]: Switching root.
Feb 23 17:50:08 localhost systemd-journald[340]: Journal stopped
Feb 23 17:50:09 localhost systemd[1]: Finished OSTree Prepare OS/.
Feb 23 17:50:09 localhost systemd[1]: Reached target Initrd Root File System.
Feb 23 17:50:09 localhost systemd[1]: CoreOS Propagate Multipath Configuration was skipped because of an unmet condition check (ConditionKernelCommandLine=rd.multipath=default).
Feb 23 17:50:09 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Feb 23 17:50:09 localhost multipathd[646]: exit (signal)
Feb 23 17:50:09 localhost multipathd[646]: --------shut down-------
Feb 23 17:50:09 localhost systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Feb 23 17:50:09 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 23 17:50:09 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Feb 23 17:50:09 localhost systemd[1]: Reached target Initrd File Systems.
Feb 23 17:50:09 localhost systemd[1]: Reached target Initrd Default Target.
Feb 23 17:50:09 localhost systemd[1]: dracut mount hook was skipped because no trigger condition checks were met.
Feb 23 17:50:09 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Feb 23 17:50:09 localhost systemd[1]: multipathd.service: Deactivated successfully.
Feb 23 17:50:09 localhost systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Feb 23 17:50:09 localhost dracut-pre-pivot[709]: 2.972697 | /etc/multipath.conf does not exist, blacklisting all devices.
Feb 23 17:50:09 localhost dracut-pre-pivot[709]: 2.972716 | You can run "/sbin/mpathconf --enable" to create
Feb 23 17:50:09 localhost dracut-pre-pivot[709]: 2.972720 | /etc/multipath.conf. See man mpathconf(8) for more details
Feb 23 17:50:09 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Feb 23 17:50:09 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Feb 23 17:50:09 localhost systemd[1]: Stopped target Network.
Feb 23 17:50:09 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Feb 23 17:50:09 localhost systemd[1]: Stopped target Timer Units.
Feb 23 17:50:09 localhost systemd[1]: dbus.socket: Deactivated successfully.
Feb 23 17:50:09 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Feb 23 17:50:09 localhost systemd[1]: Unmount Live /var if Persistent /var Is Configured was skipped because of an unmet condition check (ConditionPathExists=/run/ostree-live).
Feb 23 17:50:09 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 23 17:50:09 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Feb 23 17:50:09 localhost systemd[1]: Stopped target Initrd Default Target.
Feb 23 17:50:09 localhost systemd[1]: Stopped target Basic System.
Feb 23 17:50:09 localhost systemd[1]: Stopped target Subsequent (Not Ignition) boot complete.
Feb 23 17:50:09 localhost systemd[1]: Stopped target Ignition Subsequent Boot Disk Setup.
Feb 23 17:50:09 localhost systemd[1]: Stopped target Initrd Root Device.
Feb 23 17:50:09 localhost systemd[1]: Stopped target Initrd /usr File System.
Feb 23 17:50:09 localhost systemd[1]: Stopped target Path Units.
Feb 23 17:50:09 localhost systemd[1]: Stopped target Remote File Systems.
Feb 23 17:50:09 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Feb 23 17:50:09 localhost systemd[1]: Stopped target Slice Units.
Feb 23 17:50:09 localhost systemd[1]: Stopped target Socket Units.
Feb 23 17:50:09 localhost systemd[1]: Stopped target System Initialization.
Feb 23 17:50:09 localhost systemd[1]: Stopped target Local File Systems.
Feb 23 17:50:09 localhost systemd[1]: Stopped target Preparation for Local File Systems.
Feb 23 17:50:09 localhost systemd[1]: Stopped target Swaps.
Feb 23 17:50:09 localhost systemd[1]: coreos-touch-run-agetty.service: Deactivated successfully.
Feb 23 17:50:09 localhost systemd[1]: Stopped CoreOS: Touch /run/agetty.reload.
Feb 23 17:50:09 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 23 17:50:09 localhost systemd[1]: Stopped dracut pre-mount hook.
Feb 23 17:50:09 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Feb 23 17:50:09 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 23 17:50:09 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Feb 23 17:50:09 localhost systemd[1]: Stopped target Local Encrypted Volumes (Pre).
Feb 23 17:50:09 localhost systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 23 17:50:09 localhost systemd[1]: Stopped Forward Password Requests to Clevis Directory Watch.
Feb 23 17:50:09 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 23 17:50:09 localhost systemd[1]: Stopped dracut initqueue hook.
Feb 23 17:50:09 localhost systemd[1]: Acquire Live PXE rootfs Image was skipped because of an unmet condition check (ConditionPathExists=/run/ostree-live).
Feb 23 17:50:09 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 23 17:50:09 localhost systemd[1]: Stopped Apply Kernel Variables.
Feb 23 17:50:09 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 23 17:50:09 localhost systemd[1]: Stopped Load Kernel Modules.
Feb 23 17:50:09 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 23 17:50:09 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Feb 23 17:50:09 localhost systemd[1]: systemd-udev-settle.service: Deactivated successfully.
Feb 23 17:50:09 localhost systemd[1]: Stopped Wait for udev To Complete Device Initialization.
Feb 23 17:50:09 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 23 17:50:09 localhost systemd[1]: Stopped Coldplug All udev Devices.
Feb 23 17:50:09 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 23 17:50:09 localhost systemd[1]: Stopped dracut pre-trigger hook.
Feb 23 17:50:09 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Feb 23 17:50:09 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 23 17:50:09 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Feb 23 17:50:09 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 23 17:50:09 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Feb 23 17:50:09 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 23 17:50:09 localhost systemd[1]: Closed udev Control Socket.
Feb 23 17:50:09 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 23 17:50:09 localhost systemd[1]: Closed udev Kernel Socket.
Feb 23 17:50:09 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 23 17:50:09 localhost systemd[1]: Stopped dracut pre-udev hook.
Feb 23 17:50:09 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 23 17:50:09 localhost systemd[1]: Stopped dracut cmdline hook.
Feb 23 17:50:09 localhost systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 23 17:50:09 localhost systemd[1]: Stopped dracut ask for additional cmdline parameters.
Feb 23 17:50:09 localhost systemd[1]: Starting Cleanup udev Database...
Feb 23 17:50:09 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 23 17:50:09 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Feb 23 17:50:09 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 23 17:50:09 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Feb 23 17:50:09 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Feb 23 17:50:09 localhost systemd[1]: Stopped Create System Users.
Feb 23 17:50:09 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 23 17:50:09 localhost systemd[1]: Stopped Setup Virtual Console.
Feb 23 17:50:09 localhost systemd[1]: sysroot-ostree-deploy-rhcos-deploy-b679e409fd4d0b9b8b4cecc99237ef45283828a8e0fbf38395fc6bc6e27ea0ab.0-etc.mount: Deactivated successfully.
Feb 23 17:50:09 localhost systemd[1]: sysroot-ostree-deploy-rhcos-deploy-b679e409fd4d0b9b8b4cecc99237ef45283828a8e0fbf38395fc6bc6e27ea0ab.0.mount: Deactivated successfully.
Feb 23 17:50:09 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Feb 23 17:50:09 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Feb 23 17:50:09 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 23 17:50:09 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Feb 23 17:50:09 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 23 17:50:09 localhost systemd[1]: Finished Cleanup udev Database.
Feb 23 17:50:09 localhost systemd[1]: Reached target Switch Root.
Feb 23 17:50:09 localhost systemd[1]: Starting Switch Root...
Feb 23 17:50:09 localhost systemd[1]: Switching root.
Feb 23 17:50:09 localhost systemd-journald[340]: Received SIGTERM from PID 1 (systemd).
Feb 23 17:50:09 localhost kernel: audit: type=1404 audit(1677174608.689:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Feb 23 17:50:09 localhost kernel: SELinux: policy capability network_peer_controls=1
Feb 23 17:50:09 localhost kernel: SELinux: policy capability open_perms=1
Feb 23 17:50:09 localhost kernel: SELinux: policy capability extended_socket_class=1
Feb 23 17:50:09 localhost kernel: SELinux: policy capability always_check_network=0
Feb 23 17:50:09 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Feb 23 17:50:09 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 23 17:50:09 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Feb 23 17:50:09 localhost kernel: audit: type=1403 audit(1677174608.792:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 23 17:50:09 localhost systemd[1]: Successfully loaded SELinux policy in 117.137ms.
Feb 23 17:50:09 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.066ms.
Feb 23 17:50:09 localhost systemd[1]: systemd 252-4.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 23 17:50:09 localhost systemd[1]: Detected virtualization amazon.
Feb 23 17:50:09 localhost systemd[1]: Detected architecture x86-64.
Feb 23 17:50:09 localhost systemd-rc-local-generator[742]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 23 17:50:09 localhost coreos-platform-chrony: Updated chrony to use aws configuration /run/coreos-platform-chrony.conf
Feb 23 17:50:09 localhost systemd[724]: /usr/lib/systemd/system-generators/podman-system-generator failed with exit status 1.
Feb 23 17:50:09 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 23 17:50:09 localhost systemd[1]: Stopped Switch Root.
Feb 23 17:50:09 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 23 17:50:09 localhost systemd[1]: Created slice Slice /system/getty.
Feb 23 17:50:09 localhost systemd[1]: Created slice Slice /system/modprobe.
Feb 23 17:50:09 localhost systemd[1]: Created slice Slice /system/serial-getty.
Feb 23 17:50:09 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Feb 23 17:50:09 localhost systemd[1]: Created slice Slice /system/systemd-fsck.
Feb 23 17:50:09 localhost systemd[1]: Created slice User and Session Slice.
Feb 23 17:50:09 localhost systemd[1]: Started Forward Password Requests to Clevis Directory Watch.
Feb 23 17:50:09 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Feb 23 17:50:09 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Feb 23 17:50:09 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Feb 23 17:50:09 localhost systemd[1]: Reached target Synchronize afterburn-sshkeys@.service template instances.
Feb 23 17:50:09 localhost systemd[1]: Reached target Local Encrypted Volumes (Pre).
Feb 23 17:50:09 localhost systemd[1]: Reached target Local Encrypted Volumes.
Feb 23 17:50:09 localhost systemd[1]: Stopped target Switch Root.
Feb 23 17:50:09 localhost systemd[1]: Stopped target Initrd File Systems.
Feb 23 17:50:09 localhost systemd[1]: Stopped target Initrd Root File System.
Feb 23 17:50:09 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Feb 23 17:50:09 localhost systemd[1]: Reached target Host and Network Name Lookups.
Feb 23 17:50:09 localhost systemd[1]: Reached target Slice Units.
Feb 23 17:50:09 localhost systemd[1]: Reached target Swaps.
Feb 23 17:50:09 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Feb 23 17:50:09 localhost systemd[1]: Listening on Device-mapper event daemon FIFOs.
Feb 23 17:50:09 localhost systemd[1]: Listening on LVM2 poll daemon socket.
Feb 23 17:50:09 localhost systemd[1]: multipathd control socket was skipped because of an unmet condition check (ConditionPathExists=/etc/multipath.conf).
Feb 23 17:50:09 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Feb 23 17:50:09 localhost systemd[1]: Reached target RPC Port Mapper.
Feb 23 17:50:09 localhost systemd[1]: Listening on Process Core Dump Socket.
Feb 23 17:50:09 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Feb 23 17:50:09 localhost systemd[1]: Listening on udev Control Socket.
Feb 23 17:50:09 localhost systemd[1]: Listening on udev Kernel Socket.
Feb 23 17:50:09 localhost systemd[1]: Mounting Huge Pages File System...
Feb 23 17:50:09 localhost systemd[1]: Mounting POSIX Message Queue File System...
Feb 23 17:50:09 localhost systemd[1]: Mounting Kernel Debug File System...
Feb 23 17:50:09 localhost systemd[1]: Mounting Kernel Trace File System...
Feb 23 17:50:09 localhost systemd[1]: Mounting Temporary Directory /tmp...
Feb 23 17:50:09 localhost systemd[1]: Starting CoreOS: Set printk To Level 4 (warn)...
Feb 23 17:50:09 localhost systemd[1]: Ignition (delete config) was skipped because of an unmet condition check (ConditionFirstBoot=true).
Feb 23 17:50:09 localhost systemd[1]: Starting Create List of Static Device Nodes...
Feb 23 17:50:09 localhost systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Feb 23 17:50:09 localhost systemd[1]: Starting Load Kernel Module configfs...
Feb 23 17:50:09 localhost systemd[1]: Starting Load Kernel Module drm...
Feb 23 17:50:09 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Feb 23 17:50:09 localhost systemd[1]: Starting Load Kernel Module fuse...
Feb 23 17:50:09 localhost systemd[1]: ostree-prepare-root.service: Deactivated successfully.
Feb 23 17:50:09 localhost systemd[1]: Stopped OSTree Prepare OS/.
Feb 23 17:50:09 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 23 17:50:09 localhost systemd[1]: Stopped File System Check on Root Device.
Feb 23 17:50:09 localhost systemd[1]: Stopped Journal Service.
Feb 23 17:50:09 localhost systemd[1]: Starting Journal Service...
Feb 23 17:50:09 localhost systemd[1]: Starting Load Kernel Modules...
Feb 23 17:50:09 localhost systemd[1]: Starting Generate network units from Kernel command line...
Feb 23 17:50:09 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Feb 23 17:50:09 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 23 17:50:09 localhost systemd[1]: Starting Coldplug All udev Devices...
Feb 23 17:50:09 localhost systemd[1]: Mounted Huge Pages File System.
Feb 23 17:50:09 localhost systemd-journald[813]: Journal started
Feb 23 17:50:09 localhost systemd-journald[813]: Runtime Journal (/run/log/journal/ec2d456b0a3e28d0eb2f198315e90643) is 8.0M, max 314.6M, 306.6M free.
Feb 23 17:50:09 localhost systemd[1]: Queued start job for default target Graphical Interface.
Feb 23 17:50:09 localhost systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Feb 23 17:50:09 localhost systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Feb 23 17:50:09 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 23 17:50:09 localhost systemd-modules-load[814]: Module 'msr' is built in
Feb 23 17:50:09 localhost systemd[1]: Started Journal Service.
Feb 23 17:50:09 localhost systemd[1]: Mounted POSIX Message Queue File System.
Feb 23 17:50:09 localhost systemd-modules-load[814]: Inserted module 'ip_tables'
Feb 23 17:50:09 localhost kernel: Warning: Deprecated Driver is detected: iptables will not be maintained in a future major release and may be disabled
Feb 23 17:50:09 localhost kernel: ACPI: bus type drm_connector registered
Feb 23 17:50:09 localhost systemd[1]: Mounted Kernel Debug File System.
Feb 23 17:50:09 localhost systemd[1]: Mounted Kernel Trace File System.
Feb 23 17:50:09 localhost systemd[1]: Mounted Temporary Directory /tmp.
Feb 23 17:50:09 localhost systemd[1]: Finished CoreOS: Set printk To Level 4 (warn).
Feb 23 17:50:09 localhost systemd[1]: Finished Create List of Static Device Nodes.
Feb 23 17:50:09 localhost systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Feb 23 17:50:09 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 23 17:50:09 localhost systemd[1]: Finished Load Kernel Module configfs.
Feb 23 17:50:09 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 23 17:50:09 localhost systemd[1]: Finished Load Kernel Module drm.
Feb 23 17:50:09 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 23 17:50:09 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Feb 23 17:50:09 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 23 17:50:09 localhost systemd[1]: Finished Load Kernel Module fuse.
Feb 23 17:50:09 localhost systemd[1]: Finished Load Kernel Modules.
Feb 23 17:50:09 localhost systemd[1]: Finished Generate network units from Kernel command line.
Feb 23 17:50:09 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Feb 23 17:50:09 localhost systemd[1]: Mounting FUSE Control File System...
Feb 23 17:50:09 localhost systemd[1]: Mounting Kernel Configuration File System...
Feb 23 17:50:09 localhost systemd[1]: Special handling of early boot iSCSI sessions was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/iscsi_session).
Feb 23 17:50:09 localhost systemd[1]: Starting Rebuild Hardware Database...
Feb 23 17:50:09 localhost systemd[1]: Starting Apply Kernel Variables...
Feb 23 17:50:09 localhost systemd[1]: Starting Create System Users...
Feb 23 17:50:09 localhost systemd[1]: Finished Coldplug All udev Devices.
Feb 23 17:50:09 localhost systemd[1]: Mounted FUSE Control File System.
Feb 23 17:50:09 localhost systemd[1]: Mounted Kernel Configuration File System.
Feb 23 17:50:09 localhost systemd[1]: Starting Wait for udev To Complete Device Initialization...
Feb 23 17:50:09 localhost udevadm[834]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Feb 23 17:50:09 localhost systemd[1]: Finished Apply Kernel Variables.
Feb 23 17:50:09 localhost systemd-sysusers[831]: Creating group 'sgx' with GID 991.
Feb 23 17:50:09 localhost systemd-sysusers[831]: Creating group 'systemd-oom' with GID 990.
Feb 23 17:50:09 localhost systemd-sysusers[831]: Creating user 'systemd-oom' (systemd Userspace OOM Killer) with UID 990 and GID 990.
Feb 23 17:50:10 localhost systemd[1]: Finished Create System Users.
Feb 23 17:50:10 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Feb 23 17:50:10 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Feb 23 17:50:10 localhost systemd[1]: Finished Rebuild Hardware Database.
Feb 23 17:50:10 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Feb 23 17:50:10 localhost systemd-udevd[841]: Using default interface naming scheme 'rhel-9.0'.
Feb 23 17:50:10 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Feb 23 17:50:10 localhost systemd[1]: Starting Load Kernel Module configfs...
Feb 23 17:50:10 localhost systemd[1]: Starting Load Kernel Module fuse...
Feb 23 17:50:10 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 23 17:50:10 localhost systemd[1]: Finished Load Kernel Module configfs.
Feb 23 17:50:10 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 23 17:50:10 localhost systemd[1]: Finished Load Kernel Module fuse.
Feb 23 17:50:10 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Feb 23 17:50:10 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255
Feb 23 17:50:10 localhost systemd[1]: Condition check resulted in Amazon Elastic Block Store boot being skipped.
Feb 23 17:50:10 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input5
Feb 23 17:50:10 localhost kernel: parport_pc 00:03: reported by Plug and Play ACPI
Feb 23 17:50:10 localhost kernel: ppdev: user-space parallel port driver
Feb 23 17:50:10 localhost kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 23 17:50:10 localhost systemd[1]: Finished Wait for udev To Complete Device Initialization.
Feb 23 17:50:10 localhost systemd[1]: Device-Mapper Multipath Device Controller was skipped because of an unmet condition check (ConditionPathExists=/etc/multipath.conf).
Feb 23 17:50:10 localhost systemd[1]: Reached target Preparation for Local File Systems.
Feb 23 17:50:10 localhost systemd[1]: var.mount: Directory /var to mount over is not empty, mounting anyway.
Feb 23 17:50:10 localhost systemd[1]: Mounting /var...
Feb 23 17:50:10 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/54e5ab65-ff73-4a26-8c44-2a9765abf45f...
Feb 23 17:50:10 localhost systemd[1]: Mounted /var.
Feb 23 17:50:10 localhost systemd[1]: Starting OSTree Remount OS/ Bind Mounts...
Feb 23 17:50:10 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 23 17:50:10 localhost systemd[1]: Finished OSTree Remount OS/ Bind Mounts.
Feb 23 17:50:10 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Feb 23 17:50:10 localhost systemd[1]: Starting Load/Save Random Seed...
Feb 23 17:50:10 localhost systemd-journald[813]: Time spent on flushing to /var/log/journal/ec2d456b0a3e28d0eb2f198315e90643 is 250.204ms for 926 entries.
Feb 23 17:50:10 localhost systemd-journald[813]: System Journal (/var/log/journal/ec2d456b0a3e28d0eb2f198315e90643) is 34.3M, max 4.0G, 3.9G free.
Feb 23 17:50:11 localhost systemd-journald[813]: Received client request to flush runtime journal.
Feb 23 17:50:11 localhost kernel: EXT4-fs (nvme0n1p3): mounted filesystem with ordered data mode. Quota mode: none.
Feb 23 17:50:11 localhost systemd-journald[813]: /var/log/journal/ec2d456b0a3e28d0eb2f198315e90643/system.journal uses an outdated header, suggesting rotation.
Feb 23 17:50:11 localhost systemd-journald[813]: /var/log/journal/ec2d456b0a3e28d0eb2f198315e90643/system.journal: Journal header limits reached or header out-of-date, rotating.
Feb 23 17:50:10 localhost systemd[1]: Finished Load/Save Random Seed.
Feb 23 17:50:11 localhost systemd-fsck[898]: boot: clean, 329/98304 files, 228040/393216 blocks
Feb 23 17:50:10 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/54e5ab65-ff73-4a26-8c44-2a9765abf45f.
Feb 23 17:50:10 localhost systemd[1]: Mounting CoreOS Dynamic Mount for /boot...
Feb 23 17:50:10 localhost systemd[1]: Mounted CoreOS Dynamic Mount for /boot.
Feb 23 17:50:11 localhost bootctl[909]: Couldn't find EFI system partition, skipping.
Feb 23 17:50:10 localhost systemd[1]: Reached target Local File Systems.
Feb 23 17:50:10 localhost systemd[1]: Starting Run update-ca-trust...
Feb 23 17:50:10 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Feb 23 17:50:10 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Feb 23 17:50:10 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 23 17:50:10 localhost systemd[1]: Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 23 17:50:10 localhost systemd[1]: Starting Automatic Boot Loader Update...
Feb 23 17:50:10 localhost systemd[1]: Finished Automatic Boot Loader Update.
Feb 23 17:50:11 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Feb 23 17:50:11 localhost systemd[1]: Starting Create Volatile Files and Directories...
Feb 23 17:50:11 localhost systemd-tmpfiles[917]: /usr/lib/tmpfiles.d/tmp.conf:12: Duplicate line for path "/var/tmp", ignoring.
Feb 23 17:50:11 localhost systemd-tmpfiles[917]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Feb 23 17:50:11 localhost systemd-tmpfiles[917]: /usr/lib/tmpfiles.d/var.conf:19: Duplicate line for path "/var/cache", ignoring.
Feb 23 17:50:11 localhost systemd-tmpfiles[917]: /usr/lib/tmpfiles.d/var.conf:21: Duplicate line for path "/var/lib", ignoring.
Feb 23 17:50:11 localhost systemd-tmpfiles[917]: /usr/lib/tmpfiles.d/var.conf:23: Duplicate line for path "/var/spool", ignoring.
Feb 23 17:50:11 localhost systemd-tmpfiles[917]: "/home" already exists and is not a directory.
Feb 23 17:50:11 localhost systemd-tmpfiles[917]: "/srv" already exists and is not a directory.
Feb 23 17:50:11 localhost systemd-tmpfiles[917]: "/root" already exists and is not a directory.
Feb 23 17:50:11 localhost systemd[1]: Finished Run update-ca-trust.
Feb 23 17:50:11 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Feb 23 17:50:11 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Feb 23 17:50:11 localhost systemd[1]: Finished Create Volatile Files and Directories.
Feb 23 17:50:11 localhost systemd[1]: Starting Security Auditing Service...
Feb 23 17:50:11 localhost systemd[1]: Starting RHEL CoreOS Rebuild SELinux Policy If Necessary...
Feb 23 17:50:11 localhost systemd[1]: Starting RHCOS Fix SELinux Labeling For /usr/local/sbin...
Feb 23 17:50:11 localhost rhcos-rebuild-selinux-policy[924]: RHEL_VERSION=9Assuming we have new enough ostree
Feb 23 17:50:11 localhost systemd[1]: Starting Rebuild Journal Catalog...
Feb 23 17:50:11 localhost chcon[926]: changing security context of '/usr/local/sbin'
Feb 23 17:50:11 localhost systemd[1]: Finished RHEL CoreOS Rebuild SELinux Policy If Necessary.
Feb 23 17:50:11 localhost systemd[1]: Finished RHCOS Fix SELinux Labeling For /usr/local/sbin.
Feb 23 17:50:11 localhost systemd[1]: Finished Rebuild Journal Catalog.
Feb 23 17:50:11 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Feb 23 17:50:11 localhost systemd[1]: Starting Update is Completed...
Feb 23 17:50:11 localhost systemd[1]: Finished Update is Completed.
Feb 23 17:50:11 localhost auditd[936]: No plugins found, not dispatching events
Feb 23 17:50:11 localhost auditd[936]: Init complete, auditd 3.0.7 listening for events (startup state enable)
Feb 23 17:50:11 localhost augenrules[939]: /sbin/augenrules: No change
Feb 23 17:50:11 localhost augenrules[950]: No rules
Feb 23 17:50:11 localhost augenrules[950]: enabled 1
Feb 23 17:50:11 localhost augenrules[950]: failure 1
Feb 23 17:50:11 localhost augenrules[950]: pid 936
Feb 23 17:50:11 localhost augenrules[950]: rate_limit 0
Feb 23 17:50:11 localhost augenrules[950]: backlog_limit 8192
Feb 23 17:50:11 localhost augenrules[950]: lost 0
Feb 23 17:50:11 localhost augenrules[950]: backlog 0
Feb 23 17:50:11 localhost augenrules[950]: backlog_wait_time 60000
Feb 23 17:50:11 localhost augenrules[950]: backlog_wait_time_actual 0
Feb 23 17:50:11 localhost augenrules[950]: enabled 1
Feb 23 17:50:11 localhost augenrules[950]: failure 1
Feb 23 17:50:11 localhost augenrules[950]: pid 936
Feb 23 17:50:11 localhost augenrules[950]: rate_limit 0
Feb 23 17:50:11 localhost augenrules[950]: backlog_limit 8192
Feb 23 17:50:11 localhost augenrules[950]: lost 0
Feb 23 17:50:11 localhost augenrules[950]: backlog 0
Feb 23 17:50:11 localhost augenrules[950]: backlog_wait_time 60000
Feb 23 17:50:11 localhost augenrules[950]: backlog_wait_time_actual 0
Feb 23 17:50:11 localhost augenrules[950]: enabled 1
Feb 23 17:50:11 localhost augenrules[950]: failure 1
Feb 23 17:50:11 localhost augenrules[950]: pid 936
Feb 23 17:50:11 localhost augenrules[950]: rate_limit 0
Feb 23 17:50:11 localhost augenrules[950]: backlog_limit 8192
Feb 23 17:50:11 localhost augenrules[950]: lost 0
Feb 23 17:50:11 localhost augenrules[950]: backlog 1
Feb 23 17:50:11 localhost augenrules[950]: backlog_wait_time 60000
Feb 23 17:50:11 localhost augenrules[950]: backlog_wait_time_actual 0
Feb 23 17:50:11 localhost systemd[1]: Started Security Auditing Service.
Feb 23 17:50:11 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Feb 23 17:50:11 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Feb 23 17:50:11 localhost systemd[1]: Reached target System Initialization.
Feb 23 17:50:11 localhost systemd[1]: Started OSTree Monitor Staged Deployment.
Feb 23 17:50:11 localhost systemd[1]: Started Daily rotation of log files.
Feb 23 17:50:11 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Feb 23 17:50:11 localhost systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Feb 23 17:50:11 localhost systemd[1]: Reached target Path Units.
Feb 23 17:50:11 localhost systemd[1]: Reached target Timer Units.
Feb 23 17:50:11 localhost systemd[1]: Listening on bootupd.socket.
Feb 23 17:50:11 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Feb 23 17:50:11 localhost systemd[1]: Listening on Open-iSCSI iscsiuio Socket.
Feb 23 17:50:11 localhost systemd[1]: Reached target Socket Units.
Feb 23 17:50:11 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 23 17:50:11 localhost systemd[1]: Reached target Basic System.
Feb 23 17:50:11 localhost systemd[1]: Cleans NetworkManager state generated by dracut was skipped because of an unmet condition check (ConditionPathExists=/var/lib/mco/nm-clean-initrd-state).
Feb 23 17:50:11 localhost systemd[1]: Reached target Preparation for Network.
Feb 23 17:50:11 localhost systemd[1]: Starting Afterburn (Metadata)...
Feb 23 17:50:11 localhost systemd[1]: Starting NTP client/server...
Feb 23 17:50:11 localhost systemd[1]: CoreOS Generate iSCSI Initiator Name was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Feb 23 17:50:11 localhost systemd[1]: CoreOS Delete Ignition Config From Hypervisor was skipped because no trigger condition checks were met.
Feb 23 17:50:11 localhost systemd[1]: CoreOS Mark Ignition Boot Complete was skipped because of an unmet condition check (ConditionKernelCommandLine=ignition.firstboot).
Feb 23 17:50:11 localhost systemd[1]: Starting Create Ignition Status Issue Files...
Feb 23 17:50:11 localhost systemd[1]: Starting Generation of shadow ID ranges for CRI-O...
Feb 23 17:50:11 localhost systemd[1]: Starting CRI-O Auto Update Script...
Feb 23 17:50:11 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Feb 23 17:50:11 localhost systemd[1]: Started irqbalance daemon.
Feb 23 17:50:11 localhost systemd[1]: Software RAID monitoring and management was skipped because of an unmet condition check (ConditionPathExists=/etc/mdadm.conf).
Feb 23 17:50:11 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Feb 23 17:50:11 localhost systemd[1]: Auto-connect to subsystems on FC-NVME devices found during boot was skipped because of an unmet condition check (ConditionPathExists=/sys/class/fc/fc_udev_device/nvme_discovery).
Feb 23 17:50:11 localhost systemd[1]: OSTree Complete Boot was skipped because no trigger condition checks were met.
Feb 23 17:50:11 localhost systemd[1]: Read-Only Sysroot Migration was skipped because of an unmet condition check (ConditionPathIsReadWrite=/sysroot).
Feb 23 17:50:11 localhost systemd[1]: Starting Open vSwitch Database Unit...
Feb 23 17:50:11 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because no trigger condition checks were met.
Feb 23 17:50:11 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because no trigger condition checks were met.
Feb 23 17:50:11 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because no trigger condition checks were met.
Feb 23 17:50:11 localhost systemd[1]: Reached target sshd-keygen.target.
Feb 23 17:50:11 localhost systemd[1]: Starting Generate SSH keys snippet for display via console-login-helper-messages...
Feb 23 17:50:11 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Feb 23 17:50:11 localhost systemd[1]: Reached target User and Group Name Lookups.
Feb 23 17:50:11 localhost systemd[1]: Starting User Login Management...
Feb 23 17:50:11 localhost systemd[1]: VGAuth Service for open-vm-tools was skipped because of an unmet condition check (ConditionVirtualization=vmware).
Feb 23 17:50:11 localhost systemd[1]: Service for virtual machines hosted on VMware was skipped because of an unmet condition check (ConditionVirtualization=vmware).
Feb 23 17:50:11 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Feb 23 17:50:11 localhost chown[991]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Feb 23 17:50:11 localhost chronyd[1040]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Feb 23 17:50:11 localhost chronyd[1040]: Frequency 0.529 +/- 0.046 ppm read from /var/lib/chrony/drift
Feb 23 17:50:11 localhost chronyd[1040]: Loaded seccomp filter (level 2)
Feb 23 17:50:11 localhost systemd[1]: Started NTP client/server.
Feb 23 17:50:11 localhost systemd[1]: crio-subid.service: Deactivated successfully.
Feb 23 17:50:11 localhost systemd[1]: Finished Generation of shadow ID ranges for CRI-O.
Feb 23 17:50:11 localhost systemd[1]: Finished Generate SSH keys snippet for display via console-login-helper-messages.
Feb 23 17:50:11 localhost systemd[1]: Finished Create Ignition Status Issue Files.
Feb 23 17:50:11 localhost systemd-logind[985]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 23 17:50:11 localhost systemd-logind[985]: Watching system buttons on /dev/input/event1 (Sleep Button)
Feb 23 17:50:11 localhost systemd-logind[985]: Watching system buttons on /dev/input/event2 (AT Translated Set 2 keyboard)
Feb 23 17:50:11 localhost systemd-logind[985]: New seat seat0.
Feb 23 17:50:11 localhost systemd[1]: Starting D-Bus System Message Bus...
Feb 23 17:50:11 localhost afterburn[960]: Feb 23 17:50:11.856 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Feb 23 17:50:11 localhost dbus-broker-launch[1056]: Looking up NSS user entry for 'dbus'...
Feb 23 17:50:11 localhost dbus-broker-launch[1056]: NSS returned NAME 'dbus' and UID '81'
Feb 23 17:50:11 localhost dbus-broker-launch[1056]: Looking up NSS user entry for 'polkitd'...
Feb 23 17:50:11 localhost dbus-broker-launch[1056]: NSS returned NAME 'polkitd' and UID '999'
Feb 23 17:50:11 localhost systemd[1]: Started D-Bus System Message Bus.
Feb 23 17:50:11 localhost ovsdb-server[1072]: ovs|00002|stream_ssl|ERR|SSL_use_certificate_file: error:80000002:system library::No such file or directory
Feb 23 17:50:11 localhost ovsdb-server[1072]: ovs|00003|stream_ssl|ERR|SSL_use_PrivateKey_file: error:10080002:BIO routines::system lib
Feb 23 17:50:11 localhost ovs-ctl[1004]: Starting ovsdb-server.
Feb 23 17:50:11 localhost dbus-broker-lau[1056]: Ready
Feb 23 17:50:11 localhost systemd[1]: Started User Login Management.
Feb 23 17:50:11 localhost ovs-vsctl[1074]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.3.0
Feb 23 17:50:12 localhost ovs-vsctl[1079]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch .
ovs-version=2.17.6 "external-ids:system-id=\"4004906b-6ca5-4a32-b3c0-bdcf1c128aba\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"rhcos\"" "system-version=\"4.13\"" Feb 23 17:50:12 localhost ovs-ctl[1004]: Configuring Open vSwitch system IDs. Feb 23 17:50:12 localhost ovs-vsctl[1086]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=localhost Feb 23 17:50:12 localhost ovs-ctl[1004]: Enabling remote OVSDB managers. Feb 23 17:50:12 localhost systemd[1]: Started Open vSwitch Database Unit. Feb 23 17:50:12 localhost systemd[1]: Starting Open vSwitch Delete Transient Ports... Feb 23 17:50:12 localhost ovs-vsctl[1094]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- del-port 9ac9106efc7becf Feb 23 17:50:12 localhost ovs-vsctl[1095]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- del-port aa2f6c1cfe2015e Feb 23 17:50:12 localhost ovs-vsctl[1096]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- del-port 13a3543931af50f Feb 23 17:50:12 localhost ovs-vsctl[1097]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- del-port e35d890abd5d4b0 Feb 23 17:50:12 localhost ovs-vsctl[1098]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- del-port 904f3beae60de67 Feb 23 17:50:12 localhost systemd[1]: Finished Open vSwitch Delete Transient Ports. Feb 23 17:50:12 localhost systemd[1]: Starting Open vSwitch Forwarding Unit... Feb 23 17:50:12 localhost kernel: Failed to create system directory openvswitch Feb 23 17:50:12 localhost kernel: Failed to create system directory openvswitch Feb 23 17:50:12 localhost kernel: openvswitch: Open vSwitch switching datapath Feb 23 17:50:12 localhost ovs-ctl[1138]: Inserting openvswitch module. 
Feb 23 17:50:12 localhost crio[965]: time="2023-02-23 17:50:12.294599156Z" level=info msg="Starting CRI-O, version: 1.26.1-4.rhaos4.13.gita78722c.el9, git: unknown(clean)"
Feb 23 17:50:12 localhost systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck1946761227-merged.mount: Deactivated successfully.
Feb 23 17:50:12 localhost ovs-vswitchd[1145]: ovs|00007|stream_ssl|ERR|SSL_use_certificate_file: error:80000002:system library::No such file or directory
Feb 23 17:50:12 localhost ovs-vswitchd[1145]: ovs|00008|stream_ssl|ERR|SSL_use_PrivateKey_file: error:10080002:BIO routines::system lib
Feb 23 17:50:12 localhost ovs-vswitchd[1145]: ovs|00009|stream_ssl|ERR|failed to load client certificates from /ovn-ca/ca-bundle.crt: error:0A080002:SSL routines::system lib
Feb 23 17:50:12 localhost kernel: device ovs-system entered promiscuous mode
Feb 23 17:50:12 localhost kernel: Timeout policy base is empty
Feb 23 17:50:12 localhost kernel: Failed to associated timeout policy `ovs_test_tp'
Feb 23 17:50:12 localhost kernel: device ens5 entered promiscuous mode
Feb 23 17:50:12 localhost kernel: device ovn-k8s-mp0 entered promiscuous mode
Feb 23 17:50:12 localhost kernel: device genev_sys_6081 entered promiscuous mode
Feb 23 17:50:12 localhost kernel: device br-int entered promiscuous mode
Feb 23 17:50:12 localhost ovs-ctl[1110]: Starting ovs-vswitchd.
Feb 23 17:50:12 localhost ovs-vsctl[1172]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=localhost
Feb 23 17:50:12 localhost ovs-ctl[1110]: Enabling remote OVSDB managers.
Feb 23 17:50:12 localhost systemd[1]: Started Open vSwitch Forwarding Unit.
Feb 23 17:50:12 localhost systemd[1]: Starting Open vSwitch...
Feb 23 17:50:12 localhost systemd[1]: Finished Open vSwitch.
Feb 23 17:50:12 localhost systemd[1]: Starting Network Manager...
Feb 23 17:50:12 localhost NetworkManager[1177]: [1677174612.5621] NetworkManager (version 1.42.0-1.el9) is starting... (boot:7e69ac5f-095f-4a9b-b24c-6e8366d55bca)
Feb 23 17:50:12 localhost NetworkManager[1177]: [1677174612.5623] Read config: /etc/NetworkManager/NetworkManager.conf (lib: 10-disable-default-plugins.conf, 20-client-id-from-mac.conf) (run: 15-carrier-timeout.conf) (etc: 20-keyfiles.conf, sdn.conf)
Feb 23 17:50:12 localhost NetworkManager[1177]: [1677174612.5671] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Feb 23 17:50:12 localhost systemd[1]: Started Network Manager.
Feb 23 17:50:12 localhost systemd[1]: Reached target Network.
Feb 23 17:50:12 localhost systemd[1]: Starting Network Manager Wait Online...
Feb 23 17:50:12 localhost systemd[1]: Update GCP routes for forwarded IPs. was skipped because no trigger condition checks were met.
Feb 23 17:50:12 localhost systemd[1]: Starting OpenSSH server daemon...
Feb 23 17:50:12 localhost NetworkManager[1177]: [1677174612.5796] manager[0x563721e18030]: monitoring kernel firmware directory '/lib/firmware'.
Feb 23 17:50:12 localhost systemd[1]: Starting Hostname Service...
Feb 23 17:50:12 localhost sshd[1181]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 17:50:12 localhost sshd[1181]: Server listening on 0.0.0.0 port 22.
Feb 23 17:50:12 localhost sshd[1181]: Server listening on :: port 22.
Feb 23 17:50:12 localhost systemd[1]: Started OpenSSH server daemon.
Feb 23 17:50:12 localhost crio[965]: time="2023-02-23 17:50:12.617628845Z" level=info msg="Checking whether cri-o should wipe containers: open /var/run/crio/version: no such file or directory"
Feb 23 17:50:12 localhost systemd[1]: crio-wipe.service: Deactivated successfully.
Feb 23 17:50:12 localhost systemd[1]: Finished CRI-O Auto Update Script.
Feb 23 17:50:12 localhost systemd[1]: Started Hostname Service.
Feb 23 17:50:12 localhost NetworkManager[1177]: [1677174612.6904] hostname: hostname: using hostnamed
Feb 23 17:50:12 localhost NetworkManager[1177]: [1677174612.6918] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Feb 23 17:50:12 localhost NetworkManager[1177]: [1677174612.6929] policy: set-hostname: set hostname to 'localhost.localdomain' (no hostname found)
Feb 23 17:50:12 localhost.localdomain systemd-hostnamed[1186]: Hostname set to (transient)
Feb 23 17:50:12 localhost.localdomain NetworkManager[1177]: [1677174612.6999] manager[0x563721e18030]: rfkill: Wi-Fi hardware radio set enabled
Feb 23 17:50:12 localhost.localdomain NetworkManager[1177]: [1677174612.7000] manager[0x563721e18030]: rfkill: WWAN hardware radio set enabled
Feb 23 17:50:12 localhost.localdomain systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Feb 23 17:50:12 localhost.localdomain NetworkManager[1177]: [1677174612.7061] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.42.0-1.el9/libnm-device-plugin-ovs.so)
Feb 23 17:50:12 localhost.localdomain NetworkManager[1177]: [1677174612.7086] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.42.0-1.el9/libnm-device-plugin-team.so)
Feb 23 17:50:12 localhost.localdomain NetworkManager[1177]: [1677174612.7086] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Feb 23 17:50:12 localhost.localdomain NetworkManager[1177]: [1677174612.7097] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Feb 23 17:50:12 localhost.localdomain NetworkManager[1177]: [1677174612.7098] manager: Networking is enabled by state file
Feb 23 17:50:12 localhost.localdomain NetworkManager[1177]: [1677174612.7135] settings: Loaded settings plugin: keyfile (internal)
Feb 23 17:50:12 localhost.localdomain systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 23 17:50:12 localhost.localdomain NetworkManager[1177]: [1677174612.7170] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.42.0-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Feb 23 17:50:12 localhost.localdomain NetworkManager[1177]: [1677174612.7217] dhcp: init: Using DHCP client 'internal'
Feb 23 17:50:12 localhost.localdomain NetworkManager[1177]: [1677174612.7218] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Feb 23 17:50:12 localhost.localdomain NetworkManager[1177]: [1677174612.7230] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external')
Feb 23 17:50:12 localhost.localdomain NetworkManager[1177]: [1677174612.7234] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external')
Feb 23 17:50:12 localhost.localdomain systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 23 17:50:12 localhost.localdomain NetworkManager[1177]: [1677174612.7240] device (lo): Activation: starting connection 'lo' (749f9974-6e7f-442a-ac13-546c37530197)
Feb 23 17:50:12 localhost.localdomain NetworkManager[1177]: [1677174612.7257] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/2)
Feb 23 17:50:12 localhost.localdomain NetworkManager[1177]: [1677174612.7263] manager: (ovn-k8s-mp0): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/3)
Feb 23 17:50:12 localhost.localdomain NetworkManager[1177]: [1677174612.7271] manager: (ens5): new Ethernet device (/org/freedesktop/NetworkManager/Devices/4)
Feb 23 17:50:12 localhost.localdomain NetworkManager[1177]: [1677174612.7281] settings: (ens5): created default wired connection 'Wired connection 1'
Feb 23 17:50:12 localhost.localdomain NetworkManager[1177]: [1677174612.7282] device (ens5): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
Feb 23 17:50:12 localhost.localdomain NetworkManager[1177]: [1677174612.7333] device (genev_sys_6081): carrier: link connected
Feb 23 17:50:12 localhost.localdomain NetworkManager[1177]: [1677174612.7336] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/5)
Feb 23 17:50:12 localhost.localdomain NetworkManager[1177]: [1677174612.7368] device (lo): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external')
Feb 23 17:50:12 localhost.localdomain NetworkManager[1177]: [1677174612.7370] device (lo): state change: prepare -> config (reason 'none', sys-iface-state: 'external')
Feb 23 17:50:12 localhost.localdomain NetworkManager[1177]: [1677174612.7370] device (lo): state change: config -> ip-config (reason 'none', sys-iface-state: 'external')
Feb 23 17:50:12 localhost.localdomain NetworkManager[1177]: [1677174612.7371] device (ens5): carrier: link connected
Feb 23 17:50:12 localhost.localdomain NetworkManager[1177]: [1677174612.7379] manager: (patch-br-ex_ip-10-0-136-68.us-west-2.compute.internal-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/6)
Feb 23 17:50:12 localhost.localdomain NetworkManager[1177]: [1677174612.7384] manager: (patch-br-int-to-br-ex_ip-10-0-136-68.us-west-2.compute.internal): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/7)
Feb 23 17:50:12 localhost.localdomain NetworkManager[1177]: [1677174612.7389] manager: (ovn-7dfb31-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Feb 23 17:50:12 localhost.localdomain NetworkManager[1177]: [1677174612.7392] manager: (ovn-5a9c4f-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Feb 23 17:50:12 localhost.localdomain NetworkManager[1177]: [1677174612.7396] manager: (patch-br-ex_ip-10-0-136-68.us-west-2.compute.internal-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Feb 23 17:50:12 localhost.localdomain NetworkManager[1177]: [1677174612.7399] manager: (ovn-b823f7-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/11)
Feb 23 17:50:12 localhost.localdomain NetworkManager[1177]: [1677174612.7403] manager: (ovn-061a07-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/12)
Feb 23 17:50:12 localhost.localdomain NetworkManager[1177]: [1677174612.7407] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/13)
Feb 23 17:50:12 localhost.localdomain NetworkManager[1177]: [1677174612.7410] manager: (ens5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/14)
Feb 23 17:50:12 localhost.localdomain NetworkManager[1177]: [1677174612.7414] manager: (ovn-72cfee-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/15)
Feb 23 17:50:12 localhost.localdomain NetworkManager[1177]: [1677174612.7419] manager: (ovn-k8s-mp0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/16)
Feb 23 17:50:12 localhost.localdomain NetworkManager[1177]: [1677174612.7424] manager: (patch-br-int-to-br-ex_ip-10-0-136-68.us-west-2.compute.internal): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Feb 23 17:50:12 localhost.localdomain NetworkManager[1177]: [1677174612.7428] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Feb 23 17:50:12 localhost.localdomain NetworkManager[1177]: [1677174612.7433] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/19)
Feb 23 17:50:12 localhost.localdomain ovs-vswitchd[1145]: ovs|00054|bridge|INFO|bridge br-ex: deleted interface ens5 on port 1
Feb 23 17:50:12 localhost.localdomain NetworkManager[1177]: [1677174612.7448] device (lo): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external')
Feb 23 17:50:12 localhost.localdomain NetworkManager[1177]: [1677174612.7452] device (ens5): state change: unavailable -> disconnected (reason 'carrier-changed', sys-iface-state: 'managed')
Feb 23 17:50:12 localhost.localdomain kernel: device ens5 left promiscuous mode
Feb 23 17:50:12 localhost.localdomain NetworkManager[1177]: [1677174612.7558] policy: auto-activating connection 'Wired connection 1' (eb99b8bd-8e1f-3f41-845b-962703e428f7)
Feb 23 17:50:12 localhost.localdomain NetworkManager[1177]: [1677174612.7562] device (ens5): Activation: starting connection 'Wired connection 1' (eb99b8bd-8e1f-3f41-845b-962703e428f7)
Feb 23 17:50:12 localhost.localdomain NetworkManager[1177]: [1677174612.7563] device (ens5): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Feb 23 17:50:12 localhost.localdomain NetworkManager[1177]: [1677174612.7565] manager: NetworkManager state is now CONNECTING
Feb 23 17:50:12 localhost.localdomain NetworkManager[1177]: [1677174612.7566] device (ens5): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Feb 23 17:50:12 localhost.localdomain NetworkManager[1177]: [1677174612.7570] device (ens5): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Feb 23 17:50:12 localhost.localdomain NetworkManager[1177]: [1677174612.7644] dhcp4 (ens5): activation: beginning transaction (timeout in 45 seconds)
Feb 23 17:50:12 localhost.localdomain NetworkManager[1177]: [1677174612.7670] dhcp4 (ens5): state changed new lease, address=10.0.136.68
Feb 23 17:50:12 localhost.localdomain NetworkManager[1177]: [1677174612.7673] policy: set 'Wired connection 1' (ens5) as default for IPv4 routing and DNS
Feb 23 17:50:12 localhost.localdomain NetworkManager[1177]: [1677174612.7676] policy: set-hostname: set hostname to 'ip-10-0-136-68' (from DHCPv4)
Feb 23 17:50:12 ip-10-0-136-68 systemd-hostnamed[1186]: Hostname set to (transient)
Feb 23 17:50:12 ip-10-0-136-68 NetworkManager[1177]: [1677174612.7770] device (ens5): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed')
Feb 23 17:50:12 ip-10-0-136-68 nm-dispatcher[1210]: Error: Device '' not found.
Feb 23 17:50:12 ip-10-0-136-68 nm-dispatcher[1227]: Error: Device '' not found.
Feb 23 17:50:12 ip-10-0-136-68 nm-dispatcher[1239]: Error: Device '' not found.
Feb 23 17:50:12 ip-10-0-136-68 nm-dispatcher[1243]: + [[ OVNKubernetes != \O\V\N\K\u\b\e\r\n\e\t\e\s ]]
Feb 23 17:50:12 ip-10-0-136-68 nm-dispatcher[1243]: + INTERFACE_NAME=lo
Feb 23 17:50:12 ip-10-0-136-68 nm-dispatcher[1243]: + OPERATION=pre-up
Feb 23 17:50:12 ip-10-0-136-68 nm-dispatcher[1243]: + '[' pre-up '!=' pre-up ']'
Feb 23 17:50:12 ip-10-0-136-68 nm-dispatcher[1245]: ++ nmcli -t -f device,type,uuid conn
Feb 23 17:50:12 ip-10-0-136-68 nm-dispatcher[1246]: ++ awk -F : '{if($1=="lo" && $2!~/^ovs*/) print $NF}'
Feb 23 17:50:12 ip-10-0-136-68 nm-dispatcher[1243]: + INTERFACE_CONNECTION_UUID=749f9974-6e7f-442a-ac13-546c37530197
Feb 23 17:50:12 ip-10-0-136-68 nm-dispatcher[1243]: + '[' 749f9974-6e7f-442a-ac13-546c37530197 == '' ']'
Feb 23 17:50:12 ip-10-0-136-68 nm-dispatcher[1251]: ++ nmcli -t -f connection.slave-type conn show 749f9974-6e7f-442a-ac13-546c37530197
Feb 23 17:50:12 ip-10-0-136-68 nm-dispatcher[1252]: ++ awk -F : '{print $NF}'
Feb 23 17:50:12 ip-10-0-136-68 nm-dispatcher[1243]: + INTERFACE_OVS_SLAVE_TYPE=
Feb 23 17:50:12 ip-10-0-136-68 nm-dispatcher[1243]: + '[' '' '!=' ovs-port ']'
Feb 23 17:50:12 ip-10-0-136-68 nm-dispatcher[1243]: + exit 0
Feb 23 17:50:12 ip-10-0-136-68 NetworkManager[1177]: [1677174612.8545] device (lo): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external')
Feb 23 17:50:12 ip-10-0-136-68 NetworkManager[1177]: [1677174612.8548] device (lo): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external')
Feb 23 17:50:12 ip-10-0-136-68 NetworkManager[1177]: [1677174612.8552] device (lo): Activation: successful, device activated.
Feb 23 17:50:12 ip-10-0-136-68 afterburn[960]: Feb 23 17:50:12.881 INFO Putting http://169.254.169.254/latest/api/token: Attempt #2
Feb 23 17:50:12 ip-10-0-136-68 afterburn[960]: Feb 23 17:50:12.883 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Feb 23 17:50:12 ip-10-0-136-68 afterburn[960]: Feb 23 17:50:12.884 INFO Fetch successful
Feb 23 17:50:12 ip-10-0-136-68 afterburn[960]: Feb 23 17:50:12.884 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Feb 23 17:50:12 ip-10-0-136-68 afterburn[960]: Feb 23 17:50:12.885 INFO Fetch successful
Feb 23 17:50:12 ip-10-0-136-68 afterburn[960]: Feb 23 17:50:12.885 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Feb 23 17:50:12 ip-10-0-136-68 afterburn[960]: Feb 23 17:50:12.885 INFO Fetch successful
Feb 23 17:50:12 ip-10-0-136-68 afterburn[960]: Feb 23 17:50:12.885 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Feb 23 17:50:12 ip-10-0-136-68 afterburn[960]: Feb 23 17:50:12.886 INFO Fetch failed with 404: resource not found
Feb 23 17:50:12 ip-10-0-136-68 afterburn[960]: Feb 23 17:50:12.886 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Feb 23 17:50:12 ip-10-0-136-68 afterburn[960]: Feb 23 17:50:12.886 INFO Fetch failed with 404: resource not found
Feb 23 17:50:12 ip-10-0-136-68 afterburn[960]: Feb 23 17:50:12.886 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Feb 23 17:50:12 ip-10-0-136-68 afterburn[960]: Feb 23 17:50:12.887 INFO Fetch successful
Feb 23 17:50:12 ip-10-0-136-68 afterburn[960]: Feb 23 17:50:12.887 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Feb 23 17:50:12 ip-10-0-136-68 afterburn[960]: Feb 23 17:50:12.888 INFO Fetch successful
Feb 23 17:50:12 ip-10-0-136-68 afterburn[960]: Feb 23 17:50:12.888 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Feb 23 17:50:12 ip-10-0-136-68 afterburn[960]: Feb 23 17:50:12.888 INFO Fetch successful
Feb 23 17:50:12 ip-10-0-136-68 afterburn[960]: Feb 23 17:50:12.888 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Feb 23 17:50:12 ip-10-0-136-68 afterburn[960]: Feb 23 17:50:12.889 INFO Fetch failed with 404: resource not found
Feb 23 17:50:12 ip-10-0-136-68 afterburn[960]: Feb 23 17:50:12.889 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Feb 23 17:50:12 ip-10-0-136-68 afterburn[960]: Feb 23 17:50:12.889 INFO Fetch successful
Feb 23 17:50:12 ip-10-0-136-68 systemd[1]: Finished Afterburn (Metadata).
Feb 23 17:50:12 ip-10-0-136-68 nm-dispatcher[1281]: Error: Device '' not found.
Feb 23 17:50:12 ip-10-0-136-68 nm-dispatcher[1285]: + [[ OVNKubernetes != \O\V\N\K\u\b\e\r\n\e\t\e\s ]]
Feb 23 17:50:12 ip-10-0-136-68 nm-dispatcher[1285]: + INTERFACE_NAME=ens5
Feb 23 17:50:12 ip-10-0-136-68 nm-dispatcher[1285]: + OPERATION=pre-up
Feb 23 17:50:12 ip-10-0-136-68 nm-dispatcher[1285]: + '[' pre-up '!=' pre-up ']'
Feb 23 17:50:12 ip-10-0-136-68 nm-dispatcher[1287]: ++ nmcli -t -f device,type,uuid conn
Feb 23 17:50:12 ip-10-0-136-68 nm-dispatcher[1288]: ++ awk -F : '{if($1=="ens5" && $2!~/^ovs*/) print $NF}'
Feb 23 17:50:12 ip-10-0-136-68 nm-dispatcher[1285]: + INTERFACE_CONNECTION_UUID=eb99b8bd-8e1f-3f41-845b-962703e428f7
Feb 23 17:50:12 ip-10-0-136-68 nm-dispatcher[1285]: + '[' eb99b8bd-8e1f-3f41-845b-962703e428f7 == '' ']'
Feb 23 17:50:12 ip-10-0-136-68 nm-dispatcher[1293]: ++ nmcli -t -f connection.slave-type conn show eb99b8bd-8e1f-3f41-845b-962703e428f7
Feb 23 17:50:12 ip-10-0-136-68 nm-dispatcher[1294]: ++ awk -F : '{print $NF}'
Feb 23 17:50:12 ip-10-0-136-68 nm-dispatcher[1285]: + INTERFACE_OVS_SLAVE_TYPE=
Feb 23 17:50:12 ip-10-0-136-68 nm-dispatcher[1285]: + '[' '' '!=' ovs-port ']'
Feb 23 17:50:12 ip-10-0-136-68 nm-dispatcher[1285]: + exit 0
Feb 23 17:50:12 ip-10-0-136-68 NetworkManager[1177]: [1677174612.9315] device (ens5): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed')
Feb 23 17:50:12 ip-10-0-136-68 NetworkManager[1177]: [1677174612.9317] device (ens5): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed')
Feb 23 17:50:12 ip-10-0-136-68 NetworkManager[1177]: [1677174612.9319] manager: NetworkManager state is now CONNECTED_SITE
Feb 23 17:50:12 ip-10-0-136-68 NetworkManager[1177]: [1677174612.9321] device (ens5): Activation: successful, device activated.
Feb 23 17:50:12 ip-10-0-136-68 NetworkManager[1177]: [1677174612.9324] manager: NetworkManager state is now CONNECTED_GLOBAL
Feb 23 17:50:12 ip-10-0-136-68 NetworkManager[1177]: [1677174612.9327] manager: startup complete
Feb 23 17:50:12 ip-10-0-136-68 systemd[1]: Finished Network Manager Wait Online.
Feb 23 17:50:12 ip-10-0-136-68 systemd[1]: Starting Fetch kubelet node name from AWS Metadata...
Feb 23 17:50:12 ip-10-0-136-68 systemd[1]: Starting Fetch kubelet provider id from AWS Metadata...
Feb 23 17:50:12 ip-10-0-136-68 aws-kubelet-nodename[1304]: Not replacing existing /etc/systemd/system/kubelet.service.d/20-aws-node-name.conf
Feb 23 17:50:12 ip-10-0-136-68 systemd[1]: Starting Configures OVS with proper host networking configuration...
Feb 23 17:50:12 ip-10-0-136-68 aws-kubelet-providerid[1305]: Not replacing existing /etc/systemd/system/kubelet.service.d/20-aws-providerid.conf
Feb 23 17:50:12 ip-10-0-136-68 systemd[1]: aws-kubelet-nodename.service: Deactivated successfully.
Feb 23 17:50:12 ip-10-0-136-68 systemd[1]: Finished Fetch kubelet node name from AWS Metadata.
Feb 23 17:50:12 ip-10-0-136-68 systemd[1]: aws-kubelet-providerid.service: Deactivated successfully.
Feb 23 17:50:12 ip-10-0-136-68 systemd[1]: Finished Fetch kubelet provider id from AWS Metadata.
Feb 23 17:50:12 ip-10-0-136-68 configure-ovs.sh[1307]: + touch /var/run/ovs-config-executed
Feb 23 17:50:12 ip-10-0-136-68 configure-ovs.sh[1307]: + NM_CONN_ETC_PATH=/etc/NetworkManager/system-connections
Feb 23 17:50:12 ip-10-0-136-68 configure-ovs.sh[1307]: + NM_CONN_RUN_PATH=/run/NetworkManager/system-connections
Feb 23 17:50:12 ip-10-0-136-68 configure-ovs.sh[1307]: + NM_CONN_CONF_PATH=/etc/NetworkManager/system-connections
Feb 23 17:50:12 ip-10-0-136-68 configure-ovs.sh[1307]: + NM_CONN_SET_PATH=/run/NetworkManager/system-connections
Feb 23 17:50:12 ip-10-0-136-68 configure-ovs.sh[1307]: + nm_config_changed=0
Feb 23 17:50:12 ip-10-0-136-68 configure-ovs.sh[1307]: + MANAGED_NM_CONN_SUFFIX=-slave-ovs-clone
Feb 23 17:50:12 ip-10-0-136-68 configure-ovs.sh[1307]: + BRIDGE_METRIC=48
Feb 23 17:50:12 ip-10-0-136-68 configure-ovs.sh[1307]: + BRIDGE1_METRIC=49
Feb 23 17:50:12 ip-10-0-136-68 configure-ovs.sh[1307]: + trap handle_exit EXIT
Feb 23 17:50:12 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' /run/NetworkManager/system-connections '!=' /etc/NetworkManager/system-connections ']'
Feb 23 17:50:12 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' /run/NetworkManager/system-connections '!=' /run/NetworkManager/system-connections ']'
Feb 23 17:50:12 ip-10-0-136-68 configure-ovs.sh[1307]: + systemctl -q is-enabled mtu-migration
Feb 23 17:50:12 ip-10-0-136-68 configure-ovs.sh[1307]: + echo 'Cleaning up left over mtu migration configuration'
Feb 23 17:50:12 ip-10-0-136-68 configure-ovs.sh[1307]: Cleaning up left over mtu migration configuration
Feb 23 17:50:12 ip-10-0-136-68 configure-ovs.sh[1307]: + rm -rf /etc/cno/mtu-migration
Feb 23 17:50:12 ip-10-0-136-68 ovs-vswitchd[1145]: ovs|00001|ofproto_dpif_xlate(handler1)|WARN|Invalid Geneve tunnel metadata on bridge br-int while processing arp,in_port=2,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=ff:ff:ff:ff:ff:ff,arp_spa=10.129.2.1,arp_tpa=10.129.2.5,arp_op=1,arp_sha=0a:58:0a:81:02:01,arp_tha=00:00:00:00:00:00
Feb 23 17:50:12 ip-10-0-136-68 systemctl[1310]: Failed to get unit file state for mtu-migration.service: No such file or directory
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1316]: + rpm -qa
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1317]: + grep -q openvswitch
Feb 23 17:50:13 ip-10-0-136-68 nm-dispatcher[1366]: Error: Device '' not found.
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + print_state
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + echo 'Current device, connection, interface and routing state:'
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: Current device, connection, interface and routing state:
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1371]: + grep -v unmanaged
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1370]: + nmcli -g all device
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1371]: ens5:ethernet:connected:full:full:/org/freedesktop/NetworkManager/Devices/4:Wired connection 1:eb99b8bd-8e1f-3f41-845b-962703e428f7:/org/freedesktop/NetworkManager/ActiveConnection/2
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1371]: lo:loopback:connected (externally):limited:limited:/org/freedesktop/NetworkManager/Devices/1:lo:749f9974-6e7f-442a-ac13-546c37530197:/org/freedesktop/NetworkManager/ActiveConnection/1
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + nmcli -g all connection
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1375]: Wired connection 1:eb99b8bd-8e1f-3f41-845b-962703e428f7:802-3-ethernet:1677174612:Thu Feb 23 17\:50\:12 2023:yes:-999:no:/org/freedesktop/NetworkManager/Settings/2:yes:ens5:activated:/org/freedesktop/NetworkManager/ActiveConnection/2::/run/NetworkManager/system-connections/Wired connection 1.nmconnection
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1375]: lo:749f9974-6e7f-442a-ac13-546c37530197:loopback:1677174612:Thu Feb 23 17\:50\:12 2023:no:0:no:/org/freedesktop/NetworkManager/Settings/1:yes:lo:activated:/org/freedesktop/NetworkManager/ActiveConnection/1::/run/NetworkManager/system-connections/lo.nmconnection
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + ip -d address show
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1379]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1379]: link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0 minmtu 0 maxmtu 0 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 tso_max_size 524280 tso_max_segs 65535 gro_max_size 65536
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1379]: inet 127.0.0.1/8 scope host lo
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1379]: valid_lft forever preferred_lft forever
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1379]: inet6 ::1/128 scope host
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1379]: valid_lft forever preferred_lft forever
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1379]: 2: ens5: mtu 9001 qdisc mq state UP group default qlen 1000
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1379]: link/ether 02:ea:92:f9:d3:f3 brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 128 maxmtu 9216 numtxqueues 4 numrxqueues 4 gso_max_size 65536 gso_max_segs 65535 tso_max_size 65536 tso_max_segs 65535 gro_max_size 65536 parentbus pci parentdev 0000:00:05.0
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1379]: altname enp0s5
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1379]: inet 10.0.136.68/19 brd 10.0.159.255 scope global dynamic noprefixroute ens5
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1379]: valid_lft 3600sec preferred_lft 3600sec
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1379]: inet6 fe80::c8e8:d07:4fa0:2dbc/64 scope link tentative noprefixroute
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1379]: valid_lft forever preferred_lft forever
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1379]: 3: ovs-system: mtu 1500 qdisc noop state DOWN group default qlen 1000
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1379]: link/ether ee:b5:a6:bc:8d:2c brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65535
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1379]: openvswitch numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 tso_max_size 65536 tso_max_segs 65535 gro_max_size 65536
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1379]: 4: ovn-k8s-mp0: mtu 8901 qdisc noop state DOWN group default qlen 1000
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1379]: link/ether 2e:5d:2b:01:25:48 brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65535
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1379]: openvswitch numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 tso_max_size 65536 tso_max_segs 65535 gro_max_size 65536
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1379]: 5: genev_sys_6081: mtu 65000 qdisc noqueue master ovs-system state UNKNOWN group default qlen 1000
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1379]: link/ether 3e:ed:17:7f:65:44 brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65465
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1379]: geneve external id 0 ttl auto dstport 6081 udp6zerocsumrx
Feb 23 17:50:13 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-opaque\x2dbug\x2dcheck1848770488-merged.mount: Deactivated successfully.
Feb 23 17:50:13 ip-10-0-136-68 ovs-vsctl[1400]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 --if-exists del-br br-ex
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1379]: openvswitch_slave numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 tso_max_size 65536 tso_max_segs 65535 gro_max_size 65536
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1379]: inet6 fe80::3ced:17ff:fe7f:6544/64 scope link tentative
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1379]: valid_lft forever preferred_lft forever
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1379]: 6: br-int: mtu 8901 qdisc noop state DOWN group default qlen 1000
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1379]: link/ether 1e:70:f2:fd:64:95 brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65535
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1379]: openvswitch numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 tso_max_size 65536 tso_max_segs 65535 gro_max_size 65536
Feb 23 17:50:13 ip-10-0-136-68 NetworkManager[1177]: [1677174613.3630] ovs: ovs interface "patch-br-int-to-br-ex_ip-10-0-136-68.us-west-2.compute.internal" ((null)) failed: No usable peer 'patch-br-ex_ip-10-0-136-68.us-west-2.compute.internal-to-br-int' exists in 'system' datapath.
Feb 23 17:50:13 ip-10-0-136-68 ovs-vswitchd[1145]: ovs|00055|bridge|INFO|bridge br-ex: deleted interface patch-br-ex_ip-10-0-136-68.us-west-2.compute.internal-to-br-int on port 2
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + ip route show
Feb 23 17:50:13 ip-10-0-136-68 ovs-vsctl[1409]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 --if-exists del-br br-ex1
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1380]: default via 10.0.128.1 dev ens5 proto dhcp src 10.0.136.68 metric 100
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1380]: 10.0.128.0/19 dev ens5 proto kernel scope link src 10.0.136.68 metric 100
Feb 23 17:50:13 ip-10-0-136-68 ovs-vsctl[1451]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 --if-exists del-br br-ex
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + ip -6 route show
Feb 23 17:50:13 ip-10-0-136-68 ovs-vsctl[1454]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 --if-exists del-br br-ex1
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1381]: ::1 dev lo proto kernel metric 256 pref medium
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1381]: fe80::/64 dev genev_sys_6081 proto kernel metric 256 pref medium
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1381]: fe80::/64 dev ens5 proto kernel metric 1024 pref medium
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' OVNKubernetes == OVNKubernetes ']'
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + ovnk_config_dir=/etc/ovnk
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + ovnk_var_dir=/var/lib/ovnk
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + extra_bridge_file=/etc/ovnk/extra_bridge
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + iface_default_hint_file=/var/lib/ovnk/iface_default_hint
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + ip_hint_file=/run/nodeip-configuration/primary-ip
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + mkdir -p /etc/ovnk
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + mkdir -p /var/lib/ovnk
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1384]: ++ get_iface_default_hint /var/lib/ovnk/iface_default_hint
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1384]: ++ local iface_default_hint_file=/var/lib/ovnk/iface_default_hint
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1384]: ++ '[' -f /var/lib/ovnk/iface_default_hint ']'
Feb 23 17:50:13 ip-10-0-136-68 NetworkManager[1177]: [1677174613.5573] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/20)
Feb 23 17:50:13 ip-10-0-136-68 ovs-vsctl[1481]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 --if-exists del-br br-ex
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1385]: +++ cat /var/lib/ovnk/iface_default_hint
Feb 23 17:50:13 ip-10-0-136-68 NetworkManager[1177]: [1677174613.5574] audit: op="connection-add" uuid="13489ac6-b2bc-4cc7-8035-a6c6f3ece4df" name="br-ex" pid=1482 uid=0 result="success"
Feb 23 17:50:13 ip-10-0-136-68 ovs-vsctl[1490]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 --if-exists del-port br-ex ens5
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1384]: ++ local iface_default_hint=ens5
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1384]: ++ '[' ens5 '!=' '' ']'
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1384]: ++ '[' ens5 '!=' br-ex ']'
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1384]: ++ '[' ens5 '!=' br-ex1 ']'
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1384]: ++ '[' -d /sys/class/net/ens5 ']'
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1384]: ++ echo ens5
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1384]: ++ return
Feb 23 17:50:13 ip-10-0-136-68 NetworkManager[1177]: [1677174613.5925] manager: (ens5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Feb 23 17:50:13 ip-10-0-136-68 ovs-vswitchd[1145]: ovs|00001|ofproto_dpif_xlate(handler8)|WARN|Invalid Geneve tunnel metadata on bridge br-int while processing tcp,in_port=2,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:04,nw_src=10.131.0.33,nw_dst=10.129.2.4,nw_tos=0,nw_ecn=0,nw_ttl=63,nw_frag=no,tp_src=46790,tp_dst=9154,tcp_flags=syn
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + iface_default_hint=ens5
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' ens5 == '' ']'
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /var/lib/ovnk/iface_default_hint ']'
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/ovnk/extra_bridge ']'
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' '!' -f /run/configure-ovs-boot-done ']'
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + echo 'Running on boot, restoring previous configuration before proceeding...'
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: Running on boot, restoring previous configuration before proceeding...
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + rollback_nm
Feb 23 17:50:13 ip-10-0-136-68 NetworkManager[1177]: [1677174613.5926] audit: op="connection-add" uuid="10b83248-bb63-4adb-953a-b09ec4a7297d" name="ovs-port-phys0" pid=1491 uid=0 result="success"
Feb 23 17:50:13 ip-10-0-136-68 ovs-vsctl[1499]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 --if-exists del-port br-ex br-ex
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1386]: ++ get_bridge_physical_interface ovs-if-phys0
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1386]: ++ local bridge_interface=ovs-if-phys0
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1386]: ++ local physical_interface=
Feb 23 17:50:13 ip-10-0-136-68 NetworkManager[1177]: [1677174613.6341] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1387]: +++ nmcli -g connection.interface-name conn show ovs-if-phys0
Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1387]: +++ echo ''
Feb 23 17:50:13 ip-10-0-136-68 NetworkManager[1177]:
[1677174613.6343] audit: op="connection-add" uuid="89e5f9b7-0e1b-4e0e-b6ba-2b58e2193d26" name="ovs-port-br-ex" pid=1500 uid=0 result="success" Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1386]: ++ physical_interface= Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1386]: ++ echo '' Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + phys0= Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1392]: ++ get_bridge_physical_interface ovs-if-phys1 Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1392]: ++ local bridge_interface=ovs-if-phys1 Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1392]: ++ local physical_interface= Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1393]: +++ nmcli -g connection.interface-name conn show ovs-if-phys1 Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1393]: +++ echo '' Feb 23 17:50:13 ip-10-0-136-68 ovs-vsctl[1520]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 --if-exists destroy interface ens5 Feb 23 17:50:13 ip-10-0-136-68 NetworkManager[1177]: [1677174613.7166] audit: op="connection-add" uuid="7d3a1e16-f5ff-43fb-83ee-ec795d0a7e41" name="ovs-if-phys0" pid=1521 uid=0 result="success" Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1392]: ++ physical_interface= Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1392]: ++ echo '' Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + phys1= Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + remove_all_ovn_bridges Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + echo 'Reverting any previous OVS configuration' Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: Reverting any previous OVS configuration Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + remove_ovn_bridges br-ex phys0 Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + bridge_name=br-ex Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + port_name=phys0 Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + update_nm_conn_conf_files br-ex phys0 Feb 23 17:50:13 ip-10-0-136-68 
configure-ovs.sh[1307]: + update_nm_conn_files_base /etc/NetworkManager/system-connections br-ex phys0 Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + base_path=/etc/NetworkManager/system-connections Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + bridge_name=br-ex Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + port_name=phys0 Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + ovs_port=ovs-port-br-ex Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + ovs_interface=ovs-if-br-ex Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + default_port_name=ovs-port-phys0 Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + bridge_interface_name=ovs-if-phys0 Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + MANAGED_NM_CONN_FILES=($(echo "${base_path}"/{"$bridge_name","$ovs_interface","$ovs_port","$bridge_interface_name","$default_port_name"}{,.nmconnection})) Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1398]: ++ echo /etc/NetworkManager/system-connections/br-ex /etc/NetworkManager/system-connections/br-ex.nmconnection /etc/NetworkManager/system-connections/ovs-if-br-ex /etc/NetworkManager/system-connections/ovs-if-br-ex.nmconnection /etc/NetworkManager/system-connections/ovs-port-br-ex /etc/NetworkManager/system-connections/ovs-port-br-ex.nmconnection /etc/NetworkManager/system-connections/ovs-if-phys0 /etc/NetworkManager/system-connections/ovs-if-phys0.nmconnection /etc/NetworkManager/system-connections/ovs-port-phys0 /etc/NetworkManager/system-connections/ovs-port-phys0.nmconnection Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + shopt -s nullglob Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + MANAGED_NM_CONN_FILES+=(${base_path}/*${MANAGED_NM_CONN_SUFFIX}.nmconnection ${base_path}/*${MANAGED_NM_CONN_SUFFIX}) Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + shopt -u nullglob Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + rm_nm_conn_files Feb 23 17:50:13 ip-10-0-136-68 
configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/br-ex ']' Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/br-ex.nmconnection ']' Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-br-ex ']' Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-br-ex.nmconnection ']' Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-br-ex ']' Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-br-ex.nmconnection ']' Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-phys0 ']' Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-phys0.nmconnection ']' Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-phys0 ']' Feb 23 17:50:13 
ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-phys0.nmconnection ']' Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + update_nm_conn_set_files br-ex phys0 Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + update_nm_conn_files_base /run/NetworkManager/system-connections br-ex phys0 Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + base_path=/run/NetworkManager/system-connections Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + bridge_name=br-ex Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + port_name=phys0 Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + ovs_port=ovs-port-br-ex Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + ovs_interface=ovs-if-br-ex Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + default_port_name=ovs-port-phys0 Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + bridge_interface_name=ovs-if-phys0 Feb 23 17:50:13 ip-10-0-136-68 configure-ovs.sh[1307]: + MANAGED_NM_CONN_FILES=($(echo "${base_path}"/{"$bridge_name","$ovs_interface","$ovs_port","$bridge_interface_name","$default_port_name"}{,.nmconnection})) Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1399]: ++ echo /run/NetworkManager/system-connections/br-ex /run/NetworkManager/system-connections/br-ex.nmconnection /run/NetworkManager/system-connections/ovs-if-br-ex /run/NetworkManager/system-connections/ovs-if-br-ex.nmconnection /run/NetworkManager/system-connections/ovs-port-br-ex /run/NetworkManager/system-connections/ovs-port-br-ex.nmconnection /run/NetworkManager/system-connections/ovs-if-phys0 /run/NetworkManager/system-connections/ovs-if-phys0.nmconnection /run/NetworkManager/system-connections/ovs-port-phys0 /run/NetworkManager/system-connections/ovs-port-phys0.nmconnection Feb 23 17:50:13 ip-10-0-136-68 ovs-vswitchd[1145]: 
ovs|00002|ofproto_dpif_xlate(handler8)|WARN|Invalid Geneve tunnel metadata on bridge br-int while processing arp,in_port=2,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=ff:ff:ff:ff:ff:ff,arp_spa=10.129.2.1,arp_tpa=10.129.2.21,arp_op=1,arp_sha=0a:58:0a:81:02:01,arp_tha=00:00:00:00:00:00 Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + shopt -s nullglob Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + MANAGED_NM_CONN_FILES+=(${base_path}/*${MANAGED_NM_CONN_SUFFIX}.nmconnection ${base_path}/*${MANAGED_NM_CONN_SUFFIX}) Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + shopt -u nullglob Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + rm_nm_conn_files Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /run/NetworkManager/system-connections/br-ex ']' Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /run/NetworkManager/system-connections/br-ex.nmconnection ']' Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /run/NetworkManager/system-connections/ovs-if-br-ex ']' Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /run/NetworkManager/system-connections/ovs-if-br-ex.nmconnection ']' Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /run/NetworkManager/system-connections/ovs-port-br-ex ']' Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f 
/run/NetworkManager/system-connections/ovs-port-br-ex.nmconnection ']' Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /run/NetworkManager/system-connections/ovs-if-phys0 ']' Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /run/NetworkManager/system-connections/ovs-if-phys0.nmconnection ']' Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /run/NetworkManager/system-connections/ovs-port-phys0 ']' Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /run/NetworkManager/system-connections/ovs-port-phys0.nmconnection ']' Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + ovs-vsctl --timeout=30 --if-exists del-br br-ex Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + remove_ovn_bridges br-ex1 phys1 Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + bridge_name=br-ex1 Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + port_name=phys1 Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + update_nm_conn_conf_files br-ex1 phys1 Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + update_nm_conn_files_base /etc/NetworkManager/system-connections br-ex1 phys1 Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + base_path=/etc/NetworkManager/system-connections Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + bridge_name=br-ex1 Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + port_name=phys1 Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + ovs_port=ovs-port-br-ex1 Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + ovs_interface=ovs-if-br-ex1 Feb 23 17:50:14 
ip-10-0-136-68 configure-ovs.sh[1307]: + default_port_name=ovs-port-phys1 Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + bridge_interface_name=ovs-if-phys1 Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + MANAGED_NM_CONN_FILES=($(echo "${base_path}"/{"$bridge_name","$ovs_interface","$ovs_port","$bridge_interface_name","$default_port_name"}{,.nmconnection})) Feb 23 17:50:14 ip-10-0-136-68 kernel: device ens5 entered promiscuous mode Feb 23 17:50:13 ip-10-0-136-68 ovs-vsctl[1601]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 --if-exists destroy interface br-ex Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.0824] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/23) Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1407]: ++ echo /etc/NetworkManager/system-connections/br-ex1 /etc/NetworkManager/system-connections/br-ex1.nmconnection /etc/NetworkManager/system-connections/ovs-if-br-ex1 /etc/NetworkManager/system-connections/ovs-if-br-ex1.nmconnection /etc/NetworkManager/system-connections/ovs-port-br-ex1 /etc/NetworkManager/system-connections/ovs-port-br-ex1.nmconnection /etc/NetworkManager/system-connections/ovs-if-phys1 /etc/NetworkManager/system-connections/ovs-if-phys1.nmconnection /etc/NetworkManager/system-connections/ovs-port-phys1 /etc/NetworkManager/system-connections/ovs-port-phys1.nmconnection Feb 23 17:50:14 ip-10-0-136-68 nm-dispatcher[1677]: Error: Device '' not found. 
Feb 23 17:50:14 ip-10-0-136-68 ovs-vsctl[1637]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 --if-exists del-br br0 Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.0825] audit: op="connection-add" uuid="ed5e11f6-e938-4c92-9d73-c35d5035e9f5" name="ovs-if-br-ex" pid=1620 uid=0 result="success" Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + shopt -s nullglob Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + MANAGED_NM_CONN_FILES+=(${base_path}/*${MANAGED_NM_CONN_SUFFIX}.nmconnection ${base_path}/*${MANAGED_NM_CONN_SUFFIX}) Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + shopt -u nullglob Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + rm_nm_conn_files Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/br-ex1 ']' Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/br-ex1.nmconnection ']' Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-br-ex1 ']' Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-br-ex1.nmconnection ']' Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-br-ex1 ']' Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f 
/etc/NetworkManager/system-connections/ovs-port-br-ex1.nmconnection ']' Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-phys1 ']' Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-phys1.nmconnection ']' Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-phys1 ']' Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-phys1.nmconnection ']' Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + update_nm_conn_set_files br-ex1 phys1 Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + update_nm_conn_files_base /run/NetworkManager/system-connections br-ex1 phys1 Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + base_path=/run/NetworkManager/system-connections Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + bridge_name=br-ex1 Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + port_name=phys1 Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + ovs_port=ovs-port-br-ex1 Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + ovs_interface=ovs-if-br-ex1 Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + default_port_name=ovs-port-phys1 Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + bridge_interface_name=ovs-if-phys1 Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + MANAGED_NM_CONN_FILES=($(echo 
"${base_path}"/{"$bridge_name","$ovs_interface","$ovs_port","$bridge_interface_name","$default_port_name"}{,.nmconnection})) Feb 23 17:50:14 ip-10-0-136-68 nm-dispatcher[1685]: + [[ OVNKubernetes != \O\V\N\K\u\b\e\r\n\e\t\e\s ]] Feb 23 17:50:14 ip-10-0-136-68 nm-dispatcher[1685]: + INTERFACE_NAME=br-ex Feb 23 17:50:14 ip-10-0-136-68 nm-dispatcher[1685]: + OPERATION=pre-up Feb 23 17:50:14 ip-10-0-136-68 nm-dispatcher[1685]: + '[' pre-up '!=' pre-up ']' Feb 23 17:50:14 ip-10-0-136-68 ovs-vswitchd[1145]: ovs|00056|bridge|INFO|bridge br-ex: added interface ens5 on port 1 Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.2233] agent-manager: agent[05b8fa65b1dc5c36,:1.74/nmcli-connect/0]: agent registered Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1408]: ++ echo /run/NetworkManager/system-connections/br-ex1 /run/NetworkManager/system-connections/br-ex1.nmconnection /run/NetworkManager/system-connections/ovs-if-br-ex1 /run/NetworkManager/system-connections/ovs-if-br-ex1.nmconnection /run/NetworkManager/system-connections/ovs-port-br-ex1 /run/NetworkManager/system-connections/ovs-port-br-ex1.nmconnection /run/NetworkManager/system-connections/ovs-if-phys1 /run/NetworkManager/system-connections/ovs-if-phys1.nmconnection /run/NetworkManager/system-connections/ovs-port-phys1 /run/NetworkManager/system-connections/ovs-port-phys1.nmconnection Feb 23 17:50:14 ip-10-0-136-68 nm-dispatcher[1687]: ++ nmcli -t -f device,type,uuid conn Feb 23 17:50:14 ip-10-0-136-68 ovs-vswitchd[1145]: ovs|00057|bridge|INFO|bridge br-ex: using datapath ID 0000f63aa5df6e47 Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.2239] device (br-ex): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external') Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + shopt -s nullglob Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + MANAGED_NM_CONN_FILES+=(${base_path}/*${MANAGED_NM_CONN_SUFFIX}.nmconnection 
${base_path}/*${MANAGED_NM_CONN_SUFFIX}) Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + shopt -u nullglob Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + rm_nm_conn_files Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /run/NetworkManager/system-connections/br-ex1 ']' Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /run/NetworkManager/system-connections/br-ex1.nmconnection ']' Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /run/NetworkManager/system-connections/ovs-if-br-ex1 ']' Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /run/NetworkManager/system-connections/ovs-if-br-ex1.nmconnection ']' Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /run/NetworkManager/system-connections/ovs-port-br-ex1 ']' Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /run/NetworkManager/system-connections/ovs-port-br-ex1.nmconnection ']' Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /run/NetworkManager/system-connections/ovs-if-phys1 ']' Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /run/NetworkManager/system-connections/ovs-if-phys1.nmconnection ']' Feb 23 
17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /run/NetworkManager/system-connections/ovs-port-phys1 ']' Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /run/NetworkManager/system-connections/ovs-port-phys1.nmconnection ']' Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + ovs-vsctl --timeout=30 --if-exists del-br br-ex1 Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + echo 'OVS configuration successfully reverted' Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: OVS configuration successfully reverted Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + reload_profiles_nm '' '' Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' 0 -eq 0 ']' Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + return Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + print_state Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + echo 'Current device, connection, interface and routing state:' Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: Current device, connection, interface and routing state: Feb 23 17:50:14 ip-10-0-136-68 kernel: device ens5 left promiscuous mode Feb 23 17:50:14 ip-10-0-136-68 kernel: device ens5 entered promiscuous mode Feb 23 17:50:14 ip-10-0-136-68 nm-dispatcher[1688]: ++ awk -F : '{if($1=="br-ex" && $2!~/^ovs*/) print $NF}' Feb 23 17:50:14 ip-10-0-136-68 ovs-vswitchd[1145]: ovs|00058|connmgr|INFO|br-ex: added service controller "punix:/var/run/openvswitch/br-ex.mgmt" Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.2245] device (br-ex): state change: unavailable -> disconnected (reason 'user-requested', sys-iface-state: 'managed') Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1410]: + nmcli -g all device Feb 23 17:50:14 ip-10-0-136-68 nm-dispatcher[1685]: + 
INTERFACE_CONNECTION_UUID= Feb 23 17:50:14 ip-10-0-136-68 nm-dispatcher[1685]: + '[' '' == '' ']' Feb 23 17:50:14 ip-10-0-136-68 nm-dispatcher[1685]: + exit 0 Feb 23 17:50:14 ip-10-0-136-68 ovs-vsctl[1735]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set Interface ens5 ofport_request=1 Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.2248] device (br-ex): Activation: starting connection 'br-ex' (13489ac6-b2bc-4cc7-8035-a6c6f3ece4df) Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1411]: + grep -v unmanaged Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1411]: ens5:ethernet:connected:full:full:/org/freedesktop/NetworkManager/Devices/4:Wired connection 1:eb99b8bd-8e1f-3f41-845b-962703e428f7:/org/freedesktop/NetworkManager/ActiveConnection/2 Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1411]: lo:loopback:connected (externally):limited:limited:/org/freedesktop/NetworkManager/Devices/1:lo:749f9974-6e7f-442a-ac13-546c37530197:/org/freedesktop/NetworkManager/ActiveConnection/1 Feb 23 17:50:14 ip-10-0-136-68 nm-dispatcher[1692]: + [[ OVNKubernetes != \O\V\N\K\u\b\e\r\n\e\t\e\s ]] Feb 23 17:50:14 ip-10-0-136-68 nm-dispatcher[1692]: + INTERFACE_NAME=ens5 Feb 23 17:50:14 ip-10-0-136-68 nm-dispatcher[1692]: + OPERATION=pre-up Feb 23 17:50:14 ip-10-0-136-68 nm-dispatcher[1692]: + '[' pre-up '!=' pre-up ']' Feb 23 17:50:14 ip-10-0-136-68 chronyd[1040]: Source 169.254.169.123 offline Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.2248] audit: op="connection-activate" uuid="13489ac6-b2bc-4cc7-8035-a6c6f3ece4df" name="br-ex" pid=1663 uid=0 result="success" Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + nmcli -g all connection Feb 23 17:50:14 ip-10-0-136-68 nm-dispatcher[1695]: ++ nmcli -t -f device,type,uuid conn Feb 23 17:50:14 ip-10-0-136-68 ovs-vswitchd[1145]: ovs|00059|bridge|INFO|bridge br-ex: deleted interface ens5 on port 1 Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.2250] device (br-ex): state change: 
unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external') Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1415]: Wired connection 1:eb99b8bd-8e1f-3f41-845b-962703e428f7:802-3-ethernet:1677174612:Thu Feb 23 17\:50\:12 2023:yes:-999:no:/org/freedesktop/NetworkManager/Settings/2:yes:ens5:activated:/org/freedesktop/NetworkManager/ActiveConnection/2::/run/NetworkManager/system-connections/Wired connection 1.nmconnection Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1415]: lo:749f9974-6e7f-442a-ac13-546c37530197:loopback:1677174612:Thu Feb 23 17\:50\:12 2023:no:0:no:/org/freedesktop/NetworkManager/Settings/1:yes:lo:activated:/org/freedesktop/NetworkManager/ActiveConnection/1::/run/NetworkManager/system-connections/lo.nmconnection Feb 23 17:50:14 ip-10-0-136-68 nm-dispatcher[1696]: ++ awk -F : '{if($1=="ens5" && $2!~/^ovs*/) print $NF}' Feb 23 17:50:14 ip-10-0-136-68 ovs-vswitchd[1145]: ovs|00060|bridge|INFO|bridge br-ex: added interface ens5 on port 1 Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.2253] device (br-ex): state change: unavailable -> disconnected (reason 'user-requested', sys-iface-state: 'managed') Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + ip -d address show Feb 23 17:50:14 ip-10-0-136-68 nm-dispatcher[1692]: + INTERFACE_CONNECTION_UUID=7d3a1e16-f5ff-43fb-83ee-ec795d0a7e41 Feb 23 17:50:14 ip-10-0-136-68 nm-dispatcher[1692]: + '[' 7d3a1e16-f5ff-43fb-83ee-ec795d0a7e41 == '' ']' Feb 23 17:50:14 ip-10-0-136-68 ovs-vswitchd[1145]: ovs|00061|bridge|INFO|bridge br-ex: using datapath ID 00007a1625de4045 Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.2256] device (br-ex): Activation: starting connection 'ovs-port-br-ex' (89e5f9b7-0e1b-4e0e-b6ba-2b58e2193d26) Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1419]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1419]: link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 
0 minmtu 0 maxmtu 0 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 tso_max_size 524280 tso_max_segs 65535 gro_max_size 65536
Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1419]: inet 127.0.0.1/8 scope host lo
Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1419]: valid_lft forever preferred_lft forever
Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1419]: inet6 ::1/128 scope host
Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1419]: valid_lft forever preferred_lft forever
Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1419]: 2: ens5: mtu 9001 qdisc mq state UP group default qlen 1000
Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1419]: link/ether 02:ea:92:f9:d3:f3 brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 128 maxmtu 9216 numtxqueues 4 numrxqueues 4 gso_max_size 65536 gso_max_segs 65535 tso_max_size 65536 tso_max_segs 65535 gro_max_size 65536 parentbus pci parentdev 0000:00:05.0
Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1419]: altname enp0s5
Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1419]: inet 10.0.136.68/19 brd 10.0.159.255 scope global dynamic noprefixroute ens5
Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1419]: valid_lft 3600sec preferred_lft 3600sec
Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1419]: inet6 fe80::c8e8:d07:4fa0:2dbc/64 scope link tentative noprefixroute
Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1419]: valid_lft forever preferred_lft forever
Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1419]: 3: ovs-system: mtu 1500 qdisc noop state DOWN group default qlen 1000
Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1419]: link/ether ee:b5:a6:bc:8d:2c brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65535
Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1419]: openvswitch numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 tso_max_size 65536 tso_max_segs 65535 gro_max_size 65536
Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1419]: 4: ovn-k8s-mp0: mtu 8901 qdisc noop state DOWN group default qlen 1000
Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1419]: link/ether 2e:5d:2b:01:25:48 brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65535
Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1419]: openvswitch numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 tso_max_size 65536 tso_max_segs 65535 gro_max_size 65536
Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1419]: 5: genev_sys_6081: mtu 65000 qdisc noqueue master ovs-system state UNKNOWN group default qlen 1000
Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1419]: link/ether 3e:ed:17:7f:65:44 brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65465
Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1419]: geneve external id 0 ttl auto dstport 6081 udp6zerocsumrx
Feb 23 17:50:14 ip-10-0-136-68 nm-dispatcher[1703]: ++ nmcli -t -f connection.slave-type conn show 7d3a1e16-f5ff-43fb-83ee-ec795d0a7e41
Feb 23 17:50:14 ip-10-0-136-68 ovs-vswitchd[1145]: ovs|00062|connmgr|INFO|br-ex: added service controller "punix:/var/run/openvswitch/br-ex.mgmt"
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.2258] device (ens5): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1419]: openvswitch_slave numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 tso_max_size 65536 tso_max_segs 65535 gro_max_size 65536
Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1419]: inet6 fe80::3ced:17ff:fe7f:6544/64 scope link tentative
Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1419]: valid_lft forever preferred_lft forever
Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1419]: 6: br-int: mtu 8901 qdisc noop state DOWN group default qlen 1000
Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1419]: link/ether 1e:70:f2:fd:64:95 brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65535
Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1419]: openvswitch numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 tso_max_size 65536 tso_max_segs 65535 gro_max_size 65536
Feb 23 17:50:14 ip-10-0-136-68 nm-dispatcher[1704]: ++ awk -F : '{print $NF}'
Feb 23 17:50:14 ip-10-0-136-68 ovs-vsctl[1830]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set Interface ens5 ofport_request=1
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.2262] device (ens5): state change: unavailable -> disconnected (reason 'user-requested', sys-iface-state: 'managed')
Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + ip route show
Feb 23 17:50:14 ip-10-0-136-68 nm-dispatcher[1692]: + INTERFACE_OVS_SLAVE_TYPE=ovs-port
Feb 23 17:50:14 ip-10-0-136-68 nm-dispatcher[1692]: + '[' ovs-port '!=' ovs-port ']'
Feb 23 17:50:14 ip-10-0-136-68 ovs-vsctl[1965]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set Interface ens5 ofport_request=1
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.2265] device (ens5): Activation: starting connection 'ovs-port-phys0' (10b83248-bb63-4adb-953a-b09ec4a7297d)
Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1420]: default via 10.0.128.1 dev ens5 proto dhcp src 10.0.136.68 metric 100
Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1420]: 10.0.128.0/19 dev ens5 proto kernel scope link src 10.0.136.68 metric 100
Feb 23 17:50:14 ip-10-0-136-68 nm-dispatcher[1709]: ++ nmcli -t -f connection.master conn show 7d3a1e16-f5ff-43fb-83ee-ec795d0a7e41
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.2266] device (br-ex): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + ip -6 route show
Feb 23 17:50:14 ip-10-0-136-68 nm-dispatcher[1710]: ++ awk -F : '{print $NF}'
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.2268] device (br-ex): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1421]: ::1 dev lo proto kernel metric 256 pref medium
Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1421]: fe80::/64 dev genev_sys_6081 proto kernel metric 256 pref medium
Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1421]: fe80::/64 dev ens5 proto kernel metric 1024 pref medium
Feb 23 17:50:14 ip-10-0-136-68 nm-dispatcher[1692]: + PORT=10b83248-bb63-4adb-953a-b09ec4a7297d
Feb 23 17:50:14 ip-10-0-136-68 nm-dispatcher[1692]: + '[' 10b83248-bb63-4adb-953a-b09ec4a7297d == '' ']'
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.2270] device (br-ex): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1307]: + touch /run/configure-ovs-boot-done
Feb 23 17:50:14 ip-10-0-136-68 nm-dispatcher[1715]: ++ nmcli -t -f device,type,uuid conn
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.2271] device (br-ex): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1423]: ++ get_nodeip_interface /var/lib/ovnk/iface_default_hint /etc/ovnk/extra_bridge /run/nodeip-configuration/primary-ip
Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1423]: ++ local iface=
Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1423]: ++ local counter=0
Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1423]: ++ local iface_default_hint_file=/var/lib/ovnk/iface_default_hint
Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1423]: ++ local extra_bridge_file=/etc/ovnk/extra_bridge
Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1423]: ++ local ip_hint_file=/run/nodeip-configuration/primary-ip
Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1423]: ++ local extra_bridge=
Feb 23 17:50:14 ip-10-0-136-68 configure-ovs.sh[1423]: ++ '[' -f /etc/ovnk/extra_bridge ']'
Feb 23 17:50:15 ip-10-0-136-68 kernel: device br-ex entered promiscuous mode
Feb 23 17:50:15 ip-10-0-136-68 nm-dispatcher[1716]: ++ awk -F : '{if( ($1=="10b83248-bb63-4adb-953a-b09ec4a7297d" || $3=="10b83248-bb63-4adb-953a-b09ec4a7297d") && $2~/^ovs*/) print $NF}'
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.2274] device (br-ex): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Feb 23 17:50:14 ip-10-0-136-68 ovs-vswitchd[1145]: ovs|00063|netdev|WARN|failed to set MTU for network device br-ex: No such device
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1424]: +++ get_nodeip_hint_interface /run/nodeip-configuration/primary-ip ''
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1424]: +++ local ip_hint=
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1424]: +++ local ip_hint_file=/run/nodeip-configuration/primary-ip
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1424]: +++ local extra_bridge=
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1424]: +++ local iface=
Feb 23 17:50:15 ip-10-0-136-68 nm-dispatcher[1692]: + PORT_CONNECTION_UUID=10b83248-bb63-4adb-953a-b09ec4a7297d
Feb 23 17:50:15 ip-10-0-136-68 nm-dispatcher[1692]: + '[' 10b83248-bb63-4adb-953a-b09ec4a7297d == '' ']'
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.2275] device (br-ex): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Feb 23 17:50:14 ip-10-0-136-68 ovs-vswitchd[1145]: ovs|00064|bridge|INFO|bridge br-ex: added interface br-ex on port 65534
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1425]: ++++ get_ip_from_ip_hint_file /run/nodeip-configuration/primary-ip
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1425]: ++++ local ip_hint_file=/run/nodeip-configuration/primary-ip
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1425]: ++++ [[ ! -f /run/nodeip-configuration/primary-ip ]]
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1425]: ++++ return
Feb 23 17:50:15 ip-10-0-136-68 nm-dispatcher[1721]: ++ nmcli -t -f connection.slave-type conn show 10b83248-bb63-4adb-953a-b09ec4a7297d
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.2275] device (br-ex): Activation: connection 'ovs-port-br-ex' enslaved, continuing activation
Feb 23 17:50:14 ip-10-0-136-68 ovs-vswitchd[1145]: ovs|00065|bridge|INFO|bridge br-ex: using datapath ID 000002ea92f9d3f3
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1424]: +++ ip_hint=
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1424]: +++ [[ -z '' ]]
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1424]: +++ return
Feb 23 17:50:15 ip-10-0-136-68 nm-dispatcher[1722]: ++ awk -F : '{print $NF}'
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.2277] device (ens5): disconnecting for new activation request.
Feb 23 17:50:15 ip-10-0-136-68 ovs-vswitchd[1145]: ovs|00001|ofproto_dpif_xlate(handler19)|WARN|Invalid Geneve tunnel metadata on bridge br-int while processing arp,in_port=2,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=ff:ff:ff:ff:ff:ff,arp_spa=10.129.2.1,arp_tpa=10.129.2.5,arp_op=1,arp_sha=0a:58:0a:81:02:01,arp_tha=00:00:00:00:00:00
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1423]: ++ iface=
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1423]: ++ [[ -n '' ]]
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1423]: ++ '[' 0 -lt 12 ']'
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1423]: ++ '[' '' '!=' '' ']'
Feb 23 17:50:15 ip-10-0-136-68 nm-dispatcher[1692]: + PORT_OVS_SLAVE_TYPE=ovs-bridge
Feb 23 17:50:15 ip-10-0-136-68 nm-dispatcher[1692]: + '[' ovs-bridge '!=' ovs-bridge ']'
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.2278] device (ens5): state change: activated -> deactivating (reason 'new-activation', sys-iface-state: 'managed')
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1427]: +++ ip route show default
Feb 23 17:50:15 ip-10-0-136-68 nm-dispatcher[1727]: ++ nmcli -t -f connection.master conn show 10b83248-bb63-4adb-953a-b09ec4a7297d
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.2279] manager: NetworkManager state is now CONNECTING
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1428]: +++ grep -v br-ex1
Feb 23 17:50:15 ip-10-0-136-68 nm-dispatcher[1728]: ++ awk -F : '{print $NF}'
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.2284] device (ens5): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1429]: +++ awk '{ if ($4 == "dev") { print $5; exit } }'
Feb 23 17:50:15 ip-10-0-136-68 nm-dispatcher[1692]: + BRIDGE_NAME=br-ex
Feb 23 17:50:15 ip-10-0-136-68 nm-dispatcher[1692]: + '[' br-ex '!=' br-ex ']'
Feb 23 17:50:15 ip-10-0-136-68 nm-dispatcher[1692]: + ovs-vsctl list interface ens5
Feb 23 17:50:15 ip-10-0-136-68 nm-dispatcher[1692]: + CONFIGURATION_FILE=/run/ofport_requests.br-ex
Feb 23 17:50:15 ip-10-0-136-68 nm-dispatcher[1692]: + declare -A INTERFACES
Feb 23 17:50:15 ip-10-0-136-68 nm-dispatcher[1692]: + '[' -f /run/ofport_requests.br-ex ']'
Feb 23 17:50:15 ip-10-0-136-68 nm-dispatcher[1692]: + '[' '' ']'
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.2287] device (ens5): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1423]: ++ iface=ens5
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1423]: ++ [[ -n ens5 ]]
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1423]: ++ break
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1423]: ++ '[' ens5 '!=' br-ex ']'
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1423]: ++ '[' ens5 '!=' br-ex1 ']'
Feb 23 17:50:15 ip-10-0-136-68 nm-dispatcher[1733]: ++ get_interface_ofport_request
Feb 23 17:50:15 ip-10-0-136-68 nm-dispatcher[1733]: ++ declare -A ofport_requests
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.2288] device (ens5): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Feb 23 17:50:15 ip-10-0-136-68 chronyd[1040]: Source 169.254.169.123 online
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1430]: +++ get_iface_default_hint /var/lib/ovnk/iface_default_hint
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1430]: +++ local iface_default_hint_file=/var/lib/ovnk/iface_default_hint
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1430]: +++ '[' -f /var/lib/ovnk/iface_default_hint ']'
Feb 23 17:50:15 ip-10-0-136-68 nm-dispatcher[1734]: +++ ovs-vsctl get Interface ens5 ofport
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.2288] device (ens5): Activation: connection 'ovs-port-phys0' enslaved, continuing activation
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1431]: ++++ cat /var/lib/ovnk/iface_default_hint
Feb 23 17:50:15 ip-10-0-136-68 nm-dispatcher[1733]: ++ local current_ofport=1
Feb 23 17:50:15 ip-10-0-136-68 nm-dispatcher[1733]: ++ '[' '' ']'
Feb 23 17:50:15 ip-10-0-136-68 nm-dispatcher[1733]: ++ echo 1
Feb 23 17:50:15 ip-10-0-136-68 nm-dispatcher[1733]: ++ return
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.2289] device (br-ex): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed')
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1430]: +++ local iface_default_hint=ens5
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1430]: +++ '[' ens5 '!=' '' ']'
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1430]: +++ '[' ens5 '!=' br-ex ']'
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1430]: +++ '[' ens5 '!=' br-ex1 ']'
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1430]: +++ '[' -d /sys/class/net/ens5 ']'
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1430]: +++ echo ens5
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1430]: +++ return
Feb 23 17:50:15 ip-10-0-136-68 nm-dispatcher[1692]: + INTERFACES[$INTERFACE_NAME]=1
Feb 23 17:50:15 ip-10-0-136-68 nm-dispatcher[1692]: + ovs-vsctl set Interface ens5 ofport_request=1
Feb 23 17:50:15 ip-10-0-136-68 nm-dispatcher[1692]: + declare -p INTERFACES
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.2291] device (ens5): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed')
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1423]: ++ iface_default_hint=ens5
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1423]: ++ '[' ens5 '!=' '' ']'
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1423]: ++ '[' ens5 '!=' ens5 ']'
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1423]: ++ '[' ens5 '!=' '' ']'
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1423]: ++ write_iface_default_hint /var/lib/ovnk/iface_default_hint ens5
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1423]: ++ local iface_default_hint_file=/var/lib/ovnk/iface_default_hint
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1423]: ++ local iface=ens5
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1423]: ++ echo ens5
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1423]: ++ echo ens5
Feb 23 17:50:15 ip-10-0-136-68 nm-dispatcher[1737]: + [[ OVNKubernetes != \O\V\N\K\u\b\e\r\n\e\t\e\s ]]
Feb 23 17:50:15 ip-10-0-136-68 nm-dispatcher[1737]: + INTERFACE_NAME=br-ex
Feb 23 17:50:15 ip-10-0-136-68 nm-dispatcher[1737]: + OPERATION=pre-up
Feb 23 17:50:15 ip-10-0-136-68 nm-dispatcher[1737]: + '[' pre-up '!=' pre-up ']'
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.2294] device (br-ex): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed')
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + iface=ens5
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' ens5 '!=' br-ex ']'
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/ovnk/extra_bridge ']'
Feb 23 17:50:15 ip-10-0-136-68 nm-dispatcher[1740]: ++ nmcli -t -f device,type,uuid conn
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.2327] device (ens5): state change: deactivating -> disconnected (reason 'new-activation', sys-iface-state: 'managed')
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1432]: ++ nmcli connection show --active br-ex
Feb 23 17:50:15 ip-10-0-136-68 nm-dispatcher[1741]: ++ awk -F : '{if($1=="br-ex" && $2!~/^ovs*/) print $NF}'
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.2452] dhcp4 (ens5): canceled DHCP transaction
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -z '' ']'
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + echo 'Bridge br-ex is not active, restoring previous configuration before proceeding...'
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: Bridge br-ex is not active, restoring previous configuration before proceeding...
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + rollback_nm
Feb 23 17:50:15 ip-10-0-136-68 nm-dispatcher[1737]: + INTERFACE_CONNECTION_UUID=
Feb 23 17:50:15 ip-10-0-136-68 nm-dispatcher[1737]: + '[' '' == '' ']'
Feb 23 17:50:15 ip-10-0-136-68 nm-dispatcher[1737]: + exit 0
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.2453] dhcp4 (ens5): activation: beginning transaction (timeout in 45 seconds)
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1437]: ++ get_bridge_physical_interface ovs-if-phys0
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1437]: ++ local bridge_interface=ovs-if-phys0
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1437]: ++ local physical_interface=
Feb 23 17:50:15 ip-10-0-136-68 nm-dispatcher[1774]: + [[ OVNKubernetes != \O\V\N\K\u\b\e\r\n\e\t\e\s ]]
Feb 23 17:50:15 ip-10-0-136-68 nm-dispatcher[1774]: + INTERFACE_NAME=ens5
Feb 23 17:50:15 ip-10-0-136-68 nm-dispatcher[1774]: + OPERATION=pre-up
Feb 23 17:50:15 ip-10-0-136-68 nm-dispatcher[1774]: + '[' pre-up '!=' pre-up ']'
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.2453] dhcp4 (ens5): state changed no lease
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1438]: +++ nmcli -g connection.interface-name conn show ovs-if-phys0
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1438]: +++ echo ''
Feb 23 17:50:15 ip-10-0-136-68 nm-dispatcher[1776]: ++ nmcli -t -f device,type,uuid conn
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.2561] device (ens5): Activation: starting connection 'ovs-if-phys0' (7d3a1e16-f5ff-43fb-83ee-ec795d0a7e41)
Feb 23 17:50:15 ip-10-0-136-68 mco-hostname[2146]: waiting for non-localhost hostname to be assigned
Feb 23 17:50:15 ip-10-0-136-68 mco-hostname[2146]: node identified as ip-10-0-136-68
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1437]: ++ physical_interface=
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1437]: ++ echo ''
Feb 23 17:50:15 ip-10-0-136-68 nm-dispatcher[1777]: ++ awk -F : '{if($1=="ens5" && $2!~/^ovs*/) print $NF}'
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.2585] ovs: ovs interface "patch-br-int-to-br-ex_ip-10-0-136-68.us-west-2.compute.internal" ((null)) failed: No usable peer 'patch-br-ex_ip-10-0-136-68.us-west-2.compute.internal-to-br-int' exists in 'system' datapath.
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + phys0=
Feb 23 17:50:15 ip-10-0-136-68 nm-dispatcher[1774]: + INTERFACE_CONNECTION_UUID=7d3a1e16-f5ff-43fb-83ee-ec795d0a7e41
Feb 23 17:50:15 ip-10-0-136-68 nm-dispatcher[1774]: + '[' 7d3a1e16-f5ff-43fb-83ee-ec795d0a7e41 == '' ']'
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.2589] device (ens5): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1443]: ++ get_bridge_physical_interface ovs-if-phys1
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1443]: ++ local bridge_interface=ovs-if-phys1
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1443]: ++ local physical_interface=
Feb 23 17:50:15 ip-10-0-136-68 nm-dispatcher[1786]: ++ nmcli -t -f connection.slave-type conn show 7d3a1e16-f5ff-43fb-83ee-ec795d0a7e41
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.2593] device (ens5): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Feb 23 17:50:15 ip-10-0-136-68 sm-notify[2153]: Version 2.5.4 starting
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1444]: +++ nmcli -g connection.interface-name conn show ovs-if-phys1
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1444]: +++ echo ''
Feb 23 17:50:15 ip-10-0-136-68 nm-dispatcher[1787]: ++ awk -F : '{print $NF}'
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.2596] device (ens5): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1443]: ++ physical_interface=
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1443]: ++ echo ''
Feb 23 17:50:15 ip-10-0-136-68 nm-dispatcher[1774]: + INTERFACE_OVS_SLAVE_TYPE=ovs-port
Feb 23 17:50:15 ip-10-0-136-68 nm-dispatcher[1774]: + '[' ovs-port '!=' ovs-port ']'
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.2605] device (ens5): Activation: connection 'ovs-if-phys0' enslaved, continuing activation
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + phys1=
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + remove_all_ovn_bridges
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + echo 'Reverting any previous OVS configuration'
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: Reverting any previous OVS configuration
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + remove_ovn_bridges br-ex phys0
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + bridge_name=br-ex
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + port_name=phys0
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + update_nm_conn_conf_files br-ex phys0
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + update_nm_conn_files_base /etc/NetworkManager/system-connections br-ex phys0
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + base_path=/etc/NetworkManager/system-connections
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + bridge_name=br-ex
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + port_name=phys0
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + ovs_port=ovs-port-br-ex
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + ovs_interface=ovs-if-br-ex
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + default_port_name=ovs-port-phys0
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + bridge_interface_name=ovs-if-phys0
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + MANAGED_NM_CONN_FILES=($(echo "${base_path}"/{"$bridge_name","$ovs_interface","$ovs_port","$bridge_interface_name","$default_port_name"}{,.nmconnection}))
Feb 23 17:50:15 ip-10-0-136-68 nm-dispatcher[1796]: ++ nmcli -t -f connection.master conn show 7d3a1e16-f5ff-43fb-83ee-ec795d0a7e41
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.2608] device (ens5): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed')
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1449]: ++ echo /etc/NetworkManager/system-connections/br-ex /etc/NetworkManager/system-connections/br-ex.nmconnection /etc/NetworkManager/system-connections/ovs-if-br-ex /etc/NetworkManager/system-connections/ovs-if-br-ex.nmconnection /etc/NetworkManager/system-connections/ovs-port-br-ex /etc/NetworkManager/system-connections/ovs-port-br-ex.nmconnection /etc/NetworkManager/system-connections/ovs-if-phys0 /etc/NetworkManager/system-connections/ovs-if-phys0.nmconnection /etc/NetworkManager/system-connections/ovs-port-phys0 /etc/NetworkManager/system-connections/ovs-port-phys0.nmconnection
Feb 23 17:50:15 ip-10-0-136-68 rpc.statd[2169]: Version 2.5.4 starting
Feb 23 17:50:15 ip-10-0-136-68 nm-dispatcher[1797]: ++ awk -F : '{print $NF}'
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.2660] ovs: ovs interface "patch-br-int-to-br-ex_ip-10-0-136-68.us-west-2.compute.internal" ((null)) failed: No usable peer 'patch-br-ex_ip-10-0-136-68.us-west-2.compute.internal-to-br-int' exists in 'system' datapath.
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + shopt -s nullglob
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + MANAGED_NM_CONN_FILES+=(${base_path}/*${MANAGED_NM_CONN_SUFFIX}.nmconnection ${base_path}/*${MANAGED_NM_CONN_SUFFIX})
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + shopt -u nullglob
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + rm_nm_conn_files
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/br-ex ']'
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/br-ex.nmconnection ']'
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-br-ex ']'
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-br-ex.nmconnection ']'
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-br-ex ']'
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-br-ex.nmconnection ']'
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-phys0 ']'
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-phys0.nmconnection ']'
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-phys0 ']'
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-phys0.nmconnection ']'
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + update_nm_conn_set_files br-ex phys0
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + update_nm_conn_files_base /run/NetworkManager/system-connections br-ex phys0
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + base_path=/run/NetworkManager/system-connections
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + bridge_name=br-ex
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + port_name=phys0
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + ovs_port=ovs-port-br-ex
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + ovs_interface=ovs-if-br-ex
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + default_port_name=ovs-port-phys0
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + bridge_interface_name=ovs-if-phys0
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + MANAGED_NM_CONN_FILES=($(echo "${base_path}"/{"$bridge_name","$ovs_interface","$ovs_port","$bridge_interface_name","$default_port_name"}{,.nmconnection}))
Feb 23 17:50:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:15.399270282Z" level=info msg="Starting CRI-O, version: 1.26.1-4.rhaos4.13.gita78722c.el9, git: unknown(clean)"
Feb 23 17:50:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:15.402491886Z" level=info msg="Node configuration value for hugetlb cgroup is true"
Feb 23 17:50:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:15.402504313Z" level=info msg="Node configuration value for pid cgroup is true"
Feb 23 17:50:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:15.402544474Z" level=info msg="Node configuration value for memoryswap cgroup is true"
Feb 23 17:50:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:15.402552271Z" level=info msg="Node configuration value for cgroup v2 is false"
Feb 23 17:50:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:15.410075167Z" level=info msg="Node configuration value for systemd CollectMode is true"
Feb 23 17:50:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:15.416828482Z" level=info msg="Node configuration value for systemd AllowedCPUs is true"
Feb 23 17:50:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:15.423214379Z" level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
Feb 23 17:50:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:15.503359728Z" level=info msg="Checkpoint/restore support disabled"
Feb 23 17:50:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:15.503386702Z" level=info msg="Using seccomp default profile when unspecified: true"
Feb 23 17:50:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:15.503406911Z" level=info msg="Using the internal default seccomp profile"
Feb 23 17:50:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:15.503412227Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
Feb 23 17:50:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:15.503417606Z" level=info msg="No blockio config file specified, blockio not configured"
Feb 23 17:50:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:15.503423458Z" level=info msg="RDT not available in the host system"
Feb 23 17:50:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:15.509169864Z" level=info msg="Conmon does support the --sync option"
Feb 23 17:50:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:15.509185346Z" level=info msg="Conmon does support the --log-global-size-max option"
Feb 23 17:50:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:15.510660374Z" level=info msg="Conmon does support the --sync option"
Feb 23 17:50:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:15.510668424Z" level=info msg="Conmon does support the --log-global-size-max option"
Feb 23 17:50:15 ip-10-0-136-68 rpc.statd[2169]: Flags: TI-RPC
Feb 23 17:50:15 ip-10-0-136-68 nm-dispatcher[1774]: + PORT=10b83248-bb63-4adb-953a-b09ec4a7297d
Feb 23 17:50:15 ip-10-0-136-68 nm-dispatcher[1774]: + '[' 10b83248-bb63-4adb-953a-b09ec4a7297d == '' ']'
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.2933] device (br-ex): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed')
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1450]: ++ echo /run/NetworkManager/system-connections/br-ex /run/NetworkManager/system-connections/br-ex.nmconnection /run/NetworkManager/system-connections/ovs-if-br-ex /run/NetworkManager/system-connections/ovs-if-br-ex.nmconnection /run/NetworkManager/system-connections/ovs-port-br-ex /run/NetworkManager/system-connections/ovs-port-br-ex.nmconnection /run/NetworkManager/system-connections/ovs-if-phys0 /run/NetworkManager/system-connections/ovs-if-phys0.nmconnection /run/NetworkManager/system-connections/ovs-port-phys0 /run/NetworkManager/system-connections/ovs-port-phys0.nmconnection
Feb 23 17:50:15 ip-10-0-136-68 nm-dispatcher[1811]: ++ nmcli -t -f device,type,uuid conn
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.2935] device (br-ex): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed')
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + shopt -s nullglob
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + MANAGED_NM_CONN_FILES+=(${base_path}/*${MANAGED_NM_CONN_SUFFIX}.nmconnection ${base_path}/*${MANAGED_NM_CONN_SUFFIX})
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + shopt -u nullglob
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + rm_nm_conn_files
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /run/NetworkManager/system-connections/br-ex ']'
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /run/NetworkManager/system-connections/br-ex.nmconnection ']'
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /run/NetworkManager/system-connections/ovs-if-br-ex ']'
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /run/NetworkManager/system-connections/ovs-if-br-ex.nmconnection ']'
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /run/NetworkManager/system-connections/ovs-port-br-ex ']'
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /run/NetworkManager/system-connections/ovs-port-br-ex.nmconnection ']'
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /run/NetworkManager/system-connections/ovs-if-phys0 ']'
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /run/NetworkManager/system-connections/ovs-if-phys0.nmconnection ']'
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /run/NetworkManager/system-connections/ovs-port-phys0 ']'
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /run/NetworkManager/system-connections/ovs-port-phys0.nmconnection ']'
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + ovs-vsctl --timeout=30 --if-exists del-br br-ex
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + remove_ovn_bridges br-ex1 phys1
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + bridge_name=br-ex1
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + port_name=phys1
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + update_nm_conn_conf_files br-ex1 phys1
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + update_nm_conn_files_base /etc/NetworkManager/system-connections br-ex1 phys1
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + base_path=/etc/NetworkManager/system-connections
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + bridge_name=br-ex1
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + port_name=phys1
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + ovs_port=ovs-port-br-ex1
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + ovs_interface=ovs-if-br-ex1
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + default_port_name=ovs-port-phys1
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + bridge_interface_name=ovs-if-phys1
Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + MANAGED_NM_CONN_FILES=($(echo "${base_path}"/{"$bridge_name","$ovs_interface","$ovs_port","$bridge_interface_name","$default_port_name"}{,.nmconnection}))
Feb 23 17:50:15 ip-10-0-136-68 ovs-vswitchd[1145]: ovs|00001|ofproto_dpif_xlate(handler14)|WARN|Invalid Geneve tunnel metadata on bridge br-int while processing tcp,in_port=2,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:04,nw_src=10.131.0.33,nw_dst=10.129.2.4,nw_tos=0,nw_ecn=0,nw_ttl=63,nw_frag=no,tp_src=46790,tp_dst=9154,tcp_flags=syn
Feb 23 17:50:15 ip-10-0-136-68 nm-dispatcher[1812]: ++ awk -F : '{if( ($1=="10b83248-bb63-4adb-953a-b09ec4a7297d" || $3=="10b83248-bb63-4adb-953a-b09ec4a7297d") && $2~/^ovs*/) print $NF}'
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.2938] device (br-ex): Activation: successful, device activated.
Feb 23 17:50:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:15.758666948Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 17:50:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:15.758702365Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 17:50:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:15.810766293Z" level=warning msg="Could not restore sandbox 13a3543931af50fb11a2d5bb79aa8a25ddd5f8fc251c224ac455e5c9a9d0605d: failed to Statfs \"/var/run/netns/10b629ec-6fd9-4a7a-bdf3-191b484df0a5\": no such file or directory"
Feb 23 17:50:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:15.828082485Z" level=warning msg="Deleting all containers under sandbox 13a3543931af50fb11a2d5bb79aa8a25ddd5f8fc251c224ac455e5c9a9d0605d since it could not be restored"
Feb 23 17:50:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:15.846178681Z" level=warning msg="Could not restore sandbox 19a2bd7297b4ef4216e524d278c8c42f3017a2f792625499abc6e4bc17f836a5: failed to Statfs \"/var/run/netns/878b4be7-9dda-4a34-a051-3b53bc09a6dc\": no such file or directory"
Feb 23 17:50:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:15.858968588Z" level=warning msg="Deleting all containers under sandbox 19a2bd7297b4ef4216e524d278c8c42f3017a2f792625499abc6e4bc17f836a5 since
it could not be restored" Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1452]: ++ echo /etc/NetworkManager/system-connections/br-ex1 /etc/NetworkManager/system-connections/br-ex1.nmconnection /etc/NetworkManager/system-connections/ovs-if-br-ex1 /etc/NetworkManager/system-connections/ovs-if-br-ex1.nmconnection /etc/NetworkManager/system-connections/ovs-port-br-ex1 /etc/NetworkManager/system-connections/ovs-port-br-ex1.nmconnection /etc/NetworkManager/system-connections/ovs-if-phys1 /etc/NetworkManager/system-connections/ovs-if-phys1.nmconnection /etc/NetworkManager/system-connections/ovs-port-phys1 /etc/NetworkManager/system-connections/ovs-port-phys1.nmconnection Feb 23 17:50:15 ip-10-0-136-68 nm-dispatcher[1774]: + PORT_CONNECTION_UUID=10b83248-bb63-4adb-953a-b09ec4a7297d Feb 23 17:50:15 ip-10-0-136-68 nm-dispatcher[1774]: + '[' 10b83248-bb63-4adb-953a-b09ec4a7297d == '' ']' Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.4248] device (ens5): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed') Feb 23 17:50:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:15.914789944Z" level=warning msg="Could not restore sandbox e35d890abd5d4b03390bbae5fd11d7232b569cf59687eadfd086ace940a65af8: failed to Statfs \"/var/run/netns/b7e04edc-986e-48bf-8822-18763de96831\": no such file or directory" Feb 23 17:50:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:15.928145002Z" level=warning msg="Deleting all containers under sandbox e35d890abd5d4b03390bbae5fd11d7232b569cf59687eadfd086ace940a65af8 since it could not be restored" Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + shopt -s nullglob Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + MANAGED_NM_CONN_FILES+=(${base_path}/*${MANAGED_NM_CONN_SUFFIX}.nmconnection ${base_path}/*${MANAGED_NM_CONN_SUFFIX}) Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + shopt -u nullglob Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + rm_nm_conn_files Feb 23 
17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/br-ex1 ']' Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/br-ex1.nmconnection ']' Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-br-ex1 ']' Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-br-ex1.nmconnection ']' Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-br-ex1 ']' Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-br-ex1.nmconnection ']' Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-phys1 ']' Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-phys1.nmconnection ']' Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f 
/etc/NetworkManager/system-connections/ovs-port-phys1 ']' Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-phys1.nmconnection ']' Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + update_nm_conn_set_files br-ex1 phys1 Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + update_nm_conn_files_base /run/NetworkManager/system-connections br-ex1 phys1 Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + base_path=/run/NetworkManager/system-connections Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + bridge_name=br-ex1 Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + port_name=phys1 Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + ovs_port=ovs-port-br-ex1 Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + ovs_interface=ovs-if-br-ex1 Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + default_port_name=ovs-port-phys1 Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + bridge_interface_name=ovs-if-phys1 Feb 23 17:50:15 ip-10-0-136-68 configure-ovs.sh[1307]: + MANAGED_NM_CONN_FILES=($(echo "${base_path}"/{"$bridge_name","$ovs_interface","$ovs_port","$bridge_interface_name","$default_port_name"}{,.nmconnection})) Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1817]: ++ nmcli -t -f connection.slave-type conn show 10b83248-bb63-4adb-953a-b09ec4a7297d Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.4250] device (ens5): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed') Feb 23 17:50:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:15.956812631Z" level=warning msg="Could not restore sandbox 904f3beae60de67c16a5cc959f8d904ee658bf99c4f43531f437e17dffa58753: failed to Statfs \"/var/run/netns/4683a5dd-6f28-4f95-b6df-7f103be2d0f8\": no such file or directory" Feb 23 17:50:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 
17:50:15.970113871Z" level=warning msg="Deleting all containers under sandbox 904f3beae60de67c16a5cc959f8d904ee658bf99c4f43531f437e17dffa58753 since it could not be restored" Feb 23 17:50:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:16.009395814Z" level=warning msg="Could not restore sandbox 5c3c1df7a428c3e76b587938c521a49edd700de20559f4f3b999cebd60be312a: failed to Statfs \"/var/run/netns/69dd2878-f624-4440-97e8-7ece7e4437b1\": no such file or directory" Feb 23 17:50:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:16.020205800Z" level=warning msg="Deleting all containers under sandbox 5c3c1df7a428c3e76b587938c521a49edd700de20559f4f3b999cebd60be312a since it could not be restored" Feb 23 17:50:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:16.047919708Z" level=warning msg="Could not restore sandbox cd0b87e63117453924469fa23a8876f62c240ad1b21efb83dea15bc99dfc3092: failed to Statfs \"/var/run/netns/4383980e-ee96-45fd-8b0e-55d1e1a5408f\": no such file or directory" Feb 23 17:50:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:16.058496412Z" level=warning msg="Deleting all containers under sandbox cd0b87e63117453924469fa23a8876f62c240ad1b21efb83dea15bc99dfc3092 since it could not be restored" Feb 23 17:50:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:16.103579963Z" level=warning msg="Could not restore sandbox 569e9fe1389cead46f8e9fec1e6752ebf408d1b8e1da886591a02dbd29bff450: failed to Statfs \"/var/run/netns/4173af40-9a8f-40dd-9e34-587a95d5903e\": no such file or directory" Feb 23 17:50:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:16.113707836Z" level=warning msg="Deleting all containers under sandbox 569e9fe1389cead46f8e9fec1e6752ebf408d1b8e1da886591a02dbd29bff450 since it could not be restored" Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1453]: ++ echo /run/NetworkManager/system-connections/br-ex1 /run/NetworkManager/system-connections/br-ex1.nmconnection /run/NetworkManager/system-connections/ovs-if-br-ex1 
/run/NetworkManager/system-connections/ovs-if-br-ex1.nmconnection /run/NetworkManager/system-connections/ovs-port-br-ex1 /run/NetworkManager/system-connections/ovs-port-br-ex1.nmconnection /run/NetworkManager/system-connections/ovs-if-phys1 /run/NetworkManager/system-connections/ovs-if-phys1.nmconnection /run/NetworkManager/system-connections/ovs-port-phys1 /run/NetworkManager/system-connections/ovs-port-phys1.nmconnection Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1818]: ++ awk -F : '{print $NF}' Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.4253] device (ens5): Activation: successful, device activated. Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + shopt -s nullglob Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + MANAGED_NM_CONN_FILES+=(${base_path}/*${MANAGED_NM_CONN_SUFFIX}.nmconnection ${base_path}/*${MANAGED_NM_CONN_SUFFIX}) Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + shopt -u nullglob Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + rm_nm_conn_files Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /run/NetworkManager/system-connections/br-ex1 ']' Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /run/NetworkManager/system-connections/br-ex1.nmconnection ']' Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /run/NetworkManager/system-connections/ovs-if-br-ex1 ']' Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /run/NetworkManager/system-connections/ovs-if-br-ex1.nmconnection ']' Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in 
"${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /run/NetworkManager/system-connections/ovs-port-br-ex1 ']' Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /run/NetworkManager/system-connections/ovs-port-br-ex1.nmconnection ']' Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /run/NetworkManager/system-connections/ovs-if-phys1 ']' Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /run/NetworkManager/system-connections/ovs-if-phys1.nmconnection ']' Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /run/NetworkManager/system-connections/ovs-port-phys1 ']' Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /run/NetworkManager/system-connections/ovs-port-phys1.nmconnection ']' Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + ovs-vsctl --timeout=30 --if-exists del-br br-ex1 Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + echo 'OVS configuration successfully reverted' Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: OVS configuration successfully reverted Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + reload_profiles_nm '' '' Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' 0 -eq 0 ']' Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + return Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + print_state Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + echo 'Current device, connection, interface and routing 
state:' Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: Current device, connection, interface and routing state: Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1774]: + PORT_OVS_SLAVE_TYPE=ovs-bridge Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1774]: + '[' ovs-bridge '!=' ovs-bridge ']' Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.4444] device (br-ex): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed') Feb 23 17:50:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:16.205986936Z" level=warning msg="Could not restore sandbox 11e3e168f4f38c1e32d79a7f932ab94223a43d4ad5ead9f127fefc7903f7f874: failed to Statfs \"/var/run/netns/032b2b6e-e8bd-477c-be7b-99b07a9ca111\": no such file or directory" Feb 23 17:50:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:16.214115722Z" level=warning msg="Deleting all containers under sandbox 11e3e168f4f38c1e32d79a7f932ab94223a43d4ad5ead9f127fefc7903f7f874 since it could not be restored" Feb 23 17:50:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:16.223760802Z" level=warning msg="Could not restore sandbox 9ac9106efc7becfc75e53c6f3bbb75c9b475700866d36b78720d1821a3f54d87: failed to Statfs \"/var/run/netns/68c19172-c7c8-4a6b-880c-e79152a16a50\": no such file or directory" Feb 23 17:50:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:16.231761775Z" level=warning msg="Deleting all containers under sandbox 9ac9106efc7becfc75e53c6f3bbb75c9b475700866d36b78720d1821a3f54d87 since it could not be restored" Feb 23 17:50:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:16.257061930Z" level=warning msg="Could not restore sandbox c4bc7ffd38dd7829e130588911e2a9b76dd4055df2ffe9584b0f97767744e3ab: failed to Statfs \"/var/run/netns/2ca3da91-c233-4b87-902c-88de41d8c9db\": no such file or directory" Feb 23 17:50:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:16.269465647Z" level=warning msg="Deleting all containers under sandbox 
c4bc7ffd38dd7829e130588911e2a9b76dd4055df2ffe9584b0f97767744e3ab since it could not be restored" Feb 23 17:50:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:16.285742259Z" level=warning msg="Could not restore sandbox c4955e2bb6ed54ae807213d28434394576684187194708bbbf49ca144840f17b: failed to Statfs \"/var/run/netns/652c23fc-94b7-440c-89cf-bc2999359623\": no such file or directory" Feb 23 17:50:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:16.294709159Z" level=warning msg="Deleting all containers under sandbox c4955e2bb6ed54ae807213d28434394576684187194708bbbf49ca144840f17b since it could not be restored" Feb 23 17:50:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:16.321642347Z" level=warning msg="Could not restore sandbox 5fee4f72ddb289d7c724dc134f484551f32fef41b4335e681c821de016db4948: failed to Statfs \"/var/run/netns/4d082a50-4c8a-4970-bade-95bf44983bd3\": no such file or directory" Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1455]: + nmcli -g all device Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1823]: ++ nmcli -t -f connection.master conn show 10b83248-bb63-4adb-953a-b09ec4a7297d Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.4445] device (br-ex): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed') Feb 23 17:50:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:16.332024736Z" level=warning msg="Deleting all containers under sandbox 5fee4f72ddb289d7c724dc134f484551f32fef41b4335e681c821de016db4948 since it could not be restored" Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1456]: + grep -v unmanaged Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1456]: ens5:ethernet:connected:full:full:/org/freedesktop/NetworkManager/Devices/4:Wired connection 1:eb99b8bd-8e1f-3f41-845b-962703e428f7:/org/freedesktop/NetworkManager/ActiveConnection/2 Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1456]: lo:loopback:connected 
(externally):limited:limited:/org/freedesktop/NetworkManager/Devices/1:lo:749f9974-6e7f-442a-ac13-546c37530197:/org/freedesktop/NetworkManager/ActiveConnection/1 Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1824]: ++ awk -F : '{print $NF}' Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.4449] device (br-ex): Activation: successful, device activated. Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + nmcli -g all connection Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1774]: + BRIDGE_NAME=br-ex Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1774]: + '[' br-ex '!=' br-ex ']' Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1774]: + ovs-vsctl list interface ens5 Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1774]: + CONFIGURATION_FILE=/run/ofport_requests.br-ex Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1774]: + declare -A INTERFACES Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1774]: + '[' -f /run/ofport_requests.br-ex ']' Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1774]: + echo 'Sourcing configuration file '\''/run/ofport_requests.br-ex'\'' with contents:' Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1774]: Sourcing configuration file '/run/ofport_requests.br-ex' with contents: Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1774]: + cat /run/ofport_requests.br-ex Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.4713] audit: op="connection-update" uuid="13489ac6-b2bc-4cc7-8035-a6c6f3ece4df" name="br-ex" args="connection.timestamp,connection.autoconnect" pid=1751 uid=0 result="success" Feb 23 17:50:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:16.403470183Z" level=warning msg="Could not restore sandbox aa2f6c1cfe2015e661750ad0d84d67c8d8f79e105601e0a761db6a83ba33b3f1: failed to Statfs \"/var/run/netns/f547dd21-4ba2-4f6b-bdf0-89cefc13a119\": no such file or directory" Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1460]: Wired connection 1:eb99b8bd-8e1f-3f41-845b-962703e428f7:802-3-ethernet:1677174612:Thu Feb 23 17\:50\:12 
2023:yes:-999:no:/org/freedesktop/NetworkManager/Settings/2:yes:ens5:activated:/org/freedesktop/NetworkManager/ActiveConnection/2::/run/NetworkManager/system-connections/Wired connection 1.nmconnection Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1460]: lo:749f9974-6e7f-442a-ac13-546c37530197:loopback:1677174612:Thu Feb 23 17\:50\:12 2023:no:0:no:/org/freedesktop/NetworkManager/Settings/1:yes:lo:activated:/org/freedesktop/NetworkManager/ActiveConnection/1::/run/NetworkManager/system-connections/lo.nmconnection Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1829]: declare -A INTERFACES=([ens5]="1" ) Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.5219] agent-manager: agent[034c4279d6ed69d2,:1.90/nmcli-connect/0]: agent registered Feb 23 17:50:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:16.409799303Z" level=warning msg="Deleting all containers under sandbox aa2f6c1cfe2015e661750ad0d84d67c8d8f79e105601e0a761db6a83ba33b3f1 since it could not be restored" Feb 23 17:50:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:16.416270348Z" level=warning msg="Could not restore sandbox 422c8976e5316750bfcd5cc5a0d60f7a3fcd666801a04dc9000f94c8dd85d5d5: failed to Statfs \"/var/run/netns/67115877-1e45-4be1-ab56-dfcafa2c613e\": no such file or directory" Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + ip -d address show Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1774]: + source /run/ofport_requests.br-ex Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1774]: ++ INTERFACES=(['ens5']='1') Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1774]: ++ declare -A INTERFACES Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1774]: + '[' a ']' Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1774]: + ovs-vsctl set Interface ens5 ofport_request=1 Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1774]: + declare -p INTERFACES Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.5224] device (ens5): state change: ip-check -> deactivating (reason 'new-activation', 
sys-iface-state: 'managed') Feb 23 17:50:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:16.424588149Z" level=warning msg="Deleting all containers under sandbox 422c8976e5316750bfcd5cc5a0d60f7a3fcd666801a04dc9000f94c8dd85d5d5 since it could not be restored" Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1464]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1464]: link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0 minmtu 0 maxmtu 0 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 tso_max_size 524280 tso_max_segs 65535 gro_max_size 65536 Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1464]: inet 127.0.0.1/8 scope host lo Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1464]: valid_lft forever preferred_lft forever Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1464]: inet6 ::1/128 scope host Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1464]: valid_lft forever preferred_lft forever Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1464]: 2: ens5: mtu 9001 qdisc mq state UP group default qlen 1000 Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1464]: link/ether 02:ea:92:f9:d3:f3 brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 128 maxmtu 9216 numtxqueues 4 numrxqueues 4 gso_max_size 65536 gso_max_segs 65535 tso_max_size 65536 tso_max_segs 65535 gro_max_size 65536 parentbus pci parentdev 0000:00:05.0 Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1464]: altname enp0s5 Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1464]: inet 10.0.136.68/19 brd 10.0.159.255 scope global dynamic noprefixroute ens5 Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1464]: valid_lft 3600sec preferred_lft 3600sec Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1464]: inet6 fe80::c8e8:d07:4fa0:2dbc/64 scope link tentative noprefixroute Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1464]: valid_lft forever preferred_lft forever Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1464]: 3: 
ovs-system: mtu 1500 qdisc noop state DOWN group default qlen 1000 Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1464]: link/ether ee:b5:a6:bc:8d:2c brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65535 Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1464]: openvswitch numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 tso_max_size 65536 tso_max_segs 65535 gro_max_size 65536 Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1464]: 4: ovn-k8s-mp0: mtu 8901 qdisc noop state DOWN group default qlen 1000 Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1464]: link/ether 2e:5d:2b:01:25:48 brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65535 Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1464]: openvswitch numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 tso_max_size 65536 tso_max_segs 65535 gro_max_size 65536 Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1464]: 5: genev_sys_6081: mtu 65000 qdisc noqueue master ovs-system state UNKNOWN group default qlen 1000 Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1464]: link/ether 3e:ed:17:7f:65:44 brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65465 Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1464]: geneve external id 0 ttl auto dstport 6081 udp6zerocsumrx Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1859]: Error: Device '' not found. 
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.5226] manager: NetworkManager state is now CONNECTED_LOCAL Feb 23 17:50:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:16.440748999Z" level=info msg="cleanup sandbox network" Feb 23 17:50:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:16.442040327Z" level=info msg="Successfully cleaned up network for pod 422c8976e5316750bfcd5cc5a0d60f7a3fcd666801a04dc9000f94c8dd85d5d5" Feb 23 17:50:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:16.442054538Z" level=info msg="cleanup sandbox network" Feb 23 17:50:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:16.442065258Z" level=info msg="Successfully cleaned up network for pod 11e3e168f4f38c1e32d79a7f932ab94223a43d4ad5ead9f127fefc7903f7f874" Feb 23 17:50:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:16.442074691Z" level=info msg="cleanup sandbox network" Feb 23 17:50:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:16.442085338Z" level=info msg="Successfully cleaned up network for pod 569e9fe1389cead46f8e9fec1e6752ebf408d1b8e1da886591a02dbd29bff450" Feb 23 17:50:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:16.442095301Z" level=info msg="cleanup sandbox network" Feb 23 17:50:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:16.444357878Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:aa2f6c1cfe2015e661750ad0d84d67c8d8f79e105601e0a761db6a83ba33b3f1 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS: Networks:[{Name:multus-cni-network Ifname:eth0}] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:50:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:16.444489590Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1464]: openvswitch_slave numtxqueues 1 numrxqueues 1 
gso_max_size 65536 gso_max_segs 65535 tso_max_size 65536 tso_max_segs 65535 gro_max_size 65536
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1464]: inet6 fe80::3ced:17ff:fe7f:6544/64 scope link tentative
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1464]: valid_lft forever preferred_lft forever
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1464]: 6: br-int: mtu 8901 qdisc noop state DOWN group default qlen 1000
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1464]: link/ether 1e:70:f2:fd:64:95 brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65535
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1464]: openvswitch numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 tso_max_size 65536 tso_max_segs 65535 gro_max_size 65536
Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1926]: + [[ OVNKubernetes != \O\V\N\K\u\b\e\r\n\e\t\e\s ]]
Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1926]: + INTERFACE_NAME=ens5
Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1926]: + OPERATION=pre-up
Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1926]: + '[' pre-up '!=' pre-up ']'
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.5226] device (ens5): detaching ovs interface ens5
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + ip route show
Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1928]: ++ nmcli -t -f device,type,uuid conn
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.5227] device (ens5): released from master device ens5
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1465]: default via 10.0.128.1 dev ens5 proto dhcp src 10.0.136.68 metric 100
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1465]: 10.0.128.0/19 dev ens5 proto kernel scope link src 10.0.136.68 metric 100
Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1929]: ++ awk -F : '{if($1=="ens5" && $2!~/^ovs*/) print $NF}'
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.5231] device (ens5): disconnecting for new activation request.
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + ip -6 route show
Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1926]: + INTERFACE_CONNECTION_UUID=7d3a1e16-f5ff-43fb-83ee-ec795d0a7e41
Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1926]: + '[' 7d3a1e16-f5ff-43fb-83ee-ec795d0a7e41 == '' ']'
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.5232] audit: op="connection-activate" uuid="7d3a1e16-f5ff-43fb-83ee-ec795d0a7e41" name="ovs-if-phys0" pid=1788 uid=0 result="success"
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1466]: ::1 dev lo proto kernel metric 256 pref medium
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1466]: fe80::/64 dev genev_sys_6081 proto kernel metric 256 pref medium
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1466]: fe80::/64 dev ens5 proto kernel metric 1024 pref medium
Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1935]: ++ awk -F : '{print $NF}'
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.5245] device (ens5): state change: deactivating -> disconnected (reason 'new-activation', sys-iface-state: 'managed')
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + convert_to_bridge ens5 br-ex phys0 48
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + local iface=ens5
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + local bridge_name=br-ex
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + local port_name=phys0
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + local bridge_metric=48
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + local ovs_port=ovs-port-br-ex
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + local ovs_interface=ovs-if-br-ex
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + local default_port_name=ovs-port-phys0
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + local bridge_interface_name=ovs-if-phys0
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' ens5 = br-ex ']'
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + nm_config_changed=1
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -z ens5 ']'
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + iface_mac=02:ea:92:f9:d3:f3
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + echo 'MAC address found for iface: ens5: 02:ea:92:f9:d3:f3'
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: MAC address found for iface: ens5: 02:ea:92:f9:d3:f3
Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1934]: ++ nmcli -t -f connection.slave-type conn show 7d3a1e16-f5ff-43fb-83ee-ec795d0a7e41
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.5359] device (ens5): Activation: starting connection 'ovs-if-phys0' (7d3a1e16-f5ff-43fb-83ee-ec795d0a7e41)
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1469]: ++ ip link show ens5
Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1926]: + INTERFACE_OVS_SLAVE_TYPE=ovs-port
Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1926]: + '[' ovs-port '!=' ovs-port ']'
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.5361] ovs: ovs interface "patch-br-int-to-br-ex_ip-10-0-136-68.us-west-2.compute.internal" ((null)) failed: No usable peer 'patch-br-ex_ip-10-0-136-68.us-west-2.compute.internal-to-br-int' exists in 'system' datapath.
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1470]: ++ awk '{print $5; exit}'
Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1940]: ++ nmcli -t -f connection.master conn show 7d3a1e16-f5ff-43fb-83ee-ec795d0a7e41
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.5384] ovs: ovs interface "patch-br-int-to-br-ex_ip-10-0-136-68.us-west-2.compute.internal" ((null)) failed: No usable peer 'patch-br-ex_ip-10-0-136-68.us-west-2.compute.internal-to-br-int' exists in 'system' datapath.
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + iface_mtu=9001
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + [[ -z 9001 ]]
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + echo 'MTU found for iface: ens5: 9001'
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: MTU found for iface: ens5: 9001
Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1941]: ++ awk -F : '{print $NF}'
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.5386] device (ens5): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1473]: ++ awk '/\sens5\s*$/ {print $1}'
Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1926]: + PORT=10b83248-bb63-4adb-953a-b09ec4a7297d
Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1926]: + '[' 10b83248-bb63-4adb-953a-b09ec4a7297d == '' ']'
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.5388] manager: NetworkManager state is now CONNECTING
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1472]: ++ nmcli --fields UUID,DEVICE conn show --active
Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1946]: ++ nmcli -t -f device,type,uuid conn
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.5389] device (ens5): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + old_conn=eb99b8bd-8e1f-3f41-845b-962703e428f7
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + [[ -z eb99b8bd-8e1f-3f41-845b-962703e428f7 ]]
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + nmcli connection show br-ex
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + ovs-vsctl --timeout=30 --if-exists del-br br-ex
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + add_nm_conn type ovs-bridge con-name br-ex conn.interface br-ex 802-3-ethernet.mtu 9001 connection.autoconnect-slaves 1
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + nmcli c add type ovs-bridge con-name br-ex conn.interface br-ex 802-3-ethernet.mtu 9001 connection.autoconnect-slaves 1 connection.autoconnect no
Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1947]: ++ awk -F : '{if( ($1=="10b83248-bb63-4adb-953a-b09ec4a7297d" || $3=="10b83248-bb63-4adb-953a-b09ec4a7297d") && $2~/^ovs*/) print $NF}'
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.5393] device (ens5): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1482]: Connection 'br-ex' (13489ac6-b2bc-4cc7-8035-a6c6f3ece4df) successfully added.
Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1926]: + PORT_CONNECTION_UUID=10b83248-bb63-4adb-953a-b09ec4a7297d
Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1926]: + '[' 10b83248-bb63-4adb-953a-b09ec4a7297d == '' ']'
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.5403] device (ens5): Activation: connection 'ovs-if-phys0' enslaved, continuing activation
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + nmcli connection show ovs-port-phys0
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + ovs-vsctl --timeout=30 --if-exists del-port br-ex ens5
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + add_nm_conn type ovs-port conn.interface ens5 master br-ex con-name ovs-port-phys0 connection.autoconnect-slaves 1
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + nmcli c add type ovs-port conn.interface ens5 master br-ex con-name ovs-port-phys0 connection.autoconnect-slaves 1 connection.autoconnect no
Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1952]: ++ nmcli -t -f connection.slave-type conn show 10b83248-bb63-4adb-953a-b09ec4a7297d
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.5407] device (ens5): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed')
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1491]: Connection 'ovs-port-phys0' (10b83248-bb63-4adb-953a-b09ec4a7297d) successfully added.
Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1953]: ++ awk -F : '{print $NF}'
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.5449] ovs: ovs interface "patch-br-int-to-br-ex_ip-10-0-136-68.us-west-2.compute.internal" ((null)) failed: No usable peer 'patch-br-ex_ip-10-0-136-68.us-west-2.compute.internal-to-br-int' exists in 'system' datapath.
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + nmcli connection show ovs-port-br-ex
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + ovs-vsctl --timeout=30 --if-exists del-port br-ex br-ex
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + add_nm_conn type ovs-port conn.interface br-ex master br-ex con-name ovs-port-br-ex
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + nmcli c add type ovs-port conn.interface br-ex master br-ex con-name ovs-port-br-ex connection.autoconnect no
Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1926]: + PORT_OVS_SLAVE_TYPE=ovs-bridge
Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1926]: + '[' ovs-bridge '!=' ovs-bridge ']'
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.5487] ovs: ovs interface "patch-br-int-to-br-ex_ip-10-0-136-68.us-west-2.compute.internal" ((null)) failed: No usable peer 'patch-br-ex_ip-10-0-136-68.us-west-2.compute.internal-to-br-int' exists in 'system' datapath.
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1500]: Connection 'ovs-port-br-ex' (89e5f9b7-0e1b-4e0e-b6ba-2b58e2193d26) successfully added.
Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1958]: ++ nmcli -t -f connection.master conn show 10b83248-bb63-4adb-953a-b09ec4a7297d
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.9086] device (ens5): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed')
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + extra_phys_args=()
Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1959]: ++ awk -F : '{print $NF}'
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.9088] device (ens5): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed')
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1504]: ++ nmcli --get-values connection.type conn show eb99b8bd-8e1f-3f41-845b-962703e428f7
Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1926]: + BRIDGE_NAME=br-ex
Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1926]: + '[' br-ex '!=' br-ex ']'
Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1926]: + ovs-vsctl list interface ens5
Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1926]: + CONFIGURATION_FILE=/run/ofport_requests.br-ex
Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1926]: + declare -A INTERFACES
Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1926]: + '[' -f /run/ofport_requests.br-ex ']'
Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1926]: + echo 'Sourcing configuration file '\''/run/ofport_requests.br-ex'\'' with contents:'
Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1926]: Sourcing configuration file '/run/ofport_requests.br-ex' with contents:
Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1926]: + cat /run/ofport_requests.br-ex
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.9091] manager: NetworkManager state is now CONNECTED_LOCAL
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' 802-3-ethernet == vlan ']'
Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1964]: declare -A INTERFACES=([ens5]="1" )
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.9094] device (ens5): Activation: successful, device activated.
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1508]: ++ nmcli --get-values connection.type conn show eb99b8bd-8e1f-3f41-845b-962703e428f7
Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1926]: + source /run/ofport_requests.br-ex
Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1926]: ++ INTERFACES=(['ens5']='1')
Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1926]: ++ declare -A INTERFACES
Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1926]: + '[' a ']'
Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1926]: + ovs-vsctl set Interface ens5 ofport_request=1
Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[1926]: + declare -p INTERFACES
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.9285] audit: op="connection-update" uuid="7d3a1e16-f5ff-43fb-83ee-ec795d0a7e41" name="ovs-if-phys0" args="connection.timestamp,connection.autoconnect" pid=1967 uid=0 result="success"
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' 802-3-ethernet == bond ']'
Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[2028]: + [[ OVNKubernetes != \O\V\N\K\u\b\e\r\n\e\t\e\s ]]
Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[2028]: + INTERFACE_NAME=br-ex
Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[2028]: + OPERATION=pre-up
Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[2028]: + '[' pre-up '!=' pre-up ']'
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.9809] agent-manager: agent[a4616e04d4289bb9,:1.110/nmcli-connect/0]: agent registered
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1512]: ++ nmcli --get-values connection.type conn show eb99b8bd-8e1f-3f41-845b-962703e428f7
Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[2030]: ++ nmcli -t -f device,type,uuid conn
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.9815] device (br-ex): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' 802-3-ethernet == team ']'
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + iface_type=802-3-ethernet
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' '!' '' = 0 ']'
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + extra_phys_args+=(802-3-ethernet.cloned-mac-address "${iface_mac}")
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + nmcli connection show ovs-if-phys0
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + ovs-vsctl --timeout=30 --if-exists destroy interface ens5
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + add_nm_conn type 802-3-ethernet conn.interface ens5 master ovs-port-phys0 con-name ovs-if-phys0 connection.autoconnect-priority 100 connection.autoconnect-slaves 1 802-3-ethernet.mtu 9001 802-3-ethernet.cloned-mac-address 02:ea:92:f9:d3:f3
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + nmcli c add type 802-3-ethernet conn.interface ens5 master ovs-port-phys0 con-name ovs-if-phys0 connection.autoconnect-priority 100 connection.autoconnect-slaves 1 802-3-ethernet.mtu 9001 802-3-ethernet.cloned-mac-address 02:ea:92:f9:d3:f3 connection.autoconnect no
Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[2031]: ++ awk -F : '{if($1=="br-ex" && $2!~/^ovs*/) print $NF}'
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.9818] device (br-ex): state change: unavailable -> disconnected (reason 'user-requested', sys-iface-state: 'managed')
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1521]: Connection 'ovs-if-phys0' (7d3a1e16-f5ff-43fb-83ee-ec795d0a7e41) successfully added.
Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[2028]: + INTERFACE_CONNECTION_UUID=
Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[2028]: + '[' '' == '' ']'
Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[2028]: + exit 0
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.9821] device (br-ex): Activation: starting connection 'ovs-if-br-ex' (ed5e11f6-e938-4c92-9d73-c35d5035e9f5)
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1525]: ++ nmcli -g connection.uuid conn show ovs-if-phys0
Feb 23 17:50:16 ip-10-0-136-68 nm-dispatcher[2089]: Error: Device '' not found.
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.9822] audit: op="connection-activate" uuid="ed5e11f6-e938-4c92-9d73-c35d5035e9f5" name="ovs-if-br-ex" pid=2006 uid=0 result="success"
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + new_conn=7d3a1e16-f5ff-43fb-83ee-ec795d0a7e41
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.9822] device (br-ex): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1529]: ++ nmcli -g connection.uuid conn show ovs-port-br-ex
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.9824] manager: NetworkManager state is now CONNECTING
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + ovs_port_conn=89e5f9b7-0e1b-4e0e-b6ba-2b58e2193d26
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + replace_connection_master eb99b8bd-8e1f-3f41-845b-962703e428f7 7d3a1e16-f5ff-43fb-83ee-ec795d0a7e41
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + local old=eb99b8bd-8e1f-3f41-845b-962703e428f7
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + local new=7d3a1e16-f5ff-43fb-83ee-ec795d0a7e41
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.9826] device (br-ex): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1533]: ++ nmcli -g UUID connection show
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.9827] device (br-ex): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + for conn_uuid in $(nmcli -g UUID connection show)
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.9853] device (br-ex): Activation: connection 'ovs-if-br-ex' enslaved, continuing activation
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1537]: ++ nmcli -g connection.master connection show uuid eb99b8bd-8e1f-3f41-845b-962703e428f7
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.9894] device (br-ex): set-hw-addr: set-cloned MAC address to 02:EA:92:F9:D3:F3 (02:EA:92:F9:D3:F3)
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' '' '!=' eb99b8bd-8e1f-3f41-845b-962703e428f7 ']'
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + continue
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + for conn_uuid in $(nmcli -g UUID connection show)
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.9896] device (br-ex): carrier: link connected
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1541]: ++ nmcli -g connection.master connection show uuid 749f9974-6e7f-442a-ac13-546c37530197
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.9899] ovs: ovs interface "patch-br-int-to-br-ex_ip-10-0-136-68.us-west-2.compute.internal" ((null)) failed: No usable peer 'patch-br-ex_ip-10-0-136-68.us-west-2.compute.internal-to-br-int' exists in 'system' datapath.
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' '' '!=' eb99b8bd-8e1f-3f41-845b-962703e428f7 ']'
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + continue
Feb 23 17:50:16 ip-10-0-136-68 configure-ovs.sh[1307]: + for conn_uuid in $(nmcli -g UUID connection show)
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.9971] dhcp4 (br-ex): activation: beginning transaction (timeout in 45 seconds)
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1545]: ++ nmcli -g connection.master connection show uuid 13489ac6-b2bc-4cc7-8035-a6c6f3ece4df
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.9985] dhcp4 (br-ex): state changed new lease, address=10.0.136.68
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' '' '!=' eb99b8bd-8e1f-3f41-845b-962703e428f7 ']'
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + continue
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + for conn_uuid in $(nmcli -g UUID connection show)
Feb 23 17:50:14 ip-10-0-136-68 NetworkManager[1177]: [1677174614.9988] policy: set 'ovs-if-br-ex' (br-ex) as default for IPv4 routing and DNS
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1549]: ++ nmcli -g connection.master connection show uuid 7d3a1e16-f5ff-43fb-83ee-ec795d0a7e41
Feb 23 17:50:15 ip-10-0-136-68 NetworkManager[1177]: [1677174615.0040] ovs: ovs interface "patch-br-int-to-br-ex_ip-10-0-136-68.us-west-2.compute.internal" ((null)) failed: No usable peer 'patch-br-ex_ip-10-0-136-68.us-west-2.compute.internal-to-br-int' exists in 'system' datapath.
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' 10b83248-bb63-4adb-953a-b09ec4a7297d '!=' eb99b8bd-8e1f-3f41-845b-962703e428f7 ']'
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + continue
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + for conn_uuid in $(nmcli -g UUID connection show)
Feb 23 17:50:15 ip-10-0-136-68 NetworkManager[1177]: [1677174615.0045] device (br-ex): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed')
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1553]: ++ nmcli -g connection.master connection show uuid 89e5f9b7-0e1b-4e0e-b6ba-2b58e2193d26
Feb 23 17:50:15 ip-10-0-136-68 NetworkManager[1177]: [1677174615.0556] device (br-ex): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed')
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' br-ex '!=' eb99b8bd-8e1f-3f41-845b-962703e428f7 ']'
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + continue
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + for conn_uuid in $(nmcli -g UUID connection show)
Feb 23 17:50:15 ip-10-0-136-68 NetworkManager[1177]: [1677174615.0558] device (br-ex): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed')
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1557]: ++ nmcli -g connection.master connection show uuid 10b83248-bb63-4adb-953a-b09ec4a7297d
Feb 23 17:50:15 ip-10-0-136-68 NetworkManager[1177]: [1677174615.0560] manager: NetworkManager state is now CONNECTED_SITE
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' br-ex '!=' eb99b8bd-8e1f-3f41-845b-962703e428f7 ']'
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + continue
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + replace_connection_master ens5 7d3a1e16-f5ff-43fb-83ee-ec795d0a7e41
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + local old=ens5
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + local new=7d3a1e16-f5ff-43fb-83ee-ec795d0a7e41
Feb 23 17:50:15 ip-10-0-136-68 NetworkManager[1177]: [1677174615.0563] device (br-ex): Activation: successful, device activated.
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1561]: ++ nmcli -g UUID connection show
Feb 23 17:50:15 ip-10-0-136-68 NetworkManager[1177]: [1677174615.0565] manager: NetworkManager state is now CONNECTED_GLOBAL
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + for conn_uuid in $(nmcli -g UUID connection show)
Feb 23 17:50:15 ip-10-0-136-68 NetworkManager[1177]: [1677174615.0794] audit: op="connection-update" uuid="ed5e11f6-e938-4c92-9d73-c35d5035e9f5" name="ovs-if-br-ex" args="connection.timestamp,connection.autoconnect" pid=2035 uid=0 result="success"
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1565]: ++ nmcli -g connection.master connection show uuid eb99b8bd-8e1f-3f41-845b-962703e428f7
Feb 23 17:50:15 ip-10-0-136-68 NetworkManager[1177]: [1677174615.1701] audit: op="connections-reload" pid=2129 uid=0 result="success"
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' '' '!=' ens5 ']'
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + continue
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + for conn_uuid in $(nmcli -g UUID connection show)
Feb 23 17:50:15 ip-10-0-136-68 systemd[1]: ovs-configuration.service: Deactivated successfully.
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1569]: ++ nmcli -g connection.master connection show uuid 749f9974-6e7f-442a-ac13-546c37530197
Feb 23 17:50:15 ip-10-0-136-68 systemd[1]: Finished Configures OVS with proper host networking configuration.
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' '' '!=' ens5 ']'
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + continue
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + for conn_uuid in $(nmcli -g UUID connection show)
Feb 23 17:50:15 ip-10-0-136-68 systemd[1]: ovs-configuration.service: Consumed 1.224s CPU time.
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1573]: ++ nmcli -g connection.master connection show uuid 13489ac6-b2bc-4cc7-8035-a6c6f3ece4df
Feb 23 17:50:15 ip-10-0-136-68 systemd[1]: Starting Wait for a non-localhost hostname...
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' '' '!=' ens5 ']'
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + continue
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + for conn_uuid in $(nmcli -g UUID connection show)
Feb 23 17:50:15 ip-10-0-136-68 systemd[1]: Finished Wait for a non-localhost hostname.
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1577]: ++ nmcli -g connection.master connection show uuid 7d3a1e16-f5ff-43fb-83ee-ec795d0a7e41
Feb 23 17:50:15 ip-10-0-136-68 systemd[1]: Reached target Network is Online.
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' 10b83248-bb63-4adb-953a-b09ec4a7297d '!=' ens5 ']'
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + continue
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + for conn_uuid in $(nmcli -g UUID connection show)
Feb 23 17:50:15 ip-10-0-136-68 systemd[1]: Login and scanning of iSCSI devices was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/var/lib/iscsi/nodes).
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1581]: ++ nmcli -g connection.master connection show uuid 89e5f9b7-0e1b-4e0e-b6ba-2b58e2193d26
Feb 23 17:50:15 ip-10-0-136-68 systemd[1]: Reached target Preparation for Remote File Systems.
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' br-ex '!=' ens5 ']'
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + continue
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + for conn_uuid in $(nmcli -g UUID connection show)
Feb 23 17:50:15 ip-10-0-136-68 systemd[1]: Reached target Remote Encrypted Volumes.
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1585]: ++ nmcli -g connection.master connection show uuid 10b83248-bb63-4adb-953a-b09ec4a7297d
Feb 23 17:50:15 ip-10-0-136-68 systemd[1]: Reached target Remote File Systems.
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' br-ex '!=' ens5 ']'
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + continue
Feb 23 17:50:15 ip-10-0-136-68 systemd[1]: Starting Dynamically sets the system reserved for the kubelet...
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1589]: ++ nmcli -g ipv4.method conn show eb99b8bd-8e1f-3f41-845b-962703e428f7
Feb 23 17:50:15 ip-10-0-136-68 systemd[1]: Machine Config Daemon Pull was skipped because of an unmet condition check (ConditionPathExists=/etc/ignition-machine-config-encapsulated.json).
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + ipv4_method=auto
Feb 23 17:50:15 ip-10-0-136-68 systemd[1]: Machine Config Daemon Firstboot was skipped because of an unmet condition check (ConditionPathExists=/etc/ignition-machine-config-encapsulated.json).
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1593]: ++ nmcli -g ipv6.method conn show eb99b8bd-8e1f-3f41-845b-962703e428f7
Feb 23 17:50:15 ip-10-0-136-68 systemd[1]: Starting Notify NFS peers of a restart...
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + ipv6_method=auto
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' 9001 -lt 1280 ']'
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + nmcli connection show ovs-if-br-ex
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + ovs-vsctl --timeout=30 --if-exists destroy interface br-ex
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' auto = manual ']'
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' auto = manual ']'
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + extra_if_brex_args=
Feb 23 17:50:15 ip-10-0-136-68 systemd[1]: Starting NFS status monitor for NFSv2/3 locking....
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1603]: ++ ip -j a show dev ens5
Feb 23 17:50:15 ip-10-0-136-68 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1604]: ++ jq '.[0].addr_info | map(. | select(.family == "inet")) | length'
Feb 23 17:50:15 ip-10-0-136-68 systemd[1]: Starting Permit User Sessions...
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + num_ipv4_addrs=1
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' 1 -gt 0 ']'
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + extra_if_brex_args+='ipv4.may-fail no '
Feb 23 17:50:15 ip-10-0-136-68 systemd[1]: Finished Dynamically sets the system reserved for the kubelet.
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1606]: ++ ip -j a show dev ens5
Feb 23 17:50:15 ip-10-0-136-68 systemd[1]: Started Notify NFS peers of a restart.
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1607]: ++ jq '.[0].addr_info | map(. | select(.family == "inet6" and .scope != "link")) | length'
Feb 23 17:50:15 ip-10-0-136-68 systemd[1]: Finished Permit User Sessions.
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + num_ip6_addrs=0
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' 0 -gt 0 ']'
Feb 23 17:50:15 ip-10-0-136-68 systemd[1]: CoreOS Live ISO virtio success was skipped because of an unmet condition check (ConditionPathExists=/dev/virtio-ports/coreos.liveiso-success).
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1608]: ++ nmcli --get-values ipv4.dhcp-client-id conn show eb99b8bd-8e1f-3f41-845b-962703e428f7
Feb 23 17:50:15 ip-10-0-136-68 systemd[1]: Starting Container Runtime Interface for OCI (CRI-O)...
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + dhcp_client_id=
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -n '' ']'
Feb 23 17:50:15 ip-10-0-136-68 systemd[1]: Started Getty on tty1.
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1612]: ++ nmcli --get-values ipv6.dhcp-duid conn show eb99b8bd-8e1f-3f41-845b-962703e428f7
Feb 23 17:50:15 ip-10-0-136-68 systemd[1]: Starting RPC Bind...
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + dhcp6_client_id=
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -n '' ']'
Feb 23 17:50:15 ip-10-0-136-68 systemd[1]: Started Serial Getty on ttyS0.
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1616]: ++ nmcli --get-values ipv6.addr-gen-mode conn show eb99b8bd-8e1f-3f41-845b-962703e428f7
Feb 23 17:50:15 ip-10-0-136-68 systemd[1]: Reached target Login Prompts.
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + ipv6_addr_gen_mode=default
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -n default ']'
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + extra_if_brex_args+='ipv6.addr-gen-mode default '
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + add_nm_conn type ovs-interface slave-type ovs-port conn.interface br-ex master 89e5f9b7-0e1b-4e0e-b6ba-2b58e2193d26 con-name ovs-if-br-ex 802-3-ethernet.mtu 9001 802-3-ethernet.cloned-mac-address 02:ea:92:f9:d3:f3 ipv4.method auto ipv4.route-metric 48 ipv6.method auto ipv6.route-metric 48 ipv4.may-fail no ipv6.addr-gen-mode default
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + nmcli c add type ovs-interface slave-type ovs-port conn.interface br-ex master 89e5f9b7-0e1b-4e0e-b6ba-2b58e2193d26 con-name ovs-if-br-ex 802-3-ethernet.mtu 9001 802-3-ethernet.cloned-mac-address 02:ea:92:f9:d3:f3 ipv4.method auto ipv4.route-metric 48 ipv6.method auto ipv6.route-metric 48 ipv4.may-fail no ipv6.addr-gen-mode default connection.autoconnect no
Feb 23 17:50:15 ip-10-0-136-68 systemd[1]: Started RPC Bind.
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1620]: Connection 'ovs-if-br-ex' (ed5e11f6-e938-4c92-9d73-c35d5035e9f5) successfully added.
Feb 23 17:50:15 ip-10-0-136-68 systemd[1]: Started NFS status monitor for NFSv2/3 locking..
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + configure_driver_options ens5
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + intf=ens5
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' '!' -f /sys/class/net/ens5/device/uevent ']'
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1625]: ++ cat /sys/class/net/ens5/device/uevent
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1626]: ++ grep DRIVER
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1627]: ++ awk -F = '{print $2}'
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + driver=ena
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + echo 'Driver name is' ena
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: Driver name is ena
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' ena = vmxnet3 ']'
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/ovnk/extra_bridge ']'
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' '!' -f /etc/ovnk/extra_bridge ']'
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1628]: + nmcli connection show br-ex1
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1628]: + nmcli connection show ovs-if-phys1
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + ovs-vsctl --timeout=30 --if-exists del-br br0
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + connections=(br-ex ovs-if-phys0)
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/ovnk/extra_bridge ']'
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1638]: ++ nmcli -g NAME c
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + IFS=
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + read -r connection
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + [[ Wired connection 1 == *\-\s\l\a\v\e\-\o\v\s\-\c\l\o\n\e ]]
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + IFS=
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + read -r connection
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + [[ lo == *\-\s\l\a\v\e\-\o\v\s\-\c\l\o\n\e ]]
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + IFS=
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + read -r connection
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + [[ br-ex == *\-\s\l\a\v\e\-\o\v\s\-\c\l\o\n\e ]]
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + IFS=
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + read -r connection
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + [[ ovs-if-br-ex == *\-\s\l\a\v\e\-\o\v\s\-\c\l\o\n\e ]]
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + IFS=
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + read -r connection
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + [[ ovs-if-phys0 == *\-\s\l\a\v\e\-\o\v\s\-\c\l\o\n\e ]]
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + IFS=
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + read -r connection
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + [[ ovs-port-br-ex == *\-\s\l\a\v\e\-\o\v\s\-\c\l\o\n\e ]]
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + IFS=
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + read -r connection
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + [[ ovs-port-phys0 == *\-\s\l\a\v\e\-\o\v\s\-\c\l\o\n\e ]]
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + IFS=
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + read -r connection
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + connections+=(ovs-if-br-ex)
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/ovnk/extra_bridge ']'
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + activate_nm_connections br-ex ovs-if-phys0 ovs-if-br-ex
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + connections=('br-ex' 'ovs-if-phys0' 'ovs-if-br-ex')
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + local connections
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + for conn in "${connections[@]}"
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1643]: ++ nmcli -g connection.slave-type connection show br-ex
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + local slave_type=
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' '' = team ']'
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' '' = bond ']'
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + for conn in "${connections[@]}"
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1647]: ++ nmcli -g connection.slave-type connection show ovs-if-phys0
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + local slave_type=ovs-port
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' ovs-port = team ']'
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' ovs-port = bond ']'
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + for conn in "${connections[@]}"
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1651]: ++ nmcli -g connection.slave-type connection show ovs-if-br-ex
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + local slave_type=ovs-port
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' ovs-port = team ']'
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' ovs-port = bond ']'
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + declare -A master_interfaces
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + for conn in "${connections[@]}"
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1655]: ++ nmcli -g connection.slave-type connection show br-ex
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + local slave_type=
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + local is_slave=false
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' '' = team ']'
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' '' = bond ']'
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + local master_interface
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + false
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1659]: ++ nmcli -g GENERAL.STATE conn show br-ex
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + local active_state=
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' '' == activated ']'
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + for i in {1..10}
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + echo 'Attempt 1 to bring up connection br-ex'
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: Attempt 1 to bring up connection br-ex
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + nmcli conn up br-ex
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1663]: Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/3)
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + s=0
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + break
Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' 0 -eq 0 ']'
Feb 23
17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + echo 'Brought up connection br-ex successfully' Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: Brought up connection br-ex successfully Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + false Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + nmcli c mod br-ex connection.autoconnect yes Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + for conn in "${connections[@]}" Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1770]: ++ nmcli -g connection.slave-type connection show ovs-if-phys0 Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + local slave_type=ovs-port Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + local is_slave=false Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' ovs-port = team ']' Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' ovs-port = bond ']' Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + local master_interface Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + false Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1778]: ++ nmcli -g GENERAL.STATE conn show ovs-if-phys0 Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + local active_state=activating Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' activating == activated ']' Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + for i in {1..10} Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + echo 'Attempt 1 to bring up connection ovs-if-phys0' Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: Attempt 1 to bring up connection ovs-if-phys0 Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + nmcli conn up ovs-if-phys0 Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1788]: Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/7) Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + s=0 Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + break Feb 23 17:50:17 ip-10-0-136-68 
configure-ovs.sh[1307]: + '[' 0 -eq 0 ']' Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + echo 'Brought up connection ovs-if-phys0 successfully' Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: Brought up connection ovs-if-phys0 successfully Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + false Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + nmcli c mod ovs-if-phys0 connection.autoconnect yes Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + for conn in "${connections[@]}" Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1975]: ++ nmcli -g connection.slave-type connection show ovs-if-br-ex Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + local slave_type=ovs-port Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + local is_slave=false Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' ovs-port = team ']' Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' ovs-port = bond ']' Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + local master_interface Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + false Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1993]: ++ nmcli -g GENERAL.STATE conn show ovs-if-br-ex Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + local active_state= Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' '' == activated ']' Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + for i in {1..10} Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + echo 'Attempt 1 to bring up connection ovs-if-br-ex' Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: Attempt 1 to bring up connection ovs-if-br-ex Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + nmcli conn up ovs-if-br-ex Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[2006]: Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/8) Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + s=0 Feb 23 17:50:17 ip-10-0-136-68 
configure-ovs.sh[1307]: + break Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' 0 -eq 0 ']' Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + echo 'Brought up connection ovs-if-br-ex successfully' Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: Brought up connection ovs-if-br-ex successfully Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + false Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + nmcli c mod ovs-if-br-ex connection.autoconnect yes Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + try_to_bind_ipv6_address Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + retries=60 Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + [[ 60 -eq 0 ]] Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[2050]: ++ ip -6 -j addr Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[2051]: ++ jq -r 'first(.[] | select(.ifname=="br-ex") | .addr_info[] | select(.scope=="global") | .local)' Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + ip= Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + [[ '' == '' ]] Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + echo 'No ipv6 ip to bind was found' Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: No ipv6 ip to bind was found Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + break Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + [[ 60 -eq 0 ]] Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + set_nm_conn_files Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' /etc/NetworkManager/system-connections '!=' /run/NetworkManager/system-connections ']' Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + update_nm_conn_conf_files br-ex phys0 Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + update_nm_conn_files_base /etc/NetworkManager/system-connections br-ex phys0 Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + base_path=/etc/NetworkManager/system-connections Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: 
+ bridge_name=br-ex Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + port_name=phys0 Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + ovs_port=ovs-port-br-ex Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + ovs_interface=ovs-if-br-ex Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + default_port_name=ovs-port-phys0 Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + bridge_interface_name=ovs-if-phys0 Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + MANAGED_NM_CONN_FILES=($(echo "${base_path}"/{"$bridge_name","$ovs_interface","$ovs_port","$bridge_interface_name","$default_port_name"}{,.nmconnection})) Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[2064]: ++ echo /etc/NetworkManager/system-connections/br-ex /etc/NetworkManager/system-connections/br-ex.nmconnection /etc/NetworkManager/system-connections/ovs-if-br-ex /etc/NetworkManager/system-connections/ovs-if-br-ex.nmconnection /etc/NetworkManager/system-connections/ovs-port-br-ex /etc/NetworkManager/system-connections/ovs-port-br-ex.nmconnection /etc/NetworkManager/system-connections/ovs-if-phys0 /etc/NetworkManager/system-connections/ovs-if-phys0.nmconnection /etc/NetworkManager/system-connections/ovs-port-phys0 /etc/NetworkManager/system-connections/ovs-port-phys0.nmconnection Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + shopt -s nullglob Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + MANAGED_NM_CONN_FILES+=(${base_path}/*${MANAGED_NM_CONN_SUFFIX}.nmconnection ${base_path}/*${MANAGED_NM_CONN_SUFFIX}) Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + shopt -u nullglob Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + copy_nm_conn_files /run/NetworkManager/system-connections Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + local dst_path=/run/NetworkManager/system-connections Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + for src in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[2065]: 
++ dirname /etc/NetworkManager/system-connections/br-ex Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + src_path=/etc/NetworkManager/system-connections Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[2066]: ++ basename /etc/NetworkManager/system-connections/br-ex Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + file=br-ex Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/br-ex ']' Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + echo 'Skipping br-ex since it does not exist at source' Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: Skipping br-ex since it does not exist at source Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + for src in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[2067]: ++ dirname /etc/NetworkManager/system-connections/br-ex.nmconnection Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + src_path=/etc/NetworkManager/system-connections Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[2068]: ++ basename /etc/NetworkManager/system-connections/br-ex.nmconnection Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + file=br-ex.nmconnection Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/br-ex.nmconnection ']' Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' '!' 
-f /run/NetworkManager/system-connections/br-ex.nmconnection ']' Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + echo 'Copying configuration br-ex.nmconnection' Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: Copying configuration br-ex.nmconnection Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + cp /etc/NetworkManager/system-connections/br-ex.nmconnection /run/NetworkManager/system-connections/br-ex.nmconnection Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + for src in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[2072]: ++ dirname /etc/NetworkManager/system-connections/ovs-if-br-ex Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + src_path=/etc/NetworkManager/system-connections Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[2074]: ++ basename /etc/NetworkManager/system-connections/ovs-if-br-ex Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + file=ovs-if-br-ex Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-br-ex ']' Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + echo 'Skipping ovs-if-br-ex since it does not exist at source' Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: Skipping ovs-if-br-ex since it does not exist at source Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + for src in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[2076]: ++ dirname /etc/NetworkManager/system-connections/ovs-if-br-ex.nmconnection Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + src_path=/etc/NetworkManager/system-connections Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[2078]: ++ basename /etc/NetworkManager/system-connections/ovs-if-br-ex.nmconnection Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + file=ovs-if-br-ex.nmconnection Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-br-ex.nmconnection ']' Feb 
23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' '!' -f /run/NetworkManager/system-connections/ovs-if-br-ex.nmconnection ']' Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + echo 'Copying configuration ovs-if-br-ex.nmconnection' Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: Copying configuration ovs-if-br-ex.nmconnection Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + cp /etc/NetworkManager/system-connections/ovs-if-br-ex.nmconnection /run/NetworkManager/system-connections/ovs-if-br-ex.nmconnection Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + for src in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[2080]: ++ dirname /etc/NetworkManager/system-connections/ovs-port-br-ex Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + src_path=/etc/NetworkManager/system-connections Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[2082]: ++ basename /etc/NetworkManager/system-connections/ovs-port-br-ex Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + file=ovs-port-br-ex Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-br-ex ']' Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + echo 'Skipping ovs-port-br-ex since it does not exist at source' Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: Skipping ovs-port-br-ex since it does not exist at source Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + for src in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[2084]: ++ dirname /etc/NetworkManager/system-connections/ovs-port-br-ex.nmconnection Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + src_path=/etc/NetworkManager/system-connections Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[2086]: ++ basename /etc/NetworkManager/system-connections/ovs-port-br-ex.nmconnection Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + file=ovs-port-br-ex.nmconnection Feb 23 17:50:17 
ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-br-ex.nmconnection ']' Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' '!' -f /run/NetworkManager/system-connections/ovs-port-br-ex.nmconnection ']' Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + echo 'Copying configuration ovs-port-br-ex.nmconnection' Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: Copying configuration ovs-port-br-ex.nmconnection Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + cp /etc/NetworkManager/system-connections/ovs-port-br-ex.nmconnection /run/NetworkManager/system-connections/ovs-port-br-ex.nmconnection Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + for src in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[2090]: ++ dirname /etc/NetworkManager/system-connections/ovs-if-phys0 Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + src_path=/etc/NetworkManager/system-connections Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[2091]: ++ basename /etc/NetworkManager/system-connections/ovs-if-phys0 Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + file=ovs-if-phys0 Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-phys0 ']' Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + echo 'Skipping ovs-if-phys0 since it does not exist at source' Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: Skipping ovs-if-phys0 since it does not exist at source Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + for src in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[2092]: ++ dirname /etc/NetworkManager/system-connections/ovs-if-phys0.nmconnection Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + src_path=/etc/NetworkManager/system-connections Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[2093]: ++ basename 
/etc/NetworkManager/system-connections/ovs-if-phys0.nmconnection Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + file=ovs-if-phys0.nmconnection Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-phys0.nmconnection ']' Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' '!' -f /run/NetworkManager/system-connections/ovs-if-phys0.nmconnection ']' Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + echo 'Copying configuration ovs-if-phys0.nmconnection' Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: Copying configuration ovs-if-phys0.nmconnection Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + cp /etc/NetworkManager/system-connections/ovs-if-phys0.nmconnection /run/NetworkManager/system-connections/ovs-if-phys0.nmconnection Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + for src in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[2095]: ++ dirname /etc/NetworkManager/system-connections/ovs-port-phys0 Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + src_path=/etc/NetworkManager/system-connections Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[2098]: ++ basename /etc/NetworkManager/system-connections/ovs-port-phys0 Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + file=ovs-port-phys0 Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-phys0 ']' Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + echo 'Skipping ovs-port-phys0 since it does not exist at source' Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: Skipping ovs-port-phys0 since it does not exist at source Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + for src in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[2100]: ++ dirname /etc/NetworkManager/system-connections/ovs-port-phys0.nmconnection Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + 
src_path=/etc/NetworkManager/system-connections Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[2101]: ++ basename /etc/NetworkManager/system-connections/ovs-port-phys0.nmconnection Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + file=ovs-port-phys0.nmconnection Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-phys0.nmconnection ']' Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' '!' -f /run/NetworkManager/system-connections/ovs-port-phys0.nmconnection ']' Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + echo 'Copying configuration ovs-port-phys0.nmconnection' Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: Copying configuration ovs-port-phys0.nmconnection Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + cp /etc/NetworkManager/system-connections/ovs-port-phys0.nmconnection /run/NetworkManager/system-connections/ovs-port-phys0.nmconnection Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + rm_nm_conn_files Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/br-ex ']' Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/br-ex.nmconnection ']' Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + rm -f /etc/NetworkManager/system-connections/br-ex.nmconnection Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + echo 'Removed nmconnection file /etc/NetworkManager/system-connections/br-ex.nmconnection' Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: Removed nmconnection file /etc/NetworkManager/system-connections/br-ex.nmconnection Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + nm_config_changed=1 Feb 23 17:50:17 ip-10-0-136-68 
configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-br-ex ']' Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-br-ex.nmconnection ']' Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + rm -f /etc/NetworkManager/system-connections/ovs-if-br-ex.nmconnection Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + echo 'Removed nmconnection file /etc/NetworkManager/system-connections/ovs-if-br-ex.nmconnection' Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: Removed nmconnection file /etc/NetworkManager/system-connections/ovs-if-br-ex.nmconnection Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + nm_config_changed=1 Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-br-ex ']' Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-br-ex.nmconnection ']' Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + rm -f /etc/NetworkManager/system-connections/ovs-port-br-ex.nmconnection Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + echo 'Removed nmconnection file /etc/NetworkManager/system-connections/ovs-port-br-ex.nmconnection' Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: Removed nmconnection file /etc/NetworkManager/system-connections/ovs-port-br-ex.nmconnection Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + nm_config_changed=1 Feb 23 17:50:17 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 
17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-phys0 ']' Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-phys0.nmconnection ']' Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + rm -f /etc/NetworkManager/system-connections/ovs-if-phys0.nmconnection Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + echo 'Removed nmconnection file /etc/NetworkManager/system-connections/ovs-if-phys0.nmconnection' Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: Removed nmconnection file /etc/NetworkManager/system-connections/ovs-if-phys0.nmconnection Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + nm_config_changed=1 Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-phys0 ']' Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}" Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-phys0.nmconnection ']' Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + rm -f /etc/NetworkManager/system-connections/ovs-port-phys0.nmconnection Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + echo 'Removed nmconnection file /etc/NetworkManager/system-connections/ovs-port-phys0.nmconnection' Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: Removed nmconnection file /etc/NetworkManager/system-connections/ovs-port-phys0.nmconnection Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + nm_config_changed=1 Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + update_nm_conn_conf_files br-ex1 phys1 Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + update_nm_conn_files_base 
/etc/NetworkManager/system-connections br-ex1 phys1 Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + base_path=/etc/NetworkManager/system-connections Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + bridge_name=br-ex1 Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + port_name=phys1 Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + ovs_port=ovs-port-br-ex1 Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + ovs_interface=ovs-if-br-ex1 Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + default_port_name=ovs-port-phys1 Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + bridge_interface_name=ovs-if-phys1 Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + MANAGED_NM_CONN_FILES=($(echo "${base_path}"/{"$bridge_name","$ovs_interface","$ovs_port","$bridge_interface_name","$default_port_name"}{,.nmconnection})) Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2108]: ++ echo /etc/NetworkManager/system-connections/br-ex1 /etc/NetworkManager/system-connections/br-ex1.nmconnection /etc/NetworkManager/system-connections/ovs-if-br-ex1 /etc/NetworkManager/system-connections/ovs-if-br-ex1.nmconnection /etc/NetworkManager/system-connections/ovs-port-br-ex1 /etc/NetworkManager/system-connections/ovs-port-br-ex1.nmconnection /etc/NetworkManager/system-connections/ovs-if-phys1 /etc/NetworkManager/system-connections/ovs-if-phys1.nmconnection /etc/NetworkManager/system-connections/ovs-port-phys1 /etc/NetworkManager/system-connections/ovs-port-phys1.nmconnection Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + shopt -s nullglob Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + MANAGED_NM_CONN_FILES+=(${base_path}/*${MANAGED_NM_CONN_SUFFIX}.nmconnection ${base_path}/*${MANAGED_NM_CONN_SUFFIX}) Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + shopt -u nullglob Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + copy_nm_conn_files /run/NetworkManager/system-connections Feb 23 17:50:18 ip-10-0-136-68 
configure-ovs.sh[1307]: + local dst_path=/run/NetworkManager/system-connections
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + for src in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2109]: ++ dirname /etc/NetworkManager/system-connections/br-ex1
Feb 23 17:50:18 ip-10-0-136-68 chronyd[1040]: Selected source 169.254.169.123
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + src_path=/etc/NetworkManager/system-connections
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2110]: ++ basename /etc/NetworkManager/system-connections/br-ex1
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + file=br-ex1
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/br-ex1 ']'
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + echo 'Skipping br-ex1 since it does not exist at source'
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: Skipping br-ex1 since it does not exist at source
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + for src in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2111]: ++ dirname /etc/NetworkManager/system-connections/br-ex1.nmconnection
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + src_path=/etc/NetworkManager/system-connections
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2112]: ++ basename /etc/NetworkManager/system-connections/br-ex1.nmconnection
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + file=br-ex1.nmconnection
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/br-ex1.nmconnection ']'
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + echo 'Skipping br-ex1.nmconnection since it does not exist at source'
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: Skipping br-ex1.nmconnection since it does not exist at source
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + for src in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2113]: ++ dirname /etc/NetworkManager/system-connections/ovs-if-br-ex1
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + src_path=/etc/NetworkManager/system-connections
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2114]: ++ basename /etc/NetworkManager/system-connections/ovs-if-br-ex1
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + file=ovs-if-br-ex1
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-br-ex1 ']'
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + echo 'Skipping ovs-if-br-ex1 since it does not exist at source'
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: Skipping ovs-if-br-ex1 since it does not exist at source
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + for src in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2115]: ++ dirname /etc/NetworkManager/system-connections/ovs-if-br-ex1.nmconnection
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + src_path=/etc/NetworkManager/system-connections
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2116]: ++ basename /etc/NetworkManager/system-connections/ovs-if-br-ex1.nmconnection
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + file=ovs-if-br-ex1.nmconnection
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-br-ex1.nmconnection ']'
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + echo 'Skipping ovs-if-br-ex1.nmconnection since it does not exist at source'
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: Skipping ovs-if-br-ex1.nmconnection since it does not exist at source
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + for src in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2117]: ++ dirname /etc/NetworkManager/system-connections/ovs-port-br-ex1
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + src_path=/etc/NetworkManager/system-connections
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2118]: ++ basename /etc/NetworkManager/system-connections/ovs-port-br-ex1
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + file=ovs-port-br-ex1
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-br-ex1 ']'
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + echo 'Skipping ovs-port-br-ex1 since it does not exist at source'
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: Skipping ovs-port-br-ex1 since it does not exist at source
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + for src in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2119]: ++ dirname /etc/NetworkManager/system-connections/ovs-port-br-ex1.nmconnection
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + src_path=/etc/NetworkManager/system-connections
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2120]: ++ basename /etc/NetworkManager/system-connections/ovs-port-br-ex1.nmconnection
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + file=ovs-port-br-ex1.nmconnection
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-br-ex1.nmconnection ']'
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + echo 'Skipping ovs-port-br-ex1.nmconnection since it does not exist at source'
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: Skipping ovs-port-br-ex1.nmconnection since it does not exist at source
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + for src in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2121]: ++ dirname /etc/NetworkManager/system-connections/ovs-if-phys1
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + src_path=/etc/NetworkManager/system-connections
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2122]: ++ basename /etc/NetworkManager/system-connections/ovs-if-phys1
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + file=ovs-if-phys1
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-phys1 ']'
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + echo 'Skipping ovs-if-phys1 since it does not exist at source'
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: Skipping ovs-if-phys1 since it does not exist at source
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + for src in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2123]: ++ dirname /etc/NetworkManager/system-connections/ovs-if-phys1.nmconnection
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + src_path=/etc/NetworkManager/system-connections
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2124]: ++ basename /etc/NetworkManager/system-connections/ovs-if-phys1.nmconnection
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + file=ovs-if-phys1.nmconnection
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-phys1.nmconnection ']'
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + echo 'Skipping ovs-if-phys1.nmconnection since it does not exist at source'
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: Skipping ovs-if-phys1.nmconnection since it does not exist at source
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + for src in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2125]: ++ dirname /etc/NetworkManager/system-connections/ovs-port-phys1
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + src_path=/etc/NetworkManager/system-connections
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2126]: ++ basename /etc/NetworkManager/system-connections/ovs-port-phys1
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + file=ovs-port-phys1
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-phys1 ']'
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + echo 'Skipping ovs-port-phys1 since it does not exist at source'
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: Skipping ovs-port-phys1 since it does not exist at source
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + for src in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2127]: ++ dirname /etc/NetworkManager/system-connections/ovs-port-phys1.nmconnection
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + src_path=/etc/NetworkManager/system-connections
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2128]: ++ basename /etc/NetworkManager/system-connections/ovs-port-phys1.nmconnection
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + file=ovs-port-phys1.nmconnection
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-phys1.nmconnection ']'
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + echo 'Skipping ovs-port-phys1.nmconnection since it does not exist at source'
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: Skipping ovs-port-phys1.nmconnection since it does not exist at source
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + rm_nm_conn_files
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/br-ex1 ']'
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/br-ex1.nmconnection ']'
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-br-ex1 ']'
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-br-ex1.nmconnection ']'
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-br-ex1 ']'
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-br-ex1.nmconnection ']'
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-phys1 ']'
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-if-phys1.nmconnection ']'
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-phys1 ']'
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + for file in "${MANAGED_NM_CONN_FILES[@]}"
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' -f /etc/NetworkManager/system-connections/ovs-port-phys1.nmconnection ']'
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + nmcli connection reload
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + handle_exit
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + e=0
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + '[' 0 -eq 0 ']'
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + print_state
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + echo 'Current device, connection, interface and routing state:'
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: Current device, connection, interface and routing state:
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2133]: + nmcli -g all device
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2134]: + grep -v unmanaged
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2134]: br-ex:ovs-interface:connected:full:full:/org/freedesktop/NetworkManager/Devices/23:ovs-if-br-ex:ed5e11f6-e938-4c92-9d73-c35d5035e9f5:/org/freedesktop/NetworkManager/ActiveConnection/8
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2134]: lo:loopback:connected (externally):limited:limited:/org/freedesktop/NetworkManager/Devices/1:lo:749f9974-6e7f-442a-ac13-546c37530197:/org/freedesktop/NetworkManager/ActiveConnection/1
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2134]: ens5:ethernet:connected:limited:limited:/org/freedesktop/NetworkManager/Devices/4:ovs-if-phys0:7d3a1e16-f5ff-43fb-83ee-ec795d0a7e41:/org/freedesktop/NetworkManager/ActiveConnection/7
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2134]: br-ex:ovs-bridge:connected:limited:limited:/org/freedesktop/NetworkManager/Devices/20:br-ex:13489ac6-b2bc-4cc7-8035-a6c6f3ece4df:/org/freedesktop/NetworkManager/ActiveConnection/3
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2134]: br-ex:ovs-port:connected:limited:limited:/org/freedesktop/NetworkManager/Devices/22:ovs-port-br-ex:89e5f9b7-0e1b-4e0e-b6ba-2b58e2193d26:/org/freedesktop/NetworkManager/ActiveConnection/4
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2134]: ens5:ovs-port:connected:limited:limited:/org/freedesktop/NetworkManager/Devices/21:ovs-port-phys0:10b83248-bb63-4adb-953a-b09ec4a7297d:/org/freedesktop/NetworkManager/ActiveConnection/5
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + nmcli -g all connection
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2138]: ovs-if-br-ex:ed5e11f6-e938-4c92-9d73-c35d5035e9f5:ovs-interface:1677174615:Thu Feb 23 17\:50\:15 2023:yes:0:no:/org/freedesktop/NetworkManager/Settings/7:yes:br-ex:activated:/org/freedesktop/NetworkManager/ActiveConnection/8:ovs-port:/run/NetworkManager/system-connections/ovs-if-br-ex.nmconnection
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2138]: lo:749f9974-6e7f-442a-ac13-546c37530197:loopback:1677174612:Thu Feb 23 17\:50\:12 2023:no:0:no:/org/freedesktop/NetworkManager/Settings/1:yes:lo:activated:/org/freedesktop/NetworkManager/ActiveConnection/1::/run/NetworkManager/system-connections/lo.nmconnection
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2138]: br-ex:13489ac6-b2bc-4cc7-8035-a6c6f3ece4df:ovs-bridge:1677174614:Thu Feb 23 17\:50\:14 2023:yes:0:no:/org/freedesktop/NetworkManager/Settings/3:yes:br-ex:activated:/org/freedesktop/NetworkManager/ActiveConnection/3::/run/NetworkManager/system-connections/br-ex.nmconnection
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2138]: ovs-if-phys0:7d3a1e16-f5ff-43fb-83ee-ec795d0a7e41:802-3-ethernet:1677174614:Thu Feb 23 17\:50\:14 2023:yes:100:no:/org/freedesktop/NetworkManager/Settings/6:yes:ens5:activated:/org/freedesktop/NetworkManager/ActiveConnection/7:ovs-port:/run/NetworkManager/system-connections/ovs-if-phys0.nmconnection
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2138]: ovs-port-br-ex:89e5f9b7-0e1b-4e0e-b6ba-2b58e2193d26:ovs-port:1677174614:Thu Feb 23 17\:50\:14 2023:no:0:no:/org/freedesktop/NetworkManager/Settings/5:yes:br-ex:activated:/org/freedesktop/NetworkManager/ActiveConnection/4:ovs-bridge:/run/NetworkManager/system-connections/ovs-port-br-ex.nmconnection
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2138]: ovs-port-phys0:10b83248-bb63-4adb-953a-b09ec4a7297d:ovs-port:1677174614:Thu Feb 23 17\:50\:14 2023:no:0:no:/org/freedesktop/NetworkManager/Settings/4:yes:ens5:activated:/org/freedesktop/NetworkManager/ActiveConnection/5:ovs-bridge:/run/NetworkManager/system-connections/ovs-port-phys0.nmconnection
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2138]: Wired connection 1:eb99b8bd-8e1f-3f41-845b-962703e428f7:802-3-ethernet:1677174614:Thu Feb 23 17\:50\:14 2023:yes:-999:no:/org/freedesktop/NetworkManager/Settings/2:no:::::/run/NetworkManager/system-connections/Wired connection 1.nmconnection
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + ip -d address show
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2142]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2142]: link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0 minmtu 0 maxmtu 0 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 tso_max_size 524280 tso_max_segs 65535 gro_max_size 65536
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2142]: inet 127.0.0.1/8 scope host lo
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2142]: valid_lft forever preferred_lft forever
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2142]: inet6 ::1/128 scope host
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2142]: valid_lft forever preferred_lft forever
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2142]: 2: ens5: mtu 9001 qdisc mq master ovs-system state UP group default qlen 1000
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2142]: link/ether 02:ea:92:f9:d3:f3 brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 128 maxmtu 9216
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2142]: openvswitch_slave numtxqueues 4 numrxqueues 4 gso_max_size 65536 gso_max_segs 65535 tso_max_size 65536 tso_max_segs 65535 gro_max_size 65536 parentbus pci parentdev 0000:00:05.0
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2142]: altname enp0s5
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2142]: 3: ovs-system: mtu 1500 qdisc noop state DOWN group default qlen 1000
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2142]: link/ether ee:b5:a6:bc:8d:2c brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65535
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2142]: openvswitch numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 tso_max_size 65536 tso_max_segs 65535 gro_max_size 65536
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2142]: 4: ovn-k8s-mp0: mtu 8901 qdisc noop state DOWN group default qlen 1000
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2142]: link/ether 2e:5d:2b:01:25:48 brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65535
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2142]: openvswitch numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 tso_max_size 65536 tso_max_segs 65535 gro_max_size 65536
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2142]: 5: genev_sys_6081: mtu 65000 qdisc noqueue master ovs-system state UNKNOWN group default qlen 1000
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2142]: link/ether 3e:ed:17:7f:65:44 brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65465
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2142]: geneve external id 0 ttl auto dstport 6081 udp6zerocsumrx
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2142]: openvswitch_slave numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 tso_max_size 65536 tso_max_segs 65535 gro_max_size 65536
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2142]: inet6 fe80::3ced:17ff:fe7f:6544/64 scope link
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2142]: valid_lft forever preferred_lft forever
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2142]: 6: br-int: mtu 8901 qdisc noop state DOWN group default qlen 1000
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2142]: link/ether 1e:70:f2:fd:64:95 brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65535
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2142]: openvswitch numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 tso_max_size 65536 tso_max_segs 65535 gro_max_size 65536
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2142]: 7: br-ex: mtu 9001 qdisc noqueue state UNKNOWN group default qlen 1000
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2142]: link/ether 02:ea:92:f9:d3:f3 brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65535
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2142]: openvswitch numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 tso_max_size 65536 tso_max_segs 65535 gro_max_size 65536
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2142]: inet 10.0.136.68/19 brd 10.0.159.255 scope global dynamic noprefixroute br-ex
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2142]: valid_lft 3600sec preferred_lft 3600sec
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2142]: inet6 fe80::8cc8:b4d5:3c14:7fd7/64 scope link tentative noprefixroute
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2142]: valid_lft forever preferred_lft forever
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + ip route show
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2143]: default via 10.0.128.1 dev br-ex proto dhcp src 10.0.136.68 metric 48
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2143]: 10.0.128.0/19 dev br-ex proto kernel scope link src 10.0.136.68 metric 48
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + ip -6 route show
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2144]: ::1 dev lo proto kernel metric 256 pref medium
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2144]: fe80::/64 dev genev_sys_6081 proto kernel metric 256 pref medium
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[2144]: fe80::/64 dev br-ex proto kernel metric 1024 pref medium
Feb 23 17:50:18 ip-10-0-136-68 configure-ovs.sh[1307]: + exit 0
Feb 23 17:50:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:18.932751658Z" level=info msg="Starting seccomp notifier watcher"
Feb 23 17:50:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:18.932824952Z" level=info msg="Create NRI interface"
Feb 23 17:50:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:18.932833787Z" level=info msg="NRI interface is disabled in the configuration."
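The xtrace above (PID 1307) is the tail of configure-ovs.sh's pass over `MANAGED_NM_CONN_FILES`: for each managed profile it splits the path with `dirname`/`basename`, copies the file to `/run/NetworkManager/system-connections` if it exists at the source (here every `*1` profile is skipped because only a single br-ex bridge was configured), removes stale source copies via `rm_nm_conn_files`, and finally runs `nmcli connection reload`. A minimal re-creation of that loop logic, inferred from the trace rather than taken from the real script, with temp directories standing in for the `/etc` and `/run` paths and the `nmcli` call stubbed out:

```shell
#!/usr/bin/env bash
# Sketch of the copy/skip and removal loops visible in the configure-ovs.sh
# trace. src_dir/dst_dir are stand-ins for /etc/NetworkManager/system-connections
# and /run/NetworkManager/system-connections; only br-ex.nmconnection "exists".
set -euo pipefail

src_dir=$(mktemp -d)
dst_dir=$(mktemp -d)
touch "$src_dir/br-ex.nmconnection"

MANAGED_NM_CONN_FILES=("$src_dir/br-ex.nmconnection" "$src_dir/br-ex1.nmconnection")

# Copy each managed profile that exists at the source; log a skip otherwise.
for src in "${MANAGED_NM_CONN_FILES[@]}"; do
  src_path=$(dirname "$src")
  file=$(basename "$src")
  if [ -f "$src_path/$file" ]; then
    cp "$src_path/$file" "$dst_dir/$file"
  else
    echo "Skipping $file since it does not exist at source"
  fi
done

# rm_nm_conn_files equivalent: drop source copies so only the runtime
# copies under $dst_dir remain after the reload.
for file in "${MANAGED_NM_CONN_FILES[@]}"; do
  if [ -f "$file" ]; then
    rm "$file"
  fi
done

# nmcli connection reload   # the real script reloads NetworkManager here
ls "$dst_dir"               # lists br-ex.nmconnection
```

Run against the stand-in directories, this prints the same "Skipping … since it does not exist at source" message format seen in the journal.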
Feb 23 17:50:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:18.932848047Z" level=info msg="Serving metrics on :9537 via HTTP"
Feb 23 17:50:18 ip-10-0-136-68 systemd[1]: Started Container Runtime Interface for OCI (CRI-O).
Feb 23 17:50:18 ip-10-0-136-68 systemd[1]: Starting Kubernetes Kubelet...
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote'
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: Flag --cloud-provider has been deprecated, will be removed in 1.25 or later, in favor of removing cloud provider code from Kubelet.
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: Flag --provider-id has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
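Just before its `+ exit 0` earlier in this log, configure-ovs.sh dumped the node's network state through a `print_state` helper invoked from `handle_exit`. The commands it ran are visible verbatim in the trace (PIDs 2133-2144); reassembled as shell, as a sketch rather than the actual function from configure-ovs.sh (only the success branch, `e=0`, is visible in this trace):

```shell
# Reassembled from the trace: dump managed devices, connection profiles,
# addresses, and routes so the final network state lands in the journal.
print_state() {
  echo 'Current device, connection, interface and routing state:'
  nmcli -g all device | grep -v unmanaged
  nmcli -g all connection
  ip -d address show
  ip route show
  ip -6 route show
}

# handle_exit wrapper as seen in the trace; the failure branch is not
# visible in this log, so only the e=0 path is reproduced here.
handle_exit() {
  local e=$?
  if [ "$e" -eq 0 ]; then
    print_state
  fi
  exit "$e"
}
```

The `-g all` (terse, all fields) output is what produces the colon-separated device and connection rows quoted above.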
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.672904 2199 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote'
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: Flag --cloud-provider has been deprecated, will be removed in 1.25 or later, in favor of removing cloud provider code from Kubelet.
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: Flag --provider-id has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
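Several of the warnings above point at the same remedy: move the flag's value into the KubeletConfiguration file passed via `--config` (`/etc/kubernetes/kubelet.conf` on this node, per the FLAG dump that follows). A hedged sketch of what such a fragment looks like, written to a scratch path; the `systemReserved` values and the provider ID below are made-up examples, not this node's actual settings:

```shell
# Write an example KubeletConfiguration fragment showing where three of the
# deprecated flags' values would live. Field names (systemReserved,
# volumePluginDir, providerID) are real v1beta1 KubeletConfiguration fields;
# the values are illustrative only.
cat > /tmp/kubelet-config-fragment.yaml <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
systemReserved:                # replaces --system-reserved (example values)
  cpu: 500m
  memory: 1Gi
volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec   # replaces --volume-plugin-dir
providerID: aws:///us-west-2a/i-0123456789abcdef0              # replaces --provider-id (fake instance ID)
EOF
```

On an OpenShift node this file is rendered by the machine-config operator rather than edited by hand, so the fragment is illustrative of the mechanism the warnings reference, not a recommended change.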
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: W0223 17:50:19.676512 2199 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676595 2199 flags.go:64] FLAG: --address="0.0.0.0"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676612 2199 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676623 2199 flags.go:64] FLAG: --anonymous-auth="true"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676630 2199 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676636 2199 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676641 2199 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676647 2199 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676653 2199 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676656 2199 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676659 2199 flags.go:64] FLAG: --azure-container-registry-config=""
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676663 2199 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676666 2199 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676669 2199 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676673 2199 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676675 2199 flags.go:64] FLAG: --cgroup-root=""
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676678 2199 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676681 2199 flags.go:64] FLAG: --client-ca-file=""
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676684 2199 flags.go:64] FLAG: --cloud-config=""
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676686 2199 flags.go:64] FLAG: --cloud-provider="aws"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676689 2199 flags.go:64] FLAG: --cluster-dns="[]"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676693 2199 flags.go:64] FLAG: --cluster-domain=""
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676696 2199 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676699 2199 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676703 2199 flags.go:64] FLAG: --container-log-max-files="5"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676708 2199 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676713 2199 flags.go:64] FLAG: --container-runtime="remote"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676716 2199 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676719 2199 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676723 2199 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676726 2199 flags.go:64] FLAG: --contention-profiling="false"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676729 2199 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676732 2199 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676735 2199 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676737 2199 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676741 2199 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676745 2199 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676748 2199 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676751 2199 flags.go:64] FLAG: --enable-load-reader="false"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676755 2199 flags.go:64] FLAG: --enable-server="true"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676758 2199 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676763 2199 flags.go:64] FLAG: --event-burst="10"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676766 2199 flags.go:64] FLAG: --event-qps="5"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676769 2199 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676772 2199 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676774 2199 flags.go:64] FLAG: --eviction-hard=""
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676781 2199 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676785 2199 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676788 2199 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676793 2199 flags.go:64] FLAG: --eviction-soft=""
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676796 2199 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676799 2199 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676802 2199 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676805 2199 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676808 2199 flags.go:64] FLAG: --fail-swap-on="true"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676811 2199 flags.go:64] FLAG: --feature-gates=""
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676815 2199 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676818 2199 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676821 2199 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676824 2199 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676827 2199 flags.go:64] FLAG: --healthz-port="10248"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676831 2199 flags.go:64] FLAG: --help="false"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676834 2199 flags.go:64] FLAG: --hostname-override="ip-10-0-136-68.us-west-2.compute.internal"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676838 2199 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676841 2199 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676846 2199 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676849 2199 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676851 2199 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676854 2199 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676857 2199 flags.go:64] FLAG: --image-service-endpoint=""
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676859 2199 flags.go:64] FLAG: --iptables-drop-bit="15"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676862 2199 flags.go:64] FLAG: --iptables-masquerade-bit="14"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676865 2199 flags.go:64] FLAG: --keep-terminated-pod-volumes="false"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676868 2199 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676871 2199 flags.go:64] FLAG: --kube-api-burst="10"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676874 2199 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676878 2199 flags.go:64] FLAG: --kube-api-qps="5"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676880 2199 flags.go:64] FLAG: --kube-reserved=""
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676883 2199 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676886 2199 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676889 2199 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676892 2199 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676895 2199 flags.go:64] FLAG: --lock-file=""
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676897 2199 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676900 2199 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676905 2199 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676909 2199 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676912 2199 flags.go:64] FLAG: --logging-format="text"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676915 2199 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676918 2199 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676920 2199 flags.go:64] FLAG: --manifest-url=""
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676923 2199 flags.go:64] FLAG: --manifest-url-header=""
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676928 2199 flags.go:64] FLAG: --master-service-namespace="default"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676931 2199 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676933 2199 flags.go:64] FLAG: --max-open-files="1000000"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676937 2199 flags.go:64] FLAG: --max-pods="110"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676939 2199 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676942 2199 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676945 2199 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676947 2199 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676952 2199 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676954 2199 flags.go:64] FLAG: --node-ip=""
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676957 2199 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhcos"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676967 2199 flags.go:64] FLAG: --node-status-max-images="50"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676969 2199 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676973 2199 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676976 2199 flags.go:64] FLAG: --pod-cidr=""
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676978 2199 flags.go:64] FLAG: --pod-infra-container-image="registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676984 2199 flags.go:64] FLAG: --pod-manifest-path=""
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676987 2199 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676990 2199 flags.go:64] FLAG: --pods-per-core="0"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676992 2199 flags.go:64] FLAG: --port="10250"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.676998 2199 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.677001 2199 flags.go:64] FLAG: --provider-id="aws:///us-west-2a/i-09b04ed55ff55b4f7"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.677004 2199 flags.go:64] FLAG: --qos-reserved=""
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.677006 2199 flags.go:64] FLAG: --read-only-port="10255"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.677010 2199 flags.go:64] FLAG: --register-node="true"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.677012 2199 flags.go:64] FLAG: --register-schedulable="true"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.677015 2199 flags.go:64] FLAG: --register-with-taints=""
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.677018 2199 flags.go:64] FLAG: --registry-burst="10"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.677021 2199 flags.go:64] FLAG: --registry-qps="5"
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 
17:50:19.677024 2199 flags.go:64] FLAG: --reserved-cpus="" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.677027 2199 flags.go:64] FLAG: --reserved-memory="" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.677031 2199 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.677034 2199 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.677037 2199 flags.go:64] FLAG: --rotate-certificates="false" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.677040 2199 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.677043 2199 flags.go:64] FLAG: --runonce="false" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.677047 2199 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.677050 2199 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.677053 2199 flags.go:64] FLAG: --seccomp-default="false" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.677057 2199 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.677059 2199 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.677062 2199 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.677065 2199 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.677068 2199 flags.go:64] FLAG: --storage-driver-password="root" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.677071 2199 flags.go:64] FLAG: 
--storage-driver-secure="false" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.677074 2199 flags.go:64] FLAG: --storage-driver-table="stats" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.677076 2199 flags.go:64] FLAG: --storage-driver-user="root" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.677079 2199 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.677081 2199 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.677084 2199 flags.go:64] FLAG: --system-cgroups="" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.677089 2199 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.677095 2199 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.677099 2199 flags.go:64] FLAG: --tls-cert-file="" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.677102 2199 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.677108 2199 flags.go:64] FLAG: --tls-min-version="" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.677110 2199 flags.go:64] FLAG: --tls-private-key-file="" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.677113 2199 flags.go:64] FLAG: --topology-manager-policy="none" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.677116 2199 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.677118 2199 flags.go:64] FLAG: --topology-manager-scope="container" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.677121 2199 flags.go:64] FLAG: --v="2" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: 
I0223 17:50:19.677125 2199 flags.go:64] FLAG: --version="false" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.677129 2199 flags.go:64] FLAG: --vmodule="" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.677133 2199 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.677136 2199 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: W0223 17:50:19.677218 2199 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.677223 2199 feature_gate.go:250] feature gates: &{map[APIPriorityAndFairness:true DownwardAPIHugePages:true RetroactiveDefaultStorageClass:false RotateKubeletServerCertificate:true]} Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.682470 2199 server.go:412] "Kubelet version" kubeletVersion="v1.26.0+919a59b" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.682487 2199 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: W0223 17:50:19.682520 2199 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.682527 2199 feature_gate.go:250] feature gates: &{map[APIPriorityAndFairness:true DownwardAPIHugePages:true RetroactiveDefaultStorageClass:false RotateKubeletServerCertificate:true]} Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: W0223 17:50:19.686229 2199 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.686439 2199 feature_gate.go:250] feature gates: &{map[APIPriorityAndFairness:true DownwardAPIHugePages:true RetroactiveDefaultStorageClass:false RotateKubeletServerCertificate:true]} Feb 23 
17:50:19 ip-10-0-136-68 kubenswrapper[2199]: W0223 17:50:19.688080 2199 plugins.go:131] WARNING: aws built-in cloud provider is now deprecated. The AWS provider is deprecated and will be removed in a future release. Please use https://github.com/kubernetes/cloud-provider-aws Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.693766 2199 aws.go:1226] Get AWS region from metadata client Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.693870 2199 aws.go:1269] Zone not specified in configuration file; querying AWS metadata service Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.702495 2199 aws.go:1309] Building AWS cloudprovider Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.850176 2199 tags.go:80] AWS cloud filtering on ClusterID: mnguyen-rt-wnslw Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.850205 2199 server.go:554] "Successfully initialized cloud provider" cloudProvider="aws" cloudConfigFile="" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.850222 2199 server.go:1004] "Cloud provider determined current node" nodeName="ip-10-0-136-68.us-west-2.compute.internal" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.850229 2199 server.go:836] "Client rotation is on, will bootstrap in background" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.854579 2199 bootstrap.go:84] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.855551 2199 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.855775 2199 server.go:893] "Starting client certificate rotation" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.856725 2199 certificate_manager.go:270] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.856861 2199 certificate_manager.go:270] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2023-02-24 15:23:10 +0000 UTC, rotation deadline is 2023-02-24 09:51:11.076379696 +0000 UTC Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.856899 2199 certificate_manager.go:270] kubernetes.io/kube-apiserver-client-kubelet: Waiting 16h0m51.219483101s for next certificate rotation Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.878652 2199 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.878826 2199 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.885693 2199 manager.go:163] cAdvisor running in container: "/system.slice/kubelet.service" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.889145 2199 fs.go:133] Filesystem UUIDs: map[54e5ab65-ff73-4a26-8c44-2a9765abf45f:/dev/nvme0n1p3 A94B-67F7:/dev/nvme0n1p2 c83680a9-dcc4-4413-a0a5-4681b35c650a:/dev/nvme0n1p4] Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.889164 2199 fs.go:134] Filesystem partitions: map[/dev/nvme0n1p3:{mountpoint:/boot major:259 minor:3 fsType:ext4 blockSize:0} /dev/nvme0n1p4:{mountpoint:/var major:259 minor:4 fsType:xfs blockSize:0} /dev/shm:{mountpoint:/dev/shm major:0 minor:23 fsType:tmpfs blockSize:0} /run:{mountpoint:/run major:0 minor:25 fsType:tmpfs blockSize:0} 
/sys/fs/cgroup:{mountpoint:/sys/fs/cgroup major:0 minor:26 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:44 fsType:tmpfs blockSize:0}] Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.889193 2199 nvidia.go:55] NVIDIA GPU metrics disabled Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.904942 2199 manager.go:212] Machine: {Timestamp:2023-02-23 17:50:19.904724772 +0000 UTC m=+0.795343801 CPUVendorID:GenuineIntel NumCores:4 NumPhysicalCores:2 NumSockets:1 CpuFrequency:2899998 MemoryCapacity:16493998080 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:ec2d456b0a3e28d0eb2f198315e90643 SystemUUID:ec2d456b-0a3e-28d0-eb2f-198315e90643 BootID:7e69ac5f-095f-4a9b-b24c-6e8366d55bca Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:23 Capacity:8246996992 Type:vfs Inodes:2013427 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:25 Capacity:3298799616 Type:vfs Inodes:819200 HasInodes:true} {Device:/sys/fs/cgroup DeviceMajor:0 DeviceMinor:26 Capacity:4194304 Type:vfs Inodes:1024 HasInodes:true} {Device:/dev/nvme0n1p4 DeviceMajor:259 DeviceMinor:4 Capacity:128300593152 Type:vfs Inodes:62651840 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:44 Capacity:8247001088 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/nvme0n1p3 DeviceMajor:259 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[259:0:{Name:nvme0n1 Major:259 Minor:0 Size:128849018880 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:02:ea:92:f9:d3:f3 Speed:0 Mtu:9001} {Name:br-int MacAddress:1e:70:f2:fd:64:95 Speed:0 Mtu:8901} {Name:ens5 MacAddress:02:ea:92:f9:d3:f3 Speed:0 Mtu:9001} {Name:genev_sys_6081 MacAddress:3e:ed:17:7f:65:44 Speed:0 Mtu:65000} {Name:ovn-k8s-mp0 MacAddress:2e:5d:2b:01:25:48 Speed:0 Mtu:8901} {Name:ovs-system MacAddress:ee:b5:a6:bc:8d:2c Speed:0 Mtu:1500}] 
Topology:[{Id:0 Memory:16493998080 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0 2] Caches:[{Id:0 Size:49152 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:0} {Id:1 Threads:[1 3] Caches:[{Id:1 Size:49152 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:1310720 Type:Unified Level:2}] UncoreCaches:[] SocketID:0}] Caches:[{Id:0 Size:56623104 Type:Unified Level:3}] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.905034 2199 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.905190 2199 manager.go:228] Version: {KernelVersion:5.14.0-266.rt14.266.el9.x86_64 ContainerOsVersion:CentOS Stream CoreOS 413.92.202302171914-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.906965 2199 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.907030 2199 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName:/system.slice/crio.service SystemCgroupsName:/system.slice KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[cpu:{i:{value:500 scale:-3} d:{Dec:} s:500m Format:DecimalSI} ephemeral-storage:{i:{value:1073741824 scale:0} d:{Dec:} s:1Gi 
Format:BinarySI} memory:{i:{value:1073741824 scale:0} d:{Dec:} s:1Gi Format:BinarySI}] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:4096 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.907045 2199 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.907054 2199 container_manager_linux.go:308] "Creating device plugin manager" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.908075 2199 manager.go:125] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.908833 2199 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.908935 2199 state_mem.go:36] "Initialized new in-memory state store" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.908997 2199 util_unix.go:103] "Using this endpoint is deprecated, please consider using full URL format" 
endpoint="/var/run/crio/crio.sock" URL="unix:///var/run/crio/crio.sock" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.925706 2199 remote_runtime.go:121] "Validated CRI v1 runtime API" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.925732 2199 util_unix.go:103] "Using this endpoint is deprecated, please consider using full URL format" endpoint="/var/run/crio/crio.sock" URL="unix:///var/run/crio/crio.sock" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.976037 2199 remote_image.go:97] "Validated CRI v1 image API" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.976070 2199 server.go:1004] "Cloud provider determined current node" nodeName="ip-10-0-136-68.us-west-2.compute.internal" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.976080 2199 server.go:1147] "Using root directory" path="/var/lib/kubelet" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.980568 2199 kubelet.go:407] "Attempting to sync node with API server" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.980589 2199 kubelet.go:295] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.980616 2199 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.980627 2199 kubelet.go:306] "Adding apiserver pod source" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.980636 2199 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.983630 2199 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="cri-o" version="1.26.1-4.rhaos4.13.gita78722c.el9" apiVersion="v1" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.987264 2199 certificate_store.go:130] Loading cert/key pair from 
"/var/lib/kubelet/pki/kubelet-server-current.pem". Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.993783 2199 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/azure-disk" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.993799 2199 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/azure-file" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.993806 2199 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/vsphere-volume" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.993819 2199 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.993826 2199 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/rbd" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.993833 2199 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/aws-ebs" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.993839 2199 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/gce-pd" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.994727 2199 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.994740 2199 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.994747 2199 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.994754 2199 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.994762 2199 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.994790 2199 plugins.go:646] 
"Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.994798 2199 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/cephfs" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.994804 2199 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.994811 2199 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.994817 2199 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.994824 2199 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.994831 2199 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.996178 2199 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.996382 2199 server.go:1186] "Started kubelet" Feb 23 17:50:19 ip-10-0-136-68 systemd[1]: Started Kubernetes Kubelet. Feb 23 17:50:19 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:50:19.996993 2199 kubelet.go:1399] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Feb 23 17:50:19 ip-10-0-136-68 systemd[1]: Reached target Multi-User System. 
Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.999103 2199 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:19.999646 2199 server.go:451] "Adding debug handlers to kubelet server" Feb 23 17:50:20 ip-10-0-136-68 systemd[1]: Reached target Graphical Interface. Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.002126 2199 certificate_manager.go:270] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.002152 2199 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.003200 2199 certificate_manager.go:270] kubernetes.io/kubelet-serving: Certificate expiration is 2023-02-24 15:23:10 +0000 UTC, rotation deadline is 2023-02-24 12:48:07.019019923 +0000 UTC Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.003229 2199 certificate_manager.go:270] kubernetes.io/kubelet-serving: Waiting 18h57m47.015794676s for next certificate rotation Feb 23 17:50:20 ip-10-0-136-68 systemd[1]: Afterburn (Check In) was skipped because no trigger condition checks were met. Feb 23 17:50:20 ip-10-0-136-68 systemd[1]: Afterburn (Firstboot Check In) was skipped because of an unmet condition check (ConditionFirstBoot=yes). 
Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.004595 2199 volume_manager.go:291] "The desired_state_of_world populator starts" Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.004770 2199 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.004858 2199 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 23 17:50:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:20.005277808Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3" id=fdfc8b8f-d974-4e84-8a50-05669c343165 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:50:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:20.005688286Z" level=info msg="Image registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3 not found" id=fdfc8b8f-d974-4e84-8a50-05669c343165 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:50:20 ip-10-0-136-68 systemd[1]: Starting Record Runlevel Change in UTMP... Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.010364 2199 factory.go:153] Registering CRI-O factory Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.010376 2199 factory.go:55] Registering systemd factory Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.010456 2199 factory.go:103] Registering Raw factory Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.010934 2199 manager.go:1201] Started watching for new ooms in manager Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.011824 2199 manager.go:302] Starting recovery of all containers Feb 23 17:50:20 ip-10-0-136-68 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. 
Feb 23 17:50:20 ip-10-0-136-68 systemd[1]: Finished Record Runlevel Change in UTMP. Feb 23 17:50:20 ip-10-0-136-68 systemd[1]: Startup finished in 976ms (kernel) + 2.585s (initrd) + 11.353s (userspace) = 14.915s. Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.049090 2199 manager.go:307] Recovery completed Feb 23 17:50:20 ip-10-0-136-68 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.075680 2199 cpu_manager.go:215] "Starting CPU manager" policy="none" Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.075701 2199 cpu_manager.go:216] "Reconciling" reconcilePeriod="10s" Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.075716 2199 state_mem.go:36] "Initialized new in-memory state store" Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.078895 2199 policy_none.go:49] "None policy: Start" Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.079381 2199 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.079402 2199 state_mem.go:35] "Initializing new in-memory state store" Feb 23 17:50:20 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods.slice. Feb 23 17:50:20 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable.slice. 
Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.104610 2199 kubelet_node_status.go:376] "Setting node annotation to enable volume controller attach/detach" Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.104659 2199 kubelet_node_status.go:424] "Adding label from cloud provider" labelKey="beta.kubernetes.io/instance-type" labelValue="m6i.xlarge" Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.104677 2199 kubelet_node_status.go:426] "Adding node label from cloud provider" labelKey="node.kubernetes.io/instance-type" labelValue="m6i.xlarge" Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.104691 2199 kubelet_node_status.go:437] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/zone" labelValue="us-west-2a" Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.104701 2199 kubelet_node_status.go:439] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/zone" labelValue="us-west-2a" Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.104729 2199 kubelet_node_status.go:443] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/region" labelValue="us-west-2" Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.104739 2199 kubelet_node_status.go:445] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/region" labelValue="us-west-2" Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.106350 2199 kubelet_node_status.go:696] "Recording event message for node" node="ip-10-0-136-68.us-west-2.compute.internal" event="NodeHasSufficientMemory" Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.106374 2199 kubelet_node_status.go:696] "Recording event message for node" node="ip-10-0-136-68.us-west-2.compute.internal" event="NodeHasNoDiskPressure" Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.106389 2199 
kubelet_node_status.go:696] "Recording event message for node" node="ip-10-0-136-68.us-west-2.compute.internal" event="NodeHasSufficientPID" Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.106421 2199 kubelet_node_status.go:72] "Attempting to register node" node="ip-10-0-136-68.us-west-2.compute.internal" Feb 23 17:50:20 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-besteffort.slice. Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.112231 2199 manager.go:281] "Starting Device Plugin manager" Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.116931 2199 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.117061 2199 server.go:79] "Starting device plugin registration server" Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.117603 2199 plugin_watcher.go:52] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.117731 2199 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.117744 2199 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.125849 2199 kubelet_node_status.go:110] "Node was previously registered" node="ip-10-0-136-68.us-west-2.compute.internal" Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.125991 2199 kubelet_node_status.go:75] "Successfully registered node" node="ip-10-0-136-68.us-west-2.compute.internal" Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.132884 2199 kubelet_node_status.go:696] "Recording event message for node" node="ip-10-0-136-68.us-west-2.compute.internal" event="NodeHasSufficientMemory" Feb 23 17:50:20 ip-10-0-136-68 
kubenswrapper[2199]: I0223 17:50:20.133004 2199 kubelet_node_status.go:696] "Recording event message for node" node="ip-10-0-136-68.us-west-2.compute.internal" event="NodeHasNoDiskPressure" Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.133076 2199 kubelet_node_status.go:696] "Recording event message for node" node="ip-10-0-136-68.us-west-2.compute.internal" event="NodeHasSufficientPID" Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.133159 2199 kubelet_node_status.go:696] "Recording event message for node" node="ip-10-0-136-68.us-west-2.compute.internal" event="NodeNotReady" Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.133279 2199 setters.go:548] "Node became not ready" node="ip-10-0-136-68.us-west-2.compute.internal" condition={Type:Ready Status:False LastHeartbeatTime:2023-02-23 17:50:20.13313867 +0000 UTC m=+1.023757698 LastTransitionTime:2023-02-23 17:50:20.13313867 +0000 UTC m=+1.023757698 Reason:KubeletNotReady Message:PLEG is not healthy: pleg has yet to be successful} Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.133351 2199 kubelet_node_status.go:696] "Recording event message for node" node="ip-10-0-136-68.us-west-2.compute.internal" event="NodeNotSchedulable" Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.154474 2199 kubelet_node_status.go:696] "Recording event message for node" node="ip-10-0-136-68.us-west-2.compute.internal" event="NodeHasSufficientMemory" Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.154499 2199 kubelet_node_status.go:696] "Recording event message for node" node="ip-10-0-136-68.us-west-2.compute.internal" event="NodeHasNoDiskPressure" Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.154511 2199 kubelet_node_status.go:696] "Recording event message for node" node="ip-10-0-136-68.us-west-2.compute.internal" event="NodeHasSufficientPID" Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 
17:50:20.154528 2199 kubelet_node_status.go:696] "Recording event message for node" node="ip-10-0-136-68.us-west-2.compute.internal" event="NodeNotReady" Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.154551 2199 setters.go:548] "Node became not ready" node="ip-10-0-136-68.us-west-2.compute.internal" condition={Type:Ready Status:False LastHeartbeatTime:2023-02-23 17:50:20.15451713 +0000 UTC m=+1.045136149 LastTransitionTime:2023-02-23 17:50:20.15451713 +0000 UTC m=+1.045136149 Reason:KubeletNotReady Message:PLEG is not healthy: pleg has yet to be successful} Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.187504 2199 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.215930 2199 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.215947 2199 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.215960 2199 kubelet.go:2133] "Starting kubelet main sync loop" Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:50:20.216027 2199 kubelet.go:2157] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.255323 2199 kubelet_node_status.go:696] "Recording event message for node" node="ip-10-0-136-68.us-west-2.compute.internal" event="NodeReady" Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.255410 2199 kubelet_node_status.go:520] "Fast updating node status as it just became ready" Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.316276 2199 kubelet.go:2219] "SyncLoop ADD" source="file" pods="[]" Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.981154 2199 apiserver.go:52] "Watching apiserver" Feb 23 17:50:20 
ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.990409 2199 kubelet.go:2219] "SyncLoop ADD" source="api" pods="[openshift-monitoring/node-exporter-nt8h7 openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7 openshift-machine-config-operator/machine-config-daemon-2fx68 openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j openshift-multus/multus-additional-cni-plugins-nqwsg openshift-ingress-canary/ingress-canary-pjjrk openshift-dns/dns-default-657v4 openshift-cluster-node-tuning-operator/tuned-zzwb5 openshift-dns/node-resolver-hstcm openshift-network-diagnostics/network-check-target-52ltr openshift-multus/multus-4f66c openshift-multus/network-metrics-daemon-bs7jz openshift-ovn-kubernetes/ovnkube-node-gzbrl openshift-image-registry/node-ca-wsg6f]" Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.990470 2199 topology_manager.go:210] "Topology Admit Handler" Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.990548 2199 topology_manager.go:210] "Topology Admit Handler" Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.990587 2199 topology_manager.go:210] "Topology Admit Handler" Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.990634 2199 topology_manager.go:210] "Topology Admit Handler" Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.990686 2199 topology_manager.go:210] "Topology Admit Handler" Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.990730 2199 topology_manager.go:210] "Topology Admit Handler" Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.991036 2199 topology_manager.go:210] "Topology Admit Handler" Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.991205 2199 topology_manager.go:210] "Topology Admit Handler" Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.991365 2199 topology_manager.go:210] "Topology Admit Handler" Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 
17:50:20.991444 2199 topology_manager.go:210] "Topology Admit Handler" Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.991557 2199 topology_manager.go:210] "Topology Admit Handler" Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.991664 2199 topology_manager.go:210] "Topology Admit Handler" Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.991740 2199 topology_manager.go:210] "Topology Admit Handler" Feb 23 17:50:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:20.991809 2199 topology_manager.go:210] "Topology Admit Handler" Feb 23 17:50:20 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-pode0abac93_3e79_4a32_8375_5ef1a2e59687.slice. Feb 23 17:50:21 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-pod46cf33e4_fc3b_4f7a_b0ab_dc2cbc5a5e77.slice. Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.016750 2199 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 23 17:50:21 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-podbd2da6fb_b383_40fe_a3ad_b6436a02985b.slice. Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: W0223 17:50:21.027613 2199 watcher.go:93] Error while processing event ("/sys/fs/cgroup/devices/kubepods.slice/kubepods-burstable.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/kubepods.slice/kubepods-burstable.slice: no such file or directory Feb 23 17:50:21 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-pod0976617f_18ed_4a73_a7d8_ac54cf69ab93.slice. 
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: W0223 17:50:21.034560 2199 watcher.go:93] Error while processing event ("/sys/fs/cgroup/devices/kubepods.slice/kubepods-burstable.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/kubepods.slice/kubepods-burstable.slice: no such file or directory Feb 23 17:50:21 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-pod7f25c5a9_b9c7_4220_a892_362cf6b33878.slice. Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: W0223 17:50:21.039461 2199 watcher.go:93] Error while processing event ("/sys/fs/cgroup/devices/kubepods.slice/kubepods-burstable.slice": 0x40000100 == IN_CREATE|IN_ISDIR): readdirent /sys/fs/cgroup/devices/kubepods.slice/kubepods-burstable.slice: no such file or directory Feb 23 17:50:21 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-poda5ccef55_3f5c_4ffc_82f9_586324e62a37.slice. Feb 23 17:50:21 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-podff7777c7_a1dc_413e_8da1_c4ba07527037.slice. Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: W0223 17:50:21.050090 2199 watcher.go:93] Error while processing event ("/sys/fs/cgroup/devices/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff7777c7_a1dc_413e_8da1_c4ba07527037.slice": 0x40000100 == IN_CREATE|IN_ISDIR): open /sys/fs/cgroup/devices/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff7777c7_a1dc_413e_8da1_c4ba07527037.slice: no such file or directory Feb 23 17:50:21 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-pod0268b68d_53b2_454a_a03b_37bd38d269bc.slice. Feb 23 17:50:21 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-podadcfa5f5_1c6b_415e_8e69_b72e137820e1.slice. Feb 23 17:50:21 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-pod93f0c5c3_9f22_4b93_a925_f621ed5e18e7.slice. 
Feb 23 17:50:21 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-pod9eb4a126_482c_4458_b901_e2e7a15dfd93.slice. Feb 23 17:50:21 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-pod757b7544_c265_49ce_a1f0_22cca4bf919f.slice. Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: W0223 17:50:21.084314 2199 watcher.go:93] Error while processing event ("/sys/fs/cgroup/devices/kubepods.slice/kubepods-burstable.slice": 0x40000100 == IN_CREATE|IN_ISDIR): readdirent /sys/fs/cgroup/devices/kubepods.slice/kubepods-burstable.slice: no such file or directory Feb 23 17:50:21 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-pod3e3e7655_5c60_4995_9a23_b32843026a6e.slice. Feb 23 17:50:21 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-burstable-pod7da00340_9715_48ac_b144_4705de276bf5.slice. Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.109674 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9eb4a126-482c-4458-b901-e2e7a15dfd93-os-release\") pod \"multus-4f66c\" (UID: \"9eb4a126-482c-4458-b901-e2e7a15dfd93\") " pod="openshift-multus/multus-4f66c" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.109718 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0976617f-18ed-4a73-a7d8-ac54cf69ab93-kubelet-dir\") pod \"aws-ebs-csi-driver-node-ncxb7\" (UID: \"0976617f-18ed-4a73-a7d8-ac54cf69ab93\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.109746 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-dir\" (UniqueName: 
\"kubernetes.io/host-path/0976617f-18ed-4a73-a7d8-ac54cf69ab93-plugin-dir\") pod \"aws-ebs-csi-driver-node-ncxb7\" (UID: \"0976617f-18ed-4a73-a7d8-ac54cf69ab93\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.109815 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/0976617f-18ed-4a73-a7d8-ac54cf69ab93-device-dir\") pod \"aws-ebs-csi-driver-node-ncxb7\" (UID: \"0976617f-18ed-4a73-a7d8-ac54cf69ab93\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.109870 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxsmb\" (UniqueName: \"kubernetes.io/projected/bd2da6fb-b383-40fe-a3ad-b6436a02985b-kube-api-access-cxsmb\") pod \"node-ca-wsg6f\" (UID: \"bd2da6fb-b383-40fe-a3ad-b6436a02985b\") " pod="openshift-image-registry/node-ca-wsg6f" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.109908 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-run-ovn\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.109936 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-host-run-ovn-kubernetes\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.109962 2199 
reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-registration-dir\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.110001 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/757b7544-c265-49ce-a1f0-22cca4bf919f-metrics-tls\") pod \"dns-default-657v4\" (UID: \"757b7544-c265-49ce-a1f0-22cca4bf919f\") " pod="openshift-dns/dns-default-657v4" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.110035 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bd2da6fb-b383-40fe-a3ad-b6436a02985b-host\") pod \"node-ca-wsg6f\" (UID: \"bd2da6fb-b383-40fe-a3ad-b6436a02985b\") " pod="openshift-image-registry/node-ca-wsg6f" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.110092 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7da00340-9715-48ac-b144-4705de276bf5-ovn-node-metrics-cert\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.110125 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-dbus\" (UniqueName: \"kubernetes.io/host-path/a5ccef55-3f5c-4ffc-82f9-586324e62a37-var-run-dbus\") pod \"tuned-zzwb5\" (UID: \"a5ccef55-3f5c-4ffc-82f9-586324e62a37\") " pod="openshift-cluster-node-tuning-operator/tuned-zzwb5" Feb 23 
17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.110143 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/3e3e7655-5c60-4995-9a23-b32843026a6e-root\") pod \"node-exporter-nt8h7\" (UID: \"3e3e7655-5c60-4995-9a23-b32843026a6e\") " pod="openshift-monitoring/node-exporter-nt8h7" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.110195 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/0268b68d-53b2-454a-a03b-37bd38d269bc-hosts-file\") pod \"node-resolver-hstcm\" (UID: \"0268b68d-53b2-454a-a03b-37bd38d269bc\") " pod="openshift-dns/node-resolver-hstcm" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.110229 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7f25c5a9-b9c7-4220-a892-362cf6b33878-cnibin\") pod \"multus-additional-cni-plugins-nqwsg\" (UID: \"7f25c5a9-b9c7-4220-a892-362cf6b33878\") " pod="openshift-multus/multus-additional-cni-plugins-nqwsg" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.110274 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4fbl\" (UniqueName: \"kubernetes.io/projected/9eb4a126-482c-4458-b901-e2e7a15dfd93-kube-api-access-b4fbl\") pod \"multus-4f66c\" (UID: \"9eb4a126-482c-4458-b901-e2e7a15dfd93\") " pod="openshift-multus/multus-4f66c" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.110323 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-csi-data-dir\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " 
pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.110380 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev-dir\" (UniqueName: \"kubernetes.io/host-path/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-dev-dir\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.110418 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6xs2\" (UniqueName: \"kubernetes.io/projected/0976617f-18ed-4a73-a7d8-ac54cf69ab93-kube-api-access-r6xs2\") pod \"aws-ebs-csi-driver-node-ncxb7\" (UID: \"0976617f-18ed-4a73-a7d8-ac54cf69ab93\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.110456 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf689\" (UniqueName: \"kubernetes.io/projected/adcfa5f5-1c6b-415e-8e69-b72e137820e1-kube-api-access-kf689\") pod \"network-check-target-52ltr\" (UID: \"adcfa5f5-1c6b-415e-8e69-b72e137820e1\") " pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.110483 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bd2da6fb-b383-40fe-a3ad-b6436a02985b-serviceca\") pod \"node-ca-wsg6f\" (UID: \"bd2da6fb-b383-40fe-a3ad-b6436a02985b\") " pod="openshift-image-registry/node-ca-wsg6f" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.110512 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-qvgqb\" (UniqueName: \"kubernetes.io/projected/0268b68d-53b2-454a-a03b-37bd38d269bc-kube-api-access-qvgqb\") pod \"node-resolver-hstcm\" (UID: \"0268b68d-53b2-454a-a03b-37bd38d269bc\") " pod="openshift-dns/node-resolver-hstcm" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.110562 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-config\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.110582 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9eb4a126-482c-4458-b901-e2e7a15dfd93-multus-cni-dir\") pod \"multus-4f66c\" (UID: \"9eb4a126-482c-4458-b901-e2e7a15dfd93\") " pod="openshift-multus/multus-4f66c" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.110620 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9eb4a126-482c-4458-b901-e2e7a15dfd93-cnibin\") pod \"multus-4f66c\" (UID: \"9eb4a126-482c-4458-b901-e2e7a15dfd93\") " pod="openshift-multus/multus-4f66c" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.110656 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ff7777c7-a1dc-413e-8da1-c4ba07527037-proxy-tls\") pod \"machine-config-daemon-2fx68\" (UID: \"ff7777c7-a1dc-413e-8da1-c4ba07527037\") " pod="openshift-machine-config-operator/machine-config-daemon-2fx68" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.110687 2199 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scnpz\" (UniqueName: \"kubernetes.io/projected/ff7777c7-a1dc-413e-8da1-c4ba07527037-kube-api-access-scnpz\") pod \"machine-config-daemon-2fx68\" (UID: \"ff7777c7-a1dc-413e-8da1-c4ba07527037\") " pod="openshift-machine-config-operator/machine-config-daemon-2fx68" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.110713 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a5ccef55-3f5c-4ffc-82f9-586324e62a37-sys\") pod \"tuned-zzwb5\" (UID: \"a5ccef55-3f5c-4ffc-82f9-586324e62a37\") " pod="openshift-cluster-node-tuning-operator/tuned-zzwb5" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.110744 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-run-openvswitch\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.110768 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.110801 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-mountpoint-dir\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: 
\"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.110829 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-cert\" (UniqueName: \"kubernetes.io/secret/7da00340-9715-48ac-b144-4705de276bf5-ovn-cert\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.110862 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc\" (UniqueName: \"kubernetes.io/host-path/a5ccef55-3f5c-4ffc-82f9-586324e62a37-etc\") pod \"tuned-zzwb5\" (UID: \"a5ccef55-3f5c-4ffc-82f9-586324e62a37\") " pod="openshift-cluster-node-tuning-operator/tuned-zzwb5" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.110892 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a5ccef55-3f5c-4ffc-82f9-586324e62a37-lib-modules\") pod \"tuned-zzwb5\" (UID: \"a5ccef55-3f5c-4ffc-82f9-586324e62a37\") " pod="openshift-cluster-node-tuning-operator/tuned-zzwb5" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.110917 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a5ccef55-3f5c-4ffc-82f9-586324e62a37-host\") pod \"tuned-zzwb5\" (UID: \"a5ccef55-3f5c-4ffc-82f9-586324e62a37\") " pod="openshift-cluster-node-tuning-operator/tuned-zzwb5" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.110936 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7f25c5a9-b9c7-4220-a892-362cf6b33878-system-cni-dir\") 
pod \"multus-additional-cni-plugins-nqwsg\" (UID: \"7f25c5a9-b9c7-4220-a892-362cf6b33878\") " pod="openshift-multus/multus-additional-cni-plugins-nqwsg" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.110954 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-host-slash\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.110976 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-ca\" (UniqueName: \"kubernetes.io/configmap/7da00340-9715-48ac-b144-4705de276bf5-ovn-ca\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.111012 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z9qm\" (UniqueName: \"kubernetes.io/projected/757b7544-c265-49ce-a1f0-22cca4bf919f-kube-api-access-4z9qm\") pod \"dns-default-657v4\" (UID: \"757b7544-c265-49ce-a1f0-22cca4bf919f\") " pod="openshift-dns/dns-default-657v4" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.111036 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-socket-dir\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.111073 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/93f0c5c3-9f22-4b93-a925-f621ed5e18e7-metrics-certs\") pod \"network-metrics-daemon-bs7jz\" (UID: \"93f0c5c3-9f22-4b93-a925-f621ed5e18e7\") " pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.111095 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ff7777c7-a1dc-413e-8da1-c4ba07527037-rootfs\") pod \"machine-config-daemon-2fx68\" (UID: \"ff7777c7-a1dc-413e-8da1-c4ba07527037\") " pod="openshift-machine-config-operator/machine-config-daemon-2fx68" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.111112 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-tuned-profiles-data\" (UniqueName: \"kubernetes.io/configmap/a5ccef55-3f5c-4ffc-82f9-586324e62a37-var-lib-tuned-profiles-data\") pod \"tuned-zzwb5\" (UID: \"a5ccef55-3f5c-4ffc-82f9-586324e62a37\") " pod="openshift-cluster-node-tuning-operator/tuned-zzwb5" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.111132 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t77mc\" (UniqueName: \"kubernetes.io/projected/e0abac93-3e79-4a32-8375-5ef1a2e59687-kube-api-access-t77mc\") pod \"ingress-canary-pjjrk\" (UID: \"e0abac93-3e79-4a32-8375-5ef1a2e59687\") " pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.111152 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-host-cni-bin\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:50:21 ip-10-0-136-68 
kubenswrapper[2199]: I0223 17:50:21.111183 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cookie-secret\" (UniqueName: \"kubernetes.io/secret/ff7777c7-a1dc-413e-8da1-c4ba07527037-cookie-secret\") pod \"machine-config-daemon-2fx68\" (UID: \"ff7777c7-a1dc-413e-8da1-c4ba07527037\") " pod="openshift-machine-config-operator/machine-config-daemon-2fx68" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.111200 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/0976617f-18ed-4a73-a7d8-ac54cf69ab93-sys-fs\") pod \"aws-ebs-csi-driver-node-ncxb7\" (UID: \"0976617f-18ed-4a73-a7d8-ac54cf69ab93\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.111215 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-var-lib-openvswitch\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.111232 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9564\" (UniqueName: \"kubernetes.io/projected/7da00340-9715-48ac-b144-4705de276bf5-kube-api-access-p9564\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.111325 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs8fq\" (UniqueName: \"kubernetes.io/projected/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-kube-api-access-bs8fq\") pod 
\"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.111370 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd-system\" (UniqueName: \"kubernetes.io/host-path/a5ccef55-3f5c-4ffc-82f9-586324e62a37-run-systemd-system\") pod \"tuned-zzwb5\" (UID: \"a5ccef55-3f5c-4ffc-82f9-586324e62a37\") " pod="openshift-cluster-node-tuning-operator/tuned-zzwb5" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.111415 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tntbd\" (UniqueName: \"kubernetes.io/projected/a5ccef55-3f5c-4ffc-82f9-586324e62a37-kube-api-access-tntbd\") pod \"tuned-zzwb5\" (UID: \"a5ccef55-3f5c-4ffc-82f9-586324e62a37\") " pod="openshift-cluster-node-tuning-operator/tuned-zzwb5" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.111469 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0976617f-18ed-4a73-a7d8-ac54cf69ab93-registration-dir\") pod \"aws-ebs-csi-driver-node-ncxb7\" (UID: \"0976617f-18ed-4a73-a7d8-ac54cf69ab93\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.111507 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/0976617f-18ed-4a73-a7d8-ac54cf69ab93-etc-selinux\") pod \"aws-ebs-csi-driver-node-ncxb7\" (UID: \"0976617f-18ed-4a73-a7d8-ac54cf69ab93\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.111534 2199 
reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3e3e7655-5c60-4995-9a23-b32843026a6e-sys\") pod \"node-exporter-nt8h7\" (UID: \"3e3e7655-5c60-4995-9a23-b32843026a6e\") " pod="openshift-monitoring/node-exporter-nt8h7" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.111557 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/3e3e7655-5c60-4995-9a23-b32843026a6e-node-exporter-textfile\") pod \"node-exporter-nt8h7\" (UID: \"3e3e7655-5c60-4995-9a23-b32843026a6e\") " pod="openshift-monitoring/node-exporter-nt8h7" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.111579 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/3e3e7655-5c60-4995-9a23-b32843026a6e-node-exporter-tls\") pod \"node-exporter-nt8h7\" (UID: \"3e3e7655-5c60-4995-9a23-b32843026a6e\") " pod="openshift-monitoring/node-exporter-nt8h7" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.111605 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-plugins-dir\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.111622 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-host-cni-netd\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.111639 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7f25c5a9-b9c7-4220-a892-362cf6b33878-cni-binary-copy\") pod \"multus-additional-cni-plugins-nqwsg\" (UID: \"7f25c5a9-b9c7-4220-a892-362cf6b33878\") " pod="openshift-multus/multus-additional-cni-plugins-nqwsg" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.111654 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7f25c5a9-b9c7-4220-a892-362cf6b33878-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-nqwsg\" (UID: \"7f25c5a9-b9c7-4220-a892-362cf6b33878\") " pod="openshift-multus/multus-additional-cni-plugins-nqwsg" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.111678 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-host-run-netns\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.111694 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-etc-openvswitch\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.111709 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-log-socket\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.111726 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/757b7544-c265-49ce-a1f0-22cca4bf919f-config-volume\") pod \"dns-default-657v4\" (UID: \"757b7544-c265-49ce-a1f0-22cca4bf919f\") " pod="openshift-dns/dns-default-657v4" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.111745 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"non-standard-root-system-trust-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0976617f-18ed-4a73-a7d8-ac54cf69ab93-non-standard-root-system-trust-ca-bundle\") pod \"aws-ebs-csi-driver-node-ncxb7\" (UID: \"0976617f-18ed-4a73-a7d8-ac54cf69ab93\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.111767 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3e3e7655-5c60-4995-9a23-b32843026a6e-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-nt8h7\" (UID: \"3e3e7655-5c60-4995-9a23-b32843026a6e\") " pod="openshift-monitoring/node-exporter-nt8h7" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.111785 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7f25c5a9-b9c7-4220-a892-362cf6b33878-os-release\") pod \"multus-additional-cni-plugins-nqwsg\" (UID: \"7f25c5a9-b9c7-4220-a892-362cf6b33878\") " pod="openshift-multus/multus-additional-cni-plugins-nqwsg" Feb 23 17:50:21 
ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.111802 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22vqh\" (UniqueName: \"kubernetes.io/projected/7f25c5a9-b9c7-4220-a892-362cf6b33878-kube-api-access-22vqh\") pod \"multus-additional-cni-plugins-nqwsg\" (UID: \"7f25c5a9-b9c7-4220-a892-362cf6b33878\") " pod="openshift-multus/multus-additional-cni-plugins-nqwsg" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.111855 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-node-log\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.111870 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7da00340-9715-48ac-b144-4705de276bf5-env-overrides\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.111884 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/3e3e7655-5c60-4995-9a23-b32843026a6e-node-exporter-wtmp\") pod \"node-exporter-nt8h7\" (UID: \"3e3e7655-5c60-4995-9a23-b32843026a6e\") " pod="openshift-monitoring/node-exporter-nt8h7" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.111897 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9eb4a126-482c-4458-b901-e2e7a15dfd93-system-cni-dir\") pod \"multus-4f66c\" (UID: 
\"9eb4a126-482c-4458-b901-e2e7a15dfd93\") " pod="openshift-multus/multus-4f66c" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.111913 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-volumes-map\" (UniqueName: \"kubernetes.io/host-path/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-csi-volumes-map\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.111935 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgsp8\" (UniqueName: \"kubernetes.io/projected/93f0c5c3-9f22-4b93-a925-f621ed5e18e7-kube-api-access-mgsp8\") pod \"network-metrics-daemon-bs7jz\" (UID: \"93f0c5c3-9f22-4b93-a925-f621ed5e18e7\") " pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.111950 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9eb4a126-482c-4458-b901-e2e7a15dfd93-cni-binary-copy\") pod \"multus-4f66c\" (UID: \"9eb4a126-482c-4458-b901-e2e7a15dfd93\") " pod="openshift-multus/multus-4f66c" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.111964 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2x89\" (UniqueName: \"kubernetes.io/projected/3e3e7655-5c60-4995-9a23-b32843026a6e-kube-api-access-p2x89\") pod \"node-exporter-nt8h7\" (UID: \"3e3e7655-5c60-4995-9a23-b32843026a6e\") " pod="openshift-monitoring/node-exporter-nt8h7" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.111985 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" 
(UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-systemd-units\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.112002 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7da00340-9715-48ac-b144-4705de276bf5-ovnkube-config\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.112022 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"shared-resource-csi-driver-node-metrics-serving-cert\" (UniqueName: \"kubernetes.io/secret/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-shared-resource-csi-driver-node-metrics-serving-cert\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.112039 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7f25c5a9-b9c7-4220-a892-362cf6b33878-tuning-conf-dir\") pod \"multus-additional-cni-plugins-nqwsg\" (UID: \"7f25c5a9-b9c7-4220-a892-362cf6b33878\") " pod="openshift-multus/multus-additional-cni-plugins-nqwsg" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.112053 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3e3e7655-5c60-4995-9a23-b32843026a6e-metrics-client-ca\") pod \"node-exporter-nt8h7\" (UID: \"3e3e7655-5c60-4995-9a23-b32843026a6e\") " 
pod="openshift-monitoring/node-exporter-nt8h7" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.112070 2199 reconciler.go:41] "Reconciler: start to sync state" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.214482 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-host-cni-bin\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.214550 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"cookie-secret\" (UniqueName: \"kubernetes.io/secret/ff7777c7-a1dc-413e-8da1-c4ba07527037-cookie-secret\") pod \"machine-config-daemon-2fx68\" (UID: \"ff7777c7-a1dc-413e-8da1-c4ba07527037\") " pod="openshift-machine-config-operator/machine-config-daemon-2fx68" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.214580 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/0976617f-18ed-4a73-a7d8-ac54cf69ab93-sys-fs\") pod \"aws-ebs-csi-driver-node-ncxb7\" (UID: \"0976617f-18ed-4a73-a7d8-ac54cf69ab93\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.214499 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-host-cni-bin\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.214604 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-var-lib-openvswitch\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.214631 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"kube-api-access-p9564\" (UniqueName: \"kubernetes.io/projected/7da00340-9715-48ac-b144-4705de276bf5-kube-api-access-p9564\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.214657 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"kube-api-access-bs8fq\" (UniqueName: \"kubernetes.io/projected/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-kube-api-access-bs8fq\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.214687 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"run-systemd-system\" (UniqueName: \"kubernetes.io/host-path/a5ccef55-3f5c-4ffc-82f9-586324e62a37-run-systemd-system\") pod \"tuned-zzwb5\" (UID: \"a5ccef55-3f5c-4ffc-82f9-586324e62a37\") " pod="openshift-cluster-node-tuning-operator/tuned-zzwb5" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.214690 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/0976617f-18ed-4a73-a7d8-ac54cf69ab93-sys-fs\") pod \"aws-ebs-csi-driver-node-ncxb7\" (UID: \"0976617f-18ed-4a73-a7d8-ac54cf69ab93\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.214712 2199 reconciler_common.go:228] 
"operationExecutor.MountVolume started for volume \"kube-api-access-tntbd\" (UniqueName: \"kubernetes.io/projected/a5ccef55-3f5c-4ffc-82f9-586324e62a37-kube-api-access-tntbd\") pod \"tuned-zzwb5\" (UID: \"a5ccef55-3f5c-4ffc-82f9-586324e62a37\") " pod="openshift-cluster-node-tuning-operator/tuned-zzwb5" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.214742 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0976617f-18ed-4a73-a7d8-ac54cf69ab93-registration-dir\") pod \"aws-ebs-csi-driver-node-ncxb7\" (UID: \"0976617f-18ed-4a73-a7d8-ac54cf69ab93\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.214775 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/0976617f-18ed-4a73-a7d8-ac54cf69ab93-etc-selinux\") pod \"aws-ebs-csi-driver-node-ncxb7\" (UID: \"0976617f-18ed-4a73-a7d8-ac54cf69ab93\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.214803 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3e3e7655-5c60-4995-9a23-b32843026a6e-sys\") pod \"node-exporter-nt8h7\" (UID: \"3e3e7655-5c60-4995-9a23-b32843026a6e\") " pod="openshift-monitoring/node-exporter-nt8h7" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.214834 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/3e3e7655-5c60-4995-9a23-b32843026a6e-node-exporter-textfile\") pod \"node-exporter-nt8h7\" (UID: \"3e3e7655-5c60-4995-9a23-b32843026a6e\") " pod="openshift-monitoring/node-exporter-nt8h7" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.214865 
2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/3e3e7655-5c60-4995-9a23-b32843026a6e-node-exporter-tls\") pod \"node-exporter-nt8h7\" (UID: \"3e3e7655-5c60-4995-9a23-b32843026a6e\") " pod="openshift-monitoring/node-exporter-nt8h7" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.214897 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-plugins-dir\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.214924 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-host-cni-netd\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.214953 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7f25c5a9-b9c7-4220-a892-362cf6b33878-cni-binary-copy\") pod \"multus-additional-cni-plugins-nqwsg\" (UID: \"7f25c5a9-b9c7-4220-a892-362cf6b33878\") " pod="openshift-multus/multus-additional-cni-plugins-nqwsg" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.214968 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-var-lib-openvswitch\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:50:21 ip-10-0-136-68 
kubenswrapper[2199]: I0223 17:50:21.214983 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7f25c5a9-b9c7-4220-a892-362cf6b33878-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-nqwsg\" (UID: \"7f25c5a9-b9c7-4220-a892-362cf6b33878\") " pod="openshift-multus/multus-additional-cni-plugins-nqwsg" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.215012 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-host-run-netns\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.215043 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-etc-openvswitch\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.215073 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-log-socket\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.215100 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/757b7544-c265-49ce-a1f0-22cca4bf919f-config-volume\") pod \"dns-default-657v4\" (UID: \"757b7544-c265-49ce-a1f0-22cca4bf919f\") " pod="openshift-dns/dns-default-657v4" Feb 23 17:50:21 ip-10-0-136-68 
kubenswrapper[2199]: I0223 17:50:21.215132 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"non-standard-root-system-trust-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0976617f-18ed-4a73-a7d8-ac54cf69ab93-non-standard-root-system-trust-ca-bundle\") pod \"aws-ebs-csi-driver-node-ncxb7\" (UID: \"0976617f-18ed-4a73-a7d8-ac54cf69ab93\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.215164 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3e3e7655-5c60-4995-9a23-b32843026a6e-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-nt8h7\" (UID: \"3e3e7655-5c60-4995-9a23-b32843026a6e\") " pod="openshift-monitoring/node-exporter-nt8h7" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.215193 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7f25c5a9-b9c7-4220-a892-362cf6b33878-os-release\") pod \"multus-additional-cni-plugins-nqwsg\" (UID: \"7f25c5a9-b9c7-4220-a892-362cf6b33878\") " pod="openshift-multus/multus-additional-cni-plugins-nqwsg" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.215221 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"kube-api-access-22vqh\" (UniqueName: \"kubernetes.io/projected/7f25c5a9-b9c7-4220-a892-362cf6b33878-kube-api-access-22vqh\") pod \"multus-additional-cni-plugins-nqwsg\" (UID: \"7f25c5a9-b9c7-4220-a892-362cf6b33878\") " pod="openshift-multus/multus-additional-cni-plugins-nqwsg" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.215230 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"run-systemd-system\" (UniqueName: \"kubernetes.io/host-path/a5ccef55-3f5c-4ffc-82f9-586324e62a37-run-systemd-system\") pod 
\"tuned-zzwb5\" (UID: \"a5ccef55-3f5c-4ffc-82f9-586324e62a37\") " pod="openshift-cluster-node-tuning-operator/tuned-zzwb5" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.215280 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-node-log\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.215308 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7da00340-9715-48ac-b144-4705de276bf5-env-overrides\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.215340 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/3e3e7655-5c60-4995-9a23-b32843026a6e-node-exporter-wtmp\") pod \"node-exporter-nt8h7\" (UID: \"3e3e7655-5c60-4995-9a23-b32843026a6e\") " pod="openshift-monitoring/node-exporter-nt8h7" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.215370 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9eb4a126-482c-4458-b901-e2e7a15dfd93-system-cni-dir\") pod \"multus-4f66c\" (UID: \"9eb4a126-482c-4458-b901-e2e7a15dfd93\") " pod="openshift-multus/multus-4f66c" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.215401 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"csi-volumes-map\" (UniqueName: \"kubernetes.io/host-path/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-csi-volumes-map\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: 
\"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.215430 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"kube-api-access-mgsp8\" (UniqueName: \"kubernetes.io/projected/93f0c5c3-9f22-4b93-a925-f621ed5e18e7-kube-api-access-mgsp8\") pod \"network-metrics-daemon-bs7jz\" (UID: \"93f0c5c3-9f22-4b93-a925-f621ed5e18e7\") " pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.215446 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/3e3e7655-5c60-4995-9a23-b32843026a6e-node-exporter-textfile\") pod \"node-exporter-nt8h7\" (UID: \"3e3e7655-5c60-4995-9a23-b32843026a6e\") " pod="openshift-monitoring/node-exporter-nt8h7" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.215489 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/0976617f-18ed-4a73-a7d8-ac54cf69ab93-etc-selinux\") pod \"aws-ebs-csi-driver-node-ncxb7\" (UID: \"0976617f-18ed-4a73-a7d8-ac54cf69ab93\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.215448 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0976617f-18ed-4a73-a7d8-ac54cf69ab93-registration-dir\") pod \"aws-ebs-csi-driver-node-ncxb7\" (UID: \"0976617f-18ed-4a73-a7d8-ac54cf69ab93\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.215456 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/9eb4a126-482c-4458-b901-e2e7a15dfd93-cni-binary-copy\") pod \"multus-4f66c\" (UID: \"9eb4a126-482c-4458-b901-e2e7a15dfd93\") " pod="openshift-multus/multus-4f66c" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.215532 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-host-cni-netd\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.215565 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"kube-api-access-p2x89\" (UniqueName: \"kubernetes.io/projected/3e3e7655-5c60-4995-9a23-b32843026a6e-kube-api-access-p2x89\") pod \"node-exporter-nt8h7\" (UID: \"3e3e7655-5c60-4995-9a23-b32843026a6e\") " pod="openshift-monitoring/node-exporter-nt8h7" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.215593 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-systemd-units\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.215666 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-etc-openvswitch\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.215721 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-log-socket\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.216157 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7da00340-9715-48ac-b144-4705de276bf5-ovnkube-config\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.216183 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-plugins-dir\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.216202 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"shared-resource-csi-driver-node-metrics-serving-cert\" (UniqueName: \"kubernetes.io/secret/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-shared-resource-csi-driver-node-metrics-serving-cert\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.216292 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7f25c5a9-b9c7-4220-a892-362cf6b33878-tuning-conf-dir\") pod \"multus-additional-cni-plugins-nqwsg\" (UID: \"7f25c5a9-b9c7-4220-a892-362cf6b33878\") " pod="openshift-multus/multus-additional-cni-plugins-nqwsg" Feb 23 17:50:21 ip-10-0-136-68 
kubenswrapper[2199]: I0223 17:50:21.216325 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3e3e7655-5c60-4995-9a23-b32843026a6e-metrics-client-ca\") pod \"node-exporter-nt8h7\" (UID: \"3e3e7655-5c60-4995-9a23-b32843026a6e\") " pod="openshift-monitoring/node-exporter-nt8h7" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.216368 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9eb4a126-482c-4458-b901-e2e7a15dfd93-os-release\") pod \"multus-4f66c\" (UID: \"9eb4a126-482c-4458-b901-e2e7a15dfd93\") " pod="openshift-multus/multus-4f66c" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.216398 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0976617f-18ed-4a73-a7d8-ac54cf69ab93-kubelet-dir\") pod \"aws-ebs-csi-driver-node-ncxb7\" (UID: \"0976617f-18ed-4a73-a7d8-ac54cf69ab93\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.216426 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"plugin-dir\" (UniqueName: \"kubernetes.io/host-path/0976617f-18ed-4a73-a7d8-ac54cf69ab93-plugin-dir\") pod \"aws-ebs-csi-driver-node-ncxb7\" (UID: \"0976617f-18ed-4a73-a7d8-ac54cf69ab93\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.216474 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/0976617f-18ed-4a73-a7d8-ac54cf69ab93-device-dir\") pod \"aws-ebs-csi-driver-node-ncxb7\" (UID: \"0976617f-18ed-4a73-a7d8-ac54cf69ab93\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 17:50:21 
ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.216505 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"kube-api-access-cxsmb\" (UniqueName: \"kubernetes.io/projected/bd2da6fb-b383-40fe-a3ad-b6436a02985b-kube-api-access-cxsmb\") pod \"node-ca-wsg6f\" (UID: \"bd2da6fb-b383-40fe-a3ad-b6436a02985b\") " pod="openshift-image-registry/node-ca-wsg6f" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.216554 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-run-ovn\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.216583 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-host-run-ovn-kubernetes\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.216635 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-registration-dir\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.216665 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/757b7544-c265-49ce-a1f0-22cca4bf919f-metrics-tls\") pod \"dns-default-657v4\" (UID: \"757b7544-c265-49ce-a1f0-22cca4bf919f\") " pod="openshift-dns/dns-default-657v4" Feb 23 
17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.216709 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bd2da6fb-b383-40fe-a3ad-b6436a02985b-host\") pod \"node-ca-wsg6f\" (UID: \"bd2da6fb-b383-40fe-a3ad-b6436a02985b\") " pod="openshift-image-registry/node-ca-wsg6f" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.216740 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7da00340-9715-48ac-b144-4705de276bf5-ovn-node-metrics-cert\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.217135 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"var-run-dbus\" (UniqueName: \"kubernetes.io/host-path/a5ccef55-3f5c-4ffc-82f9-586324e62a37-var-run-dbus\") pod \"tuned-zzwb5\" (UID: \"a5ccef55-3f5c-4ffc-82f9-586324e62a37\") " pod="openshift-cluster-node-tuning-operator/tuned-zzwb5" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.217321 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/3e3e7655-5c60-4995-9a23-b32843026a6e-root\") pod \"node-exporter-nt8h7\" (UID: \"3e3e7655-5c60-4995-9a23-b32843026a6e\") " pod="openshift-monitoring/node-exporter-nt8h7" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.217448 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/0268b68d-53b2-454a-a03b-37bd38d269bc-hosts-file\") pod \"node-resolver-hstcm\" (UID: \"0268b68d-53b2-454a-a03b-37bd38d269bc\") " pod="openshift-dns/node-resolver-hstcm" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.217611 2199 
reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7f25c5a9-b9c7-4220-a892-362cf6b33878-cnibin\") pod \"multus-additional-cni-plugins-nqwsg\" (UID: \"7f25c5a9-b9c7-4220-a892-362cf6b33878\") " pod="openshift-multus/multus-additional-cni-plugins-nqwsg" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.218011 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/757b7544-c265-49ce-a1f0-22cca4bf919f-config-volume\") pod \"dns-default-657v4\" (UID: \"757b7544-c265-49ce-a1f0-22cca4bf919f\") " pod="openshift-dns/dns-default-657v4" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.218526 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"cookie-secret\" (UniqueName: \"kubernetes.io/secret/ff7777c7-a1dc-413e-8da1-c4ba07527037-cookie-secret\") pod \"machine-config-daemon-2fx68\" (UID: \"ff7777c7-a1dc-413e-8da1-c4ba07527037\") " pod="openshift-machine-config-operator/machine-config-daemon-2fx68" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.218582 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9eb4a126-482c-4458-b901-e2e7a15dfd93-cni-binary-copy\") pod \"multus-4f66c\" (UID: \"9eb4a126-482c-4458-b901-e2e7a15dfd93\") " pod="openshift-multus/multus-4f66c" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.218618 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7da00340-9715-48ac-b144-4705de276bf5-ovnkube-config\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.218757 2199 operation_generator.go:730] "MountVolume.SetUp succeeded 
for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-registration-dir\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.218835 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"kube-api-access-b4fbl\" (UniqueName: \"kubernetes.io/projected/9eb4a126-482c-4458-b901-e2e7a15dfd93-kube-api-access-b4fbl\") pod \"multus-4f66c\" (UID: \"9eb4a126-482c-4458-b901-e2e7a15dfd93\") " pod="openshift-multus/multus-4f66c" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.218858 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7f25c5a9-b9c7-4220-a892-362cf6b33878-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-nqwsg\" (UID: \"7f25c5a9-b9c7-4220-a892-362cf6b33878\") " pod="openshift-multus/multus-additional-cni-plugins-nqwsg" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.218884 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-csi-data-dir\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.218918 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"dev-dir\" (UniqueName: \"kubernetes.io/host-path/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-dev-dir\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 17:50:21 
ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.218952 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"kube-api-access-r6xs2\" (UniqueName: \"kubernetes.io/projected/0976617f-18ed-4a73-a7d8-ac54cf69ab93-kube-api-access-r6xs2\") pod \"aws-ebs-csi-driver-node-ncxb7\" (UID: \"0976617f-18ed-4a73-a7d8-ac54cf69ab93\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.218984 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"kube-api-access-kf689\" (UniqueName: \"kubernetes.io/projected/adcfa5f5-1c6b-415e-8e69-b72e137820e1-kube-api-access-kf689\") pod \"network-check-target-52ltr\" (UID: \"adcfa5f5-1c6b-415e-8e69-b72e137820e1\") " pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.219030 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bd2da6fb-b383-40fe-a3ad-b6436a02985b-serviceca\") pod \"node-ca-wsg6f\" (UID: \"bd2da6fb-b383-40fe-a3ad-b6436a02985b\") " pod="openshift-image-registry/node-ca-wsg6f" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.219093 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"kube-api-access-qvgqb\" (UniqueName: \"kubernetes.io/projected/0268b68d-53b2-454a-a03b-37bd38d269bc-kube-api-access-qvgqb\") pod \"node-resolver-hstcm\" (UID: \"0268b68d-53b2-454a-a03b-37bd38d269bc\") " pod="openshift-dns/node-resolver-hstcm" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.219124 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-config\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " 
pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.219153 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9eb4a126-482c-4458-b901-e2e7a15dfd93-multus-cni-dir\") pod \"multus-4f66c\" (UID: \"9eb4a126-482c-4458-b901-e2e7a15dfd93\") " pod="openshift-multus/multus-4f66c" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.219183 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9eb4a126-482c-4458-b901-e2e7a15dfd93-cnibin\") pod \"multus-4f66c\" (UID: \"9eb4a126-482c-4458-b901-e2e7a15dfd93\") " pod="openshift-multus/multus-4f66c" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.219211 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ff7777c7-a1dc-413e-8da1-c4ba07527037-proxy-tls\") pod \"machine-config-daemon-2fx68\" (UID: \"ff7777c7-a1dc-413e-8da1-c4ba07527037\") " pod="openshift-machine-config-operator/machine-config-daemon-2fx68" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.219287 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"kube-api-access-scnpz\" (UniqueName: \"kubernetes.io/projected/ff7777c7-a1dc-413e-8da1-c4ba07527037-kube-api-access-scnpz\") pod \"machine-config-daemon-2fx68\" (UID: \"ff7777c7-a1dc-413e-8da1-c4ba07527037\") " pod="openshift-machine-config-operator/machine-config-daemon-2fx68" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.219321 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a5ccef55-3f5c-4ffc-82f9-586324e62a37-sys\") pod \"tuned-zzwb5\" (UID: \"a5ccef55-3f5c-4ffc-82f9-586324e62a37\") " 
pod="openshift-cluster-node-tuning-operator/tuned-zzwb5" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.219351 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-run-openvswitch\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.219427 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"dev-dir\" (UniqueName: \"kubernetes.io/host-path/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-dev-dir\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.219555 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.219610 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-mountpoint-dir\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.219643 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"ovn-cert\" (UniqueName: 
\"kubernetes.io/secret/7da00340-9715-48ac-b144-4705de276bf5-ovn-cert\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.219686 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"etc\" (UniqueName: \"kubernetes.io/host-path/a5ccef55-3f5c-4ffc-82f9-586324e62a37-etc\") pod \"tuned-zzwb5\" (UID: \"a5ccef55-3f5c-4ffc-82f9-586324e62a37\") " pod="openshift-cluster-node-tuning-operator/tuned-zzwb5" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.219715 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a5ccef55-3f5c-4ffc-82f9-586324e62a37-lib-modules\") pod \"tuned-zzwb5\" (UID: \"a5ccef55-3f5c-4ffc-82f9-586324e62a37\") " pod="openshift-cluster-node-tuning-operator/tuned-zzwb5" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.219743 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a5ccef55-3f5c-4ffc-82f9-586324e62a37-host\") pod \"tuned-zzwb5\" (UID: \"a5ccef55-3f5c-4ffc-82f9-586324e62a37\") " pod="openshift-cluster-node-tuning-operator/tuned-zzwb5" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.219773 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7f25c5a9-b9c7-4220-a892-362cf6b33878-system-cni-dir\") pod \"multus-additional-cni-plugins-nqwsg\" (UID: \"7f25c5a9-b9c7-4220-a892-362cf6b33878\") " pod="openshift-multus/multus-additional-cni-plugins-nqwsg" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.219807 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-host-slash\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.219838 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"ovn-ca\" (UniqueName: \"kubernetes.io/configmap/7da00340-9715-48ac-b144-4705de276bf5-ovn-ca\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.219870 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"kube-api-access-4z9qm\" (UniqueName: \"kubernetes.io/projected/757b7544-c265-49ce-a1f0-22cca4bf919f-kube-api-access-4z9qm\") pod \"dns-default-657v4\" (UID: \"757b7544-c265-49ce-a1f0-22cca4bf919f\") " pod="openshift-dns/dns-default-657v4" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.219901 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-socket-dir\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.219927 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/93f0c5c3-9f22-4b93-a925-f621ed5e18e7-metrics-certs\") pod \"network-metrics-daemon-bs7jz\" (UID: \"93f0c5c3-9f22-4b93-a925-f621ed5e18e7\") " pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.219953 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: 
\"kubernetes.io/host-path/ff7777c7-a1dc-413e-8da1-c4ba07527037-rootfs\") pod \"machine-config-daemon-2fx68\" (UID: \"ff7777c7-a1dc-413e-8da1-c4ba07527037\") " pod="openshift-machine-config-operator/machine-config-daemon-2fx68" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.219981 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"var-lib-tuned-profiles-data\" (UniqueName: \"kubernetes.io/configmap/a5ccef55-3f5c-4ffc-82f9-586324e62a37-var-lib-tuned-profiles-data\") pod \"tuned-zzwb5\" (UID: \"a5ccef55-3f5c-4ffc-82f9-586324e62a37\") " pod="openshift-cluster-node-tuning-operator/tuned-zzwb5" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.220015 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"kube-api-access-t77mc\" (UniqueName: \"kubernetes.io/projected/e0abac93-3e79-4a32-8375-5ef1a2e59687-kube-api-access-t77mc\") pod \"ingress-canary-pjjrk\" (UID: \"e0abac93-3e79-4a32-8375-5ef1a2e59687\") " pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.220195 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7f25c5a9-b9c7-4220-a892-362cf6b33878-tuning-conf-dir\") pod \"multus-additional-cni-plugins-nqwsg\" (UID: \"7f25c5a9-b9c7-4220-a892-362cf6b33878\") " pod="openshift-multus/multus-additional-cni-plugins-nqwsg" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.220342 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9eb4a126-482c-4458-b901-e2e7a15dfd93-os-release\") pod \"multus-4f66c\" (UID: \"9eb4a126-482c-4458-b901-e2e7a15dfd93\") " pod="openshift-multus/multus-4f66c" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.220459 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" 
(UniqueName: \"kubernetes.io/host-path/0976617f-18ed-4a73-a7d8-ac54cf69ab93-kubelet-dir\") pod \"aws-ebs-csi-driver-node-ncxb7\" (UID: \"0976617f-18ed-4a73-a7d8-ac54cf69ab93\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.218985 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7f25c5a9-b9c7-4220-a892-362cf6b33878-cni-binary-copy\") pod \"multus-additional-cni-plugins-nqwsg\" (UID: \"7f25c5a9-b9c7-4220-a892-362cf6b33878\") " pod="openshift-multus/multus-additional-cni-plugins-nqwsg" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.220699 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"non-standard-root-system-trust-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0976617f-18ed-4a73-a7d8-ac54cf69ab93-non-standard-root-system-trust-ca-bundle\") pod \"aws-ebs-csi-driver-node-ncxb7\" (UID: \"0976617f-18ed-4a73-a7d8-ac54cf69ab93\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.220776 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3e3e7655-5c60-4995-9a23-b32843026a6e-metrics-client-ca\") pod \"node-exporter-nt8h7\" (UID: \"3e3e7655-5c60-4995-9a23-b32843026a6e\") " pod="openshift-monitoring/node-exporter-nt8h7" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.220843 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/0976617f-18ed-4a73-a7d8-ac54cf69ab93-device-dir\") pod \"aws-ebs-csi-driver-node-ncxb7\" (UID: \"0976617f-18ed-4a73-a7d8-ac54cf69ab93\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.220848 
2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"shared-resource-csi-driver-node-metrics-serving-cert\" (UniqueName: \"kubernetes.io/secret/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-shared-resource-csi-driver-node-metrics-serving-cert\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.220922 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bd2da6fb-b383-40fe-a3ad-b6436a02985b-host\") pod \"node-ca-wsg6f\" (UID: \"bd2da6fb-b383-40fe-a3ad-b6436a02985b\") " pod="openshift-image-registry/node-ca-wsg6f" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.221068 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-run-ovn\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.221116 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-host-run-ovn-kubernetes\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.217454 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-host-run-netns\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:50:21 ip-10-0-136-68 
kubenswrapper[2199]: I0223 17:50:21.221273 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"var-run-dbus\" (UniqueName: \"kubernetes.io/host-path/a5ccef55-3f5c-4ffc-82f9-586324e62a37-var-run-dbus\") pod \"tuned-zzwb5\" (UID: \"a5ccef55-3f5c-4ffc-82f9-586324e62a37\") " pod="openshift-cluster-node-tuning-operator/tuned-zzwb5" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.221317 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/3e3e7655-5c60-4995-9a23-b32843026a6e-root\") pod \"node-exporter-nt8h7\" (UID: \"3e3e7655-5c60-4995-9a23-b32843026a6e\") " pod="openshift-monitoring/node-exporter-nt8h7" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.221324 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7f25c5a9-b9c7-4220-a892-362cf6b33878-cnibin\") pod \"multus-additional-cni-plugins-nqwsg\" (UID: \"7f25c5a9-b9c7-4220-a892-362cf6b33878\") " pod="openshift-multus/multus-additional-cni-plugins-nqwsg" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.217658 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-systemd-units\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.221358 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/0268b68d-53b2-454a-a03b-37bd38d269bc-hosts-file\") pod \"node-resolver-hstcm\" (UID: \"0268b68d-53b2-454a-a03b-37bd38d269bc\") " pod="openshift-dns/node-resolver-hstcm" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.221396 2199 operation_generator.go:730] "MountVolume.SetUp 
succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7f25c5a9-b9c7-4220-a892-362cf6b33878-os-release\") pod \"multus-additional-cni-plugins-nqwsg\" (UID: \"7f25c5a9-b9c7-4220-a892-362cf6b33878\") " pod="openshift-multus/multus-additional-cni-plugins-nqwsg" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.215015 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3e3e7655-5c60-4995-9a23-b32843026a6e-sys\") pod \"node-exporter-nt8h7\" (UID: \"3e3e7655-5c60-4995-9a23-b32843026a6e\") " pod="openshift-monitoring/node-exporter-nt8h7" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.221600 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-csi-data-dir\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.221702 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-node-log\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.221803 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/3e3e7655-5c60-4995-9a23-b32843026a6e-node-exporter-wtmp\") pod \"node-exporter-nt8h7\" (UID: \"3e3e7655-5c60-4995-9a23-b32843026a6e\") " pod="openshift-monitoring/node-exporter-nt8h7" Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.221833 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume 
\"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7da00340-9715-48ac-b144-4705de276bf5-env-overrides\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl"
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.221842 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"plugin-dir\" (UniqueName: \"kubernetes.io/host-path/0976617f-18ed-4a73-a7d8-ac54cf69ab93-plugin-dir\") pod \"aws-ebs-csi-driver-node-ncxb7\" (UID: \"0976617f-18ed-4a73-a7d8-ac54cf69ab93\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7"
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.222407 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-socket-dir\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.222684 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ff7777c7-a1dc-413e-8da1-c4ba07527037-rootfs\") pod \"machine-config-daemon-2fx68\" (UID: \"ff7777c7-a1dc-413e-8da1-c4ba07527037\") " pod="openshift-machine-config-operator/machine-config-daemon-2fx68"
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.222692 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9eb4a126-482c-4458-b901-e2e7a15dfd93-system-cni-dir\") pod \"multus-4f66c\" (UID: \"9eb4a126-482c-4458-b901-e2e7a15dfd93\") " pod="openshift-multus/multus-4f66c"
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.222779 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9eb4a126-482c-4458-b901-e2e7a15dfd93-multus-cni-dir\") pod \"multus-4f66c\" (UID: \"9eb4a126-482c-4458-b901-e2e7a15dfd93\") " pod="openshift-multus/multus-4f66c"
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.223383 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/757b7544-c265-49ce-a1f0-22cca4bf919f-metrics-tls\") pod \"dns-default-657v4\" (UID: \"757b7544-c265-49ce-a1f0-22cca4bf919f\") " pod="openshift-dns/dns-default-657v4"
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.222840 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9eb4a126-482c-4458-b901-e2e7a15dfd93-cnibin\") pod \"multus-4f66c\" (UID: \"9eb4a126-482c-4458-b901-e2e7a15dfd93\") " pod="openshift-multus/multus-4f66c"
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.222881 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"var-lib-tuned-profiles-data\" (UniqueName: \"kubernetes.io/configmap/a5ccef55-3f5c-4ffc-82f9-586324e62a37-var-lib-tuned-profiles-data\") pod \"tuned-zzwb5\" (UID: \"a5ccef55-3f5c-4ffc-82f9-586324e62a37\") " pod="openshift-cluster-node-tuning-operator/tuned-zzwb5"
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.222977 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"csi-volumes-map\" (UniqueName: \"kubernetes.io/host-path/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-csi-volumes-map\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.223099 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a5ccef55-3f5c-4ffc-82f9-586324e62a37-host\") pod \"tuned-zzwb5\" (UID: \"a5ccef55-3f5c-4ffc-82f9-586324e62a37\") " pod="openshift-cluster-node-tuning-operator/tuned-zzwb5"
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.223143 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"etc\" (UniqueName: \"kubernetes.io/host-path/a5ccef55-3f5c-4ffc-82f9-586324e62a37-etc\") pod \"tuned-zzwb5\" (UID: \"a5ccef55-3f5c-4ffc-82f9-586324e62a37\") " pod="openshift-cluster-node-tuning-operator/tuned-zzwb5"
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.223206 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a5ccef55-3f5c-4ffc-82f9-586324e62a37-lib-modules\") pod \"tuned-zzwb5\" (UID: \"a5ccef55-3f5c-4ffc-82f9-586324e62a37\") " pod="openshift-cluster-node-tuning-operator/tuned-zzwb5"
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.223234 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"ovn-ca\" (UniqueName: \"kubernetes.io/configmap/7da00340-9715-48ac-b144-4705de276bf5-ovn-ca\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl"
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.223460 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7f25c5a9-b9c7-4220-a892-362cf6b33878-system-cni-dir\") pod \"multus-additional-cni-plugins-nqwsg\" (UID: \"7f25c5a9-b9c7-4220-a892-362cf6b33878\") " pod="openshift-multus/multus-additional-cni-plugins-nqwsg"
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.223517 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-host-slash\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl"
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.223544 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-run-openvswitch\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl"
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.223558 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7da00340-9715-48ac-b144-4705de276bf5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl"
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.223622 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-mountpoint-dir\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.223680 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a5ccef55-3f5c-4ffc-82f9-586324e62a37-sys\") pod \"tuned-zzwb5\" (UID: \"a5ccef55-3f5c-4ffc-82f9-586324e62a37\") " pod="openshift-cluster-node-tuning-operator/tuned-zzwb5"
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.223718 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-config\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.223940 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/3e3e7655-5c60-4995-9a23-b32843026a6e-node-exporter-tls\") pod \"node-exporter-nt8h7\" (UID: \"3e3e7655-5c60-4995-9a23-b32843026a6e\") " pod="openshift-monitoring/node-exporter-nt8h7"
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.224566 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bd2da6fb-b383-40fe-a3ad-b6436a02985b-serviceca\") pod \"node-ca-wsg6f\" (UID: \"bd2da6fb-b383-40fe-a3ad-b6436a02985b\") " pod="openshift-image-registry/node-ca-wsg6f"
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.224801 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7da00340-9715-48ac-b144-4705de276bf5-ovn-node-metrics-cert\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl"
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.225462 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ff7777c7-a1dc-413e-8da1-c4ba07527037-proxy-tls\") pod \"machine-config-daemon-2fx68\" (UID: \"ff7777c7-a1dc-413e-8da1-c4ba07527037\") " pod="openshift-machine-config-operator/machine-config-daemon-2fx68"
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.227219 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"ovn-cert\" (UniqueName: \"kubernetes.io/secret/7da00340-9715-48ac-b144-4705de276bf5-ovn-cert\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl"
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.227313 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/93f0c5c3-9f22-4b93-a925-f621ed5e18e7-metrics-certs\") pod \"network-metrics-daemon-bs7jz\" (UID: \"93f0c5c3-9f22-4b93-a925-f621ed5e18e7\") " pod="openshift-multus/network-metrics-daemon-bs7jz"
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.227407 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3e3e7655-5c60-4995-9a23-b32843026a6e-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-nt8h7\" (UID: \"3e3e7655-5c60-4995-9a23-b32843026a6e\") " pod="openshift-monitoring/node-exporter-nt8h7"
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.242974 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-tntbd\" (UniqueName: \"kubernetes.io/projected/a5ccef55-3f5c-4ffc-82f9-586324e62a37-kube-api-access-tntbd\") pod \"tuned-zzwb5\" (UID: \"a5ccef55-3f5c-4ffc-82f9-586324e62a37\") " pod="openshift-cluster-node-tuning-operator/tuned-zzwb5"
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.250046 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z9qm\" (UniqueName: \"kubernetes.io/projected/757b7544-c265-49ce-a1f0-22cca4bf919f-kube-api-access-4z9qm\") pod \"dns-default-657v4\" (UID: \"757b7544-c265-49ce-a1f0-22cca4bf919f\") " pod="openshift-dns/dns-default-657v4"
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.253435 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4fbl\" (UniqueName: \"kubernetes.io/projected/9eb4a126-482c-4458-b901-e2e7a15dfd93-kube-api-access-b4fbl\") pod \"multus-4f66c\" (UID: \"9eb4a126-482c-4458-b901-e2e7a15dfd93\") " pod="openshift-multus/multus-4f66c"
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.257147 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2x89\" (UniqueName: \"kubernetes.io/projected/3e3e7655-5c60-4995-9a23-b32843026a6e-kube-api-access-p2x89\") pod \"node-exporter-nt8h7\" (UID: \"3e3e7655-5c60-4995-9a23-b32843026a6e\") " pod="openshift-monitoring/node-exporter-nt8h7"
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.258399 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-t77mc\" (UniqueName: \"kubernetes.io/projected/e0abac93-3e79-4a32-8375-5ef1a2e59687-kube-api-access-t77mc\") pod \"ingress-canary-pjjrk\" (UID: \"e0abac93-3e79-4a32-8375-5ef1a2e59687\") " pod="openshift-ingress-canary/ingress-canary-pjjrk"
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.265918 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-22vqh\" (UniqueName: \"kubernetes.io/projected/7f25c5a9-b9c7-4220-a892-362cf6b33878-kube-api-access-22vqh\") pod \"multus-additional-cni-plugins-nqwsg\" (UID: \"7f25c5a9-b9c7-4220-a892-362cf6b33878\") " pod="openshift-multus/multus-additional-cni-plugins-nqwsg"
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.271095 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9564\" (UniqueName: \"kubernetes.io/projected/7da00340-9715-48ac-b144-4705de276bf5-kube-api-access-p9564\") pod \"ovnkube-node-gzbrl\" (UID: \"7da00340-9715-48ac-b144-4705de276bf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl"
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.272749 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgsp8\" (UniqueName: \"kubernetes.io/projected/93f0c5c3-9f22-4b93-a925-f621ed5e18e7-kube-api-access-mgsp8\") pod \"network-metrics-daemon-bs7jz\" (UID: \"93f0c5c3-9f22-4b93-a925-f621ed5e18e7\") " pod="openshift-multus/network-metrics-daemon-bs7jz"
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.273980 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6xs2\" (UniqueName: \"kubernetes.io/projected/0976617f-18ed-4a73-a7d8-ac54cf69ab93-kube-api-access-r6xs2\") pod \"aws-ebs-csi-driver-node-ncxb7\" (UID: \"0976617f-18ed-4a73-a7d8-ac54cf69ab93\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7"
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.274180 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs8fq\" (UniqueName: \"kubernetes.io/projected/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77-kube-api-access-bs8fq\") pod \"shared-resource-csi-driver-node-vf69j\" (UID: \"46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77\") " pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.282607 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxsmb\" (UniqueName: \"kubernetes.io/projected/bd2da6fb-b383-40fe-a3ad-b6436a02985b-kube-api-access-cxsmb\") pod \"node-ca-wsg6f\" (UID: \"bd2da6fb-b383-40fe-a3ad-b6436a02985b\") " pod="openshift-image-registry/node-ca-wsg6f"
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.285559 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf689\" (UniqueName: \"kubernetes.io/projected/adcfa5f5-1c6b-415e-8e69-b72e137820e1-kube-api-access-kf689\") pod \"network-check-target-52ltr\" (UID: \"adcfa5f5-1c6b-415e-8e69-b72e137820e1\") " pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.285878 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvgqb\" (UniqueName: \"kubernetes.io/projected/0268b68d-53b2-454a-a03b-37bd38d269bc-kube-api-access-qvgqb\") pod \"node-resolver-hstcm\" (UID: \"0268b68d-53b2-454a-a03b-37bd38d269bc\") " pod="openshift-dns/node-resolver-hstcm"
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.287494 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-scnpz\" (UniqueName: \"kubernetes.io/projected/ff7777c7-a1dc-413e-8da1-c4ba07527037-kube-api-access-scnpz\") pod \"machine-config-daemon-2fx68\" (UID: \"ff7777c7-a1dc-413e-8da1-c4ba07527037\") " pod="openshift-machine-config-operator/machine-config-daemon-2fx68"
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.307901 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk"
Feb 23 17:50:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:21.309432776Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=2376a751-3085-47ba-b9ae-b0adc5c75198 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:50:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:21.309678776Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.323823 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 17:50:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:21.324176622Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=f97a9641-f8c7-497c-a912-f8f30969d484 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:50:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:21.324231438Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.329382 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-wsg6f"
Feb 23 17:50:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:21.329598700Z" level=info msg="Running pod sandbox: openshift-image-registry/node-ca-wsg6f/POD" id=b4f70297-d386-4f2a-bd21-49bce07adc1f name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:50:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:21.329657160Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.335922 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7"
Feb 23 17:50:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:21.336141418Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/POD" id=60f19a39-38be-4d41-b5cc-de7e6c40ac3a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:50:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:21.336182135Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.341419 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-nqwsg"
Feb 23 17:50:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:21.341647213Z" level=info msg="Running pod sandbox: openshift-multus/multus-additional-cni-plugins-nqwsg/POD" id=f03fc60e-e526-49d7-a370-2190f5dd3bc3 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:50:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:21.341686498Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.347048 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-zzwb5"
Feb 23 17:50:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:21.347431584Z" level=info msg="Running pod sandbox: openshift-cluster-node-tuning-operator/tuned-zzwb5/POD" id=c9eddb5d-cd50-4088-87ff-5e09d8b9deab name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:50:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:21.347492414Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.351666 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-2fx68"
Feb 23 17:50:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:21.351947892Z" level=info msg="Running pod sandbox: openshift-machine-config-operator/machine-config-daemon-2fx68/POD" id=698d9203-1f89-4049-b4da-f57f442ff378 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:50:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:21.351993301Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.358223 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-hstcm"
Feb 23 17:50:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:21.358469215Z" level=info msg="Running pod sandbox: openshift-dns/node-resolver-hstcm/POD" id=ddfea4c4-e1b9-4710-8d88-373e35b75b75 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:50:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:21.358517754Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.367707 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 17:50:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:21.367963158Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=d2780a1c-a53d-4de9-98a4-f4859a006925 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:50:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:21.368009996Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.373185 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz"
Feb 23 17:50:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:21.373431917Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=71809a78-a5c4-49ac-98f4-be08b6ec005a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:50:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:21.373481131Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.379845 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-4f66c"
Feb 23 17:50:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:21.380050707Z" level=info msg="Running pod sandbox: openshift-multus/multus-4f66c/POD" id=fbbb2637-faa9-44cc-b3ea-f07c643e74c3 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:50:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:21.380086894Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.386355 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-657v4"
Feb 23 17:50:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:21.386639868Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=e76ae792-e6bb-44c4-b1d9-753c9886932e name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:50:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:21.386750929Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.391894 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-nt8h7"
Feb 23 17:50:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:21.392121405Z" level=info msg="Running pod sandbox: openshift-monitoring/node-exporter-nt8h7/POD" id=fe8614ad-9193-4ac1-bbda-fefbe051cc38 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:50:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:21.392180042Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:50:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:21.396389 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl"
Feb 23 17:50:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:21.396649347Z" level=info msg="Running pod sandbox: openshift-ovn-kubernetes/ovnkube-node-gzbrl/POD" id=97460a4d-21fd-4e2e-af4d-27d9cd1392ed name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:50:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:21.396710016Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:50:22 ip-10-0-136-68 ovs-vswitchd[1145]: ovs|00066|memory|INFO|178064 kB peak resident set size after 10.2 seconds
Feb 23 17:50:22 ip-10-0-136-68 ovs-vswitchd[1145]: ovs|00067|memory|INFO|handlers:4 idl-cells:644 ports:10 revalidators:2 rules:9 udpif keys:14
Feb 23 17:50:22 ip-10-0-136-68 kernel: VFS: idmapped mount is not enabled.
Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.188035819Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=fe8614ad-9193-4ac1-bbda-fefbe051cc38 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.196734584Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=ddfea4c4-e1b9-4710-8d88-373e35b75b75 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.198489700Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:3ebc70781c4882f2db28f6ff9dcda9e35ad3320291d1a691a05d79676f6c1f3b UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/73bb14d9-0ad4-41e9-b1d6-ec87e51ce4e7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.198520047Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.214957067Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=f03fc60e-e526-49d7-a370-2190f5dd3bc3 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.215285873Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=60f19a39-38be-4d41-b5cc-de7e6c40ac3a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.232456577Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=698d9203-1f89-4049-b4da-f57f442ff378 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.233435670Z" level=info msg="Ran pod sandbox afbf7496dc70e2eed804a854a31e79fe10c55952e92459a12c5dd8f4628a3ed2 with infra container: openshift-dns/node-resolver-hstcm/POD" id=ddfea4c4-e1b9-4710-8d88-373e35b75b75 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.233437550Z" level=info msg="Ran pod sandbox 37604d926d739231180d7aa6f94466547f6c08b6f27fc55ddc5017e765d84c4f with infra container: openshift-monitoring/node-exporter-nt8h7/POD" id=fe8614ad-9193-4ac1-bbda-fefbe051cc38 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.235195192Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:59eb5e22e7c2b150cb6aab53f3e77aa3c4a785fa42cda7db08ec38cde445cbd9" id=0dad5d17-a2c8-4185-ae70-0c0f509e16f4 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.235438609Z" level=info msg="Image registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:59eb5e22e7c2b150cb6aab53f3e77aa3c4a785fa42cda7db08ec38cde445cbd9 not found" id=0dad5d17-a2c8-4185-ae70-0c0f509e16f4 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.235542978Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:bfc3811ba51a1d11a1595b4f008e8ee48227599885efb4494696582d1e464db2" id=07ea64cf-ccec-42b5-b92b-8e2fb0513a5f name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.235716886Z" level=info msg="Image registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:bfc3811ba51a1d11a1595b4f008e8ee48227599885efb4494696582d1e464db2 not found" id=07ea64cf-ccec-42b5-b92b-8e2fb0513a5f name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.237194563Z" level=info msg="Ran pod sandbox 9e5bdc232b6b7ab0111ce21e2e1992837a8c43b407cc4e5eccbc10770bc79ab1 with infra container: openshift-multus/multus-additional-cni-plugins-nqwsg/POD" id=f03fc60e-e526-49d7-a370-2190f5dd3bc3 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.237956245Z" level=info msg="Ran pod sandbox a4ac357af36e7dd4792f93249425e93fd84aed45840e9fbb3d0ae6c0af964798 with infra container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/POD" id=60f19a39-38be-4d41-b5cc-de7e6c40ac3a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:50:23 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:23.238418 2199 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.239364976Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:34162fe37a9a1757631fc63b80357cf2a889523ada38dbb4afefb424289f3738" id=4a90ebe6-fa7c-44d2-a9d9-5cd9237cccda name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.239567514Z" level=info msg="Pulling image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:bfc3811ba51a1d11a1595b4f008e8ee48227599885efb4494696582d1e464db2" id=741f7df8-fa40-4143-b35e-8904d77af85e name=/runtime.v1.ImageService/PullImage
Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.239777696Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=b4f70297-d386-4f2a-bd21-49bce07adc1f name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.239965868Z" level=info msg="Pulling image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:59eb5e22e7c2b150cb6aab53f3e77aa3c4a785fa42cda7db08ec38cde445cbd9" id=4c3b592f-9570-48bb-85eb-b197006f2f7d name=/runtime.v1.ImageService/PullImage
Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.240403355Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=db0c9864-e4c7-4925-b909-d44287806931 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.240655911Z" level=info msg="Image registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605 not found" id=db0c9864-e4c7-4925-b909-d44287806931 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.240983516Z" level=info msg="Image registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:34162fe37a9a1757631fc63b80357cf2a889523ada38dbb4afefb424289f3738 not found" id=4a90ebe6-fa7c-44d2-a9d9-5cd9237cccda name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.242309107Z" level=info msg="Pulling image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:34162fe37a9a1757631fc63b80357cf2a889523ada38dbb4afefb424289f3738" id=f0954a3c-ff76-4ed6-bd73-f58bb30acfde name=/runtime.v1.ImageService/PullImage
Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.243025356Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:59eb5e22e7c2b150cb6aab53f3e77aa3c4a785fa42cda7db08ec38cde445cbd9\""
Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.243615657Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:bfc3811ba51a1d11a1595b4f008e8ee48227599885efb4494696582d1e464db2\""
Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.243958083Z" level=info msg="Pulling image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=1e94d5c7-4ee0-4757-861c-24bd96fbcede name=/runtime.v1.ImageService/PullImage
Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.246864153Z" level=info msg="Ran pod sandbox e3d261c789a6825c0eafc2f8c4093501c52843e623015724428d3482116fa0f1 with infra container: openshift-machine-config-operator/machine-config-daemon-2fx68/POD" id=698d9203-1f89-4049-b4da-f57f442ff378 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.248188683Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3b92fdc24d17c7bf6e946d497f870368cd592fbd8f660e5718271a0b49fc8e96" id=9077e12c-1e5a-43e6-988e-480257ce62fb name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.248357109Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=97460a4d-21fd-4e2e-af4d-27d9cd1392ed name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.248516752Z" level=info msg="Image registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3b92fdc24d17c7bf6e946d497f870368cd592fbd8f660e5718271a0b49fc8e96 not found" id=9077e12c-1e5a-43e6-988e-480257ce62fb name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.248960439Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:34162fe37a9a1757631fc63b80357cf2a889523ada38dbb4afefb424289f3738\""
Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.249607210Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605\""
Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.249847745Z" level=info msg="Pulling image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3b92fdc24d17c7bf6e946d497f870368cd592fbd8f660e5718271a0b49fc8e96" id=fbdbc2eb-9044-4b52-a446-4a90d21892bf name=/runtime.v1.ImageService/PullImage
Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.249893507Z" level=info msg="Ran pod sandbox f7eed36a3ebc5617599e8ab9d023f6abfc17c9a33a31a0fd10e28e2cc4220191 with infra container: openshift-image-registry/node-ca-wsg6f/POD" id=b4f70297-d386-4f2a-bd21-49bce07adc1f name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.251386694Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3b92fdc24d17c7bf6e946d497f870368cd592fbd8f660e5718271a0b49fc8e96\""
Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.251735958Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3ca9ea56bcb9906b36c581d710f39067893501ab9e7aa74dad71e7cb71342563" id=4bee05ba-8cc3-4d19-a7bc-251dc55d106b name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.251921906Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=fbbb2637-faa9-44cc-b3ea-f07c643e74c3 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.251979550Z" level=info msg="Image registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3ca9ea56bcb9906b36c581d710f39067893501ab9e7aa74dad71e7cb71342563 not found" id=4bee05ba-8cc3-4d19-a7bc-251dc55d106b name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.252596800Z" level=info msg="Pulling image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3ca9ea56bcb9906b36c581d710f39067893501ab9e7aa74dad71e7cb71342563" id=aa1ed54f-6a60-4eca-a41e-f67f9b8d5cf6 name=/runtime.v1.ImageService/PullImage
Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.253411239Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3ca9ea56bcb9906b36c581d710f39067893501ab9e7aa74dad71e7cb71342563\""
Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.254363684Z" level=info msg="Ran pod sandbox 644ef2eb51b320bceef0b684a976c066a9c5c1588201f3c1c82fef93b7f846ad with infra container: openshift-ovn-kubernetes/ovnkube-node-gzbrl/POD" id=97460a4d-21fd-4e2e-af4d-27d9cd1392ed name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.254699861Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=c9eddb5d-cd50-4088-87ff-5e09d8b9deab name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.255280657Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:962b9d65a74bd240af06f8970a8d62a7a831d7d439d72dae6bc55933b1cc3c96 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/c378de0e-43a9-49d5-aaa9-2e8b495a07e7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.255329434Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.255522849Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:6667c4aac31549982ef3d098b826a0ccfa3c9542350312642d0b4270b018433a" id=daac0a85-7111-4044-8cc1-594a276ce2a6 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.255786452Z" level=info msg="Image registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:6667c4aac31549982ef3d098b826a0ccfa3c9542350312642d0b4270b018433a not found" id=daac0a85-7111-4044-8cc1-594a276ce2a6 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.256375635Z" level=info msg="Pulling image:
registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:6667c4aac31549982ef3d098b826a0ccfa3c9542350312642d0b4270b018433a" id=fac12510-58f4-470d-810c-d3daf794bd9b name=/runtime.v1.ImageService/PullImage Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.256936673Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:0b9c05c5b1726159147de513032221b1519eefdea7c22ab2a693f2537a7cf7fe UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/2de7a241-ee70-48d1-b91c-c722ad546ac8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.256966511Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.257840268Z" level=info msg="Ran pod sandbox b72ca3812186308810626cf283e44f3590dd042239ed59dafad8dbe41c46ace5 with infra container: openshift-cluster-node-tuning-operator/tuned-zzwb5/POD" id=c9eddb5d-cd50-4088-87ff-5e09d8b9deab name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.257958395Z" level=info msg="Ran pod sandbox 9136827168fd7c2f146c6d27eeec2a74a7638d73f5d914b4bcce6b2623c7fa79 with infra container: openshift-multus/multus-4f66c/POD" id=fbbb2637-faa9-44cc-b3ea-f07c643e74c3 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.258446191Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:6667c4aac31549982ef3d098b826a0ccfa3c9542350312642d0b4270b018433a\"" Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.263746730Z" level=info msg="Checking image status: 
registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:cefa1485033e6845dde6fe49683bd21f666cbc635bd244bac19c9ed4b7647244" id=3f545c25-b34c-408a-8607-f20613b8213d name=/runtime.v1.ImageService/ImageStatus Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.263910199Z" level=info msg="Image registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:cefa1485033e6845dde6fe49683bd21f666cbc635bd244bac19c9ed4b7647244 not found" id=3f545c25-b34c-408a-8607-f20613b8213d name=/runtime.v1.ImageService/ImageStatus Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.263952462Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5e70607d001b0cdfc37f99b090c175172f8c71a7905f6223dfeb6ed34dd5de5f" id=0d7556ee-2ae5-4583-819c-ded7938dc9c5 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.264053923Z" level=info msg="Image registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5e70607d001b0cdfc37f99b090c175172f8c71a7905f6223dfeb6ed34dd5de5f not found" id=0d7556ee-2ae5-4583-819c-ded7938dc9c5 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.264761185Z" level=info msg="Pulling image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5e70607d001b0cdfc37f99b090c175172f8c71a7905f6223dfeb6ed34dd5de5f" id=219e1c7a-1161-4c21-80ce-918fb7fda6d6 name=/runtime.v1.ImageService/PullImage Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.264916109Z" level=info msg="Pulling image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:cefa1485033e6845dde6fe49683bd21f666cbc635bd244bac19c9ed4b7647244" id=7eebcecf-de4a-4739-8966-72f4c3f34b93 name=/runtime.v1.ImageService/PullImage Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.265447781Z" level=info msg="Trying to access 
\"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:cefa1485033e6845dde6fe49683bd21f666cbc635bd244bac19c9ed4b7647244\"" Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.265462706Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5e70607d001b0cdfc37f99b090c175172f8c71a7905f6223dfeb6ed34dd5de5f\"" Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.543148103Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:4ce668592f29a41afe2fed366dc215655e2701fb515d2f6bfb1b64d6713c4d15 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/29e825fd-086c-4368-b25b-d1b10b7a5909 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.543174147Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.877960950Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:59eb5e22e7c2b150cb6aab53f3e77aa3c4a785fa42cda7db08ec38cde445cbd9\"" Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.899132666Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:34162fe37a9a1757631fc63b80357cf2a889523ada38dbb4afefb424289f3738\"" Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.900825208Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3ca9ea56bcb9906b36c581d710f39067893501ab9e7aa74dad71e7cb71342563\"" Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.901418889Z" level=info msg="Trying to access 
\"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605\"" Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.918844868Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5e70607d001b0cdfc37f99b090c175172f8c71a7905f6223dfeb6ed34dd5de5f\"" Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.921107080Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:bfc3811ba51a1d11a1595b4f008e8ee48227599885efb4494696582d1e464db2\"" Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.946710685Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3b92fdc24d17c7bf6e946d497f870368cd592fbd8f660e5718271a0b49fc8e96\"" Feb 23 17:50:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:23.976207980Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:cefa1485033e6845dde6fe49683bd21f666cbc635bd244bac19c9ed4b7647244\"" Feb 23 17:50:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:24.021277641Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:6667c4aac31549982ef3d098b826a0ccfa3c9542350312642d0b4270b018433a\"" Feb 23 17:50:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:24.226929 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerStarted Data:a4ac357af36e7dd4792f93249425e93fd84aed45840e9fbb3d0ae6c0af964798} Feb 23 17:50:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:24.227594 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nqwsg" event=&{ID:7f25c5a9-b9c7-4220-a892-362cf6b33878 Type:ContainerStarted 
Data:9e5bdc232b6b7ab0111ce21e2e1992837a8c43b407cc4e5eccbc10770bc79ab1} Feb 23 17:50:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:24.232177 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-hstcm" event=&{ID:0268b68d-53b2-454a-a03b-37bd38d269bc Type:ContainerStarted Data:afbf7496dc70e2eed804a854a31e79fe10c55952e92459a12c5dd8f4628a3ed2} Feb 23 17:50:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:24.232846 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-nt8h7" event=&{ID:3e3e7655-5c60-4995-9a23-b32843026a6e Type:ContainerStarted Data:37604d926d739231180d7aa6f94466547f6c08b6f27fc55ddc5017e765d84c4f} Feb 23 17:50:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:24.233443 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2fx68" event=&{ID:ff7777c7-a1dc-413e-8da1-c4ba07527037 Type:ContainerStarted Data:e3d261c789a6825c0eafc2f8c4093501c52843e623015724428d3482116fa0f1} Feb 23 17:50:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:24.236228 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-zzwb5" event=&{ID:a5ccef55-3f5c-4ffc-82f9-586324e62a37 Type:ContainerStarted Data:b72ca3812186308810626cf283e44f3590dd042239ed59dafad8dbe41c46ace5} Feb 23 17:50:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:24.236735 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4f66c" event=&{ID:9eb4a126-482c-4458-b901-e2e7a15dfd93 Type:ContainerStarted Data:9136827168fd7c2f146c6d27eeec2a74a7638d73f5d914b4bcce6b2623c7fa79} Feb 23 17:50:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:24.237220 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" event=&{ID:7da00340-9715-48ac-b144-4705de276bf5 Type:ContainerStarted Data:644ef2eb51b320bceef0b684a976c066a9c5c1588201f3c1c82fef93b7f846ad} Feb 23 
17:50:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:24.237806 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-wsg6f" event=&{ID:bd2da6fb-b383-40fe-a3ad-b6436a02985b Type:ContainerStarted Data:f7eed36a3ebc5617599e8ab9d023f6abfc17c9a33a31a0fd10e28e2cc4220191} Feb 23 17:50:25 ip-10-0-136-68 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully. Feb 23 17:50:25 ip-10-0-136-68 systemd[1]: NetworkManager-dispatcher.service: Consumed 1.120s CPU time. Feb 23 17:50:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:25.283976073Z" level=info msg="Pulled image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:bfc3811ba51a1d11a1595b4f008e8ee48227599885efb4494696582d1e464db2" id=741f7df8-fa40-4143-b35e-8904d77af85e name=/runtime.v1.ImageService/PullImage Feb 23 17:50:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:25.284712807Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:bfc3811ba51a1d11a1595b4f008e8ee48227599885efb4494696582d1e464db2" id=b8ab7d79-b640-49be-853e-50bb00cb0859 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:50:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:25.285937877Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:1e42f0d82119151973b0cb36d0d109fca6e5a46c8410cba8eaa2a9867c1cc9ab,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:bfc3811ba51a1d11a1595b4f008e8ee48227599885efb4494696582d1e464db2],Size_:492745678,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=b8ab7d79-b640-49be-853e-50bb00cb0859 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:50:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:25.286866721Z" level=info msg="Creating container: openshift-dns/node-resolver-hstcm/dns-node-resolver" id=c707b085-9cda-4206-88f7-e5c585e4f8b2 
name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:50:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:25.286943890Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:50:28 ip-10-0-136-68 systemd[1]: Started crio-conmon-f115d627d4df84ef203445f1f60b65db54674718860a9eb57f2ccf03c4e901a5.scope. Feb 23 17:50:28 ip-10-0-136-68 systemd[1]: Started libcontainer container f115d627d4df84ef203445f1f60b65db54674718860a9eb57f2ccf03c4e901a5. Feb 23 17:50:28 ip-10-0-136-68 conmon[2401]: conmon f115d627d4df84ef2034 : Failed to write to cgroup.event_control Operation not supported Feb 23 17:50:28 ip-10-0-136-68 systemd[1]: crio-conmon-f115d627d4df84ef203445f1f60b65db54674718860a9eb57f2ccf03c4e901a5.scope: Deactivated successfully. Feb 23 17:50:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:28.478630299Z" level=info msg="Created container f115d627d4df84ef203445f1f60b65db54674718860a9eb57f2ccf03c4e901a5: openshift-dns/node-resolver-hstcm/dns-node-resolver" id=c707b085-9cda-4206-88f7-e5c585e4f8b2 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:50:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:28.563660044Z" level=info msg="Starting container: f115d627d4df84ef203445f1f60b65db54674718860a9eb57f2ccf03c4e901a5" id=6a79a9e4-eac5-4b65-98aa-10f1f763f41d name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:50:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:28.652976991Z" level=info msg="Started container" PID=2413 containerID=f115d627d4df84ef203445f1f60b65db54674718860a9eb57f2ccf03c4e901a5 description=openshift-dns/node-resolver-hstcm/dns-node-resolver id=6a79a9e4-eac5-4b65-98aa-10f1f763f41d name=/runtime.v1.RuntimeService/StartContainer sandboxID=afbf7496dc70e2eed804a854a31e79fe10c55952e92459a12c5dd8f4628a3ed2 Feb 23 17:50:29 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:29.592267 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-hstcm" 
event=&{ID:0268b68d-53b2-454a-a03b-37bd38d269bc Type:ContainerStarted Data:f115d627d4df84ef203445f1f60b65db54674718860a9eb57f2ccf03c4e901a5} Feb 23 17:50:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:39.928479206Z" level=info msg="Pulled image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:59eb5e22e7c2b150cb6aab53f3e77aa3c4a785fa42cda7db08ec38cde445cbd9" id=4c3b592f-9570-48bb-85eb-b197006f2f7d name=/runtime.v1.ImageService/PullImage Feb 23 17:50:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:39.930525310Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:59eb5e22e7c2b150cb6aab53f3e77aa3c4a785fa42cda7db08ec38cde445cbd9" id=35f6813d-dd24-4bc8-b382-e5b3b3f84b63 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:50:42 ip-10-0-136-68 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 23 17:50:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:43.035798207Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:f53384a648c59be4fc6721c4809654cf2c3c49e25c4890941d180079b86f24a0,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:59eb5e22e7c2b150cb6aab53f3e77aa3c4a785fa42cda7db08ec38cde445cbd9],Size_:371847935,Uid:nil,Username:nobody,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=35f6813d-dd24-4bc8-b382-e5b3b3f84b63 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:50:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:43.037867893Z" level=info msg="Creating container: openshift-monitoring/node-exporter-nt8h7/init-textfile" id=594dd0a7-3b12-4fc7-b144-b7a897c8ad9d name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:50:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:43.038064596Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:50:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:43.905824788Z" 
level=info msg="Pulled image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:cefa1485033e6845dde6fe49683bd21f666cbc635bd244bac19c9ed4b7647244" id=7eebcecf-de4a-4739-8966-72f4c3f34b93 name=/runtime.v1.ImageService/PullImage Feb 23 17:50:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:43.909930556Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:cefa1485033e6845dde6fe49683bd21f666cbc635bd244bac19c9ed4b7647244" id=118e22f0-30d7-42be-8da6-390fe262289a name=/runtime.v1.ImageService/ImageStatus Feb 23 17:50:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:43.941289592Z" level=info msg="Pulled image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3b92fdc24d17c7bf6e946d497f870368cd592fbd8f660e5718271a0b49fc8e96" id=fbdbc2eb-9044-4b52-a446-4a90d21892bf name=/runtime.v1.ImageService/PullImage Feb 23 17:50:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:43.946556351Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3b92fdc24d17c7bf6e946d497f870368cd592fbd8f660e5718271a0b49fc8e96" id=0f43623c-16ee-4556-9365-cc056ad89497 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:50:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:43.977704506Z" level=info msg="Pulled image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=1e94d5c7-4ee0-4757-861c-24bd96fbcede name=/runtime.v1.ImageService/PullImage Feb 23 17:50:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:43.981627698Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=cde6f5e9-4782-4586-9409-fa7bf0415d26 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:50:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:43.985681979Z" level=info msg="Pulled image: 
registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5e70607d001b0cdfc37f99b090c175172f8c71a7905f6223dfeb6ed34dd5de5f" id=219e1c7a-1161-4c21-80ce-918fb7fda6d6 name=/runtime.v1.ImageService/PullImage Feb 23 17:50:43 ip-10-0-136-68 kernel: xfs filesystem being remounted at /var/lib/kubelet/pods/a5ccef55-3f5c-4ffc-82f9-586324e62a37/volume-subpaths/etc/tuned/1 supports timestamps until 2038 (0x7fffffff) Feb 23 17:50:43 ip-10-0-136-68 kernel: xfs filesystem being remounted at /var/lib/kubelet/pods/a5ccef55-3f5c-4ffc-82f9-586324e62a37/volume-subpaths/etc/tuned/2 supports timestamps until 2038 (0x7fffffff) Feb 23 17:50:43 ip-10-0-136-68 kernel: xfs filesystem being remounted at /var/lib/kubelet/pods/a5ccef55-3f5c-4ffc-82f9-586324e62a37/volume-subpaths/etc/tuned/3 supports timestamps until 2038 (0x7fffffff) Feb 23 17:50:43 ip-10-0-136-68 kernel: xfs filesystem being remounted at /var/lib/kubelet/pods/a5ccef55-3f5c-4ffc-82f9-586324e62a37/volume-subpaths/etc/tuned/4 supports timestamps until 2038 (0x7fffffff) Feb 23 17:50:44 ip-10-0-136-68 kernel: xfs filesystem being remounted at /var/lib/kubelet/pods/a5ccef55-3f5c-4ffc-82f9-586324e62a37/volume-subpaths/etc/tuned/5 supports timestamps until 2038 (0x7fffffff) Feb 23 17:50:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:44.005073399Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5e70607d001b0cdfc37f99b090c175172f8c71a7905f6223dfeb6ed34dd5de5f" id=14091661-6bff-45c4-bc30-a0e9c417b41c name=/runtime.v1.ImageService/ImageStatus Feb 23 17:50:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:44.351936158Z" level=info msg="Image status: 
&ImageStatusResponse{Image:&Image{Id:8b7f57210bc9d9819a65365f893f7ec8fdaf17b52ffa1d38172094a5a6fe4c7d,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3b92fdc24d17c7bf6e946d497f870368cd592fbd8f660e5718271a0b49fc8e96],Size_:540802166,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=0f43623c-16ee-4556-9365-cc056ad89497 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:50:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:44.352662133Z" level=info msg="Creating container: openshift-machine-config-operator/machine-config-daemon-2fx68/machine-config-daemon" id=3c1a4460-0172-4d15-b28b-7d1485afab9f name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:50:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:44.352778547Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:50:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:44.958357684Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=cde6f5e9-4782-4586-9409-fa7bf0415d26 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:50:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:44.958390073Z" level=info msg="Image status: 
&ImageStatusResponse{Image:&Image{Id:f1e6876f2bf1f7a3094dceab6324c9f309c5929c81e15ae5c242900fb6f03188,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:cefa1485033e6845dde6fe49683bd21f666cbc635bd244bac19c9ed4b7647244],Size_:489063224,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=118e22f0-30d7-42be-8da6-390fe262289a name=/runtime.v1.ImageService/ImageStatus Feb 23 17:50:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:44.958717926Z" level=info msg="Pulled image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3ca9ea56bcb9906b36c581d710f39067893501ab9e7aa74dad71e7cb71342563" id=aa1ed54f-6a60-4eca-a41e-f67f9b8d5cf6 name=/runtime.v1.ImageService/PullImage Feb 23 17:50:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:44.960357236Z" level=info msg="Pulled image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:6667c4aac31549982ef3d098b826a0ccfa3c9542350312642d0b4270b018433a" id=fac12510-58f4-470d-810c-d3daf794bd9b name=/runtime.v1.ImageService/PullImage Feb 23 17:50:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:44.963109658Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=fa8a7e21-bc6c-4e26-9aaf-3ce70f509109 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:50:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:44.963144081Z" level=info msg="Creating container: openshift-multus/multus-4f66c/kube-multus" id=b38c3f0d-df72-414e-9fc9-d00407155a09 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:50:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:44.963214961Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:50:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:44.963302687Z" level=warning msg="Allowed annotations are specified for workload 
[io.containers.trace-syscall]" Feb 23 17:50:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:44.965880885Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:664a464be4806e0dadf3ab4d7b46c233cb0d2b068952fe1ff5bc1c75d32b15da,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5e70607d001b0cdfc37f99b090c175172f8c71a7905f6223dfeb6ed34dd5de5f],Size_:596526495,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=14091661-6bff-45c4-bc30-a0e9c417b41c name=/runtime.v1.ImageService/ImageStatus Feb 23 17:50:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:44.966718991Z" level=info msg="Creating container: openshift-cluster-node-tuning-operator/tuned-zzwb5/tuned" id=4396fca8-927a-4d49-aa5c-ac8993a3071f name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:50:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:44.966806214Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:50:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:44.969468487Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:6667c4aac31549982ef3d098b826a0ccfa3c9542350312642d0b4270b018433a" id=26617f69-c893-4dba-9e63-5abce2c757c2 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:50:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:44.972616739Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:d979f2d334f2f1645227fcd91eb640e4be627e8618658519ab8194bf4b104db0,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:6667c4aac31549982ef3d098b826a0ccfa3c9542350312642d0b4270b018433a],Size_:1146370016,Uid:nil,Username:root,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=26617f69-c893-4dba-9e63-5abce2c757c2 name=/runtime.v1.ImageService/ImageStatus Feb 23 
17:50:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:44.973528431Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-node-gzbrl/ovn-controller" id=75840e1e-95da-4144-af33-7df9dfbab9fb name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:50:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:44.973625109Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:50:44 ip-10-0-136-68 systemd[1]: Started crio-conmon-d00a84419f0507d332c405435a8e5d6fda0cf9cb36171954b7b7aa1e86832c83.scope. Feb 23 17:50:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:44.978581755Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3ca9ea56bcb9906b36c581d710f39067893501ab9e7aa74dad71e7cb71342563" id=d6de0fd9-6b36-4e87-b9b6-c309c507b615 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:50:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:44.986482253Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:5483c0b731765f8135bdbd734fa974193843b100648d623cc217de693f0adbd5,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3ca9ea56bcb9906b36c581d710f39067893501ab9e7aa74dad71e7cb71342563],Size_:423974017,Uid:&Int64Value{Value:1001,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=d6de0fd9-6b36-4e87-b9b6-c309c507b615 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:50:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:44.990078230Z" level=info msg="Creating container: openshift-image-registry/node-ca-wsg6f/node-ca" id=a6fe244c-bc2a-4d52-8ed9-119449c1a688 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:50:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:44.990285085Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:50:45 ip-10-0-136-68 systemd[1]: Started 
crio-conmon-31fe800873e8bae69352a038092bcd43fe44b5082163749615dae9a0cc38e8af.scope. Feb 23 17:50:45 ip-10-0-136-68 systemd[1]: Started crio-conmon-cd1391ac9c1e2873e874c48875a0141f448a44e641a919a2558cf2b7c59b3877.scope. Feb 23 17:50:45 ip-10-0-136-68 systemd[1]: Started libcontainer container d00a84419f0507d332c405435a8e5d6fda0cf9cb36171954b7b7aa1e86832c83. Feb 23 17:50:45 ip-10-0-136-68 systemd[1]: Started libcontainer container 31fe800873e8bae69352a038092bcd43fe44b5082163749615dae9a0cc38e8af. Feb 23 17:50:45 ip-10-0-136-68 systemd[1]: Started crio-conmon-b4f57cb23a798e177545e914f41a13b9bb35feb557432989cdce559214c2ecfa.scope. Feb 23 17:50:45 ip-10-0-136-68 systemd[1]: Started libcontainer container cd1391ac9c1e2873e874c48875a0141f448a44e641a919a2558cf2b7c59b3877. Feb 23 17:50:45 ip-10-0-136-68 systemd[1]: Started libcontainer container b4f57cb23a798e177545e914f41a13b9bb35feb557432989cdce559214c2ecfa. Feb 23 17:50:45 ip-10-0-136-68 conmon[2629]: conmon d00a84419f0507d332c4 : Failed to write to cgroup.event_control Operation not supported Feb 23 17:50:45 ip-10-0-136-68 systemd[1]: crio-conmon-d00a84419f0507d332c405435a8e5d6fda0cf9cb36171954b7b7aa1e86832c83.scope: Deactivated successfully. Feb 23 17:50:45 ip-10-0-136-68 conmon[2680]: conmon b4f57cb23a798e177545 : Failed to write to cgroup.event_control Operation not supported Feb 23 17:50:45 ip-10-0-136-68 conmon[2664]: conmon cd1391ac9c1e2873e874 : Failed to write to cgroup.event_control Operation not supported Feb 23 17:50:45 ip-10-0-136-68 systemd[1]: crio-conmon-cd1391ac9c1e2873e874c48875a0141f448a44e641a919a2558cf2b7c59b3877.scope: Deactivated successfully. Feb 23 17:50:45 ip-10-0-136-68 systemd[1]: crio-conmon-b4f57cb23a798e177545e914f41a13b9bb35feb557432989cdce559214c2ecfa.scope: Deactivated successfully. 
Feb 23 17:50:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:45.150348131Z" level=info msg="Created container d00a84419f0507d332c405435a8e5d6fda0cf9cb36171954b7b7aa1e86832c83: openshift-machine-config-operator/machine-config-daemon-2fx68/machine-config-daemon" id=3c1a4460-0172-4d15-b28b-7d1485afab9f name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:50:45 ip-10-0-136-68 conmon[2645]: conmon 31fe800873e8bae69352 : Failed to write to cgroup.event_control Operation not supported
Feb 23 17:50:45 ip-10-0-136-68 systemd[1]: crio-conmon-31fe800873e8bae69352a038092bcd43fe44b5082163749615dae9a0cc38e8af.scope: Deactivated successfully.
Feb 23 17:50:45 ip-10-0-136-68 systemd[1]: Started crio-conmon-24d6f40fa5952a383b0081e0f6f24c6f42c173f3d9e2677394190a64d7632c47.scope.
Feb 23 17:50:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:45.166805983Z" level=info msg="Starting container: d00a84419f0507d332c405435a8e5d6fda0cf9cb36171954b7b7aa1e86832c83" id=74e60e13-f4d3-489f-8ea7-2bccf72fb393 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 17:50:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:45.183889497Z" level=info msg="Created container 31fe800873e8bae69352a038092bcd43fe44b5082163749615dae9a0cc38e8af: openshift-monitoring/node-exporter-nt8h7/init-textfile" id=594dd0a7-3b12-4fc7-b144-b7a897c8ad9d name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:50:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:45.184046483Z" level=info msg="Created container b4f57cb23a798e177545e914f41a13b9bb35feb557432989cdce559214c2ecfa: openshift-ovn-kubernetes/ovnkube-node-gzbrl/ovn-controller" id=75840e1e-95da-4144-af33-7df9dfbab9fb name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:50:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:45.184328924Z" level=info msg="Created container cd1391ac9c1e2873e874c48875a0141f448a44e641a919a2558cf2b7c59b3877: openshift-cluster-node-tuning-operator/tuned-zzwb5/tuned" id=4396fca8-927a-4d49-aa5c-ac8993a3071f name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:50:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:45.184397512Z" level=info msg="Started container" PID=2653 containerID=d00a84419f0507d332c405435a8e5d6fda0cf9cb36171954b7b7aa1e86832c83 description=openshift-machine-config-operator/machine-config-daemon-2fx68/machine-config-daemon id=74e60e13-f4d3-489f-8ea7-2bccf72fb393 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e3d261c789a6825c0eafc2f8c4093501c52843e623015724428d3482116fa0f1
Feb 23 17:50:45 ip-10-0-136-68 systemd[1]: Started libcontainer container 24d6f40fa5952a383b0081e0f6f24c6f42c173f3d9e2677394190a64d7632c47.
Feb 23 17:50:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:45.200823895Z" level=info msg="Starting container: b4f57cb23a798e177545e914f41a13b9bb35feb557432989cdce559214c2ecfa" id=c940123a-b1af-4487-8111-dc06f06e8f60 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 17:50:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:45.202974153Z" level=info msg="Starting container: cd1391ac9c1e2873e874c48875a0141f448a44e641a919a2558cf2b7c59b3877" id=b26561cb-b50d-4949-b1db-6475d5a4cf90 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 17:50:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:45.204939271Z" level=info msg="Starting container: 31fe800873e8bae69352a038092bcd43fe44b5082163749615dae9a0cc38e8af" id=aeff781a-319f-4dc5-af87-87873a6f1c06 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 17:50:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:45.218544329Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:87f1812b55e14f104d3ab49c06711f5ba37e94490717470e42c12749fe3c90ad" id=3a71a407-6a96-4160-830e-1cd69f8ac523 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:50:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:45.218837749Z" level=info msg="Started container" PID=2710 containerID=b4f57cb23a798e177545e914f41a13b9bb35feb557432989cdce559214c2ecfa description=openshift-ovn-kubernetes/ovnkube-node-gzbrl/ovn-controller id=c940123a-b1af-4487-8111-dc06f06e8f60 name=/runtime.v1.RuntimeService/StartContainer sandboxID=644ef2eb51b320bceef0b684a976c066a9c5c1588201f3c1c82fef93b7f846ad
Feb 23 17:50:45 ip-10-0-136-68 systemd[1]: Started crio-conmon-75e4ec6a0777f1f5663b0e32e91030b1e16da4a687d4f9f2a71058ade28fc6c5.scope.
Feb 23 17:50:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:45.235268986Z" level=info msg="Started container" PID=2698 containerID=cd1391ac9c1e2873e874c48875a0141f448a44e641a919a2558cf2b7c59b3877 description=openshift-cluster-node-tuning-operator/tuned-zzwb5/tuned id=b26561cb-b50d-4949-b1db-6475d5a4cf90 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b72ca3812186308810626cf283e44f3590dd042239ed59dafad8dbe41c46ace5
Feb 23 17:50:45 ip-10-0-136-68 conmon[2752]: conmon 24d6f40fa5952a383b00 : Failed to write to cgroup.event_control Operation not supported
Feb 23 17:50:45 ip-10-0-136-68 systemd[1]: Started crio-conmon-7a53cc016020b45ed3cdff1bbe3c049ccedc2e5e784714ec0c8d1c78ceb6ab71.scope.
Feb 23 17:50:45 ip-10-0-136-68 systemd[1]: crio-b4f57cb23a798e177545e914f41a13b9bb35feb557432989cdce559214c2ecfa.scope: Deactivated successfully.
Feb 23 17:50:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:45.244353102Z" level=info msg="Started container" PID=2691 containerID=31fe800873e8bae69352a038092bcd43fe44b5082163749615dae9a0cc38e8af description=openshift-monitoring/node-exporter-nt8h7/init-textfile id=aeff781a-319f-4dc5-af87-87873a6f1c06 name=/runtime.v1.RuntimeService/StartContainer sandboxID=37604d926d739231180d7aa6f94466547f6c08b6f27fc55ddc5017e765d84c4f
Feb 23 17:50:45 ip-10-0-136-68 systemd[1]: crio-conmon-24d6f40fa5952a383b0081e0f6f24c6f42c173f3d9e2677394190a64d7632c47.scope: Deactivated successfully.
Feb 23 17:50:45 ip-10-0-136-68 systemd[1]: crio-d00a84419f0507d332c405435a8e5d6fda0cf9cb36171954b7b7aa1e86832c83.scope: Deactivated successfully.
Feb 23 17:50:45 ip-10-0-136-68 systemd[1]: Started libcontainer container 75e4ec6a0777f1f5663b0e32e91030b1e16da4a687d4f9f2a71058ade28fc6c5.
Feb 23 17:50:45 ip-10-0-136-68 systemd[1]: Started libcontainer container 7a53cc016020b45ed3cdff1bbe3c049ccedc2e5e784714ec0c8d1c78ceb6ab71.
Feb 23 17:50:45 ip-10-0-136-68 systemd[1]: crio-cd1391ac9c1e2873e874c48875a0141f448a44e641a919a2558cf2b7c59b3877.scope: Deactivated successfully.
Feb 23 17:50:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:45.300381243Z" level=info msg="Created container 24d6f40fa5952a383b0081e0f6f24c6f42c173f3d9e2677394190a64d7632c47: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=fa8a7e21-bc6c-4e26-9aaf-3ce70f509109 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:50:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:45.301032298Z" level=info msg="Starting container: 24d6f40fa5952a383b0081e0f6f24c6f42c173f3d9e2677394190a64d7632c47" id=1036f41a-7565-4bab-adb9-53fbab6be767 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 17:50:45 ip-10-0-136-68 conmon[2814]: conmon 75e4ec6a0777f1f5663b : Failed to write to cgroup.event_control Operation not supported
Feb 23 17:50:45 ip-10-0-136-68 systemd[1]: crio-conmon-75e4ec6a0777f1f5663b0e32e91030b1e16da4a687d4f9f2a71058ade28fc6c5.scope: Deactivated successfully.
Feb 23 17:50:45 ip-10-0-136-68 conmon[2841]: conmon 7a53cc016020b45ed3cd : Failed to write to cgroup.event_control Operation not supported
Feb 23 17:50:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:45.315409364Z" level=info msg="Started container" PID=2778 containerID=24d6f40fa5952a383b0081e0f6f24c6f42c173f3d9e2677394190a64d7632c47 description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver id=1036f41a-7565-4bab-adb9-53fbab6be767 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4ac357af36e7dd4792f93249425e93fd84aed45840e9fbb3d0ae6c0af964798
Feb 23 17:50:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:45.318717774Z" level=info msg="Image registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:87f1812b55e14f104d3ab49c06711f5ba37e94490717470e42c12749fe3c90ad not found" id=3a71a407-6a96-4160-830e-1cd69f8ac523 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:50:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:45.319889313Z" level=info msg="Pulling image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:87f1812b55e14f104d3ab49c06711f5ba37e94490717470e42c12749fe3c90ad" id=e8bd5ee4-388f-41e8-b018-0c23873c6797 name=/runtime.v1.ImageService/PullImage
Feb 23 17:50:45 ip-10-0-136-68 systemd[1]: crio-conmon-7a53cc016020b45ed3cdff1bbe3c049ccedc2e5e784714ec0c8d1c78ceb6ab71.scope: Deactivated successfully.
Feb 23 17:50:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:45.331587240Z" level=info msg="Created container 75e4ec6a0777f1f5663b0e32e91030b1e16da4a687d4f9f2a71058ade28fc6c5: openshift-multus/multus-4f66c/kube-multus" id=b38c3f0d-df72-414e-9fc9-d00407155a09 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:50:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:45.332034782Z" level=info msg="Starting container: 75e4ec6a0777f1f5663b0e32e91030b1e16da4a687d4f9f2a71058ade28fc6c5" id=88693b69-92e0-4470-8efb-d71a87d3b57b name=/runtime.v1.RuntimeService/StartContainer
Feb 23 17:50:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:45.336982181Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:87f1812b55e14f104d3ab49c06711f5ba37e94490717470e42c12749fe3c90ad\""
Feb 23 17:50:45 ip-10-0-136-68 systemd[1]: crio-24d6f40fa5952a383b0081e0f6f24c6f42c173f3d9e2677394190a64d7632c47.scope: Deactivated successfully.
Feb 23 17:50:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:45.342617242Z" level=info msg="Started container" PID=2859 containerID=75e4ec6a0777f1f5663b0e32e91030b1e16da4a687d4f9f2a71058ade28fc6c5 description=openshift-multus/multus-4f66c/kube-multus id=88693b69-92e0-4470-8efb-d71a87d3b57b name=/runtime.v1.RuntimeService/StartContainer sandboxID=9136827168fd7c2f146c6d27eeec2a74a7638d73f5d914b4bcce6b2623c7fa79
Feb 23 17:50:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:45.344525141Z" level=info msg="Created container 7a53cc016020b45ed3cdff1bbe3c049ccedc2e5e784714ec0c8d1c78ceb6ab71: openshift-image-registry/node-ca-wsg6f/node-ca" id=a6fe244c-bc2a-4d52-8ed9-119449c1a688 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:50:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:45.345591840Z" level=info msg="Starting container: 7a53cc016020b45ed3cdff1bbe3c049ccedc2e5e784714ec0c8d1c78ceb6ab71" id=609936f6-f5de-4786-9ac6-fdf40e44563e name=/runtime.v1.RuntimeService/StartContainer
Feb 23 17:50:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:45.353180940Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:c40704cbcee782b7629fcbe26b96c42b11444076c53ecd28f0e50f7e5efeb4d7" id=a1d7ef53-18ee-46fc-900b-61905c5f468e name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:50:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:45.353705224Z" level=info msg="Image registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:c40704cbcee782b7629fcbe26b96c42b11444076c53ecd28f0e50f7e5efeb4d7 not found" id=a1d7ef53-18ee-46fc-900b-61905c5f468e name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:50:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:45.354374965Z" level=info msg="Pulling image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:c40704cbcee782b7629fcbe26b96c42b11444076c53ecd28f0e50f7e5efeb4d7" id=1247038c-af5d-447b-bdbf-6bffc654c383 name=/runtime.v1.ImageService/PullImage
Feb 23 17:50:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:45.355175680Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:c40704cbcee782b7629fcbe26b96c42b11444076c53ecd28f0e50f7e5efeb4d7\""
Feb 23 17:50:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:45.355758843Z" level=info msg="Started container" PID=2875 containerID=7a53cc016020b45ed3cdff1bbe3c049ccedc2e5e784714ec0c8d1c78ceb6ab71 description=openshift-image-registry/node-ca-wsg6f/node-ca id=609936f6-f5de-4786-9ac6-fdf40e44563e name=/runtime.v1.RuntimeService/StartContainer sandboxID=f7eed36a3ebc5617599e8ab9d023f6abfc17c9a33a31a0fd10e28e2cc4220191
Feb 23 17:50:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:45.358938820Z" level=info msg="Pulled image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:34162fe37a9a1757631fc63b80357cf2a889523ada38dbb4afefb424289f3738" id=f0954a3c-ff76-4ed6-bd73-f58bb30acfde name=/runtime.v1.ImageService/PullImage
Feb 23 17:50:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:45.362043269Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:34162fe37a9a1757631fc63b80357cf2a889523ada38dbb4afefb424289f3738" id=309cd46e-91bb-41ad-896c-fd9ac8f04ca9 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:50:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:45.363419902Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:62193d6a7bd5f13f6274858bc3a171ed936272ebc5eb1116b65ceeae936c136b,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:34162fe37a9a1757631fc63b80357cf2a889523ada38dbb4afefb424289f3738],Size_:470168569,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=309cd46e-91bb-41ad-896c-fd9ac8f04ca9 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:50:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:45.364034682Z" level=info msg="Creating container: openshift-multus/multus-additional-cni-plugins-nqwsg/egress-router-binary-copy" id=8a372cdc-2def-4637-97c7-088f67e538b6 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:50:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:45.364132194Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:50:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:45.412411680Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/upgrade_77192881-737c-4750-a788-4f70e4e1bddc\""
Feb 23 17:50:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:45.428929950Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 17:50:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:45.428958868Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 17:50:45 ip-10-0-136-68 systemd[1]: crio-75e4ec6a0777f1f5663b0e32e91030b1e16da4a687d4f9f2a71058ade28fc6c5.scope: Deactivated successfully.
Feb 23 17:50:45 ip-10-0-136-68 systemd[1]: Started crio-conmon-04fe422f4915b85d25ed8c7af4b49c9f61ab6fb3266d884b15c594f83c05cf94.scope.
Feb 23 17:50:45 ip-10-0-136-68 systemd[1]: Started libcontainer container 04fe422f4915b85d25ed8c7af4b49c9f61ab6fb3266d884b15c594f83c05cf94.
Feb 23 17:50:45 ip-10-0-136-68 conmon[2990]: conmon 04fe422f4915b85d25ed : Failed to write to cgroup.event_control Operation not supported
Feb 23 17:50:45 ip-10-0-136-68 systemd[1]: crio-conmon-04fe422f4915b85d25ed8c7af4b49c9f61ab6fb3266d884b15c594f83c05cf94.scope: Deactivated successfully.
Feb 23 17:50:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:45.548842501Z" level=info msg="Created container 04fe422f4915b85d25ed8c7af4b49c9f61ab6fb3266d884b15c594f83c05cf94: openshift-multus/multus-additional-cni-plugins-nqwsg/egress-router-binary-copy" id=8a372cdc-2def-4637-97c7-088f67e538b6 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:50:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:45.549494167Z" level=info msg="Starting container: 04fe422f4915b85d25ed8c7af4b49c9f61ab6fb3266d884b15c594f83c05cf94" id=9148f2c8-2069-45b0-86ee-430b493a0aec name=/runtime.v1.RuntimeService/StartContainer
Feb 23 17:50:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:45.556857238Z" level=info msg="Started container" PID=3002 containerID=04fe422f4915b85d25ed8c7af4b49c9f61ab6fb3266d884b15c594f83c05cf94 description=openshift-multus/multus-additional-cni-plugins-nqwsg/egress-router-binary-copy id=9148f2c8-2069-45b0-86ee-430b493a0aec name=/runtime.v1.RuntimeService/StartContainer sandboxID=9e5bdc232b6b7ab0111ce21e2e1992837a8c43b407cc4e5eccbc10770bc79ab1
Feb 23 17:50:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:45.561488857Z" level=info msg="CNI monitoring event CREATE \"/var/lib/cni/bin/upgrade_51eff2d6-3553-41be-a938-b80592d896e8\""
Feb 23 17:50:45 ip-10-0-136-68 systemd[1]: crio-31fe800873e8bae69352a038092bcd43fe44b5082163749615dae9a0cc38e8af.scope: Deactivated successfully.
Feb 23 17:50:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:45.574591401Z" level=info msg="Found CNI network multus-cni-network (type=multus) at /etc/kubernetes/cni/net.d/00-multus.conf"
Feb 23 17:50:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:45.574615572Z" level=info msg="Updated default CNI network name to multus-cni-network"
Feb 23 17:50:45 ip-10-0-136-68 systemd[1]: crio-04fe422f4915b85d25ed8c7af4b49c9f61ab6fb3266d884b15c594f83c05cf94.scope: Deactivated successfully.
Feb 23 17:50:45 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:45.825786 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-wsg6f" event=&{ID:bd2da6fb-b383-40fe-a3ad-b6436a02985b Type:ContainerStarted Data:7a53cc016020b45ed3cdff1bbe3c049ccedc2e5e784714ec0c8d1c78ceb6ab71}
Feb 23 17:50:45 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:45.826708 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerStarted Data:24d6f40fa5952a383b0081e0f6f24c6f42c173f3d9e2677394190a64d7632c47}
Feb 23 17:50:45 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:45.827513 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nqwsg" event=&{ID:7f25c5a9-b9c7-4220-a892-362cf6b33878 Type:ContainerStarted Data:04fe422f4915b85d25ed8c7af4b49c9f61ab6fb3266d884b15c594f83c05cf94}
Feb 23 17:50:45 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:45.828401 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-nt8h7" event=&{ID:3e3e7655-5c60-4995-9a23-b32843026a6e Type:ContainerStarted Data:31fe800873e8bae69352a038092bcd43fe44b5082163749615dae9a0cc38e8af}
Feb 23 17:50:45 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:45.828907 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2fx68" event=&{ID:ff7777c7-a1dc-413e-8da1-c4ba07527037 Type:ContainerStarted Data:d00a84419f0507d332c405435a8e5d6fda0cf9cb36171954b7b7aa1e86832c83}
Feb 23 17:50:45 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:45.829416 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-zzwb5" event=&{ID:a5ccef55-3f5c-4ffc-82f9-586324e62a37 Type:ContainerStarted Data:cd1391ac9c1e2873e874c48875a0141f448a44e641a919a2558cf2b7c59b3877}
Feb 23 17:50:45 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:45.830293 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4f66c" event=&{ID:9eb4a126-482c-4458-b901-e2e7a15dfd93 Type:ContainerStarted Data:75e4ec6a0777f1f5663b0e32e91030b1e16da4a687d4f9f2a71058ade28fc6c5}
Feb 23 17:50:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:45.991164164Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:c40704cbcee782b7629fcbe26b96c42b11444076c53ecd28f0e50f7e5efeb4d7\""
Feb 23 17:50:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:46.008375471Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:87f1812b55e14f104d3ab49c06711f5ba37e94490717470e42c12749fe3c90ad\""
Feb 23 17:50:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:48.222350278Z" level=info msg="Pulled image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:c40704cbcee782b7629fcbe26b96c42b11444076c53ecd28f0e50f7e5efeb4d7" id=1247038c-af5d-447b-bdbf-6bffc654c383 name=/runtime.v1.ImageService/PullImage
Feb 23 17:50:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:48.223043725Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:c40704cbcee782b7629fcbe26b96c42b11444076c53ecd28f0e50f7e5efeb4d7" id=1197e0f0-07b1-4746-b47e-339a37fe7985 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:50:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:48.224681017Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ae90363e0687fc12bc8ed8a2a77d165dc67626c1a60ee8d602e0319b2f949960,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:c40704cbcee782b7629fcbe26b96c42b11444076c53ecd28f0e50f7e5efeb4d7],Size_:368500613,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=1197e0f0-07b1-4746-b47e-339a37fe7985 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:50:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:48.225361362Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-node-driver-registrar" id=1fe55359-2c52-46ba-88df-f032cd0517ab name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:50:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:48.225477297Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:50:48 ip-10-0-136-68 systemd[1]: Started crio-conmon-20bf1b91d8d43339f28698c219e50f696dbb87d2fc05826b74db01d2e2cc4264.scope.
Feb 23 17:50:48 ip-10-0-136-68 systemd[1]: Started libcontainer container 20bf1b91d8d43339f28698c219e50f696dbb87d2fc05826b74db01d2e2cc4264.
Feb 23 17:50:48 ip-10-0-136-68 conmon[3103]: conmon 20bf1b91d8d43339f286 : Failed to write to cgroup.event_control Operation not supported
Feb 23 17:50:48 ip-10-0-136-68 systemd[1]: crio-conmon-20bf1b91d8d43339f28698c219e50f696dbb87d2fc05826b74db01d2e2cc4264.scope: Deactivated successfully.
Feb 23 17:50:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:48.570589260Z" level=info msg="Created container 20bf1b91d8d43339f28698c219e50f696dbb87d2fc05826b74db01d2e2cc4264: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-node-driver-registrar" id=1fe55359-2c52-46ba-88df-f032cd0517ab name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:50:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:48.571209558Z" level=info msg="Starting container: 20bf1b91d8d43339f28698c219e50f696dbb87d2fc05826b74db01d2e2cc4264" id=89dbbd9c-a37d-4130-baa1-7d3d43c2e0f1 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 17:50:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:48.579036076Z" level=info msg="Started container" PID=3115 containerID=20bf1b91d8d43339f28698c219e50f696dbb87d2fc05826b74db01d2e2cc4264 description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-node-driver-registrar id=89dbbd9c-a37d-4130-baa1-7d3d43c2e0f1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4ac357af36e7dd4792f93249425e93fd84aed45840e9fbb3d0ae6c0af964798
Feb 23 17:50:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:48.590328510Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:a70c56affa754b4a0cd72b5e6870bdedef0ea5b68a9252a43615fdfadc6f0986" id=c95caf0d-789d-4208-b88e-03c4b3082dbd name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:50:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:48.590564509Z" level=info msg="Image registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:a70c56affa754b4a0cd72b5e6870bdedef0ea5b68a9252a43615fdfadc6f0986 not found" id=c95caf0d-789d-4208-b88e-03c4b3082dbd name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:50:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:48.591039390Z" level=info msg="Pulling image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:a70c56affa754b4a0cd72b5e6870bdedef0ea5b68a9252a43615fdfadc6f0986" id=fbfa8351-d6d8-4e8e-b424-0e8fe5c5fed0 name=/runtime.v1.ImageService/PullImage
Feb 23 17:50:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:48.591956168Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:a70c56affa754b4a0cd72b5e6870bdedef0ea5b68a9252a43615fdfadc6f0986\""
Feb 23 17:50:48 ip-10-0-136-68 systemd[1]: crio-20bf1b91d8d43339f28698c219e50f696dbb87d2fc05826b74db01d2e2cc4264.scope: Deactivated successfully.
Feb 23 17:50:48 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:48.856127 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerStarted Data:20bf1b91d8d43339f28698c219e50f696dbb87d2fc05826b74db01d2e2cc4264}
Feb 23 17:50:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:48.993485294Z" level=warning msg="Failed to find container exit file for b4f57cb23a798e177545e914f41a13b9bb35feb557432989cdce559214c2ecfa: timed out waiting for the condition" id=c940123a-b1af-4487-8111-dc06f06e8f60 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 17:50:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:49.002750508Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:6667c4aac31549982ef3d098b826a0ccfa3c9542350312642d0b4270b018433a" id=d876d638-5e41-4eb5-8422-005e63258f9d name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:50:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:49.238418278Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:d979f2d334f2f1645227fcd91eb640e4be627e8618658519ab8194bf4b104db0,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:6667c4aac31549982ef3d098b826a0ccfa3c9542350312642d0b4270b018433a],Size_:1146370016,Uid:nil,Username:root,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=d876d638-5e41-4eb5-8422-005e63258f9d name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:50:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:49.239305702Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:6667c4aac31549982ef3d098b826a0ccfa3c9542350312642d0b4270b018433a" id=391ac4c0-3980-411c-b20a-60e8a5ee8993 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:50:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:49.240847860Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:d979f2d334f2f1645227fcd91eb640e4be627e8618658519ab8194bf4b104db0,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:6667c4aac31549982ef3d098b826a0ccfa3c9542350312642d0b4270b018433a],Size_:1146370016,Uid:nil,Username:root,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=391ac4c0-3980-411c-b20a-60e8a5ee8993 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:50:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:49.241512227Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-node-gzbrl/ovn-acl-logging" id=fec7e0a9-95af-4675-9ee5-cd758b421ff1 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:50:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:49.241628574Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:50:49 ip-10-0-136-68 systemd[1]: Started crio-conmon-92333817837c49a90924e648f8db16ff590a32ee6eb37858fa2329af7e977989.scope.
Feb 23 17:50:49 ip-10-0-136-68 systemd[1]: Started libcontainer container 92333817837c49a90924e648f8db16ff590a32ee6eb37858fa2329af7e977989.
Feb 23 17:50:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:49.291864145Z" level=info msg="Pulled image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:87f1812b55e14f104d3ab49c06711f5ba37e94490717470e42c12749fe3c90ad" id=e8bd5ee4-388f-41e8-b018-0c23873c6797 name=/runtime.v1.ImageService/PullImage
Feb 23 17:50:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:49.292904021Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:87f1812b55e14f104d3ab49c06711f5ba37e94490717470e42c12749fe3c90ad" id=c2414135-da13-4981-a7a2-3d6835e87ebd name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:50:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:49.294037210Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c1d577960d1c46e90165da215c04054d71634cb8701ebd504e510368ee7bd65,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:87f1812b55e14f104d3ab49c06711f5ba37e94490717470e42c12749fe3c90ad],Size_:366055841,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=c2414135-da13-4981-a7a2-3d6835e87ebd name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:50:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:49.294613006Z" level=info msg="Creating container: openshift-machine-config-operator/machine-config-daemon-2fx68/oauth-proxy" id=0171aa34-49ec-4568-9bb9-3feef3dcd1df name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:50:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:49.294743360Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:50:49 ip-10-0-136-68 systemd[1]: Started crio-conmon-036145f74184002dce6a778923e20470a076b518eadb95f55265581e8ef77d08.scope.
Feb 23 17:50:49 ip-10-0-136-68 systemd[1]: Started libcontainer container 036145f74184002dce6a778923e20470a076b518eadb95f55265581e8ef77d08.
Feb 23 17:50:49 ip-10-0-136-68 conmon[3185]: conmon 92333817837c49a90924 : Failed to write to cgroup.event_control Operation not supported
Feb 23 17:50:49 ip-10-0-136-68 systemd[1]: crio-conmon-92333817837c49a90924e648f8db16ff590a32ee6eb37858fa2329af7e977989.scope: Deactivated successfully.
Feb 23 17:50:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:49.331329294Z" level=info msg="Created container 92333817837c49a90924e648f8db16ff590a32ee6eb37858fa2329af7e977989: openshift-ovn-kubernetes/ovnkube-node-gzbrl/ovn-acl-logging" id=fec7e0a9-95af-4675-9ee5-cd758b421ff1 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:50:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:49.331766452Z" level=info msg="Starting container: 92333817837c49a90924e648f8db16ff590a32ee6eb37858fa2329af7e977989" id=6db0d755-f0d6-4f28-bde2-6fc94313c064 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 17:50:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:49.338853876Z" level=info msg="Started container" PID=3197 containerID=92333817837c49a90924e648f8db16ff590a32ee6eb37858fa2329af7e977989 description=openshift-ovn-kubernetes/ovnkube-node-gzbrl/ovn-acl-logging id=6db0d755-f0d6-4f28-bde2-6fc94313c064 name=/runtime.v1.RuntimeService/StartContainer sandboxID=644ef2eb51b320bceef0b684a976c066a9c5c1588201f3c1c82fef93b7f846ad
Feb 23 17:50:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:49.347377455Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0" id=d61f806f-cb34-4f55-a2ec-cc118c00e585 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:50:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:49.347537424Z" level=info msg="Image registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0 not found" id=d61f806f-cb34-4f55-a2ec-cc118c00e585 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:50:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:49.348064215Z" level=info msg="Pulling image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0" id=2fa29031-9baa-41bb-9128-b49a4be71a16 name=/runtime.v1.ImageService/PullImage
Feb 23 17:50:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:49.350385521Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0\""
Feb 23 17:50:49 ip-10-0-136-68 systemd[1]: crio-92333817837c49a90924e648f8db16ff590a32ee6eb37858fa2329af7e977989.scope: Deactivated successfully.
Feb 23 17:50:49 ip-10-0-136-68 conmon[3204]: conmon 036145f74184002dce6a : Failed to write to cgroup.event_control Operation not supported
Feb 23 17:50:49 ip-10-0-136-68 systemd[1]: crio-conmon-036145f74184002dce6a778923e20470a076b518eadb95f55265581e8ef77d08.scope: Deactivated successfully.
Feb 23 17:50:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:49.378836863Z" level=info msg="Created container 036145f74184002dce6a778923e20470a076b518eadb95f55265581e8ef77d08: openshift-machine-config-operator/machine-config-daemon-2fx68/oauth-proxy" id=0171aa34-49ec-4568-9bb9-3feef3dcd1df name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:50:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:49.379213227Z" level=info msg="Starting container: 036145f74184002dce6a778923e20470a076b518eadb95f55265581e8ef77d08" id=e0fe3566-cf0a-41ab-9770-781889d6da12 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 17:50:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:49.386367205Z" level=info msg="Started container" PID=3223 containerID=036145f74184002dce6a778923e20470a076b518eadb95f55265581e8ef77d08 description=openshift-machine-config-operator/machine-config-daemon-2fx68/oauth-proxy id=e0fe3566-cf0a-41ab-9770-781889d6da12 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e3d261c789a6825c0eafc2f8c4093501c52843e623015724428d3482116fa0f1
Feb 23 17:50:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:49.402981491Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:a70c56affa754b4a0cd72b5e6870bdedef0ea5b68a9252a43615fdfadc6f0986\""
Feb 23 17:50:49 ip-10-0-136-68 systemd[1]: crio-036145f74184002dce6a778923e20470a076b518eadb95f55265581e8ef77d08.scope: Deactivated successfully.
Feb 23 17:50:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:50.074231227Z" level=info msg="Trying to access \"registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0\""
Feb 23 17:50:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:51.592605843Z" level=info msg="Pulled image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:a70c56affa754b4a0cd72b5e6870bdedef0ea5b68a9252a43615fdfadc6f0986" id=fbfa8351-d6d8-4e8e-b424-0e8fe5c5fed0 name=/runtime.v1.ImageService/PullImage
Feb 23 17:50:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:51.593551932Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:a70c56affa754b4a0cd72b5e6870bdedef0ea5b68a9252a43615fdfadc6f0986" id=878f7fb4-9535-478e-83ad-8734feca25a5 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:50:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:51.594849578Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e58f76855491f5bce249b50904350a7a43dfb3161623bf950b71fe1b27cf5b01,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:a70c56affa754b4a0cd72b5e6870bdedef0ea5b68a9252a43615fdfadc6f0986],Size_:366474395,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=878f7fb4-9535-478e-83ad-8734feca25a5 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:50:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:51.595500763Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-liveness-probe" id=15a8523f-5da4-4f1a-9ebd-fca049f7a693 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:50:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:51.595681797Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:50:51 ip-10-0-136-68 systemd[1]: Started crio-conmon-b221b720d57417ac2fe94fc853fc58947e34647e096d3970f465b47edfc85047.scope.
Feb 23 17:50:51 ip-10-0-136-68 systemd[1]: Started libcontainer container b221b720d57417ac2fe94fc853fc58947e34647e096d3970f465b47edfc85047.
Feb 23 17:50:51 ip-10-0-136-68 conmon[3298]: conmon b221b720d57417ac2fe9 : Failed to write to cgroup.event_control Operation not supported
Feb 23 17:50:51 ip-10-0-136-68 systemd[1]: crio-conmon-b221b720d57417ac2fe94fc853fc58947e34647e096d3970f465b47edfc85047.scope: Deactivated successfully.
Feb 23 17:50:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:51.795184951Z" level=info msg="Created container b221b720d57417ac2fe94fc853fc58947e34647e096d3970f465b47edfc85047: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-liveness-probe" id=15a8523f-5da4-4f1a-9ebd-fca049f7a693 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:50:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:51.795649503Z" level=info msg="Starting container: b221b720d57417ac2fe94fc853fc58947e34647e096d3970f465b47edfc85047" id=676cc3f5-870a-4cea-a93f-0dd9af40d6fc name=/runtime.v1.RuntimeService/StartContainer
Feb 23 17:50:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:51.802350229Z" level=info msg="Started container" PID=3310 containerID=b221b720d57417ac2fe94fc853fc58947e34647e096d3970f465b47edfc85047 description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-liveness-probe id=676cc3f5-870a-4cea-a93f-0dd9af40d6fc name=/runtime.v1.RuntimeService/StartContainer
sandboxID=a4ac357af36e7dd4792f93249425e93fd84aed45840e9fbb3d0ae6c0af964798 Feb 23 17:50:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:52.715849783Z" level=info msg="Pulled image: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0" id=2fa29031-9baa-41bb-9128-b49a4be71a16 name=/runtime.v1.ImageService/PullImage Feb 23 17:50:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:52.716685020Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0" id=fafd2956-8891-436e-941d-151c70f03337 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:50:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:52.717954839Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e8505ce6d56c9d2fa3ddbbc4dd2c4096db686f042ecf82eed342af6e60223854,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0],Size_:402716043,Uid:&Int64Value{Value:65534,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=fafd2956-8891-436e-941d-151c70f03337 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:50:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:52.718678973Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-node-gzbrl/kube-rbac-proxy" id=1a44eed0-04b3-4456-940f-3dbd7a7967e9 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:50:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:52.718782286Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:50:52 ip-10-0-136-68 systemd[1]: Started crio-conmon-1aafaf8dd34c30f063760c9734fa58619760b689070977869ea27552d10f744f.scope. 
Feb 23 17:50:52 ip-10-0-136-68 systemd[1]: Started libcontainer container 1aafaf8dd34c30f063760c9734fa58619760b689070977869ea27552d10f744f. Feb 23 17:50:52 ip-10-0-136-68 conmon[3357]: conmon 1aafaf8dd34c30f06376 : Failed to write to cgroup.event_control Operation not supported Feb 23 17:50:52 ip-10-0-136-68 systemd[1]: crio-conmon-1aafaf8dd34c30f063760c9734fa58619760b689070977869ea27552d10f744f.scope: Deactivated successfully. Feb 23 17:50:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:52.825004499Z" level=info msg="Created container 1aafaf8dd34c30f063760c9734fa58619760b689070977869ea27552d10f744f: openshift-ovn-kubernetes/ovnkube-node-gzbrl/kube-rbac-proxy" id=1a44eed0-04b3-4456-940f-3dbd7a7967e9 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:50:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:52.825506464Z" level=info msg="Starting container: 1aafaf8dd34c30f063760c9734fa58619760b689070977869ea27552d10f744f" id=9868186e-dfd1-474c-8401-6afd7e3e8382 name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:50:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:52.844519421Z" level=info msg="Started container" PID=3369 containerID=1aafaf8dd34c30f063760c9734fa58619760b689070977869ea27552d10f744f description=openshift-ovn-kubernetes/ovnkube-node-gzbrl/kube-rbac-proxy id=9868186e-dfd1-474c-8401-6afd7e3e8382 name=/runtime.v1.RuntimeService/StartContainer sandboxID=644ef2eb51b320bceef0b684a976c066a9c5c1588201f3c1c82fef93b7f846ad Feb 23 17:50:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:52.852473812Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0" id=05abb777-2bdc-4799-91aa-45d0b219a72f name=/runtime.v1.ImageService/ImageStatus Feb 23 17:50:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:52.853839382Z" level=info msg="Image status: 
&ImageStatusResponse{Image:&Image{Id:e8505ce6d56c9d2fa3ddbbc4dd2c4096db686f042ecf82eed342af6e60223854,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0],Size_:402716043,Uid:&Int64Value{Value:65534,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=05abb777-2bdc-4799-91aa-45d0b219a72f name=/runtime.v1.ImageService/ImageStatus Feb 23 17:50:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:52.854433207Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0" id=a87b020b-25de-4876-bb50-d16977d1bca5 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:50:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:52.855831950Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e8505ce6d56c9d2fa3ddbbc4dd2c4096db686f042ecf82eed342af6e60223854,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:3d58c835d309fe9499bf61ae2b32c151a1719bacc32cb7bda4bc3f7936bfc2e0],Size_:402716043,Uid:&Int64Value{Value:65534,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=a87b020b-25de-4876-bb50-d16977d1bca5 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:50:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:52.856583266Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-node-gzbrl/kube-rbac-proxy-ovn-metrics" id=2bd2c7d5-ad57-412d-9e9a-d37517f4e035 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:50:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:52.856765293Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:50:52 ip-10-0-136-68 systemd[1]: 
crio-1aafaf8dd34c30f063760c9734fa58619760b689070977869ea27552d10f744f.scope: Deactivated successfully. Feb 23 17:50:52 ip-10-0-136-68 systemd[1]: Started crio-conmon-2524d47ffae9271726a1fc5f4d505d43523ccbee1506a4648807016a91478f1e.scope. Feb 23 17:50:52 ip-10-0-136-68 systemd[1]: Started libcontainer container 2524d47ffae9271726a1fc5f4d505d43523ccbee1506a4648807016a91478f1e. Feb 23 17:50:52 ip-10-0-136-68 conmon[3401]: conmon 2524d47ffae9271726a1 : Failed to write to cgroup.event_control Operation not supported Feb 23 17:50:52 ip-10-0-136-68 systemd[1]: crio-conmon-2524d47ffae9271726a1fc5f4d505d43523ccbee1506a4648807016a91478f1e.scope: Deactivated successfully. Feb 23 17:50:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:52.948615610Z" level=info msg="Created container 2524d47ffae9271726a1fc5f4d505d43523ccbee1506a4648807016a91478f1e: openshift-ovn-kubernetes/ovnkube-node-gzbrl/kube-rbac-proxy-ovn-metrics" id=2bd2c7d5-ad57-412d-9e9a-d37517f4e035 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:50:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:52.949076135Z" level=info msg="Starting container: 2524d47ffae9271726a1fc5f4d505d43523ccbee1506a4648807016a91478f1e" id=7e877265-e35f-4225-864b-232e3dc77cbb name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:50:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:52.955940369Z" level=info msg="Started container" PID=3413 containerID=2524d47ffae9271726a1fc5f4d505d43523ccbee1506a4648807016a91478f1e description=openshift-ovn-kubernetes/ovnkube-node-gzbrl/kube-rbac-proxy-ovn-metrics id=7e877265-e35f-4225-864b-232e3dc77cbb name=/runtime.v1.RuntimeService/StartContainer sandboxID=644ef2eb51b320bceef0b684a976c066a9c5c1588201f3c1c82fef93b7f846ad Feb 23 17:50:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:52.963707120Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:6667c4aac31549982ef3d098b826a0ccfa3c9542350312642d0b4270b018433a" 
id=a5ae5986-7b3b-4a6d-afa3-12323cb33d41 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:50:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:52.963924653Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:d979f2d334f2f1645227fcd91eb640e4be627e8618658519ab8194bf4b104db0,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:6667c4aac31549982ef3d098b826a0ccfa3c9542350312642d0b4270b018433a],Size_:1146370016,Uid:nil,Username:root,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=a5ae5986-7b3b-4a6d-afa3-12323cb33d41 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:50:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:52.964530414Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:6667c4aac31549982ef3d098b826a0ccfa3c9542350312642d0b4270b018433a" id=5eddc025-02d6-41dd-a448-c8762f05e8f7 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:50:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:52.964735646Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:d979f2d334f2f1645227fcd91eb640e4be627e8618658519ab8194bf4b104db0,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:6667c4aac31549982ef3d098b826a0ccfa3c9542350312642d0b4270b018433a],Size_:1146370016,Uid:nil,Username:root,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=5eddc025-02d6-41dd-a448-c8762f05e8f7 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:50:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:52.965858319Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-node-gzbrl/ovnkube-node" id=19ce4692-319f-45b9-ba93-654ed962b615 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:50:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:52.966039188Z" level=warning msg="Allowed annotations are specified for workload 
[io.containers.trace-syscall]" Feb 23 17:50:52 ip-10-0-136-68 systemd[1]: crio-2524d47ffae9271726a1fc5f4d505d43523ccbee1506a4648807016a91478f1e.scope: Deactivated successfully. Feb 23 17:50:52 ip-10-0-136-68 systemd[1]: Started crio-conmon-833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91.scope. Feb 23 17:50:52 ip-10-0-136-68 systemd[1]: Started libcontainer container 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91. Feb 23 17:50:53 ip-10-0-136-68 conmon[3445]: conmon 833755728b8e8f368af6 : Failed to write to cgroup.event_control Operation not supported Feb 23 17:50:53 ip-10-0-136-68 systemd[1]: crio-conmon-833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91.scope: Deactivated successfully. Feb 23 17:50:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:53.031952600Z" level=info msg="Created container 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91: openshift-ovn-kubernetes/ovnkube-node-gzbrl/ovnkube-node" id=19ce4692-319f-45b9-ba93-654ed962b615 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:50:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:53.032376844Z" level=info msg="Starting container: 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" id=584dbb3f-1424-482f-ba73-b37c142014f4 name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:50:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:53.039465053Z" level=info msg="Started container" PID=3457 containerID=833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 description=openshift-ovn-kubernetes/ovnkube-node-gzbrl/ovnkube-node id=584dbb3f-1424-482f-ba73-b37c142014f4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=644ef2eb51b320bceef0b684a976c066a9c5c1588201f3c1c82fef93b7f846ad Feb 23 17:50:53 ip-10-0-136-68 systemd[1]: crio-833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91.scope: Deactivated successfully. 
Feb 23 17:50:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:53.595472068Z" level=warning msg="Failed to find container exit file for b4f57cb23a798e177545e914f41a13b9bb35feb557432989cdce559214c2ecfa: timed out waiting for the condition" id=8e08d986-13e5-48a6-9f01-3b6ab09013ec name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 17:50:53 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:53.602060 2199 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gzbrl_7da00340-9715-48ac-b144-4705de276bf5/ovn-controller/2.log" Feb 23 17:50:53 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:53.602236 2199 generic.go:332] "Generic (PLEG): container finished" podID=7da00340-9715-48ac-b144-4705de276bf5 containerID="b4f57cb23a798e177545e914f41a13b9bb35feb557432989cdce559214c2ecfa" exitCode=-1 Feb 23 17:50:53 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:53.602323 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" event=&{ID:7da00340-9715-48ac-b144-4705de276bf5 Type:ContainerDied Data:b4f57cb23a798e177545e914f41a13b9bb35feb557432989cdce559214c2ecfa} Feb 23 17:50:53 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:53.602352 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" event=&{ID:7da00340-9715-48ac-b144-4705de276bf5 Type:ContainerStarted Data:92333817837c49a90924e648f8db16ff590a32ee6eb37858fa2329af7e977989} Feb 23 17:50:53 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:53.603678 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2fx68" event=&{ID:ff7777c7-a1dc-413e-8da1-c4ba07527037 Type:ContainerStarted Data:036145f74184002dce6a778923e20470a076b518eadb95f55265581e8ef77d08} Feb 23 17:50:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:58.366909178Z" level=warning msg="Failed to find container exit file for 
b4f57cb23a798e177545e914f41a13b9bb35feb557432989cdce559214c2ecfa: timed out waiting for the condition" id=29461a14-8e40-4471-a948-d5a236627556 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 17:50:58 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:58.373403 2199 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gzbrl_7da00340-9715-48ac-b144-4705de276bf5/ovn-controller/2.log" Feb 23 17:50:58 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:58.373510 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" event=&{ID:7da00340-9715-48ac-b144-4705de276bf5 Type:ContainerStarted Data:833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91} Feb 23 17:50:58 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:58.373535 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" event=&{ID:7da00340-9715-48ac-b144-4705de276bf5 Type:ContainerStarted Data:2524d47ffae9271726a1fc5f4d505d43523ccbee1506a4648807016a91478f1e} Feb 23 17:50:58 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:58.373549 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" event=&{ID:7da00340-9715-48ac-b144-4705de276bf5 Type:ContainerStarted Data:1aafaf8dd34c30f063760c9734fa58619760b689070977869ea27552d10f744f} Feb 23 17:50:58 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:58.373853 2199 kubelet.go:2323] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" Feb 23 17:50:58 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:58.373862 2199 scope.go:115] "RemoveContainer" containerID="b4f57cb23a798e177545e914f41a13b9bb35feb557432989cdce559214c2ecfa" Feb 23 17:50:58 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:50:58.374486 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: 
checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 17:50:58 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:50:58.374697 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 17:50:58 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:50:58.374885 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 17:50:58 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:50:58.374916 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 17:50:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:58.375698274Z" level=info msg="Checking 
image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:6667c4aac31549982ef3d098b826a0ccfa3c9542350312642d0b4270b018433a" id=74d1d4ea-f9ad-4187-865c-e642006c490f name=/runtime.v1.ImageService/ImageStatus Feb 23 17:50:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:58.375907948Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:d979f2d334f2f1645227fcd91eb640e4be627e8618658519ab8194bf4b104db0,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:6667c4aac31549982ef3d098b826a0ccfa3c9542350312642d0b4270b018433a],Size_:1146370016,Uid:nil,Username:root,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=74d1d4ea-f9ad-4187-865c-e642006c490f name=/runtime.v1.ImageService/ImageStatus Feb 23 17:50:58 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:50:58.375948 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerStarted Data:b221b720d57417ac2fe94fc853fc58947e34647e096d3970f465b47edfc85047} Feb 23 17:50:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:58.376693069Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:6667c4aac31549982ef3d098b826a0ccfa3c9542350312642d0b4270b018433a" id=8bd4ddda-55ed-4d92-bc6e-871c60b56009 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:50:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:58.376917027Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:d979f2d334f2f1645227fcd91eb640e4be627e8618658519ab8194bf4b104db0,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:6667c4aac31549982ef3d098b826a0ccfa3c9542350312642d0b4270b018433a],Size_:1146370016,Uid:nil,Username:root,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" 
id=8bd4ddda-55ed-4d92-bc6e-871c60b56009 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:50:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:58.377557394Z" level=info msg="Creating container: openshift-ovn-kubernetes/ovnkube-node-gzbrl/ovn-controller" id=90939430-1df2-49b3-b589-7536416e6eb0 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:50:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:58.377657060Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:50:58 ip-10-0-136-68 systemd[1]: Started crio-conmon-7bb141e8ae5aef65d0c7ee6631f2337b1a9db3225c7f9ae8ba30a109012f788c.scope. Feb 23 17:50:58 ip-10-0-136-68 systemd[1]: Started libcontainer container 7bb141e8ae5aef65d0c7ee6631f2337b1a9db3225c7f9ae8ba30a109012f788c. Feb 23 17:50:58 ip-10-0-136-68 conmon[3515]: conmon 7bb141e8ae5aef65d0c7 : Failed to write to cgroup.event_control Operation not supported Feb 23 17:50:58 ip-10-0-136-68 systemd[1]: crio-conmon-7bb141e8ae5aef65d0c7ee6631f2337b1a9db3225c7f9ae8ba30a109012f788c.scope: Deactivated successfully. 
Feb 23 17:50:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:58.460479770Z" level=info msg="Created container 7bb141e8ae5aef65d0c7ee6631f2337b1a9db3225c7f9ae8ba30a109012f788c: openshift-ovn-kubernetes/ovnkube-node-gzbrl/ovn-controller" id=90939430-1df2-49b3-b589-7536416e6eb0 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:50:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:58.460871871Z" level=info msg="Starting container: 7bb141e8ae5aef65d0c7ee6631f2337b1a9db3225c7f9ae8ba30a109012f788c" id=8cb622da-c8f9-4ce9-8fc2-6d1c9da355d9 name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:50:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:50:58.467761469Z" level=info msg="Started container" PID=3527 containerID=7bb141e8ae5aef65d0c7ee6631f2337b1a9db3225c7f9ae8ba30a109012f788c description=openshift-ovn-kubernetes/ovnkube-node-gzbrl/ovn-controller id=8cb622da-c8f9-4ce9-8fc2-6d1c9da355d9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=644ef2eb51b320bceef0b684a976c066a9c5c1588201f3c1c82fef93b7f846ad Feb 23 17:50:58 ip-10-0-136-68 systemd[1]: crio-7bb141e8ae5aef65d0c7ee6631f2337b1a9db3225c7f9ae8ba30a109012f788c.scope: Deactivated successfully. 
Feb 23 17:51:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:01.459573398Z" level=error msg="Failed to cleanup (probably retrying): failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(aa2f6c1cfe2015e661750ad0d84d67c8d8f79e105601e0a761db6a83ba33b3f1): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" Feb 23 17:51:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:01.460002886Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:b7b64e2855e5363c7e5638220dfdc3f063f4dfbdcba871d05c966a6b5f95fc21 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/b7e45852-b47d-4b85-bf49-1012ef6cb4a4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:51:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:01.460026797Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:51:01 ip-10-0-136-68 systemd[1]: crio-b221b720d57417ac2fe94fc853fc58947e34647e096d3970f465b47edfc85047.scope: Deactivated successfully. 
Feb 23 17:51:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:01.959745040Z" level=info msg="cleanup sandbox network" Feb 23 17:51:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:03.128138555Z" level=warning msg="Failed to find container exit file for b4f57cb23a798e177545e914f41a13b9bb35feb557432989cdce559214c2ecfa: timed out waiting for the condition" id=ae084e47-d588-45b7-bbf4-666cf056b0e8 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 17:51:03 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:51:03.134898 2199 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gzbrl_7da00340-9715-48ac-b144-4705de276bf5/ovn-controller/2.log" Feb 23 17:51:03 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:51:03.134980 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" event=&{ID:7da00340-9715-48ac-b144-4705de276bf5 Type:ContainerStarted Data:7bb141e8ae5aef65d0c7ee6631f2337b1a9db3225c7f9ae8ba30a109012f788c} Feb 23 17:51:03 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:51:03.135656 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 17:51:03 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:51:03.135889 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" 
containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 17:51:03 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:51:03.136101 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 17:51:03 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:51:03.136128 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 17:51:04 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:51:04.137136 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 17:51:04 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:51:04.137410 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 17:51:04 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:51:04.137617 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 17:51:04 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:51:04.137671 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 17:51:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:51:04.872394 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 17:51:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:51:04.872452 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get 
\"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 17:51:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:08.219341682Z" level=info msg="NetworkStart: stopping network for sandbox 3ebc70781c4882f2db28f6ff9dcda9e35ad3320291d1a691a05d79676f6c1f3b" id=71809a78-a5c4-49ac-98f4-be08b6ec005a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:51:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:08.219454301Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:3ebc70781c4882f2db28f6ff9dcda9e35ad3320291d1a691a05d79676f6c1f3b UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/73bb14d9-0ad4-41e9-b1d6-ec87e51ce4e7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:51:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:08.219491374Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 17:51:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:08.219502225Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 17:51:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:08.219512700Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:51:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:08.274883786Z" level=info msg="NetworkStart: stopping network for sandbox 0b9c05c5b1726159147de513032221b1519eefdea7c22ab2a693f2537a7cf7fe" id=2376a751-3085-47ba-b9ae-b0adc5c75198 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:51:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:08.274992606Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:0b9c05c5b1726159147de513032221b1519eefdea7c22ab2a693f2537a7cf7fe UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 
NetNS:/var/run/netns/2de7a241-ee70-48d1-b91c-c722ad546ac8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:51:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:08.275026334Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 17:51:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:08.275038604Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 17:51:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:08.275049733Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:51:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:08.275188611Z" level=info msg="NetworkStart: stopping network for sandbox 962b9d65a74bd240af06f8970a8d62a7a831d7d439d72dae6bc55933b1cc3c96" id=e76ae792-e6bb-44c4-b1d9-753c9886932e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:51:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:08.275297193Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:962b9d65a74bd240af06f8970a8d62a7a831d7d439d72dae6bc55933b1cc3c96 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/c378de0e-43a9-49d5-aaa9-2e8b495a07e7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:51:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:08.275323499Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 17:51:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:08.275334424Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 17:51:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:08.275341905Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" 
(type=multus)" Feb 23 17:51:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:08.554575784Z" level=info msg="NetworkStart: stopping network for sandbox 4ce668592f29a41afe2fed366dc215655e2701fb515d2f6bfb1b64d6713c4d15" id=f97a9641-f8c7-497c-a912-f8f30969d484 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:51:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:08.554703162Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:4ce668592f29a41afe2fed366dc215655e2701fb515d2f6bfb1b64d6713c4d15 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/29e825fd-086c-4368-b25b-d1b10b7a5909 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:51:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:08.554733072Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 17:51:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:08.554744707Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 17:51:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:08.554751990Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:51:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:51:14.872493 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 17:51:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:51:14.872553 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" 
probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 17:51:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:51:24.872424 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 17:51:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:51:24.872617 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 17:51:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:51:26.292421 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 17:51:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:51:26.292709 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 17:51:26 ip-10-0-136-68 kubenswrapper[2199]: 
E0223 17:51:26.292961 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 17:51:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:51:26.293002 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 17:51:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:51:34.872631 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 17:51:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:51:34.872685 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 17:51:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:51:44.872951 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get 
\"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 17:51:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:51:44.873014 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 17:51:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:51:44.873048 2199 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 17:51:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:51:44.873741 2199 kuberuntime_manager.go:659] "Message for Container of pod" containerName="csi-driver" containerStatusID={Type:cri-o ID:24d6f40fa5952a383b0081e0f6f24c6f42c173f3d9e2677394190a64d7632c47} pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" containerMessage="Container csi-driver failed liveness probe, will be restarted" Feb 23 17:51:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:51:44.873988 2199 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" containerID="cri-o://24d6f40fa5952a383b0081e0f6f24c6f42c173f3d9e2677394190a64d7632c47" gracePeriod=30 Feb 23 17:51:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:44.874354322Z" level=info msg="Stopping container: 24d6f40fa5952a383b0081e0f6f24c6f42c173f3d9e2677394190a64d7632c47 (timeout: 30s)" id=e11f489b-ad9f-45e1-a30b-c1f65eca88e6 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:51:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:46.471712708Z" level=info msg="NetworkStart: stopping network for sandbox 
b7b64e2855e5363c7e5638220dfdc3f063f4dfbdcba871d05c966a6b5f95fc21" id=d2780a1c-a53d-4de9-98a4-f4859a006925 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:51:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:46.471767252Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:aa2f6c1cfe2015e661750ad0d84d67c8d8f79e105601e0a761db6a83ba33b3f1 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS: Networks:[{Name:multus-cni-network Ifname:eth0}] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:51:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:46.471945888Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:51:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:48.635962152Z" level=warning msg="Failed to find container exit file for 24d6f40fa5952a383b0081e0f6f24c6f42c173f3d9e2677394190a64d7632c47: timed out waiting for the condition" id=e11f489b-ad9f-45e1-a30b-c1f65eca88e6 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:51:48 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-c96ddc8c62b8c16d39d559dfd3369c2cb0b5a9ac5f88673bba0567465fb47905-merged.mount: Deactivated successfully. 
Feb 23 17:51:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:52.421193079Z" level=warning msg="Failed to find container exit file for 24d6f40fa5952a383b0081e0f6f24c6f42c173f3d9e2677394190a64d7632c47: timed out waiting for the condition" id=e11f489b-ad9f-45e1-a30b-c1f65eca88e6 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:51:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:52.423421752Z" level=info msg="Stopped container 24d6f40fa5952a383b0081e0f6f24c6f42c173f3d9e2677394190a64d7632c47: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=e11f489b-ad9f-45e1-a30b-c1f65eca88e6 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:51:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:52.424153664Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=0a945cc6-3187-42e5-92ed-ca146e9a27a7 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:51:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:52.424348670Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=0a945cc6-3187-42e5-92ed-ca146e9a27a7 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:51:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:52.424946981Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=a1f87a73-3773-4e94-896a-be58bb9feb7f name=/runtime.v1.ImageService/ImageStatus Feb 23 17:51:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 
17:51:52.425104282Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=a1f87a73-3773-4e94-896a-be58bb9feb7f name=/runtime.v1.ImageService/ImageStatus Feb 23 17:51:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:52.425836118Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=c9d0d2d9-cde9-4045-ad35-5f5947d03f63 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:51:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:52.425953062Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:51:52 ip-10-0-136-68 systemd[1]: Started crio-conmon-f37629ca906e242a557b75758ea21b4db1c4e700015a44ab52e5e87a51a75868.scope. Feb 23 17:51:52 ip-10-0-136-68 systemd[1]: Started libcontainer container f37629ca906e242a557b75758ea21b4db1c4e700015a44ab52e5e87a51a75868. Feb 23 17:51:52 ip-10-0-136-68 conmon[3679]: conmon f37629ca906e242a557b : Failed to write to cgroup.event_control Operation not supported Feb 23 17:51:52 ip-10-0-136-68 systemd[1]: crio-conmon-f37629ca906e242a557b75758ea21b4db1c4e700015a44ab52e5e87a51a75868.scope: Deactivated successfully. 
Feb 23 17:51:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:52.550570503Z" level=info msg="Created container f37629ca906e242a557b75758ea21b4db1c4e700015a44ab52e5e87a51a75868: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=c9d0d2d9-cde9-4045-ad35-5f5947d03f63 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:51:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:52.551080143Z" level=info msg="Starting container: f37629ca906e242a557b75758ea21b4db1c4e700015a44ab52e5e87a51a75868" id=3d4984ac-1bce-432b-bf47-78e4e5c6d028 name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:51:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:52.569701205Z" level=info msg="Started container" PID=3691 containerID=f37629ca906e242a557b75758ea21b4db1c4e700015a44ab52e5e87a51a75868 description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver id=3d4984ac-1bce-432b-bf47-78e4e5c6d028 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4ac357af36e7dd4792f93249425e93fd84aed45840e9fbb3d0ae6c0af964798 Feb 23 17:51:52 ip-10-0-136-68 systemd[1]: crio-f37629ca906e242a557b75758ea21b4db1c4e700015a44ab52e5e87a51a75868.scope: Deactivated successfully. 
Feb 23 17:51:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:52.953180708Z" level=warning msg="Failed to find container exit file for 24d6f40fa5952a383b0081e0f6f24c6f42c173f3d9e2677394190a64d7632c47: timed out waiting for the condition" id=809ebfb1-329a-4958-a893-8629a119391d name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 17:51:52 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:51:52.953562 2199 generic.go:332] "Generic (PLEG): container finished" podID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerID="24d6f40fa5952a383b0081e0f6f24c6f42c173f3d9e2677394190a64d7632c47" exitCode=-1 Feb 23 17:51:52 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:51:52.953599 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerDied Data:24d6f40fa5952a383b0081e0f6f24c6f42c173f3d9e2677394190a64d7632c47} Feb 23 17:51:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:53.228989196Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(3ebc70781c4882f2db28f6ff9dcda9e35ad3320291d1a691a05d79676f6c1f3b): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=71809a78-a5c4-49ac-98f4-be08b6ec005a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:51:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:53.229043548Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 3ebc70781c4882f2db28f6ff9dcda9e35ad3320291d1a691a05d79676f6c1f3b" id=71809a78-a5c4-49ac-98f4-be08b6ec005a 
name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:51:53 ip-10-0-136-68 systemd[1]: run-utsns-73bb14d9\x2d0ad4\x2d41e9\x2db1d6\x2dec87e51ce4e7.mount: Deactivated successfully. Feb 23 17:51:53 ip-10-0-136-68 systemd[1]: run-ipcns-73bb14d9\x2d0ad4\x2d41e9\x2db1d6\x2dec87e51ce4e7.mount: Deactivated successfully. Feb 23 17:51:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:53.253385162Z" level=info msg="runSandbox: deleting pod ID 3ebc70781c4882f2db28f6ff9dcda9e35ad3320291d1a691a05d79676f6c1f3b from idIndex" id=71809a78-a5c4-49ac-98f4-be08b6ec005a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:51:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:53.253428829Z" level=info msg="runSandbox: removing pod sandbox 3ebc70781c4882f2db28f6ff9dcda9e35ad3320291d1a691a05d79676f6c1f3b" id=71809a78-a5c4-49ac-98f4-be08b6ec005a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:51:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:53.253494102Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 3ebc70781c4882f2db28f6ff9dcda9e35ad3320291d1a691a05d79676f6c1f3b" id=71809a78-a5c4-49ac-98f4-be08b6ec005a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:51:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:53.253506285Z" level=info msg="runSandbox: unmounting shmPath for sandbox 3ebc70781c4882f2db28f6ff9dcda9e35ad3320291d1a691a05d79676f6c1f3b" id=71809a78-a5c4-49ac-98f4-be08b6ec005a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:51:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:53.259302914Z" level=info msg="runSandbox: removing pod sandbox from storage: 3ebc70781c4882f2db28f6ff9dcda9e35ad3320291d1a691a05d79676f6c1f3b" id=71809a78-a5c4-49ac-98f4-be08b6ec005a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:51:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:53.260999593Z" level=info msg="runSandbox: releasing container name: 
k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=71809a78-a5c4-49ac-98f4-be08b6ec005a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:51:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:53.261031107Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=71809a78-a5c4-49ac-98f4-be08b6ec005a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:51:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:51:53.261229 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(3ebc70781c4882f2db28f6ff9dcda9e35ad3320291d1a691a05d79676f6c1f3b): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 17:51:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:51:53.261333 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(3ebc70781c4882f2db28f6ff9dcda9e35ad3320291d1a691a05d79676f6c1f3b): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 17:51:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:51:53.261368 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(3ebc70781c4882f2db28f6ff9dcda9e35ad3320291d1a691a05d79676f6c1f3b): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 17:51:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:51:53.261446 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(3ebc70781c4882f2db28f6ff9dcda9e35ad3320291d1a691a05d79676f6c1f3b): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 17:51:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:53.286816066Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(962b9d65a74bd240af06f8970a8d62a7a831d7d439d72dae6bc55933b1cc3c96): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e76ae792-e6bb-44c4-b1d9-753c9886932e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:51:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:53.286859569Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 962b9d65a74bd240af06f8970a8d62a7a831d7d439d72dae6bc55933b1cc3c96" id=e76ae792-e6bb-44c4-b1d9-753c9886932e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:51:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:53.287833073Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(0b9c05c5b1726159147de513032221b1519eefdea7c22ab2a693f2537a7cf7fe): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=2376a751-3085-47ba-b9ae-b0adc5c75198 
name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:51:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:53.287868902Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 0b9c05c5b1726159147de513032221b1519eefdea7c22ab2a693f2537a7cf7fe" id=2376a751-3085-47ba-b9ae-b0adc5c75198 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:51:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:53.310313710Z" level=info msg="runSandbox: deleting pod ID 962b9d65a74bd240af06f8970a8d62a7a831d7d439d72dae6bc55933b1cc3c96 from idIndex" id=e76ae792-e6bb-44c4-b1d9-753c9886932e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:51:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:53.310351683Z" level=info msg="runSandbox: removing pod sandbox 962b9d65a74bd240af06f8970a8d62a7a831d7d439d72dae6bc55933b1cc3c96" id=e76ae792-e6bb-44c4-b1d9-753c9886932e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:51:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:53.310384226Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 962b9d65a74bd240af06f8970a8d62a7a831d7d439d72dae6bc55933b1cc3c96" id=e76ae792-e6bb-44c4-b1d9-753c9886932e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:51:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:53.310405663Z" level=info msg="runSandbox: unmounting shmPath for sandbox 962b9d65a74bd240af06f8970a8d62a7a831d7d439d72dae6bc55933b1cc3c96" id=e76ae792-e6bb-44c4-b1d9-753c9886932e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:51:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:53.310326800Z" level=info msg="runSandbox: deleting pod ID 0b9c05c5b1726159147de513032221b1519eefdea7c22ab2a693f2537a7cf7fe from idIndex" id=2376a751-3085-47ba-b9ae-b0adc5c75198 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:51:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:53.310444756Z" level=info msg="runSandbox: removing pod sandbox 
0b9c05c5b1726159147de513032221b1519eefdea7c22ab2a693f2537a7cf7fe" id=2376a751-3085-47ba-b9ae-b0adc5c75198 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:51:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:53.310466371Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 0b9c05c5b1726159147de513032221b1519eefdea7c22ab2a693f2537a7cf7fe" id=2376a751-3085-47ba-b9ae-b0adc5c75198 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:51:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:53.310478975Z" level=info msg="runSandbox: unmounting shmPath for sandbox 0b9c05c5b1726159147de513032221b1519eefdea7c22ab2a693f2537a7cf7fe" id=2376a751-3085-47ba-b9ae-b0adc5c75198 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:51:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:53.317333829Z" level=info msg="runSandbox: removing pod sandbox from storage: 962b9d65a74bd240af06f8970a8d62a7a831d7d439d72dae6bc55933b1cc3c96" id=e76ae792-e6bb-44c4-b1d9-753c9886932e name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:51:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:53.318306461Z" level=info msg="runSandbox: removing pod sandbox from storage: 0b9c05c5b1726159147de513032221b1519eefdea7c22ab2a693f2537a7cf7fe" id=2376a751-3085-47ba-b9ae-b0adc5c75198 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:51:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:53.318959531Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=e76ae792-e6bb-44c4-b1d9-753c9886932e name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:51:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:53.318985279Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=e76ae792-e6bb-44c4-b1d9-753c9886932e name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:51:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:51:53.319214 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(962b9d65a74bd240af06f8970a8d62a7a831d7d439d72dae6bc55933b1cc3c96): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 17:51:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:51:53.319426 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(962b9d65a74bd240af06f8970a8d62a7a831d7d439d72dae6bc55933b1cc3c96): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4"
Feb 23 17:51:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:51:53.319467 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(962b9d65a74bd240af06f8970a8d62a7a831d7d439d72dae6bc55933b1cc3c96): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4"
Feb 23 17:51:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:51:53.319547 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(962b9d65a74bd240af06f8970a8d62a7a831d7d439d72dae6bc55933b1cc3c96): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f
Feb 23 17:51:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:53.321591205Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=2376a751-3085-47ba-b9ae-b0adc5c75198 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:51:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:53.321637886Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=2376a751-3085-47ba-b9ae-b0adc5c75198 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:51:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:51:53.321786 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(0b9c05c5b1726159147de513032221b1519eefdea7c22ab2a693f2537a7cf7fe): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 17:51:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:51:53.321844 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(0b9c05c5b1726159147de513032221b1519eefdea7c22ab2a693f2537a7cf7fe): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk"
Feb 23 17:51:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:51:53.321865 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(0b9c05c5b1726159147de513032221b1519eefdea7c22ab2a693f2537a7cf7fe): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk"
Feb 23 17:51:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:51:53.321917 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(0b9c05c5b1726159147de513032221b1519eefdea7c22ab2a693f2537a7cf7fe): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687
Feb 23 17:51:53 ip-10-0-136-68 systemd[1]: run-netns-2de7a241\x2dee70\x2d48d1\x2db91c\x2dc722ad546ac8.mount: Deactivated successfully.
Feb 23 17:51:53 ip-10-0-136-68 systemd[1]: run-ipcns-2de7a241\x2dee70\x2d48d1\x2db91c\x2dc722ad546ac8.mount: Deactivated successfully.
Feb 23 17:51:53 ip-10-0-136-68 systemd[1]: run-utsns-2de7a241\x2dee70\x2d48d1\x2db91c\x2dc722ad546ac8.mount: Deactivated successfully.
Feb 23 17:51:53 ip-10-0-136-68 systemd[1]: run-netns-c378de0e\x2d43a9\x2d49d5\x2daaa9\x2d2e8b495a07e7.mount: Deactivated successfully.
Feb 23 17:51:53 ip-10-0-136-68 systemd[1]: run-ipcns-c378de0e\x2d43a9\x2d49d5\x2daaa9\x2d2e8b495a07e7.mount: Deactivated successfully.
Feb 23 17:51:53 ip-10-0-136-68 systemd[1]: run-utsns-c378de0e\x2d43a9\x2d49d5\x2daaa9\x2d2e8b495a07e7.mount: Deactivated successfully.
Feb 23 17:51:53 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-0b9c05c5b1726159147de513032221b1519eefdea7c22ab2a693f2537a7cf7fe-userdata-shm.mount: Deactivated successfully.
Feb 23 17:51:53 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-962b9d65a74bd240af06f8970a8d62a7a831d7d439d72dae6bc55933b1cc3c96-userdata-shm.mount: Deactivated successfully.
Feb 23 17:51:53 ip-10-0-136-68 systemd[1]: run-netns-73bb14d9\x2d0ad4\x2d41e9\x2db1d6\x2dec87e51ce4e7.mount: Deactivated successfully.
Feb 23 17:51:53 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-3ebc70781c4882f2db28f6ff9dcda9e35ad3320291d1a691a05d79676f6c1f3b-userdata-shm.mount: Deactivated successfully.
Feb 23 17:51:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:53.564290043Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(4ce668592f29a41afe2fed366dc215655e2701fb515d2f6bfb1b64d6713c4d15): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f97a9641-f8c7-497c-a912-f8f30969d484 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:51:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:53.564337620Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 4ce668592f29a41afe2fed366dc215655e2701fb515d2f6bfb1b64d6713c4d15" id=f97a9641-f8c7-497c-a912-f8f30969d484 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:51:53 ip-10-0-136-68 systemd[1]: run-utsns-29e825fd\x2d086c\x2d4368\x2db25b\x2dd1b10b7a5909.mount: Deactivated successfully.
Feb 23 17:51:53 ip-10-0-136-68 systemd[1]: run-ipcns-29e825fd\x2d086c\x2d4368\x2db25b\x2dd1b10b7a5909.mount: Deactivated successfully.
Feb 23 17:51:53 ip-10-0-136-68 systemd[1]: run-netns-29e825fd\x2d086c\x2d4368\x2db25b\x2dd1b10b7a5909.mount: Deactivated successfully.
Feb 23 17:51:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:53.579326570Z" level=info msg="runSandbox: deleting pod ID 4ce668592f29a41afe2fed366dc215655e2701fb515d2f6bfb1b64d6713c4d15 from idIndex" id=f97a9641-f8c7-497c-a912-f8f30969d484 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:51:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:53.579371787Z" level=info msg="runSandbox: removing pod sandbox 4ce668592f29a41afe2fed366dc215655e2701fb515d2f6bfb1b64d6713c4d15" id=f97a9641-f8c7-497c-a912-f8f30969d484 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:51:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:53.579412079Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 4ce668592f29a41afe2fed366dc215655e2701fb515d2f6bfb1b64d6713c4d15" id=f97a9641-f8c7-497c-a912-f8f30969d484 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:51:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:53.579429510Z" level=info msg="runSandbox: unmounting shmPath for sandbox 4ce668592f29a41afe2fed366dc215655e2701fb515d2f6bfb1b64d6713c4d15" id=f97a9641-f8c7-497c-a912-f8f30969d484 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:51:53 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-4ce668592f29a41afe2fed366dc215655e2701fb515d2f6bfb1b64d6713c4d15-userdata-shm.mount: Deactivated successfully.
Feb 23 17:51:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:53.586295926Z" level=info msg="runSandbox: removing pod sandbox from storage: 4ce668592f29a41afe2fed366dc215655e2701fb515d2f6bfb1b64d6713c4d15" id=f97a9641-f8c7-497c-a912-f8f30969d484 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:51:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:53.587761777Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=f97a9641-f8c7-497c-a912-f8f30969d484 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:51:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:53.587796022Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=f97a9641-f8c7-497c-a912-f8f30969d484 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:51:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:51:53.588041 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(4ce668592f29a41afe2fed366dc215655e2701fb515d2f6bfb1b64d6713c4d15): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 17:51:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:51:53.588112 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(4ce668592f29a41afe2fed366dc215655e2701fb515d2f6bfb1b64d6713c4d15): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 17:51:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:51:53.588151 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(4ce668592f29a41afe2fed366dc215655e2701fb515d2f6bfb1b64d6713c4d15): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 17:51:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:51:53.588266 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(4ce668592f29a41afe2fed366dc215655e2701fb515d2f6bfb1b64d6713c4d15): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77
Feb 23 17:51:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:51:56.291975 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 17:51:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:51:56.292256 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 17:51:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:51:56.292469 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 17:51:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:51:56.292491 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 17:51:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:51:57.717137821Z" level=warning msg="Failed to find container exit file for 24d6f40fa5952a383b0081e0f6f24c6f42c173f3d9e2677394190a64d7632c47: timed out waiting for the condition" id=32aa964c-bc3b-479c-a108-7d294e312f0f name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 17:51:57 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:51:57.717553 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerStarted Data:f37629ca906e242a557b75758ea21b4db1c4e700015a44ab52e5e87a51a75868}
Feb 23 17:52:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:01.961208298Z" level=error msg="Failed to cleanup (probably retrying): failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(aa2f6c1cfe2015e661750ad0d84d67c8d8f79e105601e0a761db6a83ba33b3f1): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): netplugin failed with no error message: signal: killed"
Feb 23 17:52:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:01.961214997Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:b7b64e2855e5363c7e5638220dfdc3f063f4dfbdcba871d05c966a6b5f95fc21 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/b7e45852-b47d-4b85-bf49-1012ef6cb4a4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 17:52:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:01.961336141Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 17:52:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:01.961346603Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 17:52:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:01.961358254Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 17:52:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:02.711657039Z" level=info msg="cleanup sandbox network"
Feb 23 17:52:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:52:04.217228 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-657v4"
Feb 23 17:52:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:52:04.217340 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz"
Feb 23 17:52:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:04.217708095Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=1464a123-fbe2-485c-bb82-4125dbc3ddb9 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:52:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:04.217748700Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=d7e9476d-6618-4803-af7c-a147cd8ccc5d name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:52:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:04.217818610Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:52:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:04.217776808Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:52:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:04.225289434Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:2b979ab1c4d6158e04831ef7d816b98abc66c72b73d392ecbecdfd214f4148b4 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/ac257a78-0c78-41e7-a754-e16f8a6c4721 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 17:52:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:04.225324692Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 17:52:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:04.225660226Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:1284590c2bb683532bf01632834259e149111a987654d413c7cd893970d9a355 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/c26e51eb-2a98-492c-a6f4-331659bfc4d7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 17:52:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:04.225773831Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 17:52:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:52:04.872150 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 17:52:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:52:04.872208 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 17:52:05 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:52:05.217138 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk"
Feb 23 17:52:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:05.217560755Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=32f85755-c8cd-4488-8356-0553818f9498 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:52:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:05.217640337Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:52:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:05.223479386Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:fb58590d8f94d5363d5ca40784ee53e3099f466c10b62f5e76e7320374622d34 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/ddd4d8ad-53e5-46b8-a8e0-1a456ce8a29f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 17:52:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:05.223511069Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 17:52:07 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:52:07.216543 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 17:52:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:07.216907720Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=02de0846-77c6-43e0-a24b-d047a2a37589 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:52:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:07.216968992Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:52:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:07.222513176Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:b874fc61f97281a5ba9d33ed9ac7e035fcf48b8d8b59e7dbf9344f865bf21a46 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/e4a004cc-24cf-4792-baf5-56e321692991 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 17:52:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:07.222539398Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 17:52:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:52:10.217290 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 17:52:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:52:10.217929 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 17:52:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:52:10.218202 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 17:52:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:52:10.218265 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 17:52:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:52:14.872842 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 17:52:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:52:14.872906 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 17:52:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:52:24.872861 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 17:52:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:52:24.872923 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 17:52:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:52:26.292508 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 17:52:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:52:26.292781 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 17:52:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:52:26.292994 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 17:52:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:52:26.293030 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 17:52:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:52:34.872484 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 17:52:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:52:34.872543 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 17:52:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:52:44.872076 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 17:52:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:52:44.872138 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 17:52:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:52:44.872162 2199 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7"
Feb 23 17:52:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:52:44.872672 2199 kuberuntime_manager.go:659] "Message for Container of pod" containerName="csi-driver" containerStatusID={Type:cri-o ID:f37629ca906e242a557b75758ea21b4db1c4e700015a44ab52e5e87a51a75868} pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" containerMessage="Container csi-driver failed liveness probe, will be restarted"
Feb 23 17:52:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:52:44.872827 2199 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" containerID="cri-o://f37629ca906e242a557b75758ea21b4db1c4e700015a44ab52e5e87a51a75868" gracePeriod=30
Feb 23 17:52:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:44.873066851Z" level=info msg="Stopping container: f37629ca906e242a557b75758ea21b4db1c4e700015a44ab52e5e87a51a75868 (timeout: 30s)" id=f5eb6f81-4751-49b2-aa97-136540eb90ab name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:52:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:46.473444351Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:aa2f6c1cfe2015e661750ad0d84d67c8d8f79e105601e0a761db6a83ba33b3f1 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS: Networks:[{Name:multus-cni-network Ifname:eth0}] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 17:52:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:46.473464052Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(b7b64e2855e5363c7e5638220dfdc3f063f4dfbdcba871d05c966a6b5f95fc21): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): netplugin failed with no error message: signal: killed" id=d2780a1c-a53d-4de9-98a4-f4859a006925 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:52:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:46.473574197Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox b7b64e2855e5363c7e5638220dfdc3f063f4dfbdcba871d05c966a6b5f95fc21" id=d2780a1c-a53d-4de9-98a4-f4859a006925 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:52:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:46.473678698Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 17:52:46 ip-10-0-136-68 systemd[1]: run-utsns-b7e45852\x2db47d\x2d4b85\x2dbf49\x2d1012ef6cb4a4.mount: Deactivated successfully.
Feb 23 17:52:46 ip-10-0-136-68 systemd[1]: run-ipcns-b7e45852\x2db47d\x2d4b85\x2dbf49\x2d1012ef6cb4a4.mount: Deactivated successfully.
Feb 23 17:52:46 ip-10-0-136-68 systemd[1]: run-netns-b7e45852\x2db47d\x2d4b85\x2dbf49\x2d1012ef6cb4a4.mount: Deactivated successfully.
Feb 23 17:52:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:46.499401598Z" level=info msg="runSandbox: deleting pod ID b7b64e2855e5363c7e5638220dfdc3f063f4dfbdcba871d05c966a6b5f95fc21 from idIndex" id=d2780a1c-a53d-4de9-98a4-f4859a006925 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:52:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:46.499459413Z" level=info msg="runSandbox: removing pod sandbox b7b64e2855e5363c7e5638220dfdc3f063f4dfbdcba871d05c966a6b5f95fc21" id=d2780a1c-a53d-4de9-98a4-f4859a006925 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:52:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:46.499494768Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox b7b64e2855e5363c7e5638220dfdc3f063f4dfbdcba871d05c966a6b5f95fc21" id=d2780a1c-a53d-4de9-98a4-f4859a006925 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:52:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:46.499509952Z" level=info msg="runSandbox: unmounting shmPath for sandbox b7b64e2855e5363c7e5638220dfdc3f063f4dfbdcba871d05c966a6b5f95fc21" id=d2780a1c-a53d-4de9-98a4-f4859a006925 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:52:46 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-b7b64e2855e5363c7e5638220dfdc3f063f4dfbdcba871d05c966a6b5f95fc21-userdata-shm.mount: Deactivated successfully. 
Feb 23 17:52:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:46.511335369Z" level=info msg="runSandbox: removing pod sandbox from storage: b7b64e2855e5363c7e5638220dfdc3f063f4dfbdcba871d05c966a6b5f95fc21" id=d2780a1c-a53d-4de9-98a4-f4859a006925 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:52:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:46.513049971Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=d2780a1c-a53d-4de9-98a4-f4859a006925 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:52:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:46.513082373Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=d2780a1c-a53d-4de9-98a4-f4859a006925 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:52:46 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:52:46.513335 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(b7b64e2855e5363c7e5638220dfdc3f063f4dfbdcba871d05c966a6b5f95fc21): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 17:52:46 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:52:46.513405 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(b7b64e2855e5363c7e5638220dfdc3f063f4dfbdcba871d05c966a6b5f95fc21): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 17:52:46 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:52:46.513458 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(b7b64e2855e5363c7e5638220dfdc3f063f4dfbdcba871d05c966a6b5f95fc21): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 17:52:46 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:52:46.513549 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(b7b64e2855e5363c7e5638220dfdc3f063f4dfbdcba871d05c966a6b5f95fc21): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 17:52:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:48.635024959Z" level=warning msg="Failed to find container exit file for f37629ca906e242a557b75758ea21b4db1c4e700015a44ab52e5e87a51a75868: timed out waiting for the condition" id=f5eb6f81-4751-49b2-aa97-136540eb90ab name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:52:48 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-b76325a3399153a3003fb623e0770254e79561eceaa3e4c37d796d45441da4e1-merged.mount: Deactivated successfully. 
Feb 23 17:52:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:49.241345044Z" level=info msg="NetworkStart: stopping network for sandbox 1284590c2bb683532bf01632834259e149111a987654d413c7cd893970d9a355" id=1464a123-fbe2-485c-bb82-4125dbc3ddb9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:52:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:49.241565609Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:1284590c2bb683532bf01632834259e149111a987654d413c7cd893970d9a355 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/c26e51eb-2a98-492c-a6f4-331659bfc4d7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:52:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:49.241621469Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 17:52:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:49.241635647Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 17:52:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:49.241646051Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:52:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:49.242324054Z" level=info msg="NetworkStart: stopping network for sandbox 2b979ab1c4d6158e04831ef7d816b98abc66c72b73d392ecbecdfd214f4148b4" id=d7e9476d-6618-4803-af7c-a147cd8ccc5d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:52:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:49.242434308Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:2b979ab1c4d6158e04831ef7d816b98abc66c72b73d392ecbecdfd214f4148b4 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/ac257a78-0c78-41e7-a754-e16f8a6c4721 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: 
IpRanges:[]}] Aliases:map[]}" Feb 23 17:52:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:49.242472083Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 17:52:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:49.242484548Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 17:52:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:49.242495021Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:52:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:50.237070566Z" level=info msg="NetworkStart: stopping network for sandbox fb58590d8f94d5363d5ca40784ee53e3099f466c10b62f5e76e7320374622d34" id=32f85755-c8cd-4488-8356-0553818f9498 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:52:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:50.237186782Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:fb58590d8f94d5363d5ca40784ee53e3099f466c10b62f5e76e7320374622d34 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/ddd4d8ad-53e5-46b8-a8e0-1a456ce8a29f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:52:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:50.237223463Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 17:52:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:50.237234691Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 17:52:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:50.237264643Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:52:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:52.235333218Z" level=info 
msg="NetworkStart: stopping network for sandbox b874fc61f97281a5ba9d33ed9ac7e035fcf48b8d8b59e7dbf9344f865bf21a46" id=02de0846-77c6-43e0-a24b-d047a2a37589 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:52:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:52.235433331Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:b874fc61f97281a5ba9d33ed9ac7e035fcf48b8d8b59e7dbf9344f865bf21a46 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/e4a004cc-24cf-4792-baf5-56e321692991 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:52:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:52.235465040Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 17:52:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:52.235472443Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 17:52:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:52.235478949Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:52:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:52.407917516Z" level=warning msg="Failed to find container exit file for f37629ca906e242a557b75758ea21b4db1c4e700015a44ab52e5e87a51a75868: timed out waiting for the condition" id=f5eb6f81-4751-49b2-aa97-136540eb90ab name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:52:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:52.410463832Z" level=info msg="Stopped container f37629ca906e242a557b75758ea21b4db1c4e700015a44ab52e5e87a51a75868: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=f5eb6f81-4751-49b2-aa97-136540eb90ab name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:52:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 
17:52:52.411168831Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=2d2ee257-66b1-4383-bf59-0a425bc13674 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:52:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:52.411356932Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=2d2ee257-66b1-4383-bf59-0a425bc13674 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:52:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:52.411905947Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=945db738-8824-40fd-8ecc-f7e12889ee83 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:52:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:52.412051036Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=945db738-8824-40fd-8ecc-f7e12889ee83 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:52:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:52.412690676Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" 
id=c3ed5b81-fc2f-4d61-8a62-cf045b12cb76 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:52:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:52.412785643Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:52:52 ip-10-0-136-68 systemd[1]: Started crio-conmon-f544215d4fe28d4e76e266cb38a7a5fae39c8ca3808f88dc85221a9ae201ca2a.scope. Feb 23 17:52:52 ip-10-0-136-68 systemd[1]: Started libcontainer container f544215d4fe28d4e76e266cb38a7a5fae39c8ca3808f88dc85221a9ae201ca2a. Feb 23 17:52:52 ip-10-0-136-68 conmon[3875]: conmon f544215d4fe28d4e76e2 : Failed to write to cgroup.event_control Operation not supported Feb 23 17:52:52 ip-10-0-136-68 systemd[1]: crio-conmon-f544215d4fe28d4e76e266cb38a7a5fae39c8ca3808f88dc85221a9ae201ca2a.scope: Deactivated successfully. Feb 23 17:52:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:52.535348739Z" level=warning msg="Failed to find container exit file for f37629ca906e242a557b75758ea21b4db1c4e700015a44ab52e5e87a51a75868: timed out waiting for the condition" id=da1a636d-9171-4f1f-ac85-3a650aa73463 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 17:52:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:52.542731934Z" level=info msg="Created container f544215d4fe28d4e76e266cb38a7a5fae39c8ca3808f88dc85221a9ae201ca2a: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=c3ed5b81-fc2f-4d61-8a62-cf045b12cb76 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:52:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:52.543065381Z" level=info msg="Starting container: f544215d4fe28d4e76e266cb38a7a5fae39c8ca3808f88dc85221a9ae201ca2a" id=1ce63454-399a-486b-bd0d-4b3f23c051ba name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:52:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:52.550089195Z" level=info msg="Started container" PID=3887 containerID=f544215d4fe28d4e76e266cb38a7a5fae39c8ca3808f88dc85221a9ae201ca2a 
description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver id=1ce63454-399a-486b-bd0d-4b3f23c051ba name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4ac357af36e7dd4792f93249425e93fd84aed45840e9fbb3d0ae6c0af964798 Feb 23 17:52:52 ip-10-0-136-68 systemd[1]: crio-f544215d4fe28d4e76e266cb38a7a5fae39c8ca3808f88dc85221a9ae201ca2a.scope: Deactivated successfully. Feb 23 17:52:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:56.284086472Z" level=warning msg="Failed to find container exit file for 24d6f40fa5952a383b0081e0f6f24c6f42c173f3d9e2677394190a64d7632c47: timed out waiting for the condition" id=78e47bb7-e1ce-4528-869b-f5cd6c8e606e name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 17:52:56 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:52:56.284491 2199 generic.go:332] "Generic (PLEG): container finished" podID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerID="f37629ca906e242a557b75758ea21b4db1c4e700015a44ab52e5e87a51a75868" exitCode=-1 Feb 23 17:52:56 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:52:56.284566 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerDied Data:f37629ca906e242a557b75758ea21b4db1c4e700015a44ab52e5e87a51a75868} Feb 23 17:52:56 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:52:56.284715 2199 scope.go:115] "RemoveContainer" containerID="24d6f40fa5952a383b0081e0f6f24c6f42c173f3d9e2677394190a64d7632c47" Feb 23 17:52:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:52:56.292719 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" 
cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 17:52:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:52:56.293026 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 17:52:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:52:56.293269 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 17:52:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:52:56.293299 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 17:52:58 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:52:58.216500 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 17:52:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:58.216917474Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=f021144b-18cb-4a7e-b808-573d201ec227 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:52:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:52:58.220968902Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:53:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:00.045161400Z" level=warning msg="Failed to find container exit file for 24d6f40fa5952a383b0081e0f6f24c6f42c173f3d9e2677394190a64d7632c47: timed out waiting for the condition" id=2d034672-235b-4a00-a02a-db25f2351226 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 17:53:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:01.038691921Z" level=warning msg="Failed to find container exit file for f37629ca906e242a557b75758ea21b4db1c4e700015a44ab52e5e87a51a75868: timed out waiting for the condition" id=8ab384e5-f142-493e-9ce9-9280b1f42f8d name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 17:53:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:02.713474406Z" level=error msg="Failed to cleanup (probably retrying): failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(aa2f6c1cfe2015e661750ad0d84d67c8d8f79e105601e0a761db6a83ba33b3f1): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): netplugin failed with no error message: signal: killed" Feb 23 17:53:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:02.713944347Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics 
ID:87e6425b30579a1cbd451c1901a1fd353aefbe97c6161d70bf0a1fbe6a10e3c5 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/f8902cac-3c48-4022-afd8-83f7301c23a2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:53:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:02.713976524Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:53:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:03.793977963Z" level=warning msg="Failed to find container exit file for 24d6f40fa5952a383b0081e0f6f24c6f42c173f3d9e2677394190a64d7632c47: timed out waiting for the condition" id=3c7589dd-3e2a-44b2-9558-2b15651f76e1 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 17:53:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:03.794560759Z" level=info msg="Removing container: 24d6f40fa5952a383b0081e0f6f24c6f42c173f3d9e2677394190a64d7632c47" id=b32c89bb-cf09-4450-9c92-07c1c4e0f570 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:53:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:03.839074535Z" level=info msg="cleanup sandbox network" Feb 23 17:53:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:04.777035414Z" level=warning msg="Failed to find container exit file for 24d6f40fa5952a383b0081e0f6f24c6f42c173f3d9e2677394190a64d7632c47: timed out waiting for the condition" id=f7513ca3-b8f5-4dda-9066-5aa873d7d27d name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 17:53:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:53:04.777441 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerStarted Data:f544215d4fe28d4e76e266cb38a7a5fae39c8ca3808f88dc85221a9ae201ca2a} Feb 23 17:53:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:53:04.872891 2199 patch_prober.go:28] 
interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 17:53:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:53:04.872947 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 17:53:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:07.541982722Z" level=warning msg="Failed to find container exit file for 24d6f40fa5952a383b0081e0f6f24c6f42c173f3d9e2677394190a64d7632c47: timed out waiting for the condition" id=b32c89bb-cf09-4450-9c92-07c1c4e0f570 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:53:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:07.565859862Z" level=info msg="Removed container 24d6f40fa5952a383b0081e0f6f24c6f42c173f3d9e2677394190a64d7632c47: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=b32c89bb-cf09-4450-9c92-07c1c4e0f570 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:53:11 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:53:11.217288 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 17:53:11 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:53:11.217567 2199 remote_runtime.go:479] "ExecSync cmd from runtime service 
failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 17:53:11 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:53:11.217812 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 17:53:11 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:53:11.217860 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 17:53:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:11.532955858Z" level=warning msg="Failed to find container exit file for f37629ca906e242a557b75758ea21b4db1c4e700015a44ab52e5e87a51a75868: timed out waiting for the condition" id=b65f1dda-d7ba-4b99-8355-c9b4345e6758 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 17:53:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:53:14.872573 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe 
status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 17:53:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:53:14.872638 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 17:53:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:53:24.872166 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 17:53:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:53:24.872227 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 17:53:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:53:26.292477 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 17:53:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:53:26.292797 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc 
= container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 17:53:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:53:26.293010 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 17:53:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:53:26.293040 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 17:53:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:34.251951352Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(2b979ab1c4d6158e04831ef7d816b98abc66c72b73d392ecbecdfd214f4148b4): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: 
PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d7e9476d-6618-4803-af7c-a147cd8ccc5d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:53:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:34.252025469Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 2b979ab1c4d6158e04831ef7d816b98abc66c72b73d392ecbecdfd214f4148b4" id=d7e9476d-6618-4803-af7c-a147cd8ccc5d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:53:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:34.252087677Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(1284590c2bb683532bf01632834259e149111a987654d413c7cd893970d9a355): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=1464a123-fbe2-485c-bb82-4125dbc3ddb9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:53:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:34.252115256Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 1284590c2bb683532bf01632834259e149111a987654d413c7cd893970d9a355" id=1464a123-fbe2-485c-bb82-4125dbc3ddb9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:53:34 ip-10-0-136-68 systemd[1]: run-utsns-c26e51eb\x2d2a98\x2d492c\x2da6f4\x2d331659bfc4d7.mount: Deactivated successfully. Feb 23 17:53:34 ip-10-0-136-68 systemd[1]: run-utsns-ac257a78\x2d0c78\x2d41e7\x2da754\x2de16f8a6c4721.mount: Deactivated successfully. Feb 23 17:53:34 ip-10-0-136-68 systemd[1]: run-ipcns-ac257a78\x2d0c78\x2d41e7\x2da754\x2de16f8a6c4721.mount: Deactivated successfully. 
Feb 23 17:53:34 ip-10-0-136-68 systemd[1]: run-ipcns-c26e51eb\x2d2a98\x2d492c\x2da6f4\x2d331659bfc4d7.mount: Deactivated successfully. Feb 23 17:53:34 ip-10-0-136-68 systemd[1]: run-netns-ac257a78\x2d0c78\x2d41e7\x2da754\x2de16f8a6c4721.mount: Deactivated successfully. Feb 23 17:53:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:34.267340947Z" level=info msg="runSandbox: deleting pod ID 2b979ab1c4d6158e04831ef7d816b98abc66c72b73d392ecbecdfd214f4148b4 from idIndex" id=d7e9476d-6618-4803-af7c-a147cd8ccc5d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:53:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:34.267387132Z" level=info msg="runSandbox: removing pod sandbox 2b979ab1c4d6158e04831ef7d816b98abc66c72b73d392ecbecdfd214f4148b4" id=d7e9476d-6618-4803-af7c-a147cd8ccc5d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:53:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:34.267446366Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 2b979ab1c4d6158e04831ef7d816b98abc66c72b73d392ecbecdfd214f4148b4" id=d7e9476d-6618-4803-af7c-a147cd8ccc5d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:53:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:34.267468405Z" level=info msg="runSandbox: unmounting shmPath for sandbox 2b979ab1c4d6158e04831ef7d816b98abc66c72b73d392ecbecdfd214f4148b4" id=d7e9476d-6618-4803-af7c-a147cd8ccc5d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:53:34 ip-10-0-136-68 systemd[1]: run-netns-c26e51eb\x2d2a98\x2d492c\x2da6f4\x2d331659bfc4d7.mount: Deactivated successfully. 
Feb 23 17:53:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:34.272309544Z" level=info msg="runSandbox: deleting pod ID 1284590c2bb683532bf01632834259e149111a987654d413c7cd893970d9a355 from idIndex" id=1464a123-fbe2-485c-bb82-4125dbc3ddb9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:53:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:34.272339816Z" level=info msg="runSandbox: removing pod sandbox 1284590c2bb683532bf01632834259e149111a987654d413c7cd893970d9a355" id=1464a123-fbe2-485c-bb82-4125dbc3ddb9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:53:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:34.272374115Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 1284590c2bb683532bf01632834259e149111a987654d413c7cd893970d9a355" id=1464a123-fbe2-485c-bb82-4125dbc3ddb9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:53:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:34.272394104Z" level=info msg="runSandbox: unmounting shmPath for sandbox 1284590c2bb683532bf01632834259e149111a987654d413c7cd893970d9a355" id=1464a123-fbe2-485c-bb82-4125dbc3ddb9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:53:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:34.277299990Z" level=info msg="runSandbox: removing pod sandbox from storage: 2b979ab1c4d6158e04831ef7d816b98abc66c72b73d392ecbecdfd214f4148b4" id=d7e9476d-6618-4803-af7c-a147cd8ccc5d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:53:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:34.278995884Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=d7e9476d-6618-4803-af7c-a147cd8ccc5d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:53:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:34.279028712Z" level=info msg="runSandbox: releasing pod sandbox name: 
k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=d7e9476d-6618-4803-af7c-a147cd8ccc5d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:53:34 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:53:34.279268 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(2b979ab1c4d6158e04831ef7d816b98abc66c72b73d392ecbecdfd214f4148b4): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Feb 23 17:53:34 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:53:34.279342 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(2b979ab1c4d6158e04831ef7d816b98abc66c72b73d392ecbecdfd214f4148b4): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 17:53:34 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:53:34.279379 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(2b979ab1c4d6158e04831ef7d816b98abc66c72b73d392ecbecdfd214f4148b4): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 17:53:34 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:53:34.279466 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(2b979ab1c4d6158e04831ef7d816b98abc66c72b73d392ecbecdfd214f4148b4): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 17:53:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:34.279326539Z" level=info msg="runSandbox: removing pod sandbox from storage: 1284590c2bb683532bf01632834259e149111a987654d413c7cd893970d9a355" id=1464a123-fbe2-485c-bb82-4125dbc3ddb9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:53:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:34.280737844Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=1464a123-fbe2-485c-bb82-4125dbc3ddb9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:53:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:34.280761961Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=1464a123-fbe2-485c-bb82-4125dbc3ddb9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:53:34 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:53:34.280910 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(1284590c2bb683532bf01632834259e149111a987654d413c7cd893970d9a355): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 17:53:34 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:53:34.280959 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(1284590c2bb683532bf01632834259e149111a987654d413c7cd893970d9a355): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 17:53:34 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:53:34.280982 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(1284590c2bb683532bf01632834259e149111a987654d413c7cd893970d9a355): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 17:53:34 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:53:34.281036 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(1284590c2bb683532bf01632834259e149111a987654d413c7cd893970d9a355): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 17:53:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:53:34.872369 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 17:53:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:53:34.872433 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 17:53:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:35.247624410Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(fb58590d8f94d5363d5ca40784ee53e3099f466c10b62f5e76e7320374622d34): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=32f85755-c8cd-4488-8356-0553818f9498 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:53:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:35.247669421Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox fb58590d8f94d5363d5ca40784ee53e3099f466c10b62f5e76e7320374622d34" id=32f85755-c8cd-4488-8356-0553818f9498 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 
23 17:53:35 ip-10-0-136-68 systemd[1]: run-utsns-ddd4d8ad\x2d53e5\x2d46b8\x2da8e0\x2d1a456ce8a29f.mount: Deactivated successfully. Feb 23 17:53:35 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-1284590c2bb683532bf01632834259e149111a987654d413c7cd893970d9a355-userdata-shm.mount: Deactivated successfully. Feb 23 17:53:35 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-2b979ab1c4d6158e04831ef7d816b98abc66c72b73d392ecbecdfd214f4148b4-userdata-shm.mount: Deactivated successfully. Feb 23 17:53:35 ip-10-0-136-68 systemd[1]: run-ipcns-ddd4d8ad\x2d53e5\x2d46b8\x2da8e0\x2d1a456ce8a29f.mount: Deactivated successfully. Feb 23 17:53:35 ip-10-0-136-68 systemd[1]: run-netns-ddd4d8ad\x2d53e5\x2d46b8\x2da8e0\x2d1a456ce8a29f.mount: Deactivated successfully. Feb 23 17:53:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:35.273345163Z" level=info msg="runSandbox: deleting pod ID fb58590d8f94d5363d5ca40784ee53e3099f466c10b62f5e76e7320374622d34 from idIndex" id=32f85755-c8cd-4488-8356-0553818f9498 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:53:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:35.273381035Z" level=info msg="runSandbox: removing pod sandbox fb58590d8f94d5363d5ca40784ee53e3099f466c10b62f5e76e7320374622d34" id=32f85755-c8cd-4488-8356-0553818f9498 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:53:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:35.273419856Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox fb58590d8f94d5363d5ca40784ee53e3099f466c10b62f5e76e7320374622d34" id=32f85755-c8cd-4488-8356-0553818f9498 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:53:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:35.273435557Z" level=info msg="runSandbox: unmounting shmPath for sandbox fb58590d8f94d5363d5ca40784ee53e3099f466c10b62f5e76e7320374622d34" id=32f85755-c8cd-4488-8356-0553818f9498 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:53:35 
ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-fb58590d8f94d5363d5ca40784ee53e3099f466c10b62f5e76e7320374622d34-userdata-shm.mount: Deactivated successfully. Feb 23 17:53:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:35.279308958Z" level=info msg="runSandbox: removing pod sandbox from storage: fb58590d8f94d5363d5ca40784ee53e3099f466c10b62f5e76e7320374622d34" id=32f85755-c8cd-4488-8356-0553818f9498 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:53:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:35.280843278Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=32f85755-c8cd-4488-8356-0553818f9498 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:53:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:35.280872974Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=32f85755-c8cd-4488-8356-0553818f9498 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:53:35 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:53:35.281029 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(fb58590d8f94d5363d5ca40784ee53e3099f466c10b62f5e76e7320374622d34): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 17:53:35 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:53:35.281093 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(fb58590d8f94d5363d5ca40784ee53e3099f466c10b62f5e76e7320374622d34): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 17:53:35 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:53:35.281148 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(fb58590d8f94d5363d5ca40784ee53e3099f466c10b62f5e76e7320374622d34): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 17:53:35 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:53:35.281225 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(fb58590d8f94d5363d5ca40784ee53e3099f466c10b62f5e76e7320374622d34): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 17:53:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:37.245827050Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(b874fc61f97281a5ba9d33ed9ac7e035fcf48b8d8b59e7dbf9344f865bf21a46): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=02de0846-77c6-43e0-a24b-d047a2a37589 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:53:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:37.245881518Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox b874fc61f97281a5ba9d33ed9ac7e035fcf48b8d8b59e7dbf9344f865bf21a46" id=02de0846-77c6-43e0-a24b-d047a2a37589 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:53:37 ip-10-0-136-68 systemd[1]: run-utsns-e4a004cc\x2d24cf\x2d4792\x2dbaf5\x2d56e321692991.mount: Deactivated successfully. Feb 23 17:53:37 ip-10-0-136-68 systemd[1]: run-ipcns-e4a004cc\x2d24cf\x2d4792\x2dbaf5\x2d56e321692991.mount: Deactivated successfully. Feb 23 17:53:37 ip-10-0-136-68 systemd[1]: run-netns-e4a004cc\x2d24cf\x2d4792\x2dbaf5\x2d56e321692991.mount: Deactivated successfully. 
Feb 23 17:53:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:37.264334686Z" level=info msg="runSandbox: deleting pod ID b874fc61f97281a5ba9d33ed9ac7e035fcf48b8d8b59e7dbf9344f865bf21a46 from idIndex" id=02de0846-77c6-43e0-a24b-d047a2a37589 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:53:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:37.264375440Z" level=info msg="runSandbox: removing pod sandbox b874fc61f97281a5ba9d33ed9ac7e035fcf48b8d8b59e7dbf9344f865bf21a46" id=02de0846-77c6-43e0-a24b-d047a2a37589 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:53:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:37.264419323Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox b874fc61f97281a5ba9d33ed9ac7e035fcf48b8d8b59e7dbf9344f865bf21a46" id=02de0846-77c6-43e0-a24b-d047a2a37589 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:53:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:37.264438372Z" level=info msg="runSandbox: unmounting shmPath for sandbox b874fc61f97281a5ba9d33ed9ac7e035fcf48b8d8b59e7dbf9344f865bf21a46" id=02de0846-77c6-43e0-a24b-d047a2a37589 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:53:37 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-b874fc61f97281a5ba9d33ed9ac7e035fcf48b8d8b59e7dbf9344f865bf21a46-userdata-shm.mount: Deactivated successfully. 
Feb 23 17:53:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:37.270303678Z" level=info msg="runSandbox: removing pod sandbox from storage: b874fc61f97281a5ba9d33ed9ac7e035fcf48b8d8b59e7dbf9344f865bf21a46" id=02de0846-77c6-43e0-a24b-d047a2a37589 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:53:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:37.271738638Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=02de0846-77c6-43e0-a24b-d047a2a37589 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:53:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:37.271766896Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=02de0846-77c6-43e0-a24b-d047a2a37589 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:53:37 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:53:37.271981 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(b874fc61f97281a5ba9d33ed9ac7e035fcf48b8d8b59e7dbf9344f865bf21a46): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 17:53:37 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:53:37.272033 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(b874fc61f97281a5ba9d33ed9ac7e035fcf48b8d8b59e7dbf9344f865bf21a46): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 17:53:37 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:53:37.272064 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(b874fc61f97281a5ba9d33ed9ac7e035fcf48b8d8b59e7dbf9344f865bf21a46): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 17:53:37 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:53:37.272128 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(b874fc61f97281a5ba9d33ed9ac7e035fcf48b8d8b59e7dbf9344f865bf21a46): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77
Feb 23 17:53:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:53:44.872559 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 17:53:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:53:44.872622 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 17:53:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:53:44.872649 2199 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7"
Feb 23 17:53:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:53:44.873169 2199 kuberuntime_manager.go:659] "Message for Container of pod" containerName="csi-driver" containerStatusID={Type:cri-o ID:f544215d4fe28d4e76e266cb38a7a5fae39c8ca3808f88dc85221a9ae201ca2a} pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" containerMessage="Container csi-driver failed liveness probe, will be restarted"
Feb 23 17:53:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:53:44.873363 2199 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" containerID="cri-o://f544215d4fe28d4e76e266cb38a7a5fae39c8ca3808f88dc85221a9ae201ca2a" gracePeriod=30
Feb 23 17:53:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:44.873600401Z" level=info msg="Stopping container: f544215d4fe28d4e76e266cb38a7a5fae39c8ca3808f88dc85221a9ae201ca2a (timeout: 30s)" id=e6973562-0d34-4bcd-a941-b1f4e4f28116 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:53:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:47.725735008Z" level=info msg="NetworkStart: stopping network for sandbox 87e6425b30579a1cbd451c1901a1fd353aefbe97c6161d70bf0a1fbe6a10e3c5" id=f021144b-18cb-4a7e-b808-573d201ec227 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:53:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:47.725812735Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:aa2f6c1cfe2015e661750ad0d84d67c8d8f79e105601e0a761db6a83ba33b3f1 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS: Networks:[{Name:multus-cni-network Ifname:eth0}] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 17:53:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:47.725975179Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 17:53:48 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:53:48.217484 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk"
Feb 23 17:53:48 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:53:48.217594 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 17:53:48 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:53:48.217675 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz"
Feb 23 17:53:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:48.218535539Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=b37583dd-d931-4227-a400-35bc3daa8b63 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:53:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:48.218605479Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:53:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:48.218678307Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=4649c4b9-e71f-48c8-808c-9e92fb01b1dc name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:53:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:48.218733142Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:53:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:48.218607078Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=c5be7d59-00e0-4ca2-adca-22320e696476 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:53:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:48.219010913Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:53:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:48.227908555Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:05692da462de5153490575e7051ce8192d941d7e34bb6fe5798f6fabfe33668c UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/c04f5f0c-3ac3-42d3-aeab-d162f7da74d0 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 17:53:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:48.227946068Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 17:53:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:48.227909485Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:bb4b48771f37ecb27ff43784f3b3fc9778e3a0b5eafc64abd5e7b2e6bcbf98b2 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/80252fe5-1a94-4d92-a962-583b6f9cd9cc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 17:53:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:48.228126873Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 17:53:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:48.228523647Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:56b0e8a42562c8bb044518c574fba332555ea488177c88f81ff804a94144593a UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/9d4571f7-e645-437c-9438-cb47bdae97b5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 17:53:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:48.228554330Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 17:53:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:48.635054894Z" level=warning msg="Failed to find container exit file for f544215d4fe28d4e76e266cb38a7a5fae39c8ca3808f88dc85221a9ae201ca2a: timed out waiting for the condition" id=e6973562-0d34-4bcd-a941-b1f4e4f28116 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:53:48 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-cd28ae10945959186934f58ab592a00541465eaf377588e683203b57bf89a212-merged.mount: Deactivated successfully.
Feb 23 17:53:49 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:53:49.217062 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-657v4"
Feb 23 17:53:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:49.217451019Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=37933e70-77d1-40d1-a4c2-d2e21e25fb0b name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:53:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:49.217520265Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:53:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:49.223169595Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:3f3bddb46b3ba200aeabdad1eb1f97f0d3815b4dbfe22e2b99a9ee38d42db44a UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/2f092df5-9901-40a8-8040-987d222899f4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 17:53:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:49.223226677Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 17:53:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:52.418144600Z" level=warning msg="Failed to find container exit file for f544215d4fe28d4e76e266cb38a7a5fae39c8ca3808f88dc85221a9ae201ca2a: timed out waiting for the condition" id=e6973562-0d34-4bcd-a941-b1f4e4f28116 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:53:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:52.422096669Z" level=info msg="Stopped container f544215d4fe28d4e76e266cb38a7a5fae39c8ca3808f88dc85221a9ae201ca2a: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=e6973562-0d34-4bcd-a941-b1f4e4f28116 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 17:53:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:52.422850495Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=6c5a95e6-083f-467e-9d27-518a334af5ef name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:53:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:52.423037825Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=6c5a95e6-083f-467e-9d27-518a334af5ef name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:53:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:52.423635717Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=895180a5-e670-42eb-b99d-8ad1a1ecf043 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:53:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:52.423798829Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=895180a5-e670-42eb-b99d-8ad1a1ecf043 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:53:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:52.424489729Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=997a9b0b-5ac5-4b9c-bd52-1f5b1eea8a06 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:53:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:52.424584641Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:53:52 ip-10-0-136-68 systemd[1]: Started crio-conmon-8e00b61f6b420c6a4ffb241cb84da91094d7dbb659894173f86d1088e6525936.scope.
Feb 23 17:53:52 ip-10-0-136-68 systemd[1]: Started libcontainer container 8e00b61f6b420c6a4ffb241cb84da91094d7dbb659894173f86d1088e6525936.
Feb 23 17:53:52 ip-10-0-136-68 conmon[4117]: conmon 8e00b61f6b420c6a4ffb : Failed to write to cgroup.event_control Operation not supported
Feb 23 17:53:52 ip-10-0-136-68 systemd[1]: crio-conmon-8e00b61f6b420c6a4ffb241cb84da91094d7dbb659894173f86d1088e6525936.scope: Deactivated successfully.
Feb 23 17:53:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:52.558351365Z" level=info msg="Created container 8e00b61f6b420c6a4ffb241cb84da91094d7dbb659894173f86d1088e6525936: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=997a9b0b-5ac5-4b9c-bd52-1f5b1eea8a06 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:53:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:52.558760659Z" level=info msg="Starting container: 8e00b61f6b420c6a4ffb241cb84da91094d7dbb659894173f86d1088e6525936" id=ee28b5cb-69ba-41f0-bb9b-a234aec1113b name=/runtime.v1.RuntimeService/StartContainer
Feb 23 17:53:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:52.565746664Z" level=info msg="Started container" PID=4129 containerID=8e00b61f6b420c6a4ffb241cb84da91094d7dbb659894173f86d1088e6525936 description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver id=ee28b5cb-69ba-41f0-bb9b-a234aec1113b name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4ac357af36e7dd4792f93249425e93fd84aed45840e9fbb3d0ae6c0af964798
Feb 23 17:53:52 ip-10-0-136-68 systemd[1]: crio-8e00b61f6b420c6a4ffb241cb84da91094d7dbb659894173f86d1088e6525936.scope: Deactivated successfully.
Feb 23 17:53:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:53.330437030Z" level=warning msg="Failed to find container exit file for f544215d4fe28d4e76e266cb38a7a5fae39c8ca3808f88dc85221a9ae201ca2a: timed out waiting for the condition" id=04414b30-7047-4f1c-95a7-b0c89fceba35 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 17:53:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:53:56.292048 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 17:53:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:53:56.292343 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 17:53:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:53:56.292583 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 17:53:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:53:56.292615 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 17:53:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:53:57.078166840Z" level=warning msg="Failed to find container exit file for f37629ca906e242a557b75758ea21b4db1c4e700015a44ab52e5e87a51a75868: timed out waiting for the condition" id=e887f961-4937-4798-ac44-25006532e8db name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 17:53:57 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:53:57.079292 2199 generic.go:332] "Generic (PLEG): container finished" podID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerID="f544215d4fe28d4e76e266cb38a7a5fae39c8ca3808f88dc85221a9ae201ca2a" exitCode=-1
Feb 23 17:53:57 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:53:57.079329 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerDied Data:f544215d4fe28d4e76e266cb38a7a5fae39c8ca3808f88dc85221a9ae201ca2a}
Feb 23 17:53:57 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:53:57.079361 2199 scope.go:115] "RemoveContainer" containerID="f37629ca906e242a557b75758ea21b4db1c4e700015a44ab52e5e87a51a75868"
Feb 23 17:54:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:54:00.840081772Z" level=warning msg="Failed to find container exit file for f37629ca906e242a557b75758ea21b4db1c4e700015a44ab52e5e87a51a75868: timed out waiting for the condition" id=15bfa9c0-25a1-4988-bf0b-9996c2bdb267 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 17:54:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:54:01.831144519Z" level=warning msg="Failed to find container exit file for f544215d4fe28d4e76e266cb38a7a5fae39c8ca3808f88dc85221a9ae201ca2a: timed out waiting for the condition" id=6a324106-fb0e-4a1c-8058-88fd7755d031 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 17:54:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:54:03.840511232Z" level=error msg="Failed to cleanup (probably retrying): failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(aa2f6c1cfe2015e661750ad0d84d67c8d8f79e105601e0a761db6a83ba33b3f1): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): netplugin failed with no error message: signal: killed"
Feb 23 17:54:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:54:03.840586072Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:87e6425b30579a1cbd451c1901a1fd353aefbe97c6161d70bf0a1fbe6a10e3c5 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/f8902cac-3c48-4022-afd8-83f7301c23a2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 17:54:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:54:03.840649479Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 17:54:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:54:03.840660762Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 17:54:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:54:03.840672754Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 17:54:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:54:04.591006900Z" level=warning msg="Failed to find container exit file for f37629ca906e242a557b75758ea21b4db1c4e700015a44ab52e5e87a51a75868: timed out waiting for the condition" id=a964f444-81a0-4914-9b1b-c1f534612ba4 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 17:54:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:54:04.591632971Z" level=info msg="Removing container: f37629ca906e242a557b75758ea21b4db1c4e700015a44ab52e5e87a51a75868" id=0ea775f5-cc57-47ab-b1d2-f9e9204bbdf4 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:54:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:54:05.528677990Z" level=info msg="cleanup sandbox network"
Feb 23 17:54:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:54:05.568798282Z" level=warning msg="Failed to find container exit file for f37629ca906e242a557b75758ea21b4db1c4e700015a44ab52e5e87a51a75868: timed out waiting for the condition" id=30697820-7bb4-48a5-b689-604feabcb823 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 17:54:05 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:54:05.569703 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerStarted Data:8e00b61f6b420c6a4ffb241cb84da91094d7dbb659894173f86d1088e6525936}
Feb 23 17:54:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:54:08.353027713Z" level=warning msg="Failed to find container exit file for f37629ca906e242a557b75758ea21b4db1c4e700015a44ab52e5e87a51a75868: timed out waiting for the condition" id=0ea775f5-cc57-47ab-b1d2-f9e9204bbdf4 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:54:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:54:08.377017345Z" level=info msg="Removed container f37629ca906e242a557b75758ea21b4db1c4e700015a44ab52e5e87a51a75868: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=0ea775f5-cc57-47ab-b1d2-f9e9204bbdf4 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 17:54:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:54:12.326291082Z" level=warning msg="Failed to find container exit file for f544215d4fe28d4e76e266cb38a7a5fae39c8ca3808f88dc85221a9ae201ca2a: timed out waiting for the condition" id=e75aa11c-365c-49ab-ae56-c2388b49cddc name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 17:54:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:54:14.872304 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 17:54:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:54:14.872367 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 17:54:24 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:54:24.217403 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 17:54:24 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:54:24.217749 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 17:54:24 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:54:24.218043 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 17:54:24 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:54:24.218076 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 17:54:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:54:24.872631 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 17:54:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:54:24.872688 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 17:54:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:54:26.291804 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 17:54:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:54:26.292055 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 17:54:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:54:26.292308 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 17:54:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:54:26.292337 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 17:54:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:54:33.244125665Z" level=info msg="NetworkStart: stopping network for sandbox 05692da462de5153490575e7051ce8192d941d7e34bb6fe5798f6fabfe33668c" id=4649c4b9-e71f-48c8-808c-9e92fb01b1dc name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:54:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:54:33.244299310Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:05692da462de5153490575e7051ce8192d941d7e34bb6fe5798f6fabfe33668c UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/c04f5f0c-3ac3-42d3-aeab-d162f7da74d0 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 17:54:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:54:33.244344768Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 17:54:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:54:33.244357352Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 17:54:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:54:33.244367938Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 17:54:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:54:33.245347989Z" level=info msg="NetworkStart: stopping network for sandbox 56b0e8a42562c8bb044518c574fba332555ea488177c88f81ff804a94144593a" id=c5be7d59-00e0-4ca2-adca-22320e696476 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:54:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:54:33.245451101Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:56b0e8a42562c8bb044518c574fba332555ea488177c88f81ff804a94144593a UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/9d4571f7-e645-437c-9438-cb47bdae97b5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 17:54:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:54:33.245490644Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 17:54:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:54:33.245503437Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 17:54:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:54:33.245513953Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 17:54:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:54:33.246622442Z" level=info msg="NetworkStart: stopping network for sandbox bb4b48771f37ecb27ff43784f3b3fc9778e3a0b5eafc64abd5e7b2e6bcbf98b2" id=b37583dd-d931-4227-a400-35bc3daa8b63 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:54:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:54:33.246790169Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:bb4b48771f37ecb27ff43784f3b3fc9778e3a0b5eafc64abd5e7b2e6bcbf98b2 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/80252fe5-1a94-4d92-a962-583b6f9cd9cc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 17:54:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:54:33.246832295Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 17:54:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:54:33.246843767Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 17:54:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:54:33.246856018Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 17:54:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:54:34.234778648Z" level=info msg="NetworkStart: stopping network for sandbox 3f3bddb46b3ba200aeabdad1eb1f97f0d3815b4dbfe22e2b99a9ee38d42db44a" id=37933e70-77d1-40d1-a4c2-d2e21e25fb0b name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:54:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:54:34.234895377Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:3f3bddb46b3ba200aeabdad1eb1f97f0d3815b4dbfe22e2b99a9ee38d42db44a UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/2f092df5-9901-40a8-8040-987d222899f4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 17:54:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:54:34.234923109Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 17:54:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:54:34.234931346Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 17:54:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:54:34.234937507Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 17:54:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:54:34.872899 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 17:54:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:54:34.872962 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 17:54:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:54:44.872835 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 17:54:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:54:44.872899 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 17:54:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:54:47.727061102Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(87e6425b30579a1cbd451c1901a1fd353aefbe97c6161d70bf0a1fbe6a10e3c5): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): netplugin failed with no error message: signal: killed" id=f021144b-18cb-4a7e-b808-573d201ec227 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:54:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:54:47.727112220Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 87e6425b30579a1cbd451c1901a1fd353aefbe97c6161d70bf0a1fbe6a10e3c5" id=f021144b-18cb-4a7e-b808-573d201ec227 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:54:47 ip-10-0-136-68 crio[2158]: time="2023-02-23
17:54:47.727143246Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:aa2f6c1cfe2015e661750ad0d84d67c8d8f79e105601e0a761db6a83ba33b3f1 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS: Networks:[{Name:multus-cni-network Ifname:eth0}] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:54:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:54:47.727321751Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:54:47 ip-10-0-136-68 systemd[1]: run-utsns-f8902cac\x2d3c48\x2d4022\x2dafd8\x2d83f7301c23a2.mount: Deactivated successfully. Feb 23 17:54:47 ip-10-0-136-68 systemd[1]: run-ipcns-f8902cac\x2d3c48\x2d4022\x2dafd8\x2d83f7301c23a2.mount: Deactivated successfully. Feb 23 17:54:47 ip-10-0-136-68 systemd[1]: run-netns-f8902cac\x2d3c48\x2d4022\x2dafd8\x2d83f7301c23a2.mount: Deactivated successfully. 
Feb 23 17:54:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:54:47.747338331Z" level=info msg="runSandbox: deleting pod ID 87e6425b30579a1cbd451c1901a1fd353aefbe97c6161d70bf0a1fbe6a10e3c5 from idIndex" id=f021144b-18cb-4a7e-b808-573d201ec227 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:54:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:54:47.747378822Z" level=info msg="runSandbox: removing pod sandbox 87e6425b30579a1cbd451c1901a1fd353aefbe97c6161d70bf0a1fbe6a10e3c5" id=f021144b-18cb-4a7e-b808-573d201ec227 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:54:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:54:47.747413646Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 87e6425b30579a1cbd451c1901a1fd353aefbe97c6161d70bf0a1fbe6a10e3c5" id=f021144b-18cb-4a7e-b808-573d201ec227 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:54:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:54:47.747435420Z" level=info msg="runSandbox: unmounting shmPath for sandbox 87e6425b30579a1cbd451c1901a1fd353aefbe97c6161d70bf0a1fbe6a10e3c5" id=f021144b-18cb-4a7e-b808-573d201ec227 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:54:47 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-87e6425b30579a1cbd451c1901a1fd353aefbe97c6161d70bf0a1fbe6a10e3c5-userdata-shm.mount: Deactivated successfully. 
Feb 23 17:54:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:54:47.753322563Z" level=info msg="runSandbox: removing pod sandbox from storage: 87e6425b30579a1cbd451c1901a1fd353aefbe97c6161d70bf0a1fbe6a10e3c5" id=f021144b-18cb-4a7e-b808-573d201ec227 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:54:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:54:47.754984448Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=f021144b-18cb-4a7e-b808-573d201ec227 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:54:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:54:47.755015841Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=f021144b-18cb-4a7e-b808-573d201ec227 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:54:47 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:54:47.755204 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(87e6425b30579a1cbd451c1901a1fd353aefbe97c6161d70bf0a1fbe6a10e3c5): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 17:54:47 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:54:47.755296 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(87e6425b30579a1cbd451c1901a1fd353aefbe97c6161d70bf0a1fbe6a10e3c5): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 17:54:47 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:54:47.755349 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(87e6425b30579a1cbd451c1901a1fd353aefbe97c6161d70bf0a1fbe6a10e3c5): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 17:54:47 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:54:47.755436 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(87e6425b30579a1cbd451c1901a1fd353aefbe97c6161d70bf0a1fbe6a10e3c5): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 17:54:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:54:54.872111 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 17:54:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:54:54.872174 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 17:54:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:54:54.872202 2199 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 17:54:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:54:54.872727 2199 kuberuntime_manager.go:659] "Message for Container of pod" containerName="csi-driver" containerStatusID={Type:cri-o ID:8e00b61f6b420c6a4ffb241cb84da91094d7dbb659894173f86d1088e6525936} pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" containerMessage="Container csi-driver failed liveness probe, will be restarted" Feb 23 17:54:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:54:54.872886 2199 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" containerID="cri-o://8e00b61f6b420c6a4ffb241cb84da91094d7dbb659894173f86d1088e6525936" gracePeriod=30 Feb 23 17:54:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 
17:54:54.873121540Z" level=info msg="Stopping container: 8e00b61f6b420c6a4ffb241cb84da91094d7dbb659894173f86d1088e6525936 (timeout: 30s)" id=2ad5615a-cd33-4e4e-8624-1bcd4f70a97d name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:54:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:54:56.292623 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 17:54:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:54:56.293081 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 17:54:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:54:56.293339 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 17:54:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:54:56.293360 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or 
running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 17:54:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:54:58.636068806Z" level=warning msg="Failed to find container exit file for 8e00b61f6b420c6a4ffb241cb84da91094d7dbb659894173f86d1088e6525936: timed out waiting for the condition" id=2ad5615a-cd33-4e4e-8624-1bcd4f70a97d name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:54:58 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-333b053cc835e987c040121844a7dbfa14f818def2e8bc730f54497ab0d83719-merged.mount: Deactivated successfully. Feb 23 17:55:02 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:55:02.217221 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 17:55:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:02.217708664Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=f64bb5cf-38b8-4423-86e0-dfdf57c35c31 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:55:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:02.217779897Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:55:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:02.406118384Z" level=warning msg="Failed to find container exit file for 8e00b61f6b420c6a4ffb241cb84da91094d7dbb659894173f86d1088e6525936: timed out waiting for the condition" id=2ad5615a-cd33-4e4e-8624-1bcd4f70a97d name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:55:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:02.408354010Z" level=info msg="Stopped container 
8e00b61f6b420c6a4ffb241cb84da91094d7dbb659894173f86d1088e6525936: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=2ad5615a-cd33-4e4e-8624-1bcd4f70a97d name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:55:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:02.409126285Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=66ad6a21-fe24-4703-adb0-9bc07aa0af71 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:55:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:02.409324992Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=66ad6a21-fe24-4703-adb0-9bc07aa0af71 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:55:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:02.409927965Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=4c8d49a2-3ef1-4745-9967-a8fe10f87807 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:55:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:02.410079029Z" level=info msg="Image status: 
&ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=4c8d49a2-3ef1-4745-9967-a8fe10f87807 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:55:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:02.410698195Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=58350a5b-e60a-4a38-9cbc-8d3763cc0f34 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:55:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:02.410800739Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:55:02 ip-10-0-136-68 systemd[1]: Started crio-conmon-19de5d33c85c1857e96690d0ec27e835432571da314603c531b5477c7871c9d1.scope. Feb 23 17:55:02 ip-10-0-136-68 systemd[1]: Started libcontainer container 19de5d33c85c1857e96690d0ec27e835432571da314603c531b5477c7871c9d1. Feb 23 17:55:02 ip-10-0-136-68 conmon[4371]: conmon 19de5d33c85c1857e966 : Failed to write to cgroup.event_control Operation not supported Feb 23 17:55:02 ip-10-0-136-68 systemd[1]: crio-conmon-19de5d33c85c1857e96690d0ec27e835432571da314603c531b5477c7871c9d1.scope: Deactivated successfully. 
Feb 23 17:55:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:02.547520839Z" level=info msg="Created container 19de5d33c85c1857e96690d0ec27e835432571da314603c531b5477c7871c9d1: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=58350a5b-e60a-4a38-9cbc-8d3763cc0f34 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:55:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:02.547962190Z" level=info msg="Starting container: 19de5d33c85c1857e96690d0ec27e835432571da314603c531b5477c7871c9d1" id=626ee96b-bdea-4605-969e-cf99eae6c596 name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:55:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:02.555294814Z" level=info msg="Started container" PID=4382 containerID=19de5d33c85c1857e96690d0ec27e835432571da314603c531b5477c7871c9d1 description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver id=626ee96b-bdea-4605-969e-cf99eae6c596 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4ac357af36e7dd4792f93249425e93fd84aed45840e9fbb3d0ae6c0af964798 Feb 23 17:55:02 ip-10-0-136-68 systemd[1]: crio-19de5d33c85c1857e96690d0ec27e835432571da314603c531b5477c7871c9d1.scope: Deactivated successfully. 
Feb 23 17:55:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:03.150988134Z" level=warning msg="Failed to find container exit file for 8e00b61f6b420c6a4ffb241cb84da91094d7dbb659894173f86d1088e6525936: timed out waiting for the condition" id=883acf0f-1d46-4844-9297-0d7b49987b8c name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 17:55:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:05.529931802Z" level=error msg="Failed to cleanup (probably retrying): failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(aa2f6c1cfe2015e661750ad0d84d67c8d8f79e105601e0a761db6a83ba33b3f1): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): netplugin failed with no error message: signal: killed" Feb 23 17:55:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:05.530634434Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:251c9586491a05f3e474b67aa8e2c9a47759c5a6218fa2bff08b4871d9c817fb UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/0270666e-47e7-4909-9340-402c2c35b066 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:55:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:05.530661407Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:55:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:06.899953210Z" level=warning msg="Failed to find container exit file for f544215d4fe28d4e76e266cb38a7a5fae39c8ca3808f88dc85221a9ae201ca2a: timed out waiting for the condition" id=98ca2207-5bfc-4fa8-857c-fcfc8b844347 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 17:55:06 ip-10-0-136-68 kubenswrapper[2199]: I0223 
17:55:06.900959 2199 generic.go:332] "Generic (PLEG): container finished" podID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerID="8e00b61f6b420c6a4ffb241cb84da91094d7dbb659894173f86d1088e6525936" exitCode=-1 Feb 23 17:55:06 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:55:06.900969 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerDied Data:8e00b61f6b420c6a4ffb241cb84da91094d7dbb659894173f86d1088e6525936} Feb 23 17:55:06 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:55:06.901012 2199 scope.go:115] "RemoveContainer" containerID="f544215d4fe28d4e76e266cb38a7a5fae39c8ca3808f88dc85221a9ae201ca2a" Feb 23 17:55:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:08.062142899Z" level=info msg="cleanup sandbox network" Feb 23 17:55:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:10.651304579Z" level=warning msg="Failed to find container exit file for f544215d4fe28d4e76e266cb38a7a5fae39c8ca3808f88dc85221a9ae201ca2a: timed out waiting for the condition" id=8dd9860a-cca4-4b39-a51f-1e90a3d1ae8b name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 17:55:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:11.664136112Z" level=warning msg="Failed to find container exit file for 8e00b61f6b420c6a4ffb241cb84da91094d7dbb659894173f86d1088e6525936: timed out waiting for the condition" id=aca2a5e6-1274-4fb0-8274-59cc4cd0fb4f name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 17:55:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:14.401117436Z" level=warning msg="Failed to find container exit file for f544215d4fe28d4e76e266cb38a7a5fae39c8ca3808f88dc85221a9ae201ca2a: timed out waiting for the condition" id=1f8e5200-c46f-42a2-8843-80b56aa940c7 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 17:55:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:14.401693147Z" level=info msg="Removing container: 
f544215d4fe28d4e76e266cb38a7a5fae39c8ca3808f88dc85221a9ae201ca2a" id=336549c1-e833-4bdd-a4df-72607a81fb93 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:55:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:15.425175688Z" level=warning msg="Failed to find container exit file for f544215d4fe28d4e76e266cb38a7a5fae39c8ca3808f88dc85221a9ae201ca2a: timed out waiting for the condition" id=0f4201ec-aed8-4d9c-9a59-ef813ef06389 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 17:55:15 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:55:15.426171 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerStarted Data:19de5d33c85c1857e96690d0ec27e835432571da314603c531b5477c7871c9d1} Feb 23 17:55:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:18.151318949Z" level=warning msg="Failed to find container exit file for f544215d4fe28d4e76e266cb38a7a5fae39c8ca3808f88dc85221a9ae201ca2a: timed out waiting for the condition" id=336549c1-e833-4bdd-a4df-72607a81fb93 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:55:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:18.174804662Z" level=info msg="Removed container f544215d4fe28d4e76e266cb38a7a5fae39c8ca3808f88dc85221a9ae201ca2a: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=336549c1-e833-4bdd-a4df-72607a81fb93 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:55:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:18.255237412Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(56b0e8a42562c8bb044518c574fba332555ea488177c88f81ff804a94144593a): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): 
Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c5be7d59-00e0-4ca2-adca-22320e696476 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:55:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:18.255304267Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 56b0e8a42562c8bb044518c574fba332555ea488177c88f81ff804a94144593a" id=c5be7d59-00e0-4ca2-adca-22320e696476 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:55:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:18.257391779Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(bb4b48771f37ecb27ff43784f3b3fc9778e3a0b5eafc64abd5e7b2e6bcbf98b2): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b37583dd-d931-4227-a400-35bc3daa8b63 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:55:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:18.257429058Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox bb4b48771f37ecb27ff43784f3b3fc9778e3a0b5eafc64abd5e7b2e6bcbf98b2" id=b37583dd-d931-4227-a400-35bc3daa8b63 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:55:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:18.257672419Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox 
k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(05692da462de5153490575e7051ce8192d941d7e34bb6fe5798f6fabfe33668c): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=4649c4b9-e71f-48c8-808c-9e92fb01b1dc name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:55:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:18.257721411Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 05692da462de5153490575e7051ce8192d941d7e34bb6fe5798f6fabfe33668c" id=4649c4b9-e71f-48c8-808c-9e92fb01b1dc name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:55:18 ip-10-0-136-68 systemd[1]: run-utsns-9d4571f7\x2de645\x2d437c\x2d9438\x2dcb47bdae97b5.mount: Deactivated successfully. Feb 23 17:55:18 ip-10-0-136-68 systemd[1]: run-utsns-80252fe5\x2d1a94\x2d4d92\x2da962\x2d583b6f9cd9cc.mount: Deactivated successfully. Feb 23 17:55:18 ip-10-0-136-68 systemd[1]: run-utsns-c04f5f0c\x2d3ac3\x2d42d3\x2daeab\x2dd162f7da74d0.mount: Deactivated successfully. Feb 23 17:55:18 ip-10-0-136-68 systemd[1]: run-ipcns-9d4571f7\x2de645\x2d437c\x2d9438\x2dcb47bdae97b5.mount: Deactivated successfully. Feb 23 17:55:18 ip-10-0-136-68 systemd[1]: run-ipcns-c04f5f0c\x2d3ac3\x2d42d3\x2daeab\x2dd162f7da74d0.mount: Deactivated successfully. Feb 23 17:55:18 ip-10-0-136-68 systemd[1]: run-ipcns-80252fe5\x2d1a94\x2d4d92\x2da962\x2d583b6f9cd9cc.mount: Deactivated successfully. Feb 23 17:55:18 ip-10-0-136-68 systemd[1]: run-netns-9d4571f7\x2de645\x2d437c\x2d9438\x2dcb47bdae97b5.mount: Deactivated successfully. 
Feb 23 17:55:18 ip-10-0-136-68 systemd[1]: run-netns-c04f5f0c\x2d3ac3\x2d42d3\x2daeab\x2dd162f7da74d0.mount: Deactivated successfully. Feb 23 17:55:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:18.275310957Z" level=info msg="runSandbox: deleting pod ID 05692da462de5153490575e7051ce8192d941d7e34bb6fe5798f6fabfe33668c from idIndex" id=4649c4b9-e71f-48c8-808c-9e92fb01b1dc name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:55:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:18.275346376Z" level=info msg="runSandbox: removing pod sandbox 05692da462de5153490575e7051ce8192d941d7e34bb6fe5798f6fabfe33668c" id=4649c4b9-e71f-48c8-808c-9e92fb01b1dc name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:55:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:18.275380295Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 05692da462de5153490575e7051ce8192d941d7e34bb6fe5798f6fabfe33668c" id=4649c4b9-e71f-48c8-808c-9e92fb01b1dc name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:55:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:18.275404386Z" level=info msg="runSandbox: unmounting shmPath for sandbox 05692da462de5153490575e7051ce8192d941d7e34bb6fe5798f6fabfe33668c" id=4649c4b9-e71f-48c8-808c-9e92fb01b1dc name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:55:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:18.278314470Z" level=info msg="runSandbox: deleting pod ID 56b0e8a42562c8bb044518c574fba332555ea488177c88f81ff804a94144593a from idIndex" id=c5be7d59-00e0-4ca2-adca-22320e696476 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:55:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:18.278339794Z" level=info msg="runSandbox: removing pod sandbox 56b0e8a42562c8bb044518c574fba332555ea488177c88f81ff804a94144593a" id=c5be7d59-00e0-4ca2-adca-22320e696476 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:55:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:18.278372025Z" level=info msg="runSandbox: deleting 
container ID from idIndex for sandbox 56b0e8a42562c8bb044518c574fba332555ea488177c88f81ff804a94144593a" id=c5be7d59-00e0-4ca2-adca-22320e696476 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:55:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:18.278392583Z" level=info msg="runSandbox: unmounting shmPath for sandbox 56b0e8a42562c8bb044518c574fba332555ea488177c88f81ff804a94144593a" id=c5be7d59-00e0-4ca2-adca-22320e696476 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:55:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:18.278340419Z" level=info msg="runSandbox: deleting pod ID bb4b48771f37ecb27ff43784f3b3fc9778e3a0b5eafc64abd5e7b2e6bcbf98b2 from idIndex" id=b37583dd-d931-4227-a400-35bc3daa8b63 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:55:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:18.278464203Z" level=info msg="runSandbox: removing pod sandbox bb4b48771f37ecb27ff43784f3b3fc9778e3a0b5eafc64abd5e7b2e6bcbf98b2" id=b37583dd-d931-4227-a400-35bc3daa8b63 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:55:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:18.278486323Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox bb4b48771f37ecb27ff43784f3b3fc9778e3a0b5eafc64abd5e7b2e6bcbf98b2" id=b37583dd-d931-4227-a400-35bc3daa8b63 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:55:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:18.278504177Z" level=info msg="runSandbox: unmounting shmPath for sandbox bb4b48771f37ecb27ff43784f3b3fc9778e3a0b5eafc64abd5e7b2e6bcbf98b2" id=b37583dd-d931-4227-a400-35bc3daa8b63 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:55:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:18.286293684Z" level=info msg="runSandbox: removing pod sandbox from storage: 05692da462de5153490575e7051ce8192d941d7e34bb6fe5798f6fabfe33668c" id=4649c4b9-e71f-48c8-808c-9e92fb01b1dc name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:55:18 ip-10-0-136-68 crio[2158]: 
time="2023-02-23 17:55:18.287742413Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=4649c4b9-e71f-48c8-808c-9e92fb01b1dc name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:55:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:18.287774012Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=4649c4b9-e71f-48c8-808c-9e92fb01b1dc name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:55:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:55:18.287984 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(05692da462de5153490575e7051ce8192d941d7e34bb6fe5798f6fabfe33668c): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 17:55:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:55:18.288044 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(05692da462de5153490575e7051ce8192d941d7e34bb6fe5798f6fabfe33668c): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 17:55:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:55:18.288069 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(05692da462de5153490575e7051ce8192d941d7e34bb6fe5798f6fabfe33668c): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 17:55:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:55:18.288123 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(05692da462de5153490575e7051ce8192d941d7e34bb6fe5798f6fabfe33668c): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 17:55:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:18.290308642Z" level=info msg="runSandbox: removing pod sandbox from storage: 56b0e8a42562c8bb044518c574fba332555ea488177c88f81ff804a94144593a" id=c5be7d59-00e0-4ca2-adca-22320e696476 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:55:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:18.290309556Z" level=info msg="runSandbox: removing pod sandbox from storage: bb4b48771f37ecb27ff43784f3b3fc9778e3a0b5eafc64abd5e7b2e6bcbf98b2" id=b37583dd-d931-4227-a400-35bc3daa8b63 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:55:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:18.291707102Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=c5be7d59-00e0-4ca2-adca-22320e696476 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:55:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:18.291732621Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=c5be7d59-00e0-4ca2-adca-22320e696476 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:55:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:55:18.291898 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(56b0e8a42562c8bb044518c574fba332555ea488177c88f81ff804a94144593a): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: 
[openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Feb 23 17:55:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:55:18.291958 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(56b0e8a42562c8bb044518c574fba332555ea488177c88f81ff804a94144593a): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 17:55:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:55:18.291995 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(56b0e8a42562c8bb044518c574fba332555ea488177c88f81ff804a94144593a): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 17:55:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:55:18.292059 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(56b0e8a42562c8bb044518c574fba332555ea488177c88f81ff804a94144593a): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 17:55:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:18.293136455Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=b37583dd-d931-4227-a400-35bc3daa8b63 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:55:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:18.293162872Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=b37583dd-d931-4227-a400-35bc3daa8b63 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:55:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:55:18.293331 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(bb4b48771f37ecb27ff43784f3b3fc9778e3a0b5eafc64abd5e7b2e6bcbf98b2): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 17:55:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:55:18.293369 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(bb4b48771f37ecb27ff43784f3b3fc9778e3a0b5eafc64abd5e7b2e6bcbf98b2): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 17:55:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:55:18.293396 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(bb4b48771f37ecb27ff43784f3b3fc9778e3a0b5eafc64abd5e7b2e6bcbf98b2): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 17:55:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:55:18.293445 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(bb4b48771f37ecb27ff43784f3b3fc9778e3a0b5eafc64abd5e7b2e6bcbf98b2): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 17:55:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:19.245174764Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(3f3bddb46b3ba200aeabdad1eb1f97f0d3815b4dbfe22e2b99a9ee38d42db44a): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=37933e70-77d1-40d1-a4c2-d2e21e25fb0b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:55:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:19.245217890Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 3f3bddb46b3ba200aeabdad1eb1f97f0d3815b4dbfe22e2b99a9ee38d42db44a" id=37933e70-77d1-40d1-a4c2-d2e21e25fb0b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:55:19 ip-10-0-136-68 systemd[1]: run-netns-2f092df5\x2d9901\x2d40a8\x2d8040\x2d987d222899f4.mount: Deactivated successfully. Feb 23 17:55:19 ip-10-0-136-68 systemd[1]: run-ipcns-2f092df5\x2d9901\x2d40a8\x2d8040\x2d987d222899f4.mount: Deactivated successfully. Feb 23 17:55:19 ip-10-0-136-68 systemd[1]: run-utsns-2f092df5\x2d9901\x2d40a8\x2d8040\x2d987d222899f4.mount: Deactivated successfully. Feb 23 17:55:19 ip-10-0-136-68 systemd[1]: run-netns-80252fe5\x2d1a94\x2d4d92\x2da962\x2d583b6f9cd9cc.mount: Deactivated successfully. Feb 23 17:55:19 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-bb4b48771f37ecb27ff43784f3b3fc9778e3a0b5eafc64abd5e7b2e6bcbf98b2-userdata-shm.mount: Deactivated successfully. 
Feb 23 17:55:19 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-56b0e8a42562c8bb044518c574fba332555ea488177c88f81ff804a94144593a-userdata-shm.mount: Deactivated successfully. Feb 23 17:55:19 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-05692da462de5153490575e7051ce8192d941d7e34bb6fe5798f6fabfe33668c-userdata-shm.mount: Deactivated successfully. Feb 23 17:55:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:19.263339884Z" level=info msg="runSandbox: deleting pod ID 3f3bddb46b3ba200aeabdad1eb1f97f0d3815b4dbfe22e2b99a9ee38d42db44a from idIndex" id=37933e70-77d1-40d1-a4c2-d2e21e25fb0b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:55:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:19.263381072Z" level=info msg="runSandbox: removing pod sandbox 3f3bddb46b3ba200aeabdad1eb1f97f0d3815b4dbfe22e2b99a9ee38d42db44a" id=37933e70-77d1-40d1-a4c2-d2e21e25fb0b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:55:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:19.263410084Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 3f3bddb46b3ba200aeabdad1eb1f97f0d3815b4dbfe22e2b99a9ee38d42db44a" id=37933e70-77d1-40d1-a4c2-d2e21e25fb0b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:55:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:19.263424633Z" level=info msg="runSandbox: unmounting shmPath for sandbox 3f3bddb46b3ba200aeabdad1eb1f97f0d3815b4dbfe22e2b99a9ee38d42db44a" id=37933e70-77d1-40d1-a4c2-d2e21e25fb0b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:55:19 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-3f3bddb46b3ba200aeabdad1eb1f97f0d3815b4dbfe22e2b99a9ee38d42db44a-userdata-shm.mount: Deactivated successfully. 
Feb 23 17:55:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:19.267318253Z" level=info msg="runSandbox: removing pod sandbox from storage: 3f3bddb46b3ba200aeabdad1eb1f97f0d3815b4dbfe22e2b99a9ee38d42db44a" id=37933e70-77d1-40d1-a4c2-d2e21e25fb0b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:55:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:19.268853627Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=37933e70-77d1-40d1-a4c2-d2e21e25fb0b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:55:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:19.268882429Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=37933e70-77d1-40d1-a4c2-d2e21e25fb0b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:55:19 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:55:19.269098 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(3f3bddb46b3ba200aeabdad1eb1f97f0d3815b4dbfe22e2b99a9ee38d42db44a): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 17:55:19 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:55:19.269162 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(3f3bddb46b3ba200aeabdad1eb1f97f0d3815b4dbfe22e2b99a9ee38d42db44a): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 17:55:19 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:55:19.269190 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(3f3bddb46b3ba200aeabdad1eb1f97f0d3815b4dbfe22e2b99a9ee38d42db44a): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 17:55:19 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:55:19.269268 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(3f3bddb46b3ba200aeabdad1eb1f97f0d3815b4dbfe22e2b99a9ee38d42db44a): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 17:55:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:55:20.003180 2199 kubelet.go:1409] "Image garbage collection succeeded" Feb 23 17:55:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:20.169466730Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3" id=79e5758b-31b8-4bcd-a8cf-774989c41678 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:55:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:20.169644169Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:51cb7555d0a960fc0daae74a5b5c47c19fd1ae7bd31e2616ebc88f696f511e4d,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3],Size_:351828271,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=79e5758b-31b8-4bcd-a8cf-774989c41678 name=/runtime.v1.ImageService/ImageStatus Feb 23 17:55:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:22.193936020Z" level=warning msg="Failed to find container exit file for 8e00b61f6b420c6a4ffb241cb84da91094d7dbb659894173f86d1088e6525936: timed out waiting for the condition" id=fddefa74-a334-4e12-ba1a-b1e8278823b5 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 17:55:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:55:24.872389 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 17:55:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:55:24.872443 2199 prober.go:109] "Probe 
failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 17:55:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:55:26.292459 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 17:55:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:55:26.292698 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 17:55:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:55:26.292891 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 17:55:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:55:26.292933 2199 prober.go:106] "Probe 
errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 17:55:29 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:55:29.216992 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 17:55:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:29.217289454Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=55871494-7e74-462c-8dc1-8278b4773bec name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:55:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:29.217357266Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:55:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:29.222541241Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:46d959821ca43faab73fa404b5aa0d797faadaeae4970e4fa9188ad87dbf6f65 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/29020238-6653-44dc-8e1c-e0dfdd2d193d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:55:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:29.222564699Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:55:30 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:55:30.217491 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 17:55:30 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:55:30.218450 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 17:55:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:30.217805036Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=c47a42f0-c225-4ae2-8973-d29701f2d24b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:55:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:30.217856709Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:55:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:30.218680434Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=8faabc0d-f1fb-4f94-9fb7-2c8251f1bb8b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:55:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:30.218740182Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:55:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:30.226345473Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:19f341e17755d38f7a6594d80a859e350979eb3457a1a719def0ac82a3bad5b4 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/a441d524-b242-43d1-a893-5d03ecc422cc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:55:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:30.226374728Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:da1a44b469cb1425dd74a07158f998772c42f70c61e114c796da38fe772d6e7c UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 
NetNS:/var/run/netns/9dd0a535-de6d-41c4-891d-0b824b43cbb8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:55:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:30.226408719Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:55:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:30.226377784Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:55:31 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:55:31.217335 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 17:55:31 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:55:31.217685 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 17:55:31 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:55:31.217971 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: 
open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 17:55:31 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:55:31.218000 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 17:55:32 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:55:32.216690 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 17:55:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:32.217063144Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=656f4820-98e7-44c0-aa43-e6afeabb51f4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:55:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:32.217131199Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:55:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:32.222980279Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:614ec55053185ab2e413e6e1073e166a13880a695936f1d50a484f1d59837745 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/62c3e258-7af1-4160-8190-2441d9d38c06 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:55:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:32.223010150Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:55:34 ip-10-0-136-68 
kubenswrapper[2199]: I0223 17:55:34.872925 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 17:55:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:55:34.872996 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 17:55:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:55:44.872383 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 17:55:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:55:44.872444 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 17:55:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:50.542075887Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:aa2f6c1cfe2015e661750ad0d84d67c8d8f79e105601e0a761db6a83ba33b3f1 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS: Networks:[{Name:multus-cni-network Ifname:eth0}] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:55:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 
17:55:50.542084665Z" level=info msg="NetworkStart: stopping network for sandbox 251c9586491a05f3e474b67aa8e2c9a47759c5a6218fa2bff08b4871d9c817fb" id=f64bb5cf-38b8-4423-86e0-dfdf57c35c31 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:55:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:55:50.542312742Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:55:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:55:54.872678 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 17:55:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:55:54.872749 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 17:55:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:55:56.291630 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 17:55:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:55:56.291974 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 17:55:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:55:56.292150 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 17:55:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:55:56.292186 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 17:56:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:56:04.872970 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 17:56:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:56:04.873036 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get 
\"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 17:56:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:56:04.873068 2199 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 17:56:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:56:04.873717 2199 kuberuntime_manager.go:659] "Message for Container of pod" containerName="csi-driver" containerStatusID={Type:cri-o ID:19de5d33c85c1857e96690d0ec27e835432571da314603c531b5477c7871c9d1} pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" containerMessage="Container csi-driver failed liveness probe, will be restarted" Feb 23 17:56:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:56:04.873925 2199 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" containerID="cri-o://19de5d33c85c1857e96690d0ec27e835432571da314603c531b5477c7871c9d1" gracePeriod=30 Feb 23 17:56:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:04.874169251Z" level=info msg="Stopping container: 19de5d33c85c1857e96690d0ec27e835432571da314603c531b5477c7871c9d1 (timeout: 30s)" id=a6c71492-aa67-4a16-9b9e-70851f9df945 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:56:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:08.063295200Z" level=error msg="Failed to cleanup (probably retrying): failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(aa2f6c1cfe2015e661750ad0d84d67c8d8f79e105601e0a761db6a83ba33b3f1): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): netplugin failed with no error message: signal: killed" Feb 23 17:56:08 
ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:08.063351406Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:251c9586491a05f3e474b67aa8e2c9a47759c5a6218fa2bff08b4871d9c817fb UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/0270666e-47e7-4909-9340-402c2c35b066 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:56:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:08.063402318Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 17:56:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:08.063410482Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 17:56:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:08.063418651Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:56:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:08.635937923Z" level=warning msg="Failed to find container exit file for 19de5d33c85c1857e96690d0ec27e835432571da314603c531b5477c7871c9d1: timed out waiting for the condition" id=a6c71492-aa67-4a16-9b9e-70851f9df945 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:56:08 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-6736895dd9d2c9fb14f7f4c750e11f7476dcfa901f2fbf9d3a8cfae2cc6edfd6-merged.mount: Deactivated successfully. 
Feb 23 17:56:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:11.860315913Z" level=info msg="cleanup sandbox network" Feb 23 17:56:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:12.409941325Z" level=warning msg="Failed to find container exit file for 19de5d33c85c1857e96690d0ec27e835432571da314603c531b5477c7871c9d1: timed out waiting for the condition" id=a6c71492-aa67-4a16-9b9e-70851f9df945 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:56:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:12.411313649Z" level=info msg="Stopped container 19de5d33c85c1857e96690d0ec27e835432571da314603c531b5477c7871c9d1: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=a6c71492-aa67-4a16-9b9e-70851f9df945 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:56:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:12.411960424Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=5c341c61-2715-4996-96bb-8a00295d530d name=/runtime.v1.ImageService/ImageStatus Feb 23 17:56:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:12.412135006Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=5c341c61-2715-4996-96bb-8a00295d530d name=/runtime.v1.ImageService/ImageStatus Feb 23 17:56:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:12.412675978Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" 
id=d8c78b03-925e-496f-8fee-274e101aea1e name=/runtime.v1.ImageService/ImageStatus Feb 23 17:56:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:12.412820116Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=d8c78b03-925e-496f-8fee-274e101aea1e name=/runtime.v1.ImageService/ImageStatus Feb 23 17:56:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:12.413417897Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=b0f03eb1-a360-4570-b573-864ff95ce06d name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:56:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:12.413510705Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:56:12 ip-10-0-136-68 systemd[1]: Started crio-conmon-97f95a5f7e54231d4a0b3721b7233ab0eb8d5808258dad6c3b6c8148bbb6a992.scope. Feb 23 17:56:12 ip-10-0-136-68 systemd[1]: Started libcontainer container 97f95a5f7e54231d4a0b3721b7233ab0eb8d5808258dad6c3b6c8148bbb6a992. Feb 23 17:56:12 ip-10-0-136-68 conmon[4616]: conmon 97f95a5f7e54231d4a0b : Failed to write to cgroup.event_control Operation not supported Feb 23 17:56:12 ip-10-0-136-68 systemd[1]: crio-conmon-97f95a5f7e54231d4a0b3721b7233ab0eb8d5808258dad6c3b6c8148bbb6a992.scope: Deactivated successfully. 
Feb 23 17:56:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:12.545562691Z" level=info msg="Created container 97f95a5f7e54231d4a0b3721b7233ab0eb8d5808258dad6c3b6c8148bbb6a992: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=b0f03eb1-a360-4570-b573-864ff95ce06d name=/runtime.v1.RuntimeService/CreateContainer Feb 23 17:56:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:12.545993618Z" level=info msg="Starting container: 97f95a5f7e54231d4a0b3721b7233ab0eb8d5808258dad6c3b6c8148bbb6a992" id=f60a6b47-eb3f-4c21-bdc1-db24f06f3f5b name=/runtime.v1.RuntimeService/StartContainer Feb 23 17:56:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:12.552821490Z" level=info msg="Started container" PID=4628 containerID=97f95a5f7e54231d4a0b3721b7233ab0eb8d5808258dad6c3b6c8148bbb6a992 description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver id=f60a6b47-eb3f-4c21-bdc1-db24f06f3f5b name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4ac357af36e7dd4792f93249425e93fd84aed45840e9fbb3d0ae6c0af964798 Feb 23 17:56:12 ip-10-0-136-68 systemd[1]: crio-97f95a5f7e54231d4a0b3721b7233ab0eb8d5808258dad6c3b6c8148bbb6a992.scope: Deactivated successfully. 
Feb 23 17:56:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:13.005067392Z" level=warning msg="Failed to find container exit file for 19de5d33c85c1857e96690d0ec27e835432571da314603c531b5477c7871c9d1: timed out waiting for the condition" id=00955e97-daa4-4323-9a97-b5dcd233cac7 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 17:56:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:14.233555511Z" level=info msg="NetworkStart: stopping network for sandbox 46d959821ca43faab73fa404b5aa0d797faadaeae4970e4fa9188ad87dbf6f65" id=55871494-7e74-462c-8dc1-8278b4773bec name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:56:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:14.233676491Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:46d959821ca43faab73fa404b5aa0d797faadaeae4970e4fa9188ad87dbf6f65 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/29020238-6653-44dc-8e1c-e0dfdd2d193d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:56:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:14.233714826Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 17:56:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:14.233727231Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 17:56:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:14.233736708Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:56:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:15.240873122Z" level=info msg="NetworkStart: stopping network for sandbox da1a44b469cb1425dd74a07158f998772c42f70c61e114c796da38fe772d6e7c" id=c47a42f0-c225-4ae2-8973-d29701f2d24b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:56:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 
17:56:15.241032127Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:da1a44b469cb1425dd74a07158f998772c42f70c61e114c796da38fe772d6e7c UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/9dd0a535-de6d-41c4-891d-0b824b43cbb8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:56:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:15.241073545Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 17:56:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:15.241084797Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 17:56:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:15.241095049Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:56:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:15.242116326Z" level=info msg="NetworkStart: stopping network for sandbox 19f341e17755d38f7a6594d80a859e350979eb3457a1a719def0ac82a3bad5b4" id=8faabc0d-f1fb-4f94-9fb7-2c8251f1bb8b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:56:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:15.242209189Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:19f341e17755d38f7a6594d80a859e350979eb3457a1a719def0ac82a3bad5b4 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/a441d524-b242-43d1-a893-5d03ecc422cc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:56:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:15.242237651Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 17:56:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:15.242270545Z" 
level=warning msg="falling back to loading from existing plugins on disk" Feb 23 17:56:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:15.242280537Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:56:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:16.753969228Z" level=warning msg="Failed to find container exit file for 8e00b61f6b420c6a4ffb241cb84da91094d7dbb659894173f86d1088e6525936: timed out waiting for the condition" id=b6b06684-a688-47f1-873f-7ca203dc329d name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 17:56:16 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:56:16.754932 2199 generic.go:332] "Generic (PLEG): container finished" podID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerID="19de5d33c85c1857e96690d0ec27e835432571da314603c531b5477c7871c9d1" exitCode=-1 Feb 23 17:56:16 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:56:16.754990 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerDied Data:19de5d33c85c1857e96690d0ec27e835432571da314603c531b5477c7871c9d1} Feb 23 17:56:16 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:56:16.755035 2199 scope.go:115] "RemoveContainer" containerID="8e00b61f6b420c6a4ffb241cb84da91094d7dbb659894173f86d1088e6525936" Feb 23 17:56:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:17.234716617Z" level=info msg="NetworkStart: stopping network for sandbox 614ec55053185ab2e413e6e1073e166a13880a695936f1d50a484f1d59837745" id=656f4820-98e7-44c0-aa43-e6afeabb51f4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:56:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:17.234853535Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:614ec55053185ab2e413e6e1073e166a13880a695936f1d50a484f1d59837745 UID:757b7544-c265-49ce-a1f0-22cca4bf919f 
NetNS:/var/run/netns/62c3e258-7af1-4160-8190-2441d9d38c06 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:56:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:17.234890202Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 17:56:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:17.234900483Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 17:56:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:17.234909384Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:56:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:20.504036490Z" level=warning msg="Failed to find container exit file for 8e00b61f6b420c6a4ffb241cb84da91094d7dbb659894173f86d1088e6525936: timed out waiting for the condition" id=65648cad-671c-4715-a5d1-077052bcd73f name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 17:56:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:21.520632515Z" level=warning msg="Failed to find container exit file for 19de5d33c85c1857e96690d0ec27e835432571da314603c531b5477c7871c9d1: timed out waiting for the condition" id=1dca0778-36d9-487b-8601-4aad9e3e8a43 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 17:56:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:24.254155895Z" level=warning msg="Failed to find container exit file for 8e00b61f6b420c6a4ffb241cb84da91094d7dbb659894173f86d1088e6525936: timed out waiting for the condition" id=1202e401-4e6d-4caf-b19a-c432e8bc5880 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 17:56:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:24.254707622Z" level=info msg="Removing container: 8e00b61f6b420c6a4ffb241cb84da91094d7dbb659894173f86d1088e6525936" id=96b0cbe5-c56f-4687-a067-88c15a0c46f5 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:56:25 
ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:25.272158189Z" level=warning msg="Failed to find container exit file for 8e00b61f6b420c6a4ffb241cb84da91094d7dbb659894173f86d1088e6525936: timed out waiting for the condition" id=67c020a2-edd0-4993-b8bb-4cd5bbe7dc15 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 17:56:25 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:56:25.273191 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerStarted Data:97f95a5f7e54231d4a0b3721b7233ab0eb8d5808258dad6c3b6c8148bbb6a992} Feb 23 17:56:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:56:26.292586 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 17:56:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:56:26.292833 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 17:56:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:56:26.293034 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 17:56:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:56:26.293063 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 17:56:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:28.013948668Z" level=warning msg="Failed to find container exit file for 8e00b61f6b420c6a4ffb241cb84da91094d7dbb659894173f86d1088e6525936: timed out waiting for the condition" id=96b0cbe5-c56f-4687-a067-88c15a0c46f5 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:56:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:28.026464047Z" level=info msg="Removed container 8e00b61f6b420c6a4ffb241cb84da91094d7dbb659894173f86d1088e6525936: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=96b0cbe5-c56f-4687-a067-88c15a0c46f5 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:56:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:32.039051021Z" level=warning msg="Failed to find container exit file for 19de5d33c85c1857e96690d0ec27e835432571da314603c531b5477c7871c9d1: timed out waiting for the condition" id=57af9877-1648-40ba-ad89-1cf2d77ec569 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 17:56:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:56:34.872188 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver 
namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 17:56:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:56:34.872296 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 17:56:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:56:36.217170 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 17:56:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:56:36.217478 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 17:56:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:56:36.217724 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: 
no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 17:56:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:56:36.217756 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 17:56:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:56:44.872072 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 17:56:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:56:44.872134 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 17:56:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:50.543114869Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(251c9586491a05f3e474b67aa8e2c9a47759c5a6218fa2bff08b4871d9c817fb): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): netplugin failed 
with no error message: signal: killed" id=f64bb5cf-38b8-4423-86e0-dfdf57c35c31 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:56:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:50.543392696Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 251c9586491a05f3e474b67aa8e2c9a47759c5a6218fa2bff08b4871d9c817fb" id=f64bb5cf-38b8-4423-86e0-dfdf57c35c31 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:56:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:50.543209277Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:aa2f6c1cfe2015e661750ad0d84d67c8d8f79e105601e0a761db6a83ba33b3f1 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS: Networks:[{Name:multus-cni-network Ifname:eth0}] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:56:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:50.543633039Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:56:50 ip-10-0-136-68 systemd[1]: run-utsns-0270666e\x2d47e7\x2d4909\x2d9340\x2d402c2c35b066.mount: Deactivated successfully. Feb 23 17:56:50 ip-10-0-136-68 systemd[1]: run-ipcns-0270666e\x2d47e7\x2d4909\x2d9340\x2d402c2c35b066.mount: Deactivated successfully. Feb 23 17:56:50 ip-10-0-136-68 systemd[1]: run-netns-0270666e\x2d47e7\x2d4909\x2d9340\x2d402c2c35b066.mount: Deactivated successfully. 
Feb 23 17:56:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:50.558353612Z" level=info msg="runSandbox: deleting pod ID 251c9586491a05f3e474b67aa8e2c9a47759c5a6218fa2bff08b4871d9c817fb from idIndex" id=f64bb5cf-38b8-4423-86e0-dfdf57c35c31 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:56:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:50.558394277Z" level=info msg="runSandbox: removing pod sandbox 251c9586491a05f3e474b67aa8e2c9a47759c5a6218fa2bff08b4871d9c817fb" id=f64bb5cf-38b8-4423-86e0-dfdf57c35c31 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:56:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:50.558424081Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 251c9586491a05f3e474b67aa8e2c9a47759c5a6218fa2bff08b4871d9c817fb" id=f64bb5cf-38b8-4423-86e0-dfdf57c35c31 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:56:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:50.558438902Z" level=info msg="runSandbox: unmounting shmPath for sandbox 251c9586491a05f3e474b67aa8e2c9a47759c5a6218fa2bff08b4871d9c817fb" id=f64bb5cf-38b8-4423-86e0-dfdf57c35c31 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:56:50 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-251c9586491a05f3e474b67aa8e2c9a47759c5a6218fa2bff08b4871d9c817fb-userdata-shm.mount: Deactivated successfully. 
Feb 23 17:56:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:50.563327062Z" level=info msg="runSandbox: removing pod sandbox from storage: 251c9586491a05f3e474b67aa8e2c9a47759c5a6218fa2bff08b4871d9c817fb" id=f64bb5cf-38b8-4423-86e0-dfdf57c35c31 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:56:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:50.565139710Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=f64bb5cf-38b8-4423-86e0-dfdf57c35c31 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:56:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:50.565171088Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=f64bb5cf-38b8-4423-86e0-dfdf57c35c31 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:56:50 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:56:50.565413 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(251c9586491a05f3e474b67aa8e2c9a47759c5a6218fa2bff08b4871d9c817fb): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 17:56:50 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:56:50.565481 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(251c9586491a05f3e474b67aa8e2c9a47759c5a6218fa2bff08b4871d9c817fb): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 17:56:50 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:56:50.565503 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(251c9586491a05f3e474b67aa8e2c9a47759c5a6218fa2bff08b4871d9c817fb): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 17:56:50 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:56:50.565563 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(251c9586491a05f3e474b67aa8e2c9a47759c5a6218fa2bff08b4871d9c817fb): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 17:56:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:56:54.873067 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 17:56:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:56:54.873132 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 17:56:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:56:56.292295 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 17:56:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:56:56.292560 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 
17:56:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:56:56.292790 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 17:56:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:56:56.292827 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 17:56:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:59.243961551Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(46d959821ca43faab73fa404b5aa0d797faadaeae4970e4fa9188ad87dbf6f65): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=55871494-7e74-462c-8dc1-8278b4773bec name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:56:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:59.244015995Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 
46d959821ca43faab73fa404b5aa0d797faadaeae4970e4fa9188ad87dbf6f65" id=55871494-7e74-462c-8dc1-8278b4773bec name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:56:59 ip-10-0-136-68 systemd[1]: run-utsns-29020238\x2d6653\x2d44dc\x2d8e1c\x2de0dfdd2d193d.mount: Deactivated successfully. Feb 23 17:56:59 ip-10-0-136-68 systemd[1]: run-ipcns-29020238\x2d6653\x2d44dc\x2d8e1c\x2de0dfdd2d193d.mount: Deactivated successfully. Feb 23 17:56:59 ip-10-0-136-68 systemd[1]: run-netns-29020238\x2d6653\x2d44dc\x2d8e1c\x2de0dfdd2d193d.mount: Deactivated successfully. Feb 23 17:56:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:59.267338118Z" level=info msg="runSandbox: deleting pod ID 46d959821ca43faab73fa404b5aa0d797faadaeae4970e4fa9188ad87dbf6f65 from idIndex" id=55871494-7e74-462c-8dc1-8278b4773bec name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:56:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:59.267380269Z" level=info msg="runSandbox: removing pod sandbox 46d959821ca43faab73fa404b5aa0d797faadaeae4970e4fa9188ad87dbf6f65" id=55871494-7e74-462c-8dc1-8278b4773bec name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:56:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:59.267410177Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 46d959821ca43faab73fa404b5aa0d797faadaeae4970e4fa9188ad87dbf6f65" id=55871494-7e74-462c-8dc1-8278b4773bec name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:56:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:59.267422467Z" level=info msg="runSandbox: unmounting shmPath for sandbox 46d959821ca43faab73fa404b5aa0d797faadaeae4970e4fa9188ad87dbf6f65" id=55871494-7e74-462c-8dc1-8278b4773bec name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:56:59 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-46d959821ca43faab73fa404b5aa0d797faadaeae4970e4fa9188ad87dbf6f65-userdata-shm.mount: Deactivated successfully. 
Feb 23 17:56:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:59.274317436Z" level=info msg="runSandbox: removing pod sandbox from storage: 46d959821ca43faab73fa404b5aa0d797faadaeae4970e4fa9188ad87dbf6f65" id=55871494-7e74-462c-8dc1-8278b4773bec name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:56:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:59.276043776Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=55871494-7e74-462c-8dc1-8278b4773bec name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:56:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:56:59.276076323Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=55871494-7e74-462c-8dc1-8278b4773bec name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:56:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:56:59.276310 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(46d959821ca43faab73fa404b5aa0d797faadaeae4970e4fa9188ad87dbf6f65): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 17:56:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:56:59.276362 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(46d959821ca43faab73fa404b5aa0d797faadaeae4970e4fa9188ad87dbf6f65): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 17:56:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:56:59.276387 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(46d959821ca43faab73fa404b5aa0d797faadaeae4970e4fa9188ad87dbf6f65): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 17:56:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:56:59.276444 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(46d959821ca43faab73fa404b5aa0d797faadaeae4970e4fa9188ad87dbf6f65): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 17:57:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:00.250271988Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(da1a44b469cb1425dd74a07158f998772c42f70c61e114c796da38fe772d6e7c): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c47a42f0-c225-4ae2-8973-d29701f2d24b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:57:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:00.250325835Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox da1a44b469cb1425dd74a07158f998772c42f70c61e114c796da38fe772d6e7c" id=c47a42f0-c225-4ae2-8973-d29701f2d24b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:57:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:00.252504505Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(19f341e17755d38f7a6594d80a859e350979eb3457a1a719def0ac82a3bad5b4): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out 
waiting for the condition" id=8faabc0d-f1fb-4f94-9fb7-2c8251f1bb8b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:57:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:00.252556763Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 19f341e17755d38f7a6594d80a859e350979eb3457a1a719def0ac82a3bad5b4" id=8faabc0d-f1fb-4f94-9fb7-2c8251f1bb8b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:57:00 ip-10-0-136-68 systemd[1]: run-utsns-9dd0a535\x2dde6d\x2d41c4\x2d891d\x2d0b824b43cbb8.mount: Deactivated successfully. Feb 23 17:57:00 ip-10-0-136-68 systemd[1]: run-utsns-a441d524\x2db242\x2d43d1\x2da893\x2d5d03ecc422cc.mount: Deactivated successfully. Feb 23 17:57:00 ip-10-0-136-68 systemd[1]: run-ipcns-9dd0a535\x2dde6d\x2d41c4\x2d891d\x2d0b824b43cbb8.mount: Deactivated successfully. Feb 23 17:57:00 ip-10-0-136-68 systemd[1]: run-ipcns-a441d524\x2db242\x2d43d1\x2da893\x2d5d03ecc422cc.mount: Deactivated successfully. Feb 23 17:57:00 ip-10-0-136-68 systemd[1]: run-netns-a441d524\x2db242\x2d43d1\x2da893\x2d5d03ecc422cc.mount: Deactivated successfully. Feb 23 17:57:00 ip-10-0-136-68 systemd[1]: run-netns-9dd0a535\x2dde6d\x2d41c4\x2d891d\x2d0b824b43cbb8.mount: Deactivated successfully. 
Feb 23 17:57:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:00.272333041Z" level=info msg="runSandbox: deleting pod ID da1a44b469cb1425dd74a07158f998772c42f70c61e114c796da38fe772d6e7c from idIndex" id=c47a42f0-c225-4ae2-8973-d29701f2d24b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:57:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:00.272375598Z" level=info msg="runSandbox: removing pod sandbox da1a44b469cb1425dd74a07158f998772c42f70c61e114c796da38fe772d6e7c" id=c47a42f0-c225-4ae2-8973-d29701f2d24b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:57:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:00.272410426Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox da1a44b469cb1425dd74a07158f998772c42f70c61e114c796da38fe772d6e7c" id=c47a42f0-c225-4ae2-8973-d29701f2d24b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:57:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:00.272425063Z" level=info msg="runSandbox: unmounting shmPath for sandbox da1a44b469cb1425dd74a07158f998772c42f70c61e114c796da38fe772d6e7c" id=c47a42f0-c225-4ae2-8973-d29701f2d24b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:57:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:00.272333128Z" level=info msg="runSandbox: deleting pod ID 19f341e17755d38f7a6594d80a859e350979eb3457a1a719def0ac82a3bad5b4 from idIndex" id=8faabc0d-f1fb-4f94-9fb7-2c8251f1bb8b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:57:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:00.272475523Z" level=info msg="runSandbox: removing pod sandbox 19f341e17755d38f7a6594d80a859e350979eb3457a1a719def0ac82a3bad5b4" id=8faabc0d-f1fb-4f94-9fb7-2c8251f1bb8b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:57:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:00.272492732Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 19f341e17755d38f7a6594d80a859e350979eb3457a1a719def0ac82a3bad5b4" 
id=8faabc0d-f1fb-4f94-9fb7-2c8251f1bb8b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:57:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:00.272504835Z" level=info msg="runSandbox: unmounting shmPath for sandbox 19f341e17755d38f7a6594d80a859e350979eb3457a1a719def0ac82a3bad5b4" id=8faabc0d-f1fb-4f94-9fb7-2c8251f1bb8b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:57:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:00.279294709Z" level=info msg="runSandbox: removing pod sandbox from storage: 19f341e17755d38f7a6594d80a859e350979eb3457a1a719def0ac82a3bad5b4" id=8faabc0d-f1fb-4f94-9fb7-2c8251f1bb8b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:57:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:00.279313824Z" level=info msg="runSandbox: removing pod sandbox from storage: da1a44b469cb1425dd74a07158f998772c42f70c61e114c796da38fe772d6e7c" id=c47a42f0-c225-4ae2-8973-d29701f2d24b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:57:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:00.281016138Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=8faabc0d-f1fb-4f94-9fb7-2c8251f1bb8b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:57:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:00.281045648Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=8faabc0d-f1fb-4f94-9fb7-2c8251f1bb8b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:57:00 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:57:00.281354 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(19f341e17755d38f7a6594d80a859e350979eb3457a1a719def0ac82a3bad5b4): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Feb 23 17:57:00 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:57:00.281473 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(19f341e17755d38f7a6594d80a859e350979eb3457a1a719def0ac82a3bad5b4): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 17:57:00 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:57:00.281511 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(19f341e17755d38f7a6594d80a859e350979eb3457a1a719def0ac82a3bad5b4): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 17:57:00 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:57:00.281590 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(19f341e17755d38f7a6594d80a859e350979eb3457a1a719def0ac82a3bad5b4): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" 
name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 17:57:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:00.282678930Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=c47a42f0-c225-4ae2-8973-d29701f2d24b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:57:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:00.282710575Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=c47a42f0-c225-4ae2-8973-d29701f2d24b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:57:00 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:57:00.282863 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(da1a44b469cb1425dd74a07158f998772c42f70c61e114c796da38fe772d6e7c): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 17:57:00 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:57:00.282910 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(da1a44b469cb1425dd74a07158f998772c42f70c61e114c796da38fe772d6e7c): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 17:57:00 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:57:00.282930 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(da1a44b469cb1425dd74a07158f998772c42f70c61e114c796da38fe772d6e7c): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 17:57:00 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:57:00.282992 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(da1a44b469cb1425dd74a07158f998772c42f70c61e114c796da38fe772d6e7c): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 17:57:01 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-19f341e17755d38f7a6594d80a859e350979eb3457a1a719def0ac82a3bad5b4-userdata-shm.mount: Deactivated successfully. Feb 23 17:57:01 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-da1a44b469cb1425dd74a07158f998772c42f70c61e114c796da38fe772d6e7c-userdata-shm.mount: Deactivated successfully. 
Feb 23 17:57:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:02.245377570Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(614ec55053185ab2e413e6e1073e166a13880a695936f1d50a484f1d59837745): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=656f4820-98e7-44c0-aa43-e6afeabb51f4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:57:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:02.245460986Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 614ec55053185ab2e413e6e1073e166a13880a695936f1d50a484f1d59837745" id=656f4820-98e7-44c0-aa43-e6afeabb51f4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:57:02 ip-10-0-136-68 systemd[1]: run-utsns-62c3e258\x2d7af1\x2d4160\x2d8190\x2d2441d9d38c06.mount: Deactivated successfully. Feb 23 17:57:02 ip-10-0-136-68 systemd[1]: run-ipcns-62c3e258\x2d7af1\x2d4160\x2d8190\x2d2441d9d38c06.mount: Deactivated successfully. Feb 23 17:57:02 ip-10-0-136-68 systemd[1]: run-netns-62c3e258\x2d7af1\x2d4160\x2d8190\x2d2441d9d38c06.mount: Deactivated successfully. 
Feb 23 17:57:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:02.273347560Z" level=info msg="runSandbox: deleting pod ID 614ec55053185ab2e413e6e1073e166a13880a695936f1d50a484f1d59837745 from idIndex" id=656f4820-98e7-44c0-aa43-e6afeabb51f4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:57:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:02.273397944Z" level=info msg="runSandbox: removing pod sandbox 614ec55053185ab2e413e6e1073e166a13880a695936f1d50a484f1d59837745" id=656f4820-98e7-44c0-aa43-e6afeabb51f4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:57:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:02.273454065Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 614ec55053185ab2e413e6e1073e166a13880a695936f1d50a484f1d59837745" id=656f4820-98e7-44c0-aa43-e6afeabb51f4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:57:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:02.273473542Z" level=info msg="runSandbox: unmounting shmPath for sandbox 614ec55053185ab2e413e6e1073e166a13880a695936f1d50a484f1d59837745" id=656f4820-98e7-44c0-aa43-e6afeabb51f4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:57:02 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-614ec55053185ab2e413e6e1073e166a13880a695936f1d50a484f1d59837745-userdata-shm.mount: Deactivated successfully. 
Feb 23 17:57:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:02.280334681Z" level=info msg="runSandbox: removing pod sandbox from storage: 614ec55053185ab2e413e6e1073e166a13880a695936f1d50a484f1d59837745" id=656f4820-98e7-44c0-aa43-e6afeabb51f4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:57:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:02.281977514Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=656f4820-98e7-44c0-aa43-e6afeabb51f4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:57:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:02.282014302Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=656f4820-98e7-44c0-aa43-e6afeabb51f4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:57:02 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:57:02.282293 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(614ec55053185ab2e413e6e1073e166a13880a695936f1d50a484f1d59837745): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 17:57:02 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:57:02.282359 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(614ec55053185ab2e413e6e1073e166a13880a695936f1d50a484f1d59837745): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 17:57:02 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:57:02.282384 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(614ec55053185ab2e413e6e1073e166a13880a695936f1d50a484f1d59837745): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 17:57:02 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:57:02.282436 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(614ec55053185ab2e413e6e1073e166a13880a695936f1d50a484f1d59837745): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 17:57:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:57:04.872837 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 17:57:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:57:04.872898 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 17:57:05 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:57:05.216608 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 17:57:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:05.216945196Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=c822d2da-ebbf-4620-8f45-dc8dcb5ef35b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:57:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:05.217006412Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:57:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:11.861457668Z" level=error msg="Failed to cleanup (probably retrying): failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(aa2f6c1cfe2015e661750ad0d84d67c8d8f79e105601e0a761db6a83ba33b3f1): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): netplugin failed with no error message: signal: killed" Feb 23 17:57:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:11.862493482Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:ae3a2d57a3db0cb7e29896c2dc01b9a13cecaca02ab3c492f727fff3a20da215 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/3fcc60b0-e110-499d-bf5a-ed441abf9fe1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:57:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:11.862521605Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:57:12 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:57:12.216727 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 17:57:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:12.217162431Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=a500fdb3-9b91-415d-99c1-c014e973b5ad name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:57:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:12.217226780Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:57:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:12.222746417Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:bd0faacfcdde4a9a8db9a11dfa9319ed801aa72484cf5dd0e8864665ace7381a UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/5c099f5b-dde3-4876-a25c-83f6f9fe32f6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:57:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:12.222772330Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:57:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:57:14.217177 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 17:57:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:57:14.217230 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 17:57:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:57:14.217330 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 17:57:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:14.217775760Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=d9394fb3-6d41-4c03-ae99-c3111791ce19 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:57:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:14.218005204Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:57:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:14.217889822Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=1ce60bb6-9b47-4876-a734-71b946642c6c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:57:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:14.218118642Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:57:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:14.217933144Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=0832b093-a4aa-4836-9ebe-588bc94031dd name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:57:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:14.218312174Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 17:57:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:14.227226220Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:a7aa96175cdd19219fda6e3e7c07ee39eb5624518c6a45ab8771022f27865c7c UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/4ba9b073-4ef9-4219-8526-0d6732e42f4f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:57:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:14.227310608Z" level=info msg="Adding pod 
openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:57:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:14.227278209Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:38f2494c0de6ead08eb856ce8feb96a0f96d066163d955cfa4f8899bf5af2b97 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/3052fef8-5c07-4001-85ca-c15dd2d71f43 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:57:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:14.227585516Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:57:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:14.227616063Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:08ed89161eadb5298b8b0cb0943fef27f0afda8cffcfee049fd36ee3a1d4d11b UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/72a5e461-4c92-48b0-8260-31ea857f15d7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:57:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:14.228007399Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:57:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:57:14.872543 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 17:57:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:57:14.872608 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" 
podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 17:57:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:57:14.872637 2199 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 17:57:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:57:14.873146 2199 kuberuntime_manager.go:659] "Message for Container of pod" containerName="csi-driver" containerStatusID={Type:cri-o ID:97f95a5f7e54231d4a0b3721b7233ab0eb8d5808258dad6c3b6c8148bbb6a992} pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" containerMessage="Container csi-driver failed liveness probe, will be restarted" Feb 23 17:57:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:57:14.873337 2199 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" containerID="cri-o://97f95a5f7e54231d4a0b3721b7233ab0eb8d5808258dad6c3b6c8148bbb6a992" gracePeriod=30 Feb 23 17:57:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:14.873603094Z" level=info msg="Stopping container: 97f95a5f7e54231d4a0b3721b7233ab0eb8d5808258dad6c3b6c8148bbb6a992 (timeout: 30s)" id=83657835-c8e9-4ec2-812b-7044399f99e9 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:57:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:17.557685118Z" level=info msg="cleanup sandbox network" Feb 23 17:57:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:18.632952254Z" level=warning msg="Failed to find container exit file for 97f95a5f7e54231d4a0b3721b7233ab0eb8d5808258dad6c3b6c8148bbb6a992: timed out waiting for the condition" id=83657835-c8e9-4ec2-812b-7044399f99e9 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:57:18 ip-10-0-136-68 systemd[1]: 
var-lib-containers-storage-overlay-e1359c4a2c400b984c74bf24c69d45d0c1c679374b51efea9ead75713dcc3dc6-merged.mount: Deactivated successfully. Feb 23 17:57:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:22.413968160Z" level=warning msg="Failed to find container exit file for 97f95a5f7e54231d4a0b3721b7233ab0eb8d5808258dad6c3b6c8148bbb6a992: timed out waiting for the condition" id=83657835-c8e9-4ec2-812b-7044399f99e9 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:57:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:22.417209187Z" level=info msg="Stopped container 97f95a5f7e54231d4a0b3721b7233ab0eb8d5808258dad6c3b6c8148bbb6a992: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=83657835-c8e9-4ec2-812b-7044399f99e9 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:57:22 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:57:22.417706 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 17:57:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:22.875048083Z" level=warning msg="Failed to find container exit file for 97f95a5f7e54231d4a0b3721b7233ab0eb8d5808258dad6c3b6c8148bbb6a992: timed out waiting for the condition" id=128a1157-52b9-4b3f-aac9-76ca8bf32a0d name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 17:57:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:57:26.292294 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container 
process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 17:57:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:57:26.292606 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 17:57:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:57:26.292827 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 17:57:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:57:26.292877 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 17:57:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:26.626447018Z" level=warning msg="Failed to find container exit file for 19de5d33c85c1857e96690d0ec27e835432571da314603c531b5477c7871c9d1: timed out waiting for the condition" 
id=b62df5f2-1918-4741-9feb-16d97985f579 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 17:57:26 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:57:26.627362 2199 generic.go:332] "Generic (PLEG): container finished" podID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerID="97f95a5f7e54231d4a0b3721b7233ab0eb8d5808258dad6c3b6c8148bbb6a992" exitCode=-1 Feb 23 17:57:26 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:57:26.627401 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerDied Data:97f95a5f7e54231d4a0b3721b7233ab0eb8d5808258dad6c3b6c8148bbb6a992} Feb 23 17:57:26 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:57:26.627428 2199 scope.go:115] "RemoveContainer" containerID="19de5d33c85c1857e96690d0ec27e835432571da314603c531b5477c7871c9d1" Feb 23 17:57:27 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:57:27.629626 2199 scope.go:115] "RemoveContainer" containerID="97f95a5f7e54231d4a0b3721b7233ab0eb8d5808258dad6c3b6c8148bbb6a992" Feb 23 17:57:27 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:57:27.629991 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 17:57:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:30.376371378Z" level=warning msg="Failed to find container exit file for 19de5d33c85c1857e96690d0ec27e835432571da314603c531b5477c7871c9d1: timed out waiting for the condition" id=cef2e6d2-0754-4612-930e-33c84e9a94a7 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 17:57:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:34.126061837Z" level=warning msg="Failed to find container 
exit file for 19de5d33c85c1857e96690d0ec27e835432571da314603c531b5477c7871c9d1: timed out waiting for the condition" id=7004d679-72fe-4280-983b-592e3a60b72c name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 17:57:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:34.126614547Z" level=info msg="Removing container: 19de5d33c85c1857e96690d0ec27e835432571da314603c531b5477c7871c9d1" id=f3907c18-a966-4c8e-8f34-2b3375d2ef74 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:57:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:37.889709249Z" level=warning msg="Failed to find container exit file for 19de5d33c85c1857e96690d0ec27e835432571da314603c531b5477c7871c9d1: timed out waiting for the condition" id=f3907c18-a966-4c8e-8f34-2b3375d2ef74 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:57:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:37.914238974Z" level=info msg="Removed container 19de5d33c85c1857e96690d0ec27e835432571da314603c531b5477c7871c9d1: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=f3907c18-a966-4c8e-8f34-2b3375d2ef74 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:57:40 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:57:40.217330 2199 scope.go:115] "RemoveContainer" containerID="97f95a5f7e54231d4a0b3721b7233ab0eb8d5808258dad6c3b6c8148bbb6a992" Feb 23 17:57:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:57:40.218195 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 17:57:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:42.396061761Z" level=warning msg="Failed to find container exit file for 
97f95a5f7e54231d4a0b3721b7233ab0eb8d5808258dad6c3b6c8148bbb6a992: timed out waiting for the condition" id=d7c5c451-fe1a-4e64-92a3-f28c594e1955 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 17:57:53 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:57:53.216910 2199 scope.go:115] "RemoveContainer" containerID="97f95a5f7e54231d4a0b3721b7233ab0eb8d5808258dad6c3b6c8148bbb6a992"
Feb 23 17:57:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:57:53.217346 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 17:57:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:57:56.292612 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 17:57:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:57:56.292866 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 17:57:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:57:56.293139 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 17:57:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:57:56.293179 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 17:57:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:56.873733241Z" level=info msg="NetworkStart: stopping network for sandbox ae3a2d57a3db0cb7e29896c2dc01b9a13cecaca02ab3c492f727fff3a20da215" id=c822d2da-ebbf-4620-8f45-dc8dcb5ef35b name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:57:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:56.873825609Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:aa2f6c1cfe2015e661750ad0d84d67c8d8f79e105601e0a761db6a83ba33b3f1 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS: Networks:[{Name:multus-cni-network Ifname:eth0}] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 17:57:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:56.874006104Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 17:57:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:57.234592253Z" level=info msg="NetworkStart: stopping network for sandbox bd0faacfcdde4a9a8db9a11dfa9319ed801aa72484cf5dd0e8864665ace7381a" id=a500fdb3-9b91-415d-99c1-c014e973b5ad name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:57:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:57.234721815Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:bd0faacfcdde4a9a8db9a11dfa9319ed801aa72484cf5dd0e8864665ace7381a UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/5c099f5b-dde3-4876-a25c-83f6f9fe32f6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 17:57:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:57.234749303Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 17:57:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:57.234758231Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 17:57:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:57.234765025Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 17:57:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:59.242739279Z" level=info msg="NetworkStart: stopping network for sandbox 38f2494c0de6ead08eb856ce8feb96a0f96d066163d955cfa4f8899bf5af2b97" id=0832b093-a4aa-4836-9ebe-588bc94031dd name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:57:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:59.242888209Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:38f2494c0de6ead08eb856ce8feb96a0f96d066163d955cfa4f8899bf5af2b97 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/3052fef8-5c07-4001-85ca-c15dd2d71f43 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 17:57:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:59.242923558Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 17:57:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:59.242934770Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 17:57:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:59.242944258Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 17:57:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:59.243140180Z" level=info msg="NetworkStart: stopping network for sandbox 08ed89161eadb5298b8b0cb0943fef27f0afda8cffcfee049fd36ee3a1d4d11b" id=1ce60bb6-9b47-4876-a734-71b946642c6c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:57:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:59.243226040Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:08ed89161eadb5298b8b0cb0943fef27f0afda8cffcfee049fd36ee3a1d4d11b UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/72a5e461-4c92-48b0-8260-31ea857f15d7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 17:57:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:59.243277355Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 17:57:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:59.243287943Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 17:57:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:59.243297389Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 17:57:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:59.244803566Z" level=info msg="NetworkStart: stopping network for sandbox a7aa96175cdd19219fda6e3e7c07ee39eb5624518c6a45ab8771022f27865c7c" id=d9394fb3-6d41-4c03-ae99-c3111791ce19 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:57:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:59.244904316Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:a7aa96175cdd19219fda6e3e7c07ee39eb5624518c6a45ab8771022f27865c7c UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/4ba9b073-4ef9-4219-8526-0d6732e42f4f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 17:57:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:59.245050315Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 17:57:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:59.245072263Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 17:57:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:57:59.245082519Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 17:58:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:58:04.217067 2199 scope.go:115] "RemoveContainer" containerID="97f95a5f7e54231d4a0b3721b7233ab0eb8d5808258dad6c3b6c8148bbb6a992"
Feb 23 17:58:04 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:58:04.218049 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 17:58:05 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:58:05.217496 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 17:58:05 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:58:05.218100 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 17:58:05 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:58:05.218396 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 17:58:05 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:58:05.218432 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 17:58:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:17.559398389Z" level=error msg="Failed to cleanup (probably retrying): failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(aa2f6c1cfe2015e661750ad0d84d67c8d8f79e105601e0a761db6a83ba33b3f1): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): netplugin failed with no error message: signal: killed"
Feb 23 17:58:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:17.559464796Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:ae3a2d57a3db0cb7e29896c2dc01b9a13cecaca02ab3c492f727fff3a20da215 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/3fcc60b0-e110-499d-bf5a-ed441abf9fe1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 17:58:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:17.559527628Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 17:58:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:17.559536154Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 17:58:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:17.559543500Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 17:58:18 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:58:18.216763 2199 scope.go:115] "RemoveContainer" containerID="97f95a5f7e54231d4a0b3721b7233ab0eb8d5808258dad6c3b6c8148bbb6a992"
Feb 23 17:58:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:58:18.217394 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 17:58:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:26.102582363Z" level=info msg="cleanup sandbox network"
Feb 23 17:58:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:58:26.292297 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 17:58:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:58:26.292503 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 17:58:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:58:26.292773 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 17:58:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:58:26.292811 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 17:58:32 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:58:32.216987 2199 scope.go:115] "RemoveContainer" containerID="97f95a5f7e54231d4a0b3721b7233ab0eb8d5808258dad6c3b6c8148bbb6a992"
Feb 23 17:58:32 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:58:32.217437 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 17:58:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:42.243726290Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(bd0faacfcdde4a9a8db9a11dfa9319ed801aa72484cf5dd0e8864665ace7381a): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a500fdb3-9b91-415d-99c1-c014e973b5ad name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:58:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:42.243768613Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox bd0faacfcdde4a9a8db9a11dfa9319ed801aa72484cf5dd0e8864665ace7381a" id=a500fdb3-9b91-415d-99c1-c014e973b5ad name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:58:42 ip-10-0-136-68 systemd[1]: run-utsns-5c099f5b\x2ddde3\x2d4876\x2da25c\x2d83f6f9fe32f6.mount: Deactivated successfully.
Feb 23 17:58:42 ip-10-0-136-68 systemd[1]: run-ipcns-5c099f5b\x2ddde3\x2d4876\x2da25c\x2d83f6f9fe32f6.mount: Deactivated successfully.
Feb 23 17:58:42 ip-10-0-136-68 systemd[1]: run-netns-5c099f5b\x2ddde3\x2d4876\x2da25c\x2d83f6f9fe32f6.mount: Deactivated successfully.
Feb 23 17:58:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:42.272317855Z" level=info msg="runSandbox: deleting pod ID bd0faacfcdde4a9a8db9a11dfa9319ed801aa72484cf5dd0e8864665ace7381a from idIndex" id=a500fdb3-9b91-415d-99c1-c014e973b5ad name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:58:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:42.272361538Z" level=info msg="runSandbox: removing pod sandbox bd0faacfcdde4a9a8db9a11dfa9319ed801aa72484cf5dd0e8864665ace7381a" id=a500fdb3-9b91-415d-99c1-c014e973b5ad name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:58:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:42.272404762Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox bd0faacfcdde4a9a8db9a11dfa9319ed801aa72484cf5dd0e8864665ace7381a" id=a500fdb3-9b91-415d-99c1-c014e973b5ad name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:58:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:42.272426107Z" level=info msg="runSandbox: unmounting shmPath for sandbox bd0faacfcdde4a9a8db9a11dfa9319ed801aa72484cf5dd0e8864665ace7381a" id=a500fdb3-9b91-415d-99c1-c014e973b5ad name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:58:42 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-bd0faacfcdde4a9a8db9a11dfa9319ed801aa72484cf5dd0e8864665ace7381a-userdata-shm.mount: Deactivated successfully.
Feb 23 17:58:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:42.280294531Z" level=info msg="runSandbox: removing pod sandbox from storage: bd0faacfcdde4a9a8db9a11dfa9319ed801aa72484cf5dd0e8864665ace7381a" id=a500fdb3-9b91-415d-99c1-c014e973b5ad name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:58:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:42.281896022Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=a500fdb3-9b91-415d-99c1-c014e973b5ad name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:58:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:42.281928045Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=a500fdb3-9b91-415d-99c1-c014e973b5ad name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:58:42 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:58:42.282110 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(bd0faacfcdde4a9a8db9a11dfa9319ed801aa72484cf5dd0e8864665ace7381a): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 17:58:42 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:58:42.282164 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(bd0faacfcdde4a9a8db9a11dfa9319ed801aa72484cf5dd0e8864665ace7381a): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk"
Feb 23 17:58:42 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:58:42.282188 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(bd0faacfcdde4a9a8db9a11dfa9319ed801aa72484cf5dd0e8864665ace7381a): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk"
Feb 23 17:58:42 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:58:42.282311 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(bd0faacfcdde4a9a8db9a11dfa9319ed801aa72484cf5dd0e8864665ace7381a): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687
Feb 23 17:58:43 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:58:43.216889 2199 scope.go:115] "RemoveContainer" containerID="97f95a5f7e54231d4a0b3721b7233ab0eb8d5808258dad6c3b6c8148bbb6a992"
Feb 23 17:58:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:43.217551408Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=52548cd7-451e-431f-98b1-5de8e6cd8292 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:58:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:43.217739411Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=52548cd7-451e-431f-98b1-5de8e6cd8292 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:58:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:43.218353961Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=8cee7d66-c33b-4ae2-98af-b0ef7ef3b691 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:58:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:43.218522868Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=8cee7d66-c33b-4ae2-98af-b0ef7ef3b691 name=/runtime.v1.ImageService/ImageStatus
Feb 23 17:58:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:43.219181853Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=91674fa1-8319-4ebd-8a58-583c173d51f0 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:58:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:43.219291438Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:58:43 ip-10-0-136-68 systemd[1]: Started crio-conmon-ba010c6ed03924d8b7cc43a54161474cdb59749968420afd1b8c8f2888c1ba3a.scope.
Feb 23 17:58:43 ip-10-0-136-68 systemd[1]: Started libcontainer container ba010c6ed03924d8b7cc43a54161474cdb59749968420afd1b8c8f2888c1ba3a.
Feb 23 17:58:43 ip-10-0-136-68 conmon[5058]: conmon ba010c6ed03924d8b7cc : Failed to write to cgroup.event_control Operation not supported
Feb 23 17:58:43 ip-10-0-136-68 systemd[1]: crio-conmon-ba010c6ed03924d8b7cc43a54161474cdb59749968420afd1b8c8f2888c1ba3a.scope: Deactivated successfully.
Feb 23 17:58:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:43.356522565Z" level=info msg="Created container ba010c6ed03924d8b7cc43a54161474cdb59749968420afd1b8c8f2888c1ba3a: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=91674fa1-8319-4ebd-8a58-583c173d51f0 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 17:58:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:43.356958512Z" level=info msg="Starting container: ba010c6ed03924d8b7cc43a54161474cdb59749968420afd1b8c8f2888c1ba3a" id=2065444a-32ab-4572-bb44-b5a2c4a06350 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 17:58:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:43.363895405Z" level=info msg="Started container" PID=5070 containerID=ba010c6ed03924d8b7cc43a54161474cdb59749968420afd1b8c8f2888c1ba3a description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver id=2065444a-32ab-4572-bb44-b5a2c4a06350 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4ac357af36e7dd4792f93249425e93fd84aed45840e9fbb3d0ae6c0af964798
Feb 23 17:58:43 ip-10-0-136-68 systemd[1]: crio-ba010c6ed03924d8b7cc43a54161474cdb59749968420afd1b8c8f2888c1ba3a.scope: Deactivated successfully.
Feb 23 17:58:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:44.253814432Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(38f2494c0de6ead08eb856ce8feb96a0f96d066163d955cfa4f8899bf5af2b97): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=0832b093-a4aa-4836-9ebe-588bc94031dd name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:58:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:44.253878202Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 38f2494c0de6ead08eb856ce8feb96a0f96d066163d955cfa4f8899bf5af2b97" id=0832b093-a4aa-4836-9ebe-588bc94031dd name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:58:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:44.255709540Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(08ed89161eadb5298b8b0cb0943fef27f0afda8cffcfee049fd36ee3a1d4d11b): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=1ce60bb6-9b47-4876-a734-71b946642c6c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:58:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:44.255759300Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 08ed89161eadb5298b8b0cb0943fef27f0afda8cffcfee049fd36ee3a1d4d11b" id=1ce60bb6-9b47-4876-a734-71b946642c6c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:58:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:44.255918827Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(a7aa96175cdd19219fda6e3e7c07ee39eb5624518c6a45ab8771022f27865c7c): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d9394fb3-6d41-4c03-ae99-c3111791ce19 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:58:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:44.255946034Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a7aa96175cdd19219fda6e3e7c07ee39eb5624518c6a45ab8771022f27865c7c" id=d9394fb3-6d41-4c03-ae99-c3111791ce19 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:58:44 ip-10-0-136-68 systemd[1]: run-utsns-3052fef8\x2d5c07\x2d4001\x2d85ca\x2dc15dd2d71f43.mount: Deactivated successfully.
Feb 23 17:58:44 ip-10-0-136-68 systemd[1]: run-utsns-72a5e461\x2d4c92\x2d48b0\x2d8260\x2d31ea857f15d7.mount: Deactivated successfully.
Feb 23 17:58:44 ip-10-0-136-68 systemd[1]: run-utsns-4ba9b073\x2d4ef9\x2d4219\x2d8526\x2d0d6732e42f4f.mount: Deactivated successfully.
Feb 23 17:58:44 ip-10-0-136-68 systemd[1]: run-ipcns-3052fef8\x2d5c07\x2d4001\x2d85ca\x2dc15dd2d71f43.mount: Deactivated successfully.
Feb 23 17:58:44 ip-10-0-136-68 systemd[1]: run-ipcns-4ba9b073\x2d4ef9\x2d4219\x2d8526\x2d0d6732e42f4f.mount: Deactivated successfully.
Feb 23 17:58:44 ip-10-0-136-68 systemd[1]: run-ipcns-72a5e461\x2d4c92\x2d48b0\x2d8260\x2d31ea857f15d7.mount: Deactivated successfully.
Feb 23 17:58:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:44.270357655Z" level=info msg="runSandbox: deleting pod ID a7aa96175cdd19219fda6e3e7c07ee39eb5624518c6a45ab8771022f27865c7c from idIndex" id=d9394fb3-6d41-4c03-ae99-c3111791ce19 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:58:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:44.270397946Z" level=info msg="runSandbox: removing pod sandbox a7aa96175cdd19219fda6e3e7c07ee39eb5624518c6a45ab8771022f27865c7c" id=d9394fb3-6d41-4c03-ae99-c3111791ce19 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:58:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:44.270442540Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a7aa96175cdd19219fda6e3e7c07ee39eb5624518c6a45ab8771022f27865c7c" id=d9394fb3-6d41-4c03-ae99-c3111791ce19 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:58:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:44.270463653Z" level=info msg="runSandbox: unmounting shmPath for sandbox a7aa96175cdd19219fda6e3e7c07ee39eb5624518c6a45ab8771022f27865c7c" id=d9394fb3-6d41-4c03-ae99-c3111791ce19 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:58:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:44.270368192Z" level=info msg="runSandbox: deleting pod ID 38f2494c0de6ead08eb856ce8feb96a0f96d066163d955cfa4f8899bf5af2b97 from idIndex" id=0832b093-a4aa-4836-9ebe-588bc94031dd name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:58:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:44.270521795Z" level=info msg="runSandbox: removing pod sandbox 38f2494c0de6ead08eb856ce8feb96a0f96d066163d955cfa4f8899bf5af2b97" id=0832b093-a4aa-4836-9ebe-588bc94031dd name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:58:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:44.270546628Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 38f2494c0de6ead08eb856ce8feb96a0f96d066163d955cfa4f8899bf5af2b97" id=0832b093-a4aa-4836-9ebe-588bc94031dd name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:58:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:44.270560484Z" level=info msg="runSandbox: unmounting shmPath for sandbox 38f2494c0de6ead08eb856ce8feb96a0f96d066163d955cfa4f8899bf5af2b97" id=0832b093-a4aa-4836-9ebe-588bc94031dd name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:58:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:44.272294190Z" level=info msg="runSandbox: deleting pod ID 08ed89161eadb5298b8b0cb0943fef27f0afda8cffcfee049fd36ee3a1d4d11b from idIndex" id=1ce60bb6-9b47-4876-a734-71b946642c6c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:58:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:44.272320323Z" level=info msg="runSandbox: removing pod sandbox 08ed89161eadb5298b8b0cb0943fef27f0afda8cffcfee049fd36ee3a1d4d11b" id=1ce60bb6-9b47-4876-a734-71b946642c6c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:58:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:44.272347891Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 08ed89161eadb5298b8b0cb0943fef27f0afda8cffcfee049fd36ee3a1d4d11b" id=1ce60bb6-9b47-4876-a734-71b946642c6c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:58:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:44.272371883Z" level=info msg="runSandbox: unmounting shmPath for sandbox 08ed89161eadb5298b8b0cb0943fef27f0afda8cffcfee049fd36ee3a1d4d11b" id=1ce60bb6-9b47-4876-a734-71b946642c6c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:58:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:44.275302091Z" level=info msg="runSandbox: removing pod sandbox from storage: a7aa96175cdd19219fda6e3e7c07ee39eb5624518c6a45ab8771022f27865c7c" id=d9394fb3-6d41-4c03-ae99-c3111791ce19 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:58:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:44.276289533Z" level=info msg="runSandbox: removing pod sandbox from storage: 38f2494c0de6ead08eb856ce8feb96a0f96d066163d955cfa4f8899bf5af2b97" id=0832b093-a4aa-4836-9ebe-588bc94031dd name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:58:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:44.276943492Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=d9394fb3-6d41-4c03-ae99-c3111791ce19 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:58:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:44.276973356Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=d9394fb3-6d41-4c03-ae99-c3111791ce19 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:58:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:58:44.277185 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(a7aa96175cdd19219fda6e3e7c07ee39eb5624518c6a45ab8771022f27865c7c): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 17:58:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:58:44.277469 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(a7aa96175cdd19219fda6e3e7c07ee39eb5624518c6a45ab8771022f27865c7c): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 17:58:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:58:44.277505 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(a7aa96175cdd19219fda6e3e7c07ee39eb5624518c6a45ab8771022f27865c7c): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 17:58:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:58:44.277618 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(a7aa96175cdd19219fda6e3e7c07ee39eb5624518c6a45ab8771022f27865c7c): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 17:58:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:44.278304071Z" level=info msg="runSandbox: removing pod sandbox from storage: 08ed89161eadb5298b8b0cb0943fef27f0afda8cffcfee049fd36ee3a1d4d11b" id=1ce60bb6-9b47-4876-a734-71b946642c6c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:58:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:44.278678774Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=0832b093-a4aa-4836-9ebe-588bc94031dd name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:58:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:44.278706370Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=0832b093-a4aa-4836-9ebe-588bc94031dd name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:58:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:58:44.278841 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(38f2494c0de6ead08eb856ce8feb96a0f96d066163d955cfa4f8899bf5af2b97): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 17:58:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:58:44.278905 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(38f2494c0de6ead08eb856ce8feb96a0f96d066163d955cfa4f8899bf5af2b97): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 17:58:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:58:44.278940 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(38f2494c0de6ead08eb856ce8feb96a0f96d066163d955cfa4f8899bf5af2b97): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 17:58:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:58:44.279009 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(38f2494c0de6ead08eb856ce8feb96a0f96d066163d955cfa4f8899bf5af2b97): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 17:58:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:44.280093123Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=1ce60bb6-9b47-4876-a734-71b946642c6c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:58:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:44.280118644Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=1ce60bb6-9b47-4876-a734-71b946642c6c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:58:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:58:44.280321 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(08ed89161eadb5298b8b0cb0943fef27f0afda8cffcfee049fd36ee3a1d4d11b): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 17:58:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:58:44.280374 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(08ed89161eadb5298b8b0cb0943fef27f0afda8cffcfee049fd36ee3a1d4d11b): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 17:58:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:58:44.280405 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(08ed89161eadb5298b8b0cb0943fef27f0afda8cffcfee049fd36ee3a1d4d11b): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 17:58:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:58:44.280469 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(08ed89161eadb5298b8b0cb0943fef27f0afda8cffcfee049fd36ee3a1d4d11b): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 17:58:44 ip-10-0-136-68 systemd[1]: run-netns-72a5e461\x2d4c92\x2d48b0\x2d8260\x2d31ea857f15d7.mount: Deactivated successfully. Feb 23 17:58:44 ip-10-0-136-68 systemd[1]: run-netns-3052fef8\x2d5c07\x2d4001\x2d85ca\x2dc15dd2d71f43.mount: Deactivated successfully. Feb 23 17:58:44 ip-10-0-136-68 systemd[1]: run-netns-4ba9b073\x2d4ef9\x2d4219\x2d8526\x2d0d6732e42f4f.mount: Deactivated successfully. Feb 23 17:58:44 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-38f2494c0de6ead08eb856ce8feb96a0f96d066163d955cfa4f8899bf5af2b97-userdata-shm.mount: Deactivated successfully. 
Feb 23 17:58:44 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-08ed89161eadb5298b8b0cb0943fef27f0afda8cffcfee049fd36ee3a1d4d11b-userdata-shm.mount: Deactivated successfully.
Feb 23 17:58:44 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-a7aa96175cdd19219fda6e3e7c07ee39eb5624518c6a45ab8771022f27865c7c-userdata-shm.mount: Deactivated successfully.
Feb 23 17:58:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:47.243927690Z" level=warning msg="Failed to find container exit file for 97f95a5f7e54231d4a0b3721b7233ab0eb8d5808258dad6c3b6c8148bbb6a992: timed out waiting for the condition" id=6afa1603-7ea0-48ac-8d75-ba7d93767165 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 17:58:47 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:58:47.244852 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerStarted Data:ba010c6ed03924d8b7cc43a54161474cdb59749968420afd1b8c8f2888c1ba3a}
Feb 23 17:58:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:58:54.872814 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 17:58:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:58:54.872884 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 17:58:56 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:58:56.216778 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-657v4"
Feb 23 17:58:56 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:58:56.216901 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk"
Feb 23 17:58:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:56.217232536Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=0cf5314c-d505-4998-9af8-517f7000f7ce name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:58:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:56.217306431Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=63f1bac8-6227-47f5-9de5-00d71c223cc8 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:58:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:56.217341263Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:58:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:56.217374129Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:58:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:56.225427895Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:ab13ea873e29f84e4109f8c8005f3577c5df500d90fde95cb26b50d826328130 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/23d3c853-3862-4305-882a-2683aa479170 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 17:58:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:56.225463672Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 17:58:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:56.225433781Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:3d2b833da0cb87e5ffe174549ba16a2e873512b9732399a0b28fe97a93a3363d UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/86fa78c6-70ca-489d-938b-5dec0eaa8f96 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 17:58:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:56.225570841Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 17:58:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:58:56.292037 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 17:58:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:58:56.292313 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 17:58:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:58:56.292527 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 17:58:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:58:56.292565 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 17:58:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:56.874761045Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(ae3a2d57a3db0cb7e29896c2dc01b9a13cecaca02ab3c492f727fff3a20da215): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): netplugin failed with no error message: signal: killed" id=c822d2da-ebbf-4620-8f45-dc8dcb5ef35b name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:58:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:56.874820643Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox ae3a2d57a3db0cb7e29896c2dc01b9a13cecaca02ab3c492f727fff3a20da215" id=c822d2da-ebbf-4620-8f45-dc8dcb5ef35b name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:58:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:56.874830941Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:aa2f6c1cfe2015e661750ad0d84d67c8d8f79e105601e0a761db6a83ba33b3f1 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS: Networks:[{Name:multus-cni-network Ifname:eth0}] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 17:58:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:56.874995045Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 17:58:56 ip-10-0-136-68 systemd[1]: run-utsns-3fcc60b0\x2de110\x2d499d\x2dbf5a\x2ded441abf9fe1.mount: Deactivated successfully.
Feb 23 17:58:56 ip-10-0-136-68 systemd[1]: run-ipcns-3fcc60b0\x2de110\x2d499d\x2dbf5a\x2ded441abf9fe1.mount: Deactivated successfully.
Feb 23 17:58:56 ip-10-0-136-68 systemd[1]: run-netns-3fcc60b0\x2de110\x2d499d\x2dbf5a\x2ded441abf9fe1.mount: Deactivated successfully.
Feb 23 17:58:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:56.899331600Z" level=info msg="runSandbox: deleting pod ID ae3a2d57a3db0cb7e29896c2dc01b9a13cecaca02ab3c492f727fff3a20da215 from idIndex" id=c822d2da-ebbf-4620-8f45-dc8dcb5ef35b name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:58:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:56.899372988Z" level=info msg="runSandbox: removing pod sandbox ae3a2d57a3db0cb7e29896c2dc01b9a13cecaca02ab3c492f727fff3a20da215" id=c822d2da-ebbf-4620-8f45-dc8dcb5ef35b name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:58:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:56.899424043Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox ae3a2d57a3db0cb7e29896c2dc01b9a13cecaca02ab3c492f727fff3a20da215" id=c822d2da-ebbf-4620-8f45-dc8dcb5ef35b name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:58:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:56.899446758Z" level=info msg="runSandbox: unmounting shmPath for sandbox ae3a2d57a3db0cb7e29896c2dc01b9a13cecaca02ab3c492f727fff3a20da215" id=c822d2da-ebbf-4620-8f45-dc8dcb5ef35b name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:58:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:56.905295311Z" level=info msg="runSandbox: removing pod sandbox from storage: ae3a2d57a3db0cb7e29896c2dc01b9a13cecaca02ab3c492f727fff3a20da215" id=c822d2da-ebbf-4620-8f45-dc8dcb5ef35b name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:58:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:56.906883887Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=c822d2da-ebbf-4620-8f45-dc8dcb5ef35b name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:58:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:56.906915543Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=c822d2da-ebbf-4620-8f45-dc8dcb5ef35b name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:58:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:58:56.907132 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(ae3a2d57a3db0cb7e29896c2dc01b9a13cecaca02ab3c492f727fff3a20da215): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 17:58:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:58:56.907202 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(ae3a2d57a3db0cb7e29896c2dc01b9a13cecaca02ab3c492f727fff3a20da215): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 17:58:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:58:56.907225 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(ae3a2d57a3db0cb7e29896c2dc01b9a13cecaca02ab3c492f727fff3a20da215): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 17:58:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:58:56.907322 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(ae3a2d57a3db0cb7e29896c2dc01b9a13cecaca02ab3c492f727fff3a20da215): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1
Feb 23 17:58:57 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-ae3a2d57a3db0cb7e29896c2dc01b9a13cecaca02ab3c492f727fff3a20da215-userdata-shm.mount: Deactivated successfully.
Feb 23 17:58:58 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:58:58.217048 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 17:58:58 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:58:58.217145 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz"
Feb 23 17:58:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:58.217548077Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=f88437c3-3876-44da-b962-137ad557518a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:58:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:58.217593047Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=89fe4055-6b97-462a-9354-bfabac1a864c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:58:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:58.217659138Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:58:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:58.217611285Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:58:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:58.225005401Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:6779b72a79b080fe7a6c8640efca9d529d37828dc1728c4244a5c332af3b3347 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/2b0cc546-982f-4548-ab8c-bf98932b07ef Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 17:58:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:58.225031471Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 17:58:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:58.225561403Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:01f199e94930da29dd5b720f614fe2663d739f67f184de5d8daef25afa2aff4b UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/96baa5ce-ee71-4b01-b532-53cae3e72e31 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 17:58:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:58:58.225594289Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 17:59:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:59:04.873019 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 17:59:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:59:04.873079 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 17:59:11 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:59:11.217058 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 17:59:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:59:11.217477204Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=776fd89c-e2e6-4703-ab31-60145d1f9257 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 17:59:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:59:11.217534072Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 17:59:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:59:14.873074 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 17:59:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:59:14.873134 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 17:59:22 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:59:22.217781 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 17:59:22 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:59:22.218144 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc
= container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 17:59:22 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:59:22.218474 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 17:59:22 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:59:22.218512 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 17:59:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:59:24.872829 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 17:59:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:59:24.872898 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 
containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 17:59:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:59:26.103997857Z" level=error msg="Failed to cleanup (probably retrying): failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(aa2f6c1cfe2015e661750ad0d84d67c8d8f79e105601e0a761db6a83ba33b3f1): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): netplugin failed with no error message: signal: killed" Feb 23 17:59:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:59:26.104422919Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:24ac9f22b6f38c3e18e02b2606ba6121b4f87f4f92fa99d5a3e039950193538e UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/1220deea-4842-4432-8206-dc1f6200b404 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:59:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:59:26.104452548Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:59:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:59:26.291859 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 17:59:26 ip-10-0-136-68 
kubenswrapper[2199]: E0223 17:59:26.292074 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 17:59:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:59:26.292318 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 17:59:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:59:26.292354 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 17:59:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:59:34.872613 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 17:59:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:59:34.872670 2199 prober.go:109] 
"Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 17:59:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:59:34.872705 2199 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 17:59:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:59:34.873309 2199 kuberuntime_manager.go:659] "Message for Container of pod" containerName="csi-driver" containerStatusID={Type:cri-o ID:ba010c6ed03924d8b7cc43a54161474cdb59749968420afd1b8c8f2888c1ba3a} pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" containerMessage="Container csi-driver failed liveness probe, will be restarted" Feb 23 17:59:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:59:34.873496 2199 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" containerID="cri-o://ba010c6ed03924d8b7cc43a54161474cdb59749968420afd1b8c8f2888c1ba3a" gracePeriod=30 Feb 23 17:59:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:59:34.873733305Z" level=info msg="Stopping container: ba010c6ed03924d8b7cc43a54161474cdb59749968420afd1b8c8f2888c1ba3a (timeout: 30s)" id=2e4ff216-3d2e-4c43-a565-ae443f468ae2 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:59:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:59:38.634939469Z" level=warning msg="Failed to find container exit file for ba010c6ed03924d8b7cc43a54161474cdb59749968420afd1b8c8f2888c1ba3a: timed out waiting for the condition" id=2e4ff216-3d2e-4c43-a565-ae443f468ae2 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:59:38 ip-10-0-136-68 systemd[1]: 
var-lib-containers-storage-overlay-39383917054b85b3852a35375a5dac6576e355b1be333145790efe042bc8b633-merged.mount: Deactivated successfully. Feb 23 17:59:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:59:38.919152021Z" level=info msg="cleanup sandbox network" Feb 23 17:59:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:59:41.240213940Z" level=info msg="NetworkStart: stopping network for sandbox 3d2b833da0cb87e5ffe174549ba16a2e873512b9732399a0b28fe97a93a3363d" id=63f1bac8-6227-47f5-9de5-00d71c223cc8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:59:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:59:41.240365520Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:3d2b833da0cb87e5ffe174549ba16a2e873512b9732399a0b28fe97a93a3363d UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/86fa78c6-70ca-489d-938b-5dec0eaa8f96 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:59:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:59:41.240395947Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 17:59:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:59:41.240404067Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 17:59:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:59:41.240411893Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:59:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:59:41.241039884Z" level=info msg="NetworkStart: stopping network for sandbox ab13ea873e29f84e4109f8c8005f3577c5df500d90fde95cb26b50d826328130" id=0cf5314c-d505-4998-9af8-517f7000f7ce name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:59:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:59:41.241142341Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk 
Namespace:openshift-ingress-canary ID:ab13ea873e29f84e4109f8c8005f3577c5df500d90fde95cb26b50d826328130 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/23d3c853-3862-4305-882a-2683aa479170 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:59:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:59:41.241179887Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 17:59:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:59:41.241192677Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 17:59:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:59:41.241202727Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:59:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:59:42.422930585Z" level=warning msg="Failed to find container exit file for ba010c6ed03924d8b7cc43a54161474cdb59749968420afd1b8c8f2888c1ba3a: timed out waiting for the condition" id=2e4ff216-3d2e-4c43-a565-ae443f468ae2 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:59:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:59:42.424675172Z" level=info msg="Stopped container ba010c6ed03924d8b7cc43a54161474cdb59749968420afd1b8c8f2888c1ba3a: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=2e4ff216-3d2e-4c43-a565-ae443f468ae2 name=/runtime.v1.RuntimeService/StopContainer Feb 23 17:59:42 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:59:42.425179 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" 
podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 17:59:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:59:43.067929399Z" level=warning msg="Failed to find container exit file for ba010c6ed03924d8b7cc43a54161474cdb59749968420afd1b8c8f2888c1ba3a: timed out waiting for the condition" id=60fbc574-71e6-417b-bbdc-ec99581347bf name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 17:59:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:59:43.240166569Z" level=info msg="NetworkStart: stopping network for sandbox 01f199e94930da29dd5b720f614fe2663d739f67f184de5d8daef25afa2aff4b" id=f88437c3-3876-44da-b962-137ad557518a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:59:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:59:43.240325867Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:01f199e94930da29dd5b720f614fe2663d739f67f184de5d8daef25afa2aff4b UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/96baa5ce-ee71-4b01-b532-53cae3e72e31 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:59:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:59:43.240362405Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 17:59:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:59:43.240373196Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 17:59:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:59:43.240383390Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:59:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:59:43.240383474Z" level=info msg="NetworkStart: stopping network for sandbox 6779b72a79b080fe7a6c8640efca9d529d37828dc1728c4244a5c332af3b3347" id=89fe4055-6b97-462a-9354-bfabac1a864c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 17:59:43 
ip-10-0-136-68 crio[2158]: time="2023-02-23 17:59:43.240549707Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:6779b72a79b080fe7a6c8640efca9d529d37828dc1728c4244a5c332af3b3347 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/2b0cc546-982f-4548-ab8c-bf98932b07ef Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 17:59:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:59:43.240586295Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 17:59:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:59:43.240595114Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 17:59:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:59:43.240603594Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 17:59:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:59:46.816989167Z" level=warning msg="Failed to find container exit file for 97f95a5f7e54231d4a0b3721b7233ab0eb8d5808258dad6c3b6c8148bbb6a992: timed out waiting for the condition" id=1130d2ee-c056-4c4c-8714-6705948afcb6 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 17:59:46 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:59:46.817891 2199 generic.go:332] "Generic (PLEG): container finished" podID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerID="ba010c6ed03924d8b7cc43a54161474cdb59749968420afd1b8c8f2888c1ba3a" exitCode=-1 Feb 23 17:59:46 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:59:46.817926 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerDied Data:ba010c6ed03924d8b7cc43a54161474cdb59749968420afd1b8c8f2888c1ba3a} Feb 23 17:59:46 
ip-10-0-136-68 kubenswrapper[2199]: I0223 17:59:46.817953 2199 scope.go:115] "RemoveContainer" containerID="97f95a5f7e54231d4a0b3721b7233ab0eb8d5808258dad6c3b6c8148bbb6a992" Feb 23 17:59:47 ip-10-0-136-68 kubenswrapper[2199]: I0223 17:59:47.819837 2199 scope.go:115] "RemoveContainer" containerID="ba010c6ed03924d8b7cc43a54161474cdb59749968420afd1b8c8f2888c1ba3a" Feb 23 17:59:47 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:59:47.820380 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 17:59:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:59:50.578035184Z" level=warning msg="Failed to find container exit file for 97f95a5f7e54231d4a0b3721b7233ab0eb8d5808258dad6c3b6c8148bbb6a992: timed out waiting for the condition" id=e187f86d-be97-425b-8c03-8e31ab9573ad name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 17:59:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:59:54.338118561Z" level=warning msg="Failed to find container exit file for 97f95a5f7e54231d4a0b3721b7233ab0eb8d5808258dad6c3b6c8148bbb6a992: timed out waiting for the condition" id=313ab0d9-2cc9-4df1-a260-8665b9913563 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 17:59:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:59:54.338662387Z" level=info msg="Removing container: 97f95a5f7e54231d4a0b3721b7233ab0eb8d5808258dad6c3b6c8148bbb6a992" id=6b4ae052-83b2-43eb-b104-e5dd9c82af2d name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:59:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:59:56.292019 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: 
checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 17:59:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:59:56.292323 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 17:59:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:59:56.292606 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 17:59:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 17:59:56.292653 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 17:59:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:59:58.086960580Z" level=warning msg="Failed to 
find container exit file for 97f95a5f7e54231d4a0b3721b7233ab0eb8d5808258dad6c3b6c8148bbb6a992: timed out waiting for the condition" id=6b4ae052-83b2-43eb-b104-e5dd9c82af2d name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 17:59:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 17:59:58.099639736Z" level=info msg="Removed container 97f95a5f7e54231d4a0b3721b7233ab0eb8d5808258dad6c3b6c8148bbb6a992: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=6b4ae052-83b2-43eb-b104-e5dd9c82af2d name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 18:00:01 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:00:01.216980 2199 scope.go:115] "RemoveContainer" containerID="ba010c6ed03924d8b7cc43a54161474cdb59749968420afd1b8c8f2888c1ba3a" Feb 23 18:00:01 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:00:01.217391 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:00:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:02.586962500Z" level=warning msg="Failed to find container exit file for ba010c6ed03924d8b7cc43a54161474cdb59749968420afd1b8c8f2888c1ba3a: timed out waiting for the condition" id=4039b916-c6d7-471f-8ace-d4c241ea08a6 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:00:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:11.116135486Z" level=info msg="NetworkStart: stopping network for sandbox 24ac9f22b6f38c3e18e02b2606ba6121b4f87f4f92fa99d5a3e039950193538e" id=776fd89c-e2e6-4703-ab31-60145d1f9257 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:00:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:11.116231853Z" level=info msg="Got pod network &{Name:network-check-target-52ltr 
Namespace:openshift-network-diagnostics ID:aa2f6c1cfe2015e661750ad0d84d67c8d8f79e105601e0a761db6a83ba33b3f1 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS: Networks:[{Name:multus-cni-network Ifname:eth0}] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:00:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:11.116444999Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:00:16 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:00:16.217147 2199 scope.go:115] "RemoveContainer" containerID="ba010c6ed03924d8b7cc43a54161474cdb59749968420afd1b8c8f2888c1ba3a" Feb 23 18:00:16 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:00:16.217814 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:00:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:20.172625481Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3" id=a5578103-f03e-495d-aeb9-4532e9ed2155 name=/runtime.v1.ImageService/ImageStatus Feb 23 18:00:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:20.172807118Z" level=info msg="Image status: 
&ImageStatusResponse{Image:&Image{Id:51cb7555d0a960fc0daae74a5b5c47c19fd1ae7bd31e2616ebc88f696f511e4d,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3],Size_:351828271,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=a5578103-f03e-495d-aeb9-4532e9ed2155 name=/runtime.v1.ImageService/ImageStatus Feb 23 18:00:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:26.252224654Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(ab13ea873e29f84e4109f8c8005f3577c5df500d90fde95cb26b50d826328130): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=0cf5314c-d505-4998-9af8-517f7000f7ce name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:00:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:26.252300019Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox ab13ea873e29f84e4109f8c8005f3577c5df500d90fde95cb26b50d826328130" id=0cf5314c-d505-4998-9af8-517f7000f7ce name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:00:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:26.252322778Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(3d2b833da0cb87e5ffe174549ba16a2e873512b9732399a0b28fe97a93a3363d): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": 
plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=63f1bac8-6227-47f5-9de5-00d71c223cc8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:00:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:26.252353166Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 3d2b833da0cb87e5ffe174549ba16a2e873512b9732399a0b28fe97a93a3363d" id=63f1bac8-6227-47f5-9de5-00d71c223cc8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:00:26 ip-10-0-136-68 systemd[1]: run-utsns-23d3c853\x2d3862\x2d4305\x2d882a\x2d2683aa479170.mount: Deactivated successfully. Feb 23 18:00:26 ip-10-0-136-68 systemd[1]: run-utsns-86fa78c6\x2d70ca\x2d489d\x2d938b\x2d5dec0eaa8f96.mount: Deactivated successfully. Feb 23 18:00:26 ip-10-0-136-68 systemd[1]: run-ipcns-23d3c853\x2d3862\x2d4305\x2d882a\x2d2683aa479170.mount: Deactivated successfully. Feb 23 18:00:26 ip-10-0-136-68 systemd[1]: run-ipcns-86fa78c6\x2d70ca\x2d489d\x2d938b\x2d5dec0eaa8f96.mount: Deactivated successfully. Feb 23 18:00:26 ip-10-0-136-68 systemd[1]: run-netns-86fa78c6\x2d70ca\x2d489d\x2d938b\x2d5dec0eaa8f96.mount: Deactivated successfully. Feb 23 18:00:26 ip-10-0-136-68 systemd[1]: run-netns-23d3c853\x2d3862\x2d4305\x2d882a\x2d2683aa479170.mount: Deactivated successfully. 
Feb 23 18:00:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:26.274328987Z" level=info msg="runSandbox: deleting pod ID 3d2b833da0cb87e5ffe174549ba16a2e873512b9732399a0b28fe97a93a3363d from idIndex" id=63f1bac8-6227-47f5-9de5-00d71c223cc8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:00:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:26.274370207Z" level=info msg="runSandbox: removing pod sandbox 3d2b833da0cb87e5ffe174549ba16a2e873512b9732399a0b28fe97a93a3363d" id=63f1bac8-6227-47f5-9de5-00d71c223cc8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:00:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:26.274401594Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 3d2b833da0cb87e5ffe174549ba16a2e873512b9732399a0b28fe97a93a3363d" id=63f1bac8-6227-47f5-9de5-00d71c223cc8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:00:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:26.274416323Z" level=info msg="runSandbox: unmounting shmPath for sandbox 3d2b833da0cb87e5ffe174549ba16a2e873512b9732399a0b28fe97a93a3363d" id=63f1bac8-6227-47f5-9de5-00d71c223cc8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:00:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:26.277318329Z" level=info msg="runSandbox: deleting pod ID ab13ea873e29f84e4109f8c8005f3577c5df500d90fde95cb26b50d826328130 from idIndex" id=0cf5314c-d505-4998-9af8-517f7000f7ce name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:00:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:26.277356549Z" level=info msg="runSandbox: removing pod sandbox ab13ea873e29f84e4109f8c8005f3577c5df500d90fde95cb26b50d826328130" id=0cf5314c-d505-4998-9af8-517f7000f7ce name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:00:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:26.277403017Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox ab13ea873e29f84e4109f8c8005f3577c5df500d90fde95cb26b50d826328130" 
id=0cf5314c-d505-4998-9af8-517f7000f7ce name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:00:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:26.277424556Z" level=info msg="runSandbox: unmounting shmPath for sandbox ab13ea873e29f84e4109f8c8005f3577c5df500d90fde95cb26b50d826328130" id=0cf5314c-d505-4998-9af8-517f7000f7ce name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:00:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:26.281296451Z" level=info msg="runSandbox: removing pod sandbox from storage: 3d2b833da0cb87e5ffe174549ba16a2e873512b9732399a0b28fe97a93a3363d" id=63f1bac8-6227-47f5-9de5-00d71c223cc8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:00:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:26.282844180Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=63f1bac8-6227-47f5-9de5-00d71c223cc8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:00:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:26.282876134Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=63f1bac8-6227-47f5-9de5-00d71c223cc8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:00:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:00:26.283117 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(3d2b833da0cb87e5ffe174549ba16a2e873512b9732399a0b28fe97a93a3363d): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Feb 23 18:00:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:00:26.283184 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(3d2b833da0cb87e5ffe174549ba16a2e873512b9732399a0b28fe97a93a3363d): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:00:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:00:26.283223 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(3d2b833da0cb87e5ffe174549ba16a2e873512b9732399a0b28fe97a93a3363d): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:00:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:00:26.283323 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(3d2b833da0cb87e5ffe174549ba16a2e873512b9732399a0b28fe97a93a3363d): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 18:00:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:26.283321083Z" level=info msg="runSandbox: removing pod sandbox from storage: ab13ea873e29f84e4109f8c8005f3577c5df500d90fde95cb26b50d826328130" id=0cf5314c-d505-4998-9af8-517f7000f7ce name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:00:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:26.284756677Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=0cf5314c-d505-4998-9af8-517f7000f7ce name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:00:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:26.284782746Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=0cf5314c-d505-4998-9af8-517f7000f7ce name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:00:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:00:26.284935 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(ab13ea873e29f84e4109f8c8005f3577c5df500d90fde95cb26b50d826328130): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:00:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:00:26.284978 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(ab13ea873e29f84e4109f8c8005f3577c5df500d90fde95cb26b50d826328130): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:00:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:00:26.285000 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(ab13ea873e29f84e4109f8c8005f3577c5df500d90fde95cb26b50d826328130): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:00:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:00:26.285054 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(ab13ea873e29f84e4109f8c8005f3577c5df500d90fde95cb26b50d826328130): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 18:00:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:00:26.291719 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:00:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:00:26.291988 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:00:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:00:26.292207 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:00:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:00:26.292238 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:00:27 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:00:27.216816 2199 scope.go:115] "RemoveContainer" containerID="ba010c6ed03924d8b7cc43a54161474cdb59749968420afd1b8c8f2888c1ba3a" Feb 23 18:00:27 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:00:27.217184 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:00:27 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-3d2b833da0cb87e5ffe174549ba16a2e873512b9732399a0b28fe97a93a3363d-userdata-shm.mount: Deactivated successfully. Feb 23 18:00:27 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-ab13ea873e29f84e4109f8c8005f3577c5df500d90fde95cb26b50d826328130-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:00:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:28.249653072Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(6779b72a79b080fe7a6c8640efca9d529d37828dc1728c4244a5c332af3b3347): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=89fe4055-6b97-462a-9354-bfabac1a864c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:00:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:28.249713534Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 6779b72a79b080fe7a6c8640efca9d529d37828dc1728c4244a5c332af3b3347" id=89fe4055-6b97-462a-9354-bfabac1a864c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:00:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:28.249673804Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(01f199e94930da29dd5b720f614fe2663d739f67f184de5d8daef25afa2aff4b): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f88437c3-3876-44da-b962-137ad557518a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:00:28 ip-10-0-136-68 crio[2158]: 
time="2023-02-23 18:00:28.249808037Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 01f199e94930da29dd5b720f614fe2663d739f67f184de5d8daef25afa2aff4b" id=f88437c3-3876-44da-b962-137ad557518a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:00:28 ip-10-0-136-68 systemd[1]: run-utsns-96baa5ce\x2dee71\x2d4b01\x2db532\x2d53cae3e72e31.mount: Deactivated successfully. Feb 23 18:00:28 ip-10-0-136-68 systemd[1]: run-utsns-2b0cc546\x2d982f\x2d4548\x2dab8c\x2dbf98932b07ef.mount: Deactivated successfully. Feb 23 18:00:28 ip-10-0-136-68 systemd[1]: run-ipcns-96baa5ce\x2dee71\x2d4b01\x2db532\x2d53cae3e72e31.mount: Deactivated successfully. Feb 23 18:00:28 ip-10-0-136-68 systemd[1]: run-ipcns-2b0cc546\x2d982f\x2d4548\x2dab8c\x2dbf98932b07ef.mount: Deactivated successfully. Feb 23 18:00:28 ip-10-0-136-68 systemd[1]: run-netns-96baa5ce\x2dee71\x2d4b01\x2db532\x2d53cae3e72e31.mount: Deactivated successfully. Feb 23 18:00:28 ip-10-0-136-68 systemd[1]: run-netns-2b0cc546\x2d982f\x2d4548\x2dab8c\x2dbf98932b07ef.mount: Deactivated successfully. 
Feb 23 18:00:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:28.272324244Z" level=info msg="runSandbox: deleting pod ID 01f199e94930da29dd5b720f614fe2663d739f67f184de5d8daef25afa2aff4b from idIndex" id=f88437c3-3876-44da-b962-137ad557518a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:00:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:28.272360448Z" level=info msg="runSandbox: removing pod sandbox 01f199e94930da29dd5b720f614fe2663d739f67f184de5d8daef25afa2aff4b" id=f88437c3-3876-44da-b962-137ad557518a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:00:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:28.272400224Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 01f199e94930da29dd5b720f614fe2663d739f67f184de5d8daef25afa2aff4b" id=f88437c3-3876-44da-b962-137ad557518a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:00:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:28.272424674Z" level=info msg="runSandbox: unmounting shmPath for sandbox 01f199e94930da29dd5b720f614fe2663d739f67f184de5d8daef25afa2aff4b" id=f88437c3-3876-44da-b962-137ad557518a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:00:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:28.273318624Z" level=info msg="runSandbox: deleting pod ID 6779b72a79b080fe7a6c8640efca9d529d37828dc1728c4244a5c332af3b3347 from idIndex" id=89fe4055-6b97-462a-9354-bfabac1a864c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:00:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:28.273349457Z" level=info msg="runSandbox: removing pod sandbox 6779b72a79b080fe7a6c8640efca9d529d37828dc1728c4244a5c332af3b3347" id=89fe4055-6b97-462a-9354-bfabac1a864c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:00:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:28.273376891Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 6779b72a79b080fe7a6c8640efca9d529d37828dc1728c4244a5c332af3b3347" 
id=89fe4055-6b97-462a-9354-bfabac1a864c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:00:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:28.273390794Z" level=info msg="runSandbox: unmounting shmPath for sandbox 6779b72a79b080fe7a6c8640efca9d529d37828dc1728c4244a5c332af3b3347" id=89fe4055-6b97-462a-9354-bfabac1a864c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:00:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:28.277411833Z" level=info msg="runSandbox: removing pod sandbox from storage: 01f199e94930da29dd5b720f614fe2663d739f67f184de5d8daef25afa2aff4b" id=f88437c3-3876-44da-b962-137ad557518a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:00:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:28.277427791Z" level=info msg="runSandbox: removing pod sandbox from storage: 6779b72a79b080fe7a6c8640efca9d529d37828dc1728c4244a5c332af3b3347" id=89fe4055-6b97-462a-9354-bfabac1a864c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:00:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:28.278977135Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=f88437c3-3876-44da-b962-137ad557518a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:00:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:28.279005479Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=f88437c3-3876-44da-b962-137ad557518a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:00:28 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:00:28.279327 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(01f199e94930da29dd5b720f614fe2663d739f67f184de5d8daef25afa2aff4b): error adding pod 
openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Feb 23 18:00:28 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:00:28.279522 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(01f199e94930da29dd5b720f614fe2663d739f67f184de5d8daef25afa2aff4b): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:00:28 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:00:28.279564 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(01f199e94930da29dd5b720f614fe2663d739f67f184de5d8daef25afa2aff4b): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:00:28 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:00:28.279645 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(01f199e94930da29dd5b720f614fe2663d739f67f184de5d8daef25afa2aff4b): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 18:00:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:28.280506018Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=89fe4055-6b97-462a-9354-bfabac1a864c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:00:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:28.280531482Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=89fe4055-6b97-462a-9354-bfabac1a864c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:00:28 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:00:28.280690 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(6779b72a79b080fe7a6c8640efca9d529d37828dc1728c4244a5c332af3b3347): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:00:28 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:00:28.280732 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(6779b72a79b080fe7a6c8640efca9d529d37828dc1728c4244a5c332af3b3347): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:00:28 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:00:28.280754 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(6779b72a79b080fe7a6c8640efca9d529d37828dc1728c4244a5c332af3b3347): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:00:28 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:00:28.280808 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(6779b72a79b080fe7a6c8640efca9d529d37828dc1728c4244a5c332af3b3347): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 18:00:29 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-01f199e94930da29dd5b720f614fe2663d739f67f184de5d8daef25afa2aff4b-userdata-shm.mount: Deactivated successfully. Feb 23 18:00:29 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-6779b72a79b080fe7a6c8640efca9d529d37828dc1728c4244a5c332af3b3347-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:00:37 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:00:37.216617 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 18:00:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:37.216995477Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=75c13d7a-086e-408f-b815-4df10f861161 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:00:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:37.217047739Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:00:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:37.222570663Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:6b74e5235d02d87dc805a495fad4b68ad3d7dbc60ad3e4c851f3f7abc2df99db UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/c554327a-4555-4ae0-85f1-250a3382ff42 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:00:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:37.222598226Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:00:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:38.920979188Z" level=error msg="Failed to cleanup (probably retrying): failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(aa2f6c1cfe2015e661750ad0d84d67c8d8f79e105601e0a761db6a83ba33b3f1): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): netplugin failed with no error message: signal: killed" Feb 23 18:00:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:38.921016778Z" level=info msg="Got pod network &{Name:network-check-target-52ltr 
Namespace:openshift-network-diagnostics ID:24ac9f22b6f38c3e18e02b2606ba6121b4f87f4f92fa99d5a3e039950193538e UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/1220deea-4842-4432-8206-dc1f6200b404 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:00:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:38.921122943Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:00:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:38.921133543Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:00:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:38.921146000Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:00:39 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:00:39.217222 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:00:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:39.217660192Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=652537c9-1300-4e2b-9fe7-288123fbaeb0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:00:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:39.217713868Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:00:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:39.222826163Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:562a7c1ddcee1890b43aaf6db88e1131b3a5574e0284e90894acbdc51037ccff UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/93747ccf-33c6-42e7-a9bf-2065c276b30e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:00:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:39.222850012Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:00:40 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:00:40.217360 2199 scope.go:115] "RemoveContainer" containerID="ba010c6ed03924d8b7cc43a54161474cdb59749968420afd1b8c8f2888c1ba3a" Feb 23 18:00:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:00:40.217912 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:00:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 
18:00:40.218389 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:00:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:00:40.218641 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:00:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:00:40.218954 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:00:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:00:40.218989 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" 
podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:00:41 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:00:41.216790 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:00:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:41.217212293Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=a9bd393b-fa2c-4cce-aed1-ef37a4bc49f5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:00:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:41.217334816Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:00:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:41.226053160Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:93d4b7a0f91539f2a738508217fbb3fbfc22d54b3741a0f52f21055f7aec1117 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/281bd3be-d22c-4126-a6c3-7d0fd976eb4d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:00:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:41.226078353Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:00:42 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:00:42.216681 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:00:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:42.217169881Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=d7c08906-d9bf-4d0d-adfc-d94b8d7eb950 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:00:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:42.217239054Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:00:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:42.222871696Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:606a36fd31ce6fce900d4ea0b2325d466502beb6576c009a3c0f6217639f1753 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/19cb9740-088f-4c65-9e17-9f38ef19a97b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:00:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:00:42.222912472Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:00:51 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:00:51.216655 2199 scope.go:115] "RemoveContainer" containerID="ba010c6ed03924d8b7cc43a54161474cdb59749968420afd1b8c8f2888c1ba3a" Feb 23 18:00:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:00:51.217174 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:00:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:00:56.292310 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc 
error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:00:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:00:56.292590 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:00:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:00:56.292804 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:00:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:00:56.292840 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:00:58 ip-10-0-136-68 crio[2158]: 
time="2023-02-23 18:00:58.143826677Z" level=info msg="cleanup sandbox network" Feb 23 18:01:02 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:01:02.217277 2199 scope.go:115] "RemoveContainer" containerID="ba010c6ed03924d8b7cc43a54161474cdb59749968420afd1b8c8f2888c1ba3a" Feb 23 18:01:02 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:01:02.217830 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:01:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:01:11.117525168Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(24ac9f22b6f38c3e18e02b2606ba6121b4f87f4f92fa99d5a3e039950193538e): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): netplugin failed with no error message: signal: killed" id=776fd89c-e2e6-4703-ab31-60145d1f9257 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:01:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:01:11.117578220Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 24ac9f22b6f38c3e18e02b2606ba6121b4f87f4f92fa99d5a3e039950193538e" id=776fd89c-e2e6-4703-ab31-60145d1f9257 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:01:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:01:11.117582186Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:aa2f6c1cfe2015e661750ad0d84d67c8d8f79e105601e0a761db6a83ba33b3f1 
UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS: Networks:[{Name:multus-cni-network Ifname:eth0}] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:01:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:01:11.117812366Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:01:11 ip-10-0-136-68 systemd[1]: run-utsns-1220deea\x2d4842\x2d4432\x2d8206\x2ddc1f6200b404.mount: Deactivated successfully. Feb 23 18:01:11 ip-10-0-136-68 systemd[1]: run-ipcns-1220deea\x2d4842\x2d4432\x2d8206\x2ddc1f6200b404.mount: Deactivated successfully. Feb 23 18:01:11 ip-10-0-136-68 systemd[1]: run-netns-1220deea\x2d4842\x2d4432\x2d8206\x2ddc1f6200b404.mount: Deactivated successfully. Feb 23 18:01:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:01:11.134330035Z" level=info msg="runSandbox: deleting pod ID 24ac9f22b6f38c3e18e02b2606ba6121b4f87f4f92fa99d5a3e039950193538e from idIndex" id=776fd89c-e2e6-4703-ab31-60145d1f9257 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:01:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:01:11.134396937Z" level=info msg="runSandbox: removing pod sandbox 24ac9f22b6f38c3e18e02b2606ba6121b4f87f4f92fa99d5a3e039950193538e" id=776fd89c-e2e6-4703-ab31-60145d1f9257 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:01:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:01:11.134437740Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 24ac9f22b6f38c3e18e02b2606ba6121b4f87f4f92fa99d5a3e039950193538e" id=776fd89c-e2e6-4703-ab31-60145d1f9257 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:01:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:01:11.134462676Z" level=info msg="runSandbox: unmounting shmPath for sandbox 24ac9f22b6f38c3e18e02b2606ba6121b4f87f4f92fa99d5a3e039950193538e" id=776fd89c-e2e6-4703-ab31-60145d1f9257 name=/runtime.v1.RuntimeService/RunPodSandbox 
Feb 23 18:01:11 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-24ac9f22b6f38c3e18e02b2606ba6121b4f87f4f92fa99d5a3e039950193538e-userdata-shm.mount: Deactivated successfully. Feb 23 18:01:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:01:11.139314954Z" level=info msg="runSandbox: removing pod sandbox from storage: 24ac9f22b6f38c3e18e02b2606ba6121b4f87f4f92fa99d5a3e039950193538e" id=776fd89c-e2e6-4703-ab31-60145d1f9257 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:01:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:01:11.140845380Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=776fd89c-e2e6-4703-ab31-60145d1f9257 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:01:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:01:11.140880838Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=776fd89c-e2e6-4703-ab31-60145d1f9257 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:01:11 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:01:11.141113 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(24ac9f22b6f38c3e18e02b2606ba6121b4f87f4f92fa99d5a3e039950193538e): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:01:11 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:01:11.141176 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(24ac9f22b6f38c3e18e02b2606ba6121b4f87f4f92fa99d5a3e039950193538e): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:01:11 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:01:11.141198 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(24ac9f22b6f38c3e18e02b2606ba6121b4f87f4f92fa99d5a3e039950193538e): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:01:11 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:01:11.141311 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(24ac9f22b6f38c3e18e02b2606ba6121b4f87f4f92fa99d5a3e039950193538e): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 18:01:17 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:01:17.216951 2199 scope.go:115] "RemoveContainer" containerID="ba010c6ed03924d8b7cc43a54161474cdb59749968420afd1b8c8f2888c1ba3a" Feb 23 18:01:17 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:01:17.217396 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:01:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:01:22.234468262Z" level=info msg="NetworkStart: stopping network for sandbox 6b74e5235d02d87dc805a495fad4b68ad3d7dbc60ad3e4c851f3f7abc2df99db" id=75c13d7a-086e-408f-b815-4df10f861161 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:01:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:01:22.234573498Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:6b74e5235d02d87dc805a495fad4b68ad3d7dbc60ad3e4c851f3f7abc2df99db UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/c554327a-4555-4ae0-85f1-250a3382ff42 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:01:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:01:22.234601090Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:01:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:01:22.234609868Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:01:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:01:22.234619083Z" 
level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:01:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:01:24.234816235Z" level=info msg="NetworkStart: stopping network for sandbox 562a7c1ddcee1890b43aaf6db88e1131b3a5574e0284e90894acbdc51037ccff" id=652537c9-1300-4e2b-9fe7-288123fbaeb0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:01:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:01:24.234948480Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:562a7c1ddcee1890b43aaf6db88e1131b3a5574e0284e90894acbdc51037ccff UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/93747ccf-33c6-42e7-a9bf-2065c276b30e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:01:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:01:24.234989537Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:01:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:01:24.235000786Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:01:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:01:24.235010577Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:01:25 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:01:25.216866 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:01:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:01:25.217338496Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=7e9ab06d-87f7-4852-b4ed-d4164fedd39a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:01:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:01:25.217406723Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:01:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:01:26.238175757Z" level=info msg="NetworkStart: stopping network for sandbox 93d4b7a0f91539f2a738508217fbb3fbfc22d54b3741a0f52f21055f7aec1117" id=a9bd393b-fa2c-4cce-aed1-ef37a4bc49f5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:01:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:01:26.238419299Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:93d4b7a0f91539f2a738508217fbb3fbfc22d54b3741a0f52f21055f7aec1117 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/281bd3be-d22c-4126-a6c3-7d0fd976eb4d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:01:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:01:26.238462374Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:01:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:01:26.238473246Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:01:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:01:26.238482589Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:01:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:01:26.291773 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: 
code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:01:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:01:26.292026 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:01:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:01:26.292286 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:01:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:01:26.292317 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:01:27 ip-10-0-136-68 crio[2158]: 
time="2023-02-23 18:01:27.234587587Z" level=info msg="NetworkStart: stopping network for sandbox 606a36fd31ce6fce900d4ea0b2325d466502beb6576c009a3c0f6217639f1753" id=d7c08906-d9bf-4d0d-adfc-d94b8d7eb950 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:01:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:01:27.234705750Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:606a36fd31ce6fce900d4ea0b2325d466502beb6576c009a3c0f6217639f1753 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/19cb9740-088f-4c65-9e17-9f38ef19a97b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:01:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:01:27.234733820Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:01:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:01:27.234744735Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:01:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:01:27.234754255Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:01:29 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:01:29.216658 2199 scope.go:115] "RemoveContainer" containerID="ba010c6ed03924d8b7cc43a54161474cdb59749968420afd1b8c8f2888c1ba3a" Feb 23 18:01:29 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:01:29.217048 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:01:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 
18:01:44.217330 2199 scope.go:115] "RemoveContainer" containerID="ba010c6ed03924d8b7cc43a54161474cdb59749968420afd1b8c8f2888c1ba3a" Feb 23 18:01:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:01:44.217897 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:01:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:01:56.126728978Z" level=error msg="Failed to cleanup (probably retrying): failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(aa2f6c1cfe2015e661750ad0d84d67c8d8f79e105601e0a761db6a83ba33b3f1): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" Feb 23 18:01:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:01:56.127116045Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:6ea140c3d0f18ae87a164abad0b6a6c8c5332fc361e2ad1c280f2cc5d649f702 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/e9a1f51d-c4e9-456e-b04b-3b8525fa1705 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:01:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:01:56.127139616Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network 
\"multus-cni-network\" (type=multus)" Feb 23 18:01:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:01:56.292346 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:01:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:01:56.292654 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:01:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:01:56.292851 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:01:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:01:56.292883 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not 
found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:01:58 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:01:58.217105 2199 scope.go:115] "RemoveContainer" containerID="ba010c6ed03924d8b7cc43a54161474cdb59749968420afd1b8c8f2888c1ba3a" Feb 23 18:01:58 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:01:58.217532 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:02:05 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:02:05.216894 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:02:05 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:02:05.217197 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:02:05 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:02:05.217539 2199 remote_runtime.go:479] "ExecSync cmd 
from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:02:05 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:02:05.217579 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:02:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:07.243672192Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(6b74e5235d02d87dc805a495fad4b68ad3d7dbc60ad3e4c851f3f7abc2df99db): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=75c13d7a-086e-408f-b815-4df10f861161 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:02:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:07.243732234Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 6b74e5235d02d87dc805a495fad4b68ad3d7dbc60ad3e4c851f3f7abc2df99db" id=75c13d7a-086e-408f-b815-4df10f861161 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 
18:02:07 ip-10-0-136-68 systemd[1]: run-utsns-c554327a\x2d4555\x2d4ae0\x2d85f1\x2d250a3382ff42.mount: Deactivated successfully. Feb 23 18:02:07 ip-10-0-136-68 systemd[1]: run-ipcns-c554327a\x2d4555\x2d4ae0\x2d85f1\x2d250a3382ff42.mount: Deactivated successfully. Feb 23 18:02:07 ip-10-0-136-68 systemd[1]: run-netns-c554327a\x2d4555\x2d4ae0\x2d85f1\x2d250a3382ff42.mount: Deactivated successfully. Feb 23 18:02:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:07.274345803Z" level=info msg="runSandbox: deleting pod ID 6b74e5235d02d87dc805a495fad4b68ad3d7dbc60ad3e4c851f3f7abc2df99db from idIndex" id=75c13d7a-086e-408f-b815-4df10f861161 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:02:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:07.274399104Z" level=info msg="runSandbox: removing pod sandbox 6b74e5235d02d87dc805a495fad4b68ad3d7dbc60ad3e4c851f3f7abc2df99db" id=75c13d7a-086e-408f-b815-4df10f861161 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:02:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:07.274455770Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 6b74e5235d02d87dc805a495fad4b68ad3d7dbc60ad3e4c851f3f7abc2df99db" id=75c13d7a-086e-408f-b815-4df10f861161 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:02:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:07.274475521Z" level=info msg="runSandbox: unmounting shmPath for sandbox 6b74e5235d02d87dc805a495fad4b68ad3d7dbc60ad3e4c851f3f7abc2df99db" id=75c13d7a-086e-408f-b815-4df10f861161 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:02:07 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-6b74e5235d02d87dc805a495fad4b68ad3d7dbc60ad3e4c851f3f7abc2df99db-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:02:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:07.279315003Z" level=info msg="runSandbox: removing pod sandbox from storage: 6b74e5235d02d87dc805a495fad4b68ad3d7dbc60ad3e4c851f3f7abc2df99db" id=75c13d7a-086e-408f-b815-4df10f861161 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:02:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:07.280974825Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=75c13d7a-086e-408f-b815-4df10f861161 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:02:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:07.281008579Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=75c13d7a-086e-408f-b815-4df10f861161 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:02:07 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:02:07.281277 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(6b74e5235d02d87dc805a495fad4b68ad3d7dbc60ad3e4c851f3f7abc2df99db): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:02:07 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:02:07.281348 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(6b74e5235d02d87dc805a495fad4b68ad3d7dbc60ad3e4c851f3f7abc2df99db): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:02:07 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:02:07.281384 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(6b74e5235d02d87dc805a495fad4b68ad3d7dbc60ad3e4c851f3f7abc2df99db): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:02:07 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:02:07.281461 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(6b74e5235d02d87dc805a495fad4b68ad3d7dbc60ad3e4c851f3f7abc2df99db): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 18:02:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:09.244634059Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(562a7c1ddcee1890b43aaf6db88e1131b3a5574e0284e90894acbdc51037ccff): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=652537c9-1300-4e2b-9fe7-288123fbaeb0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:02:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:09.244714079Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 562a7c1ddcee1890b43aaf6db88e1131b3a5574e0284e90894acbdc51037ccff" id=652537c9-1300-4e2b-9fe7-288123fbaeb0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:02:09 ip-10-0-136-68 systemd[1]: run-utsns-93747ccf\x2d33c6\x2d42e7\x2da9bf\x2d2065c276b30e.mount: Deactivated successfully. Feb 23 18:02:09 ip-10-0-136-68 systemd[1]: run-ipcns-93747ccf\x2d33c6\x2d42e7\x2da9bf\x2d2065c276b30e.mount: Deactivated successfully. Feb 23 18:02:09 ip-10-0-136-68 systemd[1]: run-netns-93747ccf\x2d33c6\x2d42e7\x2da9bf\x2d2065c276b30e.mount: Deactivated successfully. 
Feb 23 18:02:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:09.266324157Z" level=info msg="runSandbox: deleting pod ID 562a7c1ddcee1890b43aaf6db88e1131b3a5574e0284e90894acbdc51037ccff from idIndex" id=652537c9-1300-4e2b-9fe7-288123fbaeb0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:02:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:09.266367783Z" level=info msg="runSandbox: removing pod sandbox 562a7c1ddcee1890b43aaf6db88e1131b3a5574e0284e90894acbdc51037ccff" id=652537c9-1300-4e2b-9fe7-288123fbaeb0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:02:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:09.266410424Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 562a7c1ddcee1890b43aaf6db88e1131b3a5574e0284e90894acbdc51037ccff" id=652537c9-1300-4e2b-9fe7-288123fbaeb0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:02:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:09.266431161Z" level=info msg="runSandbox: unmounting shmPath for sandbox 562a7c1ddcee1890b43aaf6db88e1131b3a5574e0284e90894acbdc51037ccff" id=652537c9-1300-4e2b-9fe7-288123fbaeb0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:02:09 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-562a7c1ddcee1890b43aaf6db88e1131b3a5574e0284e90894acbdc51037ccff-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:02:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:09.273301746Z" level=info msg="runSandbox: removing pod sandbox from storage: 562a7c1ddcee1890b43aaf6db88e1131b3a5574e0284e90894acbdc51037ccff" id=652537c9-1300-4e2b-9fe7-288123fbaeb0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:02:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:09.274853339Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=652537c9-1300-4e2b-9fe7-288123fbaeb0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:02:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:09.274882985Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=652537c9-1300-4e2b-9fe7-288123fbaeb0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:02:09 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:02:09.275117 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(562a7c1ddcee1890b43aaf6db88e1131b3a5574e0284e90894acbdc51037ccff): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:02:09 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:02:09.275175 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(562a7c1ddcee1890b43aaf6db88e1131b3a5574e0284e90894acbdc51037ccff): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:02:09 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:02:09.275198 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(562a7c1ddcee1890b43aaf6db88e1131b3a5574e0284e90894acbdc51037ccff): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:02:09 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:02:09.275326 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(562a7c1ddcee1890b43aaf6db88e1131b3a5574e0284e90894acbdc51037ccff): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 18:02:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:11.249405322Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(93d4b7a0f91539f2a738508217fbb3fbfc22d54b3741a0f52f21055f7aec1117): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a9bd393b-fa2c-4cce-aed1-ef37a4bc49f5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:02:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:11.249459438Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 93d4b7a0f91539f2a738508217fbb3fbfc22d54b3741a0f52f21055f7aec1117" id=a9bd393b-fa2c-4cce-aed1-ef37a4bc49f5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:02:11 ip-10-0-136-68 systemd[1]: run-utsns-281bd3be\x2dd22c\x2d4126\x2da6c3\x2d7d0fd976eb4d.mount: Deactivated successfully. Feb 23 18:02:11 ip-10-0-136-68 systemd[1]: run-ipcns-281bd3be\x2dd22c\x2d4126\x2da6c3\x2d7d0fd976eb4d.mount: Deactivated successfully. Feb 23 18:02:11 ip-10-0-136-68 systemd[1]: run-netns-281bd3be\x2dd22c\x2d4126\x2da6c3\x2d7d0fd976eb4d.mount: Deactivated successfully. 
Feb 23 18:02:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:11.269336665Z" level=info msg="runSandbox: deleting pod ID 93d4b7a0f91539f2a738508217fbb3fbfc22d54b3741a0f52f21055f7aec1117 from idIndex" id=a9bd393b-fa2c-4cce-aed1-ef37a4bc49f5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:02:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:11.269385121Z" level=info msg="runSandbox: removing pod sandbox 93d4b7a0f91539f2a738508217fbb3fbfc22d54b3741a0f52f21055f7aec1117" id=a9bd393b-fa2c-4cce-aed1-ef37a4bc49f5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:02:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:11.269422773Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 93d4b7a0f91539f2a738508217fbb3fbfc22d54b3741a0f52f21055f7aec1117" id=a9bd393b-fa2c-4cce-aed1-ef37a4bc49f5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:02:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:11.269443307Z" level=info msg="runSandbox: unmounting shmPath for sandbox 93d4b7a0f91539f2a738508217fbb3fbfc22d54b3741a0f52f21055f7aec1117" id=a9bd393b-fa2c-4cce-aed1-ef37a4bc49f5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:02:11 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-93d4b7a0f91539f2a738508217fbb3fbfc22d54b3741a0f52f21055f7aec1117-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:02:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:11.281310209Z" level=info msg="runSandbox: removing pod sandbox from storage: 93d4b7a0f91539f2a738508217fbb3fbfc22d54b3741a0f52f21055f7aec1117" id=a9bd393b-fa2c-4cce-aed1-ef37a4bc49f5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:02:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:11.282869171Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=a9bd393b-fa2c-4cce-aed1-ef37a4bc49f5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:02:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:11.282905709Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=a9bd393b-fa2c-4cce-aed1-ef37a4bc49f5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:02:11 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:02:11.283128 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(93d4b7a0f91539f2a738508217fbb3fbfc22d54b3741a0f52f21055f7aec1117): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:02:11 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:02:11.283193 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(93d4b7a0f91539f2a738508217fbb3fbfc22d54b3741a0f52f21055f7aec1117): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:02:11 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:02:11.283218 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(93d4b7a0f91539f2a738508217fbb3fbfc22d54b3741a0f52f21055f7aec1117): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:02:11 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:02:11.283312 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(93d4b7a0f91539f2a738508217fbb3fbfc22d54b3741a0f52f21055f7aec1117): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 18:02:12 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:02:12.217032 2199 scope.go:115] "RemoveContainer" containerID="ba010c6ed03924d8b7cc43a54161474cdb59749968420afd1b8c8f2888c1ba3a" Feb 23 18:02:12 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:02:12.217494 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:02:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:12.244767803Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(606a36fd31ce6fce900d4ea0b2325d466502beb6576c009a3c0f6217639f1753): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d7c08906-d9bf-4d0d-adfc-d94b8d7eb950 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:02:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:12.244817860Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 606a36fd31ce6fce900d4ea0b2325d466502beb6576c009a3c0f6217639f1753" id=d7c08906-d9bf-4d0d-adfc-d94b8d7eb950 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:02:12 ip-10-0-136-68 systemd[1]: 
run-utsns-19cb9740\x2d088f\x2d4c65\x2d9e17\x2d9f38ef19a97b.mount: Deactivated successfully. Feb 23 18:02:12 ip-10-0-136-68 systemd[1]: run-ipcns-19cb9740\x2d088f\x2d4c65\x2d9e17\x2d9f38ef19a97b.mount: Deactivated successfully. Feb 23 18:02:12 ip-10-0-136-68 systemd[1]: run-netns-19cb9740\x2d088f\x2d4c65\x2d9e17\x2d9f38ef19a97b.mount: Deactivated successfully. Feb 23 18:02:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:12.259334775Z" level=info msg="runSandbox: deleting pod ID 606a36fd31ce6fce900d4ea0b2325d466502beb6576c009a3c0f6217639f1753 from idIndex" id=d7c08906-d9bf-4d0d-adfc-d94b8d7eb950 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:02:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:12.259521792Z" level=info msg="runSandbox: removing pod sandbox 606a36fd31ce6fce900d4ea0b2325d466502beb6576c009a3c0f6217639f1753" id=d7c08906-d9bf-4d0d-adfc-d94b8d7eb950 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:02:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:12.259622878Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 606a36fd31ce6fce900d4ea0b2325d466502beb6576c009a3c0f6217639f1753" id=d7c08906-d9bf-4d0d-adfc-d94b8d7eb950 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:02:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:12.259669845Z" level=info msg="runSandbox: unmounting shmPath for sandbox 606a36fd31ce6fce900d4ea0b2325d466502beb6576c009a3c0f6217639f1753" id=d7c08906-d9bf-4d0d-adfc-d94b8d7eb950 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:02:12 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-606a36fd31ce6fce900d4ea0b2325d466502beb6576c009a3c0f6217639f1753-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:02:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:12.265307624Z" level=info msg="runSandbox: removing pod sandbox from storage: 606a36fd31ce6fce900d4ea0b2325d466502beb6576c009a3c0f6217639f1753" id=d7c08906-d9bf-4d0d-adfc-d94b8d7eb950 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:02:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:12.266752549Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=d7c08906-d9bf-4d0d-adfc-d94b8d7eb950 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:02:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:12.266781341Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=d7c08906-d9bf-4d0d-adfc-d94b8d7eb950 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:02:12 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:02:12.266952 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(606a36fd31ce6fce900d4ea0b2325d466502beb6576c009a3c0f6217639f1753): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition"
Feb 23 18:02:12 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:02:12.267001 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(606a36fd31ce6fce900d4ea0b2325d466502beb6576c009a3c0f6217639f1753): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk"
Feb 23 18:02:12 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:02:12.267026 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(606a36fd31ce6fce900d4ea0b2325d466502beb6576c009a3c0f6217639f1753): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk"
Feb 23 18:02:12 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:02:12.267084 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(606a36fd31ce6fce900d4ea0b2325d466502beb6576c009a3c0f6217639f1753): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687
Feb 23 18:02:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:02:21.217281 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-657v4"
Feb 23 18:02:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:02:21.217281 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 18:02:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:21.217731288Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=fbef9cca-63c5-4314-a1e5-88ed16497c43 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:02:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:21.217801486Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 18:02:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:21.217740704Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=127c538d-a878-40ae-86d2-c86b89254913 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:02:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:21.217878895Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 18:02:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:21.225352306Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:f154e0242c622c41faa4028af641cc3d26304c501bf7f50836a95be9bb202494 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/5644b1e5-d0f8-4c35-89f2-c9e0fe7afc6b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:02:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:21.225388303Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:02:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:21.225828605Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:0e1d22b73c757b6981fdc1fa0e0f6308a70d229d9eba2bfb327ec589b694d01d UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/25b7a8e2-4148-4c3f-894e-c6b633b75fef Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:02:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:21.225852967Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:02:23 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:02:23.216390 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk"
Feb 23 18:02:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:23.216809651Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=a7046fb2-70eb-4989-858e-5e5263291e1e name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:02:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:23.216864824Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 18:02:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:23.221856377Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:6418315f8ef430cb3d3a290941e479c2bc209f64d9cbac72a8da9087bfec5ce0 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/7a89e895-9cfc-4174-9cee-150595489287 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:02:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:23.221895233Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:02:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:24.960259446Z" level=info msg="cleanup sandbox network"
Feb 23 18:02:25 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:02:25.217392 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz"
Feb 23 18:02:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:25.217854281Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=bbb05174-922c-451f-8acc-19443fa80c6f name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:02:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:25.217912629Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 18:02:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:25.223442499Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:ee39043ffdc76b7adfbff34db8f4af3a3f90bf1f73964d48c1938b20f9adfec9 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/37c0ea23-2c43-43e5-92b8-20c8bc5c1fd7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:02:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:25.223477748Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:02:26 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:02:26.216965 2199 scope.go:115] "RemoveContainer" containerID="ba010c6ed03924d8b7cc43a54161474cdb59749968420afd1b8c8f2888c1ba3a"
Feb 23 18:02:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:26.217822584Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=f130910a-3a0b-4e5d-b243-e427c7e6b18e name=/runtime.v1.ImageService/ImageStatus
Feb 23 18:02:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:26.218155020Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=f130910a-3a0b-4e5d-b243-e427c7e6b18e name=/runtime.v1.ImageService/ImageStatus
Feb 23 18:02:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:26.218834845Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=4a6a9506-1515-417f-9172-68b6703df201 name=/runtime.v1.ImageService/ImageStatus
Feb 23 18:02:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:26.219075897Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=4a6a9506-1515-417f-9172-68b6703df201 name=/runtime.v1.ImageService/ImageStatus
Feb 23 18:02:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:26.219797293Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=b5164122-7454-4552-a606-6692edf080a9 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 18:02:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:26.219891144Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 18:02:26 ip-10-0-136-68 systemd[1]: Started crio-conmon-d0ae2fe5258fb4e0801d51dec6a3ffb1f2a8f021fea7db783baef8dc204055a5.scope.
Feb 23 18:02:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:02:26.292281 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:02:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:02:26.292488 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:02:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:02:26.292768 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:02:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:02:26.292804 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 18:02:26 ip-10-0-136-68 systemd[1]: Started libcontainer container d0ae2fe5258fb4e0801d51dec6a3ffb1f2a8f021fea7db783baef8dc204055a5.
Feb 23 18:02:26 ip-10-0-136-68 conmon[5531]: conmon d0ae2fe5258fb4e0801d : Failed to write to cgroup.event_control Operation not supported
Feb 23 18:02:26 ip-10-0-136-68 systemd[1]: crio-conmon-d0ae2fe5258fb4e0801d51dec6a3ffb1f2a8f021fea7db783baef8dc204055a5.scope: Deactivated successfully.
Feb 23 18:02:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:26.347203007Z" level=info msg="Created container d0ae2fe5258fb4e0801d51dec6a3ffb1f2a8f021fea7db783baef8dc204055a5: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=b5164122-7454-4552-a606-6692edf080a9 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 18:02:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:26.350761414Z" level=info msg="Starting container: d0ae2fe5258fb4e0801d51dec6a3ffb1f2a8f021fea7db783baef8dc204055a5" id=37e74c6c-f2f1-4616-ab97-d612afc77a76 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 18:02:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:26.358286459Z" level=info msg="Started container" PID=5543 containerID=d0ae2fe5258fb4e0801d51dec6a3ffb1f2a8f021fea7db783baef8dc204055a5 description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver id=37e74c6c-f2f1-4616-ab97-d612afc77a76 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4ac357af36e7dd4792f93249425e93fd84aed45840e9fbb3d0ae6c0af964798
Feb 23 18:02:26 ip-10-0-136-68 systemd[1]: crio-d0ae2fe5258fb4e0801d51dec6a3ffb1f2a8f021fea7db783baef8dc204055a5.scope: Deactivated successfully.
Feb 23 18:02:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:30.608211494Z" level=warning msg="Failed to find container exit file for ba010c6ed03924d8b7cc43a54161474cdb59749968420afd1b8c8f2888c1ba3a: timed out waiting for the condition" id=fe2f6208-1938-4cd4-8251-6945d2adc1a0 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 18:02:30 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:02:30.609325 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerStarted Data:d0ae2fe5258fb4e0801d51dec6a3ffb1f2a8f021fea7db783baef8dc204055a5}
Feb 23 18:02:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:41.138899194Z" level=info msg="NetworkStart: stopping network for sandbox 6ea140c3d0f18ae87a164abad0b6a6c8c5332fc361e2ad1c280f2cc5d649f702" id=7e9ab06d-87f7-4852-b4ed-d4164fedd39a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:02:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:41.139018361Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:aa2f6c1cfe2015e661750ad0d84d67c8d8f79e105601e0a761db6a83ba33b3f1 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS: Networks:[{Name:multus-cni-network Ifname:eth0}] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:02:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:02:41.139197971Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:02:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:02:44.872882 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 18:02:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:02:44.872946 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 18:02:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:02:54.872102 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 18:02:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:02:54.872166 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 18:02:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:02:56.292817 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:02:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:02:56.293134 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:02:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:02:56.293390 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:02:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:02:56.293420 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 18:03:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:03:04.872530 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 18:03:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:03:04.872605 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 18:03:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:06.240664157Z" level=info msg="NetworkStart: stopping network for sandbox f154e0242c622c41faa4028af641cc3d26304c501bf7f50836a95be9bb202494" id=fbef9cca-63c5-4314-a1e5-88ed16497c43 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:03:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:06.240781165Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:f154e0242c622c41faa4028af641cc3d26304c501bf7f50836a95be9bb202494 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/5644b1e5-d0f8-4c35-89f2-c9e0fe7afc6b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:03:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:06.240811251Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 18:03:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:06.240819846Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 18:03:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:06.240826713Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:03:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:06.241046884Z" level=info msg="NetworkStart: stopping network for sandbox 0e1d22b73c757b6981fdc1fa0e0f6308a70d229d9eba2bfb327ec589b694d01d" id=127c538d-a878-40ae-86d2-c86b89254913 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:03:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:06.241146734Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:0e1d22b73c757b6981fdc1fa0e0f6308a70d229d9eba2bfb327ec589b694d01d UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/25b7a8e2-4148-4c3f-894e-c6b633b75fef Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:03:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:06.241183031Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 18:03:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:06.241193646Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 18:03:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:06.241203253Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:03:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:08.233131703Z" level=info msg="NetworkStart: stopping network for sandbox 6418315f8ef430cb3d3a290941e479c2bc209f64d9cbac72a8da9087bfec5ce0" id=a7046fb2-70eb-4989-858e-5e5263291e1e name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:03:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:08.233270381Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:6418315f8ef430cb3d3a290941e479c2bc209f64d9cbac72a8da9087bfec5ce0 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/7a89e895-9cfc-4174-9cee-150595489287 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:03:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:08.233304106Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 18:03:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:08.233314935Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 18:03:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:08.233321291Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:03:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:10.237123423Z" level=info msg="NetworkStart: stopping network for sandbox ee39043ffdc76b7adfbff34db8f4af3a3f90bf1f73964d48c1938b20f9adfec9" id=bbb05174-922c-451f-8acc-19443fa80c6f name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:03:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:10.237276232Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:ee39043ffdc76b7adfbff34db8f4af3a3f90bf1f73964d48c1938b20f9adfec9 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/37c0ea23-2c43-43e5-92b8-20c8bc5c1fd7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:03:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:10.237317281Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 18:03:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:10.237332098Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 18:03:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:10.237344592Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:03:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:03:14.872792 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 18:03:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:03:14.872864 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 18:03:24 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:03:24.216998 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:03:24 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:03:24.217618 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:03:24 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:03:24.218079 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:03:24 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:03:24.218239 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 18:03:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:03:24.872931 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 18:03:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:03:24.872988 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 18:03:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:03:24.873021 2199 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7"
Feb 23 18:03:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:03:24.873574 2199 kuberuntime_manager.go:659] "Message for Container of pod" containerName="csi-driver" containerStatusID={Type:cri-o ID:d0ae2fe5258fb4e0801d51dec6a3ffb1f2a8f021fea7db783baef8dc204055a5} pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" containerMessage="Container csi-driver failed liveness probe, will be restarted"
Feb 23 18:03:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:03:24.873745 2199 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" containerID="cri-o://d0ae2fe5258fb4e0801d51dec6a3ffb1f2a8f021fea7db783baef8dc204055a5" gracePeriod=30
Feb 23 18:03:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:24.873978018Z" level=info msg="Stopping container: d0ae2fe5258fb4e0801d51dec6a3ffb1f2a8f021fea7db783baef8dc204055a5 (timeout: 30s)" id=8f0d53f8-fdd2-4576-89d1-69c30a425508 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 18:03:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:24.961901282Z" level=error msg="Failed to cleanup (probably retrying): failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(aa2f6c1cfe2015e661750ad0d84d67c8d8f79e105601e0a761db6a83ba33b3f1): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): netplugin failed with no error message: signal: killed"
Feb 23 18:03:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:24.961957342Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:6ea140c3d0f18ae87a164abad0b6a6c8c5332fc361e2ad1c280f2cc5d649f702 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/e9a1f51d-c4e9-456e-b04b-3b8525fa1705 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:03:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:24.962000248Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 18:03:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:24.962010956Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 18:03:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:24.962020458Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:03:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:03:26.292468 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:03:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:03:26.292720 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:03:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:03:26.293000 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:03:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:03:26.293033 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 18:03:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:28.635014046Z" level=warning msg="Failed to find container exit file for d0ae2fe5258fb4e0801d51dec6a3ffb1f2a8f021fea7db783baef8dc204055a5: timed out waiting for the condition" id=8f0d53f8-fdd2-4576-89d1-69c30a425508 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 18:03:28 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-2ed11e43a29a549655e8b20dd8df292d4f2dcb0d4a118e73f3684c43666f99b1-merged.mount: Deactivated successfully.
Feb 23 18:03:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:32.419006103Z" level=warning msg="Failed to find container exit file for d0ae2fe5258fb4e0801d51dec6a3ffb1f2a8f021fea7db783baef8dc204055a5: timed out waiting for the condition" id=8f0d53f8-fdd2-4576-89d1-69c30a425508 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 18:03:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:32.420691000Z" level=info msg="Stopped container d0ae2fe5258fb4e0801d51dec6a3ffb1f2a8f021fea7db783baef8dc204055a5: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=8f0d53f8-fdd2-4576-89d1-69c30a425508 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 18:03:32 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:03:32.421189 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 18:03:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:32.439660649Z" level=warning msg="Failed to find container exit file for d0ae2fe5258fb4e0801d51dec6a3ffb1f2a8f021fea7db783baef8dc204055a5: timed out waiting for the condition" id=6098797b-5e89-4e4b-b4bb-d2cc1d4639a8 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 18:03:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:36.201132952Z" level=warning msg="Failed to find container exit file for ba010c6ed03924d8b7cc43a54161474cdb59749968420afd1b8c8f2888c1ba3a: timed out waiting for the condition" id=1e37eabf-f096-4ed4-b8cf-5bac32f7550e name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 18:03:36 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:03:36.202193 2199 generic.go:332] "Generic (PLEG): container finished" podID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerID="d0ae2fe5258fb4e0801d51dec6a3ffb1f2a8f021fea7db783baef8dc204055a5" exitCode=-1
Feb 23 18:03:36 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:03:36.202231 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerDied Data:d0ae2fe5258fb4e0801d51dec6a3ffb1f2a8f021fea7db783baef8dc204055a5}
Feb 23 18:03:36 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:03:36.202282 2199 scope.go:115] "RemoveContainer" containerID="ba010c6ed03924d8b7cc43a54161474cdb59749968420afd1b8c8f2888c1ba3a"
Feb 23 18:03:37 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:03:37.204376 2199 scope.go:115] "RemoveContainer" containerID="d0ae2fe5258fb4e0801d51dec6a3ffb1f2a8f021fea7db783baef8dc204055a5"
Feb 23 18:03:37 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:03:37.204799 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 18:03:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:39.951544955Z" level=warning msg="Failed to find container exit file for ba010c6ed03924d8b7cc43a54161474cdb59749968420afd1b8c8f2888c1ba3a: timed out waiting for the condition" id=1a2a0b15-fd0e-48ef-a68c-e8ea4442e0da name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 18:03:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:41.140341782Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(6ea140c3d0f18ae87a164abad0b6a6c8c5332fc361e2ad1c280f2cc5d649f702): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): netplugin failed with no error message: signal: killed" id=7e9ab06d-87f7-4852-b4ed-d4164fedd39a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:03:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:41.140405013Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 6ea140c3d0f18ae87a164abad0b6a6c8c5332fc361e2ad1c280f2cc5d649f702" id=7e9ab06d-87f7-4852-b4ed-d4164fedd39a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:03:41 ip-10-0-136-68 systemd[1]: run-utsns-e9a1f51d\x2dc4e9\x2d456e\x2db04b\x2d3b8525fa1705.mount: Deactivated successfully.
Feb 23 18:03:41 ip-10-0-136-68 systemd[1]: run-ipcns-e9a1f51d\x2dc4e9\x2d456e\x2db04b\x2d3b8525fa1705.mount: Deactivated successfully.
Feb 23 18:03:41 ip-10-0-136-68 systemd[1]: run-netns-e9a1f51d\x2dc4e9\x2d456e\x2db04b\x2d3b8525fa1705.mount: Deactivated successfully.
Feb 23 18:03:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:41.162315022Z" level=info msg="runSandbox: deleting pod ID 6ea140c3d0f18ae87a164abad0b6a6c8c5332fc361e2ad1c280f2cc5d649f702 from idIndex" id=7e9ab06d-87f7-4852-b4ed-d4164fedd39a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:03:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:41.162353794Z" level=info msg="runSandbox: removing pod sandbox 6ea140c3d0f18ae87a164abad0b6a6c8c5332fc361e2ad1c280f2cc5d649f702" id=7e9ab06d-87f7-4852-b4ed-d4164fedd39a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:03:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:41.162391919Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 6ea140c3d0f18ae87a164abad0b6a6c8c5332fc361e2ad1c280f2cc5d649f702" id=7e9ab06d-87f7-4852-b4ed-d4164fedd39a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:03:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:41.162412247Z" level=info msg="runSandbox: unmounting shmPath for sandbox 6ea140c3d0f18ae87a164abad0b6a6c8c5332fc361e2ad1c280f2cc5d649f702" id=7e9ab06d-87f7-4852-b4ed-d4164fedd39a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:03:41 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-6ea140c3d0f18ae87a164abad0b6a6c8c5332fc361e2ad1c280f2cc5d649f702-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:03:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:41.170317210Z" level=info msg="runSandbox: removing pod sandbox from storage: 6ea140c3d0f18ae87a164abad0b6a6c8c5332fc361e2ad1c280f2cc5d649f702" id=7e9ab06d-87f7-4852-b4ed-d4164fedd39a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:03:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:41.171994079Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=7e9ab06d-87f7-4852-b4ed-d4164fedd39a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:03:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:41.172025816Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=7e9ab06d-87f7-4852-b4ed-d4164fedd39a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:03:41 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:03:41.172279 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(6ea140c3d0f18ae87a164abad0b6a6c8c5332fc361e2ad1c280f2cc5d649f702): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:03:41 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:03:41.172350 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(6ea140c3d0f18ae87a164abad0b6a6c8c5332fc361e2ad1c280f2cc5d649f702): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:03:41 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:03:41.172391 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(6ea140c3d0f18ae87a164abad0b6a6c8c5332fc361e2ad1c280f2cc5d649f702): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:03:41 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:03:41.172479 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(6ea140c3d0f18ae87a164abad0b6a6c8c5332fc361e2ad1c280f2cc5d649f702): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 18:03:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:43.714972430Z" level=warning msg="Failed to find container exit file for ba010c6ed03924d8b7cc43a54161474cdb59749968420afd1b8c8f2888c1ba3a: timed out waiting for the condition" id=d2d2bbec-eb92-4f14-b18d-643020a16b4b name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:03:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:43.715529323Z" level=info msg="Removing container: ba010c6ed03924d8b7cc43a54161474cdb59749968420afd1b8c8f2888c1ba3a" id=b03998ed-1e34-4615-a13c-fd3834350573 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 18:03:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:47.464922648Z" level=warning msg="Failed to find container exit file for ba010c6ed03924d8b7cc43a54161474cdb59749968420afd1b8c8f2888c1ba3a: timed out waiting for the condition" id=b03998ed-1e34-4615-a13c-fd3834350573 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 18:03:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:47.476833393Z" level=info msg="Removed container ba010c6ed03924d8b7cc43a54161474cdb59749968420afd1b8c8f2888c1ba3a: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=b03998ed-1e34-4615-a13c-fd3834350573 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 18:03:49 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:03:49.216775 2199 scope.go:115] "RemoveContainer" containerID="d0ae2fe5258fb4e0801d51dec6a3ffb1f2a8f021fea7db783baef8dc204055a5" Feb 23 18:03:49 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:03:49.217156 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" 
pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:03:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:51.252514568Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(f154e0242c622c41faa4028af641cc3d26304c501bf7f50836a95be9bb202494): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=fbef9cca-63c5-4314-a1e5-88ed16497c43 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:03:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:51.252749569Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f154e0242c622c41faa4028af641cc3d26304c501bf7f50836a95be9bb202494" id=fbef9cca-63c5-4314-a1e5-88ed16497c43 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:03:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:51.253330784Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(0e1d22b73c757b6981fdc1fa0e0f6308a70d229d9eba2bfb327ec589b694d01d): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=127c538d-a878-40ae-86d2-c86b89254913 
name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:03:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:51.253370484Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 0e1d22b73c757b6981fdc1fa0e0f6308a70d229d9eba2bfb327ec589b694d01d" id=127c538d-a878-40ae-86d2-c86b89254913 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:03:51 ip-10-0-136-68 systemd[1]: run-utsns-25b7a8e2\x2d4148\x2d4c3f\x2d894e\x2dc6b633b75fef.mount: Deactivated successfully. Feb 23 18:03:51 ip-10-0-136-68 systemd[1]: run-utsns-5644b1e5\x2dd0f8\x2d4c35\x2d89f2\x2dc9e0fe7afc6b.mount: Deactivated successfully. Feb 23 18:03:51 ip-10-0-136-68 systemd[1]: run-ipcns-25b7a8e2\x2d4148\x2d4c3f\x2d894e\x2dc6b633b75fef.mount: Deactivated successfully. Feb 23 18:03:51 ip-10-0-136-68 systemd[1]: run-ipcns-5644b1e5\x2dd0f8\x2d4c35\x2d89f2\x2dc9e0fe7afc6b.mount: Deactivated successfully. Feb 23 18:03:51 ip-10-0-136-68 systemd[1]: run-netns-25b7a8e2\x2d4148\x2d4c3f\x2d894e\x2dc6b633b75fef.mount: Deactivated successfully. Feb 23 18:03:51 ip-10-0-136-68 systemd[1]: run-netns-5644b1e5\x2dd0f8\x2d4c35\x2d89f2\x2dc9e0fe7afc6b.mount: Deactivated successfully. 
Feb 23 18:03:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:51.277314548Z" level=info msg="runSandbox: deleting pod ID 0e1d22b73c757b6981fdc1fa0e0f6308a70d229d9eba2bfb327ec589b694d01d from idIndex" id=127c538d-a878-40ae-86d2-c86b89254913 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:03:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:51.277350154Z" level=info msg="runSandbox: removing pod sandbox 0e1d22b73c757b6981fdc1fa0e0f6308a70d229d9eba2bfb327ec589b694d01d" id=127c538d-a878-40ae-86d2-c86b89254913 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:03:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:51.277389878Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 0e1d22b73c757b6981fdc1fa0e0f6308a70d229d9eba2bfb327ec589b694d01d" id=127c538d-a878-40ae-86d2-c86b89254913 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:03:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:51.277415468Z" level=info msg="runSandbox: unmounting shmPath for sandbox 0e1d22b73c757b6981fdc1fa0e0f6308a70d229d9eba2bfb327ec589b694d01d" id=127c538d-a878-40ae-86d2-c86b89254913 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:03:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:51.280322314Z" level=info msg="runSandbox: deleting pod ID f154e0242c622c41faa4028af641cc3d26304c501bf7f50836a95be9bb202494 from idIndex" id=fbef9cca-63c5-4314-a1e5-88ed16497c43 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:03:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:51.280353369Z" level=info msg="runSandbox: removing pod sandbox f154e0242c622c41faa4028af641cc3d26304c501bf7f50836a95be9bb202494" id=fbef9cca-63c5-4314-a1e5-88ed16497c43 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:03:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:51.280379383Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f154e0242c622c41faa4028af641cc3d26304c501bf7f50836a95be9bb202494" 
id=fbef9cca-63c5-4314-a1e5-88ed16497c43 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:03:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:51.280395536Z" level=info msg="runSandbox: unmounting shmPath for sandbox f154e0242c622c41faa4028af641cc3d26304c501bf7f50836a95be9bb202494" id=fbef9cca-63c5-4314-a1e5-88ed16497c43 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:03:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:51.283297515Z" level=info msg="runSandbox: removing pod sandbox from storage: 0e1d22b73c757b6981fdc1fa0e0f6308a70d229d9eba2bfb327ec589b694d01d" id=127c538d-a878-40ae-86d2-c86b89254913 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:03:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:51.284863762Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=127c538d-a878-40ae-86d2-c86b89254913 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:03:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:51.284894578Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=127c538d-a878-40ae-86d2-c86b89254913 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:03:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:03:51.285103 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(0e1d22b73c757b6981fdc1fa0e0f6308a70d229d9eba2bfb327ec589b694d01d): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Feb 23 18:03:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:03:51.285155 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(0e1d22b73c757b6981fdc1fa0e0f6308a70d229d9eba2bfb327ec589b694d01d): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:03:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:03:51.285183 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(0e1d22b73c757b6981fdc1fa0e0f6308a70d229d9eba2bfb327ec589b694d01d): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:03:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:03:51.285238 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(0e1d22b73c757b6981fdc1fa0e0f6308a70d229d9eba2bfb327ec589b694d01d): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 18:03:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:51.285328196Z" level=info msg="runSandbox: removing pod sandbox from storage: f154e0242c622c41faa4028af641cc3d26304c501bf7f50836a95be9bb202494" id=fbef9cca-63c5-4314-a1e5-88ed16497c43 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:03:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:51.286827463Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=fbef9cca-63c5-4314-a1e5-88ed16497c43 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:03:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:51.286856792Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=fbef9cca-63c5-4314-a1e5-88ed16497c43 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:03:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:03:51.287022 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(f154e0242c622c41faa4028af641cc3d26304c501bf7f50836a95be9bb202494): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:03:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:03:51.287064 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(f154e0242c622c41faa4028af641cc3d26304c501bf7f50836a95be9bb202494): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:03:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:03:51.287087 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(f154e0242c622c41faa4028af641cc3d26304c501bf7f50836a95be9bb202494): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:03:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:03:51.287142 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(f154e0242c622c41faa4028af641cc3d26304c501bf7f50836a95be9bb202494): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 18:03:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:51.980942524Z" level=warning msg="Failed to find container exit file for d0ae2fe5258fb4e0801d51dec6a3ffb1f2a8f021fea7db783baef8dc204055a5: timed out waiting for the condition" id=3d75b196-c04e-4ef7-a84e-4985fedd2da8 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:03:52 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-0e1d22b73c757b6981fdc1fa0e0f6308a70d229d9eba2bfb327ec589b694d01d-userdata-shm.mount: Deactivated successfully. Feb 23 18:03:52 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-f154e0242c622c41faa4028af641cc3d26304c501bf7f50836a95be9bb202494-userdata-shm.mount: Deactivated successfully. Feb 23 18:03:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:53.243567000Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(6418315f8ef430cb3d3a290941e479c2bc209f64d9cbac72a8da9087bfec5ce0): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a7046fb2-70eb-4989-858e-5e5263291e1e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:03:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:53.243624429Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 6418315f8ef430cb3d3a290941e479c2bc209f64d9cbac72a8da9087bfec5ce0" id=a7046fb2-70eb-4989-858e-5e5263291e1e 
name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:03:53 ip-10-0-136-68 systemd[1]: run-utsns-7a89e895\x2d9cfc\x2d4174\x2d9cee\x2d150595489287.mount: Deactivated successfully. Feb 23 18:03:53 ip-10-0-136-68 systemd[1]: run-ipcns-7a89e895\x2d9cfc\x2d4174\x2d9cee\x2d150595489287.mount: Deactivated successfully. Feb 23 18:03:53 ip-10-0-136-68 systemd[1]: run-netns-7a89e895\x2d9cfc\x2d4174\x2d9cee\x2d150595489287.mount: Deactivated successfully. Feb 23 18:03:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:53.268319843Z" level=info msg="runSandbox: deleting pod ID 6418315f8ef430cb3d3a290941e479c2bc209f64d9cbac72a8da9087bfec5ce0 from idIndex" id=a7046fb2-70eb-4989-858e-5e5263291e1e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:03:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:53.268358374Z" level=info msg="runSandbox: removing pod sandbox 6418315f8ef430cb3d3a290941e479c2bc209f64d9cbac72a8da9087bfec5ce0" id=a7046fb2-70eb-4989-858e-5e5263291e1e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:03:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:53.268386293Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 6418315f8ef430cb3d3a290941e479c2bc209f64d9cbac72a8da9087bfec5ce0" id=a7046fb2-70eb-4989-858e-5e5263291e1e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:03:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:53.268400576Z" level=info msg="runSandbox: unmounting shmPath for sandbox 6418315f8ef430cb3d3a290941e479c2bc209f64d9cbac72a8da9087bfec5ce0" id=a7046fb2-70eb-4989-858e-5e5263291e1e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:03:53 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-6418315f8ef430cb3d3a290941e479c2bc209f64d9cbac72a8da9087bfec5ce0-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:03:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:53.284325462Z" level=info msg="runSandbox: removing pod sandbox from storage: 6418315f8ef430cb3d3a290941e479c2bc209f64d9cbac72a8da9087bfec5ce0" id=a7046fb2-70eb-4989-858e-5e5263291e1e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:03:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:53.285958872Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=a7046fb2-70eb-4989-858e-5e5263291e1e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:03:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:53.285988646Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=a7046fb2-70eb-4989-858e-5e5263291e1e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:03:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:03:53.286215 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(6418315f8ef430cb3d3a290941e479c2bc209f64d9cbac72a8da9087bfec5ce0): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:03:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:03:53.286310 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(6418315f8ef430cb3d3a290941e479c2bc209f64d9cbac72a8da9087bfec5ce0): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:03:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:03:53.286338 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(6418315f8ef430cb3d3a290941e479c2bc209f64d9cbac72a8da9087bfec5ce0): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:03:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:03:53.286396 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(6418315f8ef430cb3d3a290941e479c2bc209f64d9cbac72a8da9087bfec5ce0): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 18:03:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:55.247234693Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(ee39043ffdc76b7adfbff34db8f4af3a3f90bf1f73964d48c1938b20f9adfec9): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=bbb05174-922c-451f-8acc-19443fa80c6f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:03:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:55.247322585Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox ee39043ffdc76b7adfbff34db8f4af3a3f90bf1f73964d48c1938b20f9adfec9" id=bbb05174-922c-451f-8acc-19443fa80c6f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:03:55 ip-10-0-136-68 systemd[1]: run-utsns-37c0ea23\x2d2c43\x2d43e5\x2d92b8\x2d20c8bc5c1fd7.mount: Deactivated successfully. Feb 23 18:03:55 ip-10-0-136-68 systemd[1]: run-ipcns-37c0ea23\x2d2c43\x2d43e5\x2d92b8\x2d20c8bc5c1fd7.mount: Deactivated successfully. Feb 23 18:03:55 ip-10-0-136-68 systemd[1]: run-netns-37c0ea23\x2d2c43\x2d43e5\x2d92b8\x2d20c8bc5c1fd7.mount: Deactivated successfully. 
Feb 23 18:03:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:55.275337630Z" level=info msg="runSandbox: deleting pod ID ee39043ffdc76b7adfbff34db8f4af3a3f90bf1f73964d48c1938b20f9adfec9 from idIndex" id=bbb05174-922c-451f-8acc-19443fa80c6f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:03:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:55.275374629Z" level=info msg="runSandbox: removing pod sandbox ee39043ffdc76b7adfbff34db8f4af3a3f90bf1f73964d48c1938b20f9adfec9" id=bbb05174-922c-451f-8acc-19443fa80c6f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:03:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:55.275410105Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox ee39043ffdc76b7adfbff34db8f4af3a3f90bf1f73964d48c1938b20f9adfec9" id=bbb05174-922c-451f-8acc-19443fa80c6f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:03:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:55.275438403Z" level=info msg="runSandbox: unmounting shmPath for sandbox ee39043ffdc76b7adfbff34db8f4af3a3f90bf1f73964d48c1938b20f9adfec9" id=bbb05174-922c-451f-8acc-19443fa80c6f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:03:55 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-ee39043ffdc76b7adfbff34db8f4af3a3f90bf1f73964d48c1938b20f9adfec9-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:03:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:55.282312133Z" level=info msg="runSandbox: removing pod sandbox from storage: ee39043ffdc76b7adfbff34db8f4af3a3f90bf1f73964d48c1938b20f9adfec9" id=bbb05174-922c-451f-8acc-19443fa80c6f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:03:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:55.283850900Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=bbb05174-922c-451f-8acc-19443fa80c6f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:03:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:55.283882525Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=bbb05174-922c-451f-8acc-19443fa80c6f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:03:55 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:03:55.284090 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(ee39043ffdc76b7adfbff34db8f4af3a3f90bf1f73964d48c1938b20f9adfec9): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:03:55 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:03:55.284142 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(ee39043ffdc76b7adfbff34db8f4af3a3f90bf1f73964d48c1938b20f9adfec9): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:03:55 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:03:55.284171 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(ee39043ffdc76b7adfbff34db8f4af3a3f90bf1f73964d48c1938b20f9adfec9): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:03:55 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:03:55.284226 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(ee39043ffdc76b7adfbff34db8f4af3a3f90bf1f73964d48c1938b20f9adfec9): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 18:03:56 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:03:56.216691 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:03:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:56.217104588Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=305b042c-5536-4aa9-8bc8-f91d30a4add9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:03:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:56.217174801Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:03:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:56.222723560Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:33f10ea8e8159b85cdc0803e45368140c8b30aa94bfd0a80832d5aad0941ab42 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/50cd8d01-f216-4b74-86a1-639d6edceed1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:03:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:03:56.222752401Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:03:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:03:56.292282 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:03:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:03:56.292542 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:03:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:03:56.292808 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:03:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:03:56.292840 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:04:01 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:04:01.216423 2199 scope.go:115] "RemoveContainer" containerID="d0ae2fe5258fb4e0801d51dec6a3ffb1f2a8f021fea7db783baef8dc204055a5" Feb 23 18:04:01 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:04:01.216981 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" 
podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:04:02 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:04:02.216501 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:04:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:04:02.216940804Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=017c5267-b6b7-4bb9-985a-e40869c84766 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:04:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:04:02.217016286Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:04:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:04:02.222693786Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:8c7ac92d78d99429910644137d9ec5a979d9a2b6b6283082c87c56fa95dc6672 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/4ecc25b4-4b8c-4e89-abb8-c55d024d48b3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:04:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:04:02.222730409Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:04:06 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:04:06.216744 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:04:06 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:04:06.216836 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 18:04:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:04:06.217164995Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=2f646c94-62ff-4f34-ad77-dabf21651499 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:04:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:04:06.217229698Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:04:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:04:06.217169178Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=02b7250d-8c9d-4dc5-b428-bbde6c89d3df name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:04:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:04:06.217362210Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:04:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:04:06.224626677Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:78cde0b8cdd07a7ba91d84af0a6655fe62b6517c932c22824fc267b51bb41391 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/7d17d4f7-7386-4cb5-ae3b-929cef5c894a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:04:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:04:06.224653837Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:04:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:04:06.225124435Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:cc980d05cd4751a2c572718fa1bb24807831eda4a9cd084da83b2ae03f50da53 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/c1e505f6-e518-4517-94b7-2605c8637644 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] 
Aliases:map[]}" Feb 23 18:04:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:04:06.225152391Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:04:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:04:08.210767870Z" level=info msg="cleanup sandbox network" Feb 23 18:04:08 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:04:08.216956 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:04:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:04:08.217343325Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=4e67988a-5ba4-4f40-a070-60b58551c5af name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:04:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:04:08.217391399Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:04:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:04:08.222853987Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:302db01b4418cabc86b8d6e77a07613a751704086d9862ec9889b1b3ff1e489e UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/898972c0-3d0c-4fd4-b67e-0ea5fbb0aec7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:04:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:04:08.222888460Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:04:13 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:04:13.217047 2199 scope.go:115] "RemoveContainer" containerID="d0ae2fe5258fb4e0801d51dec6a3ffb1f2a8f021fea7db783baef8dc204055a5" Feb 23 18:04:13 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:04:13.217452 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:04:25 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:04:25.216411 2199 scope.go:115] "RemoveContainer" containerID="d0ae2fe5258fb4e0801d51dec6a3ffb1f2a8f021fea7db783baef8dc204055a5" Feb 23 18:04:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:04:25.216959 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:04:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:04:26.291961 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:04:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:04:26.292314 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" 
cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:04:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:04:26.292578 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:04:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:04:26.292606 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:04:40 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:04:40.217027 2199 scope.go:115] "RemoveContainer" containerID="d0ae2fe5258fb4e0801d51dec6a3ffb1f2a8f021fea7db783baef8dc204055a5" Feb 23 18:04:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:04:40.217470 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:04:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:04:41.235047541Z" level=info msg="NetworkStart: stopping network for sandbox 33f10ea8e8159b85cdc0803e45368140c8b30aa94bfd0a80832d5aad0941ab42" 
id=305b042c-5536-4aa9-8bc8-f91d30a4add9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:04:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:04:41.235111781Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:aa2f6c1cfe2015e661750ad0d84d67c8d8f79e105601e0a761db6a83ba33b3f1 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS: Networks:[{Name:multus-cni-network Ifname:eth0}] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:04:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:04:41.235345535Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:04:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:04:47.236337618Z" level=info msg="NetworkStart: stopping network for sandbox 8c7ac92d78d99429910644137d9ec5a979d9a2b6b6283082c87c56fa95dc6672" id=017c5267-b6b7-4bb9-985a-e40869c84766 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:04:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:04:47.236494412Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:8c7ac92d78d99429910644137d9ec5a979d9a2b6b6283082c87c56fa95dc6672 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/4ecc25b4-4b8c-4e89-abb8-c55d024d48b3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:04:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:04:47.236535698Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:04:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:04:47.236545934Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:04:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:04:47.236555999Z" level=info msg="Deleting pod 
openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:04:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:04:51.238381872Z" level=info msg="NetworkStart: stopping network for sandbox cc980d05cd4751a2c572718fa1bb24807831eda4a9cd084da83b2ae03f50da53" id=02b7250d-8c9d-4dc5-b428-bbde6c89d3df name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:04:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:04:51.238528937Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:cc980d05cd4751a2c572718fa1bb24807831eda4a9cd084da83b2ae03f50da53 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/c1e505f6-e518-4517-94b7-2605c8637644 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:04:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:04:51.238565289Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:04:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:04:51.238575842Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:04:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:04:51.238584671Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:04:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:04:51.239384128Z" level=info msg="NetworkStart: stopping network for sandbox 78cde0b8cdd07a7ba91d84af0a6655fe62b6517c932c22824fc267b51bb41391" id=2f646c94-62ff-4f34-ad77-dabf21651499 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:04:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:04:51.239487156Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:78cde0b8cdd07a7ba91d84af0a6655fe62b6517c932c22824fc267b51bb41391 UID:757b7544-c265-49ce-a1f0-22cca4bf919f 
NetNS:/var/run/netns/7d17d4f7-7386-4cb5-ae3b-929cef5c894a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:04:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:04:51.239527170Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 18:04:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:04:51.239540416Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 18:04:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:04:51.239551034Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:04:53 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:04:53.217113 2199 scope.go:115] "RemoveContainer" containerID="d0ae2fe5258fb4e0801d51dec6a3ffb1f2a8f021fea7db783baef8dc204055a5"
Feb 23 18:04:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:04:53.217327 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:04:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:04:53.218073 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:04:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:04:53.218221 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 18:04:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:04:53.218382 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:04:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:04:53.218439 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 18:04:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:04:53.234111547Z" level=info msg="NetworkStart: stopping network for sandbox 302db01b4418cabc86b8d6e77a07613a751704086d9862ec9889b1b3ff1e489e" id=4e67988a-5ba4-4f40-a070-60b58551c5af name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:04:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:04:53.234231385Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:302db01b4418cabc86b8d6e77a07613a751704086d9862ec9889b1b3ff1e489e UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/898972c0-3d0c-4fd4-b67e-0ea5fbb0aec7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:04:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:04:53.234294951Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 18:04:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:04:53.234307714Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 18:04:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:04:53.234317690Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:04:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:04:56.292383 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:04:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:04:56.292641 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:04:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:04:56.292828 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:04:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:04:56.292850 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 18:05:05 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:05:05.216916 2199 scope.go:115] "RemoveContainer" containerID="d0ae2fe5258fb4e0801d51dec6a3ffb1f2a8f021fea7db783baef8dc204055a5"
Feb 23 18:05:05 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:05:05.217486 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 18:05:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:08.212506914Z" level=error msg="Failed to cleanup (probably retrying): failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(aa2f6c1cfe2015e661750ad0d84d67c8d8f79e105601e0a761db6a83ba33b3f1): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): netplugin failed with no error message: signal: killed"
Feb 23 18:05:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:08.212572739Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:33f10ea8e8159b85cdc0803e45368140c8b30aa94bfd0a80832d5aad0941ab42 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/50cd8d01-f216-4b74-86a1-639d6edceed1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:05:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:08.212634484Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 18:05:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:08.212644789Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 18:05:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:08.212655697Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:05:11 ip-10-0-136-68 systemd[1]: Starting Cleanup of Temporary Directories...
Feb 23 18:05:11 ip-10-0-136-68 systemd-tmpfiles[5892]: /usr/lib/tmpfiles.d/tmp.conf:12: Duplicate line for path "/var/tmp", ignoring.
Feb 23 18:05:11 ip-10-0-136-68 systemd-tmpfiles[5892]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Feb 23 18:05:11 ip-10-0-136-68 systemd-tmpfiles[5892]: /usr/lib/tmpfiles.d/var.conf:19: Duplicate line for path "/var/cache", ignoring.
Feb 23 18:05:11 ip-10-0-136-68 systemd-tmpfiles[5892]: /usr/lib/tmpfiles.d/var.conf:21: Duplicate line for path "/var/lib", ignoring.
Feb 23 18:05:11 ip-10-0-136-68 systemd-tmpfiles[5892]: /usr/lib/tmpfiles.d/var.conf:23: Duplicate line for path "/var/spool", ignoring.
Feb 23 18:05:11 ip-10-0-136-68 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Feb 23 18:05:11 ip-10-0-136-68 systemd[1]: Finished Cleanup of Temporary Directories.
Feb 23 18:05:11 ip-10-0-136-68 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Feb 23 18:05:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:05:19.216489 2199 scope.go:115] "RemoveContainer" containerID="d0ae2fe5258fb4e0801d51dec6a3ffb1f2a8f021fea7db783baef8dc204055a5"
Feb 23 18:05:19 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:05:19.217012 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 18:05:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:20.175405867Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3" id=161cc9d8-85f2-4bdb-a029-c9bd65247283 name=/runtime.v1.ImageService/ImageStatus
Feb 23 18:05:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:20.175810995Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:51cb7555d0a960fc0daae74a5b5c47c19fd1ae7bd31e2616ebc88f696f511e4d,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3],Size_:351828271,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=161cc9d8-85f2-4bdb-a029-c9bd65247283 name=/runtime.v1.ImageService/ImageStatus
Feb 23 18:05:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:05:26.292572 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:05:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:05:26.292810 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:05:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:05:26.293052 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:05:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:05:26.293092 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 18:05:32 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:05:32.217309 2199 scope.go:115] "RemoveContainer" containerID="d0ae2fe5258fb4e0801d51dec6a3ffb1f2a8f021fea7db783baef8dc204055a5"
Feb 23 18:05:32 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:05:32.217699 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 18:05:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:32.247034947Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(8c7ac92d78d99429910644137d9ec5a979d9a2b6b6283082c87c56fa95dc6672): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=017c5267-b6b7-4bb9-985a-e40869c84766 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:05:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:32.247082443Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 8c7ac92d78d99429910644137d9ec5a979d9a2b6b6283082c87c56fa95dc6672" id=017c5267-b6b7-4bb9-985a-e40869c84766 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:05:32 ip-10-0-136-68 systemd[1]: run-utsns-4ecc25b4\x2d4b8c\x2d4e89\x2dabb8\x2dc55d024d48b3.mount: Deactivated successfully.
Feb 23 18:05:32 ip-10-0-136-68 systemd[1]: run-ipcns-4ecc25b4\x2d4b8c\x2d4e89\x2dabb8\x2dc55d024d48b3.mount: Deactivated successfully.
Feb 23 18:05:32 ip-10-0-136-68 systemd[1]: run-netns-4ecc25b4\x2d4b8c\x2d4e89\x2dabb8\x2dc55d024d48b3.mount: Deactivated successfully.
Feb 23 18:05:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:32.276393403Z" level=info msg="runSandbox: deleting pod ID 8c7ac92d78d99429910644137d9ec5a979d9a2b6b6283082c87c56fa95dc6672 from idIndex" id=017c5267-b6b7-4bb9-985a-e40869c84766 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:05:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:32.276428120Z" level=info msg="runSandbox: removing pod sandbox 8c7ac92d78d99429910644137d9ec5a979d9a2b6b6283082c87c56fa95dc6672" id=017c5267-b6b7-4bb9-985a-e40869c84766 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:05:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:32.276458339Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 8c7ac92d78d99429910644137d9ec5a979d9a2b6b6283082c87c56fa95dc6672" id=017c5267-b6b7-4bb9-985a-e40869c84766 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:05:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:32.276473677Z" level=info msg="runSandbox: unmounting shmPath for sandbox 8c7ac92d78d99429910644137d9ec5a979d9a2b6b6283082c87c56fa95dc6672" id=017c5267-b6b7-4bb9-985a-e40869c84766 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:05:32 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-8c7ac92d78d99429910644137d9ec5a979d9a2b6b6283082c87c56fa95dc6672-userdata-shm.mount: Deactivated successfully.
Feb 23 18:05:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:32.282327978Z" level=info msg="runSandbox: removing pod sandbox from storage: 8c7ac92d78d99429910644137d9ec5a979d9a2b6b6283082c87c56fa95dc6672" id=017c5267-b6b7-4bb9-985a-e40869c84766 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:05:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:32.284136058Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=017c5267-b6b7-4bb9-985a-e40869c84766 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:05:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:32.284163753Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=017c5267-b6b7-4bb9-985a-e40869c84766 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:05:32 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:05:32.284391 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(8c7ac92d78d99429910644137d9ec5a979d9a2b6b6283082c87c56fa95dc6672): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 18:05:32 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:05:32.284553 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(8c7ac92d78d99429910644137d9ec5a979d9a2b6b6283082c87c56fa95dc6672): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 18:05:32 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:05:32.284576 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(8c7ac92d78d99429910644137d9ec5a979d9a2b6b6283082c87c56fa95dc6672): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 18:05:32 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:05:32.284651 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(8c7ac92d78d99429910644137d9ec5a979d9a2b6b6283082c87c56fa95dc6672): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77
Feb 23 18:05:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:36.247971887Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(cc980d05cd4751a2c572718fa1bb24807831eda4a9cd084da83b2ae03f50da53): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=02b7250d-8c9d-4dc5-b428-bbde6c89d3df name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:05:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:36.248032226Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox cc980d05cd4751a2c572718fa1bb24807831eda4a9cd084da83b2ae03f50da53" id=02b7250d-8c9d-4dc5-b428-bbde6c89d3df name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:05:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:36.248291369Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(78cde0b8cdd07a7ba91d84af0a6655fe62b6517c932c22824fc267b51bb41391): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=2f646c94-62ff-4f34-ad77-dabf21651499 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:05:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:36.248331624Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 78cde0b8cdd07a7ba91d84af0a6655fe62b6517c932c22824fc267b51bb41391" id=2f646c94-62ff-4f34-ad77-dabf21651499 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:05:36 ip-10-0-136-68 systemd[1]: run-utsns-c1e505f6\x2de518\x2d4517\x2d94b7\x2d2605c8637644.mount: Deactivated successfully.
Feb 23 18:05:36 ip-10-0-136-68 systemd[1]: run-utsns-7d17d4f7\x2d7386\x2d4cb5\x2dae3b\x2d929cef5c894a.mount: Deactivated successfully.
Feb 23 18:05:36 ip-10-0-136-68 systemd[1]: run-ipcns-c1e505f6\x2de518\x2d4517\x2d94b7\x2d2605c8637644.mount: Deactivated successfully.
Feb 23 18:05:36 ip-10-0-136-68 systemd[1]: run-ipcns-7d17d4f7\x2d7386\x2d4cb5\x2dae3b\x2d929cef5c894a.mount: Deactivated successfully.
Feb 23 18:05:36 ip-10-0-136-68 systemd[1]: run-netns-c1e505f6\x2de518\x2d4517\x2d94b7\x2d2605c8637644.mount: Deactivated successfully.
Feb 23 18:05:36 ip-10-0-136-68 systemd[1]: run-netns-7d17d4f7\x2d7386\x2d4cb5\x2dae3b\x2d929cef5c894a.mount: Deactivated successfully.
Feb 23 18:05:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:36.267326929Z" level=info msg="runSandbox: deleting pod ID 78cde0b8cdd07a7ba91d84af0a6655fe62b6517c932c22824fc267b51bb41391 from idIndex" id=2f646c94-62ff-4f34-ad77-dabf21651499 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:05:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:36.267365343Z" level=info msg="runSandbox: removing pod sandbox 78cde0b8cdd07a7ba91d84af0a6655fe62b6517c932c22824fc267b51bb41391" id=2f646c94-62ff-4f34-ad77-dabf21651499 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:05:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:36.267399832Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 78cde0b8cdd07a7ba91d84af0a6655fe62b6517c932c22824fc267b51bb41391" id=2f646c94-62ff-4f34-ad77-dabf21651499 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:05:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:36.267420123Z" level=info msg="runSandbox: unmounting shmPath for sandbox 78cde0b8cdd07a7ba91d84af0a6655fe62b6517c932c22824fc267b51bb41391" id=2f646c94-62ff-4f34-ad77-dabf21651499 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:05:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:36.267330237Z" level=info msg="runSandbox: deleting pod ID cc980d05cd4751a2c572718fa1bb24807831eda4a9cd084da83b2ae03f50da53 from idIndex" id=02b7250d-8c9d-4dc5-b428-bbde6c89d3df name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:05:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:36.267464072Z" level=info msg="runSandbox: removing pod sandbox cc980d05cd4751a2c572718fa1bb24807831eda4a9cd084da83b2ae03f50da53" id=02b7250d-8c9d-4dc5-b428-bbde6c89d3df name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:05:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:36.267486383Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox cc980d05cd4751a2c572718fa1bb24807831eda4a9cd084da83b2ae03f50da53" id=02b7250d-8c9d-4dc5-b428-bbde6c89d3df name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:05:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:36.267499302Z" level=info msg="runSandbox: unmounting shmPath for sandbox cc980d05cd4751a2c572718fa1bb24807831eda4a9cd084da83b2ae03f50da53" id=02b7250d-8c9d-4dc5-b428-bbde6c89d3df name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:05:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:36.272298610Z" level=info msg="runSandbox: removing pod sandbox from storage: 78cde0b8cdd07a7ba91d84af0a6655fe62b6517c932c22824fc267b51bb41391" id=2f646c94-62ff-4f34-ad77-dabf21651499 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:05:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:36.272319274Z" level=info msg="runSandbox: removing pod sandbox from storage: cc980d05cd4751a2c572718fa1bb24807831eda4a9cd084da83b2ae03f50da53" id=02b7250d-8c9d-4dc5-b428-bbde6c89d3df name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:05:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:36.273840673Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=02b7250d-8c9d-4dc5-b428-bbde6c89d3df name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:05:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:36.273874257Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=02b7250d-8c9d-4dc5-b428-bbde6c89d3df name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:05:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:05:36.274078 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(cc980d05cd4751a2c572718fa1bb24807831eda4a9cd084da83b2ae03f50da53): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 18:05:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:05:36.274437 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(cc980d05cd4751a2c572718fa1bb24807831eda4a9cd084da83b2ae03f50da53): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz"
Feb 23 18:05:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:05:36.274479 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(cc980d05cd4751a2c572718fa1bb24807831eda4a9cd084da83b2ae03f50da53): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz"
Feb 23 18:05:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:05:36.274561 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(cc980d05cd4751a2c572718fa1bb24807831eda4a9cd084da83b2ae03f50da53): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7
Feb 23 18:05:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:36.275299700Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=2f646c94-62ff-4f34-ad77-dabf21651499 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:05:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:36.275333375Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=2f646c94-62ff-4f34-ad77-dabf21651499 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:05:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:05:36.275469 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(78cde0b8cdd07a7ba91d84af0a6655fe62b6517c932c22824fc267b51bb41391): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 18:05:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:05:36.275512 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(78cde0b8cdd07a7ba91d84af0a6655fe62b6517c932c22824fc267b51bb41391): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4"
Feb 23 18:05:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:05:36.275531 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(78cde0b8cdd07a7ba91d84af0a6655fe62b6517c932c22824fc267b51bb41391): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4"
Feb 23 18:05:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:05:36.275579 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(78cde0b8cdd07a7ba91d84af0a6655fe62b6517c932c22824fc267b51bb41391): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f
Feb 23 18:05:37 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-cc980d05cd4751a2c572718fa1bb24807831eda4a9cd084da83b2ae03f50da53-userdata-shm.mount: Deactivated successfully.
Feb 23 18:05:37 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-78cde0b8cdd07a7ba91d84af0a6655fe62b6517c932c22824fc267b51bb41391-userdata-shm.mount: Deactivated successfully.
Feb 23 18:05:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:38.243630267Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(302db01b4418cabc86b8d6e77a07613a751704086d9862ec9889b1b3ff1e489e): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=4e67988a-5ba4-4f40-a070-60b58551c5af name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:05:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:38.243678989Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 302db01b4418cabc86b8d6e77a07613a751704086d9862ec9889b1b3ff1e489e" id=4e67988a-5ba4-4f40-a070-60b58551c5af name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:05:38 ip-10-0-136-68 systemd[1]: run-utsns-898972c0\x2d3d0c\x2d4fd4\x2db67e\x2d0ea5fbb0aec7.mount: Deactivated successfully. Feb 23 18:05:38 ip-10-0-136-68 systemd[1]: run-ipcns-898972c0\x2d3d0c\x2d4fd4\x2db67e\x2d0ea5fbb0aec7.mount: Deactivated successfully. Feb 23 18:05:38 ip-10-0-136-68 systemd[1]: run-netns-898972c0\x2d3d0c\x2d4fd4\x2db67e\x2d0ea5fbb0aec7.mount: Deactivated successfully. 
Feb 23 18:05:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:38.267324167Z" level=info msg="runSandbox: deleting pod ID 302db01b4418cabc86b8d6e77a07613a751704086d9862ec9889b1b3ff1e489e from idIndex" id=4e67988a-5ba4-4f40-a070-60b58551c5af name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:05:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:38.267358556Z" level=info msg="runSandbox: removing pod sandbox 302db01b4418cabc86b8d6e77a07613a751704086d9862ec9889b1b3ff1e489e" id=4e67988a-5ba4-4f40-a070-60b58551c5af name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:05:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:38.267385560Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 302db01b4418cabc86b8d6e77a07613a751704086d9862ec9889b1b3ff1e489e" id=4e67988a-5ba4-4f40-a070-60b58551c5af name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:05:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:38.267399861Z" level=info msg="runSandbox: unmounting shmPath for sandbox 302db01b4418cabc86b8d6e77a07613a751704086d9862ec9889b1b3ff1e489e" id=4e67988a-5ba4-4f40-a070-60b58551c5af name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:05:38 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-302db01b4418cabc86b8d6e77a07613a751704086d9862ec9889b1b3ff1e489e-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:05:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:38.278317150Z" level=info msg="runSandbox: removing pod sandbox from storage: 302db01b4418cabc86b8d6e77a07613a751704086d9862ec9889b1b3ff1e489e" id=4e67988a-5ba4-4f40-a070-60b58551c5af name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:05:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:38.279921204Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=4e67988a-5ba4-4f40-a070-60b58551c5af name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:05:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:38.279951502Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=4e67988a-5ba4-4f40-a070-60b58551c5af name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:05:38 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:05:38.280092 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(302db01b4418cabc86b8d6e77a07613a751704086d9862ec9889b1b3ff1e489e): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:05:38 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:05:38.280140 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(302db01b4418cabc86b8d6e77a07613a751704086d9862ec9889b1b3ff1e489e): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:05:38 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:05:38.280162 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(302db01b4418cabc86b8d6e77a07613a751704086d9862ec9889b1b3ff1e489e): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:05:38 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:05:38.280215 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(302db01b4418cabc86b8d6e77a07613a751704086d9862ec9889b1b3ff1e489e): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 18:05:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:41.236183773Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(33f10ea8e8159b85cdc0803e45368140c8b30aa94bfd0a80832d5aad0941ab42): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): netplugin failed with no error message: signal: killed" id=305b042c-5536-4aa9-8bc8-f91d30a4add9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:05:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:41.236223542Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 33f10ea8e8159b85cdc0803e45368140c8b30aa94bfd0a80832d5aad0941ab42" id=305b042c-5536-4aa9-8bc8-f91d30a4add9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:05:41 ip-10-0-136-68 systemd[1]: run-utsns-50cd8d01\x2df216\x2d4b74\x2d86a1\x2d639d6edceed1.mount: Deactivated successfully. Feb 23 18:05:41 ip-10-0-136-68 systemd[1]: run-ipcns-50cd8d01\x2df216\x2d4b74\x2d86a1\x2d639d6edceed1.mount: Deactivated successfully. Feb 23 18:05:41 ip-10-0-136-68 systemd[1]: run-netns-50cd8d01\x2df216\x2d4b74\x2d86a1\x2d639d6edceed1.mount: Deactivated successfully. 
Feb 23 18:05:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:41.263324615Z" level=info msg="runSandbox: deleting pod ID 33f10ea8e8159b85cdc0803e45368140c8b30aa94bfd0a80832d5aad0941ab42 from idIndex" id=305b042c-5536-4aa9-8bc8-f91d30a4add9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:05:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:41.263361258Z" level=info msg="runSandbox: removing pod sandbox 33f10ea8e8159b85cdc0803e45368140c8b30aa94bfd0a80832d5aad0941ab42" id=305b042c-5536-4aa9-8bc8-f91d30a4add9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:05:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:41.263405295Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 33f10ea8e8159b85cdc0803e45368140c8b30aa94bfd0a80832d5aad0941ab42" id=305b042c-5536-4aa9-8bc8-f91d30a4add9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:05:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:41.263427019Z" level=info msg="runSandbox: unmounting shmPath for sandbox 33f10ea8e8159b85cdc0803e45368140c8b30aa94bfd0a80832d5aad0941ab42" id=305b042c-5536-4aa9-8bc8-f91d30a4add9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:05:41 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-33f10ea8e8159b85cdc0803e45368140c8b30aa94bfd0a80832d5aad0941ab42-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:05:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:41.271311479Z" level=info msg="runSandbox: removing pod sandbox from storage: 33f10ea8e8159b85cdc0803e45368140c8b30aa94bfd0a80832d5aad0941ab42" id=305b042c-5536-4aa9-8bc8-f91d30a4add9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:05:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:41.272719346Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=305b042c-5536-4aa9-8bc8-f91d30a4add9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:05:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:41.272746467Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=305b042c-5536-4aa9-8bc8-f91d30a4add9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:05:41 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:05:41.272902 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(33f10ea8e8159b85cdc0803e45368140c8b30aa94bfd0a80832d5aad0941ab42): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:05:41 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:05:41.272947 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(33f10ea8e8159b85cdc0803e45368140c8b30aa94bfd0a80832d5aad0941ab42): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:05:41 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:05:41.272973 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(33f10ea8e8159b85cdc0803e45368140c8b30aa94bfd0a80832d5aad0941ab42): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:05:41 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:05:41.273025 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(33f10ea8e8159b85cdc0803e45368140c8b30aa94bfd0a80832d5aad0941ab42): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 18:05:43 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:05:43.217037 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:05:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:43.217369811Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=54b1cb9c-7a36-443e-84d7-52ee8722fb3b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:05:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:43.217434435Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:05:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:43.223604054Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:5724049af8946415221935828e63350af6b6e90d91442b3dd85161a139d2fd4e UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/71b3a25a-5647-4a10-acf8-7d6659bcb57f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:05:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:43.223627264Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:05:45 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:05:45.217317 2199 scope.go:115] "RemoveContainer" containerID="d0ae2fe5258fb4e0801d51dec6a3ffb1f2a8f021fea7db783baef8dc204055a5" Feb 23 18:05:45 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:05:45.217724 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:05:47 ip-10-0-136-68 kubenswrapper[2199]: I0223 
18:05:47.216483 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 18:05:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:47.216862684Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=c7860c7c-4c52-4c77-a76c-ff5a3a8e08f8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:05:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:47.216920511Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:05:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:47.222374345Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:7677ce22e92c1747a650619987327a75b08f173803f9e3baea04a804215194e6 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/21b20b69-731d-4fa8-9eb1-37c6ed93c041 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:05:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:47.222403433Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:05:48 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:05:48.217220 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:05:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:48.217736966Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=f41e2bfb-2c32-495e-9213-f5b8194ef199 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:05:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:48.217808255Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:05:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:48.223596001Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:d52bc30bfb8b76ce17fcd60b600ab45e109a6d9e4b0e936e0a4c7d7d1a8502dc UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/216e0b8d-6605-4948-b144-569885dac671 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:05:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:48.223630390Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:05:51 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:05:51.216806 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:05:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:51.217202492Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=602aedf8-568a-4709-80d6-bafecf218f4f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:05:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:51.217279603Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:05:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:51.222668433Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:1a2ea2f7f9a12c2bff66430d95433f29b5b9409d23be5479df1137bc7189db88 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/645f58dc-db8d-407a-90f3-ff64fdda838a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:05:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:51.222703151Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:05:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:05:54.217348 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:05:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:54.217699473Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=c58dd3bc-3604-4c6a-97fa-7e5fd68a2942 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:05:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:54.217759873Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:05:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:54.223180441Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:c88dc3e527c888c17b705a069cfcd76d45283bdc9741aa1d48524e52bf61d9a0 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/22cc10e3-870b-4d87-aa23-0082566a0999 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:05:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:05:54.223214525Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:05:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:05:56.292345 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:05:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:05:56.292629 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:05:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:05:56.292827 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:05:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:05:56.292851 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:05:58 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:05:58.216612 2199 scope.go:115] "RemoveContainer" containerID="d0ae2fe5258fb4e0801d51dec6a3ffb1f2a8f021fea7db783baef8dc204055a5" Feb 23 18:05:58 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:05:58.217155 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" 
podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:06:12 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:06:12.217287 2199 scope.go:115] "RemoveContainer" containerID="d0ae2fe5258fb4e0801d51dec6a3ffb1f2a8f021fea7db783baef8dc204055a5" Feb 23 18:06:12 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:06:12.217819 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:06:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:06:13.085911506Z" level=info msg="cleanup sandbox network" Feb 23 18:06:22 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:06:22.217081 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:06:22 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:06:22.217372 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:06:22 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:06:22.217575 2199 remote_runtime.go:479] 
"ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:06:22 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:06:22.217608 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:06:26 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:06:26.216817 2199 scope.go:115] "RemoveContainer" containerID="d0ae2fe5258fb4e0801d51dec6a3ffb1f2a8f021fea7db783baef8dc204055a5" Feb 23 18:06:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:06:26.217491 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:06:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:06:26.292497 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process 
not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:06:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:06:26.292805 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:06:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:06:26.293052 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:06:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:06:26.293087 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:06:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:06:28.235997698Z" level=info msg="NetworkStart: stopping network for sandbox 5724049af8946415221935828e63350af6b6e90d91442b3dd85161a139d2fd4e" id=54b1cb9c-7a36-443e-84d7-52ee8722fb3b 
name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:06:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:06:28.236125745Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:5724049af8946415221935828e63350af6b6e90d91442b3dd85161a139d2fd4e UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/71b3a25a-5647-4a10-acf8-7d6659bcb57f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:06:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:06:28.236159497Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:06:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:06:28.236166343Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:06:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:06:28.236173386Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:06:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:06:32.233892131Z" level=info msg="NetworkStart: stopping network for sandbox 7677ce22e92c1747a650619987327a75b08f173803f9e3baea04a804215194e6" id=c7860c7c-4c52-4c77-a76c-ff5a3a8e08f8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:06:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:06:32.234011334Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:7677ce22e92c1747a650619987327a75b08f173803f9e3baea04a804215194e6 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/21b20b69-731d-4fa8-9eb1-37c6ed93c041 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:06:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:06:32.234051088Z" level=error msg="error loading cached network config: network 
\"multus-cni-network\" not found in CNI cache" Feb 23 18:06:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:06:32.234062959Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:06:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:06:32.234072679Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:06:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:06:33.235833800Z" level=info msg="NetworkStart: stopping network for sandbox d52bc30bfb8b76ce17fcd60b600ab45e109a6d9e4b0e936e0a4c7d7d1a8502dc" id=f41e2bfb-2c32-495e-9213-f5b8194ef199 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:06:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:06:33.235967678Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:d52bc30bfb8b76ce17fcd60b600ab45e109a6d9e4b0e936e0a4c7d7d1a8502dc UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/216e0b8d-6605-4948-b144-569885dac671 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:06:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:06:33.236009319Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:06:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:06:33.236020429Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:06:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:06:33.236031040Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:06:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:06:36.235707718Z" level=info msg="NetworkStart: stopping network for sandbox 1a2ea2f7f9a12c2bff66430d95433f29b5b9409d23be5479df1137bc7189db88" id=602aedf8-568a-4709-80d6-bafecf218f4f name=/runtime.v1.RuntimeService/RunPodSandbox 
Feb 23 18:06:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:06:36.235812661Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:1a2ea2f7f9a12c2bff66430d95433f29b5b9409d23be5479df1137bc7189db88 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/645f58dc-db8d-407a-90f3-ff64fdda838a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:06:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:06:36.235839149Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:06:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:06:36.235847341Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:06:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:06:36.235856890Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:06:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:06:39.237341666Z" level=info msg="NetworkStart: stopping network for sandbox c88dc3e527c888c17b705a069cfcd76d45283bdc9741aa1d48524e52bf61d9a0" id=c58dd3bc-3604-4c6a-97fa-7e5fd68a2942 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:06:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:06:39.237395816Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:aa2f6c1cfe2015e661750ad0d84d67c8d8f79e105601e0a761db6a83ba33b3f1 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS: Networks:[{Name:multus-cni-network Ifname:eth0}] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:06:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:06:39.237593932Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 
18:06:41 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:06:41.217138 2199 scope.go:115] "RemoveContainer" containerID="d0ae2fe5258fb4e0801d51dec6a3ffb1f2a8f021fea7db783baef8dc204055a5" Feb 23 18:06:41 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:06:41.217753 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:06:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:06:54.217069 2199 scope.go:115] "RemoveContainer" containerID="d0ae2fe5258fb4e0801d51dec6a3ffb1f2a8f021fea7db783baef8dc204055a5" Feb 23 18:06:54 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:06:54.217707 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:06:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:06:56.291625 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:06:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:06:56.291896 2199 remote_runtime.go:479] "ExecSync cmd from runtime service 
failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:06:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:06:56.292089 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:06:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:06:56.292112 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:07:09 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:07:09.216785 2199 scope.go:115] "RemoveContainer" containerID="d0ae2fe5258fb4e0801d51dec6a3ffb1f2a8f021fea7db783baef8dc204055a5" Feb 23 18:07:09 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:07:09.217154 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" 
pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:07:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:13.086869236Z" level=error msg="Failed to cleanup (probably retrying): failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(aa2f6c1cfe2015e661750ad0d84d67c8d8f79e105601e0a761db6a83ba33b3f1): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): netplugin failed with no error message: signal: killed" Feb 23 18:07:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:13.086939468Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:c88dc3e527c888c17b705a069cfcd76d45283bdc9741aa1d48524e52bf61d9a0 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/22cc10e3-870b-4d87-aa23-0082566a0999 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:07:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:13.086998731Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:07:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:13.087010059Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:07:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:13.087018584Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:07:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:13.246419890Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox 
k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(5724049af8946415221935828e63350af6b6e90d91442b3dd85161a139d2fd4e): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=54b1cb9c-7a36-443e-84d7-52ee8722fb3b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:07:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:13.246473639Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 5724049af8946415221935828e63350af6b6e90d91442b3dd85161a139d2fd4e" id=54b1cb9c-7a36-443e-84d7-52ee8722fb3b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:07:13 ip-10-0-136-68 systemd[1]: run-utsns-71b3a25a\x2d5647\x2d4a10\x2dacf8\x2d7d6659bcb57f.mount: Deactivated successfully. Feb 23 18:07:13 ip-10-0-136-68 systemd[1]: run-ipcns-71b3a25a\x2d5647\x2d4a10\x2dacf8\x2d7d6659bcb57f.mount: Deactivated successfully. Feb 23 18:07:13 ip-10-0-136-68 systemd[1]: run-netns-71b3a25a\x2d5647\x2d4a10\x2dacf8\x2d7d6659bcb57f.mount: Deactivated successfully. 
Feb 23 18:07:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:13.275342305Z" level=info msg="runSandbox: deleting pod ID 5724049af8946415221935828e63350af6b6e90d91442b3dd85161a139d2fd4e from idIndex" id=54b1cb9c-7a36-443e-84d7-52ee8722fb3b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:07:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:13.275437225Z" level=info msg="runSandbox: removing pod sandbox 5724049af8946415221935828e63350af6b6e90d91442b3dd85161a139d2fd4e" id=54b1cb9c-7a36-443e-84d7-52ee8722fb3b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:07:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:13.275472615Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 5724049af8946415221935828e63350af6b6e90d91442b3dd85161a139d2fd4e" id=54b1cb9c-7a36-443e-84d7-52ee8722fb3b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:07:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:13.275492719Z" level=info msg="runSandbox: unmounting shmPath for sandbox 5724049af8946415221935828e63350af6b6e90d91442b3dd85161a139d2fd4e" id=54b1cb9c-7a36-443e-84d7-52ee8722fb3b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:07:13 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-5724049af8946415221935828e63350af6b6e90d91442b3dd85161a139d2fd4e-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:07:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:13.280307546Z" level=info msg="runSandbox: removing pod sandbox from storage: 5724049af8946415221935828e63350af6b6e90d91442b3dd85161a139d2fd4e" id=54b1cb9c-7a36-443e-84d7-52ee8722fb3b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:07:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:13.281920400Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=54b1cb9c-7a36-443e-84d7-52ee8722fb3b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:07:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:13.281949747Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=54b1cb9c-7a36-443e-84d7-52ee8722fb3b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:07:13 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:07:13.282191 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(5724049af8946415221935828e63350af6b6e90d91442b3dd85161a139d2fd4e): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:07:13 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:07:13.282282 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(5724049af8946415221935828e63350af6b6e90d91442b3dd85161a139d2fd4e): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:07:13 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:07:13.282322 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(5724049af8946415221935828e63350af6b6e90d91442b3dd85161a139d2fd4e): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:07:13 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:07:13.282382 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(5724049af8946415221935828e63350af6b6e90d91442b3dd85161a139d2fd4e): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 18:07:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:17.244437466Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(7677ce22e92c1747a650619987327a75b08f173803f9e3baea04a804215194e6): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c7860c7c-4c52-4c77-a76c-ff5a3a8e08f8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:07:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:17.244489920Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 7677ce22e92c1747a650619987327a75b08f173803f9e3baea04a804215194e6" id=c7860c7c-4c52-4c77-a76c-ff5a3a8e08f8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:07:17 ip-10-0-136-68 systemd[1]: run-utsns-21b20b69\x2d731d\x2d4fa8\x2d9eb1\x2d37c6ed93c041.mount: Deactivated successfully. Feb 23 18:07:17 ip-10-0-136-68 systemd[1]: run-ipcns-21b20b69\x2d731d\x2d4fa8\x2d9eb1\x2d37c6ed93c041.mount: Deactivated successfully. Feb 23 18:07:17 ip-10-0-136-68 systemd[1]: run-netns-21b20b69\x2d731d\x2d4fa8\x2d9eb1\x2d37c6ed93c041.mount: Deactivated successfully. 
Feb 23 18:07:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:17.274354723Z" level=info msg="runSandbox: deleting pod ID 7677ce22e92c1747a650619987327a75b08f173803f9e3baea04a804215194e6 from idIndex" id=c7860c7c-4c52-4c77-a76c-ff5a3a8e08f8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:07:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:17.274400725Z" level=info msg="runSandbox: removing pod sandbox 7677ce22e92c1747a650619987327a75b08f173803f9e3baea04a804215194e6" id=c7860c7c-4c52-4c77-a76c-ff5a3a8e08f8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:07:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:17.274449026Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 7677ce22e92c1747a650619987327a75b08f173803f9e3baea04a804215194e6" id=c7860c7c-4c52-4c77-a76c-ff5a3a8e08f8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:07:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:17.274464754Z" level=info msg="runSandbox: unmounting shmPath for sandbox 7677ce22e92c1747a650619987327a75b08f173803f9e3baea04a804215194e6" id=c7860c7c-4c52-4c77-a76c-ff5a3a8e08f8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:07:17 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-7677ce22e92c1747a650619987327a75b08f173803f9e3baea04a804215194e6-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:07:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:17.279320011Z" level=info msg="runSandbox: removing pod sandbox from storage: 7677ce22e92c1747a650619987327a75b08f173803f9e3baea04a804215194e6" id=c7860c7c-4c52-4c77-a76c-ff5a3a8e08f8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:07:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:17.280897838Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=c7860c7c-4c52-4c77-a76c-ff5a3a8e08f8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:07:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:17.280932301Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=c7860c7c-4c52-4c77-a76c-ff5a3a8e08f8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:07:17 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:07:17.281156 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(7677ce22e92c1747a650619987327a75b08f173803f9e3baea04a804215194e6): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:07:17 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:07:17.281230 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(7677ce22e92c1747a650619987327a75b08f173803f9e3baea04a804215194e6): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:07:17 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:07:17.281286 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(7677ce22e92c1747a650619987327a75b08f173803f9e3baea04a804215194e6): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:07:17 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:07:17.281342 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(7677ce22e92c1747a650619987327a75b08f173803f9e3baea04a804215194e6): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 18:07:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:18.245957801Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(d52bc30bfb8b76ce17fcd60b600ab45e109a6d9e4b0e936e0a4c7d7d1a8502dc): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f41e2bfb-2c32-495e-9213-f5b8194ef199 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:07:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:18.246008144Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox d52bc30bfb8b76ce17fcd60b600ab45e109a6d9e4b0e936e0a4c7d7d1a8502dc" id=f41e2bfb-2c32-495e-9213-f5b8194ef199 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:07:18 ip-10-0-136-68 systemd[1]: run-utsns-216e0b8d\x2d6605\x2d4948\x2db144\x2d569885dac671.mount: Deactivated successfully. Feb 23 18:07:18 ip-10-0-136-68 systemd[1]: run-ipcns-216e0b8d\x2d6605\x2d4948\x2db144\x2d569885dac671.mount: Deactivated successfully. Feb 23 18:07:18 ip-10-0-136-68 systemd[1]: run-netns-216e0b8d\x2d6605\x2d4948\x2db144\x2d569885dac671.mount: Deactivated successfully. 
Feb 23 18:07:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:18.273342406Z" level=info msg="runSandbox: deleting pod ID d52bc30bfb8b76ce17fcd60b600ab45e109a6d9e4b0e936e0a4c7d7d1a8502dc from idIndex" id=f41e2bfb-2c32-495e-9213-f5b8194ef199 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:07:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:18.273388886Z" level=info msg="runSandbox: removing pod sandbox d52bc30bfb8b76ce17fcd60b600ab45e109a6d9e4b0e936e0a4c7d7d1a8502dc" id=f41e2bfb-2c32-495e-9213-f5b8194ef199 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:07:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:18.273433434Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox d52bc30bfb8b76ce17fcd60b600ab45e109a6d9e4b0e936e0a4c7d7d1a8502dc" id=f41e2bfb-2c32-495e-9213-f5b8194ef199 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:07:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:18.273451335Z" level=info msg="runSandbox: unmounting shmPath for sandbox d52bc30bfb8b76ce17fcd60b600ab45e109a6d9e4b0e936e0a4c7d7d1a8502dc" id=f41e2bfb-2c32-495e-9213-f5b8194ef199 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:07:18 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-d52bc30bfb8b76ce17fcd60b600ab45e109a6d9e4b0e936e0a4c7d7d1a8502dc-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:07:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:18.280311598Z" level=info msg="runSandbox: removing pod sandbox from storage: d52bc30bfb8b76ce17fcd60b600ab45e109a6d9e4b0e936e0a4c7d7d1a8502dc" id=f41e2bfb-2c32-495e-9213-f5b8194ef199 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:07:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:18.281793999Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=f41e2bfb-2c32-495e-9213-f5b8194ef199 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:07:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:18.281824938Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=f41e2bfb-2c32-495e-9213-f5b8194ef199 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:07:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:07:18.282056 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(d52bc30bfb8b76ce17fcd60b600ab45e109a6d9e4b0e936e0a4c7d7d1a8502dc): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:07:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:07:18.282118 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(d52bc30bfb8b76ce17fcd60b600ab45e109a6d9e4b0e936e0a4c7d7d1a8502dc): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:07:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:07:18.282140 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(d52bc30bfb8b76ce17fcd60b600ab45e109a6d9e4b0e936e0a4c7d7d1a8502dc): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:07:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:07:18.282219 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(d52bc30bfb8b76ce17fcd60b600ab45e109a6d9e4b0e936e0a4c7d7d1a8502dc): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 18:07:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:07:20.217485 2199 scope.go:115] "RemoveContainer" containerID="d0ae2fe5258fb4e0801d51dec6a3ffb1f2a8f021fea7db783baef8dc204055a5" Feb 23 18:07:20 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:07:20.218067 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:07:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:21.245381663Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(1a2ea2f7f9a12c2bff66430d95433f29b5b9409d23be5479df1137bc7189db88): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=602aedf8-568a-4709-80d6-bafecf218f4f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:07:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:21.245432849Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 1a2ea2f7f9a12c2bff66430d95433f29b5b9409d23be5479df1137bc7189db88" id=602aedf8-568a-4709-80d6-bafecf218f4f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:07:21 ip-10-0-136-68 systemd[1]: 
run-utsns-645f58dc\x2ddb8d\x2d407a\x2d90f3\x2dff64fdda838a.mount: Deactivated successfully. Feb 23 18:07:21 ip-10-0-136-68 systemd[1]: run-ipcns-645f58dc\x2ddb8d\x2d407a\x2d90f3\x2dff64fdda838a.mount: Deactivated successfully. Feb 23 18:07:21 ip-10-0-136-68 systemd[1]: run-netns-645f58dc\x2ddb8d\x2d407a\x2d90f3\x2dff64fdda838a.mount: Deactivated successfully. Feb 23 18:07:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:21.271367903Z" level=info msg="runSandbox: deleting pod ID 1a2ea2f7f9a12c2bff66430d95433f29b5b9409d23be5479df1137bc7189db88 from idIndex" id=602aedf8-568a-4709-80d6-bafecf218f4f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:07:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:21.271407950Z" level=info msg="runSandbox: removing pod sandbox 1a2ea2f7f9a12c2bff66430d95433f29b5b9409d23be5479df1137bc7189db88" id=602aedf8-568a-4709-80d6-bafecf218f4f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:07:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:21.271444123Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 1a2ea2f7f9a12c2bff66430d95433f29b5b9409d23be5479df1137bc7189db88" id=602aedf8-568a-4709-80d6-bafecf218f4f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:07:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:21.271458910Z" level=info msg="runSandbox: unmounting shmPath for sandbox 1a2ea2f7f9a12c2bff66430d95433f29b5b9409d23be5479df1137bc7189db88" id=602aedf8-568a-4709-80d6-bafecf218f4f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:07:21 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-1a2ea2f7f9a12c2bff66430d95433f29b5b9409d23be5479df1137bc7189db88-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:07:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:21.277321456Z" level=info msg="runSandbox: removing pod sandbox from storage: 1a2ea2f7f9a12c2bff66430d95433f29b5b9409d23be5479df1137bc7189db88" id=602aedf8-568a-4709-80d6-bafecf218f4f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:07:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:21.278879180Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=602aedf8-568a-4709-80d6-bafecf218f4f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:07:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:21.278913692Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=602aedf8-568a-4709-80d6-bafecf218f4f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:07:21 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:07:21.279098 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(1a2ea2f7f9a12c2bff66430d95433f29b5b9409d23be5479df1137bc7189db88): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:07:21 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:07:21.279156 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(1a2ea2f7f9a12c2bff66430d95433f29b5b9409d23be5479df1137bc7189db88): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:07:21 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:07:21.279178 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(1a2ea2f7f9a12c2bff66430d95433f29b5b9409d23be5479df1137bc7189db88): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:07:21 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:07:21.279228 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(1a2ea2f7f9a12c2bff66430d95433f29b5b9409d23be5479df1137bc7189db88): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 18:07:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:07:26.291766 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:07:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:07:26.292024 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:07:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:07:26.292278 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:07:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:07:26.292313 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:07:27 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:07:27.217302 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:07:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:27.217741342Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=f5e9bee3-b810-449a-8fed-df286905d65f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:07:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:27.217797260Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:07:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:27.225764265Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:87590805d5a413b2cc3cfc46dc7026f0817d5820aebe77b08e749b51534e545f UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/3b35a249-0e9b-4b7e-8e35-cc6b02cf75e1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:07:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:27.225806459Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:07:31 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:07:31.217310 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 18:07:31 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:07:31.217452 2199 scope.go:115] "RemoveContainer" containerID="d0ae2fe5258fb4e0801d51dec6a3ffb1f2a8f021fea7db783baef8dc204055a5" Feb 23 18:07:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:31.217611323Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=05339519-a5ff-4e20-8962-ce7983f35dd4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:07:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:31.217683517Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:07:31 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:07:31.218903 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:07:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:31.225413315Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:84e8e17de3085c20d49be222ce5105b5faeb0131f6927a7b820081b1adf2f7f9 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/2572652d-9fd2-46a3-9b03-d25628254507 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:07:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:31.225453357Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:07:32 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:07:32.217405 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:07:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:32.217846648Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=51b70922-92a2-4a5e-923d-00c0a2e1c02e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:07:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:32.217923990Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:07:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:32.223369807Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:a841a8cb0c35331e832faee4716039d442ef907521edc875d754d8ba788cbf7d UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/0d1289ad-6a7f-4e59-99a9-80c505e9d623 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:07:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:32.223395607Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:07:33 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:07:33.217152 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:07:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:33.217606879Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=d7991829-d978-4067-b3e9-eee024774adb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:07:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:33.217682372Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:07:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:33.223367895Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:597273f9b2ee1587b93c4dff9e48531b1556450345cec0b7234e8af8f0d6a0e1 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/aac58bce-9b97-4a78-9911-d30a2b2e2674 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:07:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:33.223402265Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:07:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:39.238394476Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(c88dc3e527c888c17b705a069cfcd76d45283bdc9741aa1d48524e52bf61d9a0): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): netplugin failed with no error message: signal: killed" id=c58dd3bc-3604-4c6a-97fa-7e5fd68a2942 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:07:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:39.238448441Z" level=info msg="runSandbox: cleaning up namespaces after failing to run 
sandbox c88dc3e527c888c17b705a069cfcd76d45283bdc9741aa1d48524e52bf61d9a0" id=c58dd3bc-3604-4c6a-97fa-7e5fd68a2942 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:07:39 ip-10-0-136-68 systemd[1]: run-utsns-22cc10e3\x2d870b\x2d4d87\x2daa23\x2d0082566a0999.mount: Deactivated successfully. Feb 23 18:07:39 ip-10-0-136-68 systemd[1]: run-ipcns-22cc10e3\x2d870b\x2d4d87\x2daa23\x2d0082566a0999.mount: Deactivated successfully. Feb 23 18:07:39 ip-10-0-136-68 systemd[1]: run-netns-22cc10e3\x2d870b\x2d4d87\x2daa23\x2d0082566a0999.mount: Deactivated successfully. Feb 23 18:07:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:39.263329823Z" level=info msg="runSandbox: deleting pod ID c88dc3e527c888c17b705a069cfcd76d45283bdc9741aa1d48524e52bf61d9a0 from idIndex" id=c58dd3bc-3604-4c6a-97fa-7e5fd68a2942 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:07:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:39.263369101Z" level=info msg="runSandbox: removing pod sandbox c88dc3e527c888c17b705a069cfcd76d45283bdc9741aa1d48524e52bf61d9a0" id=c58dd3bc-3604-4c6a-97fa-7e5fd68a2942 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:07:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:39.263401021Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox c88dc3e527c888c17b705a069cfcd76d45283bdc9741aa1d48524e52bf61d9a0" id=c58dd3bc-3604-4c6a-97fa-7e5fd68a2942 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:07:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:39.263415098Z" level=info msg="runSandbox: unmounting shmPath for sandbox c88dc3e527c888c17b705a069cfcd76d45283bdc9741aa1d48524e52bf61d9a0" id=c58dd3bc-3604-4c6a-97fa-7e5fd68a2942 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:07:39 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-c88dc3e527c888c17b705a069cfcd76d45283bdc9741aa1d48524e52bf61d9a0-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:07:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:39.271314242Z" level=info msg="runSandbox: removing pod sandbox from storage: c88dc3e527c888c17b705a069cfcd76d45283bdc9741aa1d48524e52bf61d9a0" id=c58dd3bc-3604-4c6a-97fa-7e5fd68a2942 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:07:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:39.272980710Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=c58dd3bc-3604-4c6a-97fa-7e5fd68a2942 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:07:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:39.273010134Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=c58dd3bc-3604-4c6a-97fa-7e5fd68a2942 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:07:39 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:07:39.273238 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(c88dc3e527c888c17b705a069cfcd76d45283bdc9741aa1d48524e52bf61d9a0): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:07:39 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:07:39.273322 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(c88dc3e527c888c17b705a069cfcd76d45283bdc9741aa1d48524e52bf61d9a0): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:07:39 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:07:39.273344 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(c88dc3e527c888c17b705a069cfcd76d45283bdc9741aa1d48524e52bf61d9a0): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:07:39 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:07:39.273399 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(c88dc3e527c888c17b705a069cfcd76d45283bdc9741aa1d48524e52bf61d9a0): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 18:07:41 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:07:41.217078 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:07:41 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:07:41.217359 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:07:41 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:07:41.217582 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:07:41 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:07:41.217605 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:07:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:07:44.216909 2199 scope.go:115] "RemoveContainer" containerID="d0ae2fe5258fb4e0801d51dec6a3ffb1f2a8f021fea7db783baef8dc204055a5" Feb 23 18:07:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:07:44.217487 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:07:53 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:07:53.216605 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:07:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:53.216998674Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=ef305891-85cb-442b-a98c-80f37497ede6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:07:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:53.217065089Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:07:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:53.222290915Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:0b4ab0b4615ae4ec1e38139f7f2cea4fba730126a3c712d02b4485da02dfa502 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/b670c3a6-b916-4c5e-90ed-beaf19136950 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:07:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:07:53.222317307Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:07:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:07:56.291998 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:07:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:07:56.292304 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:07:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:07:56.292537 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:07:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:07:56.292570 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:07:58 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:07:58.217491 2199 scope.go:115] "RemoveContainer" containerID="d0ae2fe5258fb4e0801d51dec6a3ffb1f2a8f021fea7db783baef8dc204055a5" Feb 23 18:07:58 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:07:58.218470 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" 
podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:08:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:08:12.237807011Z" level=info msg="NetworkStart: stopping network for sandbox 87590805d5a413b2cc3cfc46dc7026f0817d5820aebe77b08e749b51534e545f" id=f5e9bee3-b810-449a-8fed-df286905d65f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:08:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:08:12.237926446Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:87590805d5a413b2cc3cfc46dc7026f0817d5820aebe77b08e749b51534e545f UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/3b35a249-0e9b-4b7e-8e35-cc6b02cf75e1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:08:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:08:12.237966081Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:08:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:08:12.237977014Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:08:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:08:12.237987134Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:08:13 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:08:13.216699 2199 scope.go:115] "RemoveContainer" containerID="d0ae2fe5258fb4e0801d51dec6a3ffb1f2a8f021fea7db783baef8dc204055a5" Feb 23 18:08:13 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:08:13.217094 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" 
pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:08:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:08:16.237045639Z" level=info msg="NetworkStart: stopping network for sandbox 84e8e17de3085c20d49be222ce5105b5faeb0131f6927a7b820081b1adf2f7f9" id=05339519-a5ff-4e20-8962-ce7983f35dd4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:08:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:08:16.237176789Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:84e8e17de3085c20d49be222ce5105b5faeb0131f6927a7b820081b1adf2f7f9 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/2572652d-9fd2-46a3-9b03-d25628254507 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:08:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:08:16.237218026Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:08:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:08:16.237233192Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:08:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:08:16.237283423Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:08:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:08:17.235738301Z" level=info msg="NetworkStart: stopping network for sandbox a841a8cb0c35331e832faee4716039d442ef907521edc875d754d8ba788cbf7d" id=51b70922-92a2-4a5e-923d-00c0a2e1c02e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:08:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:08:17.235876192Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:a841a8cb0c35331e832faee4716039d442ef907521edc875d754d8ba788cbf7d UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 
NetNS:/var/run/netns/0d1289ad-6a7f-4e59-99a9-80c505e9d623 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:08:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:08:17.235912526Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:08:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:08:17.235923675Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:08:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:08:17.235933751Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:08:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:08:18.234774431Z" level=info msg="NetworkStart: stopping network for sandbox 597273f9b2ee1587b93c4dff9e48531b1556450345cec0b7234e8af8f0d6a0e1" id=d7991829-d978-4067-b3e9-eee024774adb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:08:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:08:18.234885128Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:597273f9b2ee1587b93c4dff9e48531b1556450345cec0b7234e8af8f0d6a0e1 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/aac58bce-9b97-4a78-9911-d30a2b2e2674 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:08:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:08:18.234917798Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:08:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:08:18.234925069Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:08:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:08:18.234931697Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network 
\"multus-cni-network\" (type=multus)" Feb 23 18:08:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:08:26.291927 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:08:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:08:26.292143 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:08:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:08:26.292373 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:08:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:08:26.292396 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not 
found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:08:27 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:08:27.216945 2199 scope.go:115] "RemoveContainer" containerID="d0ae2fe5258fb4e0801d51dec6a3ffb1f2a8f021fea7db783baef8dc204055a5" Feb 23 18:08:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:08:27.217687007Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=174b479c-ea16-4d7c-9e04-ad42efd39a83 name=/runtime.v1.ImageService/ImageStatus Feb 23 18:08:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:08:27.217872579Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=174b479c-ea16-4d7c-9e04-ad42efd39a83 name=/runtime.v1.ImageService/ImageStatus Feb 23 18:08:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:08:27.218528026Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=4626ea0e-6469-40a7-8d95-023ba8d242c0 name=/runtime.v1.ImageService/ImageStatus Feb 23 18:08:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:08:27.218703840Z" level=info msg="Image status: 
&ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=4626ea0e-6469-40a7-8d95-023ba8d242c0 name=/runtime.v1.ImageService/ImageStatus Feb 23 18:08:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:08:27.219364220Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=82d8dd14-ebf5-4d56-84c0-6b0f82f70ab5 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 18:08:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:08:27.219469745Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:08:27 ip-10-0-136-68 systemd[1]: Started crio-conmon-a4dc7377b1daa58ce7538f6693ee830e6113cefa391de02dd0a13e8d514d3477.scope. Feb 23 18:08:27 ip-10-0-136-68 systemd[1]: Started libcontainer container a4dc7377b1daa58ce7538f6693ee830e6113cefa391de02dd0a13e8d514d3477. Feb 23 18:08:27 ip-10-0-136-68 conmon[6143]: conmon a4dc7377b1daa58ce753 : Failed to write to cgroup.event_control Operation not supported Feb 23 18:08:27 ip-10-0-136-68 systemd[1]: crio-conmon-a4dc7377b1daa58ce7538f6693ee830e6113cefa391de02dd0a13e8d514d3477.scope: Deactivated successfully. 
Feb 23 18:08:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:08:27.366223908Z" level=info msg="Created container a4dc7377b1daa58ce7538f6693ee830e6113cefa391de02dd0a13e8d514d3477: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=82d8dd14-ebf5-4d56-84c0-6b0f82f70ab5 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 18:08:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:08:27.366900913Z" level=info msg="Starting container: a4dc7377b1daa58ce7538f6693ee830e6113cefa391de02dd0a13e8d514d3477" id=f0a009dc-d234-4aae-beaa-a79147cb8232 name=/runtime.v1.RuntimeService/StartContainer Feb 23 18:08:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:08:27.374051125Z" level=info msg="Started container" PID=6155 containerID=a4dc7377b1daa58ce7538f6693ee830e6113cefa391de02dd0a13e8d514d3477 description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver id=f0a009dc-d234-4aae-beaa-a79147cb8232 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4ac357af36e7dd4792f93249425e93fd84aed45840e9fbb3d0ae6c0af964798 Feb 23 18:08:27 ip-10-0-136-68 systemd[1]: crio-a4dc7377b1daa58ce7538f6693ee830e6113cefa391de02dd0a13e8d514d3477.scope: Deactivated successfully. 
Feb 23 18:08:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:08:31.196982422Z" level=warning msg="Failed to find container exit file for d0ae2fe5258fb4e0801d51dec6a3ffb1f2a8f021fea7db783baef8dc204055a5: timed out waiting for the condition" id=1877e407-ef55-4202-a917-134b441e00f0 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:08:31 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:08:31.197702 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerStarted Data:a4dc7377b1daa58ce7538f6693ee830e6113cefa391de02dd0a13e8d514d3477} Feb 23 18:08:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:08:38.233861395Z" level=info msg="NetworkStart: stopping network for sandbox 0b4ab0b4615ae4ec1e38139f7f2cea4fba730126a3c712d02b4485da02dfa502" id=ef305891-85cb-442b-a98c-80f37497ede6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:08:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:08:38.233988951Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:0b4ab0b4615ae4ec1e38139f7f2cea4fba730126a3c712d02b4485da02dfa502 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/b670c3a6-b916-4c5e-90ed-beaf19136950 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:08:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:08:38.234027751Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:08:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:08:38.234039357Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:08:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:08:38.234052029Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 
23 18:08:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:08:44.872999 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:08:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:08:44.873194 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 18:08:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:08:50.396753123Z" level=info msg="cleanup sandbox network" Feb 23 18:08:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:08:54.873000 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:08:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:08:54.873065 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 18:08:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:08:56.292553 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: 
container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:08:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:08:56.292839 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:08:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:08:56.293089 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:08:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:08:56.293116 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:08:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:08:57.248105871Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox 
k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(87590805d5a413b2cc3cfc46dc7026f0817d5820aebe77b08e749b51534e545f): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f5e9bee3-b810-449a-8fed-df286905d65f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:08:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:08:57.248152194Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 87590805d5a413b2cc3cfc46dc7026f0817d5820aebe77b08e749b51534e545f" id=f5e9bee3-b810-449a-8fed-df286905d65f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:08:57 ip-10-0-136-68 systemd[1]: run-utsns-3b35a249\x2d0e9b\x2d4b7e\x2d8e35\x2dcc6b02cf75e1.mount: Deactivated successfully. Feb 23 18:08:57 ip-10-0-136-68 systemd[1]: run-ipcns-3b35a249\x2d0e9b\x2d4b7e\x2d8e35\x2dcc6b02cf75e1.mount: Deactivated successfully. Feb 23 18:08:57 ip-10-0-136-68 systemd[1]: run-netns-3b35a249\x2d0e9b\x2d4b7e\x2d8e35\x2dcc6b02cf75e1.mount: Deactivated successfully. 
Feb 23 18:08:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:08:57.264381865Z" level=info msg="runSandbox: deleting pod ID 87590805d5a413b2cc3cfc46dc7026f0817d5820aebe77b08e749b51534e545f from idIndex" id=f5e9bee3-b810-449a-8fed-df286905d65f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:08:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:08:57.264423847Z" level=info msg="runSandbox: removing pod sandbox 87590805d5a413b2cc3cfc46dc7026f0817d5820aebe77b08e749b51534e545f" id=f5e9bee3-b810-449a-8fed-df286905d65f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:08:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:08:57.264466541Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 87590805d5a413b2cc3cfc46dc7026f0817d5820aebe77b08e749b51534e545f" id=f5e9bee3-b810-449a-8fed-df286905d65f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:08:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:08:57.264486281Z" level=info msg="runSandbox: unmounting shmPath for sandbox 87590805d5a413b2cc3cfc46dc7026f0817d5820aebe77b08e749b51534e545f" id=f5e9bee3-b810-449a-8fed-df286905d65f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:08:57 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-87590805d5a413b2cc3cfc46dc7026f0817d5820aebe77b08e749b51534e545f-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:08:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:08:57.276334565Z" level=info msg="runSandbox: removing pod sandbox from storage: 87590805d5a413b2cc3cfc46dc7026f0817d5820aebe77b08e749b51534e545f" id=f5e9bee3-b810-449a-8fed-df286905d65f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:08:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:08:57.277934216Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=f5e9bee3-b810-449a-8fed-df286905d65f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:08:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:08:57.277963983Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=f5e9bee3-b810-449a-8fed-df286905d65f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:08:57 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:08:57.278202 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(87590805d5a413b2cc3cfc46dc7026f0817d5820aebe77b08e749b51534e545f): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:08:57 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:08:57.278292 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(87590805d5a413b2cc3cfc46dc7026f0817d5820aebe77b08e749b51534e545f): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:08:57 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:08:57.278329 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(87590805d5a413b2cc3cfc46dc7026f0817d5820aebe77b08e749b51534e545f): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:08:57 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:08:57.278414 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(87590805d5a413b2cc3cfc46dc7026f0817d5820aebe77b08e749b51534e545f): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 18:09:01 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:09:01.217451 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:09:01 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:09:01.217726 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:09:01 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:09:01.217950 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:09:01 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:09:01.217982 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:09:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:01.246693074Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(84e8e17de3085c20d49be222ce5105b5faeb0131f6927a7b820081b1adf2f7f9): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=05339519-a5ff-4e20-8962-ce7983f35dd4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:09:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:01.246739441Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 84e8e17de3085c20d49be222ce5105b5faeb0131f6927a7b820081b1adf2f7f9" id=05339519-a5ff-4e20-8962-ce7983f35dd4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:09:01 ip-10-0-136-68 systemd[1]: run-utsns-2572652d\x2d9fd2\x2d46a3\x2d9b03\x2dd25628254507.mount: Deactivated successfully. Feb 23 18:09:01 ip-10-0-136-68 systemd[1]: run-ipcns-2572652d\x2d9fd2\x2d46a3\x2d9b03\x2dd25628254507.mount: Deactivated successfully. Feb 23 18:09:01 ip-10-0-136-68 systemd[1]: run-netns-2572652d\x2d9fd2\x2d46a3\x2d9b03\x2dd25628254507.mount: Deactivated successfully. 
Feb 23 18:09:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:01.265311853Z" level=info msg="runSandbox: deleting pod ID 84e8e17de3085c20d49be222ce5105b5faeb0131f6927a7b820081b1adf2f7f9 from idIndex" id=05339519-a5ff-4e20-8962-ce7983f35dd4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:09:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:01.265341639Z" level=info msg="runSandbox: removing pod sandbox 84e8e17de3085c20d49be222ce5105b5faeb0131f6927a7b820081b1adf2f7f9" id=05339519-a5ff-4e20-8962-ce7983f35dd4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:09:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:01.265364741Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 84e8e17de3085c20d49be222ce5105b5faeb0131f6927a7b820081b1adf2f7f9" id=05339519-a5ff-4e20-8962-ce7983f35dd4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:09:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:01.265377626Z" level=info msg="runSandbox: unmounting shmPath for sandbox 84e8e17de3085c20d49be222ce5105b5faeb0131f6927a7b820081b1adf2f7f9" id=05339519-a5ff-4e20-8962-ce7983f35dd4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:09:01 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-84e8e17de3085c20d49be222ce5105b5faeb0131f6927a7b820081b1adf2f7f9-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:09:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:01.272311969Z" level=info msg="runSandbox: removing pod sandbox from storage: 84e8e17de3085c20d49be222ce5105b5faeb0131f6927a7b820081b1adf2f7f9" id=05339519-a5ff-4e20-8962-ce7983f35dd4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:09:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:01.273878061Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=05339519-a5ff-4e20-8962-ce7983f35dd4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:09:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:01.273910719Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=05339519-a5ff-4e20-8962-ce7983f35dd4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:09:01 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:09:01.274080 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(84e8e17de3085c20d49be222ce5105b5faeb0131f6927a7b820081b1adf2f7f9): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:09:01 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:09:01.274131 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(84e8e17de3085c20d49be222ce5105b5faeb0131f6927a7b820081b1adf2f7f9): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:09:01 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:09:01.274153 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(84e8e17de3085c20d49be222ce5105b5faeb0131f6927a7b820081b1adf2f7f9): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:09:01 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:09:01.274210 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(84e8e17de3085c20d49be222ce5105b5faeb0131f6927a7b820081b1adf2f7f9): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 18:09:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:02.244908221Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(a841a8cb0c35331e832faee4716039d442ef907521edc875d754d8ba788cbf7d): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=51b70922-92a2-4a5e-923d-00c0a2e1c02e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:09:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:02.244952046Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a841a8cb0c35331e832faee4716039d442ef907521edc875d754d8ba788cbf7d" id=51b70922-92a2-4a5e-923d-00c0a2e1c02e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:09:02 ip-10-0-136-68 systemd[1]: run-utsns-0d1289ad\x2d6a7f\x2d4e59\x2d99a9\x2d80c505e9d623.mount: Deactivated successfully. Feb 23 18:09:02 ip-10-0-136-68 systemd[1]: run-ipcns-0d1289ad\x2d6a7f\x2d4e59\x2d99a9\x2d80c505e9d623.mount: Deactivated successfully. Feb 23 18:09:02 ip-10-0-136-68 systemd[1]: run-netns-0d1289ad\x2d6a7f\x2d4e59\x2d99a9\x2d80c505e9d623.mount: Deactivated successfully. 
Feb 23 18:09:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:02.271330673Z" level=info msg="runSandbox: deleting pod ID a841a8cb0c35331e832faee4716039d442ef907521edc875d754d8ba788cbf7d from idIndex" id=51b70922-92a2-4a5e-923d-00c0a2e1c02e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:09:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:02.271370047Z" level=info msg="runSandbox: removing pod sandbox a841a8cb0c35331e832faee4716039d442ef907521edc875d754d8ba788cbf7d" id=51b70922-92a2-4a5e-923d-00c0a2e1c02e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:09:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:02.271412432Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a841a8cb0c35331e832faee4716039d442ef907521edc875d754d8ba788cbf7d" id=51b70922-92a2-4a5e-923d-00c0a2e1c02e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:09:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:02.271433109Z" level=info msg="runSandbox: unmounting shmPath for sandbox a841a8cb0c35331e832faee4716039d442ef907521edc875d754d8ba788cbf7d" id=51b70922-92a2-4a5e-923d-00c0a2e1c02e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:09:02 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-a841a8cb0c35331e832faee4716039d442ef907521edc875d754d8ba788cbf7d-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:09:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:02.277298698Z" level=info msg="runSandbox: removing pod sandbox from storage: a841a8cb0c35331e832faee4716039d442ef907521edc875d754d8ba788cbf7d" id=51b70922-92a2-4a5e-923d-00c0a2e1c02e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:09:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:02.278780180Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=51b70922-92a2-4a5e-923d-00c0a2e1c02e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:09:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:02.278812408Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=51b70922-92a2-4a5e-923d-00c0a2e1c02e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:09:02 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:09:02.278968 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(a841a8cb0c35331e832faee4716039d442ef907521edc875d754d8ba788cbf7d): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:09:02 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:09:02.279017 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(a841a8cb0c35331e832faee4716039d442ef907521edc875d754d8ba788cbf7d): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:09:02 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:09:02.279051 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(a841a8cb0c35331e832faee4716039d442ef907521edc875d754d8ba788cbf7d): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:09:02 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:09:02.279105 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(a841a8cb0c35331e832faee4716039d442ef907521edc875d754d8ba788cbf7d): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 18:09:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:03.243613535Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(597273f9b2ee1587b93c4dff9e48531b1556450345cec0b7234e8af8f0d6a0e1): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d7991829-d978-4067-b3e9-eee024774adb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:09:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:03.243672282Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 597273f9b2ee1587b93c4dff9e48531b1556450345cec0b7234e8af8f0d6a0e1" id=d7991829-d978-4067-b3e9-eee024774adb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:09:03 ip-10-0-136-68 systemd[1]: run-utsns-aac58bce\x2d9b97\x2d4a78\x2d9911\x2dd30a2b2e2674.mount: Deactivated successfully. Feb 23 18:09:03 ip-10-0-136-68 systemd[1]: run-ipcns-aac58bce\x2d9b97\x2d4a78\x2d9911\x2dd30a2b2e2674.mount: Deactivated successfully. Feb 23 18:09:03 ip-10-0-136-68 systemd[1]: run-netns-aac58bce\x2d9b97\x2d4a78\x2d9911\x2dd30a2b2e2674.mount: Deactivated successfully. 
Feb 23 18:09:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:03.270327585Z" level=info msg="runSandbox: deleting pod ID 597273f9b2ee1587b93c4dff9e48531b1556450345cec0b7234e8af8f0d6a0e1 from idIndex" id=d7991829-d978-4067-b3e9-eee024774adb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:09:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:03.270363752Z" level=info msg="runSandbox: removing pod sandbox 597273f9b2ee1587b93c4dff9e48531b1556450345cec0b7234e8af8f0d6a0e1" id=d7991829-d978-4067-b3e9-eee024774adb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:09:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:03.270407232Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 597273f9b2ee1587b93c4dff9e48531b1556450345cec0b7234e8af8f0d6a0e1" id=d7991829-d978-4067-b3e9-eee024774adb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:09:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:03.270426927Z" level=info msg="runSandbox: unmounting shmPath for sandbox 597273f9b2ee1587b93c4dff9e48531b1556450345cec0b7234e8af8f0d6a0e1" id=d7991829-d978-4067-b3e9-eee024774adb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:09:03 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-597273f9b2ee1587b93c4dff9e48531b1556450345cec0b7234e8af8f0d6a0e1-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:09:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:03.276302845Z" level=info msg="runSandbox: removing pod sandbox from storage: 597273f9b2ee1587b93c4dff9e48531b1556450345cec0b7234e8af8f0d6a0e1" id=d7991829-d978-4067-b3e9-eee024774adb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:09:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:03.277844380Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=d7991829-d978-4067-b3e9-eee024774adb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:09:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:03.277877711Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=d7991829-d978-4067-b3e9-eee024774adb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:09:03 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:09:03.278070 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(597273f9b2ee1587b93c4dff9e48531b1556450345cec0b7234e8af8f0d6a0e1): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:09:03 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:09:03.278131 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(597273f9b2ee1587b93c4dff9e48531b1556450345cec0b7234e8af8f0d6a0e1): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:09:03 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:09:03.278165 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(597273f9b2ee1587b93c4dff9e48531b1556450345cec0b7234e8af8f0d6a0e1): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:09:03 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:09:03.278266 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(597273f9b2ee1587b93c4dff9e48531b1556450345cec0b7234e8af8f0d6a0e1): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 18:09:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:09:04.873104 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:09:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:09:04.873161 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 18:09:08 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:09:08.216510 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:09:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:08.216937445Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=24ed7819-8b55-472d-9d77-0b6ea9e44911 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:09:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:08.216995462Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:09:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:08.223123528Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:45761f3981427489d0b40170bbb405243cad6e919de0c5394de4b250076a76a2 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/798c88d7-e204-4393-bb60-b6fa5200d222 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:09:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:08.223158292Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:09:13 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:09:13.216612 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 18:09:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:13.217013143Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=8ef91a46-897d-4f00-99f2-eb7dc873d553 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:09:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:13.217087577Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:09:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:13.222448118Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:e69d625d2d42ca5a57fca7951f01e5b0127efcd8117adde13e4b1accf0485084 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/9144bf6a-5dc4-4b53-a759-6c3e8f056e68 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:09:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:13.222475235Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:09:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:09:14.216834 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:09:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:14.217181705Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=4fbd251a-b01f-42ad-b49b-766ce6819e11 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:09:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:14.217271119Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:09:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:14.222621646Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:8b867bce76d05368e746cb7ef8d337a262af286651221362f3167bf2965c6239 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/c0b530b9-ed3e-432c-ba46-07fc4cb9c01c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:09:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:14.222655334Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:09:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:09:14.872464 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:09:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:09:14.872522 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 18:09:18 ip-10-0-136-68 kubenswrapper[2199]: I0223 
18:09:18.217503 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:09:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:18.217944973Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=e42402e9-4224-4a85-a951-5da13a2fb78b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:09:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:18.218008296Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:09:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:18.223510143Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:e251877258999467f25cb9979efc21d9531e5b9657f84d99de570fa6db136210 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/2c6d1b72-b368-4a76-92a0-52b96802dff4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:09:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:18.223546789Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:09:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:23.244478613Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(0b4ab0b4615ae4ec1e38139f7f2cea4fba730126a3c712d02b4485da02dfa502): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" 
id=ef305891-85cb-442b-a98c-80f37497ede6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:09:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:23.244534933Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 0b4ab0b4615ae4ec1e38139f7f2cea4fba730126a3c712d02b4485da02dfa502" id=ef305891-85cb-442b-a98c-80f37497ede6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:09:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:23.244597782Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:aa2f6c1cfe2015e661750ad0d84d67c8d8f79e105601e0a761db6a83ba33b3f1 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS: Networks:[{Name:multus-cni-network Ifname:eth0}] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:09:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:23.244775850Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:09:23 ip-10-0-136-68 systemd[1]: run-utsns-b670c3a6\x2db916\x2d4c5e\x2d90ed\x2dbeaf19136950.mount: Deactivated successfully. Feb 23 18:09:23 ip-10-0-136-68 systemd[1]: run-ipcns-b670c3a6\x2db916\x2d4c5e\x2d90ed\x2dbeaf19136950.mount: Deactivated successfully. Feb 23 18:09:23 ip-10-0-136-68 systemd[1]: run-netns-b670c3a6\x2db916\x2d4c5e\x2d90ed\x2dbeaf19136950.mount: Deactivated successfully. 
Feb 23 18:09:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:23.264344798Z" level=info msg="runSandbox: deleting pod ID 0b4ab0b4615ae4ec1e38139f7f2cea4fba730126a3c712d02b4485da02dfa502 from idIndex" id=ef305891-85cb-442b-a98c-80f37497ede6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:09:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:23.264387289Z" level=info msg="runSandbox: removing pod sandbox 0b4ab0b4615ae4ec1e38139f7f2cea4fba730126a3c712d02b4485da02dfa502" id=ef305891-85cb-442b-a98c-80f37497ede6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:09:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:23.264439567Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 0b4ab0b4615ae4ec1e38139f7f2cea4fba730126a3c712d02b4485da02dfa502" id=ef305891-85cb-442b-a98c-80f37497ede6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:09:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:23.264463410Z" level=info msg="runSandbox: unmounting shmPath for sandbox 0b4ab0b4615ae4ec1e38139f7f2cea4fba730126a3c712d02b4485da02dfa502" id=ef305891-85cb-442b-a98c-80f37497ede6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:09:23 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-0b4ab0b4615ae4ec1e38139f7f2cea4fba730126a3c712d02b4485da02dfa502-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:09:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:23.270308371Z" level=info msg="runSandbox: removing pod sandbox from storage: 0b4ab0b4615ae4ec1e38139f7f2cea4fba730126a3c712d02b4485da02dfa502" id=ef305891-85cb-442b-a98c-80f37497ede6 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:09:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:23.272167535Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=ef305891-85cb-442b-a98c-80f37497ede6 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:09:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:23.272196341Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=ef305891-85cb-442b-a98c-80f37497ede6 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:09:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:09:23.272458 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(0b4ab0b4615ae4ec1e38139f7f2cea4fba730126a3c712d02b4485da02dfa502): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition"
Feb 23 18:09:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:09:23.272519 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(0b4ab0b4615ae4ec1e38139f7f2cea4fba730126a3c712d02b4485da02dfa502): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 18:09:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:09:23.272544 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(0b4ab0b4615ae4ec1e38139f7f2cea4fba730126a3c712d02b4485da02dfa502): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 18:09:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:09:23.272609 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(0b4ab0b4615ae4ec1e38139f7f2cea4fba730126a3c712d02b4485da02dfa502): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1
Feb 23 18:09:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:09:24.872392 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 18:09:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:09:24.872458 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 18:09:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:09:24.872487 2199 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7"
Feb 23 18:09:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:09:24.872997 2199 kuberuntime_manager.go:659] "Message for Container of pod" containerName="csi-driver" containerStatusID={Type:cri-o ID:a4dc7377b1daa58ce7538f6693ee830e6113cefa391de02dd0a13e8d514d3477} pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" containerMessage="Container csi-driver failed liveness probe, will be restarted"
Feb 23 18:09:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:09:24.873160 2199 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" containerID="cri-o://a4dc7377b1daa58ce7538f6693ee830e6113cefa391de02dd0a13e8d514d3477" gracePeriod=30
Feb 23 18:09:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:24.873395820Z" level=info msg="Stopping container: a4dc7377b1daa58ce7538f6693ee830e6113cefa391de02dd0a13e8d514d3477 (timeout: 30s)" id=a6de8de0-1c7d-4637-8f6f-f34f47f3cc68 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 18:09:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:09:26.291902 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:09:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:09:26.292197 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:09:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:09:26.292471 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:09:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:09:26.292500 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 18:09:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:28.634996476Z" level=warning msg="Failed to find container exit file for a4dc7377b1daa58ce7538f6693ee830e6113cefa391de02dd0a13e8d514d3477: timed out waiting for the condition" id=a6de8de0-1c7d-4637-8f6f-f34f47f3cc68 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 18:09:28 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-bb1ffded08cfb4cc10da3f8a92aa3983abcd2d529f0037fc06b4fa808f351e26-merged.mount: Deactivated successfully.
Feb 23 18:09:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:32.416912810Z" level=warning msg="Failed to find container exit file for a4dc7377b1daa58ce7538f6693ee830e6113cefa391de02dd0a13e8d514d3477: timed out waiting for the condition" id=a6de8de0-1c7d-4637-8f6f-f34f47f3cc68 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 18:09:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:32.419533484Z" level=info msg="Stopped container a4dc7377b1daa58ce7538f6693ee830e6113cefa391de02dd0a13e8d514d3477: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=a6de8de0-1c7d-4637-8f6f-f34f47f3cc68 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 18:09:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:32.420262784Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=b5591a85-ae7d-485f-b841-a16f436faf88 name=/runtime.v1.ImageService/ImageStatus
Feb 23 18:09:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:32.420444722Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=b5591a85-ae7d-485f-b841-a16f436faf88 name=/runtime.v1.ImageService/ImageStatus
Feb 23 18:09:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:32.421045343Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=dd2dd5ff-c3e9-4a32-9be0-8ff1753e6f0f name=/runtime.v1.ImageService/ImageStatus
Feb 23 18:09:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:32.421178864Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=dd2dd5ff-c3e9-4a32-9be0-8ff1753e6f0f name=/runtime.v1.ImageService/ImageStatus
Feb 23 18:09:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:32.421793631Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=9db3b6fa-19c6-4e67-b7f2-0608ab5fed21 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 18:09:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:32.421903110Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 18:09:32 ip-10-0-136-68 systemd[1]: Started crio-conmon-11443380605b661a4dfebb9ab08c55693d16e6187ab966f547576a47ecbaab03.scope.
Feb 23 18:09:32 ip-10-0-136-68 systemd[1]: Started libcontainer container 11443380605b661a4dfebb9ab08c55693d16e6187ab966f547576a47ecbaab03.
Feb 23 18:09:32 ip-10-0-136-68 conmon[6318]: conmon 11443380605b661a4dfe : Failed to write to cgroup.event_control Operation not supported
Feb 23 18:09:32 ip-10-0-136-68 systemd[1]: crio-conmon-11443380605b661a4dfebb9ab08c55693d16e6187ab966f547576a47ecbaab03.scope: Deactivated successfully.
Feb 23 18:09:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:32.569547564Z" level=info msg="Created container 11443380605b661a4dfebb9ab08c55693d16e6187ab966f547576a47ecbaab03: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=9db3b6fa-19c6-4e67-b7f2-0608ab5fed21 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 18:09:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:32.570074311Z" level=info msg="Starting container: 11443380605b661a4dfebb9ab08c55693d16e6187ab966f547576a47ecbaab03" id=3856f502-63c1-4ca8-9c30-50d2cb23e7a9 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 18:09:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:32.588544997Z" level=info msg="Started container" PID=6330 containerID=11443380605b661a4dfebb9ab08c55693d16e6187ab966f547576a47ecbaab03 description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver id=3856f502-63c1-4ca8-9c30-50d2cb23e7a9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4ac357af36e7dd4792f93249425e93fd84aed45840e9fbb3d0ae6c0af964798
Feb 23 18:09:32 ip-10-0-136-68 systemd[1]: crio-11443380605b661a4dfebb9ab08c55693d16e6187ab966f547576a47ecbaab03.scope: Deactivated successfully.
Feb 23 18:09:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:33.025893603Z" level=warning msg="Failed to find container exit file for a4dc7377b1daa58ce7538f6693ee830e6113cefa391de02dd0a13e8d514d3477: timed out waiting for the condition" id=521d20ea-b127-4cca-93e5-7ac0c2362aca name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 18:09:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:09:34.216812 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 18:09:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:34.217234345Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=e5ca78e7-7a6e-40a6-8a87-2a9d3efa05de name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:09:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:34.217320667Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 18:09:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:36.775144647Z" level=warning msg="Failed to find container exit file for d0ae2fe5258fb4e0801d51dec6a3ffb1f2a8f021fea7db783baef8dc204055a5: timed out waiting for the condition" id=511192fa-86f7-4117-a21f-7369419fa087 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 18:09:36 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:09:36.776126 2199 generic.go:332] "Generic (PLEG): container finished" podID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerID="a4dc7377b1daa58ce7538f6693ee830e6113cefa391de02dd0a13e8d514d3477" exitCode=-1
Feb 23 18:09:36 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:09:36.776162 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerDied Data:a4dc7377b1daa58ce7538f6693ee830e6113cefa391de02dd0a13e8d514d3477}
Feb 23 18:09:36 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:09:36.776190 2199 scope.go:115] "RemoveContainer" containerID="d0ae2fe5258fb4e0801d51dec6a3ffb1f2a8f021fea7db783baef8dc204055a5"
Feb 23 18:09:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:40.536067926Z" level=warning msg="Failed to find container exit file for d0ae2fe5258fb4e0801d51dec6a3ffb1f2a8f021fea7db783baef8dc204055a5: timed out waiting for the condition" id=2c49f585-8547-4fec-a85c-9182afcd7951 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 18:09:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:41.516873300Z" level=warning msg="Failed to find container exit file for a4dc7377b1daa58ce7538f6693ee830e6113cefa391de02dd0a13e8d514d3477: timed out waiting for the condition" id=e1d7ee05-3fb3-47bb-994f-343ccea29cd8 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 18:09:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:44.286126631Z" level=warning msg="Failed to find container exit file for d0ae2fe5258fb4e0801d51dec6a3ffb1f2a8f021fea7db783baef8dc204055a5: timed out waiting for the condition" id=63b60588-f037-462f-acac-518d138581c3 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 18:09:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:44.286716191Z" level=info msg="Removing container: d0ae2fe5258fb4e0801d51dec6a3ffb1f2a8f021fea7db783baef8dc204055a5" id=6a52f5e4-43f8-4f13-9b3f-bc6907127542 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 18:09:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:45.266996772Z" level=warning msg="Failed to find container exit file for d0ae2fe5258fb4e0801d51dec6a3ffb1f2a8f021fea7db783baef8dc204055a5: timed out waiting for the condition" id=57f9508e-362f-4d2f-8988-cd3d91548a16 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 18:09:45 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:09:45.267920 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerStarted Data:11443380605b661a4dfebb9ab08c55693d16e6187ab966f547576a47ecbaab03}
Feb 23 18:09:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:48.046011211Z" level=warning msg="Failed to find container exit file for d0ae2fe5258fb4e0801d51dec6a3ffb1f2a8f021fea7db783baef8dc204055a5: timed out waiting for the condition" id=6a52f5e4-43f8-4f13-9b3f-bc6907127542 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 18:09:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:48.057747799Z" level=info msg="Removed container d0ae2fe5258fb4e0801d51dec6a3ffb1f2a8f021fea7db783baef8dc204055a5: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=6a52f5e4-43f8-4f13-9b3f-bc6907127542 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 18:09:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:50.397748468Z" level=error msg="Failed to cleanup (probably retrying): failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(aa2f6c1cfe2015e661750ad0d84d67c8d8f79e105601e0a761db6a83ba33b3f1): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): netplugin failed with no error message: signal: killed"
Feb 23 18:09:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:50.398208469Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:c7e7147414edb3355914938381d5372dc50978cb8dee97fedf49714fee2931e9 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/a190488b-20ed-4bf6-a7f4-46a947081313 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:09:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:50.398269976Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:09:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:52.033969013Z" level=warning msg="Failed to find container exit file for a4dc7377b1daa58ce7538f6693ee830e6113cefa391de02dd0a13e8d514d3477: timed out waiting for the condition" id=65f3c795-bbca-47fb-944b-13d786c09937 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 18:09:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:53.234512010Z" level=info msg="NetworkStart: stopping network for sandbox 45761f3981427489d0b40170bbb405243cad6e919de0c5394de4b250076a76a2" id=24ed7819-8b55-472d-9d77-0b6ea9e44911 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:09:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:53.234632939Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:45761f3981427489d0b40170bbb405243cad6e919de0c5394de4b250076a76a2 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/798c88d7-e204-4393-bb60-b6fa5200d222 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:09:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:53.234661125Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 18:09:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:53.234668772Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 18:09:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:53.234678197Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:09:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:09:54.872437 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 18:09:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:09:54.872502 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 18:09:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:09:56.291750 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:09:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:09:56.292046 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:09:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:09:56.292314 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:09:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:09:56.292341 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 18:09:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:58.235582629Z" level=info msg="NetworkStart: stopping network for sandbox e69d625d2d42ca5a57fca7951f01e5b0127efcd8117adde13e4b1accf0485084" id=8ef91a46-897d-4f00-99f2-eb7dc873d553 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:09:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:58.235723103Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:e69d625d2d42ca5a57fca7951f01e5b0127efcd8117adde13e4b1accf0485084 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/9144bf6a-5dc4-4b53-a759-6c3e8f056e68 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:09:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:58.235761187Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 18:09:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:58.235774138Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 18:09:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:58.235785133Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:09:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:59.234495526Z" level=info msg="NetworkStart: stopping network for sandbox 8b867bce76d05368e746cb7ef8d337a262af286651221362f3167bf2965c6239" id=4fbd251a-b01f-42ad-b49b-766ce6819e11 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:09:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:59.234621747Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:8b867bce76d05368e746cb7ef8d337a262af286651221362f3167bf2965c6239 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/c0b530b9-ed3e-432c-ba46-07fc4cb9c01c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:09:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:59.234650206Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 18:09:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:59.234660187Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 18:09:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:09:59.234668154Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:10:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:03.235019451Z" level=info msg="NetworkStart: stopping network for sandbox e251877258999467f25cb9979efc21d9531e5b9657f84d99de570fa6db136210" id=e42402e9-4224-4a85-a951-5da13a2fb78b name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:10:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:03.235149667Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:e251877258999467f25cb9979efc21d9531e5b9657f84d99de570fa6db136210 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/2c6d1b72-b368-4a76-92a0-52b96802dff4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:10:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:03.235182922Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 18:10:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:03.235193157Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 18:10:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:03.235199656Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:10:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:10:04.872777 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 18:10:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:10:04.872838 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 18:10:07 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:10:07.217670 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:10:07 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:10:07.218078 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:10:07 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:10:07.218378 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:10:07 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:10:07.218419 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 18:10:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:10:14.872188 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 18:10:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:10:14.872282 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 18:10:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:20.178645277Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3" id=deb160c0-84a5-4e51-a02e-3332a6b1a06e name=/runtime.v1.ImageService/ImageStatus
Feb 23 18:10:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:20.178853813Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:51cb7555d0a960fc0daae74a5b5c47c19fd1ae7bd31e2616ebc88f696f511e4d,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3],Size_:351828271,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=deb160c0-84a5-4e51-a02e-3332a6b1a06e name=/runtime.v1.ImageService/ImageStatus
Feb 23 18:10:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:10:24.872712 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 18:10:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:10:24.872773 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 18:10:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:10:26.291786 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:10:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:10:26.292100 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:10:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:10:26.292346 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:10:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:10:26.292388 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 18:10:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:10:34.872330 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 18:10:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:10:34.872382 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 18:10:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:10:34.872411 2199 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7"
Feb 23 18:10:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:10:34.872886 2199 kuberuntime_manager.go:659] "Message for Container of pod" containerName="csi-driver" containerStatusID={Type:cri-o ID:11443380605b661a4dfebb9ab08c55693d16e6187ab966f547576a47ecbaab03} pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" containerMessage="Container csi-driver failed liveness probe, will be restarted"
Feb 23 18:10:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:10:34.873052 2199 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" containerID="cri-o://11443380605b661a4dfebb9ab08c55693d16e6187ab966f547576a47ecbaab03" gracePeriod=30
Feb 23 18:10:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:34.873316379Z" level=info msg="Stopping container: 11443380605b661a4dfebb9ab08c55693d16e6187ab966f547576a47ecbaab03 (timeout: 30s)" id=42c70441-6826-464b-af48-00c42adac137 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 18:10:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:35.409289021Z" level=info msg="NetworkStart: stopping network for sandbox c7e7147414edb3355914938381d5372dc50978cb8dee97fedf49714fee2931e9" id=e5ca78e7-7a6e-40a6-8a87-2a9d3efa05de name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:10:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:35.409415284Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:c7e7147414edb3355914938381d5372dc50978cb8dee97fedf49714fee2931e9 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/a190488b-20ed-4bf6-a7f4-46a947081313 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:10:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:35.409444060Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 18:10:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:35.409453001Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 18:10:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:35.409459938Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:10:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:38.243822013Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(45761f3981427489d0b40170bbb405243cad6e919de0c5394de4b250076a76a2): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: 
[openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=24ed7819-8b55-472d-9d77-0b6ea9e44911 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:10:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:38.243884013Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 45761f3981427489d0b40170bbb405243cad6e919de0c5394de4b250076a76a2" id=24ed7819-8b55-472d-9d77-0b6ea9e44911 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:10:38 ip-10-0-136-68 systemd[1]: run-utsns-798c88d7\x2de204\x2d4393\x2dbb60\x2db6fa5200d222.mount: Deactivated successfully. Feb 23 18:10:38 ip-10-0-136-68 systemd[1]: run-ipcns-798c88d7\x2de204\x2d4393\x2dbb60\x2db6fa5200d222.mount: Deactivated successfully. Feb 23 18:10:38 ip-10-0-136-68 systemd[1]: run-netns-798c88d7\x2de204\x2d4393\x2dbb60\x2db6fa5200d222.mount: Deactivated successfully. 
Feb 23 18:10:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:38.268321643Z" level=info msg="runSandbox: deleting pod ID 45761f3981427489d0b40170bbb405243cad6e919de0c5394de4b250076a76a2 from idIndex" id=24ed7819-8b55-472d-9d77-0b6ea9e44911 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:10:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:38.268358693Z" level=info msg="runSandbox: removing pod sandbox 45761f3981427489d0b40170bbb405243cad6e919de0c5394de4b250076a76a2" id=24ed7819-8b55-472d-9d77-0b6ea9e44911 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:10:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:38.268391352Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 45761f3981427489d0b40170bbb405243cad6e919de0c5394de4b250076a76a2" id=24ed7819-8b55-472d-9d77-0b6ea9e44911 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:10:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:38.268408418Z" level=info msg="runSandbox: unmounting shmPath for sandbox 45761f3981427489d0b40170bbb405243cad6e919de0c5394de4b250076a76a2" id=24ed7819-8b55-472d-9d77-0b6ea9e44911 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:10:38 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-45761f3981427489d0b40170bbb405243cad6e919de0c5394de4b250076a76a2-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:10:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:38.279346101Z" level=info msg="runSandbox: removing pod sandbox from storage: 45761f3981427489d0b40170bbb405243cad6e919de0c5394de4b250076a76a2" id=24ed7819-8b55-472d-9d77-0b6ea9e44911 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:10:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:38.280904700Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=24ed7819-8b55-472d-9d77-0b6ea9e44911 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:10:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:38.280933969Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=24ed7819-8b55-472d-9d77-0b6ea9e44911 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:10:38 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:10:38.281288 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(45761f3981427489d0b40170bbb405243cad6e919de0c5394de4b250076a76a2): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:10:38 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:10:38.281356 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(45761f3981427489d0b40170bbb405243cad6e919de0c5394de4b250076a76a2): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:10:38 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:10:38.281397 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(45761f3981427489d0b40170bbb405243cad6e919de0c5394de4b250076a76a2): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:10:38 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:10:38.281483 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(45761f3981427489d0b40170bbb405243cad6e919de0c5394de4b250076a76a2): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 18:10:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:38.633104535Z" level=warning msg="Failed to find container exit file for 11443380605b661a4dfebb9ab08c55693d16e6187ab966f547576a47ecbaab03: timed out waiting for the condition" id=42c70441-6826-464b-af48-00c42adac137 name=/runtime.v1.RuntimeService/StopContainer Feb 23 18:10:38 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-589aa8474cac0cfec2b1c597d0332548d19cc6d5dd3ae05f7abb47a553e81307-merged.mount: Deactivated successfully. 
Feb 23 18:10:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:42.430050705Z" level=warning msg="Failed to find container exit file for 11443380605b661a4dfebb9ab08c55693d16e6187ab966f547576a47ecbaab03: timed out waiting for the condition" id=42c70441-6826-464b-af48-00c42adac137 name=/runtime.v1.RuntimeService/StopContainer Feb 23 18:10:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:42.432491485Z" level=info msg="Stopped container 11443380605b661a4dfebb9ab08c55693d16e6187ab966f547576a47ecbaab03: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=42c70441-6826-464b-af48-00c42adac137 name=/runtime.v1.RuntimeService/StopContainer Feb 23 18:10:42 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:10:42.433075 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:10:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:42.851089388Z" level=warning msg="Failed to find container exit file for 11443380605b661a4dfebb9ab08c55693d16e6187ab966f547576a47ecbaab03: timed out waiting for the condition" id=6ada5094-ca9e-462d-b3d2-378f254d72dc name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:10:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:43.245689456Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(e69d625d2d42ca5a57fca7951f01e5b0127efcd8117adde13e4b1accf0485084): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: 
[openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=8ef91a46-897d-4f00-99f2-eb7dc873d553 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:10:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:43.245743214Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e69d625d2d42ca5a57fca7951f01e5b0127efcd8117adde13e4b1accf0485084" id=8ef91a46-897d-4f00-99f2-eb7dc873d553 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:10:43 ip-10-0-136-68 systemd[1]: run-utsns-9144bf6a\x2d5dc4\x2d4b53\x2da759\x2d6c3e8f056e68.mount: Deactivated successfully. Feb 23 18:10:43 ip-10-0-136-68 systemd[1]: run-ipcns-9144bf6a\x2d5dc4\x2d4b53\x2da759\x2d6c3e8f056e68.mount: Deactivated successfully. Feb 23 18:10:43 ip-10-0-136-68 systemd[1]: run-netns-9144bf6a\x2d5dc4\x2d4b53\x2da759\x2d6c3e8f056e68.mount: Deactivated successfully. Feb 23 18:10:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:43.268324327Z" level=info msg="runSandbox: deleting pod ID e69d625d2d42ca5a57fca7951f01e5b0127efcd8117adde13e4b1accf0485084 from idIndex" id=8ef91a46-897d-4f00-99f2-eb7dc873d553 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:10:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:43.268364945Z" level=info msg="runSandbox: removing pod sandbox e69d625d2d42ca5a57fca7951f01e5b0127efcd8117adde13e4b1accf0485084" id=8ef91a46-897d-4f00-99f2-eb7dc873d553 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:10:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:43.268405100Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e69d625d2d42ca5a57fca7951f01e5b0127efcd8117adde13e4b1accf0485084" id=8ef91a46-897d-4f00-99f2-eb7dc873d553 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:10:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:43.268427519Z" level=info msg="runSandbox: unmounting shmPath for sandbox 
e69d625d2d42ca5a57fca7951f01e5b0127efcd8117adde13e4b1accf0485084" id=8ef91a46-897d-4f00-99f2-eb7dc873d553 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:10:43 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-e69d625d2d42ca5a57fca7951f01e5b0127efcd8117adde13e4b1accf0485084-userdata-shm.mount: Deactivated successfully. Feb 23 18:10:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:43.274325335Z" level=info msg="runSandbox: removing pod sandbox from storage: e69d625d2d42ca5a57fca7951f01e5b0127efcd8117adde13e4b1accf0485084" id=8ef91a46-897d-4f00-99f2-eb7dc873d553 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:10:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:43.275963974Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=8ef91a46-897d-4f00-99f2-eb7dc873d553 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:10:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:43.275997476Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=8ef91a46-897d-4f00-99f2-eb7dc873d553 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:10:43 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:10:43.276290 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(e69d625d2d42ca5a57fca7951f01e5b0127efcd8117adde13e4b1accf0485084): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:10:43 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:10:43.276355 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(e69d625d2d42ca5a57fca7951f01e5b0127efcd8117adde13e4b1accf0485084): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:10:43 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:10:43.276383 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(e69d625d2d42ca5a57fca7951f01e5b0127efcd8117adde13e4b1accf0485084): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:10:43 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:10:43.276437 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(e69d625d2d42ca5a57fca7951f01e5b0127efcd8117adde13e4b1accf0485084): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 18:10:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:44.244693709Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(8b867bce76d05368e746cb7ef8d337a262af286651221362f3167bf2965c6239): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=4fbd251a-b01f-42ad-b49b-766ce6819e11 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:10:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:44.244743202Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 8b867bce76d05368e746cb7ef8d337a262af286651221362f3167bf2965c6239" id=4fbd251a-b01f-42ad-b49b-766ce6819e11 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:10:44 ip-10-0-136-68 systemd[1]: run-utsns-c0b530b9\x2ded3e\x2d432c\x2dba46\x2d07fc4cb9c01c.mount: Deactivated successfully. Feb 23 18:10:44 ip-10-0-136-68 systemd[1]: run-ipcns-c0b530b9\x2ded3e\x2d432c\x2dba46\x2d07fc4cb9c01c.mount: Deactivated successfully. Feb 23 18:10:44 ip-10-0-136-68 systemd[1]: run-netns-c0b530b9\x2ded3e\x2d432c\x2dba46\x2d07fc4cb9c01c.mount: Deactivated successfully. 
Feb 23 18:10:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:44.260324921Z" level=info msg="runSandbox: deleting pod ID 8b867bce76d05368e746cb7ef8d337a262af286651221362f3167bf2965c6239 from idIndex" id=4fbd251a-b01f-42ad-b49b-766ce6819e11 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:10:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:44.260359235Z" level=info msg="runSandbox: removing pod sandbox 8b867bce76d05368e746cb7ef8d337a262af286651221362f3167bf2965c6239" id=4fbd251a-b01f-42ad-b49b-766ce6819e11 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:10:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:44.260393614Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 8b867bce76d05368e746cb7ef8d337a262af286651221362f3167bf2965c6239" id=4fbd251a-b01f-42ad-b49b-766ce6819e11 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:10:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:44.260413841Z" level=info msg="runSandbox: unmounting shmPath for sandbox 8b867bce76d05368e746cb7ef8d337a262af286651221362f3167bf2965c6239" id=4fbd251a-b01f-42ad-b49b-766ce6819e11 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:10:44 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-8b867bce76d05368e746cb7ef8d337a262af286651221362f3167bf2965c6239-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:10:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:44.277305283Z" level=info msg="runSandbox: removing pod sandbox from storage: 8b867bce76d05368e746cb7ef8d337a262af286651221362f3167bf2965c6239" id=4fbd251a-b01f-42ad-b49b-766ce6819e11 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:10:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:44.278831050Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=4fbd251a-b01f-42ad-b49b-766ce6819e11 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:10:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:44.278862542Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=4fbd251a-b01f-42ad-b49b-766ce6819e11 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:10:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:10:44.279037 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(8b867bce76d05368e746cb7ef8d337a262af286651221362f3167bf2965c6239): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:10:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:10:44.279082 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(8b867bce76d05368e746cb7ef8d337a262af286651221362f3167bf2965c6239): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:10:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:10:44.279103 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(8b867bce76d05368e746cb7ef8d337a262af286651221362f3167bf2965c6239): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:10:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:10:44.279156 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(8b867bce76d05368e746cb7ef8d337a262af286651221362f3167bf2965c6239): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 18:10:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:46.599956457Z" level=warning msg="Failed to find container exit file for a4dc7377b1daa58ce7538f6693ee830e6113cefa391de02dd0a13e8d514d3477: timed out waiting for the condition" id=d7c2fda9-c36d-4009-8e75-524d18c0a01f name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:10:46 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:10:46.600955 2199 generic.go:332] "Generic (PLEG): container finished" podID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerID="11443380605b661a4dfebb9ab08c55693d16e6187ab966f547576a47ecbaab03" exitCode=-1 Feb 23 18:10:46 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:10:46.600998 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerDied Data:11443380605b661a4dfebb9ab08c55693d16e6187ab966f547576a47ecbaab03} Feb 23 18:10:46 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:10:46.601029 2199 scope.go:115] "RemoveContainer" containerID="a4dc7377b1daa58ce7538f6693ee830e6113cefa391de02dd0a13e8d514d3477" Feb 23 18:10:47 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:10:47.602664 2199 scope.go:115] "RemoveContainer" containerID="11443380605b661a4dfebb9ab08c55693d16e6187ab966f547576a47ecbaab03" Feb 23 18:10:47 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:10:47.603042 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:10:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 
18:10:48.245415423Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(e251877258999467f25cb9979efc21d9531e5b9657f84d99de570fa6db136210): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e42402e9-4224-4a85-a951-5da13a2fb78b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:10:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:48.245468141Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e251877258999467f25cb9979efc21d9531e5b9657f84d99de570fa6db136210" id=e42402e9-4224-4a85-a951-5da13a2fb78b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:10:48 ip-10-0-136-68 systemd[1]: run-utsns-2c6d1b72\x2db368\x2d4a76\x2d92a0\x2d52b96802dff4.mount: Deactivated successfully. Feb 23 18:10:48 ip-10-0-136-68 systemd[1]: run-ipcns-2c6d1b72\x2db368\x2d4a76\x2d92a0\x2d52b96802dff4.mount: Deactivated successfully. Feb 23 18:10:48 ip-10-0-136-68 systemd[1]: run-netns-2c6d1b72\x2db368\x2d4a76\x2d92a0\x2d52b96802dff4.mount: Deactivated successfully. 
Feb 23 18:10:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:48.288335284Z" level=info msg="runSandbox: deleting pod ID e251877258999467f25cb9979efc21d9531e5b9657f84d99de570fa6db136210 from idIndex" id=e42402e9-4224-4a85-a951-5da13a2fb78b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:10:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:48.288384857Z" level=info msg="runSandbox: removing pod sandbox e251877258999467f25cb9979efc21d9531e5b9657f84d99de570fa6db136210" id=e42402e9-4224-4a85-a951-5da13a2fb78b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:10:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:48.288422559Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e251877258999467f25cb9979efc21d9531e5b9657f84d99de570fa6db136210" id=e42402e9-4224-4a85-a951-5da13a2fb78b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:10:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:48.288441702Z" level=info msg="runSandbox: unmounting shmPath for sandbox e251877258999467f25cb9979efc21d9531e5b9657f84d99de570fa6db136210" id=e42402e9-4224-4a85-a951-5da13a2fb78b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:10:48 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-e251877258999467f25cb9979efc21d9531e5b9657f84d99de570fa6db136210-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:10:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:48.292348705Z" level=info msg="runSandbox: removing pod sandbox from storage: e251877258999467f25cb9979efc21d9531e5b9657f84d99de570fa6db136210" id=e42402e9-4224-4a85-a951-5da13a2fb78b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:10:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:48.293846273Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=e42402e9-4224-4a85-a951-5da13a2fb78b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:10:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:48.293880096Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=e42402e9-4224-4a85-a951-5da13a2fb78b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:10:48 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:10:48.294073 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(e251877258999467f25cb9979efc21d9531e5b9657f84d99de570fa6db136210): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:10:48 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:10:48.294128 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(e251877258999467f25cb9979efc21d9531e5b9657f84d99de570fa6db136210): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:10:48 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:10:48.294157 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(e251877258999467f25cb9979efc21d9531e5b9657f84d99de570fa6db136210): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:10:48 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:10:48.294213 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(e251877258999467f25cb9979efc21d9531e5b9657f84d99de570fa6db136210): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 18:10:50 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:10:50.217109 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:10:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:50.217556570Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=896f83ea-c581-40ff-8b56-4202556d7208 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:10:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:50.217611014Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:10:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:50.223127670Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:254d4b7b845de395b20ab42e7ee0dcb52c8ee5785e4d7167a56d33b216ed6f39 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/af87ea09-d9b3-4533-a5e2-a88e8df6a4dc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:10:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:50.223151485Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:10:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:50.361990559Z" level=warning msg="Failed to find container exit file for a4dc7377b1daa58ce7538f6693ee830e6113cefa391de02dd0a13e8d514d3477: timed out waiting for the condition" id=fc304350-9397-4354-aad3-77a05f197ba2 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:10:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:54.121966222Z" level=warning msg="Failed to find container exit file for a4dc7377b1daa58ce7538f6693ee830e6113cefa391de02dd0a13e8d514d3477: timed out waiting for the condition" id=478eed58-0489-48d8-9cf2-01bb2e9f82b3 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:10:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 
18:10:54.122529185Z" level=info msg="Removing container: a4dc7377b1daa58ce7538f6693ee830e6113cefa391de02dd0a13e8d514d3477" id=e7bf5395-4513-4be4-98eb-5751c4ff67e4 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 18:10:55 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:10:55.217448 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 18:10:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:55.217843123Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=8e4baab2-e9e5-4130-8da7-e2e515da3ec5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:10:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:55.217899673Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:10:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:55.223462397Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:ac18f96b6100249f4f0b9610036e7a1b2b3533158ea19d7b8388ab7d57d8f7ae UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/f4c84ae9-6c64-4b34-9844-6083bb426bf0 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:10:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:55.223496778Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:10:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:10:56.292367 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 
18:10:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:10:56.292643 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:10:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:10:56.292873 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:10:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:10:56.292902 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:10:57 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:10:57.216454 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:10:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:57.216886458Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=15e3f765-b141-4f86-95c2-eb2a2c6928e5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:10:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:57.216950406Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:10:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:57.222193160Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:9edb01c65b62ee3454abf4b9e04c6cb76080ed6c3f987c8cb1abe11e39774582 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/4ca795ea-2bbd-4bf4-8ca4-fa9d7080e274 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:10:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:57.222226943Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:10:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:57.872047364Z" level=warning msg="Failed to find container exit file for a4dc7377b1daa58ce7538f6693ee830e6113cefa391de02dd0a13e8d514d3477: timed out waiting for the condition" id=e7bf5395-4513-4be4-98eb-5751c4ff67e4 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 18:10:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:57.896657795Z" level=info msg="Removed container a4dc7377b1daa58ce7538f6693ee830e6113cefa391de02dd0a13e8d514d3477: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=e7bf5395-4513-4be4-98eb-5751c4ff67e4 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 18:10:59 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:10:59.217192 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:10:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:59.217601504Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=4e32d137-4ba3-4709-9a33-1d2a263fa89f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:10:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:59.217669003Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:10:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:59.223430767Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:e849f5ad6e6699ba4501a50c6c55d91c991b0d2d0fcd05a718f5b84da4c46109 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/20124175-f6a0-45a2-88f2-26395b37b2d8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:10:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:10:59.223464783Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:11:02 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:11:02.217271 2199 scope.go:115] "RemoveContainer" containerID="11443380605b661a4dfebb9ab08c55693d16e6187ab966f547576a47ecbaab03" Feb 23 18:11:02 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:11:02.217857 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:11:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:11:02.370958444Z" level=warning msg="Failed to find container exit file for 
11443380605b661a4dfebb9ab08c55693d16e6187ab966f547576a47ecbaab03: timed out waiting for the condition" id=ab97f840-12f6-46f3-962e-c91d96d4f12c name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:11:13 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:11:13.216399 2199 scope.go:115] "RemoveContainer" containerID="11443380605b661a4dfebb9ab08c55693d16e6187ab966f547576a47ecbaab03" Feb 23 18:11:13 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:11:13.216847 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:11:20 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:11:20.217453 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:11:20 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:11:20.217792 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:11:20 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:11:20.217996 
2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:11:20 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:11:20.218049 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:11:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:11:20.419650975Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(c7e7147414edb3355914938381d5372dc50978cb8dee97fedf49714fee2931e9): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e5ca78e7-7a6e-40a6-8a87-2a9d3efa05de name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:11:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:11:20.419732690Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 
c7e7147414edb3355914938381d5372dc50978cb8dee97fedf49714fee2931e9" id=e5ca78e7-7a6e-40a6-8a87-2a9d3efa05de name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:11:20 ip-10-0-136-68 systemd[1]: run-utsns-a190488b\x2d20ed\x2d4bf6\x2da7f4\x2d46a947081313.mount: Deactivated successfully. Feb 23 18:11:20 ip-10-0-136-68 systemd[1]: run-ipcns-a190488b\x2d20ed\x2d4bf6\x2da7f4\x2d46a947081313.mount: Deactivated successfully. Feb 23 18:11:20 ip-10-0-136-68 systemd[1]: run-netns-a190488b\x2d20ed\x2d4bf6\x2da7f4\x2d46a947081313.mount: Deactivated successfully. Feb 23 18:11:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:11:20.453335277Z" level=info msg="runSandbox: deleting pod ID c7e7147414edb3355914938381d5372dc50978cb8dee97fedf49714fee2931e9 from idIndex" id=e5ca78e7-7a6e-40a6-8a87-2a9d3efa05de name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:11:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:11:20.453381138Z" level=info msg="runSandbox: removing pod sandbox c7e7147414edb3355914938381d5372dc50978cb8dee97fedf49714fee2931e9" id=e5ca78e7-7a6e-40a6-8a87-2a9d3efa05de name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:11:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:11:20.453422448Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox c7e7147414edb3355914938381d5372dc50978cb8dee97fedf49714fee2931e9" id=e5ca78e7-7a6e-40a6-8a87-2a9d3efa05de name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:11:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:11:20.453441346Z" level=info msg="runSandbox: unmounting shmPath for sandbox c7e7147414edb3355914938381d5372dc50978cb8dee97fedf49714fee2931e9" id=e5ca78e7-7a6e-40a6-8a87-2a9d3efa05de name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:11:20 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-c7e7147414edb3355914938381d5372dc50978cb8dee97fedf49714fee2931e9-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:11:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:11:20.458319265Z" level=info msg="runSandbox: removing pod sandbox from storage: c7e7147414edb3355914938381d5372dc50978cb8dee97fedf49714fee2931e9" id=e5ca78e7-7a6e-40a6-8a87-2a9d3efa05de name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:11:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:11:20.459854538Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=e5ca78e7-7a6e-40a6-8a87-2a9d3efa05de name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:11:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:11:20.459887653Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=e5ca78e7-7a6e-40a6-8a87-2a9d3efa05de name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:11:20 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:11:20.460116 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(c7e7147414edb3355914938381d5372dc50978cb8dee97fedf49714fee2931e9): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:11:20 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:11:20.460181 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(c7e7147414edb3355914938381d5372dc50978cb8dee97fedf49714fee2931e9): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:11:20 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:11:20.460205 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(c7e7147414edb3355914938381d5372dc50978cb8dee97fedf49714fee2931e9): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:11:20 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:11:20.460310 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(c7e7147414edb3355914938381d5372dc50978cb8dee97fedf49714fee2931e9): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 18:11:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:11:26.292520 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:11:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:11:26.292744 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:11:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:11:26.292978 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:11:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:11:26.293008 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:11:27 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:11:27.216642 2199 scope.go:115] "RemoveContainer" containerID="11443380605b661a4dfebb9ab08c55693d16e6187ab966f547576a47ecbaab03" Feb 23 18:11:27 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:11:27.217101 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:11:32 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:11:32.216791 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:11:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:11:32.217226000Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=a9d93efe-60fa-4e7b-83ea-9889ead28d38 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:11:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:11:32.217328870Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:11:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:11:32.223011460Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:91de25e69303944ea84080b0ff29048cf677e3b6a837b19dc6e4226f917556b2 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/0bfd4e06-68a3-4c6d-ac11-f73d34acc43d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:11:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:11:32.223038267Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:11:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:11:35.235734587Z" level=info msg="NetworkStart: stopping network for sandbox 254d4b7b845de395b20ab42e7ee0dcb52c8ee5785e4d7167a56d33b216ed6f39" id=896f83ea-c581-40ff-8b56-4202556d7208 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:11:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:11:35.235870203Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:254d4b7b845de395b20ab42e7ee0dcb52c8ee5785e4d7167a56d33b216ed6f39 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/af87ea09-d9b3-4533-a5e2-a88e8df6a4dc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 
18:11:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:11:35.235911724Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:11:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:11:35.235923744Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:11:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:11:35.235933724Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:11:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:11:40.234977405Z" level=info msg="NetworkStart: stopping network for sandbox ac18f96b6100249f4f0b9610036e7a1b2b3533158ea19d7b8388ab7d57d8f7ae" id=8e4baab2-e9e5-4130-8da7-e2e515da3ec5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:11:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:11:40.235149985Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:ac18f96b6100249f4f0b9610036e7a1b2b3533158ea19d7b8388ab7d57d8f7ae UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/f4c84ae9-6c64-4b34-9844-6083bb426bf0 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:11:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:11:40.235188854Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:11:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:11:40.235200291Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:11:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:11:40.235209613Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:11:41 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:11:41.216401 2199 scope.go:115] "RemoveContainer" 
containerID="11443380605b661a4dfebb9ab08c55693d16e6187ab966f547576a47ecbaab03" Feb 23 18:11:41 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:11:41.216778 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:11:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:11:42.234089024Z" level=info msg="NetworkStart: stopping network for sandbox 9edb01c65b62ee3454abf4b9e04c6cb76080ed6c3f987c8cb1abe11e39774582" id=15e3f765-b141-4f86-95c2-eb2a2c6928e5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:11:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:11:42.234200867Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:9edb01c65b62ee3454abf4b9e04c6cb76080ed6c3f987c8cb1abe11e39774582 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/4ca795ea-2bbd-4bf4-8ca4-fa9d7080e274 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:11:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:11:42.234228367Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:11:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:11:42.234237347Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:11:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:11:42.234270754Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:11:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:11:44.235392267Z" level=info msg="NetworkStart: stopping network for 
sandbox e849f5ad6e6699ba4501a50c6c55d91c991b0d2d0fcd05a718f5b84da4c46109" id=4e32d137-4ba3-4709-9a33-1d2a263fa89f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:11:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:11:44.235503735Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:e849f5ad6e6699ba4501a50c6c55d91c991b0d2d0fcd05a718f5b84da4c46109 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/20124175-f6a0-45a2-88f2-26395b37b2d8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:11:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:11:44.235551458Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:11:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:11:44.235560919Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:11:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:11:44.235568039Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:11:55 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:11:55.216804 2199 scope.go:115] "RemoveContainer" containerID="11443380605b661a4dfebb9ab08c55693d16e6187ab966f547576a47ecbaab03" Feb 23 18:11:55 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:11:55.217191 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:11:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:11:56.292083 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc 
error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:11:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:11:56.292328 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:11:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:11:56.292534 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:11:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:11:56.292573 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:12:06 ip-10-0-136-68 
kubenswrapper[2199]: I0223 18:12:06.217480 2199 scope.go:115] "RemoveContainer" containerID="11443380605b661a4dfebb9ab08c55693d16e6187ab966f547576a47ecbaab03" Feb 23 18:12:06 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:12:06.219475 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:12:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:16.362897304Z" level=info msg="cleanup sandbox network" Feb 23 18:12:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:17.236470235Z" level=info msg="NetworkStart: stopping network for sandbox 91de25e69303944ea84080b0ff29048cf677e3b6a837b19dc6e4226f917556b2" id=a9d93efe-60fa-4e7b-83ea-9889ead28d38 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:12:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:17.236519230Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:aa2f6c1cfe2015e661750ad0d84d67c8d8f79e105601e0a761db6a83ba33b3f1 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS: Networks:[{Name:multus-cni-network Ifname:eth0}] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:12:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:17.236668446Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:12:18 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:12:18.216714 2199 scope.go:115] "RemoveContainer" containerID="11443380605b661a4dfebb9ab08c55693d16e6187ab966f547576a47ecbaab03" Feb 23 18:12:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:12:18.217336 2199 
pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:12:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:20.246311651Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(254d4b7b845de395b20ab42e7ee0dcb52c8ee5785e4d7167a56d33b216ed6f39): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=896f83ea-c581-40ff-8b56-4202556d7208 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:12:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:20.246359007Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 254d4b7b845de395b20ab42e7ee0dcb52c8ee5785e4d7167a56d33b216ed6f39" id=896f83ea-c581-40ff-8b56-4202556d7208 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:12:20 ip-10-0-136-68 systemd[1]: run-utsns-af87ea09\x2dd9b3\x2d4533\x2da5e2\x2da88e8df6a4dc.mount: Deactivated successfully. Feb 23 18:12:20 ip-10-0-136-68 systemd[1]: run-ipcns-af87ea09\x2dd9b3\x2d4533\x2da5e2\x2da88e8df6a4dc.mount: Deactivated successfully. Feb 23 18:12:20 ip-10-0-136-68 systemd[1]: run-netns-af87ea09\x2dd9b3\x2d4533\x2da5e2\x2da88e8df6a4dc.mount: Deactivated successfully. 
Feb 23 18:12:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:20.267319789Z" level=info msg="runSandbox: deleting pod ID 254d4b7b845de395b20ab42e7ee0dcb52c8ee5785e4d7167a56d33b216ed6f39 from idIndex" id=896f83ea-c581-40ff-8b56-4202556d7208 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:12:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:20.267355987Z" level=info msg="runSandbox: removing pod sandbox 254d4b7b845de395b20ab42e7ee0dcb52c8ee5785e4d7167a56d33b216ed6f39" id=896f83ea-c581-40ff-8b56-4202556d7208 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:12:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:20.267383650Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 254d4b7b845de395b20ab42e7ee0dcb52c8ee5785e4d7167a56d33b216ed6f39" id=896f83ea-c581-40ff-8b56-4202556d7208 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:12:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:20.267399959Z" level=info msg="runSandbox: unmounting shmPath for sandbox 254d4b7b845de395b20ab42e7ee0dcb52c8ee5785e4d7167a56d33b216ed6f39" id=896f83ea-c581-40ff-8b56-4202556d7208 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:12:20 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-254d4b7b845de395b20ab42e7ee0dcb52c8ee5785e4d7167a56d33b216ed6f39-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:12:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:20.272318931Z" level=info msg="runSandbox: removing pod sandbox from storage: 254d4b7b845de395b20ab42e7ee0dcb52c8ee5785e4d7167a56d33b216ed6f39" id=896f83ea-c581-40ff-8b56-4202556d7208 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:12:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:20.273956940Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=896f83ea-c581-40ff-8b56-4202556d7208 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:12:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:20.273989078Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=896f83ea-c581-40ff-8b56-4202556d7208 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:12:20 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:12:20.274206 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(254d4b7b845de395b20ab42e7ee0dcb52c8ee5785e4d7167a56d33b216ed6f39): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:12:20 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:12:20.274292 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(254d4b7b845de395b20ab42e7ee0dcb52c8ee5785e4d7167a56d33b216ed6f39): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:12:20 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:12:20.274334 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(254d4b7b845de395b20ab42e7ee0dcb52c8ee5785e4d7167a56d33b216ed6f39): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:12:20 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:12:20.274427 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(254d4b7b845de395b20ab42e7ee0dcb52c8ee5785e4d7167a56d33b216ed6f39): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 18:12:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:25.244358196Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(ac18f96b6100249f4f0b9610036e7a1b2b3533158ea19d7b8388ab7d57d8f7ae): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=8e4baab2-e9e5-4130-8da7-e2e515da3ec5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:12:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:25.244406226Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox ac18f96b6100249f4f0b9610036e7a1b2b3533158ea19d7b8388ab7d57d8f7ae" id=8e4baab2-e9e5-4130-8da7-e2e515da3ec5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:12:25 ip-10-0-136-68 systemd[1]: run-utsns-f4c84ae9\x2d6c64\x2d4b34\x2d9844\x2d6083bb426bf0.mount: Deactivated successfully. Feb 23 18:12:25 ip-10-0-136-68 systemd[1]: run-ipcns-f4c84ae9\x2d6c64\x2d4b34\x2d9844\x2d6083bb426bf0.mount: Deactivated successfully. Feb 23 18:12:25 ip-10-0-136-68 systemd[1]: run-netns-f4c84ae9\x2d6c64\x2d4b34\x2d9844\x2d6083bb426bf0.mount: Deactivated successfully. 
Feb 23 18:12:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:25.264318317Z" level=info msg="runSandbox: deleting pod ID ac18f96b6100249f4f0b9610036e7a1b2b3533158ea19d7b8388ab7d57d8f7ae from idIndex" id=8e4baab2-e9e5-4130-8da7-e2e515da3ec5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:12:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:25.264354495Z" level=info msg="runSandbox: removing pod sandbox ac18f96b6100249f4f0b9610036e7a1b2b3533158ea19d7b8388ab7d57d8f7ae" id=8e4baab2-e9e5-4130-8da7-e2e515da3ec5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:12:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:25.264382047Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox ac18f96b6100249f4f0b9610036e7a1b2b3533158ea19d7b8388ab7d57d8f7ae" id=8e4baab2-e9e5-4130-8da7-e2e515da3ec5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:12:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:25.264398618Z" level=info msg="runSandbox: unmounting shmPath for sandbox ac18f96b6100249f4f0b9610036e7a1b2b3533158ea19d7b8388ab7d57d8f7ae" id=8e4baab2-e9e5-4130-8da7-e2e515da3ec5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:12:25 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-ac18f96b6100249f4f0b9610036e7a1b2b3533158ea19d7b8388ab7d57d8f7ae-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:12:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:25.268296174Z" level=info msg="runSandbox: removing pod sandbox from storage: ac18f96b6100249f4f0b9610036e7a1b2b3533158ea19d7b8388ab7d57d8f7ae" id=8e4baab2-e9e5-4130-8da7-e2e515da3ec5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:12:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:25.269812888Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=8e4baab2-e9e5-4130-8da7-e2e515da3ec5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:12:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:25.269840843Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=8e4baab2-e9e5-4130-8da7-e2e515da3ec5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:12:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:12:25.270070 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(ac18f96b6100249f4f0b9610036e7a1b2b3533158ea19d7b8388ab7d57d8f7ae): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:12:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:12:25.270132 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(ac18f96b6100249f4f0b9610036e7a1b2b3533158ea19d7b8388ab7d57d8f7ae): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:12:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:12:25.270153 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(ac18f96b6100249f4f0b9610036e7a1b2b3533158ea19d7b8388ab7d57d8f7ae): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:12:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:12:25.270212 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(ac18f96b6100249f4f0b9610036e7a1b2b3533158ea19d7b8388ab7d57d8f7ae): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 18:12:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:12:26.292347 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:12:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:12:26.292606 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:12:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:12:26.292843 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:12:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:12:26.292884 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:12:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:27.244452375Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(9edb01c65b62ee3454abf4b9e04c6cb76080ed6c3f987c8cb1abe11e39774582): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=15e3f765-b141-4f86-95c2-eb2a2c6928e5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:12:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:27.244496517Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9edb01c65b62ee3454abf4b9e04c6cb76080ed6c3f987c8cb1abe11e39774582" id=15e3f765-b141-4f86-95c2-eb2a2c6928e5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:12:27 ip-10-0-136-68 systemd[1]: run-utsns-4ca795ea\x2d2bbd\x2d4bf4\x2d8ca4\x2dfa9d7080e274.mount: Deactivated successfully. Feb 23 18:12:27 ip-10-0-136-68 systemd[1]: run-ipcns-4ca795ea\x2d2bbd\x2d4bf4\x2d8ca4\x2dfa9d7080e274.mount: Deactivated successfully. Feb 23 18:12:27 ip-10-0-136-68 systemd[1]: run-netns-4ca795ea\x2d2bbd\x2d4bf4\x2d8ca4\x2dfa9d7080e274.mount: Deactivated successfully. 
Feb 23 18:12:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:27.264320841Z" level=info msg="runSandbox: deleting pod ID 9edb01c65b62ee3454abf4b9e04c6cb76080ed6c3f987c8cb1abe11e39774582 from idIndex" id=15e3f765-b141-4f86-95c2-eb2a2c6928e5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:12:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:27.264365252Z" level=info msg="runSandbox: removing pod sandbox 9edb01c65b62ee3454abf4b9e04c6cb76080ed6c3f987c8cb1abe11e39774582" id=15e3f765-b141-4f86-95c2-eb2a2c6928e5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:12:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:27.264395755Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9edb01c65b62ee3454abf4b9e04c6cb76080ed6c3f987c8cb1abe11e39774582" id=15e3f765-b141-4f86-95c2-eb2a2c6928e5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:12:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:27.264409174Z" level=info msg="runSandbox: unmounting shmPath for sandbox 9edb01c65b62ee3454abf4b9e04c6cb76080ed6c3f987c8cb1abe11e39774582" id=15e3f765-b141-4f86-95c2-eb2a2c6928e5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:12:27 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-9edb01c65b62ee3454abf4b9e04c6cb76080ed6c3f987c8cb1abe11e39774582-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:12:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:27.279341188Z" level=info msg="runSandbox: removing pod sandbox from storage: 9edb01c65b62ee3454abf4b9e04c6cb76080ed6c3f987c8cb1abe11e39774582" id=15e3f765-b141-4f86-95c2-eb2a2c6928e5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:12:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:27.280885478Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=15e3f765-b141-4f86-95c2-eb2a2c6928e5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:12:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:27.280918098Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=15e3f765-b141-4f86-95c2-eb2a2c6928e5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:12:27 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:12:27.281140 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(9edb01c65b62ee3454abf4b9e04c6cb76080ed6c3f987c8cb1abe11e39774582): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:12:27 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:12:27.281197 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(9edb01c65b62ee3454abf4b9e04c6cb76080ed6c3f987c8cb1abe11e39774582): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:12:27 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:12:27.281221 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(9edb01c65b62ee3454abf4b9e04c6cb76080ed6c3f987c8cb1abe11e39774582): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:12:27 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:12:27.281323 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(9edb01c65b62ee3454abf4b9e04c6cb76080ed6c3f987c8cb1abe11e39774582): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 18:12:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:29.245214519Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(e849f5ad6e6699ba4501a50c6c55d91c991b0d2d0fcd05a718f5b84da4c46109): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=4e32d137-4ba3-4709-9a33-1d2a263fa89f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:12:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:29.245289050Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e849f5ad6e6699ba4501a50c6c55d91c991b0d2d0fcd05a718f5b84da4c46109" id=4e32d137-4ba3-4709-9a33-1d2a263fa89f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:12:29 ip-10-0-136-68 systemd[1]: run-utsns-20124175\x2df6a0\x2d45a2\x2d88f2\x2d26395b37b2d8.mount: Deactivated successfully. Feb 23 18:12:29 ip-10-0-136-68 systemd[1]: run-ipcns-20124175\x2df6a0\x2d45a2\x2d88f2\x2d26395b37b2d8.mount: Deactivated successfully. Feb 23 18:12:29 ip-10-0-136-68 systemd[1]: run-netns-20124175\x2df6a0\x2d45a2\x2d88f2\x2d26395b37b2d8.mount: Deactivated successfully. 
Feb 23 18:12:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:29.278335904Z" level=info msg="runSandbox: deleting pod ID e849f5ad6e6699ba4501a50c6c55d91c991b0d2d0fcd05a718f5b84da4c46109 from idIndex" id=4e32d137-4ba3-4709-9a33-1d2a263fa89f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:12:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:29.278380399Z" level=info msg="runSandbox: removing pod sandbox e849f5ad6e6699ba4501a50c6c55d91c991b0d2d0fcd05a718f5b84da4c46109" id=4e32d137-4ba3-4709-9a33-1d2a263fa89f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:12:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:29.278414955Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e849f5ad6e6699ba4501a50c6c55d91c991b0d2d0fcd05a718f5b84da4c46109" id=4e32d137-4ba3-4709-9a33-1d2a263fa89f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:12:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:29.278436409Z" level=info msg="runSandbox: unmounting shmPath for sandbox e849f5ad6e6699ba4501a50c6c55d91c991b0d2d0fcd05a718f5b84da4c46109" id=4e32d137-4ba3-4709-9a33-1d2a263fa89f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:12:29 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-e849f5ad6e6699ba4501a50c6c55d91c991b0d2d0fcd05a718f5b84da4c46109-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:12:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:29.290305372Z" level=info msg="runSandbox: removing pod sandbox from storage: e849f5ad6e6699ba4501a50c6c55d91c991b0d2d0fcd05a718f5b84da4c46109" id=4e32d137-4ba3-4709-9a33-1d2a263fa89f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:12:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:29.291726232Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=4e32d137-4ba3-4709-9a33-1d2a263fa89f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:12:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:29.291755081Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=4e32d137-4ba3-4709-9a33-1d2a263fa89f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:12:29 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:12:29.291958 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(e849f5ad6e6699ba4501a50c6c55d91c991b0d2d0fcd05a718f5b84da4c46109): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:12:29 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:12:29.292029 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(e849f5ad6e6699ba4501a50c6c55d91c991b0d2d0fcd05a718f5b84da4c46109): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:12:29 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:12:29.292071 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(e849f5ad6e6699ba4501a50c6c55d91c991b0d2d0fcd05a718f5b84da4c46109): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:12:29 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:12:29.292152 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(e849f5ad6e6699ba4501a50c6c55d91c991b0d2d0fcd05a718f5b84da4c46109): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 18:12:32 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:12:32.217157 2199 scope.go:115] "RemoveContainer" containerID="11443380605b661a4dfebb9ab08c55693d16e6187ab966f547576a47ecbaab03" Feb 23 18:12:32 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:12:32.217779 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:12:35 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:12:35.217037 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:12:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:35.217434056Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=9a774c3b-78d5-41f4-b9eb-fb81c22d1e87 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:12:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:35.217494264Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:12:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:35.222972316Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:9e3ddbe26a6b939bbadb034d2440989e7e43cea9c871ca5181b4ea748cfee5ca UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/3a55f793-5630-4485-bb94-6e4fdecc18ab Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:12:35 
ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:35.223011708Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:12:38 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:12:38.216386 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:12:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:38.216833226Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=9e5dac84-937c-47c4-a342-4d1f199eeb5f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:12:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:38.216903672Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:12:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:38.222425550Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:495c6ff9c923b2a0a2fde29cdde17f4d1ff4f006d2bab5e220e3804515064055 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/6d265035-7f5d-4ec8-be50-d14a38a142c1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:12:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:38.222458879Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:12:39 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:12:39.216844 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 18:12:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:39.217271065Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=ed5bc8a0-3b01-4eea-aa72-821eb34b4ff6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:12:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:39.217326601Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:12:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:39.222648257Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:d6eb37e866b200098deee79a1c28a19da68decbe162c6262892963d147666407 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/88b2cda7-c43a-4d46-bf20-1319f11a6653 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:12:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:39.222676013Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:12:42 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:12:42.216695 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:12:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:42.217223810Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=fc3d5e29-6715-4d81-899c-85731a0515ef name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:12:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:42.217328262Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:12:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:42.223132191Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:12dab4c8cb98ec4153d60a002915eb684941586e6090591b3fab966777633b6f UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/da21da1e-b0ac-454c-ade9-c8eb48ba6722 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:12:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:12:42.223158039Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:12:43 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:12:43.216566 2199 scope.go:115] "RemoveContainer" containerID="11443380605b661a4dfebb9ab08c55693d16e6187ab966f547576a47ecbaab03" Feb 23 18:12:43 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:12:43.216948 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:12:45 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:12:45.217306 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc 
error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:12:45 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:12:45.217672 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:12:45 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:12:45.217933 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:12:45 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:12:45.217970 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:12:56 ip-10-0-136-68 
kubenswrapper[2199]: I0223 18:12:56.217070 2199 scope.go:115] "RemoveContainer" containerID="11443380605b661a4dfebb9ab08c55693d16e6187ab966f547576a47ecbaab03" Feb 23 18:12:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:12:56.217710 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:12:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:12:56.292130 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:12:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:12:56.292449 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:12:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:12:56.292695 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:12:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:12:56.292722 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:13:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:13:02.247047294Z" level=error msg="Failed to cleanup (probably retrying): failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(aa2f6c1cfe2015e661750ad0d84d67c8d8f79e105601e0a761db6a83ba33b3f1): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" Feb 23 18:13:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:13:02.247110959Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:91de25e69303944ea84080b0ff29048cf677e3b6a837b19dc6e4226f917556b2 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/0bfd4e06-68a3-4c6d-ac11-f73d34acc43d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] 
Aliases:map[]}" Feb 23 18:13:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:13:02.247150930Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:13:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:13:02.247157765Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:13:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:13:02.247166677Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:13:07 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:13:07.216564 2199 scope.go:115] "RemoveContainer" containerID="11443380605b661a4dfebb9ab08c55693d16e6187ab966f547576a47ecbaab03" Feb 23 18:13:07 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:13:07.217117 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:13:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:13:17.237231064Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(91de25e69303944ea84080b0ff29048cf677e3b6a837b19dc6e4226f917556b2): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): netplugin failed with no error message: signal: killed" id=a9d93efe-60fa-4e7b-83ea-9889ead28d38 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:13:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:13:17.237309345Z" 
level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 91de25e69303944ea84080b0ff29048cf677e3b6a837b19dc6e4226f917556b2" id=a9d93efe-60fa-4e7b-83ea-9889ead28d38 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:13:17 ip-10-0-136-68 systemd[1]: run-utsns-0bfd4e06\x2d68a3\x2d4c6d\x2dac11\x2df73d34acc43d.mount: Deactivated successfully. Feb 23 18:13:17 ip-10-0-136-68 systemd[1]: run-ipcns-0bfd4e06\x2d68a3\x2d4c6d\x2dac11\x2df73d34acc43d.mount: Deactivated successfully. Feb 23 18:13:17 ip-10-0-136-68 systemd[1]: run-netns-0bfd4e06\x2d68a3\x2d4c6d\x2dac11\x2df73d34acc43d.mount: Deactivated successfully. Feb 23 18:13:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:13:17.273363582Z" level=info msg="runSandbox: deleting pod ID 91de25e69303944ea84080b0ff29048cf677e3b6a837b19dc6e4226f917556b2 from idIndex" id=a9d93efe-60fa-4e7b-83ea-9889ead28d38 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:13:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:13:17.273414238Z" level=info msg="runSandbox: removing pod sandbox 91de25e69303944ea84080b0ff29048cf677e3b6a837b19dc6e4226f917556b2" id=a9d93efe-60fa-4e7b-83ea-9889ead28d38 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:13:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:13:17.273451751Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 91de25e69303944ea84080b0ff29048cf677e3b6a837b19dc6e4226f917556b2" id=a9d93efe-60fa-4e7b-83ea-9889ead28d38 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:13:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:13:17.273469826Z" level=info msg="runSandbox: unmounting shmPath for sandbox 91de25e69303944ea84080b0ff29048cf677e3b6a837b19dc6e4226f917556b2" id=a9d93efe-60fa-4e7b-83ea-9889ead28d38 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:13:17 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-91de25e69303944ea84080b0ff29048cf677e3b6a837b19dc6e4226f917556b2-userdata-shm.mount: Deactivated 
successfully. Feb 23 18:13:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:13:17.278308061Z" level=info msg="runSandbox: removing pod sandbox from storage: 91de25e69303944ea84080b0ff29048cf677e3b6a837b19dc6e4226f917556b2" id=a9d93efe-60fa-4e7b-83ea-9889ead28d38 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:13:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:13:17.279992282Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=a9d93efe-60fa-4e7b-83ea-9889ead28d38 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:13:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:13:17.280022870Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=a9d93efe-60fa-4e7b-83ea-9889ead28d38 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:13:17 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:13:17.280269 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(91de25e69303944ea84080b0ff29048cf677e3b6a837b19dc6e4226f917556b2): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:13:17 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:13:17.280391 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(91de25e69303944ea84080b0ff29048cf677e3b6a837b19dc6e4226f917556b2): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:13:17 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:13:17.280451 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(91de25e69303944ea84080b0ff29048cf677e3b6a837b19dc6e4226f917556b2): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:13:17 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:13:17.280540 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(91de25e69303944ea84080b0ff29048cf677e3b6a837b19dc6e4226f917556b2): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 18:13:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:13:20.216995 2199 scope.go:115] "RemoveContainer" containerID="11443380605b661a4dfebb9ab08c55693d16e6187ab966f547576a47ecbaab03" Feb 23 18:13:20 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:13:20.217567 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:13:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:13:20.234583113Z" level=info msg="NetworkStart: stopping network for sandbox 9e3ddbe26a6b939bbadb034d2440989e7e43cea9c871ca5181b4ea748cfee5ca" id=9a774c3b-78d5-41f4-b9eb-fb81c22d1e87 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:13:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:13:20.234708476Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:9e3ddbe26a6b939bbadb034d2440989e7e43cea9c871ca5181b4ea748cfee5ca UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/3a55f793-5630-4485-bb94-6e4fdecc18ab Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:13:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:13:20.234733469Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:13:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:13:20.234740779Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:13:20 ip-10-0-136-68 crio[2158]: 
time="2023-02-23 18:13:20.234747264Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:13:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:13:23.235979192Z" level=info msg="NetworkStart: stopping network for sandbox 495c6ff9c923b2a0a2fde29cdde17f4d1ff4f006d2bab5e220e3804515064055" id=9e5dac84-937c-47c4-a342-4d1f199eeb5f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:13:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:13:23.236108350Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:495c6ff9c923b2a0a2fde29cdde17f4d1ff4f006d2bab5e220e3804515064055 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/6d265035-7f5d-4ec8-be50-d14a38a142c1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:13:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:13:23.236139118Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:13:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:13:23.236150003Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:13:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:13:23.236156764Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:13:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:13:24.234240240Z" level=info msg="NetworkStart: stopping network for sandbox d6eb37e866b200098deee79a1c28a19da68decbe162c6262892963d147666407" id=ed5bc8a0-3b01-4eea-aa72-821eb34b4ff6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:13:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:13:24.234382502Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns 
ID:d6eb37e866b200098deee79a1c28a19da68decbe162c6262892963d147666407 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/88b2cda7-c43a-4d46-bf20-1319f11a6653 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:13:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:13:24.234415277Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:13:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:13:24.234424883Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:13:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:13:24.234435100Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:13:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:13:26.292503 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:13:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:13:26.292802 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:13:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:13:26.293010 2199 remote_runtime.go:479] 
"ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:13:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:13:26.293040 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:13:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:13:27.234535085Z" level=info msg="NetworkStart: stopping network for sandbox 12dab4c8cb98ec4153d60a002915eb684941586e6090591b3fab966777633b6f" id=fc3d5e29-6715-4d81-899c-85731a0515ef name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:13:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:13:27.234684006Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:12dab4c8cb98ec4153d60a002915eb684941586e6090591b3fab966777633b6f UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/da21da1e-b0ac-454c-ade9-c8eb48ba6722 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:13:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:13:27.234724694Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:13:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:13:27.234738180Z" level=warning msg="falling back to 
loading from existing plugins on disk" Feb 23 18:13:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:13:27.234748646Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:13:31 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:13:31.216878 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:13:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:13:31.217365075Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=04f3cd1e-f24c-4b22-9e16-7ee79d5ed2cb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:13:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:13:31.217435086Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:13:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:13:31.222456097Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:ead5b9baab0fe3e300bb6d33fe0bdac7d8ac92f2bbfce91dcb9c5626c52db8d3 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/ce944992-0165-4120-9fd8-b3fed088098b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:13:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:13:31.222480656Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:13:32 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:13:32.218209 2199 scope.go:115] "RemoveContainer" containerID="11443380605b661a4dfebb9ab08c55693d16e6187ab966f547576a47ecbaab03" Feb 23 18:13:32 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:13:32.220176 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:13:45 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:13:45.216538 2199 scope.go:115] "RemoveContainer" containerID="11443380605b661a4dfebb9ab08c55693d16e6187ab966f547576a47ecbaab03" Feb 23 18:13:45 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:13:45.216918 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:13:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:13:56.291999 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:13:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:13:56.292209 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f 
/etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:13:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:13:56.292408 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:13:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:13:56.292442 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:13:58 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:13:58.217017 2199 scope.go:115] "RemoveContainer" containerID="11443380605b661a4dfebb9ab08c55693d16e6187ab966f547576a47ecbaab03" Feb 23 18:13:58 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:13:58.217466 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:14:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:05.244628081Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox 
k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(9e3ddbe26a6b939bbadb034d2440989e7e43cea9c871ca5181b4ea748cfee5ca): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=9a774c3b-78d5-41f4-b9eb-fb81c22d1e87 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:14:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:05.244705523Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9e3ddbe26a6b939bbadb034d2440989e7e43cea9c871ca5181b4ea748cfee5ca" id=9a774c3b-78d5-41f4-b9eb-fb81c22d1e87 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:14:05 ip-10-0-136-68 systemd[1]: run-utsns-3a55f793\x2d5630\x2d4485\x2dbb94\x2d6e4fdecc18ab.mount: Deactivated successfully. Feb 23 18:14:05 ip-10-0-136-68 systemd[1]: run-ipcns-3a55f793\x2d5630\x2d4485\x2dbb94\x2d6e4fdecc18ab.mount: Deactivated successfully. Feb 23 18:14:05 ip-10-0-136-68 systemd[1]: run-netns-3a55f793\x2d5630\x2d4485\x2dbb94\x2d6e4fdecc18ab.mount: Deactivated successfully. 
Feb 23 18:14:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:05.276330955Z" level=info msg="runSandbox: deleting pod ID 9e3ddbe26a6b939bbadb034d2440989e7e43cea9c871ca5181b4ea748cfee5ca from idIndex" id=9a774c3b-78d5-41f4-b9eb-fb81c22d1e87 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:14:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:05.276375898Z" level=info msg="runSandbox: removing pod sandbox 9e3ddbe26a6b939bbadb034d2440989e7e43cea9c871ca5181b4ea748cfee5ca" id=9a774c3b-78d5-41f4-b9eb-fb81c22d1e87 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:14:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:05.276427271Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9e3ddbe26a6b939bbadb034d2440989e7e43cea9c871ca5181b4ea748cfee5ca" id=9a774c3b-78d5-41f4-b9eb-fb81c22d1e87 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:14:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:05.276447830Z" level=info msg="runSandbox: unmounting shmPath for sandbox 9e3ddbe26a6b939bbadb034d2440989e7e43cea9c871ca5181b4ea748cfee5ca" id=9a774c3b-78d5-41f4-b9eb-fb81c22d1e87 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:14:05 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-9e3ddbe26a6b939bbadb034d2440989e7e43cea9c871ca5181b4ea748cfee5ca-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:14:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:05.283321425Z" level=info msg="runSandbox: removing pod sandbox from storage: 9e3ddbe26a6b939bbadb034d2440989e7e43cea9c871ca5181b4ea748cfee5ca" id=9a774c3b-78d5-41f4-b9eb-fb81c22d1e87 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:14:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:05.284898938Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=9a774c3b-78d5-41f4-b9eb-fb81c22d1e87 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:14:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:05.284929324Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=9a774c3b-78d5-41f4-b9eb-fb81c22d1e87 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:14:05 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:14:05.285153 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(9e3ddbe26a6b939bbadb034d2440989e7e43cea9c871ca5181b4ea748cfee5ca): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:14:05 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:14:05.285228 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(9e3ddbe26a6b939bbadb034d2440989e7e43cea9c871ca5181b4ea748cfee5ca): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:14:05 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:14:05.285287 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(9e3ddbe26a6b939bbadb034d2440989e7e43cea9c871ca5181b4ea748cfee5ca): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:14:05 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:14:05.285397 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(9e3ddbe26a6b939bbadb034d2440989e7e43cea9c871ca5181b4ea748cfee5ca): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 18:14:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:08.246008958Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(495c6ff9c923b2a0a2fde29cdde17f4d1ff4f006d2bab5e220e3804515064055): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=9e5dac84-937c-47c4-a342-4d1f199eeb5f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:14:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:08.246063263Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 495c6ff9c923b2a0a2fde29cdde17f4d1ff4f006d2bab5e220e3804515064055" id=9e5dac84-937c-47c4-a342-4d1f199eeb5f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:14:08 ip-10-0-136-68 systemd[1]: run-utsns-6d265035\x2d7f5d\x2d4ec8\x2dbe50\x2dd14a38a142c1.mount: Deactivated successfully. Feb 23 18:14:08 ip-10-0-136-68 systemd[1]: run-ipcns-6d265035\x2d7f5d\x2d4ec8\x2dbe50\x2dd14a38a142c1.mount: Deactivated successfully. Feb 23 18:14:08 ip-10-0-136-68 systemd[1]: run-netns-6d265035\x2d7f5d\x2d4ec8\x2dbe50\x2dd14a38a142c1.mount: Deactivated successfully. 
Feb 23 18:14:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:08.282371807Z" level=info msg="runSandbox: deleting pod ID 495c6ff9c923b2a0a2fde29cdde17f4d1ff4f006d2bab5e220e3804515064055 from idIndex" id=9e5dac84-937c-47c4-a342-4d1f199eeb5f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:14:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:08.282424299Z" level=info msg="runSandbox: removing pod sandbox 495c6ff9c923b2a0a2fde29cdde17f4d1ff4f006d2bab5e220e3804515064055" id=9e5dac84-937c-47c4-a342-4d1f199eeb5f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:14:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:08.282467962Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 495c6ff9c923b2a0a2fde29cdde17f4d1ff4f006d2bab5e220e3804515064055" id=9e5dac84-937c-47c4-a342-4d1f199eeb5f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:14:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:08.282482157Z" level=info msg="runSandbox: unmounting shmPath for sandbox 495c6ff9c923b2a0a2fde29cdde17f4d1ff4f006d2bab5e220e3804515064055" id=9e5dac84-937c-47c4-a342-4d1f199eeb5f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:14:08 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-495c6ff9c923b2a0a2fde29cdde17f4d1ff4f006d2bab5e220e3804515064055-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:14:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:08.287299589Z" level=info msg="runSandbox: removing pod sandbox from storage: 495c6ff9c923b2a0a2fde29cdde17f4d1ff4f006d2bab5e220e3804515064055" id=9e5dac84-937c-47c4-a342-4d1f199eeb5f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:14:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:08.288974797Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=9e5dac84-937c-47c4-a342-4d1f199eeb5f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:14:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:08.289004979Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=9e5dac84-937c-47c4-a342-4d1f199eeb5f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:14:08 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:14:08.289219 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(495c6ff9c923b2a0a2fde29cdde17f4d1ff4f006d2bab5e220e3804515064055): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:14:08 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:14:08.289317 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(495c6ff9c923b2a0a2fde29cdde17f4d1ff4f006d2bab5e220e3804515064055): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:14:08 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:14:08.289346 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(495c6ff9c923b2a0a2fde29cdde17f4d1ff4f006d2bab5e220e3804515064055): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:14:08 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:14:08.289405 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(495c6ff9c923b2a0a2fde29cdde17f4d1ff4f006d2bab5e220e3804515064055): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 18:14:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:09.244127098Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(d6eb37e866b200098deee79a1c28a19da68decbe162c6262892963d147666407): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=ed5bc8a0-3b01-4eea-aa72-821eb34b4ff6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:14:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:09.244177714Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox d6eb37e866b200098deee79a1c28a19da68decbe162c6262892963d147666407" id=ed5bc8a0-3b01-4eea-aa72-821eb34b4ff6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:14:09 ip-10-0-136-68 systemd[1]: run-utsns-88b2cda7\x2dc43a\x2d4d46\x2dbf20\x2d1319f11a6653.mount: Deactivated successfully. Feb 23 18:14:09 ip-10-0-136-68 systemd[1]: run-ipcns-88b2cda7\x2dc43a\x2d4d46\x2dbf20\x2d1319f11a6653.mount: Deactivated successfully. Feb 23 18:14:09 ip-10-0-136-68 systemd[1]: run-netns-88b2cda7\x2dc43a\x2d4d46\x2dbf20\x2d1319f11a6653.mount: Deactivated successfully. 
Feb 23 18:14:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:09.268319503Z" level=info msg="runSandbox: deleting pod ID d6eb37e866b200098deee79a1c28a19da68decbe162c6262892963d147666407 from idIndex" id=ed5bc8a0-3b01-4eea-aa72-821eb34b4ff6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:14:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:09.268359329Z" level=info msg="runSandbox: removing pod sandbox d6eb37e866b200098deee79a1c28a19da68decbe162c6262892963d147666407" id=ed5bc8a0-3b01-4eea-aa72-821eb34b4ff6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:14:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:09.268391837Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox d6eb37e866b200098deee79a1c28a19da68decbe162c6262892963d147666407" id=ed5bc8a0-3b01-4eea-aa72-821eb34b4ff6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:14:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:09.268404842Z" level=info msg="runSandbox: unmounting shmPath for sandbox d6eb37e866b200098deee79a1c28a19da68decbe162c6262892963d147666407" id=ed5bc8a0-3b01-4eea-aa72-821eb34b4ff6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:14:09 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-d6eb37e866b200098deee79a1c28a19da68decbe162c6262892963d147666407-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:14:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:09.273322618Z" level=info msg="runSandbox: removing pod sandbox from storage: d6eb37e866b200098deee79a1c28a19da68decbe162c6262892963d147666407" id=ed5bc8a0-3b01-4eea-aa72-821eb34b4ff6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:14:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:09.274893548Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=ed5bc8a0-3b01-4eea-aa72-821eb34b4ff6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:14:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:09.274925803Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=ed5bc8a0-3b01-4eea-aa72-821eb34b4ff6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:14:09 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:14:09.275174 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(d6eb37e866b200098deee79a1c28a19da68decbe162c6262892963d147666407): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:14:09 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:14:09.275232 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(d6eb37e866b200098deee79a1c28a19da68decbe162c6262892963d147666407): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:14:09 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:14:09.275327 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(d6eb37e866b200098deee79a1c28a19da68decbe162c6262892963d147666407): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:14:09 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:14:09.275386 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(d6eb37e866b200098deee79a1c28a19da68decbe162c6262892963d147666407): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 18:14:10 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:14:10.217812 2199 scope.go:115] "RemoveContainer" containerID="11443380605b661a4dfebb9ab08c55693d16e6187ab966f547576a47ecbaab03" Feb 23 18:14:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:14:10.218423 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:14:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:12.244393058Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(12dab4c8cb98ec4153d60a002915eb684941586e6090591b3fab966777633b6f): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=fc3d5e29-6715-4d81-899c-85731a0515ef name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:14:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:12.244447071Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 12dab4c8cb98ec4153d60a002915eb684941586e6090591b3fab966777633b6f" id=fc3d5e29-6715-4d81-899c-85731a0515ef name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:14:12 ip-10-0-136-68 systemd[1]: 
run-utsns-da21da1e\x2db0ac\x2d454c\x2dade9\x2dc8eb48ba6722.mount: Deactivated successfully. Feb 23 18:14:12 ip-10-0-136-68 systemd[1]: run-ipcns-da21da1e\x2db0ac\x2d454c\x2dade9\x2dc8eb48ba6722.mount: Deactivated successfully. Feb 23 18:14:12 ip-10-0-136-68 systemd[1]: run-netns-da21da1e\x2db0ac\x2d454c\x2dade9\x2dc8eb48ba6722.mount: Deactivated successfully. Feb 23 18:14:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:12.276337315Z" level=info msg="runSandbox: deleting pod ID 12dab4c8cb98ec4153d60a002915eb684941586e6090591b3fab966777633b6f from idIndex" id=fc3d5e29-6715-4d81-899c-85731a0515ef name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:14:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:12.276381204Z" level=info msg="runSandbox: removing pod sandbox 12dab4c8cb98ec4153d60a002915eb684941586e6090591b3fab966777633b6f" id=fc3d5e29-6715-4d81-899c-85731a0515ef name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:14:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:12.276422473Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 12dab4c8cb98ec4153d60a002915eb684941586e6090591b3fab966777633b6f" id=fc3d5e29-6715-4d81-899c-85731a0515ef name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:14:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:12.276437682Z" level=info msg="runSandbox: unmounting shmPath for sandbox 12dab4c8cb98ec4153d60a002915eb684941586e6090591b3fab966777633b6f" id=fc3d5e29-6715-4d81-899c-85731a0515ef name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:14:12 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-12dab4c8cb98ec4153d60a002915eb684941586e6090591b3fab966777633b6f-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:14:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:12.281337609Z" level=info msg="runSandbox: removing pod sandbox from storage: 12dab4c8cb98ec4153d60a002915eb684941586e6090591b3fab966777633b6f" id=fc3d5e29-6715-4d81-899c-85731a0515ef name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:14:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:12.282819708Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=fc3d5e29-6715-4d81-899c-85731a0515ef name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:14:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:12.282849346Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=fc3d5e29-6715-4d81-899c-85731a0515ef name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:14:12 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:14:12.283067 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(12dab4c8cb98ec4153d60a002915eb684941586e6090591b3fab966777633b6f): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:14:12 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:14:12.283132 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(12dab4c8cb98ec4153d60a002915eb684941586e6090591b3fab966777633b6f): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:14:12 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:14:12.283156 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(12dab4c8cb98ec4153d60a002915eb684941586e6090591b3fab966777633b6f): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:14:12 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:14:12.283209 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(12dab4c8cb98ec4153d60a002915eb684941586e6090591b3fab966777633b6f): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 18:14:14 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:14:14.217822 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:14:14 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:14:14.218156 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:14:14 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:14:14.218423 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:14:14 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:14:14.218482 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:14:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:16.234135415Z" level=info msg="NetworkStart: stopping network for sandbox ead5b9baab0fe3e300bb6d33fe0bdac7d8ac92f2bbfce91dcb9c5626c52db8d3" id=04f3cd1e-f24c-4b22-9e16-7ee79d5ed2cb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:14:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:16.234277020Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:ead5b9baab0fe3e300bb6d33fe0bdac7d8ac92f2bbfce91dcb9c5626c52db8d3 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/ce944992-0165-4120-9fd8-b3fed088098b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:14:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:16.234310302Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:14:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:16.234321373Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:14:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:16.234330845Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:14:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:14:19.216537 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:14:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:19.216942627Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=b3481269-895f-4272-ad25-a16e322c3346 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:14:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:19.217007111Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:14:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:19.221979169Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:9448969b959ec032635f1d7e54a3555a6915c893feb5b3283ad4e80a2c390140 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/e7f3ddf8-1cf3-4d87-a5d2-971697081f15 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:14:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:19.222005534Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:14:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:14:20.216680 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:14:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:20.217101512Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=b70cb2d0-2476-47cf-9e6a-db95702c974f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:14:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:20.217175262Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:14:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:20.223127817Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:3ecc5cc2e9f3a8b2e5217f32e3866be32d2e632c9f157abc4bb0bbf45333613a UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/5fbc1666-5af8-40db-9461-4ce149783897 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:14:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:20.223150434Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:14:22 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:14:22.216911 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 18:14:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:22.217386697Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=9c618953-e9be-4194-9027-0121e42da8e4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:14:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:22.217456355Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:14:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:22.223168738Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:bf1f36d53c30988f53cd580c2a7ecee7b6102c447a3176ef6f8fe5eb55effa7e UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/90c6df9f-7eb6-43f2-894b-726ecf431893 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:14:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:22.223204927Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:14:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:14:24.216686 2199 scope.go:115] "RemoveContainer" containerID="11443380605b661a4dfebb9ab08c55693d16e6187ab966f547576a47ecbaab03" Feb 23 18:14:24 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:14:24.217106 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:14:25 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:14:25.216654 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:14:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:25.217024026Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=5863f961-e852-417e-8858-b74bc86c5358 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:14:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:25.217092309Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:14:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:25.222767385Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:c0b892d72d9a0f7aedbd27e30014eeaa9c6ea28a8ecea61e446d409bb27ba78d UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/d3d0c811-502f-4221-af76-c67b90ae5178 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:14:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:14:25.222802779Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:14:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:14:26.292662 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:14:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:14:26.292951 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:14:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:14:26.293186 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:14:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:14:26.293214 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:14:36 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:14:36.217023 2199 scope.go:115] "RemoveContainer" containerID="11443380605b661a4dfebb9ab08c55693d16e6187ab966f547576a47ecbaab03" Feb 23 18:14:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:14:36.217600 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" 
podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:14:51 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:14:51.217057 2199 scope.go:115] "RemoveContainer" containerID="11443380605b661a4dfebb9ab08c55693d16e6187ab966f547576a47ecbaab03" Feb 23 18:14:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:14:51.217497 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:14:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:14:56.292057 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:14:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:14:56.292340 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:14:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:14:56.292593 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: 
checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:14:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:14:56.292621 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:15:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:01.243528760Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(ead5b9baab0fe3e300bb6d33fe0bdac7d8ac92f2bbfce91dcb9c5626c52db8d3): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=04f3cd1e-f24c-4b22-9e16-7ee79d5ed2cb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:15:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:01.243584200Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox ead5b9baab0fe3e300bb6d33fe0bdac7d8ac92f2bbfce91dcb9c5626c52db8d3" id=04f3cd1e-f24c-4b22-9e16-7ee79d5ed2cb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:15:01 ip-10-0-136-68 systemd[1]: 
run-utsns-ce944992\x2d0165\x2d4120\x2d9fd8\x2db3fed088098b.mount: Deactivated successfully. Feb 23 18:15:01 ip-10-0-136-68 systemd[1]: run-ipcns-ce944992\x2d0165\x2d4120\x2d9fd8\x2db3fed088098b.mount: Deactivated successfully. Feb 23 18:15:01 ip-10-0-136-68 systemd[1]: run-netns-ce944992\x2d0165\x2d4120\x2d9fd8\x2db3fed088098b.mount: Deactivated successfully. Feb 23 18:15:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:01.270343072Z" level=info msg="runSandbox: deleting pod ID ead5b9baab0fe3e300bb6d33fe0bdac7d8ac92f2bbfce91dcb9c5626c52db8d3 from idIndex" id=04f3cd1e-f24c-4b22-9e16-7ee79d5ed2cb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:15:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:01.270397436Z" level=info msg="runSandbox: removing pod sandbox ead5b9baab0fe3e300bb6d33fe0bdac7d8ac92f2bbfce91dcb9c5626c52db8d3" id=04f3cd1e-f24c-4b22-9e16-7ee79d5ed2cb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:15:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:01.270427170Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox ead5b9baab0fe3e300bb6d33fe0bdac7d8ac92f2bbfce91dcb9c5626c52db8d3" id=04f3cd1e-f24c-4b22-9e16-7ee79d5ed2cb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:15:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:01.270441553Z" level=info msg="runSandbox: unmounting shmPath for sandbox ead5b9baab0fe3e300bb6d33fe0bdac7d8ac92f2bbfce91dcb9c5626c52db8d3" id=04f3cd1e-f24c-4b22-9e16-7ee79d5ed2cb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:15:01 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-ead5b9baab0fe3e300bb6d33fe0bdac7d8ac92f2bbfce91dcb9c5626c52db8d3-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:15:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:01.277302494Z" level=info msg="runSandbox: removing pod sandbox from storage: ead5b9baab0fe3e300bb6d33fe0bdac7d8ac92f2bbfce91dcb9c5626c52db8d3" id=04f3cd1e-f24c-4b22-9e16-7ee79d5ed2cb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:15:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:01.278869583Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=04f3cd1e-f24c-4b22-9e16-7ee79d5ed2cb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:15:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:01.278897082Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=04f3cd1e-f24c-4b22-9e16-7ee79d5ed2cb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:15:01 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:15:01.279110 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(ead5b9baab0fe3e300bb6d33fe0bdac7d8ac92f2bbfce91dcb9c5626c52db8d3): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:15:01 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:15:01.279166 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(ead5b9baab0fe3e300bb6d33fe0bdac7d8ac92f2bbfce91dcb9c5626c52db8d3): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:15:01 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:15:01.279191 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(ead5b9baab0fe3e300bb6d33fe0bdac7d8ac92f2bbfce91dcb9c5626c52db8d3): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:15:01 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:15:01.279270 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(ead5b9baab0fe3e300bb6d33fe0bdac7d8ac92f2bbfce91dcb9c5626c52db8d3): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 18:15:03 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:15:03.217291 2199 scope.go:115] "RemoveContainer" containerID="11443380605b661a4dfebb9ab08c55693d16e6187ab966f547576a47ecbaab03" Feb 23 18:15:03 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:15:03.217696 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:15:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:04.233725483Z" level=info msg="NetworkStart: stopping network for sandbox 9448969b959ec032635f1d7e54a3555a6915c893feb5b3283ad4e80a2c390140" id=b3481269-895f-4272-ad25-a16e322c3346 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:15:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:04.233858514Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:9448969b959ec032635f1d7e54a3555a6915c893feb5b3283ad4e80a2c390140 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/e7f3ddf8-1cf3-4d87-a5d2-971697081f15 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:15:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:04.233899121Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:15:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:04.233910858Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:15:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 
18:15:04.233922003Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:15:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:05.235316836Z" level=info msg="NetworkStart: stopping network for sandbox 3ecc5cc2e9f3a8b2e5217f32e3866be32d2e632c9f157abc4bb0bbf45333613a" id=b70cb2d0-2476-47cf-9e6a-db95702c974f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:15:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:05.235444375Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:3ecc5cc2e9f3a8b2e5217f32e3866be32d2e632c9f157abc4bb0bbf45333613a UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/5fbc1666-5af8-40db-9461-4ce149783897 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:15:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:05.235475365Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:15:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:05.235484295Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:15:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:05.235490780Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:15:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:07.234689600Z" level=info msg="NetworkStart: stopping network for sandbox bf1f36d53c30988f53cd580c2a7ecee7b6102c447a3176ef6f8fe5eb55effa7e" id=9c618953-e9be-4194-9027-0121e42da8e4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:15:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:07.234819242Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns 
ID:bf1f36d53c30988f53cd580c2a7ecee7b6102c447a3176ef6f8fe5eb55effa7e UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/90c6df9f-7eb6-43f2-894b-726ecf431893 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:15:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:07.234852474Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:15:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:07.234860116Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:15:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:07.234867535Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:15:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:10.236295299Z" level=info msg="NetworkStart: stopping network for sandbox c0b892d72d9a0f7aedbd27e30014eeaa9c6ea28a8ecea61e446d409bb27ba78d" id=5863f961-e852-417e-8858-b74bc86c5358 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:15:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:10.236406847Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:c0b892d72d9a0f7aedbd27e30014eeaa9c6ea28a8ecea61e446d409bb27ba78d UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/d3d0c811-502f-4221-af76-c67b90ae5178 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:15:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:10.236434321Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:15:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:10.236443243Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:15:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 
18:15:10.236449642Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:15:15 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:15:15.217275 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:15:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:15.217694481Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=417c2397-7e03-4c49-acb6-a816cbfa6bfe name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:15:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:15.217762878Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:15:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:15.223192514Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:35f2c7aa7c36abf3ec23fd4fdcf153f5c120ab5890685fd8c4a7f1cbfc668913 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/c4ab549d-d58f-43a6-8376-d864f0a542f6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:15:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:15.223230010Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:15:18 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:15:18.216599 2199 scope.go:115] "RemoveContainer" containerID="11443380605b661a4dfebb9ab08c55693d16e6187ab966f547576a47ecbaab03" Feb 23 18:15:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:15:18.217205 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver 
pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:15:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:20.181837436Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3" id=ce9a12e0-f723-4a3b-82dc-c593d1239552 name=/runtime.v1.ImageService/ImageStatus Feb 23 18:15:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:20.182055858Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:51cb7555d0a960fc0daae74a5b5c47c19fd1ae7bd31e2616ebc88f696f511e4d,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3],Size_:351828271,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=ce9a12e0-f723-4a3b-82dc-c593d1239552 name=/runtime.v1.ImageService/ImageStatus Feb 23 18:15:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:15:23.216896 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:15:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:15:23.217188 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running 
failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:15:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:15:23.217479 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:15:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:15:23.217515 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:15:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:15:26.292297 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:15:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:15:26.292553 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not 
created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:15:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:15:26.292732 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:15:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:15:26.292753 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:15:31 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:15:31.216882 2199 scope.go:115] "RemoveContainer" containerID="11443380605b661a4dfebb9ab08c55693d16e6187ab966f547576a47ecbaab03" Feb 23 18:15:31 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:15:31.217324 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" 
podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:15:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:15:44.217432 2199 scope.go:115] "RemoveContainer" containerID="11443380605b661a4dfebb9ab08c55693d16e6187ab966f547576a47ecbaab03" Feb 23 18:15:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:44.218304088Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=500a5414-fe98-4cfe-9815-e38a940e4b44 name=/runtime.v1.ImageService/ImageStatus Feb 23 18:15:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:44.218570228Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=500a5414-fe98-4cfe-9815-e38a940e4b44 name=/runtime.v1.ImageService/ImageStatus Feb 23 18:15:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:44.219215812Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=a6b1bd95-e33f-4a43-b577-59b4b8bc5049 name=/runtime.v1.ImageService/ImageStatus Feb 23 18:15:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:44.219486269Z" level=info msg="Image status: 
&ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=a6b1bd95-e33f-4a43-b577-59b4b8bc5049 name=/runtime.v1.ImageService/ImageStatus Feb 23 18:15:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:44.220622846Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=13a3dc0d-6bc7-47c1-ba7d-0ebad008b564 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 18:15:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:44.220733870Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:15:44 ip-10-0-136-68 systemd[1]: Started crio-conmon-03e18dbeb66d29ad27901ba4aeceb74fb07decd33751a629dcd89d8ad3d62625.scope. Feb 23 18:15:44 ip-10-0-136-68 systemd[1]: Started libcontainer container 03e18dbeb66d29ad27901ba4aeceb74fb07decd33751a629dcd89d8ad3d62625. Feb 23 18:15:44 ip-10-0-136-68 conmon[7012]: conmon 03e18dbeb66d29ad2790 : Failed to write to cgroup.event_control Operation not supported Feb 23 18:15:44 ip-10-0-136-68 systemd[1]: crio-conmon-03e18dbeb66d29ad27901ba4aeceb74fb07decd33751a629dcd89d8ad3d62625.scope: Deactivated successfully. 
Feb 23 18:15:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:44.372567230Z" level=info msg="Created container 03e18dbeb66d29ad27901ba4aeceb74fb07decd33751a629dcd89d8ad3d62625: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=13a3dc0d-6bc7-47c1-ba7d-0ebad008b564 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 18:15:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:44.373150862Z" level=info msg="Starting container: 03e18dbeb66d29ad27901ba4aeceb74fb07decd33751a629dcd89d8ad3d62625" id=e3c40875-10ba-4d9f-a808-62220091680a name=/runtime.v1.RuntimeService/StartContainer Feb 23 18:15:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:44.392518671Z" level=info msg="Started container" PID=7024 containerID=03e18dbeb66d29ad27901ba4aeceb74fb07decd33751a629dcd89d8ad3d62625 description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver id=e3c40875-10ba-4d9f-a808-62220091680a name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4ac357af36e7dd4792f93249425e93fd84aed45840e9fbb3d0ae6c0af964798 Feb 23 18:15:44 ip-10-0-136-68 systemd[1]: crio-03e18dbeb66d29ad27901ba4aeceb74fb07decd33751a629dcd89d8ad3d62625.scope: Deactivated successfully. 
Feb 23 18:15:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:48.152993242Z" level=warning msg="Failed to find container exit file for 03e18dbeb66d29ad27901ba4aeceb74fb07decd33751a629dcd89d8ad3d62625: timed out waiting for the condition" id=e3c40875-10ba-4d9f-a808-62220091680a name=/runtime.v1.RuntimeService/StartContainer Feb 23 18:15:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:48.554091867Z" level=warning msg="Failed to find container exit file for 11443380605b661a4dfebb9ab08c55693d16e6187ab966f547576a47ecbaab03: timed out waiting for the condition" id=bce7ecb6-3c35-49eb-8fff-633602dc8e16 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:15:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:49.244044430Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(9448969b959ec032635f1d7e54a3555a6915c893feb5b3283ad4e80a2c390140): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b3481269-895f-4272-ad25-a16e322c3346 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:15:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:49.244093622Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9448969b959ec032635f1d7e54a3555a6915c893feb5b3283ad4e80a2c390140" id=b3481269-895f-4272-ad25-a16e322c3346 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:15:49 ip-10-0-136-68 systemd[1]: run-utsns-e7f3ddf8\x2d1cf3\x2d4d87\x2da5d2\x2d971697081f15.mount: Deactivated successfully. 
Feb 23 18:15:49 ip-10-0-136-68 systemd[1]: run-ipcns-e7f3ddf8\x2d1cf3\x2d4d87\x2da5d2\x2d971697081f15.mount: Deactivated successfully. Feb 23 18:15:49 ip-10-0-136-68 systemd[1]: run-netns-e7f3ddf8\x2d1cf3\x2d4d87\x2da5d2\x2d971697081f15.mount: Deactivated successfully. Feb 23 18:15:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:49.270349855Z" level=info msg="runSandbox: deleting pod ID 9448969b959ec032635f1d7e54a3555a6915c893feb5b3283ad4e80a2c390140 from idIndex" id=b3481269-895f-4272-ad25-a16e322c3346 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:15:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:49.270397943Z" level=info msg="runSandbox: removing pod sandbox 9448969b959ec032635f1d7e54a3555a6915c893feb5b3283ad4e80a2c390140" id=b3481269-895f-4272-ad25-a16e322c3346 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:15:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:49.270442655Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9448969b959ec032635f1d7e54a3555a6915c893feb5b3283ad4e80a2c390140" id=b3481269-895f-4272-ad25-a16e322c3346 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:15:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:49.270456626Z" level=info msg="runSandbox: unmounting shmPath for sandbox 9448969b959ec032635f1d7e54a3555a6915c893feb5b3283ad4e80a2c390140" id=b3481269-895f-4272-ad25-a16e322c3346 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:15:49 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-9448969b959ec032635f1d7e54a3555a6915c893feb5b3283ad4e80a2c390140-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:15:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:49.278297363Z" level=info msg="runSandbox: removing pod sandbox from storage: 9448969b959ec032635f1d7e54a3555a6915c893feb5b3283ad4e80a2c390140" id=b3481269-895f-4272-ad25-a16e322c3346 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:15:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:49.279935173Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=b3481269-895f-4272-ad25-a16e322c3346 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:15:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:49.279964968Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=b3481269-895f-4272-ad25-a16e322c3346 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:15:49 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:15:49.280176 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(9448969b959ec032635f1d7e54a3555a6915c893feb5b3283ad4e80a2c390140): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:15:49 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:15:49.280235 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(9448969b959ec032635f1d7e54a3555a6915c893feb5b3283ad4e80a2c390140): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:15:49 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:15:49.280283 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(9448969b959ec032635f1d7e54a3555a6915c893feb5b3283ad4e80a2c390140): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:15:49 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:15:49.280348 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(9448969b959ec032635f1d7e54a3555a6915c893feb5b3283ad4e80a2c390140): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 18:15:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:50.245332467Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(3ecc5cc2e9f3a8b2e5217f32e3866be32d2e632c9f157abc4bb0bbf45333613a): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b70cb2d0-2476-47cf-9e6a-db95702c974f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:15:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:50.245385650Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 3ecc5cc2e9f3a8b2e5217f32e3866be32d2e632c9f157abc4bb0bbf45333613a" id=b70cb2d0-2476-47cf-9e6a-db95702c974f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:15:50 ip-10-0-136-68 systemd[1]: run-utsns-5fbc1666\x2d5af8\x2d40db\x2d9461\x2d4ce149783897.mount: Deactivated successfully. Feb 23 18:15:50 ip-10-0-136-68 systemd[1]: run-ipcns-5fbc1666\x2d5af8\x2d40db\x2d9461\x2d4ce149783897.mount: Deactivated successfully. Feb 23 18:15:50 ip-10-0-136-68 systemd[1]: run-netns-5fbc1666\x2d5af8\x2d40db\x2d9461\x2d4ce149783897.mount: Deactivated successfully. 
Feb 23 18:15:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:50.271328628Z" level=info msg="runSandbox: deleting pod ID 3ecc5cc2e9f3a8b2e5217f32e3866be32d2e632c9f157abc4bb0bbf45333613a from idIndex" id=b70cb2d0-2476-47cf-9e6a-db95702c974f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:15:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:50.271360808Z" level=info msg="runSandbox: removing pod sandbox 3ecc5cc2e9f3a8b2e5217f32e3866be32d2e632c9f157abc4bb0bbf45333613a" id=b70cb2d0-2476-47cf-9e6a-db95702c974f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:15:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:50.271384686Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 3ecc5cc2e9f3a8b2e5217f32e3866be32d2e632c9f157abc4bb0bbf45333613a" id=b70cb2d0-2476-47cf-9e6a-db95702c974f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:15:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:50.271397029Z" level=info msg="runSandbox: unmounting shmPath for sandbox 3ecc5cc2e9f3a8b2e5217f32e3866be32d2e632c9f157abc4bb0bbf45333613a" id=b70cb2d0-2476-47cf-9e6a-db95702c974f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:15:50 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-3ecc5cc2e9f3a8b2e5217f32e3866be32d2e632c9f157abc4bb0bbf45333613a-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:15:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:50.277308104Z" level=info msg="runSandbox: removing pod sandbox from storage: 3ecc5cc2e9f3a8b2e5217f32e3866be32d2e632c9f157abc4bb0bbf45333613a" id=b70cb2d0-2476-47cf-9e6a-db95702c974f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:15:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:50.278807077Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=b70cb2d0-2476-47cf-9e6a-db95702c974f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:15:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:50.278835139Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=b70cb2d0-2476-47cf-9e6a-db95702c974f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:15:50 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:15:50.279015 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(3ecc5cc2e9f3a8b2e5217f32e3866be32d2e632c9f157abc4bb0bbf45333613a): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:15:50 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:15:50.279064 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(3ecc5cc2e9f3a8b2e5217f32e3866be32d2e632c9f157abc4bb0bbf45333613a): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:15:50 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:15:50.279088 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(3ecc5cc2e9f3a8b2e5217f32e3866be32d2e632c9f157abc4bb0bbf45333613a): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:15:50 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:15:50.279150 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(3ecc5cc2e9f3a8b2e5217f32e3866be32d2e632c9f157abc4bb0bbf45333613a): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 18:15:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:52.244828529Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(bf1f36d53c30988f53cd580c2a7ecee7b6102c447a3176ef6f8fe5eb55effa7e): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=9c618953-e9be-4194-9027-0121e42da8e4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:15:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:52.244875187Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox bf1f36d53c30988f53cd580c2a7ecee7b6102c447a3176ef6f8fe5eb55effa7e" id=9c618953-e9be-4194-9027-0121e42da8e4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:15:52 ip-10-0-136-68 systemd[1]: run-utsns-90c6df9f\x2d7eb6\x2d43f2\x2d894b\x2d726ecf431893.mount: Deactivated successfully. Feb 23 18:15:52 ip-10-0-136-68 systemd[1]: run-ipcns-90c6df9f\x2d7eb6\x2d43f2\x2d894b\x2d726ecf431893.mount: Deactivated successfully. Feb 23 18:15:52 ip-10-0-136-68 systemd[1]: run-netns-90c6df9f\x2d7eb6\x2d43f2\x2d894b\x2d726ecf431893.mount: Deactivated successfully. 
Feb 23 18:15:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:52.263312812Z" level=info msg="runSandbox: deleting pod ID bf1f36d53c30988f53cd580c2a7ecee7b6102c447a3176ef6f8fe5eb55effa7e from idIndex" id=9c618953-e9be-4194-9027-0121e42da8e4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:15:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:52.263346865Z" level=info msg="runSandbox: removing pod sandbox bf1f36d53c30988f53cd580c2a7ecee7b6102c447a3176ef6f8fe5eb55effa7e" id=9c618953-e9be-4194-9027-0121e42da8e4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:15:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:52.263370964Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox bf1f36d53c30988f53cd580c2a7ecee7b6102c447a3176ef6f8fe5eb55effa7e" id=9c618953-e9be-4194-9027-0121e42da8e4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:15:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:52.263385205Z" level=info msg="runSandbox: unmounting shmPath for sandbox bf1f36d53c30988f53cd580c2a7ecee7b6102c447a3176ef6f8fe5eb55effa7e" id=9c618953-e9be-4194-9027-0121e42da8e4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:15:52 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-bf1f36d53c30988f53cd580c2a7ecee7b6102c447a3176ef6f8fe5eb55effa7e-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:15:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:52.282327490Z" level=info msg="runSandbox: removing pod sandbox from storage: bf1f36d53c30988f53cd580c2a7ecee7b6102c447a3176ef6f8fe5eb55effa7e" id=9c618953-e9be-4194-9027-0121e42da8e4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:15:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:52.283974867Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=9c618953-e9be-4194-9027-0121e42da8e4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:15:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:52.284008031Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=9c618953-e9be-4194-9027-0121e42da8e4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:15:52 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:15:52.284191 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(bf1f36d53c30988f53cd580c2a7ecee7b6102c447a3176ef6f8fe5eb55effa7e): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:15:52 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:15:52.284260 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(bf1f36d53c30988f53cd580c2a7ecee7b6102c447a3176ef6f8fe5eb55effa7e): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:15:52 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:15:52.284294 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(bf1f36d53c30988f53cd580c2a7ecee7b6102c447a3176ef6f8fe5eb55effa7e): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:15:52 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:15:52.284350 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(bf1f36d53c30988f53cd580c2a7ecee7b6102c447a3176ef6f8fe5eb55effa7e): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 18:15:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:53.316891740Z" level=warning msg="Failed to find container exit file for 03e18dbeb66d29ad27901ba4aeceb74fb07decd33751a629dcd89d8ad3d62625: timed out waiting for the condition" id=35645715-6b6b-4761-9f03-ff1f3eab54b0 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:15:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:55.245967772Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(c0b892d72d9a0f7aedbd27e30014eeaa9c6ea28a8ecea61e446d409bb27ba78d): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=5863f961-e852-417e-8858-b74bc86c5358 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:15:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:55.246015298Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox c0b892d72d9a0f7aedbd27e30014eeaa9c6ea28a8ecea61e446d409bb27ba78d" id=5863f961-e852-417e-8858-b74bc86c5358 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:15:55 ip-10-0-136-68 systemd[1]: run-utsns-d3d0c811\x2d502f\x2d4221\x2daf76\x2dc67b90ae5178.mount: Deactivated successfully. Feb 23 18:15:55 ip-10-0-136-68 systemd[1]: run-ipcns-d3d0c811\x2d502f\x2d4221\x2daf76\x2dc67b90ae5178.mount: Deactivated successfully. Feb 23 18:15:55 ip-10-0-136-68 systemd[1]: run-netns-d3d0c811\x2d502f\x2d4221\x2daf76\x2dc67b90ae5178.mount: Deactivated successfully. 
Feb 23 18:15:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:55.274374095Z" level=info msg="runSandbox: deleting pod ID c0b892d72d9a0f7aedbd27e30014eeaa9c6ea28a8ecea61e446d409bb27ba78d from idIndex" id=5863f961-e852-417e-8858-b74bc86c5358 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:15:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:55.274412291Z" level=info msg="runSandbox: removing pod sandbox c0b892d72d9a0f7aedbd27e30014eeaa9c6ea28a8ecea61e446d409bb27ba78d" id=5863f961-e852-417e-8858-b74bc86c5358 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:15:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:55.274445115Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox c0b892d72d9a0f7aedbd27e30014eeaa9c6ea28a8ecea61e446d409bb27ba78d" id=5863f961-e852-417e-8858-b74bc86c5358 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:15:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:55.274459116Z" level=info msg="runSandbox: unmounting shmPath for sandbox c0b892d72d9a0f7aedbd27e30014eeaa9c6ea28a8ecea61e446d409bb27ba78d" id=5863f961-e852-417e-8858-b74bc86c5358 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:15:55 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-c0b892d72d9a0f7aedbd27e30014eeaa9c6ea28a8ecea61e446d409bb27ba78d-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:15:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:55.292313063Z" level=info msg="runSandbox: removing pod sandbox from storage: c0b892d72d9a0f7aedbd27e30014eeaa9c6ea28a8ecea61e446d409bb27ba78d" id=5863f961-e852-417e-8858-b74bc86c5358 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:15:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:55.293831579Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=5863f961-e852-417e-8858-b74bc86c5358 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:15:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:55.293869778Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=5863f961-e852-417e-8858-b74bc86c5358 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:15:55 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:15:55.294091 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(c0b892d72d9a0f7aedbd27e30014eeaa9c6ea28a8ecea61e446d409bb27ba78d): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:15:55 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:15:55.294298 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(c0b892d72d9a0f7aedbd27e30014eeaa9c6ea28a8ecea61e446d409bb27ba78d): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:15:55 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:15:55.294328 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(c0b892d72d9a0f7aedbd27e30014eeaa9c6ea28a8ecea61e446d409bb27ba78d): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:15:55 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:15:55.294384 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(c0b892d72d9a0f7aedbd27e30014eeaa9c6ea28a8ecea61e446d409bb27ba78d): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 18:15:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:15:56.292073 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:15:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:15:56.292364 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:15:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:15:56.292587 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:15:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:15:56.292622 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:15:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:57.077976273Z" level=warning msg="Failed to find container exit file for 11443380605b661a4dfebb9ab08c55693d16e6187ab966f547576a47ecbaab03: timed out waiting for the condition" id=b7643f9a-6b3e-4958-a5c1-1fd6ddb95a8b name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:15:57 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:15:57.078994 2199 generic.go:332] "Generic (PLEG): container finished" podID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerID="03e18dbeb66d29ad27901ba4aeceb74fb07decd33751a629dcd89d8ad3d62625" exitCode=-1 Feb 23 18:15:57 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:15:57.079036 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerDied Data:03e18dbeb66d29ad27901ba4aeceb74fb07decd33751a629dcd89d8ad3d62625} Feb 23 18:15:57 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:15:57.079071 2199 scope.go:115] "RemoveContainer" containerID="11443380605b661a4dfebb9ab08c55693d16e6187ab966f547576a47ecbaab03" Feb 23 18:15:57 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:15:57.079508 2199 scope.go:115] "RemoveContainer" containerID="03e18dbeb66d29ad27901ba4aeceb74fb07decd33751a629dcd89d8ad3d62625" Feb 23 18:15:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:57.080203317Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=e0ac9539-722d-4f7a-9d77-2c06946be852 name=/runtime.v1.ImageService/ImageStatus Feb 23 18:15:57 ip-10-0-136-68 crio[2158]: 
time="2023-02-23 18:15:57.080421166Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=e0ac9539-722d-4f7a-9d77-2c06946be852 name=/runtime.v1.ImageService/ImageStatus Feb 23 18:15:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:57.081017908Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=93f07c71-6cb7-4775-b5c6-55caaad1eac2 name=/runtime.v1.ImageService/ImageStatus Feb 23 18:15:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:57.081278451Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=93f07c71-6cb7-4775-b5c6-55caaad1eac2 name=/runtime.v1.ImageService/ImageStatus Feb 23 18:15:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:57.081974253Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=afd83986-7f59-42c3-b171-e2dd5ca9cb60 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 18:15:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:57.082090402Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:15:57 ip-10-0-136-68 systemd[1]: Started 
crio-conmon-1ce174eb052659672c70bc49c395d82266f93a6197cc6a017cc8742830d9ac69.scope. Feb 23 18:15:57 ip-10-0-136-68 systemd[1]: Started libcontainer container 1ce174eb052659672c70bc49c395d82266f93a6197cc6a017cc8742830d9ac69. Feb 23 18:15:57 ip-10-0-136-68 conmon[7127]: conmon 1ce174eb052659672c70 : Failed to write to cgroup.event_control Operation not supported Feb 23 18:15:57 ip-10-0-136-68 systemd[1]: crio-conmon-1ce174eb052659672c70bc49c395d82266f93a6197cc6a017cc8742830d9ac69.scope: Deactivated successfully. Feb 23 18:15:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:57.206657601Z" level=info msg="Created container 1ce174eb052659672c70bc49c395d82266f93a6197cc6a017cc8742830d9ac69: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=afd83986-7f59-42c3-b171-e2dd5ca9cb60 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 18:15:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:57.207133315Z" level=info msg="Starting container: 1ce174eb052659672c70bc49c395d82266f93a6197cc6a017cc8742830d9ac69" id=51745bee-dbb5-4417-916a-21a460bc2c76 name=/runtime.v1.RuntimeService/StartContainer Feb 23 18:15:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:15:57.213985445Z" level=info msg="Started container" PID=7139 containerID=1ce174eb052659672c70bc49c395d82266f93a6197cc6a017cc8742830d9ac69 description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver id=51745bee-dbb5-4417-916a-21a460bc2c76 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4ac357af36e7dd4792f93249425e93fd84aed45840e9fbb3d0ae6c0af964798 Feb 23 18:15:57 ip-10-0-136-68 systemd[1]: crio-1ce174eb052659672c70bc49c395d82266f93a6197cc6a017cc8742830d9ac69.scope: Deactivated successfully. 
Feb 23 18:16:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:00.235178320Z" level=info msg="NetworkStart: stopping network for sandbox 35f2c7aa7c36abf3ec23fd4fdcf153f5c120ab5890685fd8c4a7f1cbfc668913" id=417c2397-7e03-4c49-acb6-a816cbfa6bfe name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:16:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:00.235312739Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:35f2c7aa7c36abf3ec23fd4fdcf153f5c120ab5890685fd8c4a7f1cbfc668913 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/c4ab549d-d58f-43a6-8376-d864f0a542f6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:16:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:00.235342424Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:16:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:00.235349354Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:16:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:00.235355906Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:16:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:00.839951931Z" level=warning msg="Failed to find container exit file for 11443380605b661a4dfebb9ab08c55693d16e6187ab966f547576a47ecbaab03: timed out waiting for the condition" id=734d41a8-8dc7-4650-8553-7cd1f9647dc2 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:16:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:01.818944783Z" level=warning msg="Failed to find container exit file for 03e18dbeb66d29ad27901ba4aeceb74fb07decd33751a629dcd89d8ad3d62625: timed out waiting for the condition" id=0c29655c-7844-444d-a618-dbc8272edd53 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 
18:16:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:16:04.217040 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:16:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:04.217588014Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=4164d767-b760-4a51-b218-84924216158a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:16:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:04.218225152Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:16:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:04.225802533Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:4f2010d5f1b97af8b39a32698cc7297a31f70c8a3001102435fbcf4570ab75c7 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/ea126df8-84e6-427d-b8df-3af2846fc64d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:16:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:04.225828574Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:16:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:04.601953627Z" level=warning msg="Failed to find container exit file for 11443380605b661a4dfebb9ab08c55693d16e6187ab966f547576a47ecbaab03: timed out waiting for the condition" id=9151377c-7c97-4b92-8c8b-36265ea91b7b name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:16:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:04.602453122Z" level=info msg="Removing container: 11443380605b661a4dfebb9ab08c55693d16e6187ab966f547576a47ecbaab03" id=59d0bca2-7262-460d-a777-6235b3ae58be name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 18:16:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:16:04.872451 2199 kubelet.go:2323] 
"SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 18:16:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:05.557030042Z" level=warning msg="Failed to find container exit file for 11443380605b661a4dfebb9ab08c55693d16e6187ab966f547576a47ecbaab03: timed out waiting for the condition" id=99471072-210a-4cd5-bad8-33acf8727b7d name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:16:05 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:16:05.557901 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerStarted Data:1ce174eb052659672c70bc49c395d82266f93a6197cc6a017cc8742830d9ac69} Feb 23 18:16:05 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:16:05.558205 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 18:16:05 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:16:05.558401 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:16:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:05.558497912Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=60e80c28-335a-46f7-a273-e6bb353dc857 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:16:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:05.558551809Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:16:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:05.558661897Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=a72acbab-2f0a-4556-80f2-d7bf452d5857 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:16:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:05.558721644Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:16:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:05.565165602Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:224752ad08b42459eb95fbc7466cf5670a57b476321dcc871e08cc8202067bfc UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/79c15e98-7a8e-40e5-9007-77a15e481572 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:16:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:05.565190531Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:16:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:05.565714186Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:7008b276cec38ddc180f5ef38e69ff767de7fbad9aadced6daafa595b62cf729 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/8ac3cb41-81ab-40b4-a71f-137d0423459b Networks:[] 
RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:16:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:05.565741479Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:16:08 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:16:08.217819 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:16:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:08.218308383Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=d1cb51d0-c7ef-4e57-81d0-f8740bc15fca name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:16:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:08.218376172Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:16:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:08.224938708Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:a64b42018dde3d2b8eecf78975d039d0e793bf63d63ab2604029f3beeac08eb9 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/8dea6201-4e1a-40da-b2e2-7164a649d0bf Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:16:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:08.224964729Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:16:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:08.363308674Z" level=warning msg="Failed to find container exit file for 11443380605b661a4dfebb9ab08c55693d16e6187ab966f547576a47ecbaab03: timed out waiting for the condition" id=59d0bca2-7262-460d-a777-6235b3ae58be name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 18:16:08 ip-10-0-136-68 
crio[2158]: time="2023-02-23 18:16:08.376116993Z" level=info msg="Removed container 11443380605b661a4dfebb9ab08c55693d16e6187ab966f547576a47ecbaab03: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=59d0bca2-7262-460d-a777-6235b3ae58be name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 18:16:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:12.314027561Z" level=warning msg="Failed to find container exit file for 03e18dbeb66d29ad27901ba4aeceb74fb07decd33751a629dcd89d8ad3d62625: timed out waiting for the condition" id=527017cf-0629-42e2-8d9e-809992c53034 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:16:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:16:14.872112 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:16:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:16:14.872172 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 18:16:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:16:24.872390 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:16:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:16:24.872472 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 
containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 18:16:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:16:26.292443 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:16:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:16:26.292770 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:16:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:16:26.292970 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:16:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:16:26.293002 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:16:30 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:16:30.217746 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:16:30 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:16:30.218350 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:16:30 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:16:30.218723 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:16:30 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:16:30.218788 2199 prober.go:106] 
"Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:16:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:16:34.872384 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:16:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:16:34.872453 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 18:16:40 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:16:40.318352 2199 log.go:198] http: superfluous response.WriteHeader call from github.com/emicklei/go-restful/v3.(*Response).WriteHeader (response.go:221) Feb 23 18:16:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:41.194225083Z" level=info msg="cleanup sandbox network" Feb 23 18:16:42 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:16:42.501168 2199 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-daemon-2fx68_ff7777c7-a1dc-413e-8da1-c4ba07527037/machine-config-daemon/1.log" Feb 23 18:16:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:16:44.872722 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe 
status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:16:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:16:44.872783 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 18:16:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:45.244616388Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(35f2c7aa7c36abf3ec23fd4fdcf153f5c120ab5890685fd8c4a7f1cbfc668913): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=417c2397-7e03-4c49-acb6-a816cbfa6bfe name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:16:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:45.244663822Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 35f2c7aa7c36abf3ec23fd4fdcf153f5c120ab5890685fd8c4a7f1cbfc668913" id=417c2397-7e03-4c49-acb6-a816cbfa6bfe name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:16:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:45.244738082Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:aa2f6c1cfe2015e661750ad0d84d67c8d8f79e105601e0a761db6a83ba33b3f1 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS: 
Networks:[{Name:multus-cni-network Ifname:eth0}] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:16:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:45.245214450Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:16:45 ip-10-0-136-68 systemd[1]: run-utsns-c4ab549d\x2dd58f\x2d43a6\x2d8376\x2dd864f0a542f6.mount: Deactivated successfully. Feb 23 18:16:45 ip-10-0-136-68 systemd[1]: run-ipcns-c4ab549d\x2dd58f\x2d43a6\x2d8376\x2dd864f0a542f6.mount: Deactivated successfully. Feb 23 18:16:45 ip-10-0-136-68 systemd[1]: run-netns-c4ab549d\x2dd58f\x2d43a6\x2d8376\x2dd864f0a542f6.mount: Deactivated successfully. Feb 23 18:16:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:45.263326319Z" level=info msg="runSandbox: deleting pod ID 35f2c7aa7c36abf3ec23fd4fdcf153f5c120ab5890685fd8c4a7f1cbfc668913 from idIndex" id=417c2397-7e03-4c49-acb6-a816cbfa6bfe name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:16:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:45.263365713Z" level=info msg="runSandbox: removing pod sandbox 35f2c7aa7c36abf3ec23fd4fdcf153f5c120ab5890685fd8c4a7f1cbfc668913" id=417c2397-7e03-4c49-acb6-a816cbfa6bfe name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:16:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:45.263417375Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 35f2c7aa7c36abf3ec23fd4fdcf153f5c120ab5890685fd8c4a7f1cbfc668913" id=417c2397-7e03-4c49-acb6-a816cbfa6bfe name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:16:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:45.263438715Z" level=info msg="runSandbox: unmounting shmPath for sandbox 35f2c7aa7c36abf3ec23fd4fdcf153f5c120ab5890685fd8c4a7f1cbfc668913" id=417c2397-7e03-4c49-acb6-a816cbfa6bfe name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:16:45 ip-10-0-136-68 systemd[1]: 
run-containers-storage-overlay\x2dcontainers-35f2c7aa7c36abf3ec23fd4fdcf153f5c120ab5890685fd8c4a7f1cbfc668913-userdata-shm.mount: Deactivated successfully. Feb 23 18:16:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:45.268315045Z" level=info msg="runSandbox: removing pod sandbox from storage: 35f2c7aa7c36abf3ec23fd4fdcf153f5c120ab5890685fd8c4a7f1cbfc668913" id=417c2397-7e03-4c49-acb6-a816cbfa6bfe name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:16:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:45.269954921Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=417c2397-7e03-4c49-acb6-a816cbfa6bfe name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:16:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:45.269987825Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=417c2397-7e03-4c49-acb6-a816cbfa6bfe name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:16:45 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:16:45.270216 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(35f2c7aa7c36abf3ec23fd4fdcf153f5c120ab5890685fd8c4a7f1cbfc668913): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:16:45 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:16:45.270398 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(35f2c7aa7c36abf3ec23fd4fdcf153f5c120ab5890685fd8c4a7f1cbfc668913): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:16:45 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:16:45.270440 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(35f2c7aa7c36abf3ec23fd4fdcf153f5c120ab5890685fd8c4a7f1cbfc668913): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:16:45 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:16:45.270528 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(35f2c7aa7c36abf3ec23fd4fdcf153f5c120ab5890685fd8c4a7f1cbfc668913): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 18:16:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:49.236989171Z" level=info msg="NetworkStart: stopping network for sandbox 4f2010d5f1b97af8b39a32698cc7297a31f70c8a3001102435fbcf4570ab75c7" id=4164d767-b760-4a51-b218-84924216158a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:16:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:49.237122166Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:4f2010d5f1b97af8b39a32698cc7297a31f70c8a3001102435fbcf4570ab75c7 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/ea126df8-84e6-427d-b8df-3af2846fc64d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:16:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:49.237161295Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:16:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:49.237172685Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:16:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:49.237182165Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:16:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:50.578918633Z" level=info msg="NetworkStart: stopping network for sandbox 224752ad08b42459eb95fbc7466cf5670a57b476321dcc871e08cc8202067bfc" id=60e80c28-335a-46f7-a273-e6bb353dc857 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:16:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:50.579054122Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:224752ad08b42459eb95fbc7466cf5670a57b476321dcc871e08cc8202067bfc 
UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/79c15e98-7a8e-40e5-9007-77a15e481572 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:16:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:50.579095687Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:16:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:50.579107533Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:16:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:50.579116472Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:16:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:50.580682908Z" level=info msg="NetworkStart: stopping network for sandbox 7008b276cec38ddc180f5ef38e69ff767de7fbad9aadced6daafa595b62cf729" id=a72acbab-2f0a-4556-80f2-d7bf452d5857 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:16:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:50.580785399Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:7008b276cec38ddc180f5ef38e69ff767de7fbad9aadced6daafa595b62cf729 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/8ac3cb41-81ab-40b4-a71f-137d0423459b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:16:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:50.580818573Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:16:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:50.580831814Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:16:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:50.580843396Z" level=info msg="Deleting pod 
openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:16:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:53.236290920Z" level=info msg="NetworkStart: stopping network for sandbox a64b42018dde3d2b8eecf78975d039d0e793bf63d63ab2604029f3beeac08eb9" id=d1cb51d0-c7ef-4e57-81d0-f8740bc15fca name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:16:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:53.236429904Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:a64b42018dde3d2b8eecf78975d039d0e793bf63d63ab2604029f3beeac08eb9 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/8dea6201-4e1a-40da-b2e2-7164a649d0bf Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:16:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:53.236461461Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:16:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:53.236471735Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:16:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:53.236477893Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:16:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:16:54.872943 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:16:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:16:54.872998 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" 
podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 18:16:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:16:54.873026 2199 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 18:16:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:16:54.873584 2199 kuberuntime_manager.go:659] "Message for Container of pod" containerName="csi-driver" containerStatusID={Type:cri-o ID:1ce174eb052659672c70bc49c395d82266f93a6197cc6a017cc8742830d9ac69} pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" containerMessage="Container csi-driver failed liveness probe, will be restarted" Feb 23 18:16:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:16:54.873754 2199 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" containerID="cri-o://1ce174eb052659672c70bc49c395d82266f93a6197cc6a017cc8742830d9ac69" gracePeriod=30 Feb 23 18:16:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:54.873990626Z" level=info msg="Stopping container: 1ce174eb052659672c70bc49c395d82266f93a6197cc6a017cc8742830d9ac69 (timeout: 30s)" id=d5e2db5d-3bde-456b-b370-3681c32f2bbc name=/runtime.v1.RuntimeService/StopContainer Feb 23 18:16:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:16:56.292203 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f 
/etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:16:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:16:56.292551 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:16:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:16:56.292760 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:16:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:16:56.292793 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:16:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:16:58.635142166Z" level=warning msg="Failed to find container exit file for 1ce174eb052659672c70bc49c395d82266f93a6197cc6a017cc8742830d9ac69: timed out waiting for the condition" id=d5e2db5d-3bde-456b-b370-3681c32f2bbc name=/runtime.v1.RuntimeService/StopContainer Feb 23 18:16:58 ip-10-0-136-68 systemd[1]: 
var-lib-containers-storage-overlay-82aadeea67c9ff0d974eaf65c3db1f4ecd93e10353cf47061df77cd8187721c9-merged.mount: Deactivated successfully. Feb 23 18:17:00 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:17:00.216887 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:17:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:00.217463808Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=f53ee54c-c4fb-4230-a655-bfa9c21a89d2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:17:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:00.217534048Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:17:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:02.417975668Z" level=warning msg="Failed to find container exit file for 1ce174eb052659672c70bc49c395d82266f93a6197cc6a017cc8742830d9ac69: timed out waiting for the condition" id=d5e2db5d-3bde-456b-b370-3681c32f2bbc name=/runtime.v1.RuntimeService/StopContainer Feb 23 18:17:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:02.420881563Z" level=info msg="Stopped container 1ce174eb052659672c70bc49c395d82266f93a6197cc6a017cc8742830d9ac69: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=d5e2db5d-3bde-456b-b370-3681c32f2bbc name=/runtime.v1.RuntimeService/StopContainer Feb 23 18:17:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:02.421601348Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=411b7331-c8d7-4750-af5e-44bfa3b64546 name=/runtime.v1.ImageService/ImageStatus Feb 23 18:17:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:02.421771244Z" level=info msg="Image status: 
&ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=411b7331-c8d7-4750-af5e-44bfa3b64546 name=/runtime.v1.ImageService/ImageStatus Feb 23 18:17:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:02.422330451Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=9be9e6e5-ca22-4513-9ff8-50722b316480 name=/runtime.v1.ImageService/ImageStatus Feb 23 18:17:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:02.422489107Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=9be9e6e5-ca22-4513-9ff8-50722b316480 name=/runtime.v1.ImageService/ImageStatus Feb 23 18:17:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:02.423108042Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=12adc27c-edd1-41f0-adfd-12312577cc87 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 18:17:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:02.423208314Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:17:02 ip-10-0-136-68 systemd[1]: Started 
crio-conmon-d965241664ee7e50681020fbd7ad0de6a381d202653d3d90dc63043df61474f0.scope. Feb 23 18:17:02 ip-10-0-136-68 systemd[1]: Started libcontainer container d965241664ee7e50681020fbd7ad0de6a381d202653d3d90dc63043df61474f0. Feb 23 18:17:02 ip-10-0-136-68 conmon[7385]: conmon d965241664ee7e506810 : Failed to write to cgroup.event_control Operation not supported Feb 23 18:17:02 ip-10-0-136-68 systemd[1]: crio-conmon-d965241664ee7e50681020fbd7ad0de6a381d202653d3d90dc63043df61474f0.scope: Deactivated successfully. Feb 23 18:17:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:02.562539484Z" level=info msg="Created container d965241664ee7e50681020fbd7ad0de6a381d202653d3d90dc63043df61474f0: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=12adc27c-edd1-41f0-adfd-12312577cc87 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 18:17:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:02.562942810Z" level=info msg="Starting container: d965241664ee7e50681020fbd7ad0de6a381d202653d3d90dc63043df61474f0" id=c1b0d1ae-8791-47cb-a0be-4616d6809a21 name=/runtime.v1.RuntimeService/StartContainer Feb 23 18:17:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:02.581918937Z" level=info msg="Started container" PID=7397 containerID=d965241664ee7e50681020fbd7ad0de6a381d202653d3d90dc63043df61474f0 description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver id=c1b0d1ae-8791-47cb-a0be-4616d6809a21 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4ac357af36e7dd4792f93249425e93fd84aed45840e9fbb3d0ae6c0af964798 Feb 23 18:17:02 ip-10-0-136-68 systemd[1]: crio-d965241664ee7e50681020fbd7ad0de6a381d202653d3d90dc63043df61474f0.scope: Deactivated successfully. 
Feb 23 18:17:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:03.124804498Z" level=warning msg="Failed to find container exit file for 1ce174eb052659672c70bc49c395d82266f93a6197cc6a017cc8742830d9ac69: timed out waiting for the condition" id=c6e5767b-e0b4-4b0c-a24e-0a5cfb97859b name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:17:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:06.876409545Z" level=warning msg="Failed to find container exit file for 03e18dbeb66d29ad27901ba4aeceb74fb07decd33751a629dcd89d8ad3d62625: timed out waiting for the condition" id=7781f226-cc4c-446c-8fca-59af150c2226 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:17:06 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:17:06.877357 2199 generic.go:332] "Generic (PLEG): container finished" podID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerID="1ce174eb052659672c70bc49c395d82266f93a6197cc6a017cc8742830d9ac69" exitCode=-1 Feb 23 18:17:06 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:17:06.877455 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerDied Data:1ce174eb052659672c70bc49c395d82266f93a6197cc6a017cc8742830d9ac69} Feb 23 18:17:06 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:17:06.877561 2199 scope.go:115] "RemoveContainer" containerID="03e18dbeb66d29ad27901ba4aeceb74fb07decd33751a629dcd89d8ad3d62625" Feb 23 18:17:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:10.637086704Z" level=warning msg="Failed to find container exit file for 03e18dbeb66d29ad27901ba4aeceb74fb07decd33751a629dcd89d8ad3d62625: timed out waiting for the condition" id=8866047a-451b-488b-89df-77053edb191d name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:17:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:11.629625844Z" level=warning msg="Failed to find container exit file for 1ce174eb052659672c70bc49c395d82266f93a6197cc6a017cc8742830d9ac69: timed out 
waiting for the condition" id=154e6b9a-0b64-4403-bb16-20006fd0d899 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:17:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:14.396988802Z" level=warning msg="Failed to find container exit file for 03e18dbeb66d29ad27901ba4aeceb74fb07decd33751a629dcd89d8ad3d62625: timed out waiting for the condition" id=c6dc4118-5e9d-4c5c-bc2a-dda60cfe2521 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:17:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:14.397468039Z" level=info msg="Removing container: 03e18dbeb66d29ad27901ba4aeceb74fb07decd33751a629dcd89d8ad3d62625" id=bcfddd22-cfcc-4709-b672-22e87191ab8a name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 18:17:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:15.367966177Z" level=warning msg="Failed to find container exit file for 03e18dbeb66d29ad27901ba4aeceb74fb07decd33751a629dcd89d8ad3d62625: timed out waiting for the condition" id=457a947f-bbc2-46b5-bcfd-ae6fdf2974ac name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:17:15 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:17:15.368971 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerStarted Data:d965241664ee7e50681020fbd7ad0de6a381d202653d3d90dc63043df61474f0} Feb 23 18:17:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:18.145033101Z" level=warning msg="Failed to find container exit file for 03e18dbeb66d29ad27901ba4aeceb74fb07decd33751a629dcd89d8ad3d62625: timed out waiting for the condition" id=bcfddd22-cfcc-4709-b672-22e87191ab8a name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 18:17:18 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-8cab09b421c850721cf431c7712c3455f82a268ed8c348ac786b7bfb03c8bbb9-merged.mount: Deactivated successfully. 
Feb 23 18:17:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:18.180604543Z" level=info msg="Removed container 03e18dbeb66d29ad27901ba4aeceb74fb07decd33751a629dcd89d8ad3d62625: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=bcfddd22-cfcc-4709-b672-22e87191ab8a name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 18:17:18 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:17:18.898011 2199 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-daemon-2fx68_ff7777c7-a1dc-413e-8da1-c4ba07527037/machine-config-daemon/1.log" Feb 23 18:17:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:22.121989284Z" level=warning msg="Failed to find container exit file for 1ce174eb052659672c70bc49c395d82266f93a6197cc6a017cc8742830d9ac69: timed out waiting for the condition" id=4dfd68c2-f3b2-4675-8cdd-8ed0f9c05433 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:17:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:17:24.872072 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:17:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:17:24.872132 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 18:17:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:17:26.292286 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is 
running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:17:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:17:26.292531 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:17:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:17:26.292710 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:17:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:17:26.292733 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:17:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:30.255322908Z" level=error msg="Failed to cleanup (probably retrying): failed to destroy network for pod sandbox 
k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(aa2f6c1cfe2015e661750ad0d84d67c8d8f79e105601e0a761db6a83ba33b3f1): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" Feb 23 18:17:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:30.255782806Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:0ed794c65ef73d303a09d8c86d49e70eda3f35894c75299664000a1708b8e6ee UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/ea4d2483-ae75-4a00-beb0-3a92ac6f4d63 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:17:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:30.255815305Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:17:33 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:17:33.217531 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:17:33 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:17:33.217900 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or 
running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:17:33 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:17:33.218211 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:17:33 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:17:33.218277 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:17:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:34.247124487Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(4f2010d5f1b97af8b39a32698cc7297a31f70c8a3001102435fbcf4570ab75c7): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for 
ReadinessIndicatorFile (on del): timed out waiting for the condition" id=4164d767-b760-4a51-b218-84924216158a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:17:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:34.247176487Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 4f2010d5f1b97af8b39a32698cc7297a31f70c8a3001102435fbcf4570ab75c7" id=4164d767-b760-4a51-b218-84924216158a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:17:34 ip-10-0-136-68 systemd[1]: run-utsns-ea126df8\x2d84e6\x2d427d\x2db8df\x2d3af2846fc64d.mount: Deactivated successfully. Feb 23 18:17:34 ip-10-0-136-68 systemd[1]: run-ipcns-ea126df8\x2d84e6\x2d427d\x2db8df\x2d3af2846fc64d.mount: Deactivated successfully. Feb 23 18:17:34 ip-10-0-136-68 systemd[1]: run-netns-ea126df8\x2d84e6\x2d427d\x2db8df\x2d3af2846fc64d.mount: Deactivated successfully. Feb 23 18:17:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:34.268335623Z" level=info msg="runSandbox: deleting pod ID 4f2010d5f1b97af8b39a32698cc7297a31f70c8a3001102435fbcf4570ab75c7 from idIndex" id=4164d767-b760-4a51-b218-84924216158a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:17:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:34.268376961Z" level=info msg="runSandbox: removing pod sandbox 4f2010d5f1b97af8b39a32698cc7297a31f70c8a3001102435fbcf4570ab75c7" id=4164d767-b760-4a51-b218-84924216158a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:17:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:34.268426721Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 4f2010d5f1b97af8b39a32698cc7297a31f70c8a3001102435fbcf4570ab75c7" id=4164d767-b760-4a51-b218-84924216158a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:17:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:34.268445422Z" level=info msg="runSandbox: unmounting shmPath for sandbox 4f2010d5f1b97af8b39a32698cc7297a31f70c8a3001102435fbcf4570ab75c7" id=4164d767-b760-4a51-b218-84924216158a 
name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:17:34 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-4f2010d5f1b97af8b39a32698cc7297a31f70c8a3001102435fbcf4570ab75c7-userdata-shm.mount: Deactivated successfully. Feb 23 18:17:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:34.274314211Z" level=info msg="runSandbox: removing pod sandbox from storage: 4f2010d5f1b97af8b39a32698cc7297a31f70c8a3001102435fbcf4570ab75c7" id=4164d767-b760-4a51-b218-84924216158a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:17:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:34.275866248Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=4164d767-b760-4a51-b218-84924216158a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:17:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:34.275895349Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=4164d767-b760-4a51-b218-84924216158a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:17:34 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:17:34.276118 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(4f2010d5f1b97af8b39a32698cc7297a31f70c8a3001102435fbcf4570ab75c7): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:17:34 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:17:34.276188 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(4f2010d5f1b97af8b39a32698cc7297a31f70c8a3001102435fbcf4570ab75c7): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:17:34 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:17:34.276227 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(4f2010d5f1b97af8b39a32698cc7297a31f70c8a3001102435fbcf4570ab75c7): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:17:34 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:17:34.276335 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(4f2010d5f1b97af8b39a32698cc7297a31f70c8a3001102435fbcf4570ab75c7): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 18:17:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:17:34.872559 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:17:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:17:34.872627 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 18:17:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:35.588780725Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(224752ad08b42459eb95fbc7466cf5670a57b476321dcc871e08cc8202067bfc): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=60e80c28-335a-46f7-a273-e6bb353dc857 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:17:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:35.588825332Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 224752ad08b42459eb95fbc7466cf5670a57b476321dcc871e08cc8202067bfc" id=60e80c28-335a-46f7-a273-e6bb353dc857 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:17:35 ip-10-0-136-68 
crio[2158]: time="2023-02-23 18:17:35.590918679Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(7008b276cec38ddc180f5ef38e69ff767de7fbad9aadced6daafa595b62cf729): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a72acbab-2f0a-4556-80f2-d7bf452d5857 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:17:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:35.590949993Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 7008b276cec38ddc180f5ef38e69ff767de7fbad9aadced6daafa595b62cf729" id=a72acbab-2f0a-4556-80f2-d7bf452d5857 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:17:35 ip-10-0-136-68 systemd[1]: run-utsns-79c15e98\x2d7a8e\x2d40e5\x2d9007\x2d77a15e481572.mount: Deactivated successfully. Feb 23 18:17:35 ip-10-0-136-68 systemd[1]: run-utsns-8ac3cb41\x2d81ab\x2d40b4\x2da71f\x2d137d0423459b.mount: Deactivated successfully. Feb 23 18:17:35 ip-10-0-136-68 systemd[1]: run-ipcns-8ac3cb41\x2d81ab\x2d40b4\x2da71f\x2d137d0423459b.mount: Deactivated successfully. Feb 23 18:17:35 ip-10-0-136-68 systemd[1]: run-ipcns-79c15e98\x2d7a8e\x2d40e5\x2d9007\x2d77a15e481572.mount: Deactivated successfully. Feb 23 18:17:35 ip-10-0-136-68 systemd[1]: run-netns-79c15e98\x2d7a8e\x2d40e5\x2d9007\x2d77a15e481572.mount: Deactivated successfully. Feb 23 18:17:35 ip-10-0-136-68 systemd[1]: run-netns-8ac3cb41\x2d81ab\x2d40b4\x2da71f\x2d137d0423459b.mount: Deactivated successfully. 
Feb 23 18:17:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:35.621345624Z" level=info msg="runSandbox: deleting pod ID 7008b276cec38ddc180f5ef38e69ff767de7fbad9aadced6daafa595b62cf729 from idIndex" id=a72acbab-2f0a-4556-80f2-d7bf452d5857 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:17:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:35.621395455Z" level=info msg="runSandbox: removing pod sandbox 7008b276cec38ddc180f5ef38e69ff767de7fbad9aadced6daafa595b62cf729" id=a72acbab-2f0a-4556-80f2-d7bf452d5857 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:17:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:35.621345968Z" level=info msg="runSandbox: deleting pod ID 224752ad08b42459eb95fbc7466cf5670a57b476321dcc871e08cc8202067bfc from idIndex" id=60e80c28-335a-46f7-a273-e6bb353dc857 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:17:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:35.621441258Z" level=info msg="runSandbox: removing pod sandbox 224752ad08b42459eb95fbc7466cf5670a57b476321dcc871e08cc8202067bfc" id=60e80c28-335a-46f7-a273-e6bb353dc857 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:17:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:35.621464997Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 224752ad08b42459eb95fbc7466cf5670a57b476321dcc871e08cc8202067bfc" id=60e80c28-335a-46f7-a273-e6bb353dc857 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:17:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:35.621479892Z" level=info msg="runSandbox: unmounting shmPath for sandbox 224752ad08b42459eb95fbc7466cf5670a57b476321dcc871e08cc8202067bfc" id=60e80c28-335a-46f7-a273-e6bb353dc857 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:17:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:35.621446069Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 7008b276cec38ddc180f5ef38e69ff767de7fbad9aadced6daafa595b62cf729" 
id=a72acbab-2f0a-4556-80f2-d7bf452d5857 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:17:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:35.621546760Z" level=info msg="runSandbox: unmounting shmPath for sandbox 7008b276cec38ddc180f5ef38e69ff767de7fbad9aadced6daafa595b62cf729" id=a72acbab-2f0a-4556-80f2-d7bf452d5857 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:17:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:35.626320322Z" level=info msg="runSandbox: removing pod sandbox from storage: 224752ad08b42459eb95fbc7466cf5670a57b476321dcc871e08cc8202067bfc" id=60e80c28-335a-46f7-a273-e6bb353dc857 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:17:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:35.626340872Z" level=info msg="runSandbox: removing pod sandbox from storage: 7008b276cec38ddc180f5ef38e69ff767de7fbad9aadced6daafa595b62cf729" id=a72acbab-2f0a-4556-80f2-d7bf452d5857 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:17:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:35.627992164Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=60e80c28-335a-46f7-a273-e6bb353dc857 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:17:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:35.628041781Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=60e80c28-335a-46f7-a273-e6bb353dc857 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:17:35 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:17:35.628359 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(224752ad08b42459eb95fbc7466cf5670a57b476321dcc871e08cc8202067bfc): error adding pod openshift-dns_dns-default-657v4 to CNI network 
\"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Feb 23 18:17:35 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:17:35.628432 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(224752ad08b42459eb95fbc7466cf5670a57b476321dcc871e08cc8202067bfc): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:17:35 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:17:35.628468 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(224752ad08b42459eb95fbc7466cf5670a57b476321dcc871e08cc8202067bfc): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:17:35 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:17:35.628891 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(224752ad08b42459eb95fbc7466cf5670a57b476321dcc871e08cc8202067bfc): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f
Feb 23 18:17:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:35.629583507Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=a72acbab-2f0a-4556-80f2-d7bf452d5857 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:17:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:35.629612498Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=a72acbab-2f0a-4556-80f2-d7bf452d5857 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:17:35 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:17:35.629771 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(7008b276cec38ddc180f5ef38e69ff767de7fbad9aadced6daafa595b62cf729): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition"
Feb 23 18:17:35 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:17:35.629807 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(7008b276cec38ddc180f5ef38e69ff767de7fbad9aadced6daafa595b62cf729): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 18:17:35 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:17:35.629829 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(7008b276cec38ddc180f5ef38e69ff767de7fbad9aadced6daafa595b62cf729): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 18:17:35 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:17:35.629883 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(7008b276cec38ddc180f5ef38e69ff767de7fbad9aadced6daafa595b62cf729): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77
Feb 23 18:17:36 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-7008b276cec38ddc180f5ef38e69ff767de7fbad9aadced6daafa595b62cf729-userdata-shm.mount: Deactivated successfully.
Feb 23 18:17:36 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-224752ad08b42459eb95fbc7466cf5670a57b476321dcc871e08cc8202067bfc-userdata-shm.mount: Deactivated successfully.
Feb 23 18:17:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:38.246451060Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(a64b42018dde3d2b8eecf78975d039d0e793bf63d63ab2604029f3beeac08eb9): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d1cb51d0-c7ef-4e57-81d0-f8740bc15fca name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:17:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:38.246501593Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a64b42018dde3d2b8eecf78975d039d0e793bf63d63ab2604029f3beeac08eb9" id=d1cb51d0-c7ef-4e57-81d0-f8740bc15fca name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:17:38 ip-10-0-136-68 systemd[1]: run-utsns-8dea6201\x2d4e1a\x2d40da\x2db2e2\x2d7164a649d0bf.mount: Deactivated successfully.
Feb 23 18:17:38 ip-10-0-136-68 systemd[1]: run-ipcns-8dea6201\x2d4e1a\x2d40da\x2db2e2\x2d7164a649d0bf.mount: Deactivated successfully.
Feb 23 18:17:38 ip-10-0-136-68 systemd[1]: run-netns-8dea6201\x2d4e1a\x2d40da\x2db2e2\x2d7164a649d0bf.mount: Deactivated successfully.
Feb 23 18:17:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:38.269316026Z" level=info msg="runSandbox: deleting pod ID a64b42018dde3d2b8eecf78975d039d0e793bf63d63ab2604029f3beeac08eb9 from idIndex" id=d1cb51d0-c7ef-4e57-81d0-f8740bc15fca name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:17:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:38.269349031Z" level=info msg="runSandbox: removing pod sandbox a64b42018dde3d2b8eecf78975d039d0e793bf63d63ab2604029f3beeac08eb9" id=d1cb51d0-c7ef-4e57-81d0-f8740bc15fca name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:17:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:38.269376826Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a64b42018dde3d2b8eecf78975d039d0e793bf63d63ab2604029f3beeac08eb9" id=d1cb51d0-c7ef-4e57-81d0-f8740bc15fca name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:17:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:38.269390147Z" level=info msg="runSandbox: unmounting shmPath for sandbox a64b42018dde3d2b8eecf78975d039d0e793bf63d63ab2604029f3beeac08eb9" id=d1cb51d0-c7ef-4e57-81d0-f8740bc15fca name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:17:38 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-a64b42018dde3d2b8eecf78975d039d0e793bf63d63ab2604029f3beeac08eb9-userdata-shm.mount: Deactivated successfully.
Feb 23 18:17:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:38.288328108Z" level=info msg="runSandbox: removing pod sandbox from storage: a64b42018dde3d2b8eecf78975d039d0e793bf63d63ab2604029f3beeac08eb9" id=d1cb51d0-c7ef-4e57-81d0-f8740bc15fca name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:17:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:38.289806667Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=d1cb51d0-c7ef-4e57-81d0-f8740bc15fca name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:17:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:38.289833362Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=d1cb51d0-c7ef-4e57-81d0-f8740bc15fca name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:17:38 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:17:38.289995 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(a64b42018dde3d2b8eecf78975d039d0e793bf63d63ab2604029f3beeac08eb9): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition"
Feb 23 18:17:38 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:17:38.290047 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(a64b42018dde3d2b8eecf78975d039d0e793bf63d63ab2604029f3beeac08eb9): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz"
Feb 23 18:17:38 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:17:38.290068 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(a64b42018dde3d2b8eecf78975d039d0e793bf63d63ab2604029f3beeac08eb9): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz"
Feb 23 18:17:38 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:17:38.290125 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(a64b42018dde3d2b8eecf78975d039d0e793bf63d63ab2604029f3beeac08eb9): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7
Feb 23 18:17:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:17:44.872734 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 18:17:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:17:44.872792 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 18:17:46 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:17:46.217079 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-657v4"
Feb 23 18:17:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:46.217415861Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=25662df8-b2cc-446e-88a6-b92f322c0c37 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:17:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:46.217480639Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 18:17:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:46.223212570Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:612a743ebaab14aecd71453cdb2199381520d329d5180f178141e22468852881 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/0ae2aa3a-0b6a-4816-bb08-e27eb213d012 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:17:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:46.223238921Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:17:47 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:17:47.217151 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 18:17:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:47.217572679Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=35d4d1c8-1f3b-4440-bbfb-f7d3eadef322 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:17:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:47.217649263Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 18:17:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:47.223133536Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:c78d041405f5d458a90a9f8efc6971d35d63cbaf6149bdf0ed94922296046482 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/3c7da312-ecbc-403e-9032-4c238937ceac Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:17:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:47.223161869Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:17:49 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:17:49.216346 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz"
Feb 23 18:17:49 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:17:49.216446 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk"
Feb 23 18:17:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:49.216698922Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=7de0a707-fc33-4c56-a348-9fa8c742dfd6 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:17:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:49.216743566Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=3e50252d-ea64-400e-b974-a41ff0fea309 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:17:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:49.216810277Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 18:17:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:49.216747563Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 18:17:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:49.223589929Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:06737a79d08b4c777de18244863964c17749d5a06a8dfabcbbd06c12ea95964c UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/1d687544-3fc9-4b1b-9100-4e33c2ed13b1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:17:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:49.223641809Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:17:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:49.226325349Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:379d36608714589913c266cafcca2ac996b125b6727bd436be5364329e07acf0 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/11498e30-43ab-4d6e-8d4f-df282eff6474 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:17:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:17:49.226441969Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:17:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:17:54.872556 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 18:17:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:17:54.872610 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 18:17:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:17:56.291780 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:17:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:17:56.292062 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:17:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:17:56.292322 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:17:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:17:56.292360 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 18:18:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:18:04.872081 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 18:18:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:18:04.872150 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 18:18:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:18:04.872184 2199 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7"
Feb 23 18:18:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:18:04.872803 2199 kuberuntime_manager.go:659] "Message for Container of pod" containerName="csi-driver" containerStatusID={Type:cri-o ID:d965241664ee7e50681020fbd7ad0de6a381d202653d3d90dc63043df61474f0} pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" containerMessage="Container csi-driver failed liveness probe, will be restarted"
Feb 23 18:18:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:18:04.873000 2199 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" containerID="cri-o://d965241664ee7e50681020fbd7ad0de6a381d202653d3d90dc63043df61474f0" gracePeriod=30
Feb 23 18:18:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:18:04.873232974Z" level=info msg="Stopping container: d965241664ee7e50681020fbd7ad0de6a381d202653d3d90dc63043df61474f0 (timeout: 30s)" id=52163f53-994e-4e7e-bc93-5e1b9ccbb44c name=/runtime.v1.RuntimeService/StopContainer
Feb 23 18:18:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:18:08.633068952Z" level=warning msg="Failed to find container exit file for d965241664ee7e50681020fbd7ad0de6a381d202653d3d90dc63043df61474f0: timed out waiting for the condition" id=52163f53-994e-4e7e-bc93-5e1b9ccbb44c name=/runtime.v1.RuntimeService/StopContainer
Feb 23 18:18:08 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-8cb3312dd5e1df5726aa6a6e96f3122b18bbf93abdb6bc63883cd3ed296e5f56-merged.mount: Deactivated successfully.
Feb 23 18:18:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:18:12.413117504Z" level=warning msg="Failed to find container exit file for d965241664ee7e50681020fbd7ad0de6a381d202653d3d90dc63043df61474f0: timed out waiting for the condition" id=52163f53-994e-4e7e-bc93-5e1b9ccbb44c name=/runtime.v1.RuntimeService/StopContainer
Feb 23 18:18:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:18:12.416327498Z" level=info msg="Stopped container d965241664ee7e50681020fbd7ad0de6a381d202653d3d90dc63043df61474f0: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=52163f53-994e-4e7e-bc93-5e1b9ccbb44c name=/runtime.v1.RuntimeService/StopContainer
Feb 23 18:18:12 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:18:12.416883 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 18:18:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:18:12.949153641Z" level=warning msg="Failed to find container exit file for d965241664ee7e50681020fbd7ad0de6a381d202653d3d90dc63043df61474f0: timed out waiting for the condition" id=38cfd8b9-20b0-4e91-a36c-e21db3e35022 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 18:18:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:18:15.267042590Z" level=info msg="NetworkStart: stopping network for sandbox 0ed794c65ef73d303a09d8c86d49e70eda3f35894c75299664000a1708b8e6ee" id=f53ee54c-c4fb-4230-a655-bfa9c21a89d2 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:18:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:18:15.267158045Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:0ed794c65ef73d303a09d8c86d49e70eda3f35894c75299664000a1708b8e6ee UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/ea4d2483-ae75-4a00-beb0-3a92ac6f4d63 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:18:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:18:15.267184959Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 18:18:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:18:15.267195037Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 18:18:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:18:15.267204771Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:18:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:18:16.697914116Z" level=warning msg="Failed to find container exit file for 1ce174eb052659672c70bc49c395d82266f93a6197cc6a017cc8742830d9ac69: timed out waiting for the condition" id=a2363bd5-6780-4bc4-8095-db7161079234 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 18:18:16 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:18:16.698856 2199 generic.go:332] "Generic (PLEG): container finished" podID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerID="d965241664ee7e50681020fbd7ad0de6a381d202653d3d90dc63043df61474f0" exitCode=-1
Feb 23 18:18:16 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:18:16.698889 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerDied Data:d965241664ee7e50681020fbd7ad0de6a381d202653d3d90dc63043df61474f0}
Feb 23 18:18:16 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:18:16.698927 2199 scope.go:115] "RemoveContainer" containerID="1ce174eb052659672c70bc49c395d82266f93a6197cc6a017cc8742830d9ac69"
Feb 23 18:18:17 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:18:17.700634 2199 scope.go:115] "RemoveContainer" containerID="d965241664ee7e50681020fbd7ad0de6a381d202653d3d90dc63043df61474f0"
Feb 23 18:18:17 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:18:17.701008 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 18:18:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:18:20.447950603Z" level=warning msg="Failed to find container exit file for 1ce174eb052659672c70bc49c395d82266f93a6197cc6a017cc8742830d9ac69: timed out waiting for the condition" id=46de9eb2-7cf2-4c9e-b235-4bbf86ea3ac4 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 18:18:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:18:24.210018548Z" level=warning msg="Failed to find container exit file for 1ce174eb052659672c70bc49c395d82266f93a6197cc6a017cc8742830d9ac69: timed out waiting for the condition" id=18df3e25-6c09-4308-b259-b179ee3b3837 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 18:18:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:18:24.210623505Z" level=info msg="Removing container: 1ce174eb052659672c70bc49c395d82266f93a6197cc6a017cc8742830d9ac69" id=46a551d3-2cf8-4115-b051-b5a7d3c1b3b4 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 18:18:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:18:26.292189 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:18:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:18:26.292473 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:18:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:18:26.292691 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:18:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:18:26.292738 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 18:18:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:18:27.958951197Z" level=warning msg="Failed to find container exit file for 1ce174eb052659672c70bc49c395d82266f93a6197cc6a017cc8742830d9ac69: timed out waiting for the condition" id=46a551d3-2cf8-4115-b051-b5a7d3c1b3b4 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 18:18:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:18:27.970988688Z" level=info msg="Removed container 1ce174eb052659672c70bc49c395d82266f93a6197cc6a017cc8742830d9ac69: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=46a551d3-2cf8-4115-b051-b5a7d3c1b3b4 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 18:18:29 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:18:29.217168 2199 scope.go:115] "RemoveContainer" containerID="d965241664ee7e50681020fbd7ad0de6a381d202653d3d90dc63043df61474f0"
Feb 23 18:18:29 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:18:29.217587 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 18:18:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:18:31.234763473Z" level=info msg="NetworkStart: stopping network for sandbox 612a743ebaab14aecd71453cdb2199381520d329d5180f178141e22468852881" id=25662df8-b2cc-446e-88a6-b92f322c0c37 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:18:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:18:31.234881474Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:612a743ebaab14aecd71453cdb2199381520d329d5180f178141e22468852881 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/0ae2aa3a-0b6a-4816-bb08-e27eb213d012 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:18:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:18:31.234912411Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 18:18:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:18:31.234922647Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 18:18:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:18:31.234932091Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:18:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:18:32.235334678Z" level=info msg="NetworkStart: stopping network for sandbox c78d041405f5d458a90a9f8efc6971d35d63cbaf6149bdf0ed94922296046482" id=35d4d1c8-1f3b-4440-bbfb-f7d3eadef322 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:18:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:18:32.235442237Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:c78d041405f5d458a90a9f8efc6971d35d63cbaf6149bdf0ed94922296046482 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/3c7da312-ecbc-403e-9032-4c238937ceac Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:18:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:18:32.235475541Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 18:18:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:18:32.235483588Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 18:18:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:18:32.235490577Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:18:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:18:32.506327131Z" level=warning msg="Failed to find container exit file for d965241664ee7e50681020fbd7ad0de6a381d202653d3d90dc63043df61474f0: timed out waiting for the condition" id=cfcf4484-b8d3-4361-a554-a9fd491f17ed name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 18:18:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:18:34.238018715Z" level=info msg="NetworkStart: stopping network for sandbox 06737a79d08b4c777de18244863964c17749d5a06a8dfabcbbd06c12ea95964c" id=7de0a707-fc33-4c56-a348-9fa8c742dfd6 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:18:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:18:34.238130606Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:06737a79d08b4c777de18244863964c17749d5a06a8dfabcbbd06c12ea95964c UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/1d687544-3fc9-4b1b-9100-4e33c2ed13b1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:18:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:18:34.238157565Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 18:18:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:18:34.238164282Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 18:18:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:18:34.238170864Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:18:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:18:34.239855511Z" level=info msg="NetworkStart: stopping network for sandbox 379d36608714589913c266cafcca2ac996b125b6727bd436be5364329e07acf0" id=3e50252d-ea64-400e-b974-a41ff0fea309 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:18:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:18:34.239950107Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:379d36608714589913c266cafcca2ac996b125b6727bd436be5364329e07acf0 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/11498e30-43ab-4d6e-8d4f-df282eff6474 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:18:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:18:34.239985628Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 18:18:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:18:34.239997857Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 18:18:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:18:34.240008241Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:18:42 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:18:42.216534 2199 scope.go:115] "RemoveContainer" containerID="d965241664ee7e50681020fbd7ad0de6a381d202653d3d90dc63043df61474f0"
Feb 23 18:18:42 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:18:42.217090 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 18:18:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:18:54.217034 2199 scope.go:115] "RemoveContainer" containerID="d965241664ee7e50681020fbd7ad0de6a381d202653d3d90dc63043df61474f0"
Feb 23 18:18:54 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:18:54.217465 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7"
podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:18:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:18:56.292115 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:18:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:18:56.292341 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:18:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:18:56.292570 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:18:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:18:56.292597 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process 
not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:18:58 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:18:58.217512 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:18:58 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:18:58.218222 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:18:58 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:18:58.218537 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:18:58 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:18:58.218576 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:19:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:00.276708583Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(0ed794c65ef73d303a09d8c86d49e70eda3f35894c75299664000a1708b8e6ee): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f53ee54c-c4fb-4230-a655-bfa9c21a89d2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:19:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:00.276761718Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 0ed794c65ef73d303a09d8c86d49e70eda3f35894c75299664000a1708b8e6ee" id=f53ee54c-c4fb-4230-a655-bfa9c21a89d2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:19:00 ip-10-0-136-68 systemd[1]: run-utsns-ea4d2483\x2dae75\x2d4a00\x2dbeb0\x2d3a92ac6f4d63.mount: Deactivated successfully. Feb 23 18:19:00 ip-10-0-136-68 systemd[1]: run-ipcns-ea4d2483\x2dae75\x2d4a00\x2dbeb0\x2d3a92ac6f4d63.mount: Deactivated successfully. Feb 23 18:19:00 ip-10-0-136-68 systemd[1]: run-netns-ea4d2483\x2dae75\x2d4a00\x2dbeb0\x2d3a92ac6f4d63.mount: Deactivated successfully. 
Feb 23 18:19:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:00.304325946Z" level=info msg="runSandbox: deleting pod ID 0ed794c65ef73d303a09d8c86d49e70eda3f35894c75299664000a1708b8e6ee from idIndex" id=f53ee54c-c4fb-4230-a655-bfa9c21a89d2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:19:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:00.304359348Z" level=info msg="runSandbox: removing pod sandbox 0ed794c65ef73d303a09d8c86d49e70eda3f35894c75299664000a1708b8e6ee" id=f53ee54c-c4fb-4230-a655-bfa9c21a89d2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:19:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:00.304392960Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 0ed794c65ef73d303a09d8c86d49e70eda3f35894c75299664000a1708b8e6ee" id=f53ee54c-c4fb-4230-a655-bfa9c21a89d2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:19:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:00.304418830Z" level=info msg="runSandbox: unmounting shmPath for sandbox 0ed794c65ef73d303a09d8c86d49e70eda3f35894c75299664000a1708b8e6ee" id=f53ee54c-c4fb-4230-a655-bfa9c21a89d2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:19:00 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-0ed794c65ef73d303a09d8c86d49e70eda3f35894c75299664000a1708b8e6ee-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:19:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:00.310299320Z" level=info msg="runSandbox: removing pod sandbox from storage: 0ed794c65ef73d303a09d8c86d49e70eda3f35894c75299664000a1708b8e6ee" id=f53ee54c-c4fb-4230-a655-bfa9c21a89d2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:19:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:00.311834977Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=f53ee54c-c4fb-4230-a655-bfa9c21a89d2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:19:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:00.311862017Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=f53ee54c-c4fb-4230-a655-bfa9c21a89d2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:19:00 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:19:00.312038 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(0ed794c65ef73d303a09d8c86d49e70eda3f35894c75299664000a1708b8e6ee): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:19:00 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:19:00.312088 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(0ed794c65ef73d303a09d8c86d49e70eda3f35894c75299664000a1708b8e6ee): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:19:00 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:19:00.312110 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(0ed794c65ef73d303a09d8c86d49e70eda3f35894c75299664000a1708b8e6ee): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:19:00 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:19:00.312161 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(0ed794c65ef73d303a09d8c86d49e70eda3f35894c75299664000a1708b8e6ee): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 18:19:09 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:19:09.217095 2199 scope.go:115] "RemoveContainer" containerID="d965241664ee7e50681020fbd7ad0de6a381d202653d3d90dc63043df61474f0" Feb 23 18:19:09 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:19:09.217681 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:19:12 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:19:12.216710 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:19:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:12.217144077Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=6339369a-7ba8-4418-a229-b6519db9ec11 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:19:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:12.217219732Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:19:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:12.222978242Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:9d28fb5efcfb14679e5cfe3e137a8e64d2e703a69987fab2b638de20a0214732 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/fd72aa48-dee3-45cb-aa3e-c8994d92b950 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:19:12 ip-10-0-136-68 
crio[2158]: time="2023-02-23 18:19:12.223004633Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:19:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:16.245129372Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(612a743ebaab14aecd71453cdb2199381520d329d5180f178141e22468852881): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=25662df8-b2cc-446e-88a6-b92f322c0c37 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:19:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:16.245175426Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 612a743ebaab14aecd71453cdb2199381520d329d5180f178141e22468852881" id=25662df8-b2cc-446e-88a6-b92f322c0c37 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:19:16 ip-10-0-136-68 systemd[1]: run-utsns-0ae2aa3a\x2d0b6a\x2d4816\x2dbb08\x2de27eb213d012.mount: Deactivated successfully. Feb 23 18:19:16 ip-10-0-136-68 systemd[1]: run-ipcns-0ae2aa3a\x2d0b6a\x2d4816\x2dbb08\x2de27eb213d012.mount: Deactivated successfully. Feb 23 18:19:16 ip-10-0-136-68 systemd[1]: run-netns-0ae2aa3a\x2d0b6a\x2d4816\x2dbb08\x2de27eb213d012.mount: Deactivated successfully. 
Feb 23 18:19:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:16.268323259Z" level=info msg="runSandbox: deleting pod ID 612a743ebaab14aecd71453cdb2199381520d329d5180f178141e22468852881 from idIndex" id=25662df8-b2cc-446e-88a6-b92f322c0c37 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:19:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:16.268360226Z" level=info msg="runSandbox: removing pod sandbox 612a743ebaab14aecd71453cdb2199381520d329d5180f178141e22468852881" id=25662df8-b2cc-446e-88a6-b92f322c0c37 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:19:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:16.268386066Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 612a743ebaab14aecd71453cdb2199381520d329d5180f178141e22468852881" id=25662df8-b2cc-446e-88a6-b92f322c0c37 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:19:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:16.268399994Z" level=info msg="runSandbox: unmounting shmPath for sandbox 612a743ebaab14aecd71453cdb2199381520d329d5180f178141e22468852881" id=25662df8-b2cc-446e-88a6-b92f322c0c37 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:19:16 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-612a743ebaab14aecd71453cdb2199381520d329d5180f178141e22468852881-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:19:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:16.275306197Z" level=info msg="runSandbox: removing pod sandbox from storage: 612a743ebaab14aecd71453cdb2199381520d329d5180f178141e22468852881" id=25662df8-b2cc-446e-88a6-b92f322c0c37 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:19:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:16.276888261Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=25662df8-b2cc-446e-88a6-b92f322c0c37 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:19:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:16.276922856Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=25662df8-b2cc-446e-88a6-b92f322c0c37 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:19:16 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:19:16.277149 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(612a743ebaab14aecd71453cdb2199381520d329d5180f178141e22468852881): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:19:16 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:19:16.277221 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(612a743ebaab14aecd71453cdb2199381520d329d5180f178141e22468852881): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:19:16 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:19:16.277275 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(612a743ebaab14aecd71453cdb2199381520d329d5180f178141e22468852881): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:19:16 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:19:16.277349 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(612a743ebaab14aecd71453cdb2199381520d329d5180f178141e22468852881): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 18:19:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:17.245694491Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(c78d041405f5d458a90a9f8efc6971d35d63cbaf6149bdf0ed94922296046482): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=35d4d1c8-1f3b-4440-bbfb-f7d3eadef322 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:19:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:17.245751282Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox c78d041405f5d458a90a9f8efc6971d35d63cbaf6149bdf0ed94922296046482" id=35d4d1c8-1f3b-4440-bbfb-f7d3eadef322 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:19:17 ip-10-0-136-68 systemd[1]: run-utsns-3c7da312\x2decbc\x2d403e\x2d9032\x2d4c238937ceac.mount: Deactivated successfully. Feb 23 18:19:17 ip-10-0-136-68 systemd[1]: run-ipcns-3c7da312\x2decbc\x2d403e\x2d9032\x2d4c238937ceac.mount: Deactivated successfully. Feb 23 18:19:17 ip-10-0-136-68 systemd[1]: run-netns-3c7da312\x2decbc\x2d403e\x2d9032\x2d4c238937ceac.mount: Deactivated successfully. 
Feb 23 18:19:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:17.263321259Z" level=info msg="runSandbox: deleting pod ID c78d041405f5d458a90a9f8efc6971d35d63cbaf6149bdf0ed94922296046482 from idIndex" id=35d4d1c8-1f3b-4440-bbfb-f7d3eadef322 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:19:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:17.263357750Z" level=info msg="runSandbox: removing pod sandbox c78d041405f5d458a90a9f8efc6971d35d63cbaf6149bdf0ed94922296046482" id=35d4d1c8-1f3b-4440-bbfb-f7d3eadef322 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:19:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:17.263386682Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox c78d041405f5d458a90a9f8efc6971d35d63cbaf6149bdf0ed94922296046482" id=35d4d1c8-1f3b-4440-bbfb-f7d3eadef322 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:19:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:17.263399492Z" level=info msg="runSandbox: unmounting shmPath for sandbox c78d041405f5d458a90a9f8efc6971d35d63cbaf6149bdf0ed94922296046482" id=35d4d1c8-1f3b-4440-bbfb-f7d3eadef322 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:19:17 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-c78d041405f5d458a90a9f8efc6971d35d63cbaf6149bdf0ed94922296046482-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:19:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:17.267296380Z" level=info msg="runSandbox: removing pod sandbox from storage: c78d041405f5d458a90a9f8efc6971d35d63cbaf6149bdf0ed94922296046482" id=35d4d1c8-1f3b-4440-bbfb-f7d3eadef322 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:19:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:17.268797846Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=35d4d1c8-1f3b-4440-bbfb-f7d3eadef322 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:19:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:17.268827238Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=35d4d1c8-1f3b-4440-bbfb-f7d3eadef322 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:19:17 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:19:17.269052 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(c78d041405f5d458a90a9f8efc6971d35d63cbaf6149bdf0ed94922296046482): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:19:17 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:19:17.269130 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(c78d041405f5d458a90a9f8efc6971d35d63cbaf6149bdf0ed94922296046482): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:19:17 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:19:17.269168 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(c78d041405f5d458a90a9f8efc6971d35d63cbaf6149bdf0ed94922296046482): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:19:17 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:19:17.269278 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(c78d041405f5d458a90a9f8efc6971d35d63cbaf6149bdf0ed94922296046482): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 18:19:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:19.248144100Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(06737a79d08b4c777de18244863964c17749d5a06a8dfabcbbd06c12ea95964c): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=7de0a707-fc33-4c56-a348-9fa8c742dfd6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:19:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:19.248205425Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 06737a79d08b4c777de18244863964c17749d5a06a8dfabcbbd06c12ea95964c" id=7de0a707-fc33-4c56-a348-9fa8c742dfd6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:19:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:19.248990920Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(379d36608714589913c266cafcca2ac996b125b6727bd436be5364329e07acf0): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" 
id=3e50252d-ea64-400e-b974-a41ff0fea309 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:19:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:19.249164084Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 379d36608714589913c266cafcca2ac996b125b6727bd436be5364329e07acf0" id=3e50252d-ea64-400e-b974-a41ff0fea309 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:19:19 ip-10-0-136-68 systemd[1]: run-utsns-1d687544\x2d3fc9\x2d4b1b\x2d9100\x2d4e33c2ed13b1.mount: Deactivated successfully. Feb 23 18:19:19 ip-10-0-136-68 systemd[1]: run-utsns-11498e30\x2d43ab\x2d4d6e\x2d8d4f\x2ddf282eff6474.mount: Deactivated successfully. Feb 23 18:19:19 ip-10-0-136-68 systemd[1]: run-ipcns-1d687544\x2d3fc9\x2d4b1b\x2d9100\x2d4e33c2ed13b1.mount: Deactivated successfully. Feb 23 18:19:19 ip-10-0-136-68 systemd[1]: run-ipcns-11498e30\x2d43ab\x2d4d6e\x2d8d4f\x2ddf282eff6474.mount: Deactivated successfully. Feb 23 18:19:19 ip-10-0-136-68 systemd[1]: run-netns-1d687544\x2d3fc9\x2d4b1b\x2d9100\x2d4e33c2ed13b1.mount: Deactivated successfully. Feb 23 18:19:19 ip-10-0-136-68 systemd[1]: run-netns-11498e30\x2d43ab\x2d4d6e\x2d8d4f\x2ddf282eff6474.mount: Deactivated successfully. 
Feb 23 18:19:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:19.263326703Z" level=info msg="runSandbox: deleting pod ID 06737a79d08b4c777de18244863964c17749d5a06a8dfabcbbd06c12ea95964c from idIndex" id=7de0a707-fc33-4c56-a348-9fa8c742dfd6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:19:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:19.263368601Z" level=info msg="runSandbox: removing pod sandbox 06737a79d08b4c777de18244863964c17749d5a06a8dfabcbbd06c12ea95964c" id=7de0a707-fc33-4c56-a348-9fa8c742dfd6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:19:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:19.263401969Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 06737a79d08b4c777de18244863964c17749d5a06a8dfabcbbd06c12ea95964c" id=7de0a707-fc33-4c56-a348-9fa8c742dfd6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:19:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:19.263422389Z" level=info msg="runSandbox: unmounting shmPath for sandbox 06737a79d08b4c777de18244863964c17749d5a06a8dfabcbbd06c12ea95964c" id=7de0a707-fc33-4c56-a348-9fa8c742dfd6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:19:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:19.270306785Z" level=info msg="runSandbox: removing pod sandbox from storage: 06737a79d08b4c777de18244863964c17749d5a06a8dfabcbbd06c12ea95964c" id=7de0a707-fc33-4c56-a348-9fa8c742dfd6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:19:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:19.271328316Z" level=info msg="runSandbox: deleting pod ID 379d36608714589913c266cafcca2ac996b125b6727bd436be5364329e07acf0 from idIndex" id=3e50252d-ea64-400e-b974-a41ff0fea309 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:19:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:19.271369114Z" level=info msg="runSandbox: removing pod sandbox 379d36608714589913c266cafcca2ac996b125b6727bd436be5364329e07acf0" id=3e50252d-ea64-400e-b974-a41ff0fea309 
name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:19:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:19.271400500Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 379d36608714589913c266cafcca2ac996b125b6727bd436be5364329e07acf0" id=3e50252d-ea64-400e-b974-a41ff0fea309 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:19:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:19.271421463Z" level=info msg="runSandbox: unmounting shmPath for sandbox 379d36608714589913c266cafcca2ac996b125b6727bd436be5364329e07acf0" id=3e50252d-ea64-400e-b974-a41ff0fea309 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:19:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:19.271887311Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=7de0a707-fc33-4c56-a348-9fa8c742dfd6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:19:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:19.271915382Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=7de0a707-fc33-4c56-a348-9fa8c742dfd6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:19:19 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:19:19.272150 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(06737a79d08b4c777de18244863964c17749d5a06a8dfabcbbd06c12ea95964c): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Feb 23 18:19:19 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:19:19.272213 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(06737a79d08b4c777de18244863964c17749d5a06a8dfabcbbd06c12ea95964c): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:19:19 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:19:19.272237 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(06737a79d08b4c777de18244863964c17749d5a06a8dfabcbbd06c12ea95964c): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:19:19 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:19:19.272333 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(06737a79d08b4c777de18244863964c17749d5a06a8dfabcbbd06c12ea95964c): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 18:19:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:19.278310729Z" level=info msg="runSandbox: removing pod sandbox from storage: 379d36608714589913c266cafcca2ac996b125b6727bd436be5364329e07acf0" id=3e50252d-ea64-400e-b974-a41ff0fea309 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:19:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:19.279658824Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=3e50252d-ea64-400e-b974-a41ff0fea309 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:19:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:19.279685780Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=3e50252d-ea64-400e-b974-a41ff0fea309 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:19:19 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:19:19.279901 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(379d36608714589913c266cafcca2ac996b125b6727bd436be5364329e07acf0): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:19:19 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:19:19.279963 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(379d36608714589913c266cafcca2ac996b125b6727bd436be5364329e07acf0): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:19:19 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:19:19.280012 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(379d36608714589913c266cafcca2ac996b125b6727bd436be5364329e07acf0): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:19:19 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:19:19.280091 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(379d36608714589913c266cafcca2ac996b125b6727bd436be5364329e07acf0): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 18:19:20 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-379d36608714589913c266cafcca2ac996b125b6727bd436be5364329e07acf0-userdata-shm.mount: Deactivated successfully. Feb 23 18:19:20 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-06737a79d08b4c777de18244863964c17749d5a06a8dfabcbbd06c12ea95964c-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:19:23 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:19:23.217028 2199 scope.go:115] "RemoveContainer" containerID="d965241664ee7e50681020fbd7ad0de6a381d202653d3d90dc63043df61474f0" Feb 23 18:19:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:19:23.217565 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:19:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:19:26.291997 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:19:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:19:26.292229 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:19:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:19:26.292474 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:19:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:19:26.292502 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:19:27 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:19:27.217011 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 18:19:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:27.217426202Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=fad4d640-44f7-4b18-8ca5-dfeb0e784d7f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:19:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:27.217493302Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:19:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:27.222929058Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:f29484a6f3dac408e91b8a01cf55a7e62ef1d9571d641f9da62e1cefb0f0440c UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/d44049a5-a473-4063-bf71-91647225579f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:19:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:27.222955723Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to 
CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:19:32 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:19:32.216901 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:19:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:32.217352613Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=e88c32f9-e153-4f0c-bfaa-368d3bd9d49b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:19:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:32.217422737Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:19:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:32.223080742Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:8174ad8738a7842d04d6e4df223563656e5715e1564c3734923b6dea7184c702 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/9fddec73-d234-45d8-8997-684bc5221a08 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:19:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:32.223104114Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:19:33 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:19:33.216780 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:19:33 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:19:33.216780 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:19:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:33.217231529Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=ef461233-8fd3-45f9-87c9-d474cb8c6137 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:19:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:33.217310054Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:19:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:33.217318331Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=83116bb8-f727-408c-9112-bea834779d92 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:19:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:33.217390459Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:19:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:33.224888060Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:0ef2cf78aa498f7ede4915ae0c573088798abe6141a5d2f82bbfa3a3ac0e7a27 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/159d6be7-a672-4d02-8065-205051ea6497 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:19:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:33.224914227Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:19:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:33.225217416Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:3fd45bde518d91f847e722a6943f8581357a42e87ddd09147d5cff8b760d51a5 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/b87f07a2-e7aa-4ccc-8775-931768a0b1e6 Networks:[] 
RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:19:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:33.225367934Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:19:36 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:19:36.216798 2199 scope.go:115] "RemoveContainer" containerID="d965241664ee7e50681020fbd7ad0de6a381d202653d3d90dc63043df61474f0" Feb 23 18:19:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:19:36.217179 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:19:48 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:19:48.217127 2199 scope.go:115] "RemoveContainer" containerID="d965241664ee7e50681020fbd7ad0de6a381d202653d3d90dc63043df61474f0" Feb 23 18:19:48 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:19:48.217728 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:19:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:19:56.292569 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file 
or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:19:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:19:56.292760 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:19:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:19:56.292954 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:19:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:19:56.292977 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:19:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:57.235326516Z" level=info msg="NetworkStart: stopping network for sandbox 9d28fb5efcfb14679e5cfe3e137a8e64d2e703a69987fab2b638de20a0214732" id=6339369a-7ba8-4418-a229-b6519db9ec11 
name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:19:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:57.235447681Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:9d28fb5efcfb14679e5cfe3e137a8e64d2e703a69987fab2b638de20a0214732 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/fd72aa48-dee3-45cb-aa3e-c8994d92b950 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:19:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:57.235474811Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 18:19:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:57.235483367Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 18:19:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:19:57.235489793Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:20:00 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:20:00.217455 2199 scope.go:115] "RemoveContainer" containerID="d965241664ee7e50681020fbd7ad0de6a381d202653d3d90dc63043df61474f0"
Feb 23 18:20:00 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:20:00.217846 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 18:20:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:20:12.234787923Z" level=info msg="NetworkStart: stopping network for sandbox f29484a6f3dac408e91b8a01cf55a7e62ef1d9571d641f9da62e1cefb0f0440c" id=fad4d640-44f7-4b18-8ca5-dfeb0e784d7f name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:20:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:20:12.234915739Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:f29484a6f3dac408e91b8a01cf55a7e62ef1d9571d641f9da62e1cefb0f0440c UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/d44049a5-a473-4063-bf71-91647225579f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:20:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:20:12.234955251Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 18:20:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:20:12.234966173Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 18:20:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:20:12.234975237Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:20:13 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:20:13.217412 2199 scope.go:115] "RemoveContainer" containerID="d965241664ee7e50681020fbd7ad0de6a381d202653d3d90dc63043df61474f0"
Feb 23 18:20:13 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:20:13.217814 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 18:20:15 ip-10-0-136-68 NetworkManager[1177]: [1677176415.0099] dhcp4 (br-ex): state changed new lease, address=10.0.136.68
Feb 23 18:20:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:20:17.234924687Z" level=info msg="NetworkStart: stopping network for sandbox 8174ad8738a7842d04d6e4df223563656e5715e1564c3734923b6dea7184c702" id=e88c32f9-e153-4f0c-bfaa-368d3bd9d49b name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:20:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:20:17.235297121Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:8174ad8738a7842d04d6e4df223563656e5715e1564c3734923b6dea7184c702 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/9fddec73-d234-45d8-8997-684bc5221a08 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:20:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:20:17.235328246Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 18:20:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:20:17.235335644Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 18:20:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:20:17.235343064Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:20:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:20:18.239984550Z" level=info msg="NetworkStart: stopping network for sandbox 0ef2cf78aa498f7ede4915ae0c573088798abe6141a5d2f82bbfa3a3ac0e7a27" id=ef461233-8fd3-45f9-87c9-d474cb8c6137 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:20:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:20:18.240040912Z" level=info msg="NetworkStart: stopping network for sandbox 3fd45bde518d91f847e722a6943f8581357a42e87ddd09147d5cff8b760d51a5" id=83116bb8-f727-408c-9112-bea834779d92 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:20:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:20:18.240105948Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:0ef2cf78aa498f7ede4915ae0c573088798abe6141a5d2f82bbfa3a3ac0e7a27 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/159d6be7-a672-4d02-8065-205051ea6497 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:20:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:20:18.240145182Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 18:20:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:20:18.240155881Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 18:20:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:20:18.240163189Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:20:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:20:18.240109359Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:3fd45bde518d91f847e722a6943f8581357a42e87ddd09147d5cff8b760d51a5 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/b87f07a2-e7aa-4ccc-8775-931768a0b1e6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:20:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:20:18.240285631Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 18:20:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:20:18.240298504Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 18:20:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:20:18.240309935Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:20:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:20:20.185542336Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3" id=68bc8c77-b120-4637-9b65-dbd2597c373a name=/runtime.v1.ImageService/ImageStatus
Feb 23 18:20:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:20:20.185748688Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:51cb7555d0a960fc0daae74a5b5c47c19fd1ae7bd31e2616ebc88f696f511e4d,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3],Size_:351828271,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=68bc8c77-b120-4637-9b65-dbd2597c373a name=/runtime.v1.ImageService/ImageStatus
Feb 23 18:20:25 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:20:25.217274 2199 scope.go:115] "RemoveContainer" containerID="d965241664ee7e50681020fbd7ad0de6a381d202653d3d90dc63043df61474f0"
Feb 23 18:20:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:20:25.217717 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 18:20:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:20:26.292633 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:20:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:20:26.292867 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:20:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:20:26.293179 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:20:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:20:26.293206 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 18:20:27 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:20:27.217528 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:20:27 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:20:27.217806 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:20:27 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:20:27.218060 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:20:27 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:20:27.218106 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 18:20:38 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:20:38.217425 2199 scope.go:115] "RemoveContainer" containerID="d965241664ee7e50681020fbd7ad0de6a381d202653d3d90dc63043df61474f0"
Feb 23 18:20:38 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:20:38.217865 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 18:20:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:20:42.245463821Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(9d28fb5efcfb14679e5cfe3e137a8e64d2e703a69987fab2b638de20a0214732): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=6339369a-7ba8-4418-a229-b6519db9ec11 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:20:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:20:42.245518586Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9d28fb5efcfb14679e5cfe3e137a8e64d2e703a69987fab2b638de20a0214732" id=6339369a-7ba8-4418-a229-b6519db9ec11 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:20:42 ip-10-0-136-68 systemd[1]: run-utsns-fd72aa48\x2ddee3\x2d45cb\x2daa3e\x2dc8994d92b950.mount: Deactivated successfully.
Feb 23 18:20:42 ip-10-0-136-68 systemd[1]: run-ipcns-fd72aa48\x2ddee3\x2d45cb\x2daa3e\x2dc8994d92b950.mount: Deactivated successfully.
Feb 23 18:20:42 ip-10-0-136-68 systemd[1]: run-netns-fd72aa48\x2ddee3\x2d45cb\x2daa3e\x2dc8994d92b950.mount: Deactivated successfully.
Feb 23 18:20:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:20:42.268330094Z" level=info msg="runSandbox: deleting pod ID 9d28fb5efcfb14679e5cfe3e137a8e64d2e703a69987fab2b638de20a0214732 from idIndex" id=6339369a-7ba8-4418-a229-b6519db9ec11 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:20:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:20:42.268366215Z" level=info msg="runSandbox: removing pod sandbox 9d28fb5efcfb14679e5cfe3e137a8e64d2e703a69987fab2b638de20a0214732" id=6339369a-7ba8-4418-a229-b6519db9ec11 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:20:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:20:42.268393517Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9d28fb5efcfb14679e5cfe3e137a8e64d2e703a69987fab2b638de20a0214732" id=6339369a-7ba8-4418-a229-b6519db9ec11 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:20:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:20:42.268408368Z" level=info msg="runSandbox: unmounting shmPath for sandbox 9d28fb5efcfb14679e5cfe3e137a8e64d2e703a69987fab2b638de20a0214732" id=6339369a-7ba8-4418-a229-b6519db9ec11 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:20:42 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-9d28fb5efcfb14679e5cfe3e137a8e64d2e703a69987fab2b638de20a0214732-userdata-shm.mount: Deactivated successfully.
Feb 23 18:20:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:20:42.276308421Z" level=info msg="runSandbox: removing pod sandbox from storage: 9d28fb5efcfb14679e5cfe3e137a8e64d2e703a69987fab2b638de20a0214732" id=6339369a-7ba8-4418-a229-b6519db9ec11 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:20:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:20:42.278035624Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=6339369a-7ba8-4418-a229-b6519db9ec11 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:20:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:20:42.278074231Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=6339369a-7ba8-4418-a229-b6519db9ec11 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:20:42 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:20:42.278307 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(9d28fb5efcfb14679e5cfe3e137a8e64d2e703a69987fab2b638de20a0214732): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition"
Feb 23 18:20:42 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:20:42.278362 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(9d28fb5efcfb14679e5cfe3e137a8e64d2e703a69987fab2b638de20a0214732): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 18:20:42 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:20:42.278384 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(9d28fb5efcfb14679e5cfe3e137a8e64d2e703a69987fab2b638de20a0214732): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 18:20:42 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:20:42.278446 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(9d28fb5efcfb14679e5cfe3e137a8e64d2e703a69987fab2b638de20a0214732): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1
Feb 23 18:20:53 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:20:53.216824 2199 scope.go:115] "RemoveContainer" containerID="d965241664ee7e50681020fbd7ad0de6a381d202653d3d90dc63043df61474f0"
Feb 23 18:20:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:20:53.217222 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 18:20:55 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:20:55.217054 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 18:20:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:20:55.217456614Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=9d9e2a59-c310-499c-8936-ba1f2c057acc name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:20:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:20:55.217522716Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 18:20:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:20:55.222587129Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:5ad719fb367b7a60acf8f4deae2d796ce55de9a7b01ee08cb38348c14b29261a UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/73593ee6-43a8-425a-ba09-67985f469cb4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:20:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:20:55.222610486Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:20:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:20:56.291966 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:20:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:20:56.292202 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:20:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:20:56.292480 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:20:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:20:56.292503 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 18:20:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:20:57.245076150Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(f29484a6f3dac408e91b8a01cf55a7e62ef1d9571d641f9da62e1cefb0f0440c): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=fad4d640-44f7-4b18-8ca5-dfeb0e784d7f name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:20:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:20:57.245129813Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f29484a6f3dac408e91b8a01cf55a7e62ef1d9571d641f9da62e1cefb0f0440c" id=fad4d640-44f7-4b18-8ca5-dfeb0e784d7f name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:20:57 ip-10-0-136-68 systemd[1]: run-utsns-d44049a5\x2da473\x2d4063\x2dbf71\x2d91647225579f.mount: Deactivated successfully.
Feb 23 18:20:57 ip-10-0-136-68 systemd[1]: run-ipcns-d44049a5\x2da473\x2d4063\x2dbf71\x2d91647225579f.mount: Deactivated successfully.
Feb 23 18:20:57 ip-10-0-136-68 systemd[1]: run-netns-d44049a5\x2da473\x2d4063\x2dbf71\x2d91647225579f.mount: Deactivated successfully.
Feb 23 18:20:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:20:57.263330960Z" level=info msg="runSandbox: deleting pod ID f29484a6f3dac408e91b8a01cf55a7e62ef1d9571d641f9da62e1cefb0f0440c from idIndex" id=fad4d640-44f7-4b18-8ca5-dfeb0e784d7f name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:20:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:20:57.263374169Z" level=info msg="runSandbox: removing pod sandbox f29484a6f3dac408e91b8a01cf55a7e62ef1d9571d641f9da62e1cefb0f0440c" id=fad4d640-44f7-4b18-8ca5-dfeb0e784d7f name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:20:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:20:57.263427689Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f29484a6f3dac408e91b8a01cf55a7e62ef1d9571d641f9da62e1cefb0f0440c" id=fad4d640-44f7-4b18-8ca5-dfeb0e784d7f name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:20:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:20:57.263450275Z" level=info msg="runSandbox: unmounting shmPath for sandbox f29484a6f3dac408e91b8a01cf55a7e62ef1d9571d641f9da62e1cefb0f0440c" id=fad4d640-44f7-4b18-8ca5-dfeb0e784d7f name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:20:57 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-f29484a6f3dac408e91b8a01cf55a7e62ef1d9571d641f9da62e1cefb0f0440c-userdata-shm.mount: Deactivated successfully.
Feb 23 18:20:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:20:57.274305854Z" level=info msg="runSandbox: removing pod sandbox from storage: f29484a6f3dac408e91b8a01cf55a7e62ef1d9571d641f9da62e1cefb0f0440c" id=fad4d640-44f7-4b18-8ca5-dfeb0e784d7f name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:20:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:20:57.275759648Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=fad4d640-44f7-4b18-8ca5-dfeb0e784d7f name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:20:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:20:57.275790938Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=fad4d640-44f7-4b18-8ca5-dfeb0e784d7f name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:20:57 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:20:57.276020 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(f29484a6f3dac408e91b8a01cf55a7e62ef1d9571d641f9da62e1cefb0f0440c): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition"
Feb 23 18:20:57 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:20:57.276073 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(f29484a6f3dac408e91b8a01cf55a7e62ef1d9571d641f9da62e1cefb0f0440c): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4"
Feb 23 18:20:57 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:20:57.276098 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(f29484a6f3dac408e91b8a01cf55a7e62ef1d9571d641f9da62e1cefb0f0440c): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4"
Feb 23 18:20:57 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:20:57.276155 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(f29484a6f3dac408e91b8a01cf55a7e62ef1d9571d641f9da62e1cefb0f0440c): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f
Feb 23 18:21:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:02.245530694Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(8174ad8738a7842d04d6e4df223563656e5715e1564c3734923b6dea7184c702): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e88c32f9-e153-4f0c-bfaa-368d3bd9d49b name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:21:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:02.245584318Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 8174ad8738a7842d04d6e4df223563656e5715e1564c3734923b6dea7184c702" id=e88c32f9-e153-4f0c-bfaa-368d3bd9d49b name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:21:02 ip-10-0-136-68 systemd[1]: run-utsns-9fddec73\x2dd234\x2d45d8\x2d8997\x2d684bc5221a08.mount: Deactivated successfully.
Feb 23 18:21:02 ip-10-0-136-68 systemd[1]: run-ipcns-9fddec73\x2dd234\x2d45d8\x2d8997\x2d684bc5221a08.mount: Deactivated successfully.
Feb 23 18:21:02 ip-10-0-136-68 systemd[1]: run-netns-9fddec73\x2dd234\x2d45d8\x2d8997\x2d684bc5221a08.mount: Deactivated successfully.
Feb 23 18:21:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:02.262313951Z" level=info msg="runSandbox: deleting pod ID 8174ad8738a7842d04d6e4df223563656e5715e1564c3734923b6dea7184c702 from idIndex" id=e88c32f9-e153-4f0c-bfaa-368d3bd9d49b name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:21:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:02.262347065Z" level=info msg="runSandbox: removing pod sandbox 8174ad8738a7842d04d6e4df223563656e5715e1564c3734923b6dea7184c702" id=e88c32f9-e153-4f0c-bfaa-368d3bd9d49b name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:21:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:02.262372918Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 8174ad8738a7842d04d6e4df223563656e5715e1564c3734923b6dea7184c702" id=e88c32f9-e153-4f0c-bfaa-368d3bd9d49b name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:21:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:02.262388495Z" level=info msg="runSandbox: unmounting shmPath for sandbox 8174ad8738a7842d04d6e4df223563656e5715e1564c3734923b6dea7184c702" id=e88c32f9-e153-4f0c-bfaa-368d3bd9d49b name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:21:02 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-8174ad8738a7842d04d6e4df223563656e5715e1564c3734923b6dea7184c702-userdata-shm.mount: Deactivated successfully.
Feb 23 18:21:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:02.269302169Z" level=info msg="runSandbox: removing pod sandbox from storage: 8174ad8738a7842d04d6e4df223563656e5715e1564c3734923b6dea7184c702" id=e88c32f9-e153-4f0c-bfaa-368d3bd9d49b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:21:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:02.270832400Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=e88c32f9-e153-4f0c-bfaa-368d3bd9d49b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:21:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:02.270860712Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=e88c32f9-e153-4f0c-bfaa-368d3bd9d49b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:21:02 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:21:02.271059 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(8174ad8738a7842d04d6e4df223563656e5715e1564c3734923b6dea7184c702): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:21:02 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:21:02.271110 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(8174ad8738a7842d04d6e4df223563656e5715e1564c3734923b6dea7184c702): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:21:02 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:21:02.271139 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(8174ad8738a7842d04d6e4df223563656e5715e1564c3734923b6dea7184c702): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:21:02 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:21:02.271199 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(8174ad8738a7842d04d6e4df223563656e5715e1564c3734923b6dea7184c702): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 18:21:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:03.249223887Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(3fd45bde518d91f847e722a6943f8581357a42e87ddd09147d5cff8b760d51a5): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=83116bb8-f727-408c-9112-bea834779d92 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:21:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:03.249329455Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 3fd45bde518d91f847e722a6943f8581357a42e87ddd09147d5cff8b760d51a5" id=83116bb8-f727-408c-9112-bea834779d92 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:21:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:03.250443840Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(0ef2cf78aa498f7ede4915ae0c573088798abe6141a5d2f82bbfa3a3ac0e7a27): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" 
id=ef461233-8fd3-45f9-87c9-d474cb8c6137 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:21:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:03.250485439Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 0ef2cf78aa498f7ede4915ae0c573088798abe6141a5d2f82bbfa3a3ac0e7a27" id=ef461233-8fd3-45f9-87c9-d474cb8c6137 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:21:03 ip-10-0-136-68 systemd[1]: run-utsns-b87f07a2\x2de7aa\x2d4ccc\x2d8775\x2d931768a0b1e6.mount: Deactivated successfully. Feb 23 18:21:03 ip-10-0-136-68 systemd[1]: run-utsns-159d6be7\x2da672\x2d4d02\x2d8065\x2d205051ea6497.mount: Deactivated successfully. Feb 23 18:21:03 ip-10-0-136-68 systemd[1]: run-ipcns-b87f07a2\x2de7aa\x2d4ccc\x2d8775\x2d931768a0b1e6.mount: Deactivated successfully. Feb 23 18:21:03 ip-10-0-136-68 systemd[1]: run-ipcns-159d6be7\x2da672\x2d4d02\x2d8065\x2d205051ea6497.mount: Deactivated successfully. Feb 23 18:21:03 ip-10-0-136-68 systemd[1]: run-netns-b87f07a2\x2de7aa\x2d4ccc\x2d8775\x2d931768a0b1e6.mount: Deactivated successfully. Feb 23 18:21:03 ip-10-0-136-68 systemd[1]: run-netns-159d6be7\x2da672\x2d4d02\x2d8065\x2d205051ea6497.mount: Deactivated successfully. 
Feb 23 18:21:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:03.277331255Z" level=info msg="runSandbox: deleting pod ID 3fd45bde518d91f847e722a6943f8581357a42e87ddd09147d5cff8b760d51a5 from idIndex" id=83116bb8-f727-408c-9112-bea834779d92 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:21:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:03.277370945Z" level=info msg="runSandbox: removing pod sandbox 3fd45bde518d91f847e722a6943f8581357a42e87ddd09147d5cff8b760d51a5" id=83116bb8-f727-408c-9112-bea834779d92 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:21:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:03.277330006Z" level=info msg="runSandbox: deleting pod ID 0ef2cf78aa498f7ede4915ae0c573088798abe6141a5d2f82bbfa3a3ac0e7a27 from idIndex" id=ef461233-8fd3-45f9-87c9-d474cb8c6137 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:21:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:03.277418977Z" level=info msg="runSandbox: removing pod sandbox 0ef2cf78aa498f7ede4915ae0c573088798abe6141a5d2f82bbfa3a3ac0e7a27" id=ef461233-8fd3-45f9-87c9-d474cb8c6137 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:21:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:03.277447509Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 3fd45bde518d91f847e722a6943f8581357a42e87ddd09147d5cff8b760d51a5" id=83116bb8-f727-408c-9112-bea834779d92 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:21:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:03.277467900Z" level=info msg="runSandbox: unmounting shmPath for sandbox 3fd45bde518d91f847e722a6943f8581357a42e87ddd09147d5cff8b760d51a5" id=83116bb8-f727-408c-9112-bea834779d92 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:21:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:03.277465704Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 0ef2cf78aa498f7ede4915ae0c573088798abe6141a5d2f82bbfa3a3ac0e7a27" 
id=ef461233-8fd3-45f9-87c9-d474cb8c6137 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:21:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:03.277527791Z" level=info msg="runSandbox: unmounting shmPath for sandbox 0ef2cf78aa498f7ede4915ae0c573088798abe6141a5d2f82bbfa3a3ac0e7a27" id=ef461233-8fd3-45f9-87c9-d474cb8c6137 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:21:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:03.282303796Z" level=info msg="runSandbox: removing pod sandbox from storage: 3fd45bde518d91f847e722a6943f8581357a42e87ddd09147d5cff8b760d51a5" id=83116bb8-f727-408c-9112-bea834779d92 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:21:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:03.282307504Z" level=info msg="runSandbox: removing pod sandbox from storage: 0ef2cf78aa498f7ede4915ae0c573088798abe6141a5d2f82bbfa3a3ac0e7a27" id=ef461233-8fd3-45f9-87c9-d474cb8c6137 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:21:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:03.283769760Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=83116bb8-f727-408c-9112-bea834779d92 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:21:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:03.283805487Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=83116bb8-f727-408c-9112-bea834779d92 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:21:03 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:21:03.284101 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(3fd45bde518d91f847e722a6943f8581357a42e87ddd09147d5cff8b760d51a5): error adding pod 
openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Feb 23 18:21:03 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:21:03.284264 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(3fd45bde518d91f847e722a6943f8581357a42e87ddd09147d5cff8b760d51a5): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:21:03 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:21:03.284292 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(3fd45bde518d91f847e722a6943f8581357a42e87ddd09147d5cff8b760d51a5): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:21:03 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:21:03.284356 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(3fd45bde518d91f847e722a6943f8581357a42e87ddd09147d5cff8b760d51a5): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 18:21:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:03.285202019Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=ef461233-8fd3-45f9-87c9-d474cb8c6137 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:21:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:03.285235876Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=ef461233-8fd3-45f9-87c9-d474cb8c6137 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:21:03 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:21:03.285456 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(0ef2cf78aa498f7ede4915ae0c573088798abe6141a5d2f82bbfa3a3ac0e7a27): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:21:03 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:21:03.285502 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(0ef2cf78aa498f7ede4915ae0c573088798abe6141a5d2f82bbfa3a3ac0e7a27): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:21:03 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:21:03.285526 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(0ef2cf78aa498f7ede4915ae0c573088798abe6141a5d2f82bbfa3a3ac0e7a27): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:21:03 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:21:03.285576 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(0ef2cf78aa498f7ede4915ae0c573088798abe6141a5d2f82bbfa3a3ac0e7a27): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 18:21:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:21:04.216997 2199 scope.go:115] "RemoveContainer" containerID="d965241664ee7e50681020fbd7ad0de6a381d202653d3d90dc63043df61474f0" Feb 23 18:21:04 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:21:04.217500 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:21:04 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-3fd45bde518d91f847e722a6943f8581357a42e87ddd09147d5cff8b760d51a5-userdata-shm.mount: Deactivated successfully. Feb 23 18:21:04 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-0ef2cf78aa498f7ede4915ae0c573088798abe6141a5d2f82bbfa3a3ac0e7a27-userdata-shm.mount: Deactivated successfully. Feb 23 18:21:10 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:21:10.217637 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 18:21:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:10.218050935Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=287c8762-c8ab-4613-9c05-78dbff9ef78b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:21:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:10.218120283Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:21:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:10.223694206Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:37d9ada286d3e4057d9ee0e06cdfa921bfeb9f1616a42c0e269ec4c7ba5f9dec UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/3ae304a3-4676-480e-83b5-7d718e7bb611 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:21:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:10.223718081Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:21:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:21:14.216642 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:21:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:14.217066969Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=204d9b91-a952-432f-bec3-cdbe0ed711a2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:21:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:14.217130784Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:21:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:14.222932637Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:2b8268de643a6b7b824aa0ae34f62bf75246fc961e5c342d0b673381c55c896d UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/6869e245-753c-4ec2-af1d-58c2b67e75a8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:21:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:14.222956908Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:21:17 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:21:17.216453 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:21:17 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:21:17.216573 2199 scope.go:115] "RemoveContainer" containerID="d965241664ee7e50681020fbd7ad0de6a381d202653d3d90dc63043df61474f0" Feb 23 18:21:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:17.216873146Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=1c277db4-a611-4b7e-84b6-396153260505 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:21:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:17.216942362Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:21:17 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:21:17.217011 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:21:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:17.222212480Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:e48d52a2aa7e4c857fe8015dbf7167892fce14440b93d649a9d3cabc343155dc UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/5f97988d-d18f-4372-b74d-13dbfd73e7d5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:21:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:17.222265215Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:21:18 ip-10-0-136-68 kubenswrapper[2199]: I0223 
18:21:18.216608 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:21:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:18.217174827Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=e75924c2-d0b0-4b22-bc1f-0011b929f686 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:21:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:18.217274428Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:21:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:18.222990909Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:1a1c2d50669e58e06482a247c15c4c1b8a8e54c693e75dfa59c7ed916c1526dc UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/9a2ad924-b1e0-402a-94a9-cabc2a7585b6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:21:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:18.223024883Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:21:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:21:26.292131 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:21:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:21:26.292376 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: 
checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:21:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:21:26.292642 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:21:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:21:26.292682 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:21:28 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:21:28.216565 2199 scope.go:115] "RemoveContainer" containerID="d965241664ee7e50681020fbd7ad0de6a381d202653d3d90dc63043df61474f0" Feb 23 18:21:28 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:21:28.216955 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" 
podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:21:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:21:36.217413 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:21:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:21:36.217799 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:21:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:21:36.218051 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:21:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:21:36.218113 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process 
not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:21:39 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:21:39.217375 2199 scope.go:115] "RemoveContainer" containerID="d965241664ee7e50681020fbd7ad0de6a381d202653d3d90dc63043df61474f0" Feb 23 18:21:39 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:21:39.217934 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:21:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:40.234932227Z" level=info msg="NetworkStart: stopping network for sandbox 5ad719fb367b7a60acf8f4deae2d796ce55de9a7b01ee08cb38348c14b29261a" id=9d9e2a59-c310-499c-8936-ba1f2c057acc name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:21:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:40.235047655Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:5ad719fb367b7a60acf8f4deae2d796ce55de9a7b01ee08cb38348c14b29261a UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/73593ee6-43a8-425a-ba09-67985f469cb4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:21:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:40.235075774Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:21:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:40.235084793Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:21:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 
18:21:40.235094999Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:21:52 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:21:52.216722 2199 scope.go:115] "RemoveContainer" containerID="d965241664ee7e50681020fbd7ad0de6a381d202653d3d90dc63043df61474f0" Feb 23 18:21:52 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:21:52.217293 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:21:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:55.235893104Z" level=info msg="NetworkStart: stopping network for sandbox 37d9ada286d3e4057d9ee0e06cdfa921bfeb9f1616a42c0e269ec4c7ba5f9dec" id=287c8762-c8ab-4613-9c05-78dbff9ef78b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:21:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:55.236025764Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:37d9ada286d3e4057d9ee0e06cdfa921bfeb9f1616a42c0e269ec4c7ba5f9dec UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/3ae304a3-4676-480e-83b5-7d718e7bb611 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:21:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:55.236065755Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:21:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:55.236081694Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:21:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:55.236091752Z" 
level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:21:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:21:56.292184 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:21:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:21:56.292478 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:21:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:21:56.292720 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:21:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:21:56.292751 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running 
failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:21:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:59.235280839Z" level=info msg="NetworkStart: stopping network for sandbox 2b8268de643a6b7b824aa0ae34f62bf75246fc961e5c342d0b673381c55c896d" id=204d9b91-a952-432f-bec3-cdbe0ed711a2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:21:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:59.235403623Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:2b8268de643a6b7b824aa0ae34f62bf75246fc961e5c342d0b673381c55c896d UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/6869e245-753c-4ec2-af1d-58c2b67e75a8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:21:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:59.235431470Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:21:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:59.235442746Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:21:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:21:59.235453033Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:22:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:02.234671160Z" level=info msg="NetworkStart: stopping network for sandbox e48d52a2aa7e4c857fe8015dbf7167892fce14440b93d649a9d3cabc343155dc" id=1c277db4-a611-4b7e-84b6-396153260505 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:22:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:02.234798377Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j 
Namespace:openshift-cluster-csi-drivers ID:e48d52a2aa7e4c857fe8015dbf7167892fce14440b93d649a9d3cabc343155dc UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/5f97988d-d18f-4372-b74d-13dbfd73e7d5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:22:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:02.234840947Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:22:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:02.234855611Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:22:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:02.234865703Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:22:03 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:22:03.217134 2199 scope.go:115] "RemoveContainer" containerID="d965241664ee7e50681020fbd7ad0de6a381d202653d3d90dc63043df61474f0" Feb 23 18:22:03 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:22:03.217728 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:22:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:03.235053224Z" level=info msg="NetworkStart: stopping network for sandbox 1a1c2d50669e58e06482a247c15c4c1b8a8e54c693e75dfa59c7ed916c1526dc" id=e75924c2-d0b0-4b22-bc1f-0011b929f686 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:22:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:03.235196514Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk 
Namespace:openshift-ingress-canary ID:1a1c2d50669e58e06482a247c15c4c1b8a8e54c693e75dfa59c7ed916c1526dc UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/9a2ad924-b1e0-402a-94a9-cabc2a7585b6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:22:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:03.235234610Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:22:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:03.235271422Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:22:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:03.235282865Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:22:16 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:22:16.217042 2199 scope.go:115] "RemoveContainer" containerID="d965241664ee7e50681020fbd7ad0de6a381d202653d3d90dc63043df61474f0" Feb 23 18:22:16 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:22:16.217649 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:22:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:25.244659670Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(5ad719fb367b7a60acf8f4deae2d796ce55de9a7b01ee08cb38348c14b29261a): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": 
plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=9d9e2a59-c310-499c-8936-ba1f2c057acc name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:22:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:25.244714842Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 5ad719fb367b7a60acf8f4deae2d796ce55de9a7b01ee08cb38348c14b29261a" id=9d9e2a59-c310-499c-8936-ba1f2c057acc name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:22:25 ip-10-0-136-68 systemd[1]: run-utsns-73593ee6\x2d43a8\x2d425a\x2dba09\x2d67985f469cb4.mount: Deactivated successfully. Feb 23 18:22:25 ip-10-0-136-68 systemd[1]: run-ipcns-73593ee6\x2d43a8\x2d425a\x2dba09\x2d67985f469cb4.mount: Deactivated successfully. Feb 23 18:22:25 ip-10-0-136-68 systemd[1]: run-netns-73593ee6\x2d43a8\x2d425a\x2dba09\x2d67985f469cb4.mount: Deactivated successfully. 
Feb 23 18:22:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:25.267333344Z" level=info msg="runSandbox: deleting pod ID 5ad719fb367b7a60acf8f4deae2d796ce55de9a7b01ee08cb38348c14b29261a from idIndex" id=9d9e2a59-c310-499c-8936-ba1f2c057acc name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:22:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:25.267369007Z" level=info msg="runSandbox: removing pod sandbox 5ad719fb367b7a60acf8f4deae2d796ce55de9a7b01ee08cb38348c14b29261a" id=9d9e2a59-c310-499c-8936-ba1f2c057acc name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:22:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:25.267396570Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 5ad719fb367b7a60acf8f4deae2d796ce55de9a7b01ee08cb38348c14b29261a" id=9d9e2a59-c310-499c-8936-ba1f2c057acc name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:22:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:25.267408618Z" level=info msg="runSandbox: unmounting shmPath for sandbox 5ad719fb367b7a60acf8f4deae2d796ce55de9a7b01ee08cb38348c14b29261a" id=9d9e2a59-c310-499c-8936-ba1f2c057acc name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:22:25 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-5ad719fb367b7a60acf8f4deae2d796ce55de9a7b01ee08cb38348c14b29261a-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:22:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:25.274321569Z" level=info msg="runSandbox: removing pod sandbox from storage: 5ad719fb367b7a60acf8f4deae2d796ce55de9a7b01ee08cb38348c14b29261a" id=9d9e2a59-c310-499c-8936-ba1f2c057acc name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:22:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:25.275958660Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=9d9e2a59-c310-499c-8936-ba1f2c057acc name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:22:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:25.275993367Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=9d9e2a59-c310-499c-8936-ba1f2c057acc name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:22:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:22:25.276215 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(5ad719fb367b7a60acf8f4deae2d796ce55de9a7b01ee08cb38348c14b29261a): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:22:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:22:25.276333 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(5ad719fb367b7a60acf8f4deae2d796ce55de9a7b01ee08cb38348c14b29261a): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:22:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:22:25.276357 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(5ad719fb367b7a60acf8f4deae2d796ce55de9a7b01ee08cb38348c14b29261a): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:22:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:22:25.276411 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(5ad719fb367b7a60acf8f4deae2d796ce55de9a7b01ee08cb38348c14b29261a): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 18:22:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:22:26.291986 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:22:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:22:26.292299 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:22:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:22:26.292551 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:22:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:22:26.292580 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:22:27 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:22:27.216737 2199 scope.go:115] "RemoveContainer" containerID="d965241664ee7e50681020fbd7ad0de6a381d202653d3d90dc63043df61474f0" Feb 23 18:22:27 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:22:27.217185 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:22:38 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:22:38.216999 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:22:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:38.217434503Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=e756c758-1244-4394-9784-2dbd8ef60f5a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:22:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:38.217488055Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:22:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:38.223174602Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:7baba281777a22d67ec4ddad81c0b505acd1380db202358033fa0161f52581dd UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/16aff7cd-a0ec-4a79-9bcf-056ce1047476 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:22:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:38.223210760Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:22:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:40.245949943Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(37d9ada286d3e4057d9ee0e06cdfa921bfeb9f1616a42c0e269ec4c7ba5f9dec): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=287c8762-c8ab-4613-9c05-78dbff9ef78b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:22:40 ip-10-0-136-68 
crio[2158]: time="2023-02-23 18:22:40.246002349Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 37d9ada286d3e4057d9ee0e06cdfa921bfeb9f1616a42c0e269ec4c7ba5f9dec" id=287c8762-c8ab-4613-9c05-78dbff9ef78b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:22:40 ip-10-0-136-68 systemd[1]: run-utsns-3ae304a3\x2d4676\x2d480e\x2d83b5\x2d7d718e7bb611.mount: Deactivated successfully. Feb 23 18:22:40 ip-10-0-136-68 systemd[1]: run-ipcns-3ae304a3\x2d4676\x2d480e\x2d83b5\x2d7d718e7bb611.mount: Deactivated successfully. Feb 23 18:22:40 ip-10-0-136-68 systemd[1]: run-netns-3ae304a3\x2d4676\x2d480e\x2d83b5\x2d7d718e7bb611.mount: Deactivated successfully. Feb 23 18:22:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:40.267322452Z" level=info msg="runSandbox: deleting pod ID 37d9ada286d3e4057d9ee0e06cdfa921bfeb9f1616a42c0e269ec4c7ba5f9dec from idIndex" id=287c8762-c8ab-4613-9c05-78dbff9ef78b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:22:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:40.267362112Z" level=info msg="runSandbox: removing pod sandbox 37d9ada286d3e4057d9ee0e06cdfa921bfeb9f1616a42c0e269ec4c7ba5f9dec" id=287c8762-c8ab-4613-9c05-78dbff9ef78b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:22:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:40.267391406Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 37d9ada286d3e4057d9ee0e06cdfa921bfeb9f1616a42c0e269ec4c7ba5f9dec" id=287c8762-c8ab-4613-9c05-78dbff9ef78b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:22:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:40.267409336Z" level=info msg="runSandbox: unmounting shmPath for sandbox 37d9ada286d3e4057d9ee0e06cdfa921bfeb9f1616a42c0e269ec4c7ba5f9dec" id=287c8762-c8ab-4613-9c05-78dbff9ef78b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:22:40 ip-10-0-136-68 systemd[1]: 
run-containers-storage-overlay\x2dcontainers-37d9ada286d3e4057d9ee0e06cdfa921bfeb9f1616a42c0e269ec4c7ba5f9dec-userdata-shm.mount: Deactivated successfully. Feb 23 18:22:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:40.272329267Z" level=info msg="runSandbox: removing pod sandbox from storage: 37d9ada286d3e4057d9ee0e06cdfa921bfeb9f1616a42c0e269ec4c7ba5f9dec" id=287c8762-c8ab-4613-9c05-78dbff9ef78b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:22:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:40.273859322Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=287c8762-c8ab-4613-9c05-78dbff9ef78b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:22:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:40.273893088Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=287c8762-c8ab-4613-9c05-78dbff9ef78b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:22:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:22:40.274090 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(37d9ada286d3e4057d9ee0e06cdfa921bfeb9f1616a42c0e269ec4c7ba5f9dec): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:22:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:22:40.274142 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(37d9ada286d3e4057d9ee0e06cdfa921bfeb9f1616a42c0e269ec4c7ba5f9dec): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:22:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:22:40.274170 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(37d9ada286d3e4057d9ee0e06cdfa921bfeb9f1616a42c0e269ec4c7ba5f9dec): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:22:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:22:40.274223 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(37d9ada286d3e4057d9ee0e06cdfa921bfeb9f1616a42c0e269ec4c7ba5f9dec): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 18:22:42 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:22:42.046531 2199 kubelet.go:2219] "SyncLoop ADD" source="api" pods="[openshift-debug-zszpb/ip-10-0-136-68us-west-2computeinternal-debug]" Feb 23 18:22:42 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:22:42.046571 2199 topology_manager.go:210] "Topology Admit Handler" Feb 23 18:22:42 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-besteffort-podfaee5572_050c_4fe2_b0a1_1aa9ae48ce75.slice. 
Feb 23 18:22:42 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:22:42.060700 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tdn6\" (UniqueName: \"kubernetes.io/projected/faee5572-050c-4fe2-b0a1-1aa9ae48ce75-kube-api-access-7tdn6\") pod \"ip-10-0-136-68us-west-2computeinternal-debug\" (UID: \"faee5572-050c-4fe2-b0a1-1aa9ae48ce75\") " pod="openshift-debug-zszpb/ip-10-0-136-68us-west-2computeinternal-debug" Feb 23 18:22:42 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:22:42.060744 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/faee5572-050c-4fe2-b0a1-1aa9ae48ce75-host\") pod \"ip-10-0-136-68us-west-2computeinternal-debug\" (UID: \"faee5572-050c-4fe2-b0a1-1aa9ae48ce75\") " pod="openshift-debug-zszpb/ip-10-0-136-68us-west-2computeinternal-debug" Feb 23 18:22:42 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:22:42.161325 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/faee5572-050c-4fe2-b0a1-1aa9ae48ce75-host\") pod \"ip-10-0-136-68us-west-2computeinternal-debug\" (UID: \"faee5572-050c-4fe2-b0a1-1aa9ae48ce75\") " pod="openshift-debug-zszpb/ip-10-0-136-68us-west-2computeinternal-debug" Feb 23 18:22:42 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:22:42.161389 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"kube-api-access-7tdn6\" (UniqueName: \"kubernetes.io/projected/faee5572-050c-4fe2-b0a1-1aa9ae48ce75-kube-api-access-7tdn6\") pod \"ip-10-0-136-68us-west-2computeinternal-debug\" (UID: \"faee5572-050c-4fe2-b0a1-1aa9ae48ce75\") " pod="openshift-debug-zszpb/ip-10-0-136-68us-west-2computeinternal-debug" Feb 23 18:22:42 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:22:42.161460 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/faee5572-050c-4fe2-b0a1-1aa9ae48ce75-host\") pod \"ip-10-0-136-68us-west-2computeinternal-debug\" (UID: \"faee5572-050c-4fe2-b0a1-1aa9ae48ce75\") " pod="openshift-debug-zszpb/ip-10-0-136-68us-west-2computeinternal-debug" Feb 23 18:22:42 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:22:42.180037 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tdn6\" (UniqueName: \"kubernetes.io/projected/faee5572-050c-4fe2-b0a1-1aa9ae48ce75-kube-api-access-7tdn6\") pod \"ip-10-0-136-68us-west-2computeinternal-debug\" (UID: \"faee5572-050c-4fe2-b0a1-1aa9ae48ce75\") " pod="openshift-debug-zszpb/ip-10-0-136-68us-west-2computeinternal-debug" Feb 23 18:22:42 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:22:42.217501 2199 scope.go:115] "RemoveContainer" containerID="d965241664ee7e50681020fbd7ad0de6a381d202653d3d90dc63043df61474f0" Feb 23 18:22:42 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:22:42.217909 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:22:42 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:22:42.362988 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-debug-zszpb/ip-10-0-136-68us-west-2computeinternal-debug" Feb 23 18:22:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:42.363671382Z" level=info msg="Running pod sandbox: openshift-debug-zszpb/ip-10-0-136-68us-west-2computeinternal-debug/POD" id=90545369-790c-41cf-b5d8-388f80f65d15 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:22:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:42.363737905Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:22:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:42.367340947Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=90545369-790c-41cf-b5d8-388f80f65d15 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:22:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:42.369413506Z" level=info msg="Ran pod sandbox 7e9cd4cb95592cb8be9f646e7a64e67231f77d052c753ea2192ff3df0d1145d1 with infra container: openshift-debug-zszpb/ip-10-0-136-68us-west-2computeinternal-debug/POD" id=90545369-790c-41cf-b5d8-388f80f65d15 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:22:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:42.370263694Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:a4b875c6248aece334139248c2da87359b6ec3961d701b6643ace31db1b51d12" id=0b2a26de-9def-4de2-be8f-d9a95ef319f6 name=/runtime.v1.ImageService/ImageStatus Feb 23 18:22:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:42.370454004Z" level=info msg="Image status: 
&ImageStatusResponse{Image:&Image{Id:25563a58e011c8f5e5ce0ad0855a11a739335cfafef29c46935ce1be3de8dd03,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:a4b875c6248aece334139248c2da87359b6ec3961d701b6643ace31db1b51d12],Size_:792105820,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=0b2a26de-9def-4de2-be8f-d9a95ef319f6 name=/runtime.v1.ImageService/ImageStatus Feb 23 18:22:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:42.371053249Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:a4b875c6248aece334139248c2da87359b6ec3961d701b6643ace31db1b51d12" id=1ef70165-a24a-42bc-af74-f6f1acfa623e name=/runtime.v1.ImageService/ImageStatus Feb 23 18:22:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:42.371217442Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:25563a58e011c8f5e5ce0ad0855a11a739335cfafef29c46935ce1be3de8dd03,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:a4b875c6248aece334139248c2da87359b6ec3961d701b6643ace31db1b51d12],Size_:792105820,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=1ef70165-a24a-42bc-af74-f6f1acfa623e name=/runtime.v1.ImageService/ImageStatus Feb 23 18:22:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:42.371804248Z" level=info msg="Creating container: openshift-debug-zszpb/ip-10-0-136-68us-west-2computeinternal-debug/container-00" id=113376a6-e73b-4ce0-8e8c-842e153fbce9 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 18:22:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:42.371904056Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:22:42 ip-10-0-136-68 systemd[1]: Started 
crio-conmon-c8b13879970264fd3c559135dc905cdfa0c6887cc2b55b9538da744ea93bb7fa.scope. Feb 23 18:22:42 ip-10-0-136-68 systemd[1]: Started libcontainer container c8b13879970264fd3c559135dc905cdfa0c6887cc2b55b9538da744ea93bb7fa. Feb 23 18:22:42 ip-10-0-136-68 conmon[8022]: conmon c8b13879970264fd3c55 : Failed to write to cgroup.event_control Operation not supported Feb 23 18:22:42 ip-10-0-136-68 systemd[1]: crio-conmon-c8b13879970264fd3c559135dc905cdfa0c6887cc2b55b9538da744ea93bb7fa.scope: Deactivated successfully. Feb 23 18:22:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:42.454581976Z" level=info msg="Created container c8b13879970264fd3c559135dc905cdfa0c6887cc2b55b9538da744ea93bb7fa: openshift-debug-zszpb/ip-10-0-136-68us-west-2computeinternal-debug/container-00" id=113376a6-e73b-4ce0-8e8c-842e153fbce9 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 18:22:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:42.454873226Z" level=info msg="Starting container: c8b13879970264fd3c559135dc905cdfa0c6887cc2b55b9538da744ea93bb7fa" id=a48e4529-bfe7-4cd8-8a73-369cd0a6af8f name=/runtime.v1.RuntimeService/StartContainer Feb 23 18:22:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:42.475297370Z" level=info msg="Started container" PID=8033 containerID=c8b13879970264fd3c559135dc905cdfa0c6887cc2b55b9538da744ea93bb7fa description=openshift-debug-zszpb/ip-10-0-136-68us-west-2computeinternal-debug/container-00 id=a48e4529-bfe7-4cd8-8a73-369cd0a6af8f name=/runtime.v1.RuntimeService/StartContainer sandboxID=7e9cd4cb95592cb8be9f646e7a64e67231f77d052c753ea2192ff3df0d1145d1 Feb 23 18:22:42 ip-10-0-136-68 systemd[1]: Starting rpm-ostree System Management Daemon... Feb 23 18:22:42 ip-10-0-136-68 rpm-ostree[8063]: Reading config file '/etc/rpm-ostreed.conf' Feb 23 18:22:42 ip-10-0-136-68 systemd[1]: Starting Authorization Manager... 
Feb 23 18:22:42 ip-10-0-136-68 polkitd[8067]: Started polkitd version 0.117 Feb 23 18:22:42 ip-10-0-136-68 polkitd[8067]: Loading rules from directory /etc/polkit-1/rules.d Feb 23 18:22:42 ip-10-0-136-68 polkitd[8067]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 23 18:22:42 ip-10-0-136-68 polkitd[8067]: Finished loading, compiling and executing 3 rules Feb 23 18:22:42 ip-10-0-136-68 systemd[1]: Started Authorization Manager. Feb 23 18:22:42 ip-10-0-136-68 polkitd[8067]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 23 18:22:42 ip-10-0-136-68 rpm-ostree[8063]: failed to query container image base metadata: Missing base image ref ostree/container/blob/sha256_3A_4f92d094360fb582b58beaa7fd99fcdcab8b2af5cfe78b5cc5d9b36be254c3b7 Feb 23 18:22:42 ip-10-0-136-68 rpm-ostree[8063]: failed to query container image base metadata: Missing base image ref ostree/container/blob/sha256_3A_4f92d094360fb582b58beaa7fd99fcdcab8b2af5cfe78b5cc5d9b36be254c3b7 Feb 23 18:22:42 ip-10-0-136-68 rpm-ostree[8063]: In idle state; will auto-exit in 64 seconds Feb 23 18:22:42 ip-10-0-136-68 systemd[1]: Started rpm-ostree System Management Daemon. 
Feb 23 18:22:42 ip-10-0-136-68 rpm-ostree[8063]: client(id:cli dbus:1.209 unit:crio-c8b13879970264fd3c559135dc905cdfa0c6887cc2b55b9538da744ea93bb7fa.scope uid:0) added; new total=1 Feb 23 18:22:42 ip-10-0-136-68 rpm-ostree[8063]: failed to query container image base metadata: Missing base image ref ostree/container/blob/sha256_3A_4f92d094360fb582b58beaa7fd99fcdcab8b2af5cfe78b5cc5d9b36be254c3b7 Feb 23 18:22:42 ip-10-0-136-68 rpm-ostree[8063]: client(id:cli dbus:1.209 unit:crio-c8b13879970264fd3c559135dc905cdfa0c6887cc2b55b9538da744ea93bb7fa.scope uid:0) vanished; remaining=0 Feb 23 18:22:42 ip-10-0-136-68 rpm-ostree[8063]: In idle state; will auto-exit in 60 seconds Feb 23 18:22:42 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:22:42.900448 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-debug-zszpb/ip-10-0-136-68us-west-2computeinternal-debug" event=&{ID:faee5572-050c-4fe2-b0a1-1aa9ae48ce75 Type:ContainerStarted Data:c8b13879970264fd3c559135dc905cdfa0c6887cc2b55b9538da744ea93bb7fa} Feb 23 18:22:42 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:22:42.900635 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-debug-zszpb/ip-10-0-136-68us-west-2computeinternal-debug" event=&{ID:faee5572-050c-4fe2-b0a1-1aa9ae48ce75 Type:ContainerStarted Data:7e9cd4cb95592cb8be9f646e7a64e67231f77d052c753ea2192ff3df0d1145d1} Feb 23 18:22:42 ip-10-0-136-68 systemd[1]: crio-c8b13879970264fd3c559135dc905cdfa0c6887cc2b55b9538da744ea93bb7fa.scope: Deactivated successfully. 
Feb 23 18:22:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:44.244548139Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(2b8268de643a6b7b824aa0ae34f62bf75246fc961e5c342d0b673381c55c896d): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=204d9b91-a952-432f-bec3-cdbe0ed711a2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:22:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:44.244755175Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 2b8268de643a6b7b824aa0ae34f62bf75246fc961e5c342d0b673381c55c896d" id=204d9b91-a952-432f-bec3-cdbe0ed711a2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:22:44 ip-10-0-136-68 systemd[1]: run-utsns-6869e245\x2d753c\x2d4ec2\x2daf1d\x2d58c2b67e75a8.mount: Deactivated successfully. Feb 23 18:22:44 ip-10-0-136-68 systemd[1]: run-ipcns-6869e245\x2d753c\x2d4ec2\x2daf1d\x2d58c2b67e75a8.mount: Deactivated successfully. Feb 23 18:22:44 ip-10-0-136-68 systemd[1]: run-netns-6869e245\x2d753c\x2d4ec2\x2daf1d\x2d58c2b67e75a8.mount: Deactivated successfully. 
Feb 23 18:22:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:44.267339390Z" level=info msg="runSandbox: deleting pod ID 2b8268de643a6b7b824aa0ae34f62bf75246fc961e5c342d0b673381c55c896d from idIndex" id=204d9b91-a952-432f-bec3-cdbe0ed711a2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:22:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:44.267378601Z" level=info msg="runSandbox: removing pod sandbox 2b8268de643a6b7b824aa0ae34f62bf75246fc961e5c342d0b673381c55c896d" id=204d9b91-a952-432f-bec3-cdbe0ed711a2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:22:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:44.267423861Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 2b8268de643a6b7b824aa0ae34f62bf75246fc961e5c342d0b673381c55c896d" id=204d9b91-a952-432f-bec3-cdbe0ed711a2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:22:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:44.267445468Z" level=info msg="runSandbox: unmounting shmPath for sandbox 2b8268de643a6b7b824aa0ae34f62bf75246fc961e5c342d0b673381c55c896d" id=204d9b91-a952-432f-bec3-cdbe0ed711a2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:22:44 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-2b8268de643a6b7b824aa0ae34f62bf75246fc961e5c342d0b673381c55c896d-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:22:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:44.274320372Z" level=info msg="runSandbox: removing pod sandbox from storage: 2b8268de643a6b7b824aa0ae34f62bf75246fc961e5c342d0b673381c55c896d" id=204d9b91-a952-432f-bec3-cdbe0ed711a2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:22:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:44.276289811Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=204d9b91-a952-432f-bec3-cdbe0ed711a2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:22:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:44.276328325Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=204d9b91-a952-432f-bec3-cdbe0ed711a2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:22:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:22:44.276513 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(2b8268de643a6b7b824aa0ae34f62bf75246fc961e5c342d0b673381c55c896d): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:22:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:22:44.276562 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(2b8268de643a6b7b824aa0ae34f62bf75246fc961e5c342d0b673381c55c896d): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:22:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:22:44.276589 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(2b8268de643a6b7b824aa0ae34f62bf75246fc961e5c342d0b673381c55c896d): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:22:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:22:44.276648 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(2b8268de643a6b7b824aa0ae34f62bf75246fc961e5c342d0b673381c55c896d): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 18:22:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:47.245004340Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(e48d52a2aa7e4c857fe8015dbf7167892fce14440b93d649a9d3cabc343155dc): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=1c277db4-a611-4b7e-84b6-396153260505 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:22:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:47.245063167Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e48d52a2aa7e4c857fe8015dbf7167892fce14440b93d649a9d3cabc343155dc" id=1c277db4-a611-4b7e-84b6-396153260505 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:22:47 ip-10-0-136-68 systemd[1]: run-utsns-5f97988d\x2dd18f\x2d4372\x2db74d\x2d13dbfd73e7d5.mount: Deactivated successfully. Feb 23 18:22:47 ip-10-0-136-68 systemd[1]: run-ipcns-5f97988d\x2dd18f\x2d4372\x2db74d\x2d13dbfd73e7d5.mount: Deactivated successfully. Feb 23 18:22:47 ip-10-0-136-68 systemd[1]: run-netns-5f97988d\x2dd18f\x2d4372\x2db74d\x2d13dbfd73e7d5.mount: Deactivated successfully. 
Feb 23 18:22:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:47.277351001Z" level=info msg="runSandbox: deleting pod ID e48d52a2aa7e4c857fe8015dbf7167892fce14440b93d649a9d3cabc343155dc from idIndex" id=1c277db4-a611-4b7e-84b6-396153260505 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:22:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:47.277398069Z" level=info msg="runSandbox: removing pod sandbox e48d52a2aa7e4c857fe8015dbf7167892fce14440b93d649a9d3cabc343155dc" id=1c277db4-a611-4b7e-84b6-396153260505 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:22:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:47.277451961Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e48d52a2aa7e4c857fe8015dbf7167892fce14440b93d649a9d3cabc343155dc" id=1c277db4-a611-4b7e-84b6-396153260505 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:22:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:47.277471889Z" level=info msg="runSandbox: unmounting shmPath for sandbox e48d52a2aa7e4c857fe8015dbf7167892fce14440b93d649a9d3cabc343155dc" id=1c277db4-a611-4b7e-84b6-396153260505 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:22:47 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-e48d52a2aa7e4c857fe8015dbf7167892fce14440b93d649a9d3cabc343155dc-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:22:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:47.283311809Z" level=info msg="runSandbox: removing pod sandbox from storage: e48d52a2aa7e4c857fe8015dbf7167892fce14440b93d649a9d3cabc343155dc" id=1c277db4-a611-4b7e-84b6-396153260505 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:22:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:47.284904723Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=1c277db4-a611-4b7e-84b6-396153260505 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:22:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:47.284935386Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=1c277db4-a611-4b7e-84b6-396153260505 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:22:47 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:22:47.285147 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(e48d52a2aa7e4c857fe8015dbf7167892fce14440b93d649a9d3cabc343155dc): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:22:47 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:22:47.285222 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(e48d52a2aa7e4c857fe8015dbf7167892fce14440b93d649a9d3cabc343155dc): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:22:47 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:22:47.285281 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(e48d52a2aa7e4c857fe8015dbf7167892fce14440b93d649a9d3cabc343155dc): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:22:47 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:22:47.285373 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(e48d52a2aa7e4c857fe8015dbf7167892fce14440b93d649a9d3cabc343155dc): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 18:22:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:48.245353480Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(1a1c2d50669e58e06482a247c15c4c1b8a8e54c693e75dfa59c7ed916c1526dc): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e75924c2-d0b0-4b22-bc1f-0011b929f686 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:22:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:48.245582247Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 1a1c2d50669e58e06482a247c15c4c1b8a8e54c693e75dfa59c7ed916c1526dc" id=e75924c2-d0b0-4b22-bc1f-0011b929f686 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:22:48 ip-10-0-136-68 systemd[1]: run-utsns-9a2ad924\x2db1e0\x2d402a\x2d94a9\x2dcabc2a7585b6.mount: Deactivated successfully. Feb 23 18:22:48 ip-10-0-136-68 systemd[1]: run-ipcns-9a2ad924\x2db1e0\x2d402a\x2d94a9\x2dcabc2a7585b6.mount: Deactivated successfully. Feb 23 18:22:48 ip-10-0-136-68 systemd[1]: run-netns-9a2ad924\x2db1e0\x2d402a\x2d94a9\x2dcabc2a7585b6.mount: Deactivated successfully. 
Feb 23 18:22:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:48.280349163Z" level=info msg="runSandbox: deleting pod ID 1a1c2d50669e58e06482a247c15c4c1b8a8e54c693e75dfa59c7ed916c1526dc from idIndex" id=e75924c2-d0b0-4b22-bc1f-0011b929f686 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:22:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:48.280395791Z" level=info msg="runSandbox: removing pod sandbox 1a1c2d50669e58e06482a247c15c4c1b8a8e54c693e75dfa59c7ed916c1526dc" id=e75924c2-d0b0-4b22-bc1f-0011b929f686 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:22:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:48.280441887Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 1a1c2d50669e58e06482a247c15c4c1b8a8e54c693e75dfa59c7ed916c1526dc" id=e75924c2-d0b0-4b22-bc1f-0011b929f686 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:22:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:48.280455581Z" level=info msg="runSandbox: unmounting shmPath for sandbox 1a1c2d50669e58e06482a247c15c4c1b8a8e54c693e75dfa59c7ed916c1526dc" id=e75924c2-d0b0-4b22-bc1f-0011b929f686 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:22:48 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-1a1c2d50669e58e06482a247c15c4c1b8a8e54c693e75dfa59c7ed916c1526dc-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:22:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:48.284310960Z" level=info msg="runSandbox: removing pod sandbox from storage: 1a1c2d50669e58e06482a247c15c4c1b8a8e54c693e75dfa59c7ed916c1526dc" id=e75924c2-d0b0-4b22-bc1f-0011b929f686 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:22:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:48.285978823Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=e75924c2-d0b0-4b22-bc1f-0011b929f686 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:22:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:48.286012493Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=e75924c2-d0b0-4b22-bc1f-0011b929f686 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:22:48 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:22:48.286280 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(1a1c2d50669e58e06482a247c15c4c1b8a8e54c693e75dfa59c7ed916c1526dc): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:22:48 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:22:48.286352 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(1a1c2d50669e58e06482a247c15c4c1b8a8e54c693e75dfa59c7ed916c1526dc): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:22:48 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:22:48.286392 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(1a1c2d50669e58e06482a247c15c4c1b8a8e54c693e75dfa59c7ed916c1526dc): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:22:48 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:22:48.286472 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(1a1c2d50669e58e06482a247c15c4c1b8a8e54c693e75dfa59c7ed916c1526dc): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 18:22:52 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:22:52.217031 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 18:22:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:52.217464565Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=766a3f1f-7902-4479-9581-936075795c0c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:22:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:52.217517727Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:22:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:52.226622676Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:151c186b383d5102aff5a6b2891a99f08b6efecbff56cdabee1925e20c15b270 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/88600832-8ab8-4c4b-b090-7019be95ca84 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:22:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:52.226653743Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:22:56 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:22:56.216438 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:22:56 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:22:56.216505 2199 scope.go:115] "RemoveContainer" containerID="d965241664ee7e50681020fbd7ad0de6a381d202653d3d90dc63043df61474f0" Feb 23 18:22:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:56.216853121Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=6c2033e4-bc81-425f-8123-943fbcf9ce4e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:22:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:56.216916626Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:22:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:22:56.217038 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:22:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:56.222995458Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:f181a48c1aec28bb0325d7000dc9b5e324c70951da3afe9bb2043dd6619124ed UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/366488da-5b17-4206-a537-221c623e29c5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:22:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:56.223032963Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:22:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:22:56.292654 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc 
error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:22:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:22:56.292903 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:22:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:22:56.293131 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:22:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:22:56.293167 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:22:58 ip-10-0-136-68 crio[2158]: 
time="2023-02-23 18:22:58.676657390Z" level=info msg="cleanup sandbox network" Feb 23 18:22:59 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:22:59.216957 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:22:59 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:22:59.217050 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:22:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:59.217418055Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=f522ebd8-c8b7-4630-8cd9-bd7eaa088945 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:22:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:59.217473364Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=3a34ef32-148d-431a-a78f-5f787a041abb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:22:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:59.217506443Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:22:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:59.217518269Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:22:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:59.224700161Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:6379d7e5106125fa6b895ceb6595b341b9f19dae1c1d5b2d24280485db12eda6 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/44bc27fe-7792-49e0-9733-0727894ba970 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:22:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:59.224727149Z" level=info msg="Adding pod 
openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:22:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:59.225268059Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:4397619e0b3f5b1f3d75bc1707419550cf3bef4ceac061a7c64e17ca18d51412 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/233f384e-7f99-423e-a65b-c6a971abbf5a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:22:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:22:59.225300820Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:23:03 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:23:03.216797 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:23:03 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:23:03.217132 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:23:03 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:23:03.217386 2199 remote_runtime.go:479] "ExecSync cmd from 
runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:23:03 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:23:03.217433 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:23:09 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:23:09.216823 2199 scope.go:115] "RemoveContainer" containerID="d965241664ee7e50681020fbd7ad0de6a381d202653d3d90dc63043df61474f0" Feb 23 18:23:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:23:09.217597394Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=97f13065-8d3f-4fd0-87b2-29242b4188d2 name=/runtime.v1.ImageService/ImageStatus Feb 23 18:23:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:23:09.217819620Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" 
id=97f13065-8d3f-4fd0-87b2-29242b4188d2 name=/runtime.v1.ImageService/ImageStatus Feb 23 18:23:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:23:09.218414306Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=6b89504d-d1b2-4a37-ab05-61cb9cecc4bf name=/runtime.v1.ImageService/ImageStatus Feb 23 18:23:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:23:09.218580653Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=6b89504d-d1b2-4a37-ab05-61cb9cecc4bf name=/runtime.v1.ImageService/ImageStatus Feb 23 18:23:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:23:09.219200373Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=fe672f7f-7b6b-4659-9262-5635e68b9411 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 18:23:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:23:09.219318356Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:23:09 ip-10-0-136-68 systemd[1]: Started crio-conmon-e34dd1ed63fc3d49653c21fcfb2b0f6b75558648db1a34ed76f7ef5ef99d6e1e.scope. Feb 23 18:23:09 ip-10-0-136-68 systemd[1]: Started libcontainer container e34dd1ed63fc3d49653c21fcfb2b0f6b75558648db1a34ed76f7ef5ef99d6e1e. 
Feb 23 18:23:09 ip-10-0-136-68 conmon[8131]: conmon e34dd1ed63fc3d49653c : Failed to write to cgroup.event_control Operation not supported Feb 23 18:23:09 ip-10-0-136-68 systemd[1]: crio-conmon-e34dd1ed63fc3d49653c21fcfb2b0f6b75558648db1a34ed76f7ef5ef99d6e1e.scope: Deactivated successfully. Feb 23 18:23:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:23:09.347137761Z" level=info msg="Created container e34dd1ed63fc3d49653c21fcfb2b0f6b75558648db1a34ed76f7ef5ef99d6e1e: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=fe672f7f-7b6b-4659-9262-5635e68b9411 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 18:23:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:23:09.347561474Z" level=info msg="Starting container: e34dd1ed63fc3d49653c21fcfb2b0f6b75558648db1a34ed76f7ef5ef99d6e1e" id=30a48af2-676e-4578-b546-87e102b86391 name=/runtime.v1.RuntimeService/StartContainer Feb 23 18:23:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:23:09.354629949Z" level=info msg="Started container" PID=8142 containerID=e34dd1ed63fc3d49653c21fcfb2b0f6b75558648db1a34ed76f7ef5ef99d6e1e description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver id=30a48af2-676e-4578-b546-87e102b86391 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4ac357af36e7dd4792f93249425e93fd84aed45840e9fbb3d0ae6c0af964798 Feb 23 18:23:09 ip-10-0-136-68 systemd[1]: crio-e34dd1ed63fc3d49653c21fcfb2b0f6b75558648db1a34ed76f7ef5ef99d6e1e.scope: Deactivated successfully. 
Feb 23 18:23:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:23:13.693139001Z" level=warning msg="Failed to find container exit file for d965241664ee7e50681020fbd7ad0de6a381d202653d3d90dc63043df61474f0: timed out waiting for the condition" id=e1b16984-2dd2-48fd-8242-da78f9ae6ba4 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:23:13 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:23:13.694303 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerStarted Data:e34dd1ed63fc3d49653c21fcfb2b0f6b75558648db1a34ed76f7ef5ef99d6e1e} Feb 23 18:23:13 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:23:13.708227 2199 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-debug-zszpb/ip-10-0-136-68us-west-2computeinternal-debug" podStartSLOduration=31.708185018 pod.CreationTimestamp="2023-02-23 18:22:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-23 18:22:42.912094983 +0000 UTC m=+1943.802714014" watchObservedRunningTime="2023-02-23 18:23:13.708185018 +0000 UTC m=+1974.598804046" Feb 23 18:23:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:23:23.234760087Z" level=info msg="NetworkStart: stopping network for sandbox 7baba281777a22d67ec4ddad81c0b505acd1380db202358033fa0161f52581dd" id=e756c758-1244-4394-9784-2dbd8ef60f5a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:23:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:23:23.234813057Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:aa2f6c1cfe2015e661750ad0d84d67c8d8f79e105601e0a761db6a83ba33b3f1 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS: Networks:[{Name:multus-cni-network Ifname:eth0}] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 
18:23:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:23:23.235228298Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:23:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:23:24.872888 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:23:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:23:24.872948 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 18:23:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:23:26.291896 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:23:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:23:26.292213 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f 
/etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:23:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:23:26.292479 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:23:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:23:26.292519 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:23:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:23:34.872358 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:23:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:23:34.872419 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 18:23:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:23:37.237976589Z" level=info msg="NetworkStart: stopping network for sandbox 
151c186b383d5102aff5a6b2891a99f08b6efecbff56cdabee1925e20c15b270" id=766a3f1f-7902-4479-9581-936075795c0c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:23:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:23:37.238092363Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:151c186b383d5102aff5a6b2891a99f08b6efecbff56cdabee1925e20c15b270 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/88600832-8ab8-4c4b-b090-7019be95ca84 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:23:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:23:37.238121045Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:23:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:23:37.238128753Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:23:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:23:37.238137094Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:23:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:23:41.235423504Z" level=info msg="NetworkStart: stopping network for sandbox f181a48c1aec28bb0325d7000dc9b5e324c70951da3afe9bb2043dd6619124ed" id=6c2033e4-bc81-425f-8123-943fbcf9ce4e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:23:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:23:41.235543383Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:f181a48c1aec28bb0325d7000dc9b5e324c70951da3afe9bb2043dd6619124ed UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/366488da-5b17-4206-a537-221c623e29c5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:23:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:23:41.235571493Z" level=error msg="error 
loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:23:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:23:41.235582018Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:23:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:23:41.235591698Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:23:43 ip-10-0-136-68 rpm-ostree[8063]: In idle state; will auto-exit in 60 seconds Feb 23 18:23:43 ip-10-0-136-68 systemd[1]: rpm-ostreed.service: Deactivated successfully. Feb 23 18:23:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:23:44.239161846Z" level=info msg="NetworkStart: stopping network for sandbox 6379d7e5106125fa6b895ceb6595b341b9f19dae1c1d5b2d24280485db12eda6" id=3a34ef32-148d-431a-a78f-5f787a041abb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:23:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:23:44.239204223Z" level=info msg="NetworkStart: stopping network for sandbox 4397619e0b3f5b1f3d75bc1707419550cf3bef4ceac061a7c64e17ca18d51412" id=f522ebd8-c8b7-4630-8cd9-bd7eaa088945 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:23:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:23:44.239294888Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:6379d7e5106125fa6b895ceb6595b341b9f19dae1c1d5b2d24280485db12eda6 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/44bc27fe-7792-49e0-9733-0727894ba970 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:23:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:23:44.239323731Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:23:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:23:44.239331180Z" level=warning msg="falling back to loading from 
existing plugins on disk" Feb 23 18:23:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:23:44.239337622Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:23:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:23:44.239326675Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:4397619e0b3f5b1f3d75bc1707419550cf3bef4ceac061a7c64e17ca18d51412 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/233f384e-7f99-423e-a65b-c6a971abbf5a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:23:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:23:44.239440844Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:23:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:23:44.239451762Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:23:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:23:44.239461366Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:23:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:23:44.872765 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:23:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:23:44.872960 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial 
tcp 10.0.136.68:10300: connect: connection refused" Feb 23 18:23:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:23:54.872362 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:23:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:23:54.872423 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 18:23:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:23:56.292531 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:23:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:23:56.292792 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:23:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:23:56.293045 2199 remote_runtime.go:479] "ExecSync cmd from runtime 
service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:23:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:23:56.293073 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:23:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:23:58.678106597Z" level=error msg="Failed to cleanup (probably retrying): failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(aa2f6c1cfe2015e661750ad0d84d67c8d8f79e105601e0a761db6a83ba33b3f1): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): netplugin failed with no error message: signal: killed" Feb 23 18:23:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:23:58.678164726Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:7baba281777a22d67ec4ddad81c0b505acd1380db202358033fa0161f52581dd UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/16aff7cd-a0ec-4a79-9bcf-056ce1047476 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:23:58 
ip-10-0-136-68 crio[2158]: time="2023-02-23 18:23:58.678210354Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 18:23:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:23:58.678219320Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 18:23:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:23:58.678227750Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:24:04 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:24:04.217653 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:24:04 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:24:04.217940 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:24:04 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:24:04.218129 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:24:04 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:24:04.218161 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 18:24:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:24:04.872673 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 18:24:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:24:04.872735 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 18:24:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:24:04.872763 2199 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7"
Feb 23 18:24:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:24:04.873274 2199 kuberuntime_manager.go:659] "Message for Container of pod" containerName="csi-driver" containerStatusID={Type:cri-o ID:e34dd1ed63fc3d49653c21fcfb2b0f6b75558648db1a34ed76f7ef5ef99d6e1e} pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" containerMessage="Container csi-driver failed liveness probe, will be restarted"
Feb 23 18:24:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:24:04.873433 2199 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" containerID="cri-o://e34dd1ed63fc3d49653c21fcfb2b0f6b75558648db1a34ed76f7ef5ef99d6e1e" gracePeriod=30
Feb 23 18:24:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:04.873708500Z" level=info msg="Stopping container: e34dd1ed63fc3d49653c21fcfb2b0f6b75558648db1a34ed76f7ef5ef99d6e1e (timeout: 30s)" id=205b8810-597f-4ee5-a82b-f10c04264327 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 18:24:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:08.635087651Z" level=warning msg="Failed to find container exit file for e34dd1ed63fc3d49653c21fcfb2b0f6b75558648db1a34ed76f7ef5ef99d6e1e: timed out waiting for the condition" id=205b8810-597f-4ee5-a82b-f10c04264327 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 18:24:08 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-5f5aece3a44fe863f5212994f43f0a06308a4a7f5b4d349b92980c8c73293229-merged.mount: Deactivated successfully.
Feb 23 18:24:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:12.438072646Z" level=warning msg="Failed to find container exit file for e34dd1ed63fc3d49653c21fcfb2b0f6b75558648db1a34ed76f7ef5ef99d6e1e: timed out waiting for the condition" id=205b8810-597f-4ee5-a82b-f10c04264327 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 18:24:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:12.439698814Z" level=info msg="Stopped container e34dd1ed63fc3d49653c21fcfb2b0f6b75558648db1a34ed76f7ef5ef99d6e1e: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=205b8810-597f-4ee5-a82b-f10c04264327 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 18:24:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:12.440406090Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=fa9d4679-2dca-4f9c-9c99-c8fc55468f97 name=/runtime.v1.ImageService/ImageStatus
Feb 23 18:24:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:12.440605035Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=fa9d4679-2dca-4f9c-9c99-c8fc55468f97 name=/runtime.v1.ImageService/ImageStatus
Feb 23 18:24:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:12.441189120Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=b84bfe9f-94f5-41a2-9bd1-572f45d06913 name=/runtime.v1.ImageService/ImageStatus
Feb 23 18:24:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:12.441347527Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=b84bfe9f-94f5-41a2-9bd1-572f45d06913 name=/runtime.v1.ImageService/ImageStatus
Feb 23 18:24:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:12.441995812Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=d6f97133-6645-4236-892e-8a79061bb7cf name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 18:24:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:12.442099364Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 18:24:12 ip-10-0-136-68 systemd[1]: Started crio-conmon-89651d291d2f8f16315faa2d3380eaf0dfe8754303677a7d16bae931d67697ee.scope.
Feb 23 18:24:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:12.518579650Z" level=warning msg="Failed to find container exit file for e34dd1ed63fc3d49653c21fcfb2b0f6b75558648db1a34ed76f7ef5ef99d6e1e: timed out waiting for the condition" id=3d8f1fef-9e8a-48bc-81bf-e46e69884ade name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 18:24:12 ip-10-0-136-68 systemd[1]: Started libcontainer container 89651d291d2f8f16315faa2d3380eaf0dfe8754303677a7d16bae931d67697ee.
Feb 23 18:24:12 ip-10-0-136-68 conmon[8303]: conmon 89651d291d2f8f16315f : Failed to write to cgroup.event_control Operation not supported
Feb 23 18:24:12 ip-10-0-136-68 systemd[1]: crio-conmon-89651d291d2f8f16315faa2d3380eaf0dfe8754303677a7d16bae931d67697ee.scope: Deactivated successfully.
Feb 23 18:24:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:12.574361955Z" level=info msg="Created container 89651d291d2f8f16315faa2d3380eaf0dfe8754303677a7d16bae931d67697ee: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=d6f97133-6645-4236-892e-8a79061bb7cf name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 18:24:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:12.574753227Z" level=info msg="Starting container: 89651d291d2f8f16315faa2d3380eaf0dfe8754303677a7d16bae931d67697ee" id=b54e01f5-fa42-4492-9597-99fc1c196f75 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 18:24:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:12.581683653Z" level=info msg="Started container" PID=8327 containerID=89651d291d2f8f16315faa2d3380eaf0dfe8754303677a7d16bae931d67697ee description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver id=b54e01f5-fa42-4492-9597-99fc1c196f75 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4ac357af36e7dd4792f93249425e93fd84aed45840e9fbb3d0ae6c0af964798
Feb 23 18:24:12 ip-10-0-136-68 systemd[1]: crio-89651d291d2f8f16315faa2d3380eaf0dfe8754303677a7d16bae931d67697ee.scope: Deactivated successfully.
Feb 23 18:24:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:16.281128131Z" level=warning msg="Failed to find container exit file for d965241664ee7e50681020fbd7ad0de6a381d202653d3d90dc63043df61474f0: timed out waiting for the condition" id=818e1707-96bc-425c-80c9-594e18d4aa80 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 18:24:16 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:24:16.281948 2199 generic.go:332] "Generic (PLEG): container finished" podID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerID="e34dd1ed63fc3d49653c21fcfb2b0f6b75558648db1a34ed76f7ef5ef99d6e1e" exitCode=-1
Feb 23 18:24:16 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:24:16.281980 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerDied Data:e34dd1ed63fc3d49653c21fcfb2b0f6b75558648db1a34ed76f7ef5ef99d6e1e}
Feb 23 18:24:16 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:24:16.282005 2199 scope.go:115] "RemoveContainer" containerID="d965241664ee7e50681020fbd7ad0de6a381d202653d3d90dc63043df61474f0"
Feb 23 18:24:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:20.041987089Z" level=warning msg="Failed to find container exit file for d965241664ee7e50681020fbd7ad0de6a381d202653d3d90dc63043df61474f0: timed out waiting for the condition" id=f54830a9-0d7e-4ee6-89de-b6333f25e8e4 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 18:24:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:21.024115522Z" level=warning msg="Failed to find container exit file for e34dd1ed63fc3d49653c21fcfb2b0f6b75558648db1a34ed76f7ef5ef99d6e1e: timed out waiting for the condition" id=d89dc987-b7f9-4d4f-aac0-98e32f23c09e name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 18:24:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:22.248410598Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(151c186b383d5102aff5a6b2891a99f08b6efecbff56cdabee1925e20c15b270): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=766a3f1f-7902-4479-9581-936075795c0c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:24:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:22.248460105Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 151c186b383d5102aff5a6b2891a99f08b6efecbff56cdabee1925e20c15b270" id=766a3f1f-7902-4479-9581-936075795c0c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:24:22 ip-10-0-136-68 systemd[1]: run-utsns-88600832\x2d8ab8\x2d4c4b\x2db090\x2d7019be95ca84.mount: Deactivated successfully.
Feb 23 18:24:22 ip-10-0-136-68 systemd[1]: run-ipcns-88600832\x2d8ab8\x2d4c4b\x2db090\x2d7019be95ca84.mount: Deactivated successfully.
Feb 23 18:24:22 ip-10-0-136-68 systemd[1]: run-netns-88600832\x2d8ab8\x2d4c4b\x2db090\x2d7019be95ca84.mount: Deactivated successfully.
Feb 23 18:24:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:22.277377966Z" level=info msg="runSandbox: deleting pod ID 151c186b383d5102aff5a6b2891a99f08b6efecbff56cdabee1925e20c15b270 from idIndex" id=766a3f1f-7902-4479-9581-936075795c0c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:24:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:22.277422984Z" level=info msg="runSandbox: removing pod sandbox 151c186b383d5102aff5a6b2891a99f08b6efecbff56cdabee1925e20c15b270" id=766a3f1f-7902-4479-9581-936075795c0c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:24:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:22.277452432Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 151c186b383d5102aff5a6b2891a99f08b6efecbff56cdabee1925e20c15b270" id=766a3f1f-7902-4479-9581-936075795c0c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:24:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:22.277466209Z" level=info msg="runSandbox: unmounting shmPath for sandbox 151c186b383d5102aff5a6b2891a99f08b6efecbff56cdabee1925e20c15b270" id=766a3f1f-7902-4479-9581-936075795c0c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:24:22 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-151c186b383d5102aff5a6b2891a99f08b6efecbff56cdabee1925e20c15b270-userdata-shm.mount: Deactivated successfully.
Feb 23 18:24:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:22.283306382Z" level=info msg="runSandbox: removing pod sandbox from storage: 151c186b383d5102aff5a6b2891a99f08b6efecbff56cdabee1925e20c15b270" id=766a3f1f-7902-4479-9581-936075795c0c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:24:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:22.284973191Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=766a3f1f-7902-4479-9581-936075795c0c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:24:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:22.284999859Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=766a3f1f-7902-4479-9581-936075795c0c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:24:22 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:24:22.285188 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(151c186b383d5102aff5a6b2891a99f08b6efecbff56cdabee1925e20c15b270): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 18:24:22 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:24:22.285312 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(151c186b383d5102aff5a6b2891a99f08b6efecbff56cdabee1925e20c15b270): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4"
Feb 23 18:24:22 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:24:22.285344 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(151c186b383d5102aff5a6b2891a99f08b6efecbff56cdabee1925e20c15b270): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4"
Feb 23 18:24:22 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:24:22.285401 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(151c186b383d5102aff5a6b2891a99f08b6efecbff56cdabee1925e20c15b270): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f
Feb 23 18:24:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:23.235799969Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(7baba281777a22d67ec4ddad81c0b505acd1380db202358033fa0161f52581dd): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): netplugin failed with no error message: signal: killed" id=e756c758-1244-4394-9784-2dbd8ef60f5a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:24:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:23.235840727Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 7baba281777a22d67ec4ddad81c0b505acd1380db202358033fa0161f52581dd" id=e756c758-1244-4394-9784-2dbd8ef60f5a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:24:23 ip-10-0-136-68 systemd[1]: run-utsns-16aff7cd\x2da0ec\x2d4a79\x2d9bcf\x2d056ce1047476.mount: Deactivated successfully.
Feb 23 18:24:23 ip-10-0-136-68 systemd[1]: run-netns-16aff7cd\x2da0ec\x2d4a79\x2d9bcf\x2d056ce1047476.mount: Deactivated successfully.
Feb 23 18:24:23 ip-10-0-136-68 systemd[1]: run-ipcns-16aff7cd\x2da0ec\x2d4a79\x2d9bcf\x2d056ce1047476.mount: Deactivated successfully.
Feb 23 18:24:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:23.259325179Z" level=info msg="runSandbox: deleting pod ID 7baba281777a22d67ec4ddad81c0b505acd1380db202358033fa0161f52581dd from idIndex" id=e756c758-1244-4394-9784-2dbd8ef60f5a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:24:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:23.259353756Z" level=info msg="runSandbox: removing pod sandbox 7baba281777a22d67ec4ddad81c0b505acd1380db202358033fa0161f52581dd" id=e756c758-1244-4394-9784-2dbd8ef60f5a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:24:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:23.259382492Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 7baba281777a22d67ec4ddad81c0b505acd1380db202358033fa0161f52581dd" id=e756c758-1244-4394-9784-2dbd8ef60f5a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:24:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:23.259409606Z" level=info msg="runSandbox: unmounting shmPath for sandbox 7baba281777a22d67ec4ddad81c0b505acd1380db202358033fa0161f52581dd" id=e756c758-1244-4394-9784-2dbd8ef60f5a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:24:23 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-7baba281777a22d67ec4ddad81c0b505acd1380db202358033fa0161f52581dd-userdata-shm.mount: Deactivated successfully.
Feb 23 18:24:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:23.265288920Z" level=info msg="runSandbox: removing pod sandbox from storage: 7baba281777a22d67ec4ddad81c0b505acd1380db202358033fa0161f52581dd" id=e756c758-1244-4394-9784-2dbd8ef60f5a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:24:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:23.266857672Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=e756c758-1244-4394-9784-2dbd8ef60f5a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:24:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:23.266886298Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=e756c758-1244-4394-9784-2dbd8ef60f5a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:24:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:24:23.267066 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(7baba281777a22d67ec4ddad81c0b505acd1380db202358033fa0161f52581dd): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 18:24:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:24:23.267116 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(7baba281777a22d67ec4ddad81c0b505acd1380db202358033fa0161f52581dd): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 18:24:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:24:23.267139 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(7baba281777a22d67ec4ddad81c0b505acd1380db202358033fa0161f52581dd): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 18:24:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:24:23.267193 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(7baba281777a22d67ec4ddad81c0b505acd1380db202358033fa0161f52581dd): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1
Feb 23 18:24:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:23.790971695Z" level=warning msg="Failed to find container exit file for d965241664ee7e50681020fbd7ad0de6a381d202653d3d90dc63043df61474f0: timed out waiting for the condition" id=09dec659-6a6c-4548-a281-9a84d7cbecde name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 18:24:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:23.791503301Z" level=info msg="Removing container: d965241664ee7e50681020fbd7ad0de6a381d202653d3d90dc63043df61474f0" id=99c11b41-a359-4140-9b4a-96b49cd7ba51 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 18:24:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:23.839618565Z" level=warning msg="Failed to find container exit file for d965241664ee7e50681020fbd7ad0de6a381d202653d3d90dc63043df61474f0: timed out waiting for the condition" id=6a17e459-1f7b-4d1c-aa73-918c3e649261 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 18:24:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:24:23.839927 2199 kuberuntime_gc.go:390] "Failed to remove container log dead symlink" err="remove /var/log/containers/aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers_csi-driver-d965241664ee7e50681020fbd7ad0de6a381d202653d3d90dc63043df61474f0.log: no such file or directory" path="/var/log/containers/aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers_csi-driver-d965241664ee7e50681020fbd7ad0de6a381d202653d3d90dc63043df61474f0.log"
Feb 23 18:24:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:24.772942347Z" level=warning msg="Failed to find container exit file for d965241664ee7e50681020fbd7ad0de6a381d202653d3d90dc63043df61474f0: timed out waiting for the condition" id=7ed1f54d-99f1-43d2-bdf5-0e5f88fbabf7 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 18:24:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:24:24.773881 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerStarted Data:89651d291d2f8f16315faa2d3380eaf0dfe8754303677a7d16bae931d67697ee}
Feb 23 18:24:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:24:24.872858 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 18:24:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:24:24.872917 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 18:24:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:26.245445899Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(f181a48c1aec28bb0325d7000dc9b5e324c70951da3afe9bb2043dd6619124ed): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=6c2033e4-bc81-425f-8123-943fbcf9ce4e name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:24:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:26.245490218Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f181a48c1aec28bb0325d7000dc9b5e324c70951da3afe9bb2043dd6619124ed" id=6c2033e4-bc81-425f-8123-943fbcf9ce4e name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:24:26 ip-10-0-136-68 systemd[1]: run-utsns-366488da\x2d5b17\x2d4206\x2da537\x2d221c623e29c5.mount: Deactivated successfully.
Feb 23 18:24:26 ip-10-0-136-68 systemd[1]: run-ipcns-366488da\x2d5b17\x2d4206\x2da537\x2d221c623e29c5.mount: Deactivated successfully.
Feb 23 18:24:26 ip-10-0-136-68 systemd[1]: run-netns-366488da\x2d5b17\x2d4206\x2da537\x2d221c623e29c5.mount: Deactivated successfully.
Feb 23 18:24:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:26.265311355Z" level=info msg="runSandbox: deleting pod ID f181a48c1aec28bb0325d7000dc9b5e324c70951da3afe9bb2043dd6619124ed from idIndex" id=6c2033e4-bc81-425f-8123-943fbcf9ce4e name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:24:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:26.265339749Z" level=info msg="runSandbox: removing pod sandbox f181a48c1aec28bb0325d7000dc9b5e324c70951da3afe9bb2043dd6619124ed" id=6c2033e4-bc81-425f-8123-943fbcf9ce4e name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:24:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:26.265364554Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f181a48c1aec28bb0325d7000dc9b5e324c70951da3afe9bb2043dd6619124ed" id=6c2033e4-bc81-425f-8123-943fbcf9ce4e name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:24:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:26.265379128Z" level=info msg="runSandbox: unmounting shmPath for sandbox f181a48c1aec28bb0325d7000dc9b5e324c70951da3afe9bb2043dd6619124ed" id=6c2033e4-bc81-425f-8123-943fbcf9ce4e name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:24:26 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-f181a48c1aec28bb0325d7000dc9b5e324c70951da3afe9bb2043dd6619124ed-userdata-shm.mount: Deactivated successfully.
Feb 23 18:24:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:26.279299753Z" level=info msg="runSandbox: removing pod sandbox from storage: f181a48c1aec28bb0325d7000dc9b5e324c70951da3afe9bb2043dd6619124ed" id=6c2033e4-bc81-425f-8123-943fbcf9ce4e name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:24:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:26.280969134Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=6c2033e4-bc81-425f-8123-943fbcf9ce4e name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:24:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:26.281003605Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=6c2033e4-bc81-425f-8123-943fbcf9ce4e name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:24:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:24:26.281203 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(f181a48c1aec28bb0325d7000dc9b5e324c70951da3afe9bb2043dd6619124ed): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 18:24:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:24:26.281289 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(f181a48c1aec28bb0325d7000dc9b5e324c70951da3afe9bb2043dd6619124ed): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz"
Feb 23 18:24:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:24:26.281330 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(f181a48c1aec28bb0325d7000dc9b5e324c70951da3afe9bb2043dd6619124ed): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz"
Feb 23 18:24:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:24:26.281410 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(f181a48c1aec28bb0325d7000dc9b5e324c70951da3afe9bb2043dd6619124ed): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7
Feb 23 18:24:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:24:26.291778 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:24:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:24:26.292013 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:24:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:24:26.292229 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:24:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:24:26.292282 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:24:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:27.554031287Z" level=warning msg="Failed to find container exit file for d965241664ee7e50681020fbd7ad0de6a381d202653d3d90dc63043df61474f0: timed out waiting for the condition" id=99c11b41-a359-4140-9b4a-96b49cd7ba51 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 18:24:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:27.578932753Z" level=info msg="Removed container d965241664ee7e50681020fbd7ad0de6a381d202653d3d90dc63043df61474f0: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=99c11b41-a359-4140-9b4a-96b49cd7ba51 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 18:24:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:29.251699700Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(6379d7e5106125fa6b895ceb6595b341b9f19dae1c1d5b2d24280485db12eda6): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=3a34ef32-148d-431a-a78f-5f787a041abb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:24:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:29.251758153Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 6379d7e5106125fa6b895ceb6595b341b9f19dae1c1d5b2d24280485db12eda6" 
id=3a34ef32-148d-431a-a78f-5f787a041abb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:24:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:29.251759296Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(4397619e0b3f5b1f3d75bc1707419550cf3bef4ceac061a7c64e17ca18d51412): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f522ebd8-c8b7-4630-8cd9-bd7eaa088945 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:24:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:29.251828167Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 4397619e0b3f5b1f3d75bc1707419550cf3bef4ceac061a7c64e17ca18d51412" id=f522ebd8-c8b7-4630-8cd9-bd7eaa088945 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:24:29 ip-10-0-136-68 systemd[1]: run-utsns-233f384e\x2d7f99\x2d423e\x2da65b\x2dc6a971abbf5a.mount: Deactivated successfully. Feb 23 18:24:29 ip-10-0-136-68 systemd[1]: run-utsns-44bc27fe\x2d7792\x2d49e0\x2d9733\x2d0727894ba970.mount: Deactivated successfully. Feb 23 18:24:29 ip-10-0-136-68 systemd[1]: run-ipcns-233f384e\x2d7f99\x2d423e\x2da65b\x2dc6a971abbf5a.mount: Deactivated successfully. Feb 23 18:24:29 ip-10-0-136-68 systemd[1]: run-ipcns-44bc27fe\x2d7792\x2d49e0\x2d9733\x2d0727894ba970.mount: Deactivated successfully. Feb 23 18:24:29 ip-10-0-136-68 systemd[1]: run-netns-233f384e\x2d7f99\x2d423e\x2da65b\x2dc6a971abbf5a.mount: Deactivated successfully. 
Feb 23 18:24:29 ip-10-0-136-68 systemd[1]: run-netns-44bc27fe\x2d7792\x2d49e0\x2d9733\x2d0727894ba970.mount: Deactivated successfully. Feb 23 18:24:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:29.275323089Z" level=info msg="runSandbox: deleting pod ID 4397619e0b3f5b1f3d75bc1707419550cf3bef4ceac061a7c64e17ca18d51412 from idIndex" id=f522ebd8-c8b7-4630-8cd9-bd7eaa088945 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:24:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:29.275359069Z" level=info msg="runSandbox: removing pod sandbox 4397619e0b3f5b1f3d75bc1707419550cf3bef4ceac061a7c64e17ca18d51412" id=f522ebd8-c8b7-4630-8cd9-bd7eaa088945 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:24:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:29.275398150Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 4397619e0b3f5b1f3d75bc1707419550cf3bef4ceac061a7c64e17ca18d51412" id=f522ebd8-c8b7-4630-8cd9-bd7eaa088945 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:24:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:29.275417469Z" level=info msg="runSandbox: unmounting shmPath for sandbox 4397619e0b3f5b1f3d75bc1707419550cf3bef4ceac061a7c64e17ca18d51412" id=f522ebd8-c8b7-4630-8cd9-bd7eaa088945 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:24:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:29.277325428Z" level=info msg="runSandbox: deleting pod ID 6379d7e5106125fa6b895ceb6595b341b9f19dae1c1d5b2d24280485db12eda6 from idIndex" id=3a34ef32-148d-431a-a78f-5f787a041abb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:24:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:29.277359798Z" level=info msg="runSandbox: removing pod sandbox 6379d7e5106125fa6b895ceb6595b341b9f19dae1c1d5b2d24280485db12eda6" id=3a34ef32-148d-431a-a78f-5f787a041abb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:24:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:29.277391633Z" level=info msg="runSandbox: deleting 
container ID from idIndex for sandbox 6379d7e5106125fa6b895ceb6595b341b9f19dae1c1d5b2d24280485db12eda6" id=3a34ef32-148d-431a-a78f-5f787a041abb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:24:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:29.277413916Z" level=info msg="runSandbox: unmounting shmPath for sandbox 6379d7e5106125fa6b895ceb6595b341b9f19dae1c1d5b2d24280485db12eda6" id=3a34ef32-148d-431a-a78f-5f787a041abb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:24:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:29.280313414Z" level=info msg="runSandbox: removing pod sandbox from storage: 4397619e0b3f5b1f3d75bc1707419550cf3bef4ceac061a7c64e17ca18d51412" id=f522ebd8-c8b7-4630-8cd9-bd7eaa088945 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:24:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:29.281950306Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=f522ebd8-c8b7-4630-8cd9-bd7eaa088945 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:24:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:29.281983056Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=f522ebd8-c8b7-4630-8cd9-bd7eaa088945 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:24:29 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:24:29.282190 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(4397619e0b3f5b1f3d75bc1707419550cf3bef4ceac061a7c64e17ca18d51412): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" 
name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Feb 23 18:24:29 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:24:29.282280 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(4397619e0b3f5b1f3d75bc1707419550cf3bef4ceac061a7c64e17ca18d51412): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:24:29 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:24:29.282321 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(4397619e0b3f5b1f3d75bc1707419550cf3bef4ceac061a7c64e17ca18d51412): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:24:29 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:24:29.282411 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(4397619e0b3f5b1f3d75bc1707419550cf3bef4ceac061a7c64e17ca18d51412): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" 
name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 18:24:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:29.282313767Z" level=info msg="runSandbox: removing pod sandbox from storage: 6379d7e5106125fa6b895ceb6595b341b9f19dae1c1d5b2d24280485db12eda6" id=3a34ef32-148d-431a-a78f-5f787a041abb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:24:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:29.283747769Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=3a34ef32-148d-431a-a78f-5f787a041abb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:24:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:29.283774005Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=3a34ef32-148d-431a-a78f-5f787a041abb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:24:29 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:24:29.283928 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(6379d7e5106125fa6b895ceb6595b341b9f19dae1c1d5b2d24280485db12eda6): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: 
[openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Feb 23 18:24:29 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:24:29.283979 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(6379d7e5106125fa6b895ceb6595b341b9f19dae1c1d5b2d24280485db12eda6): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:24:29 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:24:29.284013 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(6379d7e5106125fa6b895ceb6595b341b9f19dae1c1d5b2d24280485db12eda6): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:24:29 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:24:29.284082 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(6379d7e5106125fa6b895ceb6595b341b9f19dae1c1d5b2d24280485db12eda6): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 18:24:30 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-4397619e0b3f5b1f3d75bc1707419550cf3bef4ceac061a7c64e17ca18d51412-userdata-shm.mount: Deactivated successfully. Feb 23 18:24:30 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-6379d7e5106125fa6b895ceb6595b341b9f19dae1c1d5b2d24280485db12eda6-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:24:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:31.552982752Z" level=warning msg="Failed to find container exit file for e34dd1ed63fc3d49653c21fcfb2b0f6b75558648db1a34ed76f7ef5ef99d6e1e: timed out waiting for the condition" id=8c24dfcc-bab6-4f8b-b9cb-89eb7c4e0b56 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:24:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:24:34.217003 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 18:24:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:24:34.217108 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:24:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:34.217487292Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=eb79874f-803f-42be-9858-881b0669e9d9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:24:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:34.217547901Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=f9878c44-8d55-47ca-a1f4-367e110360d0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:24:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:34.217576050Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:24:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:34.217610238Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:24:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:34.225457668Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:5c3b3fe31b448fe061366c8c8a02b46f23a75bdfdeb5809044b6e48075a8bb58 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/519eb1d9-981e-441b-b22a-c879236e24bc Networks:[] 
RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:24:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:34.225800697Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:24:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:34.226836351Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:9edc0ed03f0cea715beae84f88d26e12efb25f888782c102b3a3a68f5329a34e UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/bd82112e-f442-428f-b31c-b527f358e285 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:24:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:34.226866476Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:24:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:24:34.872779 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:24:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:24:34.872831 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 18:24:39 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:24:39.217278 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:24:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:39.217654306Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=da689e9c-9f02-4d69-9c66-c9d0d791068b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:24:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:39.217718598Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:24:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:39.223047986Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:f5890a44d31d59e7390315a175b6fe8648f8558a86b6aafa62d5377c8c47b1b6 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/8a33027c-0c3c-4cf7-9fc4-8c4e1632e321 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:24:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:39.223075618Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:24:40 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:24:40.217110 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:24:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:40.217558569Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=44bbc029-e10b-4420-a3b6-45b203dd4859 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:24:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:40.217644786Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:24:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:40.223352494Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:713fcc4dd9aa3946ce0e737c2a8a668b7a6f5f9d4c23a489b9323358d63a7ed0 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/64484f48-6103-4dbc-bd5f-b8b8f9e3b065 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:24:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:40.223380720Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:24:43 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:24:43.216762 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:24:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:43.217226665Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=528b7295-50c5-4864-bf52-d0801535a510 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:24:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:43.217324450Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:24:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:43.222898204Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:332d98b30857571a515036a516fd045e8470c7e4a918d28399cdbe83262922bd UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/0a70c40e-000f-4259-831f-2a0774821793 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:24:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:24:43.222934395Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:24:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:24:44.872908 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:24:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:24:44.872978 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: 
connection refused"
Feb 23 18:24:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:24:54.872703 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 18:24:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:24:54.872763 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 18:24:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:24:56.291804 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:24:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:24:56.292034 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:24:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:24:56.292238 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:24:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:24:56.292307 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 18:25:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:25:04.872038 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 18:25:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:25:04.872090 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 18:25:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:25:04.872118 2199 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7"
Feb 23 18:25:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:25:04.872662 2199 kuberuntime_manager.go:659] "Message for Container of pod" containerName="csi-driver" containerStatusID={Type:cri-o ID:89651d291d2f8f16315faa2d3380eaf0dfe8754303677a7d16bae931d67697ee} pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" containerMessage="Container csi-driver failed liveness probe, will be restarted"
Feb 23 18:25:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:25:04.872838 2199 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" containerID="cri-o://89651d291d2f8f16315faa2d3380eaf0dfe8754303677a7d16bae931d67697ee" gracePeriod=30
Feb 23 18:25:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:25:04.873053884Z" level=info msg="Stopping container: 89651d291d2f8f16315faa2d3380eaf0dfe8754303677a7d16bae931d67697ee (timeout: 30s)" id=e462446d-aec1-42d0-b32f-13c6c488451c name=/runtime.v1.RuntimeService/StopContainer
Feb 23 18:25:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:25:08.635057131Z" level=warning msg="Failed to find container exit file for 89651d291d2f8f16315faa2d3380eaf0dfe8754303677a7d16bae931d67697ee: timed out waiting for the condition" id=e462446d-aec1-42d0-b32f-13c6c488451c name=/runtime.v1.RuntimeService/StopContainer
Feb 23 18:25:08 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-a282b88719cd6fb117a54fe439e5421fcaf685c51baaed140f2007bd8b055e8e-merged.mount: Deactivated successfully.
Feb 23 18:25:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:25:12.407917682Z" level=warning msg="Failed to find container exit file for 89651d291d2f8f16315faa2d3380eaf0dfe8754303677a7d16bae931d67697ee: timed out waiting for the condition" id=ebc3f5d5-22f3-4bfc-95e6-c57a8b9e24b0 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 18:25:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:25:12.426738977Z" level=warning msg="Failed to find container exit file for 89651d291d2f8f16315faa2d3380eaf0dfe8754303677a7d16bae931d67697ee: timed out waiting for the condition" id=e462446d-aec1-42d0-b32f-13c6c488451c name=/runtime.v1.RuntimeService/StopContainer
Feb 23 18:25:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:25:12.428652917Z" level=info msg="Stopped container 89651d291d2f8f16315faa2d3380eaf0dfe8754303677a7d16bae931d67697ee: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=e462446d-aec1-42d0-b32f-13c6c488451c name=/runtime.v1.RuntimeService/StopContainer
Feb 23 18:25:12 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:25:12.429072 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 18:25:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:25:16.157190793Z" level=warning msg="Failed to find container exit file for e34dd1ed63fc3d49653c21fcfb2b0f6b75558648db1a34ed76f7ef5ef99d6e1e: timed out waiting for the condition" id=162edd1c-bba7-4d05-98ca-616eb593f018 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 18:25:16 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:25:16.158197 2199 generic.go:332] "Generic (PLEG): container finished" podID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerID="89651d291d2f8f16315faa2d3380eaf0dfe8754303677a7d16bae931d67697ee" exitCode=-1
Feb 23 18:25:16 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:25:16.158237 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerDied Data:89651d291d2f8f16315faa2d3380eaf0dfe8754303677a7d16bae931d67697ee}
Feb 23 18:25:16 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:25:16.158295 2199 scope.go:115] "RemoveContainer" containerID="e34dd1ed63fc3d49653c21fcfb2b0f6b75558648db1a34ed76f7ef5ef99d6e1e"
Feb 23 18:25:17 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:25:17.160222 2199 scope.go:115] "RemoveContainer" containerID="89651d291d2f8f16315faa2d3380eaf0dfe8754303677a7d16bae931d67697ee"
Feb 23 18:25:17 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:25:17.160785 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 18:25:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:25:18.217134 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:25:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:25:18.217404 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:25:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:25:18.217581 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:25:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:25:18.217624 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 18:25:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:25:19.238851017Z" level=info msg="NetworkStart: stopping network for sandbox 5c3b3fe31b448fe061366c8c8a02b46f23a75bdfdeb5809044b6e48075a8bb58" id=eb79874f-803f-42be-9858-881b0669e9d9 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:25:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:25:19.238972271Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:5c3b3fe31b448fe061366c8c8a02b46f23a75bdfdeb5809044b6e48075a8bb58 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/519eb1d9-981e-441b-b22a-c879236e24bc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:25:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:25:19.238999351Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 18:25:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:25:19.239009366Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 18:25:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:25:19.239019866Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:25:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:25:19.241177804Z" level=info msg="NetworkStart: stopping network for sandbox 9edc0ed03f0cea715beae84f88d26e12efb25f888782c102b3a3a68f5329a34e" id=f9878c44-8d55-47ca-a1f4-367e110360d0 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:25:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:25:19.241312550Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:9edc0ed03f0cea715beae84f88d26e12efb25f888782c102b3a3a68f5329a34e UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/bd82112e-f442-428f-b31c-b527f358e285 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:25:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:25:19.241354135Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 18:25:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:25:19.241367951Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 18:25:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:25:19.241377367Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:25:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:25:19.907011553Z" level=warning msg="Failed to find container exit file for e34dd1ed63fc3d49653c21fcfb2b0f6b75558648db1a34ed76f7ef5ef99d6e1e: timed out waiting for the condition" id=0103c6bf-0b94-4cc7-917b-86d4bc00bc95 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 18:25:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:25:20.188838053Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3" id=35a7b51a-22ca-4bfa-ade5-f51ad6d5247d name=/runtime.v1.ImageService/ImageStatus
Feb 23 18:25:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:25:20.189023789Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:51cb7555d0a960fc0daae74a5b5c47c19fd1ae7bd31e2616ebc88f696f511e4d,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3],Size_:351828271,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=35a7b51a-22ca-4bfa-ade5-f51ad6d5247d name=/runtime.v1.ImageService/ImageStatus
Feb 23 18:25:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:25:23.669100860Z" level=warning msg="Failed to find container exit file for e34dd1ed63fc3d49653c21fcfb2b0f6b75558648db1a34ed76f7ef5ef99d6e1e: timed out waiting for the condition" id=a55efef7-99af-4ebd-b111-880d570b0a32 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 18:25:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:25:23.669662030Z" level=info msg="Removing container: e34dd1ed63fc3d49653c21fcfb2b0f6b75558648db1a34ed76f7ef5ef99d6e1e" id=25ee7d9b-25a3-42c6-bc78-a2e1e5f77004 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 18:25:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:25:24.236357765Z" level=info msg="NetworkStart: stopping network for sandbox f5890a44d31d59e7390315a175b6fe8648f8558a86b6aafa62d5377c8c47b1b6" id=da689e9c-9f02-4d69-9c66-c9d0d791068b name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:25:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:25:24.236469761Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:f5890a44d31d59e7390315a175b6fe8648f8558a86b6aafa62d5377c8c47b1b6 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/8a33027c-0c3c-4cf7-9fc4-8c4e1632e321 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:25:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:25:24.236497009Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 18:25:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:25:24.236504503Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 18:25:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:25:24.236511126Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:25:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:25:24.751799 2199 kubelet.go:2235] "SyncLoop DELETE" source="api" pods="[openshift-debug-zszpb/ip-10-0-136-68us-west-2computeinternal-debug]"
Feb 23 18:25:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:25:24.752146 2199 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-debug-zszpb/ip-10-0-136-68us-west-2computeinternal-debug" podUID=faee5572-050c-4fe2-b0a1-1aa9ae48ce75 containerName="container-00" containerID="cri-o://c8b13879970264fd3c559135dc905cdfa0c6887cc2b55b9538da744ea93bb7fa" gracePeriod=30
Feb 23 18:25:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:25:24.752464503Z" level=info msg="Stopping container: c8b13879970264fd3c559135dc905cdfa0c6887cc2b55b9538da744ea93bb7fa (timeout: 30s)" id=fe6d640e-6dd3-4307-8004-b34ccb6179c4 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 18:25:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:25:24.757275 2199 kubelet.go:2229] "SyncLoop REMOVE" source="api" pods="[openshift-debug-zszpb/ip-10-0-136-68us-west-2computeinternal-debug]"
Feb 23 18:25:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:25:24.837344 2199 log.go:198] http: superfluous response.WriteHeader call from github.com/emicklei/go-restful/v3.(*Response).WriteHeader (response.go:221)
Feb 23 18:25:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:25:25.236642728Z" level=info msg="NetworkStart: stopping network for sandbox 713fcc4dd9aa3946ce0e737c2a8a668b7a6f5f9d4c23a489b9323358d63a7ed0" id=44bbc029-e10b-4420-a3b6-45b203dd4859 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:25:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:25:25.236758239Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:713fcc4dd9aa3946ce0e737c2a8a668b7a6f5f9d4c23a489b9323358d63a7ed0 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/64484f48-6103-4dbc-bd5f-b8b8f9e3b065 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:25:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:25:25.236784457Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 18:25:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:25:25.236791295Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 18:25:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:25:25.236797327Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:25:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:25:26.292223 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:25:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:25:26.292518 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:25:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:25:26.292712 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:25:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:25:26.292742 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 18:25:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:25:27.430120517Z" level=warning msg="Failed to find container exit file for e34dd1ed63fc3d49653c21fcfb2b0f6b75558648db1a34ed76f7ef5ef99d6e1e: timed out waiting for the condition" id=25ee7d9b-25a3-42c6-bc78-a2e1e5f77004 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 18:25:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:25:27.453962634Z" level=info msg="Removed container e34dd1ed63fc3d49653c21fcfb2b0f6b75558648db1a34ed76f7ef5ef99d6e1e: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=25ee7d9b-25a3-42c6-bc78-a2e1e5f77004 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 18:25:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:25:28.235080130Z" level=info msg="NetworkStart: stopping network for sandbox 332d98b30857571a515036a516fd045e8470c7e4a918d28399cdbe83262922bd" id=528b7295-50c5-4864-bf52-d0801535a510 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:25:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:25:28.235200665Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:332d98b30857571a515036a516fd045e8470c7e4a918d28399cdbe83262922bd UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/0a70c40e-000f-4259-831f-2a0774821793 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:25:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:25:28.235228912Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 18:25:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:25:28.235235851Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 18:25:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:25:28.235265491Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:25:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:25:28.502996973Z" level=warning msg="Failed to find container exit file for c8b13879970264fd3c559135dc905cdfa0c6887cc2b55b9538da744ea93bb7fa: timed out waiting for the condition" id=fe6d640e-6dd3-4307-8004-b34ccb6179c4 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 18:25:28 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-2dcdab19dee872750e6be446e3ac5a3b5e12b0a7b1c6682d63043954d934aca3-merged.mount: Deactivated successfully.
Feb 23 18:25:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:25:31.928165495Z" level=warning msg="Failed to find container exit file for 89651d291d2f8f16315faa2d3380eaf0dfe8754303677a7d16bae931d67697ee: timed out waiting for the condition" id=b6f3b01f-3b4a-47fa-bf20-858dc76009e0 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 18:25:32 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:25:32.216847 2199 scope.go:115] "RemoveContainer" containerID="89651d291d2f8f16315faa2d3380eaf0dfe8754303677a7d16bae931d67697ee"
Feb 23 18:25:32 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:25:32.217470 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 18:25:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:25:32.277699569Z" level=warning msg="Failed to find container exit file for c8b13879970264fd3c559135dc905cdfa0c6887cc2b55b9538da744ea93bb7fa: timed out waiting for the condition" id=fe6d640e-6dd3-4307-8004-b34ccb6179c4 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 18:25:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:25:32.286125346Z" level=info msg="Stopped container c8b13879970264fd3c559135dc905cdfa0c6887cc2b55b9538da744ea93bb7fa: openshift-debug-zszpb/ip-10-0-136-68us-west-2computeinternal-debug/container-00" id=fe6d640e-6dd3-4307-8004-b34ccb6179c4 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 18:25:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:25:32.286662247Z" level=info msg="Stopping pod sandbox: 7e9cd4cb95592cb8be9f646e7a64e67231f77d052c753ea2192ff3df0d1145d1" id=be854531-1a9e-4eb4-a7aa-eebe8246a448 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 18:25:32 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-f6de8c462869bc63fa614dde5393f14eb0eb31adb503b730bd3879aa99faf1da-merged.mount: Deactivated successfully.
Feb 23 18:25:32 ip-10-0-136-68 systemd[1]: run-utsns-6ad974c1\x2d756d\x2d488c\x2d9bbd\x2d5092818bf273.mount: Deactivated successfully.
Feb 23 18:25:32 ip-10-0-136-68 systemd[1]: run-ipcns-6ad974c1\x2d756d\x2d488c\x2d9bbd\x2d5092818bf273.mount: Deactivated successfully.
Feb 23 18:25:32 ip-10-0-136-68 systemd[1]: run-netns-6ad974c1\x2d756d\x2d488c\x2d9bbd\x2d5092818bf273.mount: Deactivated successfully.
Feb 23 18:25:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:25:32.333411785Z" level=info msg="Stopped pod sandbox: 7e9cd4cb95592cb8be9f646e7a64e67231f77d052c753ea2192ff3df0d1145d1" id=be854531-1a9e-4eb4-a7aa-eebe8246a448 name=/runtime.v1.RuntimeService/StopPodSandbox
Feb 23 18:25:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:25:36.102059320Z" level=warning msg="Failed to find container exit file for c8b13879970264fd3c559135dc905cdfa0c6887cc2b55b9538da744ea93bb7fa: timed out waiting for the condition" id=4d613030-a913-48b1-94c9-396d7e930226 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 18:25:36 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:25:36.290113 2199 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/faee5572-050c-4fe2-b0a1-1aa9ae48ce75-host" (OuterVolumeSpecName: "host") pod "faee5572-050c-4fe2-b0a1-1aa9ae48ce75" (UID: "faee5572-050c-4fe2-b0a1-1aa9ae48ce75"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 18:25:36 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:25:36.290114 2199 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/faee5572-050c-4fe2-b0a1-1aa9ae48ce75-host\") pod \"faee5572-050c-4fe2-b0a1-1aa9ae48ce75\" (UID: \"faee5572-050c-4fe2-b0a1-1aa9ae48ce75\") "
Feb 23 18:25:36 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:25:36.290210 2199 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tdn6\" (UniqueName: \"kubernetes.io/projected/faee5572-050c-4fe2-b0a1-1aa9ae48ce75-kube-api-access-7tdn6\") pod \"faee5572-050c-4fe2-b0a1-1aa9ae48ce75\" (UID: \"faee5572-050c-4fe2-b0a1-1aa9ae48ce75\") "
Feb 23 18:25:36 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:25:36.290310 2199 reconciler_common.go:295] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/faee5572-050c-4fe2-b0a1-1aa9ae48ce75-host\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 18:25:36 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-faee5572\x2d050c\x2d4fe2\x2db0a1\x2d1aa9ae48ce75-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7tdn6.mount: Deactivated successfully.
Feb 23 18:25:36 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:25:36.298583 2199 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faee5572-050c-4fe2-b0a1-1aa9ae48ce75-kube-api-access-7tdn6" (OuterVolumeSpecName: "kube-api-access-7tdn6") pod "faee5572-050c-4fe2-b0a1-1aa9ae48ce75" (UID: "faee5572-050c-4fe2-b0a1-1aa9ae48ce75"). InnerVolumeSpecName "kube-api-access-7tdn6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 18:25:36 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:25:36.391185 2199 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-7tdn6\" (UniqueName: \"kubernetes.io/projected/faee5572-050c-4fe2-b0a1-1aa9ae48ce75-kube-api-access-7tdn6\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 18:25:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:25:36.680416415Z" level=warning msg="Failed to find container exit file for c8b13879970264fd3c559135dc905cdfa0c6887cc2b55b9538da744ea93bb7fa: timed out waiting for the condition" id=5ce7d34f-a340-4a03-8511-3edf486172f7 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 18:25:36 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:25:36.686792 2199 generic.go:332] "Generic (PLEG): container finished" podID=faee5572-050c-4fe2-b0a1-1aa9ae48ce75 containerID="c8b13879970264fd3c559135dc905cdfa0c6887cc2b55b9538da744ea93bb7fa" exitCode=-1
Feb 23 18:25:36 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:25:36.686840 2199 scope.go:115] "RemoveContainer" containerID="c8b13879970264fd3c559135dc905cdfa0c6887cc2b55b9538da744ea93bb7fa"
Feb 23 18:25:37 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:25:37.688807 2199 status_manager.go:678] "Status for pod is up-to-date; skipping" podUID=faee5572-050c-4fe2-b0a1-1aa9ae48ce75
Feb 23 18:25:37 ip-10-0-136-68 systemd[1]: Removed slice libcontainer container kubepods-besteffort-podfaee5572_050c_4fe2_b0a1_1aa9ae48ce75.slice.
Feb 23 18:25:37 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:25:37.693544 2199 status_manager.go:678] "Status for pod is up-to-date; skipping" podUID=faee5572-050c-4fe2-b0a1-1aa9ae48ce75
Feb 23 18:25:38 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:25:38.219781 2199 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=faee5572-050c-4fe2-b0a1-1aa9ae48ce75 path="/var/lib/kubelet/pods/faee5572-050c-4fe2-b0a1-1aa9ae48ce75/volumes"
Feb 23 18:25:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:25:40.435034789Z" level=warning msg="Failed to find container exit file for c8b13879970264fd3c559135dc905cdfa0c6887cc2b55b9538da744ea93bb7fa: timed out waiting for the condition" id=70d0369f-1a9f-47c3-946f-ab2fd3672137 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 18:25:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:25:44.202240763Z" level=warning msg="Failed to find container exit file for c8b13879970264fd3c559135dc905cdfa0c6887cc2b55b9538da744ea93bb7fa: timed out waiting for the condition" id=20bd2459-0dcd-43a7-bf7b-1de9ce7ca40a name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 18:25:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:25:44.209183815Z" level=info msg="Removing container: c8b13879970264fd3c559135dc905cdfa0c6887cc2b55b9538da744ea93bb7fa" id=ae4f860c-3a18-4f57-b826-4675bca80755 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 18:25:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:25:44.231993039Z" level=info msg="Removed container c8b13879970264fd3c559135dc905cdfa0c6887cc2b55b9538da744ea93bb7fa: openshift-debug-zszpb/ip-10-0-136-68us-west-2computeinternal-debug/container-00" id=ae4f860c-3a18-4f57-b826-4675bca80755 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 18:25:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:25:44.232189 2199 scope.go:115] "RemoveContainer" containerID="c8b13879970264fd3c559135dc905cdfa0c6887cc2b55b9538da744ea93bb7fa"
Feb 23 18:25:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:25:44.232486 2199 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8b13879970264fd3c559135dc905cdfa0c6887cc2b55b9538da744ea93bb7fa\": container with ID starting with c8b13879970264fd3c559135dc905cdfa0c6887cc2b55b9538da744ea93bb7fa not found: ID does not exist" containerID="c8b13879970264fd3c559135dc905cdfa0c6887cc2b55b9538da744ea93bb7fa"
Feb 23 18:25:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:25:44.232519 2199 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:c8b13879970264fd3c559135dc905cdfa0c6887cc2b55b9538da744ea93bb7fa} err="failed to get container status \"c8b13879970264fd3c559135dc905cdfa0c6887cc2b55b9538da744ea93bb7fa\": rpc error: code = NotFound desc = could not find container \"c8b13879970264fd3c559135dc905cdfa0c6887cc2b55b9538da744ea93bb7fa\": container with ID starting with c8b13879970264fd3c559135dc905cdfa0c6887cc2b55b9538da744ea93bb7fa not found: ID does not exist"
Feb 23 18:25:45 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:25:45.216673 2199 scope.go:115] "RemoveContainer" containerID="89651d291d2f8f16315faa2d3380eaf0dfe8754303677a7d16bae931d67697ee"
Feb 23 18:25:45 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:25:45.217073 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 18:25:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:25:56.291963 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:25:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:25:56.292194 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:25:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:25:56.292499 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:25:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:25:56.292527 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 18:25:58 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:25:58.216595 2199 scope.go:115] "RemoveContainer" containerID="89651d291d2f8f16315faa2d3380eaf0dfe8754303677a7d16bae931d67697ee"
Feb 23 18:25:58 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:25:58.217177 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 18:26:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:04.249977845Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(5c3b3fe31b448fe061366c8c8a02b46f23a75bdfdeb5809044b6e48075a8bb58): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=eb79874f-803f-42be-9858-881b0669e9d9 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:26:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:04.250032009Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 5c3b3fe31b448fe061366c8c8a02b46f23a75bdfdeb5809044b6e48075a8bb58" id=eb79874f-803f-42be-9858-881b0669e9d9 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:26:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:04.250584258Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(9edc0ed03f0cea715beae84f88d26e12efb25f888782c102b3a3a68f5329a34e): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f9878c44-8d55-47ca-a1f4-367e110360d0 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:26:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:04.250627501Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9edc0ed03f0cea715beae84f88d26e12efb25f888782c102b3a3a68f5329a34e" id=f9878c44-8d55-47ca-a1f4-367e110360d0 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:26:04 ip-10-0-136-68 systemd[1]: run-utsns-bd82112e\x2df442\x2d428f\x2db31c\x2db527f358e285.mount: Deactivated successfully.
Feb 23 18:26:04 ip-10-0-136-68 systemd[1]: run-utsns-519eb1d9\x2d981e\x2d441b\x2db22a\x2dc879236e24bc.mount: Deactivated successfully.
Feb 23 18:26:04 ip-10-0-136-68 systemd[1]: run-ipcns-bd82112e\x2df442\x2d428f\x2db31c\x2db527f358e285.mount: Deactivated successfully.
Feb 23 18:26:04 ip-10-0-136-68 systemd[1]: run-ipcns-519eb1d9\x2d981e\x2d441b\x2db22a\x2dc879236e24bc.mount: Deactivated successfully.
Feb 23 18:26:04 ip-10-0-136-68 systemd[1]: run-netns-519eb1d9\x2d981e\x2d441b\x2db22a\x2dc879236e24bc.mount: Deactivated successfully.
Feb 23 18:26:04 ip-10-0-136-68 systemd[1]: run-netns-bd82112e\x2df442\x2d428f\x2db31c\x2db527f358e285.mount: Deactivated successfully.
Feb 23 18:26:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:04.274335022Z" level=info msg="runSandbox: deleting pod ID 5c3b3fe31b448fe061366c8c8a02b46f23a75bdfdeb5809044b6e48075a8bb58 from idIndex" id=eb79874f-803f-42be-9858-881b0669e9d9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:26:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:04.274372765Z" level=info msg="runSandbox: removing pod sandbox 5c3b3fe31b448fe061366c8c8a02b46f23a75bdfdeb5809044b6e48075a8bb58" id=eb79874f-803f-42be-9858-881b0669e9d9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:26:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:04.274401750Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 5c3b3fe31b448fe061366c8c8a02b46f23a75bdfdeb5809044b6e48075a8bb58" id=eb79874f-803f-42be-9858-881b0669e9d9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:26:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:04.274426800Z" level=info msg="runSandbox: unmounting shmPath for sandbox 5c3b3fe31b448fe061366c8c8a02b46f23a75bdfdeb5809044b6e48075a8bb58" id=eb79874f-803f-42be-9858-881b0669e9d9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:26:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:04.277325426Z" level=info msg="runSandbox: deleting pod ID 9edc0ed03f0cea715beae84f88d26e12efb25f888782c102b3a3a68f5329a34e from idIndex" id=f9878c44-8d55-47ca-a1f4-367e110360d0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:26:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:04.277360443Z" level=info msg="runSandbox: removing pod sandbox 9edc0ed03f0cea715beae84f88d26e12efb25f888782c102b3a3a68f5329a34e" id=f9878c44-8d55-47ca-a1f4-367e110360d0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:26:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:04.277384850Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9edc0ed03f0cea715beae84f88d26e12efb25f888782c102b3a3a68f5329a34e" 
id=f9878c44-8d55-47ca-a1f4-367e110360d0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:26:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:04.277397229Z" level=info msg="runSandbox: unmounting shmPath for sandbox 9edc0ed03f0cea715beae84f88d26e12efb25f888782c102b3a3a68f5329a34e" id=f9878c44-8d55-47ca-a1f4-367e110360d0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:26:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:04.279301926Z" level=info msg="runSandbox: removing pod sandbox from storage: 5c3b3fe31b448fe061366c8c8a02b46f23a75bdfdeb5809044b6e48075a8bb58" id=eb79874f-803f-42be-9858-881b0669e9d9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:26:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:04.280813024Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=eb79874f-803f-42be-9858-881b0669e9d9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:26:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:04.280842557Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=eb79874f-803f-42be-9858-881b0669e9d9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:26:04 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:26:04.281066 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(5c3b3fe31b448fe061366c8c8a02b46f23a75bdfdeb5809044b6e48075a8bb58): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have 
you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Feb 23 18:26:04 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:26:04.281123 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(5c3b3fe31b448fe061366c8c8a02b46f23a75bdfdeb5809044b6e48075a8bb58): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:26:04 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:26:04.281147 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(5c3b3fe31b448fe061366c8c8a02b46f23a75bdfdeb5809044b6e48075a8bb58): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:26:04 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:26:04.281204 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(5c3b3fe31b448fe061366c8c8a02b46f23a75bdfdeb5809044b6e48075a8bb58): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 18:26:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:04.285329779Z" level=info msg="runSandbox: removing pod sandbox from storage: 9edc0ed03f0cea715beae84f88d26e12efb25f888782c102b3a3a68f5329a34e" id=f9878c44-8d55-47ca-a1f4-367e110360d0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:26:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:04.286719345Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=f9878c44-8d55-47ca-a1f4-367e110360d0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:26:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:04.286746745Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=f9878c44-8d55-47ca-a1f4-367e110360d0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:26:04 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:26:04.286931 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(9edc0ed03f0cea715beae84f88d26e12efb25f888782c102b3a3a68f5329a34e): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:26:04 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:26:04.286989 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(9edc0ed03f0cea715beae84f88d26e12efb25f888782c102b3a3a68f5329a34e): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:26:04 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:26:04.287024 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(9edc0ed03f0cea715beae84f88d26e12efb25f888782c102b3a3a68f5329a34e): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:26:04 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:26:04.287099 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(9edc0ed03f0cea715beae84f88d26e12efb25f888782c102b3a3a68f5329a34e): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 18:26:05 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-9edc0ed03f0cea715beae84f88d26e12efb25f888782c102b3a3a68f5329a34e-userdata-shm.mount: Deactivated successfully. Feb 23 18:26:05 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-5c3b3fe31b448fe061366c8c8a02b46f23a75bdfdeb5809044b6e48075a8bb58-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:26:09 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:26:09.217347 2199 scope.go:115] "RemoveContainer" containerID="89651d291d2f8f16315faa2d3380eaf0dfe8754303677a7d16bae931d67697ee" Feb 23 18:26:09 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:26:09.217746 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:26:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:09.245969657Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(f5890a44d31d59e7390315a175b6fe8648f8558a86b6aafa62d5377c8c47b1b6): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=da689e9c-9f02-4d69-9c66-c9d0d791068b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:26:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:09.246015557Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f5890a44d31d59e7390315a175b6fe8648f8558a86b6aafa62d5377c8c47b1b6" id=da689e9c-9f02-4d69-9c66-c9d0d791068b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:26:09 ip-10-0-136-68 systemd[1]: run-utsns-8a33027c\x2d0c3c\x2d4cf7\x2d9fc4\x2d8c4e1632e321.mount: Deactivated successfully. 
Feb 23 18:26:09 ip-10-0-136-68 systemd[1]: run-ipcns-8a33027c\x2d0c3c\x2d4cf7\x2d9fc4\x2d8c4e1632e321.mount: Deactivated successfully. Feb 23 18:26:09 ip-10-0-136-68 systemd[1]: run-netns-8a33027c\x2d0c3c\x2d4cf7\x2d9fc4\x2d8c4e1632e321.mount: Deactivated successfully. Feb 23 18:26:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:09.271343794Z" level=info msg="runSandbox: deleting pod ID f5890a44d31d59e7390315a175b6fe8648f8558a86b6aafa62d5377c8c47b1b6 from idIndex" id=da689e9c-9f02-4d69-9c66-c9d0d791068b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:26:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:09.271388992Z" level=info msg="runSandbox: removing pod sandbox f5890a44d31d59e7390315a175b6fe8648f8558a86b6aafa62d5377c8c47b1b6" id=da689e9c-9f02-4d69-9c66-c9d0d791068b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:26:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:09.271447483Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f5890a44d31d59e7390315a175b6fe8648f8558a86b6aafa62d5377c8c47b1b6" id=da689e9c-9f02-4d69-9c66-c9d0d791068b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:26:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:09.271469022Z" level=info msg="runSandbox: unmounting shmPath for sandbox f5890a44d31d59e7390315a175b6fe8648f8558a86b6aafa62d5377c8c47b1b6" id=da689e9c-9f02-4d69-9c66-c9d0d791068b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:26:09 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-f5890a44d31d59e7390315a175b6fe8648f8558a86b6aafa62d5377c8c47b1b6-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:26:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:09.278303548Z" level=info msg="runSandbox: removing pod sandbox from storage: f5890a44d31d59e7390315a175b6fe8648f8558a86b6aafa62d5377c8c47b1b6" id=da689e9c-9f02-4d69-9c66-c9d0d791068b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:26:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:09.279829028Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=da689e9c-9f02-4d69-9c66-c9d0d791068b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:26:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:09.279857358Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=da689e9c-9f02-4d69-9c66-c9d0d791068b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:26:09 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:26:09.280051 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(f5890a44d31d59e7390315a175b6fe8648f8558a86b6aafa62d5377c8c47b1b6): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:26:09 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:26:09.280102 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(f5890a44d31d59e7390315a175b6fe8648f8558a86b6aafa62d5377c8c47b1b6): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:26:09 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:26:09.280125 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(f5890a44d31d59e7390315a175b6fe8648f8558a86b6aafa62d5377c8c47b1b6): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:26:09 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:26:09.280180 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(f5890a44d31d59e7390315a175b6fe8648f8558a86b6aafa62d5377c8c47b1b6): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 18:26:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:10.245936939Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(713fcc4dd9aa3946ce0e737c2a8a668b7a6f5f9d4c23a489b9323358d63a7ed0): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=44bbc029-e10b-4420-a3b6-45b203dd4859 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:26:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:10.245992512Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 713fcc4dd9aa3946ce0e737c2a8a668b7a6f5f9d4c23a489b9323358d63a7ed0" id=44bbc029-e10b-4420-a3b6-45b203dd4859 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:26:10 ip-10-0-136-68 systemd[1]: run-utsns-64484f48\x2d6103\x2d4dbc\x2dbd5f\x2db8b8f9e3b065.mount: Deactivated successfully. Feb 23 18:26:10 ip-10-0-136-68 systemd[1]: run-ipcns-64484f48\x2d6103\x2d4dbc\x2dbd5f\x2db8b8f9e3b065.mount: Deactivated successfully. Feb 23 18:26:10 ip-10-0-136-68 systemd[1]: run-netns-64484f48\x2d6103\x2d4dbc\x2dbd5f\x2db8b8f9e3b065.mount: Deactivated successfully. 
Feb 23 18:26:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:10.275377421Z" level=info msg="runSandbox: deleting pod ID 713fcc4dd9aa3946ce0e737c2a8a668b7a6f5f9d4c23a489b9323358d63a7ed0 from idIndex" id=44bbc029-e10b-4420-a3b6-45b203dd4859 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:26:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:10.275428063Z" level=info msg="runSandbox: removing pod sandbox 713fcc4dd9aa3946ce0e737c2a8a668b7a6f5f9d4c23a489b9323358d63a7ed0" id=44bbc029-e10b-4420-a3b6-45b203dd4859 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:26:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:10.275458756Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 713fcc4dd9aa3946ce0e737c2a8a668b7a6f5f9d4c23a489b9323358d63a7ed0" id=44bbc029-e10b-4420-a3b6-45b203dd4859 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:26:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:10.275473357Z" level=info msg="runSandbox: unmounting shmPath for sandbox 713fcc4dd9aa3946ce0e737c2a8a668b7a6f5f9d4c23a489b9323358d63a7ed0" id=44bbc029-e10b-4420-a3b6-45b203dd4859 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:26:10 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-713fcc4dd9aa3946ce0e737c2a8a668b7a6f5f9d4c23a489b9323358d63a7ed0-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:26:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:10.281317089Z" level=info msg="runSandbox: removing pod sandbox from storage: 713fcc4dd9aa3946ce0e737c2a8a668b7a6f5f9d4c23a489b9323358d63a7ed0" id=44bbc029-e10b-4420-a3b6-45b203dd4859 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:26:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:10.282926719Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=44bbc029-e10b-4420-a3b6-45b203dd4859 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:26:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:10.282959457Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=44bbc029-e10b-4420-a3b6-45b203dd4859 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:26:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:26:10.283179 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(713fcc4dd9aa3946ce0e737c2a8a668b7a6f5f9d4c23a489b9323358d63a7ed0): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:26:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:26:10.283318 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(713fcc4dd9aa3946ce0e737c2a8a668b7a6f5f9d4c23a489b9323358d63a7ed0): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:26:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:26:10.283355 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(713fcc4dd9aa3946ce0e737c2a8a668b7a6f5f9d4c23a489b9323358d63a7ed0): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:26:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:26:10.283421 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(713fcc4dd9aa3946ce0e737c2a8a668b7a6f5f9d4c23a489b9323358d63a7ed0): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 18:26:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:13.244612951Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(332d98b30857571a515036a516fd045e8470c7e4a918d28399cdbe83262922bd): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=528b7295-50c5-4864-bf52-d0801535a510 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:26:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:13.244665569Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 332d98b30857571a515036a516fd045e8470c7e4a918d28399cdbe83262922bd" id=528b7295-50c5-4864-bf52-d0801535a510 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:26:13 ip-10-0-136-68 systemd[1]: run-utsns-0a70c40e\x2d000f\x2d4259\x2d831f\x2d2a0774821793.mount: Deactivated successfully. Feb 23 18:26:13 ip-10-0-136-68 systemd[1]: run-ipcns-0a70c40e\x2d000f\x2d4259\x2d831f\x2d2a0774821793.mount: Deactivated successfully. Feb 23 18:26:13 ip-10-0-136-68 systemd[1]: run-netns-0a70c40e\x2d000f\x2d4259\x2d831f\x2d2a0774821793.mount: Deactivated successfully. 
Feb 23 18:26:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:13.260327325Z" level=info msg="runSandbox: deleting pod ID 332d98b30857571a515036a516fd045e8470c7e4a918d28399cdbe83262922bd from idIndex" id=528b7295-50c5-4864-bf52-d0801535a510 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:26:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:13.260369838Z" level=info msg="runSandbox: removing pod sandbox 332d98b30857571a515036a516fd045e8470c7e4a918d28399cdbe83262922bd" id=528b7295-50c5-4864-bf52-d0801535a510 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:26:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:13.260409708Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 332d98b30857571a515036a516fd045e8470c7e4a918d28399cdbe83262922bd" id=528b7295-50c5-4864-bf52-d0801535a510 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:26:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:13.260431855Z" level=info msg="runSandbox: unmounting shmPath for sandbox 332d98b30857571a515036a516fd045e8470c7e4a918d28399cdbe83262922bd" id=528b7295-50c5-4864-bf52-d0801535a510 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:26:13 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-332d98b30857571a515036a516fd045e8470c7e4a918d28399cdbe83262922bd-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:26:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:13.276322827Z" level=info msg="runSandbox: removing pod sandbox from storage: 332d98b30857571a515036a516fd045e8470c7e4a918d28399cdbe83262922bd" id=528b7295-50c5-4864-bf52-d0801535a510 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:26:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:13.277915328Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=528b7295-50c5-4864-bf52-d0801535a510 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:26:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:13.277946187Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=528b7295-50c5-4864-bf52-d0801535a510 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:26:13 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:26:13.278152 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(332d98b30857571a515036a516fd045e8470c7e4a918d28399cdbe83262922bd): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:26:13 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:26:13.278214 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(332d98b30857571a515036a516fd045e8470c7e4a918d28399cdbe83262922bd): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:26:13 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:26:13.278288 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(332d98b30857571a515036a516fd045e8470c7e4a918d28399cdbe83262922bd): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:26:13 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:26:13.278348 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(332d98b30857571a515036a516fd045e8470c7e4a918d28399cdbe83262922bd): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 18:26:17 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:26:17.217115 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 18:26:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:17.217559240Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=9e9d9c9f-f6d5-4d89-a40a-61f06c7b50b2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:26:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:17.217638373Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:26:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:17.223470625Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:3a25f221f2cd51faad26e93a1692eacbe3adc6ee5e3428819f10dd634f2c1f84 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/cd41a400-3973-4312-820f-033cdf88f28f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:26:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:17.223498212Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:26:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:26:19.217183 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:26:19 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:26:19.217523 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:26:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:19.217697825Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=c6a5bdf1-1265-479f-a122-5f20e4c42939 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:26:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:19.217758743Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:26:19 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:26:19.218309 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:26:19 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:26:19.218568 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" 
containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:26:19 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:26:19.218617 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:26:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:19.222949721Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:9426df54d00e527cd1afdbbd682233467962a14d56e80a57754111d6f3b78566 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/ace61e35-ed4a-40f7-a078-8ff7c5a9f9bf Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:26:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:19.222974033Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:26:22 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:26:22.217359 2199 scope.go:115] "RemoveContainer" containerID="89651d291d2f8f16315faa2d3380eaf0dfe8754303677a7d16bae931d67697ee" Feb 23 18:26:22 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:26:22.217784 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 
23 18:26:23 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:26:23.216620 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:26:23 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:26:23.216620 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:26:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:23.217057939Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=ce39492d-e9b5-41e1-b05e-816876a2140e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:26:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:23.217117503Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:26:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:23.217058189Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=75e567d4-3d46-4ede-986d-d88aff6eda02 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:26:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:23.217213193Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:26:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:23.224624216Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:504d3597c4eda6f8de02314d2f8e0f2809fffb4a84731ff61005680888abaf09 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/e844b846-39a8-4643-b3ef-a98bdf53caee Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:26:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:23.224652200Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:26:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 
18:26:23.225078823Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:1fcd40d5cf16c1ce4ce2a921e90eb63e2c7db818f0fb94d7b41915ba1204e9e1 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/947efb3e-d4ed-4147-87fa-4e436cc47b5d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:26:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:23.225102456Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:26:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:23.843753284Z" level=info msg="Stopping pod sandbox: 7e9cd4cb95592cb8be9f646e7a64e67231f77d052c753ea2192ff3df0d1145d1" id=77895a2c-192f-4e1b-a75a-9a2ab5419a8b name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 18:26:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:23.843807450Z" level=info msg="Stopped pod sandbox (already stopped): 7e9cd4cb95592cb8be9f646e7a64e67231f77d052c753ea2192ff3df0d1145d1" id=77895a2c-192f-4e1b-a75a-9a2ab5419a8b name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 18:26:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:23.844099695Z" level=info msg="Removing pod sandbox: 7e9cd4cb95592cb8be9f646e7a64e67231f77d052c753ea2192ff3df0d1145d1" id=c4c9a9d5-b73f-4d3b-9e35-ab8096660d49 name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 18:26:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:23.846209117Z" level=info msg="Removed pod sandbox: 7e9cd4cb95592cb8be9f646e7a64e67231f77d052c753ea2192ff3df0d1145d1" id=c4c9a9d5-b73f-4d3b-9e35-ab8096660d49 name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 18:26:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:26:26.292271 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:26:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:26:26.292581 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:26:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:26:26.292810 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:26:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:26:26.292844 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:26:28 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:26:28.216698 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:26:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:28.217147726Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=3bcdd796-99ed-455e-ab45-471ec5dc248e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:26:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:28.217221408Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:26:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:28.222779079Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:13cdc5b7eb8a88addece7b74c4b4077f38b2eeb8d95c77ca69c2bc5ae56dfa00 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/21366f7c-6467-4e97-8362-06cba3b1e413 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:26:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:26:28.222806243Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:26:37 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:26:37.217211 2199 scope.go:115] "RemoveContainer" containerID="89651d291d2f8f16315faa2d3380eaf0dfe8754303677a7d16bae931d67697ee" Feb 23 18:26:37 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:26:37.217645 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:26:48 ip-10-0-136-68 kubenswrapper[2199]: I0223 
18:26:48.217168 2199 scope.go:115] "RemoveContainer" containerID="89651d291d2f8f16315faa2d3380eaf0dfe8754303677a7d16bae931d67697ee" Feb 23 18:26:48 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:26:48.217819 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:26:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:26:56.291988 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:26:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:26:56.292320 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:26:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:26:56.292544 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: 
open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:26:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:26:56.292590 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:27:00 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:27:00.217161 2199 scope.go:115] "RemoveContainer" containerID="89651d291d2f8f16315faa2d3380eaf0dfe8754303677a7d16bae931d67697ee" Feb 23 18:27:00 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:27:00.217591 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:27:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:02.234961177Z" level=info msg="NetworkStart: stopping network for sandbox 3a25f221f2cd51faad26e93a1692eacbe3adc6ee5e3428819f10dd634f2c1f84" id=9e9d9c9f-f6d5-4d89-a40a-61f06c7b50b2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:27:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:02.235086260Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:3a25f221f2cd51faad26e93a1692eacbe3adc6ee5e3428819f10dd634f2c1f84 UID:757b7544-c265-49ce-a1f0-22cca4bf919f 
NetNS:/var/run/netns/cd41a400-3973-4312-820f-033cdf88f28f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:27:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:02.235126118Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:27:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:02.235138716Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:27:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:02.235149909Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:27:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:04.234223772Z" level=info msg="NetworkStart: stopping network for sandbox 9426df54d00e527cd1afdbbd682233467962a14d56e80a57754111d6f3b78566" id=c6a5bdf1-1265-479f-a122-5f20e4c42939 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:27:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:04.234356325Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:9426df54d00e527cd1afdbbd682233467962a14d56e80a57754111d6f3b78566 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/ace61e35-ed4a-40f7-a078-8ff7c5a9f9bf Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:27:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:04.234384124Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:27:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:04.234390918Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:27:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:04.234397198Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI 
network \"multus-cni-network\" (type=multus)" Feb 23 18:27:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:08.238169473Z" level=info msg="NetworkStart: stopping network for sandbox 1fcd40d5cf16c1ce4ce2a921e90eb63e2c7db818f0fb94d7b41915ba1204e9e1" id=ce39492d-e9b5-41e1-b05e-816876a2140e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:27:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:08.238304234Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:1fcd40d5cf16c1ce4ce2a921e90eb63e2c7db818f0fb94d7b41915ba1204e9e1 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/947efb3e-d4ed-4147-87fa-4e436cc47b5d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:27:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:08.238340277Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:27:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:08.238349511Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:27:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:08.238356129Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:27:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:08.238737851Z" level=info msg="NetworkStart: stopping network for sandbox 504d3597c4eda6f8de02314d2f8e0f2809fffb4a84731ff61005680888abaf09" id=75e567d4-3d46-4ede-986d-d88aff6eda02 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:27:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:08.238834077Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:504d3597c4eda6f8de02314d2f8e0f2809fffb4a84731ff61005680888abaf09 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/e844b846-39a8-4643-b3ef-a98bdf53caee Networks:[] 
RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:27:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:08.238868907Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:27:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:08.238880552Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:27:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:08.238890843Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:27:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:13.234788255Z" level=info msg="NetworkStart: stopping network for sandbox 13cdc5b7eb8a88addece7b74c4b4077f38b2eeb8d95c77ca69c2bc5ae56dfa00" id=3bcdd796-99ed-455e-ab45-471ec5dc248e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:27:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:13.234909493Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:13cdc5b7eb8a88addece7b74c4b4077f38b2eeb8d95c77ca69c2bc5ae56dfa00 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/21366f7c-6467-4e97-8362-06cba3b1e413 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:27:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:13.234937497Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:27:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:13.235106309Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:27:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:13.235120952Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" 
(type=multus)" Feb 23 18:27:15 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:27:15.216628 2199 scope.go:115] "RemoveContainer" containerID="89651d291d2f8f16315faa2d3380eaf0dfe8754303677a7d16bae931d67697ee" Feb 23 18:27:15 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:27:15.217007 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:27:26 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:27:26.217290 2199 scope.go:115] "RemoveContainer" containerID="89651d291d2f8f16315faa2d3380eaf0dfe8754303677a7d16bae931d67697ee" Feb 23 18:27:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:27:26.217892 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:27:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:27:26.292342 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:27:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:27:26.292638 2199 remote_runtime.go:479] "ExecSync cmd from 
runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:27:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:27:26.292881 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:27:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:27:26.292911 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:27:37 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:27:37.216576 2199 scope.go:115] "RemoveContainer" containerID="89651d291d2f8f16315faa2d3380eaf0dfe8754303677a7d16bae931d67697ee" Feb 23 18:27:37 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:27:37.216945 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver 
pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:27:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:27:40.218110 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:27:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:27:40.218452 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:27:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:27:40.218696 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:27:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:27:40.218731 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: 
checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:27:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:47.245188701Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(3a25f221f2cd51faad26e93a1692eacbe3adc6ee5e3428819f10dd634f2c1f84): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=9e9d9c9f-f6d5-4d89-a40a-61f06c7b50b2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:27:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:47.245232574Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 3a25f221f2cd51faad26e93a1692eacbe3adc6ee5e3428819f10dd634f2c1f84" id=9e9d9c9f-f6d5-4d89-a40a-61f06c7b50b2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:27:47 ip-10-0-136-68 systemd[1]: run-utsns-cd41a400\x2d3973\x2d4312\x2d820f\x2d033cdf88f28f.mount: Deactivated successfully. Feb 23 18:27:47 ip-10-0-136-68 systemd[1]: run-ipcns-cd41a400\x2d3973\x2d4312\x2d820f\x2d033cdf88f28f.mount: Deactivated successfully. Feb 23 18:27:47 ip-10-0-136-68 systemd[1]: run-netns-cd41a400\x2d3973\x2d4312\x2d820f\x2d033cdf88f28f.mount: Deactivated successfully. 
Feb 23 18:27:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:47.265320887Z" level=info msg="runSandbox: deleting pod ID 3a25f221f2cd51faad26e93a1692eacbe3adc6ee5e3428819f10dd634f2c1f84 from idIndex" id=9e9d9c9f-f6d5-4d89-a40a-61f06c7b50b2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:27:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:47.265358006Z" level=info msg="runSandbox: removing pod sandbox 3a25f221f2cd51faad26e93a1692eacbe3adc6ee5e3428819f10dd634f2c1f84" id=9e9d9c9f-f6d5-4d89-a40a-61f06c7b50b2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:27:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:47.265386862Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 3a25f221f2cd51faad26e93a1692eacbe3adc6ee5e3428819f10dd634f2c1f84" id=9e9d9c9f-f6d5-4d89-a40a-61f06c7b50b2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:27:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:47.265400907Z" level=info msg="runSandbox: unmounting shmPath for sandbox 3a25f221f2cd51faad26e93a1692eacbe3adc6ee5e3428819f10dd634f2c1f84" id=9e9d9c9f-f6d5-4d89-a40a-61f06c7b50b2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:27:47 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-3a25f221f2cd51faad26e93a1692eacbe3adc6ee5e3428819f10dd634f2c1f84-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:27:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:47.277319700Z" level=info msg="runSandbox: removing pod sandbox from storage: 3a25f221f2cd51faad26e93a1692eacbe3adc6ee5e3428819f10dd634f2c1f84" id=9e9d9c9f-f6d5-4d89-a40a-61f06c7b50b2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:27:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:47.278873843Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=9e9d9c9f-f6d5-4d89-a40a-61f06c7b50b2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:27:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:47.278904480Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=9e9d9c9f-f6d5-4d89-a40a-61f06c7b50b2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:27:47 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:27:47.279140 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(3a25f221f2cd51faad26e93a1692eacbe3adc6ee5e3428819f10dd634f2c1f84): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:27:47 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:27:47.279201 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(3a25f221f2cd51faad26e93a1692eacbe3adc6ee5e3428819f10dd634f2c1f84): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:27:47 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:27:47.279225 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(3a25f221f2cd51faad26e93a1692eacbe3adc6ee5e3428819f10dd634f2c1f84): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:27:47 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:27:47.279313 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(3a25f221f2cd51faad26e93a1692eacbe3adc6ee5e3428819f10dd634f2c1f84): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 18:27:49 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:27:49.216739 2199 scope.go:115] "RemoveContainer" containerID="89651d291d2f8f16315faa2d3380eaf0dfe8754303677a7d16bae931d67697ee" Feb 23 18:27:49 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:27:49.217110 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:27:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:49.243752502Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(9426df54d00e527cd1afdbbd682233467962a14d56e80a57754111d6f3b78566): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c6a5bdf1-1265-479f-a122-5f20e4c42939 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:27:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:49.243804231Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9426df54d00e527cd1afdbbd682233467962a14d56e80a57754111d6f3b78566" id=c6a5bdf1-1265-479f-a122-5f20e4c42939 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:27:49 ip-10-0-136-68 systemd[1]: 
run-utsns-ace61e35\x2ded4a\x2d40f7\x2da078\x2d8ff7c5a9f9bf.mount: Deactivated successfully. Feb 23 18:27:49 ip-10-0-136-68 systemd[1]: run-ipcns-ace61e35\x2ded4a\x2d40f7\x2da078\x2d8ff7c5a9f9bf.mount: Deactivated successfully. Feb 23 18:27:49 ip-10-0-136-68 systemd[1]: run-netns-ace61e35\x2ded4a\x2d40f7\x2da078\x2d8ff7c5a9f9bf.mount: Deactivated successfully. Feb 23 18:27:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:49.266320268Z" level=info msg="runSandbox: deleting pod ID 9426df54d00e527cd1afdbbd682233467962a14d56e80a57754111d6f3b78566 from idIndex" id=c6a5bdf1-1265-479f-a122-5f20e4c42939 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:27:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:49.266352257Z" level=info msg="runSandbox: removing pod sandbox 9426df54d00e527cd1afdbbd682233467962a14d56e80a57754111d6f3b78566" id=c6a5bdf1-1265-479f-a122-5f20e4c42939 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:27:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:49.266382179Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9426df54d00e527cd1afdbbd682233467962a14d56e80a57754111d6f3b78566" id=c6a5bdf1-1265-479f-a122-5f20e4c42939 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:27:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:49.266397750Z" level=info msg="runSandbox: unmounting shmPath for sandbox 9426df54d00e527cd1afdbbd682233467962a14d56e80a57754111d6f3b78566" id=c6a5bdf1-1265-479f-a122-5f20e4c42939 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:27:49 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-9426df54d00e527cd1afdbbd682233467962a14d56e80a57754111d6f3b78566-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:27:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:49.275319749Z" level=info msg="runSandbox: removing pod sandbox from storage: 9426df54d00e527cd1afdbbd682233467962a14d56e80a57754111d6f3b78566" id=c6a5bdf1-1265-479f-a122-5f20e4c42939 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:27:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:49.276899365Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=c6a5bdf1-1265-479f-a122-5f20e4c42939 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:27:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:49.276926761Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=c6a5bdf1-1265-479f-a122-5f20e4c42939 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:27:49 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:27:49.277133 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(9426df54d00e527cd1afdbbd682233467962a14d56e80a57754111d6f3b78566): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:27:49 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:27:49.277188 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(9426df54d00e527cd1afdbbd682233467962a14d56e80a57754111d6f3b78566): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:27:49 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:27:49.277211 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(9426df54d00e527cd1afdbbd682233467962a14d56e80a57754111d6f3b78566): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:27:49 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:27:49.277294 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(9426df54d00e527cd1afdbbd682233467962a14d56e80a57754111d6f3b78566): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 18:27:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:53.249554536Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(1fcd40d5cf16c1ce4ce2a921e90eb63e2c7db818f0fb94d7b41915ba1204e9e1): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=ce39492d-e9b5-41e1-b05e-816876a2140e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:27:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:53.249616012Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 1fcd40d5cf16c1ce4ce2a921e90eb63e2c7db818f0fb94d7b41915ba1204e9e1" id=ce39492d-e9b5-41e1-b05e-816876a2140e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:27:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:53.251708106Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(504d3597c4eda6f8de02314d2f8e0f2809fffb4a84731ff61005680888abaf09): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" 
id=75e567d4-3d46-4ede-986d-d88aff6eda02 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:27:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:53.251738261Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 504d3597c4eda6f8de02314d2f8e0f2809fffb4a84731ff61005680888abaf09" id=75e567d4-3d46-4ede-986d-d88aff6eda02 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:27:53 ip-10-0-136-68 systemd[1]: run-utsns-947efb3e\x2dd4ed\x2d4147\x2d87fa\x2d4e436cc47b5d.mount: Deactivated successfully. Feb 23 18:27:53 ip-10-0-136-68 systemd[1]: run-utsns-e844b846\x2d39a8\x2d4643\x2db3ef\x2da98bdf53caee.mount: Deactivated successfully. Feb 23 18:27:53 ip-10-0-136-68 systemd[1]: run-ipcns-947efb3e\x2dd4ed\x2d4147\x2d87fa\x2d4e436cc47b5d.mount: Deactivated successfully. Feb 23 18:27:53 ip-10-0-136-68 systemd[1]: run-ipcns-e844b846\x2d39a8\x2d4643\x2db3ef\x2da98bdf53caee.mount: Deactivated successfully. Feb 23 18:27:53 ip-10-0-136-68 systemd[1]: run-netns-947efb3e\x2dd4ed\x2d4147\x2d87fa\x2d4e436cc47b5d.mount: Deactivated successfully. Feb 23 18:27:53 ip-10-0-136-68 systemd[1]: run-netns-e844b846\x2d39a8\x2d4643\x2db3ef\x2da98bdf53caee.mount: Deactivated successfully. 
Feb 23 18:27:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:53.267328382Z" level=info msg="runSandbox: deleting pod ID 504d3597c4eda6f8de02314d2f8e0f2809fffb4a84731ff61005680888abaf09 from idIndex" id=75e567d4-3d46-4ede-986d-d88aff6eda02 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:27:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:53.267369236Z" level=info msg="runSandbox: removing pod sandbox 504d3597c4eda6f8de02314d2f8e0f2809fffb4a84731ff61005680888abaf09" id=75e567d4-3d46-4ede-986d-d88aff6eda02 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:27:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:53.267413185Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 504d3597c4eda6f8de02314d2f8e0f2809fffb4a84731ff61005680888abaf09" id=75e567d4-3d46-4ede-986d-d88aff6eda02 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:27:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:53.267433618Z" level=info msg="runSandbox: unmounting shmPath for sandbox 504d3597c4eda6f8de02314d2f8e0f2809fffb4a84731ff61005680888abaf09" id=75e567d4-3d46-4ede-986d-d88aff6eda02 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:27:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:53.267332026Z" level=info msg="runSandbox: deleting pod ID 1fcd40d5cf16c1ce4ce2a921e90eb63e2c7db818f0fb94d7b41915ba1204e9e1 from idIndex" id=ce39492d-e9b5-41e1-b05e-816876a2140e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:27:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:53.267502952Z" level=info msg="runSandbox: removing pod sandbox 1fcd40d5cf16c1ce4ce2a921e90eb63e2c7db818f0fb94d7b41915ba1204e9e1" id=ce39492d-e9b5-41e1-b05e-816876a2140e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:27:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:53.267530452Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 1fcd40d5cf16c1ce4ce2a921e90eb63e2c7db818f0fb94d7b41915ba1204e9e1" 
id=ce39492d-e9b5-41e1-b05e-816876a2140e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:27:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:53.267546639Z" level=info msg="runSandbox: unmounting shmPath for sandbox 1fcd40d5cf16c1ce4ce2a921e90eb63e2c7db818f0fb94d7b41915ba1204e9e1" id=ce39492d-e9b5-41e1-b05e-816876a2140e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:27:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:53.272298493Z" level=info msg="runSandbox: removing pod sandbox from storage: 504d3597c4eda6f8de02314d2f8e0f2809fffb4a84731ff61005680888abaf09" id=75e567d4-3d46-4ede-986d-d88aff6eda02 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:27:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:53.273308549Z" level=info msg="runSandbox: removing pod sandbox from storage: 1fcd40d5cf16c1ce4ce2a921e90eb63e2c7db818f0fb94d7b41915ba1204e9e1" id=ce39492d-e9b5-41e1-b05e-816876a2140e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:27:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:53.273819429Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=75e567d4-3d46-4ede-986d-d88aff6eda02 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:27:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:53.273845524Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=75e567d4-3d46-4ede-986d-d88aff6eda02 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:27:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:27:53.274078 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(504d3597c4eda6f8de02314d2f8e0f2809fffb4a84731ff61005680888abaf09): error adding pod 
openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Feb 23 18:27:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:27:53.274151 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(504d3597c4eda6f8de02314d2f8e0f2809fffb4a84731ff61005680888abaf09): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:27:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:27:53.274184 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(504d3597c4eda6f8de02314d2f8e0f2809fffb4a84731ff61005680888abaf09): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:27:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:27:53.274330 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(504d3597c4eda6f8de02314d2f8e0f2809fffb4a84731ff61005680888abaf09): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 18:27:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:53.275277511Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=ce39492d-e9b5-41e1-b05e-816876a2140e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:27:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:53.275307058Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=ce39492d-e9b5-41e1-b05e-816876a2140e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:27:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:27:53.275477 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(1fcd40d5cf16c1ce4ce2a921e90eb63e2c7db818f0fb94d7b41915ba1204e9e1): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:27:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:27:53.275519 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(1fcd40d5cf16c1ce4ce2a921e90eb63e2c7db818f0fb94d7b41915ba1204e9e1): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:27:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:27:53.275539 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(1fcd40d5cf16c1ce4ce2a921e90eb63e2c7db818f0fb94d7b41915ba1204e9e1): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:27:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:27:53.275593 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(1fcd40d5cf16c1ce4ce2a921e90eb63e2c7db818f0fb94d7b41915ba1204e9e1): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 18:27:54 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-1fcd40d5cf16c1ce4ce2a921e90eb63e2c7db818f0fb94d7b41915ba1204e9e1-userdata-shm.mount: Deactivated successfully. Feb 23 18:27:54 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-504d3597c4eda6f8de02314d2f8e0f2809fffb4a84731ff61005680888abaf09-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:27:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:27:56.291988 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:27:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:27:56.292317 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:27:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:27:56.292557 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:27:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:27:56.292588 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" 
pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:27:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:58.247647205Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(13cdc5b7eb8a88addece7b74c4b4077f38b2eeb8d95c77ca69c2bc5ae56dfa00): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=3bcdd796-99ed-455e-ab45-471ec5dc248e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:27:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:58.247695159Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 13cdc5b7eb8a88addece7b74c4b4077f38b2eeb8d95c77ca69c2bc5ae56dfa00" id=3bcdd796-99ed-455e-ab45-471ec5dc248e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:27:58 ip-10-0-136-68 systemd[1]: run-utsns-21366f7c\x2d6467\x2d4e97\x2d8362\x2d06cba3b1e413.mount: Deactivated successfully. Feb 23 18:27:58 ip-10-0-136-68 systemd[1]: run-ipcns-21366f7c\x2d6467\x2d4e97\x2d8362\x2d06cba3b1e413.mount: Deactivated successfully. Feb 23 18:27:58 ip-10-0-136-68 systemd[1]: run-netns-21366f7c\x2d6467\x2d4e97\x2d8362\x2d06cba3b1e413.mount: Deactivated successfully. 
Feb 23 18:27:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:58.276329454Z" level=info msg="runSandbox: deleting pod ID 13cdc5b7eb8a88addece7b74c4b4077f38b2eeb8d95c77ca69c2bc5ae56dfa00 from idIndex" id=3bcdd796-99ed-455e-ab45-471ec5dc248e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:27:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:58.276367604Z" level=info msg="runSandbox: removing pod sandbox 13cdc5b7eb8a88addece7b74c4b4077f38b2eeb8d95c77ca69c2bc5ae56dfa00" id=3bcdd796-99ed-455e-ab45-471ec5dc248e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:27:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:58.276409534Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 13cdc5b7eb8a88addece7b74c4b4077f38b2eeb8d95c77ca69c2bc5ae56dfa00" id=3bcdd796-99ed-455e-ab45-471ec5dc248e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:27:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:58.276431447Z" level=info msg="runSandbox: unmounting shmPath for sandbox 13cdc5b7eb8a88addece7b74c4b4077f38b2eeb8d95c77ca69c2bc5ae56dfa00" id=3bcdd796-99ed-455e-ab45-471ec5dc248e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:27:58 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-13cdc5b7eb8a88addece7b74c4b4077f38b2eeb8d95c77ca69c2bc5ae56dfa00-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:27:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:58.282295898Z" level=info msg="runSandbox: removing pod sandbox from storage: 13cdc5b7eb8a88addece7b74c4b4077f38b2eeb8d95c77ca69c2bc5ae56dfa00" id=3bcdd796-99ed-455e-ab45-471ec5dc248e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:27:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:58.283700112Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=3bcdd796-99ed-455e-ab45-471ec5dc248e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:27:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:27:58.283732855Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=3bcdd796-99ed-455e-ab45-471ec5dc248e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:27:58 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:27:58.283926 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(13cdc5b7eb8a88addece7b74c4b4077f38b2eeb8d95c77ca69c2bc5ae56dfa00): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:27:58 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:27:58.283977 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(13cdc5b7eb8a88addece7b74c4b4077f38b2eeb8d95c77ca69c2bc5ae56dfa00): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:27:58 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:27:58.284008 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(13cdc5b7eb8a88addece7b74c4b4077f38b2eeb8d95c77ca69c2bc5ae56dfa00): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:27:58 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:27:58.284067 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(13cdc5b7eb8a88addece7b74c4b4077f38b2eeb8d95c77ca69c2bc5ae56dfa00): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 18:28:01 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:28:01.217237 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 18:28:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:28:01.217648383Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=70788f8a-3036-4a72-8af7-88c1f7e07d6e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:28:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:28:01.217710352Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:28:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:28:01.223117738Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:e39f13f1f8149d9bc372cdec9f81ec7b4ea48862b516cbe1fba85de280c1faa9 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/25076d89-530d-48d8-b3fb-c3c3376b2a2a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:28:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:28:01.223323986Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:28:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:28:04.216549 2199 scope.go:115] "RemoveContainer" containerID="89651d291d2f8f16315faa2d3380eaf0dfe8754303677a7d16bae931d67697ee" Feb 23 18:28:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:28:04.216588 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:28:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:28:04.216981764Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=876a5a90-bf0a-459a-a954-da5261133365 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:28:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:28:04.217050686Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:28:04 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:28:04.217116 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:28:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:28:04.222376061Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:83e23a57d1c6c56e1f6adead576c786870ac84ec596b521c782daa14b9ad5de3 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/d90cdabb-b06f-4f20-a713-536eface8cd5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:28:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:28:04.222403025Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:28:05 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:28:05.216855 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:28:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:28:05.217257894Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=958642a9-6890-422d-8dce-08db10757ca6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:28:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:28:05.217327375Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:28:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:28:05.222666838Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:eb90fb164f71740401766d9513408cbddab3a962a98d7cf187026900d33dcb15 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/2cb11c45-dbe7-43a7-adc2-6b17e260cdc1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:28:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:28:05.222695016Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:28:08 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:28:08.216984 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:28:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:28:08.217435401Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=1f7d148a-5cce-4b27-8e6e-131a54e24570 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:28:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:28:08.217500349Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:28:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:28:08.222880348Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:c8549429baffa2eb1874c9c33f171cb586d1e6391178bbea8e3cddda067eddd8 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/3402bbe7-42c9-406b-9e33-e6c01c0db452 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:28:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:28:08.222906642Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:28:12 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:28:12.216930 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:28:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:28:12.217427611Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=ad9563af-48dc-496f-930a-57c65250166a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:28:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:28:12.217491406Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:28:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:28:12.223291160Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:6470abf7be75833c6b5acdd3feb73b906b95e94e542dea40f2cdfe454207fd43 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/93b97a2c-5aed-47b0-9689-2b3a74ef4ad6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:28:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:28:12.223329359Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:28:16 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:28:16.216624 2199 scope.go:115] "RemoveContainer" containerID="89651d291d2f8f16315faa2d3380eaf0dfe8754303677a7d16bae931d67697ee" Feb 23 18:28:16 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:28:16.217163 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:28:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 
18:28:26.292787 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:28:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:28:26.293113 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:28:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:28:26.293370 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:28:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:28:26.293409 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" 
podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:28:31 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:28:31.217169 2199 scope.go:115] "RemoveContainer" containerID="89651d291d2f8f16315faa2d3380eaf0dfe8754303677a7d16bae931d67697ee" Feb 23 18:28:31 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:28:31.217774 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:28:45 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:28:45.216615 2199 scope.go:115] "RemoveContainer" containerID="89651d291d2f8f16315faa2d3380eaf0dfe8754303677a7d16bae931d67697ee" Feb 23 18:28:45 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:28:45.217160 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:28:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:28:46.235601634Z" level=info msg="NetworkStart: stopping network for sandbox e39f13f1f8149d9bc372cdec9f81ec7b4ea48862b516cbe1fba85de280c1faa9" id=70788f8a-3036-4a72-8af7-88c1f7e07d6e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:28:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:28:46.235729073Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:e39f13f1f8149d9bc372cdec9f81ec7b4ea48862b516cbe1fba85de280c1faa9 UID:757b7544-c265-49ce-a1f0-22cca4bf919f 
NetNS:/var/run/netns/25076d89-530d-48d8-b3fb-c3c3376b2a2a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:28:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:28:46.235767139Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:28:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:28:46.235780848Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:28:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:28:46.235791888Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:28:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:28:49.234714577Z" level=info msg="NetworkStart: stopping network for sandbox 83e23a57d1c6c56e1f6adead576c786870ac84ec596b521c782daa14b9ad5de3" id=876a5a90-bf0a-459a-a954-da5261133365 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:28:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:28:49.234840597Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:83e23a57d1c6c56e1f6adead576c786870ac84ec596b521c782daa14b9ad5de3 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/d90cdabb-b06f-4f20-a713-536eface8cd5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:28:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:28:49.234868354Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:28:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:28:49.234880200Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:28:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:28:49.234889120Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI 
network \"multus-cni-network\" (type=multus)" Feb 23 18:28:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:28:50.236348425Z" level=info msg="NetworkStart: stopping network for sandbox eb90fb164f71740401766d9513408cbddab3a962a98d7cf187026900d33dcb15" id=958642a9-6890-422d-8dce-08db10757ca6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:28:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:28:50.236454149Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:eb90fb164f71740401766d9513408cbddab3a962a98d7cf187026900d33dcb15 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/2cb11c45-dbe7-43a7-adc2-6b17e260cdc1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:28:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:28:50.236492321Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:28:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:28:50.236501091Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:28:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:28:50.236507607Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:28:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:28:53.234366338Z" level=info msg="NetworkStart: stopping network for sandbox c8549429baffa2eb1874c9c33f171cb586d1e6391178bbea8e3cddda067eddd8" id=1f7d148a-5cce-4b27-8e6e-131a54e24570 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:28:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:28:53.234485093Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:c8549429baffa2eb1874c9c33f171cb586d1e6391178bbea8e3cddda067eddd8 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/3402bbe7-42c9-406b-9e33-e6c01c0db452 Networks:[] 
RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:28:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:28:53.234510410Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:28:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:28:53.234518711Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:28:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:28:53.234527940Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:28:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:28:56.292043 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:28:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:28:56.292359 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:28:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:28:56.292567 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:28:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:28:56.292606 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 18:28:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:28:57.234752775Z" level=info msg="NetworkStart: stopping network for sandbox 6470abf7be75833c6b5acdd3feb73b906b95e94e542dea40f2cdfe454207fd43" id=ad9563af-48dc-496f-930a-57c65250166a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:28:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:28:57.234893152Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:6470abf7be75833c6b5acdd3feb73b906b95e94e542dea40f2cdfe454207fd43 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/93b97a2c-5aed-47b0-9689-2b3a74ef4ad6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:28:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:28:57.234937472Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 18:28:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:28:57.234948519Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 18:28:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:28:57.234957912Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:28:59 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:28:59.217373 2199 scope.go:115] "RemoveContainer" containerID="89651d291d2f8f16315faa2d3380eaf0dfe8754303677a7d16bae931d67697ee"
Feb 23 18:28:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:28:59.217765 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 18:29:05 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:29:05.217502 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:29:05 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:29:05.217842 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:29:05 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:29:05.218081 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:29:05 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:29:05.218121 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 18:29:11 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:29:11.216692 2199 scope.go:115] "RemoveContainer" containerID="89651d291d2f8f16315faa2d3380eaf0dfe8754303677a7d16bae931d67697ee"
Feb 23 18:29:11 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:29:11.217063 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 18:29:25 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:29:25.216701 2199 scope.go:115] "RemoveContainer" containerID="89651d291d2f8f16315faa2d3380eaf0dfe8754303677a7d16bae931d67697ee"
Feb 23 18:29:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:29:25.217092 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 18:29:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:29:26.292309 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:29:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:29:26.292575 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:29:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:29:26.292799 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:29:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:29:26.292830 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 18:29:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:31.245695321Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(e39f13f1f8149d9bc372cdec9f81ec7b4ea48862b516cbe1fba85de280c1faa9): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=70788f8a-3036-4a72-8af7-88c1f7e07d6e name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:29:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:31.245742697Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e39f13f1f8149d9bc372cdec9f81ec7b4ea48862b516cbe1fba85de280c1faa9" id=70788f8a-3036-4a72-8af7-88c1f7e07d6e name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:29:31 ip-10-0-136-68 systemd[1]: run-utsns-25076d89\x2d530d\x2d48d8\x2db3fb\x2dc3c3376b2a2a.mount: Deactivated successfully.
Feb 23 18:29:31 ip-10-0-136-68 systemd[1]: run-ipcns-25076d89\x2d530d\x2d48d8\x2db3fb\x2dc3c3376b2a2a.mount: Deactivated successfully.
Feb 23 18:29:31 ip-10-0-136-68 systemd[1]: run-netns-25076d89\x2d530d\x2d48d8\x2db3fb\x2dc3c3376b2a2a.mount: Deactivated successfully.
Feb 23 18:29:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:31.268315483Z" level=info msg="runSandbox: deleting pod ID e39f13f1f8149d9bc372cdec9f81ec7b4ea48862b516cbe1fba85de280c1faa9 from idIndex" id=70788f8a-3036-4a72-8af7-88c1f7e07d6e name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:29:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:31.268350714Z" level=info msg="runSandbox: removing pod sandbox e39f13f1f8149d9bc372cdec9f81ec7b4ea48862b516cbe1fba85de280c1faa9" id=70788f8a-3036-4a72-8af7-88c1f7e07d6e name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:29:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:31.268381992Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e39f13f1f8149d9bc372cdec9f81ec7b4ea48862b516cbe1fba85de280c1faa9" id=70788f8a-3036-4a72-8af7-88c1f7e07d6e name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:29:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:31.268396623Z" level=info msg="runSandbox: unmounting shmPath for sandbox e39f13f1f8149d9bc372cdec9f81ec7b4ea48862b516cbe1fba85de280c1faa9" id=70788f8a-3036-4a72-8af7-88c1f7e07d6e name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:29:31 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-e39f13f1f8149d9bc372cdec9f81ec7b4ea48862b516cbe1fba85de280c1faa9-userdata-shm.mount: Deactivated successfully.
Feb 23 18:29:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:31.274330472Z" level=info msg="runSandbox: removing pod sandbox from storage: e39f13f1f8149d9bc372cdec9f81ec7b4ea48862b516cbe1fba85de280c1faa9" id=70788f8a-3036-4a72-8af7-88c1f7e07d6e name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:29:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:31.275858297Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=70788f8a-3036-4a72-8af7-88c1f7e07d6e name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:29:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:31.275893873Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=70788f8a-3036-4a72-8af7-88c1f7e07d6e name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:29:31 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:29:31.276091 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(e39f13f1f8149d9bc372cdec9f81ec7b4ea48862b516cbe1fba85de280c1faa9): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 18:29:31 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:29:31.276148 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(e39f13f1f8149d9bc372cdec9f81ec7b4ea48862b516cbe1fba85de280c1faa9): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4"
Feb 23 18:29:31 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:29:31.276171 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(e39f13f1f8149d9bc372cdec9f81ec7b4ea48862b516cbe1fba85de280c1faa9): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4"
Feb 23 18:29:31 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:29:31.276227 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(e39f13f1f8149d9bc372cdec9f81ec7b4ea48862b516cbe1fba85de280c1faa9): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f
Feb 23 18:29:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:34.244835508Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(83e23a57d1c6c56e1f6adead576c786870ac84ec596b521c782daa14b9ad5de3): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=876a5a90-bf0a-459a-a954-da5261133365 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:29:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:34.244881524Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 83e23a57d1c6c56e1f6adead576c786870ac84ec596b521c782daa14b9ad5de3" id=876a5a90-bf0a-459a-a954-da5261133365 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:29:34 ip-10-0-136-68 systemd[1]: run-utsns-d90cdabb\x2db06f\x2d4f20\x2da713\x2d536eface8cd5.mount: Deactivated successfully.
Feb 23 18:29:34 ip-10-0-136-68 systemd[1]: run-ipcns-d90cdabb\x2db06f\x2d4f20\x2da713\x2d536eface8cd5.mount: Deactivated successfully.
Feb 23 18:29:34 ip-10-0-136-68 systemd[1]: run-netns-d90cdabb\x2db06f\x2d4f20\x2da713\x2d536eface8cd5.mount: Deactivated successfully.
Feb 23 18:29:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:34.272323003Z" level=info msg="runSandbox: deleting pod ID 83e23a57d1c6c56e1f6adead576c786870ac84ec596b521c782daa14b9ad5de3 from idIndex" id=876a5a90-bf0a-459a-a954-da5261133365 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:29:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:34.272358895Z" level=info msg="runSandbox: removing pod sandbox 83e23a57d1c6c56e1f6adead576c786870ac84ec596b521c782daa14b9ad5de3" id=876a5a90-bf0a-459a-a954-da5261133365 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:29:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:34.272386007Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 83e23a57d1c6c56e1f6adead576c786870ac84ec596b521c782daa14b9ad5de3" id=876a5a90-bf0a-459a-a954-da5261133365 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:29:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:34.272414040Z" level=info msg="runSandbox: unmounting shmPath for sandbox 83e23a57d1c6c56e1f6adead576c786870ac84ec596b521c782daa14b9ad5de3" id=876a5a90-bf0a-459a-a954-da5261133365 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:29:34 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-83e23a57d1c6c56e1f6adead576c786870ac84ec596b521c782daa14b9ad5de3-userdata-shm.mount: Deactivated successfully.
Feb 23 18:29:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:34.277304338Z" level=info msg="runSandbox: removing pod sandbox from storage: 83e23a57d1c6c56e1f6adead576c786870ac84ec596b521c782daa14b9ad5de3" id=876a5a90-bf0a-459a-a954-da5261133365 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:29:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:34.278902600Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=876a5a90-bf0a-459a-a954-da5261133365 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:29:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:34.278937040Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=876a5a90-bf0a-459a-a954-da5261133365 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:29:34 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:29:34.279125 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(83e23a57d1c6c56e1f6adead576c786870ac84ec596b521c782daa14b9ad5de3): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 18:29:34 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:29:34.279174 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(83e23a57d1c6c56e1f6adead576c786870ac84ec596b521c782daa14b9ad5de3): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 18:29:34 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:29:34.279196 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(83e23a57d1c6c56e1f6adead576c786870ac84ec596b521c782daa14b9ad5de3): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 18:29:34 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:29:34.279332 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(83e23a57d1c6c56e1f6adead576c786870ac84ec596b521c782daa14b9ad5de3): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1
Feb 23 18:29:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:35.246453037Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(eb90fb164f71740401766d9513408cbddab3a962a98d7cf187026900d33dcb15): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=958642a9-6890-422d-8dce-08db10757ca6 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:29:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:35.246505558Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox eb90fb164f71740401766d9513408cbddab3a962a98d7cf187026900d33dcb15" id=958642a9-6890-422d-8dce-08db10757ca6 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:29:35 ip-10-0-136-68 systemd[1]: run-utsns-2cb11c45\x2ddbe7\x2d43a7\x2dadc2\x2d6b17e260cdc1.mount: Deactivated successfully.
Feb 23 18:29:35 ip-10-0-136-68 systemd[1]: run-ipcns-2cb11c45\x2ddbe7\x2d43a7\x2dadc2\x2d6b17e260cdc1.mount: Deactivated successfully.
Feb 23 18:29:35 ip-10-0-136-68 systemd[1]: run-netns-2cb11c45\x2ddbe7\x2d43a7\x2dadc2\x2d6b17e260cdc1.mount: Deactivated successfully.
Feb 23 18:29:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:35.279326130Z" level=info msg="runSandbox: deleting pod ID eb90fb164f71740401766d9513408cbddab3a962a98d7cf187026900d33dcb15 from idIndex" id=958642a9-6890-422d-8dce-08db10757ca6 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:29:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:35.279364216Z" level=info msg="runSandbox: removing pod sandbox eb90fb164f71740401766d9513408cbddab3a962a98d7cf187026900d33dcb15" id=958642a9-6890-422d-8dce-08db10757ca6 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:29:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:35.279412914Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox eb90fb164f71740401766d9513408cbddab3a962a98d7cf187026900d33dcb15" id=958642a9-6890-422d-8dce-08db10757ca6 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:29:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:35.279433650Z" level=info msg="runSandbox: unmounting shmPath for sandbox eb90fb164f71740401766d9513408cbddab3a962a98d7cf187026900d33dcb15" id=958642a9-6890-422d-8dce-08db10757ca6 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:29:35 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-eb90fb164f71740401766d9513408cbddab3a962a98d7cf187026900d33dcb15-userdata-shm.mount: Deactivated successfully.
Feb 23 18:29:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:35.285301852Z" level=info msg="runSandbox: removing pod sandbox from storage: eb90fb164f71740401766d9513408cbddab3a962a98d7cf187026900d33dcb15" id=958642a9-6890-422d-8dce-08db10757ca6 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:29:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:35.286829039Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=958642a9-6890-422d-8dce-08db10757ca6 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:29:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:35.286867090Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=958642a9-6890-422d-8dce-08db10757ca6 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:29:35 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:29:35.287078 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(eb90fb164f71740401766d9513408cbddab3a962a98d7cf187026900d33dcb15): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 18:29:35 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:29:35.287143 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(eb90fb164f71740401766d9513408cbddab3a962a98d7cf187026900d33dcb15): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz"
Feb 23 18:29:35 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:29:35.287186 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(eb90fb164f71740401766d9513408cbddab3a962a98d7cf187026900d33dcb15): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz"
Feb 23 18:29:35 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:29:35.287286 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(eb90fb164f71740401766d9513408cbddab3a962a98d7cf187026900d33dcb15): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7
Feb 23 18:29:37 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:29:37.217094 2199 scope.go:115] "RemoveContainer" containerID="89651d291d2f8f16315faa2d3380eaf0dfe8754303677a7d16bae931d67697ee"
Feb 23 18:29:37 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:29:37.217534 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 18:29:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:38.243636619Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(c8549429baffa2eb1874c9c33f171cb586d1e6391178bbea8e3cddda067eddd8): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=1f7d148a-5cce-4b27-8e6e-131a54e24570 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:29:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:38.243678270Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox c8549429baffa2eb1874c9c33f171cb586d1e6391178bbea8e3cddda067eddd8" id=1f7d148a-5cce-4b27-8e6e-131a54e24570 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:29:38 ip-10-0-136-68 systemd[1]: run-utsns-3402bbe7\x2d42c9\x2d406b\x2d9e33\x2de6c01c0db452.mount: Deactivated successfully.
Feb 23 18:29:38 ip-10-0-136-68 systemd[1]: run-ipcns-3402bbe7\x2d42c9\x2d406b\x2d9e33\x2de6c01c0db452.mount: Deactivated successfully.
Feb 23 18:29:38 ip-10-0-136-68 systemd[1]: run-netns-3402bbe7\x2d42c9\x2d406b\x2d9e33\x2de6c01c0db452.mount: Deactivated successfully.
Feb 23 18:29:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:38.272329283Z" level=info msg="runSandbox: deleting pod ID c8549429baffa2eb1874c9c33f171cb586d1e6391178bbea8e3cddda067eddd8 from idIndex" id=1f7d148a-5cce-4b27-8e6e-131a54e24570 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:29:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:38.272369899Z" level=info msg="runSandbox: removing pod sandbox c8549429baffa2eb1874c9c33f171cb586d1e6391178bbea8e3cddda067eddd8" id=1f7d148a-5cce-4b27-8e6e-131a54e24570 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:29:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:38.272412621Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox c8549429baffa2eb1874c9c33f171cb586d1e6391178bbea8e3cddda067eddd8" id=1f7d148a-5cce-4b27-8e6e-131a54e24570 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:29:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:38.272426407Z" level=info msg="runSandbox: unmounting shmPath for sandbox c8549429baffa2eb1874c9c33f171cb586d1e6391178bbea8e3cddda067eddd8" id=1f7d148a-5cce-4b27-8e6e-131a54e24570 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:29:38 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-c8549429baffa2eb1874c9c33f171cb586d1e6391178bbea8e3cddda067eddd8-userdata-shm.mount: Deactivated successfully.
Feb 23 18:29:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:38.279309144Z" level=info msg="runSandbox: removing pod sandbox from storage: c8549429baffa2eb1874c9c33f171cb586d1e6391178bbea8e3cddda067eddd8" id=1f7d148a-5cce-4b27-8e6e-131a54e24570 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:29:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:38.280809160Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=1f7d148a-5cce-4b27-8e6e-131a54e24570 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:29:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:38.280837932Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=1f7d148a-5cce-4b27-8e6e-131a54e24570 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:29:38 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:29:38.281038 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(c8549429baffa2eb1874c9c33f171cb586d1e6391178bbea8e3cddda067eddd8): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 18:29:38 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:29:38.281109 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(c8549429baffa2eb1874c9c33f171cb586d1e6391178bbea8e3cddda067eddd8): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk"
Feb 23 18:29:38 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:29:38.281150 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(c8549429baffa2eb1874c9c33f171cb586d1e6391178bbea8e3cddda067eddd8): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk"
Feb 23 18:29:38 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:29:38.281235 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(c8549429baffa2eb1874c9c33f171cb586d1e6391178bbea8e3cddda067eddd8): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 18:29:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:42.245292165Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(6470abf7be75833c6b5acdd3feb73b906b95e94e542dea40f2cdfe454207fd43): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=ad9563af-48dc-496f-930a-57c65250166a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:29:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:42.245335637Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 6470abf7be75833c6b5acdd3feb73b906b95e94e542dea40f2cdfe454207fd43" id=ad9563af-48dc-496f-930a-57c65250166a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:29:42 ip-10-0-136-68 systemd[1]: run-utsns-93b97a2c\x2d5aed\x2d47b0\x2d9689\x2d2b3a74ef4ad6.mount: Deactivated successfully. Feb 23 18:29:42 ip-10-0-136-68 systemd[1]: run-ipcns-93b97a2c\x2d5aed\x2d47b0\x2d9689\x2d2b3a74ef4ad6.mount: Deactivated successfully. Feb 23 18:29:42 ip-10-0-136-68 systemd[1]: run-netns-93b97a2c\x2d5aed\x2d47b0\x2d9689\x2d2b3a74ef4ad6.mount: Deactivated successfully. 
Feb 23 18:29:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:42.277341380Z" level=info msg="runSandbox: deleting pod ID 6470abf7be75833c6b5acdd3feb73b906b95e94e542dea40f2cdfe454207fd43 from idIndex" id=ad9563af-48dc-496f-930a-57c65250166a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:29:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:42.277383533Z" level=info msg="runSandbox: removing pod sandbox 6470abf7be75833c6b5acdd3feb73b906b95e94e542dea40f2cdfe454207fd43" id=ad9563af-48dc-496f-930a-57c65250166a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:29:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:42.277427399Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 6470abf7be75833c6b5acdd3feb73b906b95e94e542dea40f2cdfe454207fd43" id=ad9563af-48dc-496f-930a-57c65250166a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:29:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:42.277443119Z" level=info msg="runSandbox: unmounting shmPath for sandbox 6470abf7be75833c6b5acdd3feb73b906b95e94e542dea40f2cdfe454207fd43" id=ad9563af-48dc-496f-930a-57c65250166a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:29:42 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-6470abf7be75833c6b5acdd3feb73b906b95e94e542dea40f2cdfe454207fd43-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:29:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:42.284309834Z" level=info msg="runSandbox: removing pod sandbox from storage: 6470abf7be75833c6b5acdd3feb73b906b95e94e542dea40f2cdfe454207fd43" id=ad9563af-48dc-496f-930a-57c65250166a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:29:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:42.285751002Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=ad9563af-48dc-496f-930a-57c65250166a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:29:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:42.285780886Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=ad9563af-48dc-496f-930a-57c65250166a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:29:42 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:29:42.286053 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(6470abf7be75833c6b5acdd3feb73b906b95e94e542dea40f2cdfe454207fd43): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:29:42 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:29:42.286122 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(6470abf7be75833c6b5acdd3feb73b906b95e94e542dea40f2cdfe454207fd43): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:29:42 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:29:42.286162 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(6470abf7be75833c6b5acdd3feb73b906b95e94e542dea40f2cdfe454207fd43): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:29:42 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:29:42.286278 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(6470abf7be75833c6b5acdd3feb73b906b95e94e542dea40f2cdfe454207fd43): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 18:29:43 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:29:43.216748 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 18:29:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:43.217097502Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=962ee0d3-3931-435e-b4a4-85feaa9f94cb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:29:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:43.217161365Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:29:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:43.222310678Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:5d82919365627faf2430b1e34f1d67c47b51cd2f43507dafd3787d2e1eeafcbc UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/7134110e-74d6-40e8-9686-f8eabf867d25 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:29:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:43.222345403Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:29:48 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:29:48.216631 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:29:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:48.217007327Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=179cfc37-c012-48f7-9a34-69fc68a396ff name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:29:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:48.217082271Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:29:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:48.226733238Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:62cbfea067f546211a74e66b23cf61b455d33d7a98e2395225bbbc88ca44842f UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/365e20cd-e9a3-49ef-8dae-570daefc1963 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:29:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:48.226770313Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:29:49 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:29:49.216934 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:29:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:49.217329402Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=c5e527e3-5578-4bb6-ac16-43da727b8b31 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:29:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:49.217393342Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:29:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:49.222606540Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:82c593b09d3420eb8125551fb07d6d79bd95a5fe792d840bfbd0ede612c21e74 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/914221ab-8c17-48fa-882c-4b0c20f3b50a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:29:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:49.222632230Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:29:51 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:29:51.217338 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:29:51 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:29:51.217462 2199 scope.go:115] "RemoveContainer" containerID="89651d291d2f8f16315faa2d3380eaf0dfe8754303677a7d16bae931d67697ee" Feb 23 18:29:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:51.217836082Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=a3eb212a-127e-482c-be2d-b9de597c6426 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:29:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:51.217899638Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:29:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:29:51.217967 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:29:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:51.223207962Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:f937dd1475c822a49cb294127c3e8803800313d42b8e359ae567dfeec2d01b1b UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/30d69e00-1dbf-4c8f-abd6-672c727e92bf Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:29:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:51.223231585Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:29:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:29:54.216842 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:29:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:54.217231224Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=8a89c2de-c30e-4de5-a2d2-0cfa479e028e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:29:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:54.217337555Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:29:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:54.222787676Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:b287f5ff7424cb8ae85e4646c2854f1ec92697404f20ca4bad1d3f3590a0c9d6 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/4c0a1ca5-1ac0-407b-b9cb-776f432784c7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:29:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:29:54.222812356Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:29:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:29:56.291986 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:29:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:29:56.292219 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not 
created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:29:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:29:56.292475 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:29:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:29:56.292497 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:30:02 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:30:02.217199 2199 scope.go:115] "RemoveContainer" containerID="89651d291d2f8f16315faa2d3380eaf0dfe8754303677a7d16bae931d67697ee" Feb 23 18:30:02 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:30:02.218021 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" 
podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:30:13 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:30:13.217408 2199 scope.go:115] "RemoveContainer" containerID="89651d291d2f8f16315faa2d3380eaf0dfe8754303677a7d16bae931d67697ee" Feb 23 18:30:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:30:13.218191986Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=1c4e4d4e-3a32-4649-bdf9-510a3b543db2 name=/runtime.v1.ImageService/ImageStatus Feb 23 18:30:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:30:13.218400960Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=1c4e4d4e-3a32-4649-bdf9-510a3b543db2 name=/runtime.v1.ImageService/ImageStatus Feb 23 18:30:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:30:13.218946353Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=67c8227f-1aa3-4f62-bf50-0a5f3a56a185 name=/runtime.v1.ImageService/ImageStatus Feb 23 18:30:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:30:13.219077092Z" level=info msg="Image status: 
&ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=67c8227f-1aa3-4f62-bf50-0a5f3a56a185 name=/runtime.v1.ImageService/ImageStatus Feb 23 18:30:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:30:13.219677487Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=661d6068-46be-474b-8df4-074587e32282 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 18:30:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:30:13.219777957Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:30:13 ip-10-0-136-68 systemd[1]: Started crio-conmon-50f4b97809068651a7686266292e42f76ca81444d730f04d72ba1cf2aed0ed52.scope. Feb 23 18:30:13 ip-10-0-136-68 systemd[1]: Started libcontainer container 50f4b97809068651a7686266292e42f76ca81444d730f04d72ba1cf2aed0ed52. Feb 23 18:30:13 ip-10-0-136-68 conmon[9118]: conmon 50f4b97809068651a768 : Failed to write to cgroup.event_control Operation not supported Feb 23 18:30:13 ip-10-0-136-68 systemd[1]: crio-conmon-50f4b97809068651a7686266292e42f76ca81444d730f04d72ba1cf2aed0ed52.scope: Deactivated successfully. 
Feb 23 18:30:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:30:13.358359616Z" level=info msg="Created container 50f4b97809068651a7686266292e42f76ca81444d730f04d72ba1cf2aed0ed52: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=661d6068-46be-474b-8df4-074587e32282 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 18:30:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:30:13.358792171Z" level=info msg="Starting container: 50f4b97809068651a7686266292e42f76ca81444d730f04d72ba1cf2aed0ed52" id=6fe9fc04-6ccc-4b2b-be84-e1e689468fc1 name=/runtime.v1.RuntimeService/StartContainer Feb 23 18:30:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:30:13.365568022Z" level=info msg="Started container" PID=9130 containerID=50f4b97809068651a7686266292e42f76ca81444d730f04d72ba1cf2aed0ed52 description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver id=6fe9fc04-6ccc-4b2b-be84-e1e689468fc1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4ac357af36e7dd4792f93249425e93fd84aed45840e9fbb3d0ae6c0af964798 Feb 23 18:30:13 ip-10-0-136-68 systemd[1]: crio-50f4b97809068651a7686266292e42f76ca81444d730f04d72ba1cf2aed0ed52.scope: Deactivated successfully. 
Feb 23 18:30:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:30:17.867155154Z" level=warning msg="Failed to find container exit file for 89651d291d2f8f16315faa2d3380eaf0dfe8754303677a7d16bae931d67697ee: timed out waiting for the condition" id=15eb0fd8-44b3-4c2e-a6b5-29ba83d314ea name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:30:17 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:30:17.868106 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerStarted Data:50f4b97809068651a7686266292e42f76ca81444d730f04d72ba1cf2aed0ed52} Feb 23 18:30:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:30:20.191588640Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3" id=2dddb34b-65f9-4a70-b7dc-559d4b9d5879 name=/runtime.v1.ImageService/ImageStatus Feb 23 18:30:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:30:20.191780693Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:51cb7555d0a960fc0daae74a5b5c47c19fd1ae7bd31e2616ebc88f696f511e4d,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3],Size_:351828271,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=2dddb34b-65f9-4a70-b7dc-559d4b9d5879 name=/runtime.v1.ImageService/ImageStatus Feb 23 18:30:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:30:23.217039 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" 
containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:30:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:30:23.217347 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:30:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:30:23.217639 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:30:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:30:23.217678 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:30:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:30:24.872832 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": 
dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:30:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:30:24.872893 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 18:30:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:30:26.292034 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:30:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:30:26.292324 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:30:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:30:26.292602 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" 
containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:30:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:30:26.292629 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:30:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:30:28.233566051Z" level=info msg="NetworkStart: stopping network for sandbox 5d82919365627faf2430b1e34f1d67c47b51cd2f43507dafd3787d2e1eeafcbc" id=962ee0d3-3931-435e-b4a4-85feaa9f94cb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:30:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:30:28.233687389Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:5d82919365627faf2430b1e34f1d67c47b51cd2f43507dafd3787d2e1eeafcbc UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/7134110e-74d6-40e8-9686-f8eabf867d25 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:30:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:30:28.233718210Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:30:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:30:28.233725002Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:30:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:30:28.233731340Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:30:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 
18:30:33.238198604Z" level=info msg="NetworkStart: stopping network for sandbox 62cbfea067f546211a74e66b23cf61b455d33d7a98e2395225bbbc88ca44842f" id=179cfc37-c012-48f7-9a34-69fc68a396ff name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:30:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:30:33.238367870Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:62cbfea067f546211a74e66b23cf61b455d33d7a98e2395225bbbc88ca44842f UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/365e20cd-e9a3-49ef-8dae-570daefc1963 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:30:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:30:33.238408960Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:30:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:30:33.238420299Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:30:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:30:33.238431550Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:30:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:30:34.234156282Z" level=info msg="NetworkStart: stopping network for sandbox 82c593b09d3420eb8125551fb07d6d79bd95a5fe792d840bfbd0ede612c21e74" id=c5e527e3-5578-4bb6-ac16-43da727b8b31 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:30:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:30:34.234293672Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:82c593b09d3420eb8125551fb07d6d79bd95a5fe792d840bfbd0ede612c21e74 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/914221ab-8c17-48fa-882c-4b0c20f3b50a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: 
IpRanges:[]}] Aliases:map[]}" Feb 23 18:30:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:30:34.234323422Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:30:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:30:34.234331004Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:30:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:30:34.234337066Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:30:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:30:34.872407 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:30:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:30:34.872470 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 18:30:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:30:36.234644920Z" level=info msg="NetworkStart: stopping network for sandbox f937dd1475c822a49cb294127c3e8803800313d42b8e359ae567dfeec2d01b1b" id=a3eb212a-127e-482c-be2d-b9de597c6426 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:30:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:30:36.234765214Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:f937dd1475c822a49cb294127c3e8803800313d42b8e359ae567dfeec2d01b1b UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/30d69e00-1dbf-4c8f-abd6-672c727e92bf Networks:[] 
RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:30:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:30:36.234804321Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:30:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:30:36.234818125Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:30:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:30:36.234830856Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:30:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:30:39.236082821Z" level=info msg="NetworkStart: stopping network for sandbox b287f5ff7424cb8ae85e4646c2854f1ec92697404f20ca4bad1d3f3590a0c9d6" id=8a89c2de-c30e-4de5-a2d2-0cfa479e028e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:30:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:30:39.236209285Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:b287f5ff7424cb8ae85e4646c2854f1ec92697404f20ca4bad1d3f3590a0c9d6 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/4c0a1ca5-1ac0-407b-b9cb-776f432784c7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:30:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:30:39.236239003Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:30:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:30:39.236270949Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:30:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:30:39.236278363Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" 
(type=multus)" Feb 23 18:30:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:30:44.872072 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:30:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:30:44.872123 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 18:30:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:30:54.872794 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:30:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:30:54.872860 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 18:30:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:30:56.291907 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" 
containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:30:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:30:56.292156 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:30:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:30:56.292401 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:30:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:30:56.292426 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:31:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:31:04.872672 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": 
dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:31:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:31:04.872733 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 18:31:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:31:04.872765 2199 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 18:31:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:31:04.873378 2199 kuberuntime_manager.go:659] "Message for Container of pod" containerName="csi-driver" containerStatusID={Type:cri-o ID:50f4b97809068651a7686266292e42f76ca81444d730f04d72ba1cf2aed0ed52} pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" containerMessage="Container csi-driver failed liveness probe, will be restarted" Feb 23 18:31:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:31:04.873556 2199 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" containerID="cri-o://50f4b97809068651a7686266292e42f76ca81444d730f04d72ba1cf2aed0ed52" gracePeriod=30 Feb 23 18:31:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:04.873813944Z" level=info msg="Stopping container: 50f4b97809068651a7686266292e42f76ca81444d730f04d72ba1cf2aed0ed52 (timeout: 30s)" id=1d64b7a9-d19f-4948-a778-417b327544cc name=/runtime.v1.RuntimeService/StopContainer Feb 23 18:31:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:08.636204959Z" level=warning msg="Failed to find container exit file for 50f4b97809068651a7686266292e42f76ca81444d730f04d72ba1cf2aed0ed52: timed out waiting for the 
condition" id=1d64b7a9-d19f-4948-a778-417b327544cc name=/runtime.v1.RuntimeService/StopContainer Feb 23 18:31:08 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-6da99432e51f6d7df9227fc922a809001aaf251ec738684ae6e14034b9bec189-merged.mount: Deactivated successfully. Feb 23 18:31:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:12.403937175Z" level=warning msg="Failed to find container exit file for 50f4b97809068651a7686266292e42f76ca81444d730f04d72ba1cf2aed0ed52: timed out waiting for the condition" id=1d64b7a9-d19f-4948-a778-417b327544cc name=/runtime.v1.RuntimeService/StopContainer Feb 23 18:31:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:12.405503141Z" level=info msg="Stopped container 50f4b97809068651a7686266292e42f76ca81444d730f04d72ba1cf2aed0ed52: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=1d64b7a9-d19f-4948-a778-417b327544cc name=/runtime.v1.RuntimeService/StopContainer Feb 23 18:31:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:12.406184836Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=26f2d341-ae67-4139-8e18-cc6c07ebd446 name=/runtime.v1.ImageService/ImageStatus Feb 23 18:31:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:12.406438411Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=26f2d341-ae67-4139-8e18-cc6c07ebd446 name=/runtime.v1.ImageService/ImageStatus Feb 23 18:31:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:12.407001934Z" level=info msg="Checking image status: 
registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=b23754f2-f8da-4f38-bdfc-8c5714705d0d name=/runtime.v1.ImageService/ImageStatus Feb 23 18:31:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:12.407195831Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=b23754f2-f8da-4f38-bdfc-8c5714705d0d name=/runtime.v1.ImageService/ImageStatus Feb 23 18:31:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:12.407835378Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=27c34a2e-61c2-416b-b51e-579d3a0f9bc7 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 18:31:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:12.407948151Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:31:12 ip-10-0-136-68 systemd[1]: Started crio-conmon-4ee59bdc119c54bc103dc357cd329f52672ee2104c1542ccc63e94991eb93bcb.scope. Feb 23 18:31:12 ip-10-0-136-68 systemd[1]: Started libcontainer container 4ee59bdc119c54bc103dc357cd329f52672ee2104c1542ccc63e94991eb93bcb. Feb 23 18:31:12 ip-10-0-136-68 conmon[9270]: conmon 4ee59bdc119c54bc103d : Failed to write to cgroup.event_control Operation not supported Feb 23 18:31:12 ip-10-0-136-68 systemd[1]: crio-conmon-4ee59bdc119c54bc103dc357cd329f52672ee2104c1542ccc63e94991eb93bcb.scope: Deactivated successfully. 
Feb 23 18:31:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:12.542393556Z" level=info msg="Created container 4ee59bdc119c54bc103dc357cd329f52672ee2104c1542ccc63e94991eb93bcb: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=27c34a2e-61c2-416b-b51e-579d3a0f9bc7 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 18:31:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:12.542943754Z" level=info msg="Starting container: 4ee59bdc119c54bc103dc357cd329f52672ee2104c1542ccc63e94991eb93bcb" id=dc8b59f7-613e-4822-9d0d-758616ddabba name=/runtime.v1.RuntimeService/StartContainer Feb 23 18:31:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:12.549336407Z" level=info msg="Started container" PID=9282 containerID=4ee59bdc119c54bc103dc357cd329f52672ee2104c1542ccc63e94991eb93bcb description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver id=dc8b59f7-613e-4822-9d0d-758616ddabba name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4ac357af36e7dd4792f93249425e93fd84aed45840e9fbb3d0ae6c0af964798 Feb 23 18:31:12 ip-10-0-136-68 systemd[1]: crio-4ee59bdc119c54bc103dc357cd329f52672ee2104c1542ccc63e94991eb93bcb.scope: Deactivated successfully. 
Feb 23 18:31:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:12.680684880Z" level=warning msg="Failed to find container exit file for 50f4b97809068651a7686266292e42f76ca81444d730f04d72ba1cf2aed0ed52: timed out waiting for the condition" id=9de2be18-70e6-4dd8-8851-9c4e72d1cc8a name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:31:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:13.243713633Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(5d82919365627faf2430b1e34f1d67c47b51cd2f43507dafd3787d2e1eeafcbc): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=962ee0d3-3931-435e-b4a4-85feaa9f94cb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:31:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:13.243763024Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 5d82919365627faf2430b1e34f1d67c47b51cd2f43507dafd3787d2e1eeafcbc" id=962ee0d3-3931-435e-b4a4-85feaa9f94cb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:31:13 ip-10-0-136-68 systemd[1]: run-utsns-7134110e\x2d74d6\x2d40e8\x2d9686\x2df8eabf867d25.mount: Deactivated successfully. Feb 23 18:31:13 ip-10-0-136-68 systemd[1]: run-ipcns-7134110e\x2d74d6\x2d40e8\x2d9686\x2df8eabf867d25.mount: Deactivated successfully. Feb 23 18:31:13 ip-10-0-136-68 systemd[1]: run-netns-7134110e\x2d74d6\x2d40e8\x2d9686\x2df8eabf867d25.mount: Deactivated successfully. 
Feb 23 18:31:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:13.280329332Z" level=info msg="runSandbox: deleting pod ID 5d82919365627faf2430b1e34f1d67c47b51cd2f43507dafd3787d2e1eeafcbc from idIndex" id=962ee0d3-3931-435e-b4a4-85feaa9f94cb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:31:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:13.280370005Z" level=info msg="runSandbox: removing pod sandbox 5d82919365627faf2430b1e34f1d67c47b51cd2f43507dafd3787d2e1eeafcbc" id=962ee0d3-3931-435e-b4a4-85feaa9f94cb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:31:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:13.280425309Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 5d82919365627faf2430b1e34f1d67c47b51cd2f43507dafd3787d2e1eeafcbc" id=962ee0d3-3931-435e-b4a4-85feaa9f94cb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:31:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:13.280446643Z" level=info msg="runSandbox: unmounting shmPath for sandbox 5d82919365627faf2430b1e34f1d67c47b51cd2f43507dafd3787d2e1eeafcbc" id=962ee0d3-3931-435e-b4a4-85feaa9f94cb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:31:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:13.284303221Z" level=info msg="runSandbox: removing pod sandbox from storage: 5d82919365627faf2430b1e34f1d67c47b51cd2f43507dafd3787d2e1eeafcbc" id=962ee0d3-3931-435e-b4a4-85feaa9f94cb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:31:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:13.285963945Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=962ee0d3-3931-435e-b4a4-85feaa9f94cb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:31:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:13.285994034Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" 
id=962ee0d3-3931-435e-b4a4-85feaa9f94cb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:31:13 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:31:13.286208 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(5d82919365627faf2430b1e34f1d67c47b51cd2f43507dafd3787d2e1eeafcbc): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Feb 23 18:31:13 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:31:13.286296 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(5d82919365627faf2430b1e34f1d67c47b51cd2f43507dafd3787d2e1eeafcbc): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:31:13 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:31:13.286331 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(5d82919365627faf2430b1e34f1d67c47b51cd2f43507dafd3787d2e1eeafcbc): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:31:13 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:31:13.286411 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(5d82919365627faf2430b1e34f1d67c47b51cd2f43507dafd3787d2e1eeafcbc): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 18:31:13 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-5d82919365627faf2430b1e34f1d67c47b51cd2f43507dafd3787d2e1eeafcbc-userdata-shm.mount: Deactivated successfully. Feb 23 18:31:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:16.429919861Z" level=warning msg="Failed to find container exit file for 89651d291d2f8f16315faa2d3380eaf0dfe8754303677a7d16bae931d67697ee: timed out waiting for the condition" id=3a736787-5ed9-40b1-a670-9da70c7044f9 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:31:16 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:31:16.430796 2199 generic.go:332] "Generic (PLEG): container finished" podID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerID="50f4b97809068651a7686266292e42f76ca81444d730f04d72ba1cf2aed0ed52" exitCode=-1 Feb 23 18:31:16 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:31:16.430834 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerDied Data:50f4b97809068651a7686266292e42f76ca81444d730f04d72ba1cf2aed0ed52} Feb 23 18:31:16 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:31:16.430863 2199 scope.go:115] "RemoveContainer" containerID="89651d291d2f8f16315faa2d3380eaf0dfe8754303677a7d16bae931d67697ee" Feb 23 18:31:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:18.248102741Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(62cbfea067f546211a74e66b23cf61b455d33d7a98e2395225bbbc88ca44842f): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: 
[openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=179cfc37-c012-48f7-9a34-69fc68a396ff name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:31:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:18.248154297Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 62cbfea067f546211a74e66b23cf61b455d33d7a98e2395225bbbc88ca44842f" id=179cfc37-c012-48f7-9a34-69fc68a396ff name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:31:18 ip-10-0-136-68 systemd[1]: run-utsns-365e20cd\x2de9a3\x2d49ef\x2d8dae\x2d570daefc1963.mount: Deactivated successfully. Feb 23 18:31:18 ip-10-0-136-68 systemd[1]: run-ipcns-365e20cd\x2de9a3\x2d49ef\x2d8dae\x2d570daefc1963.mount: Deactivated successfully. Feb 23 18:31:18 ip-10-0-136-68 systemd[1]: run-netns-365e20cd\x2de9a3\x2d49ef\x2d8dae\x2d570daefc1963.mount: Deactivated successfully. 
Feb 23 18:31:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:18.265329257Z" level=info msg="runSandbox: deleting pod ID 62cbfea067f546211a74e66b23cf61b455d33d7a98e2395225bbbc88ca44842f from idIndex" id=179cfc37-c012-48f7-9a34-69fc68a396ff name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:31:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:18.265360800Z" level=info msg="runSandbox: removing pod sandbox 62cbfea067f546211a74e66b23cf61b455d33d7a98e2395225bbbc88ca44842f" id=179cfc37-c012-48f7-9a34-69fc68a396ff name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:31:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:18.265382517Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 62cbfea067f546211a74e66b23cf61b455d33d7a98e2395225bbbc88ca44842f" id=179cfc37-c012-48f7-9a34-69fc68a396ff name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:31:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:18.265398287Z" level=info msg="runSandbox: unmounting shmPath for sandbox 62cbfea067f546211a74e66b23cf61b455d33d7a98e2395225bbbc88ca44842f" id=179cfc37-c012-48f7-9a34-69fc68a396ff name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:31:18 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-62cbfea067f546211a74e66b23cf61b455d33d7a98e2395225bbbc88ca44842f-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:31:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:18.271308147Z" level=info msg="runSandbox: removing pod sandbox from storage: 62cbfea067f546211a74e66b23cf61b455d33d7a98e2395225bbbc88ca44842f" id=179cfc37-c012-48f7-9a34-69fc68a396ff name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:31:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:18.272887848Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=179cfc37-c012-48f7-9a34-69fc68a396ff name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:31:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:18.272920795Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=179cfc37-c012-48f7-9a34-69fc68a396ff name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:31:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:31:18.273095 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(62cbfea067f546211a74e66b23cf61b455d33d7a98e2395225bbbc88ca44842f): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:31:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:31:18.273157 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(62cbfea067f546211a74e66b23cf61b455d33d7a98e2395225bbbc88ca44842f): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:31:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:31:18.273194 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(62cbfea067f546211a74e66b23cf61b455d33d7a98e2395225bbbc88ca44842f): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:31:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:31:18.273300 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(62cbfea067f546211a74e66b23cf61b455d33d7a98e2395225bbbc88ca44842f): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 18:31:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:19.244398583Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(82c593b09d3420eb8125551fb07d6d79bd95a5fe792d840bfbd0ede612c21e74): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c5e527e3-5578-4bb6-ac16-43da727b8b31 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:31:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:19.244452314Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 82c593b09d3420eb8125551fb07d6d79bd95a5fe792d840bfbd0ede612c21e74" id=c5e527e3-5578-4bb6-ac16-43da727b8b31 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:31:19 ip-10-0-136-68 systemd[1]: run-utsns-914221ab\x2d8c17\x2d48fa\x2d882c\x2d4b0c20f3b50a.mount: Deactivated successfully. Feb 23 18:31:19 ip-10-0-136-68 systemd[1]: run-ipcns-914221ab\x2d8c17\x2d48fa\x2d882c\x2d4b0c20f3b50a.mount: Deactivated successfully. Feb 23 18:31:19 ip-10-0-136-68 systemd[1]: run-netns-914221ab\x2d8c17\x2d48fa\x2d882c\x2d4b0c20f3b50a.mount: Deactivated successfully. 
Feb 23 18:31:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:19.269326088Z" level=info msg="runSandbox: deleting pod ID 82c593b09d3420eb8125551fb07d6d79bd95a5fe792d840bfbd0ede612c21e74 from idIndex" id=c5e527e3-5578-4bb6-ac16-43da727b8b31 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:31:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:19.269360706Z" level=info msg="runSandbox: removing pod sandbox 82c593b09d3420eb8125551fb07d6d79bd95a5fe792d840bfbd0ede612c21e74" id=c5e527e3-5578-4bb6-ac16-43da727b8b31 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:31:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:19.269407549Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 82c593b09d3420eb8125551fb07d6d79bd95a5fe792d840bfbd0ede612c21e74" id=c5e527e3-5578-4bb6-ac16-43da727b8b31 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:31:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:19.269428425Z" level=info msg="runSandbox: unmounting shmPath for sandbox 82c593b09d3420eb8125551fb07d6d79bd95a5fe792d840bfbd0ede612c21e74" id=c5e527e3-5578-4bb6-ac16-43da727b8b31 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:31:19 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-82c593b09d3420eb8125551fb07d6d79bd95a5fe792d840bfbd0ede612c21e74-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:31:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:19.284304067Z" level=info msg="runSandbox: removing pod sandbox from storage: 82c593b09d3420eb8125551fb07d6d79bd95a5fe792d840bfbd0ede612c21e74" id=c5e527e3-5578-4bb6-ac16-43da727b8b31 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:31:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:19.285886483Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=c5e527e3-5578-4bb6-ac16-43da727b8b31 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:31:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:19.285916325Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=c5e527e3-5578-4bb6-ac16-43da727b8b31 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:31:19 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:31:19.286110 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(82c593b09d3420eb8125551fb07d6d79bd95a5fe792d840bfbd0ede612c21e74): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:31:19 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:31:19.286165 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(82c593b09d3420eb8125551fb07d6d79bd95a5fe792d840bfbd0ede612c21e74): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:31:19 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:31:19.286188 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(82c593b09d3420eb8125551fb07d6d79bd95a5fe792d840bfbd0ede612c21e74): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:31:19 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:31:19.286263 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(82c593b09d3420eb8125551fb07d6d79bd95a5fe792d840bfbd0ede612c21e74): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 18:31:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:20.192129280Z" level=warning msg="Failed to find container exit file for 89651d291d2f8f16315faa2d3380eaf0dfe8754303677a7d16bae931d67697ee: timed out waiting for the condition" id=4d85498c-0b78-4be3-8e9f-54319e69e0a1 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:31:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:21.183882436Z" level=warning msg="Failed to find container exit file for 50f4b97809068651a7686266292e42f76ca81444d730f04d72ba1cf2aed0ed52: timed out waiting for the condition" id=566bb55d-a3ab-4e01-80ae-d4b2d2962bb3 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:31:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:21.245323167Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(f937dd1475c822a49cb294127c3e8803800313d42b8e359ae567dfeec2d01b1b): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a3eb212a-127e-482c-be2d-b9de597c6426 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:31:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:21.245367568Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f937dd1475c822a49cb294127c3e8803800313d42b8e359ae567dfeec2d01b1b" id=a3eb212a-127e-482c-be2d-b9de597c6426 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:31:21 ip-10-0-136-68 systemd[1]: 
run-utsns-30d69e00\x2d1dbf\x2d4c8f\x2dabd6\x2d672c727e92bf.mount: Deactivated successfully. Feb 23 18:31:21 ip-10-0-136-68 systemd[1]: run-ipcns-30d69e00\x2d1dbf\x2d4c8f\x2dabd6\x2d672c727e92bf.mount: Deactivated successfully. Feb 23 18:31:21 ip-10-0-136-68 systemd[1]: run-netns-30d69e00\x2d1dbf\x2d4c8f\x2dabd6\x2d672c727e92bf.mount: Deactivated successfully. Feb 23 18:31:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:21.270338479Z" level=info msg="runSandbox: deleting pod ID f937dd1475c822a49cb294127c3e8803800313d42b8e359ae567dfeec2d01b1b from idIndex" id=a3eb212a-127e-482c-be2d-b9de597c6426 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:31:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:21.270370635Z" level=info msg="runSandbox: removing pod sandbox f937dd1475c822a49cb294127c3e8803800313d42b8e359ae567dfeec2d01b1b" id=a3eb212a-127e-482c-be2d-b9de597c6426 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:31:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:21.270400107Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f937dd1475c822a49cb294127c3e8803800313d42b8e359ae567dfeec2d01b1b" id=a3eb212a-127e-482c-be2d-b9de597c6426 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:31:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:21.270415633Z" level=info msg="runSandbox: unmounting shmPath for sandbox f937dd1475c822a49cb294127c3e8803800313d42b8e359ae567dfeec2d01b1b" id=a3eb212a-127e-482c-be2d-b9de597c6426 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:31:21 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-f937dd1475c822a49cb294127c3e8803800313d42b8e359ae567dfeec2d01b1b-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:31:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:21.275308566Z" level=info msg="runSandbox: removing pod sandbox from storage: f937dd1475c822a49cb294127c3e8803800313d42b8e359ae567dfeec2d01b1b" id=a3eb212a-127e-482c-be2d-b9de597c6426 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:31:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:21.276835370Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=a3eb212a-127e-482c-be2d-b9de597c6426 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:31:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:21.276869415Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=a3eb212a-127e-482c-be2d-b9de597c6426 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:31:21 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:31:21.277067 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(f937dd1475c822a49cb294127c3e8803800313d42b8e359ae567dfeec2d01b1b): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:31:21 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:31:21.277129 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(f937dd1475c822a49cb294127c3e8803800313d42b8e359ae567dfeec2d01b1b): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:31:21 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:31:21.277172 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(f937dd1475c822a49cb294127c3e8803800313d42b8e359ae567dfeec2d01b1b): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:31:21 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:31:21.277275 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(f937dd1475c822a49cb294127c3e8803800313d42b8e359ae567dfeec2d01b1b): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 18:31:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:23.940682808Z" level=warning msg="Failed to find container exit file for 89651d291d2f8f16315faa2d3380eaf0dfe8754303677a7d16bae931d67697ee: timed out waiting for the condition" id=724e2049-96a4-42af-a2be-58aa208db1ac name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:31:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:23.941054443Z" level=info msg="Removing container: 89651d291d2f8f16315faa2d3380eaf0dfe8754303677a7d16bae931d67697ee" id=8dec8e95-081b-4808-b8d5-bbfd337f27c2 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 18:31:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:24.246430092Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(b287f5ff7424cb8ae85e4646c2854f1ec92697404f20ca4bad1d3f3590a0c9d6): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=8a89c2de-c30e-4de5-a2d2-0cfa479e028e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:31:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:24.246472681Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox b287f5ff7424cb8ae85e4646c2854f1ec92697404f20ca4bad1d3f3590a0c9d6" id=8a89c2de-c30e-4de5-a2d2-0cfa479e028e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:31:24 ip-10-0-136-68 systemd[1]: 
run-utsns-4c0a1ca5\x2d1ac0\x2d407b\x2db9cb\x2d776f432784c7.mount: Deactivated successfully. Feb 23 18:31:24 ip-10-0-136-68 systemd[1]: run-ipcns-4c0a1ca5\x2d1ac0\x2d407b\x2db9cb\x2d776f432784c7.mount: Deactivated successfully. Feb 23 18:31:24 ip-10-0-136-68 systemd[1]: run-netns-4c0a1ca5\x2d1ac0\x2d407b\x2db9cb\x2d776f432784c7.mount: Deactivated successfully. Feb 23 18:31:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:24.275327515Z" level=info msg="runSandbox: deleting pod ID b287f5ff7424cb8ae85e4646c2854f1ec92697404f20ca4bad1d3f3590a0c9d6 from idIndex" id=8a89c2de-c30e-4de5-a2d2-0cfa479e028e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:31:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:24.275361476Z" level=info msg="runSandbox: removing pod sandbox b287f5ff7424cb8ae85e4646c2854f1ec92697404f20ca4bad1d3f3590a0c9d6" id=8a89c2de-c30e-4de5-a2d2-0cfa479e028e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:31:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:24.275387043Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox b287f5ff7424cb8ae85e4646c2854f1ec92697404f20ca4bad1d3f3590a0c9d6" id=8a89c2de-c30e-4de5-a2d2-0cfa479e028e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:31:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:24.275401730Z" level=info msg="runSandbox: unmounting shmPath for sandbox b287f5ff7424cb8ae85e4646c2854f1ec92697404f20ca4bad1d3f3590a0c9d6" id=8a89c2de-c30e-4de5-a2d2-0cfa479e028e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:31:24 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-b287f5ff7424cb8ae85e4646c2854f1ec92697404f20ca4bad1d3f3590a0c9d6-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:31:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:24.284307203Z" level=info msg="runSandbox: removing pod sandbox from storage: b287f5ff7424cb8ae85e4646c2854f1ec92697404f20ca4bad1d3f3590a0c9d6" id=8a89c2de-c30e-4de5-a2d2-0cfa479e028e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:31:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:24.285780706Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=8a89c2de-c30e-4de5-a2d2-0cfa479e028e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:31:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:24.285810131Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=8a89c2de-c30e-4de5-a2d2-0cfa479e028e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:31:24 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:31:24.285984 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(b287f5ff7424cb8ae85e4646c2854f1ec92697404f20ca4bad1d3f3590a0c9d6): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:31:24 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:31:24.286031 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(b287f5ff7424cb8ae85e4646c2854f1ec92697404f20ca4bad1d3f3590a0c9d6): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:31:24 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:31:24.286057 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(b287f5ff7424cb8ae85e4646c2854f1ec92697404f20ca4bad1d3f3590a0c9d6): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:31:24 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:31:24.286117 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(b287f5ff7424cb8ae85e4646c2854f1ec92697404f20ca4bad1d3f3590a0c9d6): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 18:31:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:24.945093384Z" level=warning msg="Failed to find container exit file for 89651d291d2f8f16315faa2d3380eaf0dfe8754303677a7d16bae931d67697ee: timed out waiting for the condition" id=b0ae3565-373c-4683-8b43-221ec5583d79 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:31:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:31:24.946054 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerStarted Data:4ee59bdc119c54bc103dc357cd329f52672ee2104c1542ccc63e94991eb93bcb} Feb 23 18:31:25 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:31:25.216606 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 18:31:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:25.216997682Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=415aa281-9667-44b8-b792-c7d627a7adac name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:31:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:25.217052871Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:31:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:25.222787004Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:9cef457df7fca7dff791dc7900d8b3c0bc35e9d65c12972139970b60d251afed UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/615d2cf9-e237-4ee1-bdaa-552cc3cf5bc0 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:31:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:25.222812265Z" level=info msg="Adding 
pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:31:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:31:26.292471 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:31:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:31:26.292743 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:31:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:31:26.292960 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:31:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:31:26.292989 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: 
no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:31:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:27.616940063Z" level=warning msg="Failed to find container exit file for 89651d291d2f8f16315faa2d3380eaf0dfe8754303677a7d16bae931d67697ee: timed out waiting for the condition" id=0dc6866a-d8f3-482b-8538-92a48a29b5d9 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:31:27 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:31:27.617286 2199 kuberuntime_gc.go:390] "Failed to remove container log dead symlink" err="remove /var/log/containers/aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers_csi-driver-89651d291d2f8f16315faa2d3380eaf0dfe8754303677a7d16bae931d67697ee.log: no such file or directory" path="/var/log/containers/aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers_csi-driver-89651d291d2f8f16315faa2d3380eaf0dfe8754303677a7d16bae931d67697ee.log" Feb 23 18:31:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:27.687844502Z" level=warning msg="Failed to find container exit file for 89651d291d2f8f16315faa2d3380eaf0dfe8754303677a7d16bae931d67697ee: timed out waiting for the condition" id=8dec8e95-081b-4808-b8d5-bbfd337f27c2 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 18:31:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:27.699873666Z" level=info msg="Removed container 89651d291d2f8f16315faa2d3380eaf0dfe8754303677a7d16bae931d67697ee: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=8dec8e95-081b-4808-b8d5-bbfd337f27c2 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 18:31:31 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:31:31.216684 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:31:31 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:31:31.217005 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:31:31 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:31:31.217313 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:31:31 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:31:31.217354 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:31:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:31.710898915Z" level=warning msg="Failed to find container 
exit file for 50f4b97809068651a7686266292e42f76ca81444d730f04d72ba1cf2aed0ed52: timed out waiting for the condition" id=a5b87e3c-caf4-4519-acd2-d7e4e02fa4b9 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:31:32 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:31:32.216999 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:31:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:32.217530756Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=c670ffd3-c02e-4bb4-878a-58cfbb487af4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:31:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:32.217606263Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:31:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:32.222873306Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:581a330e8de16c7615eecd8db4a8d1064920787d735104d2da7117ba310921ec UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/5b25b008-5088-4a34-bc8f-ab2e2b29cb37 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:31:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:32.222897312Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:31:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:31:34.872105 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:31:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:31:34.872168 2199 
prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 18:31:35 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:31:35.216697 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:31:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:35.217132883Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=0f8bbdb1-b11d-452c-9bc0-91995f4df1be name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:31:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:35.217197267Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:31:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:35.222845767Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:b7e51a30ae82e24329e0421e5579320371d69c1c18f6411a89b1220384efef30 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/be18ed5b-19d0-4ae3-9f22-037e016174d4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:31:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:35.222869433Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:31:36 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:31:36.216689 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:31:36 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:31:36.216735 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:31:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:36.217338267Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=d9828f59-4695-4cce-9f81-71d58c691caf name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:31:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:36.217404082Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=1fa19c89-3fa0-4045-9995-83fc8a5d1060 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:31:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:36.217453874Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:31:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:36.217413775Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:31:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:36.224996304Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:af23e534c169b5ad0ffe38d9da652dd33ab7b6f7c75776b6a3b6463903ad649d UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/6531ba69-e770-4d0f-a225-4aeaa328883c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:31:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:36.225029002Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:31:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:36.224996829Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:2afb5d7ed4793df8255e4cedc30c446ba5d68dee33ebe7ce804401884d5f7b4a UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 
NetNS:/var/run/netns/2e5987cc-d93f-4d28-bc40-5cbe066f32aa Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:31:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:31:36.225127257Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:31:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:31:44.872454 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:31:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:31:44.872507 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 18:31:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:31:54.872773 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:31:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:31:54.872837 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 18:31:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:31:56.292066 2199 remote_runtime.go:479] "ExecSync 
cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:31:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:31:56.292368 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:31:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:31:56.292586 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:31:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:31:56.292628 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" 
Feb 23 18:32:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:32:04.872555 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:32:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:32:04.872615 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 18:32:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:32:10.234310785Z" level=info msg="NetworkStart: stopping network for sandbox 9cef457df7fca7dff791dc7900d8b3c0bc35e9d65c12972139970b60d251afed" id=415aa281-9667-44b8-b792-c7d627a7adac name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:32:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:32:10.234437043Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:9cef457df7fca7dff791dc7900d8b3c0bc35e9d65c12972139970b60d251afed UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/615d2cf9-e237-4ee1-bdaa-552cc3cf5bc0 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:32:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:32:10.234476411Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:32:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:32:10.234487549Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:32:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:32:10.234497891Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network 
\"multus-cni-network\" (type=multus)" Feb 23 18:32:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:32:11.308971546Z" level=info msg="cleanup sandbox network" Feb 23 18:32:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:32:14.872667 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:32:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:32:14.872718 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 18:32:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:32:14.872740 2199 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 18:32:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:32:14.873174 2199 kuberuntime_manager.go:659] "Message for Container of pod" containerName="csi-driver" containerStatusID={Type:cri-o ID:4ee59bdc119c54bc103dc357cd329f52672ee2104c1542ccc63e94991eb93bcb} pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" containerMessage="Container csi-driver failed liveness probe, will be restarted" Feb 23 18:32:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:32:14.873362 2199 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" containerID="cri-o://4ee59bdc119c54bc103dc357cd329f52672ee2104c1542ccc63e94991eb93bcb" gracePeriod=30 Feb 23 18:32:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 
18:32:14.873550787Z" level=info msg="Stopping container: 4ee59bdc119c54bc103dc357cd329f52672ee2104c1542ccc63e94991eb93bcb (timeout: 30s)" id=631ca585-a8e1-438e-9780-481405270362 name=/runtime.v1.RuntimeService/StopContainer Feb 23 18:32:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:32:17.234523053Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:aa2f6c1cfe2015e661750ad0d84d67c8d8f79e105601e0a761db6a83ba33b3f1 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS: Networks:[{Name:multus-cni-network Ifname:eth0}] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:32:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:32:17.234536313Z" level=info msg="NetworkStart: stopping network for sandbox 581a330e8de16c7615eecd8db4a8d1064920787d735104d2da7117ba310921ec" id=c670ffd3-c02e-4bb4-878a-58cfbb487af4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:32:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:32:17.234705636Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:32:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:32:18.634088535Z" level=warning msg="Failed to find container exit file for 4ee59bdc119c54bc103dc357cd329f52672ee2104c1542ccc63e94991eb93bcb: timed out waiting for the condition" id=631ca585-a8e1-438e-9780-481405270362 name=/runtime.v1.RuntimeService/StopContainer Feb 23 18:32:18 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-7550d544f53a0eb5bd0b670f4beb179259df4b10f90ba554d6ef534992698006-merged.mount: Deactivated successfully. 
Feb 23 18:32:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:32:20.234805430Z" level=info msg="NetworkStart: stopping network for sandbox b7e51a30ae82e24329e0421e5579320371d69c1c18f6411a89b1220384efef30" id=0f8bbdb1-b11d-452c-9bc0-91995f4df1be name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:32:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:32:20.234906535Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:b7e51a30ae82e24329e0421e5579320371d69c1c18f6411a89b1220384efef30 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/be18ed5b-19d0-4ae3-9f22-037e016174d4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:32:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:32:20.234937202Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:32:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:32:20.234944355Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:32:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:32:20.234951511Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:32:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:32:21.237518381Z" level=info msg="NetworkStart: stopping network for sandbox 2afb5d7ed4793df8255e4cedc30c446ba5d68dee33ebe7ce804401884d5f7b4a" id=1fa19c89-3fa0-4045-9995-83fc8a5d1060 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:32:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:32:21.237643133Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:2afb5d7ed4793df8255e4cedc30c446ba5d68dee33ebe7ce804401884d5f7b4a UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/2e5987cc-d93f-4d28-bc40-5cbe066f32aa Networks:[] RuntimeConfig:map[multus-cni-network:{IP: 
MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:32:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:32:21.237673818Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:32:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:32:21.237685870Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:32:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:32:21.237692408Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:32:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:32:21.238968592Z" level=info msg="NetworkStart: stopping network for sandbox af23e534c169b5ad0ffe38d9da652dd33ab7b6f7c75776b6a3b6463903ad649d" id=d9828f59-4695-4cce-9f81-71d58c691caf name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:32:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:32:21.239115508Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:af23e534c169b5ad0ffe38d9da652dd33ab7b6f7c75776b6a3b6463903ad649d UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/6531ba69-e770-4d0f-a225-4aeaa328883c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:32:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:32:21.239154895Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:32:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:32:21.239190067Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:32:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:32:21.239201866Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:32:22 ip-10-0-136-68 
crio[2158]: time="2023-02-23 18:32:22.395932744Z" level=warning msg="Failed to find container exit file for 4ee59bdc119c54bc103dc357cd329f52672ee2104c1542ccc63e94991eb93bcb: timed out waiting for the condition" id=631ca585-a8e1-438e-9780-481405270362 name=/runtime.v1.RuntimeService/StopContainer Feb 23 18:32:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:32:22.397461650Z" level=info msg="Stopped container 4ee59bdc119c54bc103dc357cd329f52672ee2104c1542ccc63e94991eb93bcb: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=631ca585-a8e1-438e-9780-481405270362 name=/runtime.v1.RuntimeService/StopContainer Feb 23 18:32:22 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:32:22.397932 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:32:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:32:22.547027133Z" level=warning msg="Failed to find container exit file for 4ee59bdc119c54bc103dc357cd329f52672ee2104c1542ccc63e94991eb93bcb: timed out waiting for the condition" id=c5ae6a0a-2624-44f3-9df2-8b94b68cef85 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:32:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:32:26.292343 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:32:26 
ip-10-0-136-68 kubenswrapper[2199]: E0223 18:32:26.292720 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:32:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:32:26.293039 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:32:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:32:26.293082 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:32:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:32:26.307880377Z" level=warning msg="Failed to find container exit file for 50f4b97809068651a7686266292e42f76ca81444d730f04d72ba1cf2aed0ed52: timed out waiting for the condition" id=659c6ed7-f6f9-4e25-b14a-e7718553e0fa name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:32:26 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:32:26.308679 2199 generic.go:332] "Generic 
(PLEG): container finished" podID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerID="4ee59bdc119c54bc103dc357cd329f52672ee2104c1542ccc63e94991eb93bcb" exitCode=-1 Feb 23 18:32:26 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:32:26.308709 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerDied Data:4ee59bdc119c54bc103dc357cd329f52672ee2104c1542ccc63e94991eb93bcb} Feb 23 18:32:26 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:32:26.308730 2199 scope.go:115] "RemoveContainer" containerID="50f4b97809068651a7686266292e42f76ca81444d730f04d72ba1cf2aed0ed52" Feb 23 18:32:27 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:32:27.310441 2199 scope.go:115] "RemoveContainer" containerID="4ee59bdc119c54bc103dc357cd329f52672ee2104c1542ccc63e94991eb93bcb" Feb 23 18:32:27 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:32:27.310878 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:32:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:32:30.066923369Z" level=warning msg="Failed to find container exit file for 50f4b97809068651a7686266292e42f76ca81444d730f04d72ba1cf2aed0ed52: timed out waiting for the condition" id=9f23c0be-756e-413d-94d3-ca6fcbfa626f name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:32:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:32:33.815992738Z" level=warning msg="Failed to find container exit file for 50f4b97809068651a7686266292e42f76ca81444d730f04d72ba1cf2aed0ed52: timed out waiting for the condition" id=1583091c-39c3-4660-9b5c-43f93e929f09 
name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:32:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:32:33.816514289Z" level=info msg="Removing container: 50f4b97809068651a7686266292e42f76ca81444d730f04d72ba1cf2aed0ed52" id=c80db031-6433-42bb-a4c0-3e56b2e3414b name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 18:32:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:32:37.578074243Z" level=warning msg="Failed to find container exit file for 50f4b97809068651a7686266292e42f76ca81444d730f04d72ba1cf2aed0ed52: timed out waiting for the condition" id=c80db031-6433-42bb-a4c0-3e56b2e3414b name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 18:32:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:32:37.589789834Z" level=info msg="Removed container 50f4b97809068651a7686266292e42f76ca81444d730f04d72ba1cf2aed0ed52: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=c80db031-6433-42bb-a4c0-3e56b2e3414b name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 18:32:38 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:32:38.217391 2199 scope.go:115] "RemoveContainer" containerID="4ee59bdc119c54bc103dc357cd329f52672ee2104c1542ccc63e94991eb93bcb" Feb 23 18:32:38 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:32:38.217990 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:32:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:32:42.077980271Z" level=warning msg="Failed to find container exit file for 4ee59bdc119c54bc103dc357cd329f52672ee2104c1542ccc63e94991eb93bcb: timed out waiting for the condition" id=42877356-ff02-48f5-9f5f-da853263b0eb name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 
18:32:52 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:32:52.217179 2199 scope.go:115] "RemoveContainer" containerID="4ee59bdc119c54bc103dc357cd329f52672ee2104c1542ccc63e94991eb93bcb" Feb 23 18:32:52 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:32:52.217201 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:32:52 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:32:52.218286 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:32:52 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:32:52.218330 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:32:52 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:32:52.218612 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:32:52 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:32:52.218677 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:32:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:32:55.244301917Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(9cef457df7fca7dff791dc7900d8b3c0bc35e9d65c12972139970b60d251afed): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=415aa281-9667-44b8-b792-c7d627a7adac name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:32:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:32:55.244350001Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9cef457df7fca7dff791dc7900d8b3c0bc35e9d65c12972139970b60d251afed" id=415aa281-9667-44b8-b792-c7d627a7adac name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:32:55 ip-10-0-136-68 systemd[1]: run-utsns-615d2cf9\x2de237\x2d4ee1\x2dbdaa\x2d552cc3cf5bc0.mount: Deactivated successfully. 
Feb 23 18:32:55 ip-10-0-136-68 systemd[1]: run-ipcns-615d2cf9\x2de237\x2d4ee1\x2dbdaa\x2d552cc3cf5bc0.mount: Deactivated successfully. Feb 23 18:32:55 ip-10-0-136-68 systemd[1]: run-netns-615d2cf9\x2de237\x2d4ee1\x2dbdaa\x2d552cc3cf5bc0.mount: Deactivated successfully. Feb 23 18:32:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:32:55.275377094Z" level=info msg="runSandbox: deleting pod ID 9cef457df7fca7dff791dc7900d8b3c0bc35e9d65c12972139970b60d251afed from idIndex" id=415aa281-9667-44b8-b792-c7d627a7adac name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:32:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:32:55.275412823Z" level=info msg="runSandbox: removing pod sandbox 9cef457df7fca7dff791dc7900d8b3c0bc35e9d65c12972139970b60d251afed" id=415aa281-9667-44b8-b792-c7d627a7adac name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:32:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:32:55.275445023Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9cef457df7fca7dff791dc7900d8b3c0bc35e9d65c12972139970b60d251afed" id=415aa281-9667-44b8-b792-c7d627a7adac name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:32:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:32:55.275457023Z" level=info msg="runSandbox: unmounting shmPath for sandbox 9cef457df7fca7dff791dc7900d8b3c0bc35e9d65c12972139970b60d251afed" id=415aa281-9667-44b8-b792-c7d627a7adac name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:32:55 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-9cef457df7fca7dff791dc7900d8b3c0bc35e9d65c12972139970b60d251afed-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:32:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:32:55.280316495Z" level=info msg="runSandbox: removing pod sandbox from storage: 9cef457df7fca7dff791dc7900d8b3c0bc35e9d65c12972139970b60d251afed" id=415aa281-9667-44b8-b792-c7d627a7adac name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:32:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:32:55.281881215Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=415aa281-9667-44b8-b792-c7d627a7adac name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:32:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:32:55.281914356Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=415aa281-9667-44b8-b792-c7d627a7adac name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:32:55 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:32:55.282141 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(9cef457df7fca7dff791dc7900d8b3c0bc35e9d65c12972139970b60d251afed): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:32:55 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:32:55.282196 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(9cef457df7fca7dff791dc7900d8b3c0bc35e9d65c12972139970b60d251afed): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:32:55 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:32:55.282218 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(9cef457df7fca7dff791dc7900d8b3c0bc35e9d65c12972139970b60d251afed): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:32:55 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:32:55.282321 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(9cef457df7fca7dff791dc7900d8b3c0bc35e9d65c12972139970b60d251afed): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 18:32:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:32:56.292045 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:32:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:32:56.292312 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:32:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:32:56.292550 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:32:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:32:56.292579 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:33:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:02.244893903Z" level=error msg="Failed to cleanup (probably retrying): failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(aa2f6c1cfe2015e661750ad0d84d67c8d8f79e105601e0a761db6a83ba33b3f1): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" Feb 23 18:33:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:02.244949132Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:581a330e8de16c7615eecd8db4a8d1064920787d735104d2da7117ba310921ec UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/5b25b008-5088-4a34-bc8f-ab2e2b29cb37 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:33:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:02.244987135Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:33:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:02.244994108Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:33:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:02.245002145Z" level=info msg="Deleting pod 
openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:33:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:05.245169562Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(b7e51a30ae82e24329e0421e5579320371d69c1c18f6411a89b1220384efef30): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=0f8bbdb1-b11d-452c-9bc0-91995f4df1be name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:33:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:05.245216939Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox b7e51a30ae82e24329e0421e5579320371d69c1c18f6411a89b1220384efef30" id=0f8bbdb1-b11d-452c-9bc0-91995f4df1be name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:33:05 ip-10-0-136-68 systemd[1]: run-utsns-be18ed5b\x2d19d0\x2d4ae3\x2d9f22\x2d037e016174d4.mount: Deactivated successfully. Feb 23 18:33:05 ip-10-0-136-68 systemd[1]: run-ipcns-be18ed5b\x2d19d0\x2d4ae3\x2d9f22\x2d037e016174d4.mount: Deactivated successfully. Feb 23 18:33:05 ip-10-0-136-68 systemd[1]: run-netns-be18ed5b\x2d19d0\x2d4ae3\x2d9f22\x2d037e016174d4.mount: Deactivated successfully. 
Feb 23 18:33:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:05.271332193Z" level=info msg="runSandbox: deleting pod ID b7e51a30ae82e24329e0421e5579320371d69c1c18f6411a89b1220384efef30 from idIndex" id=0f8bbdb1-b11d-452c-9bc0-91995f4df1be name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:33:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:05.271394561Z" level=info msg="runSandbox: removing pod sandbox b7e51a30ae82e24329e0421e5579320371d69c1c18f6411a89b1220384efef30" id=0f8bbdb1-b11d-452c-9bc0-91995f4df1be name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:33:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:05.271432169Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox b7e51a30ae82e24329e0421e5579320371d69c1c18f6411a89b1220384efef30" id=0f8bbdb1-b11d-452c-9bc0-91995f4df1be name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:33:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:05.271446057Z" level=info msg="runSandbox: unmounting shmPath for sandbox b7e51a30ae82e24329e0421e5579320371d69c1c18f6411a89b1220384efef30" id=0f8bbdb1-b11d-452c-9bc0-91995f4df1be name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:33:05 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-b7e51a30ae82e24329e0421e5579320371d69c1c18f6411a89b1220384efef30-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:33:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:05.277308161Z" level=info msg="runSandbox: removing pod sandbox from storage: b7e51a30ae82e24329e0421e5579320371d69c1c18f6411a89b1220384efef30" id=0f8bbdb1-b11d-452c-9bc0-91995f4df1be name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:33:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:05.278888613Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=0f8bbdb1-b11d-452c-9bc0-91995f4df1be name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:33:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:05.278923558Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=0f8bbdb1-b11d-452c-9bc0-91995f4df1be name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:33:05 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:33:05.279127 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(b7e51a30ae82e24329e0421e5579320371d69c1c18f6411a89b1220384efef30): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:33:05 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:33:05.279195 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(b7e51a30ae82e24329e0421e5579320371d69c1c18f6411a89b1220384efef30): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:33:05 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:33:05.279234 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(b7e51a30ae82e24329e0421e5579320371d69c1c18f6411a89b1220384efef30): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:33:05 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:33:05.279341 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(b7e51a30ae82e24329e0421e5579320371d69c1c18f6411a89b1220384efef30): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 18:33:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:06.247019312Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(2afb5d7ed4793df8255e4cedc30c446ba5d68dee33ebe7ce804401884d5f7b4a): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=1fa19c89-3fa0-4045-9995-83fc8a5d1060 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:33:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:06.247065781Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 2afb5d7ed4793df8255e4cedc30c446ba5d68dee33ebe7ce804401884d5f7b4a" id=1fa19c89-3fa0-4045-9995-83fc8a5d1060 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:33:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:06.249916129Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(af23e534c169b5ad0ffe38d9da652dd33ab7b6f7c75776b6a3b6463903ad649d): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out 
waiting for the condition" id=d9828f59-4695-4cce-9f81-71d58c691caf name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:33:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:06.249965729Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox af23e534c169b5ad0ffe38d9da652dd33ab7b6f7c75776b6a3b6463903ad649d" id=d9828f59-4695-4cce-9f81-71d58c691caf name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:33:06 ip-10-0-136-68 systemd[1]: run-utsns-2e5987cc\x2dd93f\x2d4d28\x2dbc40\x2d5cbe066f32aa.mount: Deactivated successfully. Feb 23 18:33:06 ip-10-0-136-68 systemd[1]: run-utsns-6531ba69\x2de770\x2d4d0f\x2da225\x2d4aeaa328883c.mount: Deactivated successfully. Feb 23 18:33:06 ip-10-0-136-68 systemd[1]: run-ipcns-2e5987cc\x2dd93f\x2d4d28\x2dbc40\x2d5cbe066f32aa.mount: Deactivated successfully. Feb 23 18:33:06 ip-10-0-136-68 systemd[1]: run-ipcns-6531ba69\x2de770\x2d4d0f\x2da225\x2d4aeaa328883c.mount: Deactivated successfully. Feb 23 18:33:06 ip-10-0-136-68 systemd[1]: run-netns-6531ba69\x2de770\x2d4d0f\x2da225\x2d4aeaa328883c.mount: Deactivated successfully. Feb 23 18:33:06 ip-10-0-136-68 systemd[1]: run-netns-2e5987cc\x2dd93f\x2d4d28\x2dbc40\x2d5cbe066f32aa.mount: Deactivated successfully. 
Feb 23 18:33:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:06.270319184Z" level=info msg="runSandbox: deleting pod ID af23e534c169b5ad0ffe38d9da652dd33ab7b6f7c75776b6a3b6463903ad649d from idIndex" id=d9828f59-4695-4cce-9f81-71d58c691caf name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:33:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:06.270350825Z" level=info msg="runSandbox: removing pod sandbox af23e534c169b5ad0ffe38d9da652dd33ab7b6f7c75776b6a3b6463903ad649d" id=d9828f59-4695-4cce-9f81-71d58c691caf name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:33:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:06.270373282Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox af23e534c169b5ad0ffe38d9da652dd33ab7b6f7c75776b6a3b6463903ad649d" id=d9828f59-4695-4cce-9f81-71d58c691caf name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:33:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:06.270387195Z" level=info msg="runSandbox: unmounting shmPath for sandbox af23e534c169b5ad0ffe38d9da652dd33ab7b6f7c75776b6a3b6463903ad649d" id=d9828f59-4695-4cce-9f81-71d58c691caf name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:33:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:06.271315586Z" level=info msg="runSandbox: deleting pod ID 2afb5d7ed4793df8255e4cedc30c446ba5d68dee33ebe7ce804401884d5f7b4a from idIndex" id=1fa19c89-3fa0-4045-9995-83fc8a5d1060 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:33:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:06.271348002Z" level=info msg="runSandbox: removing pod sandbox 2afb5d7ed4793df8255e4cedc30c446ba5d68dee33ebe7ce804401884d5f7b4a" id=1fa19c89-3fa0-4045-9995-83fc8a5d1060 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:33:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:06.271380685Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 2afb5d7ed4793df8255e4cedc30c446ba5d68dee33ebe7ce804401884d5f7b4a" 
id=1fa19c89-3fa0-4045-9995-83fc8a5d1060 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:33:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:06.271404956Z" level=info msg="runSandbox: unmounting shmPath for sandbox 2afb5d7ed4793df8255e4cedc30c446ba5d68dee33ebe7ce804401884d5f7b4a" id=1fa19c89-3fa0-4045-9995-83fc8a5d1060 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:33:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:06.276309939Z" level=info msg="runSandbox: removing pod sandbox from storage: 2afb5d7ed4793df8255e4cedc30c446ba5d68dee33ebe7ce804401884d5f7b4a" id=1fa19c89-3fa0-4045-9995-83fc8a5d1060 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:33:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:06.276310130Z" level=info msg="runSandbox: removing pod sandbox from storage: af23e534c169b5ad0ffe38d9da652dd33ab7b6f7c75776b6a3b6463903ad649d" id=d9828f59-4695-4cce-9f81-71d58c691caf name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:33:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:06.277848145Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=1fa19c89-3fa0-4045-9995-83fc8a5d1060 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:33:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:06.277874893Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=1fa19c89-3fa0-4045-9995-83fc8a5d1060 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:33:06 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:33:06.278166 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(2afb5d7ed4793df8255e4cedc30c446ba5d68dee33ebe7ce804401884d5f7b4a): error adding pod 
openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Feb 23 18:33:06 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:33:06.278235 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(2afb5d7ed4793df8255e4cedc30c446ba5d68dee33ebe7ce804401884d5f7b4a): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:33:06 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:33:06.278302 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(2afb5d7ed4793df8255e4cedc30c446ba5d68dee33ebe7ce804401884d5f7b4a): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:33:06 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:33:06.278362 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(2afb5d7ed4793df8255e4cedc30c446ba5d68dee33ebe7ce804401884d5f7b4a): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 18:33:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:06.279223475Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=d9828f59-4695-4cce-9f81-71d58c691caf name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:33:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:06.279267774Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=d9828f59-4695-4cce-9f81-71d58c691caf name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:33:06 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:33:06.279418 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(af23e534c169b5ad0ffe38d9da652dd33ab7b6f7c75776b6a3b6463903ad649d): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:33:06 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:33:06.279458 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(af23e534c169b5ad0ffe38d9da652dd33ab7b6f7c75776b6a3b6463903ad649d): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:33:06 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:33:06.279480 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(af23e534c169b5ad0ffe38d9da652dd33ab7b6f7c75776b6a3b6463903ad649d): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:33:06 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:33:06.279535 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(af23e534c169b5ad0ffe38d9da652dd33ab7b6f7c75776b6a3b6463903ad649d): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 18:33:07 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:33:07.217416 2199 scope.go:115] "RemoveContainer" containerID="4ee59bdc119c54bc103dc357cd329f52672ee2104c1542ccc63e94991eb93bcb" Feb 23 18:33:07 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:33:07.217867 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:33:07 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-af23e534c169b5ad0ffe38d9da652dd33ab7b6f7c75776b6a3b6463903ad649d-userdata-shm.mount: Deactivated successfully. Feb 23 18:33:07 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-2afb5d7ed4793df8255e4cedc30c446ba5d68dee33ebe7ce804401884d5f7b4a-userdata-shm.mount: Deactivated successfully. Feb 23 18:33:10 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:33:10.216692 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 18:33:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:10.217011485Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=8fb5e2e7-e4e0-4b4c-b73e-64fd1913e5bb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:33:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:10.217067724Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:33:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:10.222874222Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:66b4187273a3ed5062b251910466cf007a184ed17e0e7ef091f27c3125ef22d1 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/04cd6550-a7a6-4f36-90a7-4ef9184e8a4b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:33:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:10.222907835Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:33:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:17.236339725Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(581a330e8de16c7615eecd8db4a8d1064920787d735104d2da7117ba310921ec): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): netplugin failed with no error message: signal: killed" id=c670ffd3-c02e-4bb4-878a-58cfbb487af4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:33:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:17.236384135Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 
581a330e8de16c7615eecd8db4a8d1064920787d735104d2da7117ba310921ec" id=c670ffd3-c02e-4bb4-878a-58cfbb487af4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:33:17 ip-10-0-136-68 systemd[1]: run-utsns-5b25b008\x2d5088\x2d4a34\x2dbc8f\x2dab2e2b29cb37.mount: Deactivated successfully. Feb 23 18:33:17 ip-10-0-136-68 systemd[1]: run-ipcns-5b25b008\x2d5088\x2d4a34\x2dbc8f\x2dab2e2b29cb37.mount: Deactivated successfully. Feb 23 18:33:17 ip-10-0-136-68 systemd[1]: run-netns-5b25b008\x2d5088\x2d4a34\x2dbc8f\x2dab2e2b29cb37.mount: Deactivated successfully. Feb 23 18:33:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:17.266327076Z" level=info msg="runSandbox: deleting pod ID 581a330e8de16c7615eecd8db4a8d1064920787d735104d2da7117ba310921ec from idIndex" id=c670ffd3-c02e-4bb4-878a-58cfbb487af4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:33:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:17.266361380Z" level=info msg="runSandbox: removing pod sandbox 581a330e8de16c7615eecd8db4a8d1064920787d735104d2da7117ba310921ec" id=c670ffd3-c02e-4bb4-878a-58cfbb487af4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:33:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:17.266400599Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 581a330e8de16c7615eecd8db4a8d1064920787d735104d2da7117ba310921ec" id=c670ffd3-c02e-4bb4-878a-58cfbb487af4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:33:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:17.266416468Z" level=info msg="runSandbox: unmounting shmPath for sandbox 581a330e8de16c7615eecd8db4a8d1064920787d735104d2da7117ba310921ec" id=c670ffd3-c02e-4bb4-878a-58cfbb487af4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:33:17 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-581a330e8de16c7615eecd8db4a8d1064920787d735104d2da7117ba310921ec-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:33:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:17.272302749Z" level=info msg="runSandbox: removing pod sandbox from storage: 581a330e8de16c7615eecd8db4a8d1064920787d735104d2da7117ba310921ec" id=c670ffd3-c02e-4bb4-878a-58cfbb487af4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:33:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:17.273764664Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=c670ffd3-c02e-4bb4-878a-58cfbb487af4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:33:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:17.273792620Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=c670ffd3-c02e-4bb4-878a-58cfbb487af4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:33:17 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:33:17.273982 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(581a330e8de16c7615eecd8db4a8d1064920787d735104d2da7117ba310921ec): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:33:17 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:33:17.274038 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(581a330e8de16c7615eecd8db4a8d1064920787d735104d2da7117ba310921ec): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:33:17 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:33:17.274061 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(581a330e8de16c7615eecd8db4a8d1064920787d735104d2da7117ba310921ec): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:33:17 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:33:17.274116 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(581a330e8de16c7615eecd8db4a8d1064920787d735104d2da7117ba310921ec): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 18:33:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:33:19.216654 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:33:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:19.217021961Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=ca0516b2-e0d1-47f8-a1cd-08b893d0089c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:33:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:19.217086357Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:33:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:19.222180388Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:4c2c9960205e5e2fb0921ecb6d748ebdc36d4ba1e6203371d736bc491d0311a6 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/5f03364f-478a-4af9-90ce-c0980fc5e02f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:33:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:19.222206558Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:33:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:33:20.216975 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:33:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:20.217331853Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=9e2f0579-a7be-4ada-8520-b5ea4bfb822b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:33:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:20.217386743Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:33:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:20.224279406Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:905e1776e5523639c43fdb269157251a47efc414ecd84cd6fb1a4ad3f5076c17 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/d04a1090-e1ec-41c8-ba0e-41f696d6c5b3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:33:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:20.224314711Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:33:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:33:21.217274 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:33:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:33:21.217379 2199 scope.go:115] "RemoveContainer" containerID="4ee59bdc119c54bc103dc357cd329f52672ee2104c1542ccc63e94991eb93bcb" Feb 23 18:33:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:21.217617502Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=eee2205f-7a2f-492f-a10a-238164387d5a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:33:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:21.217680026Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:33:21 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:33:21.217902 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:33:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:21.222492676Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:745e4bccbd34a7bfc9c0c8f0dae5b9bebd9a1666ccf3ebac7515cf532cf59262 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/a6975d81-77c6-4efd-97c2-d673b5c1fc13 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:33:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:21.222517059Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:33:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:33:26.292804 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc 
error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:33:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:33:26.293036 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:33:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:33:26.293326 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:33:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:33:26.293353 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:33:30 ip-10-0-136-68 
kubenswrapper[2199]: I0223 18:33:30.217069 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:33:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:30.217495529Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=3f1697ef-2d44-4d54-99e8-76087926042e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:33:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:30.217561528Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:33:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:30.222849243Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:86bc61e410f4753d18589caf7cd991c4daf0369e21d724dc309dc5d2316df1be UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/561828d1-3a27-428c-96cc-5bc5acd8ab83 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:33:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:30.222873266Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:33:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:33:34.217322 2199 scope.go:115] "RemoveContainer" containerID="4ee59bdc119c54bc103dc357cd329f52672ee2104c1542ccc63e94991eb93bcb" Feb 23 18:33:34 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:33:34.217923 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:33:46 
ip-10-0-136-68 kubenswrapper[2199]: I0223 18:33:46.216598 2199 scope.go:115] "RemoveContainer" containerID="4ee59bdc119c54bc103dc357cd329f52672ee2104c1542ccc63e94991eb93bcb" Feb 23 18:33:46 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:33:46.217182 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:33:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:55.236020609Z" level=info msg="NetworkStart: stopping network for sandbox 66b4187273a3ed5062b251910466cf007a184ed17e0e7ef091f27c3125ef22d1" id=8fb5e2e7-e4e0-4b4c-b73e-64fd1913e5bb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:33:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:55.236136508Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:66b4187273a3ed5062b251910466cf007a184ed17e0e7ef091f27c3125ef22d1 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/04cd6550-a7a6-4f36-90a7-4ef9184e8a4b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:33:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:55.236165154Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:33:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:55.236173358Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:33:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:33:55.236179914Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:33:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 
18:33:56.291844 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:33:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:33:56.292051 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:33:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:33:56.292313 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:33:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:33:56.292340 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" 
podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:34:00 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:34:00.217125 2199 scope.go:115] "RemoveContainer" containerID="4ee59bdc119c54bc103dc357cd329f52672ee2104c1542ccc63e94991eb93bcb" Feb 23 18:34:00 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:34:00.217760 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:34:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:04.234033673Z" level=info msg="NetworkStart: stopping network for sandbox 4c2c9960205e5e2fb0921ecb6d748ebdc36d4ba1e6203371d736bc491d0311a6" id=ca0516b2-e0d1-47f8-a1cd-08b893d0089c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:34:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:04.234138943Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:4c2c9960205e5e2fb0921ecb6d748ebdc36d4ba1e6203371d736bc491d0311a6 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/5f03364f-478a-4af9-90ce-c0980fc5e02f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:34:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:04.234165470Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:34:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:04.234173783Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:34:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:04.234183586Z" level=info msg="Deleting pod 
openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:34:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:05.237602586Z" level=info msg="NetworkStart: stopping network for sandbox 905e1776e5523639c43fdb269157251a47efc414ecd84cd6fb1a4ad3f5076c17" id=9e2f0579-a7be-4ada-8520-b5ea4bfb822b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:34:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:05.237719043Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:905e1776e5523639c43fdb269157251a47efc414ecd84cd6fb1a4ad3f5076c17 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/d04a1090-e1ec-41c8-ba0e-41f696d6c5b3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:34:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:05.237754047Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:34:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:05.237762664Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:34:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:05.237769573Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:34:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:06.234909218Z" level=info msg="NetworkStart: stopping network for sandbox 745e4bccbd34a7bfc9c0c8f0dae5b9bebd9a1666ccf3ebac7515cf532cf59262" id=eee2205f-7a2f-492f-a10a-238164387d5a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:34:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:06.235025878Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:745e4bccbd34a7bfc9c0c8f0dae5b9bebd9a1666ccf3ebac7515cf532cf59262 
UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/a6975d81-77c6-4efd-97c2-d673b5c1fc13 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:34:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:06.235056247Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:34:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:06.235063493Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:34:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:06.235069853Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:34:12 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:34:12.216475 2199 scope.go:115] "RemoveContainer" containerID="4ee59bdc119c54bc103dc357cd329f52672ee2104c1542ccc63e94991eb93bcb" Feb 23 18:34:12 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:34:12.217038 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:34:14 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:34:14.217200 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:34:14 ip-10-0-136-68 
kubenswrapper[2199]: E0223 18:34:14.217921 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:34:14 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:34:14.218317 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:34:14 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:34:14.218357 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:34:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:15.235039865Z" level=info msg="NetworkStart: stopping network for sandbox 86bc61e410f4753d18589caf7cd991c4daf0369e21d724dc309dc5d2316df1be" id=3f1697ef-2d44-4d54-99e8-76087926042e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:34:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:15.235168547Z" level=info msg="Got pod network &{Name:network-check-target-52ltr 
Namespace:openshift-network-diagnostics ID:86bc61e410f4753d18589caf7cd991c4daf0369e21d724dc309dc5d2316df1be UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/561828d1-3a27-428c-96cc-5bc5acd8ab83 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:34:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:15.235211478Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:34:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:15.235224350Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:34:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:15.235234072Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:34:25 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:34:25.216961 2199 scope.go:115] "RemoveContainer" containerID="4ee59bdc119c54bc103dc357cd329f52672ee2104c1542ccc63e94991eb93bcb" Feb 23 18:34:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:34:25.217386 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:34:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:34:26.292164 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" 
containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:34:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:34:26.292445 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:34:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:34:26.292668 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:34:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:34:26.292694 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:34:40 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:34:40.217438 2199 scope.go:115] "RemoveContainer" containerID="4ee59bdc119c54bc103dc357cd329f52672ee2104c1542ccc63e94991eb93bcb" Feb 23 18:34:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:34:40.218006 2199 pod_workers.go:965] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:34:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:40.245109379Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(66b4187273a3ed5062b251910466cf007a184ed17e0e7ef091f27c3125ef22d1): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=8fb5e2e7-e4e0-4b4c-b73e-64fd1913e5bb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:34:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:40.245157519Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 66b4187273a3ed5062b251910466cf007a184ed17e0e7ef091f27c3125ef22d1" id=8fb5e2e7-e4e0-4b4c-b73e-64fd1913e5bb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:34:40 ip-10-0-136-68 systemd[1]: run-utsns-04cd6550\x2da7a6\x2d4f36\x2d90a7\x2d4ef9184e8a4b.mount: Deactivated successfully. Feb 23 18:34:40 ip-10-0-136-68 systemd[1]: run-ipcns-04cd6550\x2da7a6\x2d4f36\x2d90a7\x2d4ef9184e8a4b.mount: Deactivated successfully. Feb 23 18:34:40 ip-10-0-136-68 systemd[1]: run-netns-04cd6550\x2da7a6\x2d4f36\x2d90a7\x2d4ef9184e8a4b.mount: Deactivated successfully. 
Feb 23 18:34:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:40.268335124Z" level=info msg="runSandbox: deleting pod ID 66b4187273a3ed5062b251910466cf007a184ed17e0e7ef091f27c3125ef22d1 from idIndex" id=8fb5e2e7-e4e0-4b4c-b73e-64fd1913e5bb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:34:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:40.268370631Z" level=info msg="runSandbox: removing pod sandbox 66b4187273a3ed5062b251910466cf007a184ed17e0e7ef091f27c3125ef22d1" id=8fb5e2e7-e4e0-4b4c-b73e-64fd1913e5bb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:34:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:40.268395198Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 66b4187273a3ed5062b251910466cf007a184ed17e0e7ef091f27c3125ef22d1" id=8fb5e2e7-e4e0-4b4c-b73e-64fd1913e5bb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:34:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:40.268411415Z" level=info msg="runSandbox: unmounting shmPath for sandbox 66b4187273a3ed5062b251910466cf007a184ed17e0e7ef091f27c3125ef22d1" id=8fb5e2e7-e4e0-4b4c-b73e-64fd1913e5bb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:34:40 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-66b4187273a3ed5062b251910466cf007a184ed17e0e7ef091f27c3125ef22d1-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:34:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:40.276308920Z" level=info msg="runSandbox: removing pod sandbox from storage: 66b4187273a3ed5062b251910466cf007a184ed17e0e7ef091f27c3125ef22d1" id=8fb5e2e7-e4e0-4b4c-b73e-64fd1913e5bb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:34:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:40.277967649Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=8fb5e2e7-e4e0-4b4c-b73e-64fd1913e5bb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:34:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:40.277998478Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=8fb5e2e7-e4e0-4b4c-b73e-64fd1913e5bb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:34:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:34:40.278185 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(66b4187273a3ed5062b251910466cf007a184ed17e0e7ef091f27c3125ef22d1): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:34:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:34:40.278235 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(66b4187273a3ed5062b251910466cf007a184ed17e0e7ef091f27c3125ef22d1): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:34:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:34:40.278288 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(66b4187273a3ed5062b251910466cf007a184ed17e0e7ef091f27c3125ef22d1): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:34:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:34:40.278357 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(66b4187273a3ed5062b251910466cf007a184ed17e0e7ef091f27c3125ef22d1): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 18:34:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:49.244055711Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(4c2c9960205e5e2fb0921ecb6d748ebdc36d4ba1e6203371d736bc491d0311a6): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=ca0516b2-e0d1-47f8-a1cd-08b893d0089c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:34:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:49.244110590Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 4c2c9960205e5e2fb0921ecb6d748ebdc36d4ba1e6203371d736bc491d0311a6" id=ca0516b2-e0d1-47f8-a1cd-08b893d0089c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:34:49 ip-10-0-136-68 systemd[1]: run-utsns-5f03364f\x2d478a\x2d4af9\x2d90ce\x2dc0980fc5e02f.mount: Deactivated successfully. Feb 23 18:34:49 ip-10-0-136-68 systemd[1]: run-ipcns-5f03364f\x2d478a\x2d4af9\x2d90ce\x2dc0980fc5e02f.mount: Deactivated successfully. Feb 23 18:34:49 ip-10-0-136-68 systemd[1]: run-netns-5f03364f\x2d478a\x2d4af9\x2d90ce\x2dc0980fc5e02f.mount: Deactivated successfully. 
Feb 23 18:34:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:49.263312505Z" level=info msg="runSandbox: deleting pod ID 4c2c9960205e5e2fb0921ecb6d748ebdc36d4ba1e6203371d736bc491d0311a6 from idIndex" id=ca0516b2-e0d1-47f8-a1cd-08b893d0089c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:34:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:49.263348973Z" level=info msg="runSandbox: removing pod sandbox 4c2c9960205e5e2fb0921ecb6d748ebdc36d4ba1e6203371d736bc491d0311a6" id=ca0516b2-e0d1-47f8-a1cd-08b893d0089c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:34:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:49.263395259Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 4c2c9960205e5e2fb0921ecb6d748ebdc36d4ba1e6203371d736bc491d0311a6" id=ca0516b2-e0d1-47f8-a1cd-08b893d0089c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:34:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:49.263417713Z" level=info msg="runSandbox: unmounting shmPath for sandbox 4c2c9960205e5e2fb0921ecb6d748ebdc36d4ba1e6203371d736bc491d0311a6" id=ca0516b2-e0d1-47f8-a1cd-08b893d0089c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:34:49 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-4c2c9960205e5e2fb0921ecb6d748ebdc36d4ba1e6203371d736bc491d0311a6-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:34:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:49.269302021Z" level=info msg="runSandbox: removing pod sandbox from storage: 4c2c9960205e5e2fb0921ecb6d748ebdc36d4ba1e6203371d736bc491d0311a6" id=ca0516b2-e0d1-47f8-a1cd-08b893d0089c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:34:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:49.270888404Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=ca0516b2-e0d1-47f8-a1cd-08b893d0089c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:34:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:49.270917620Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=ca0516b2-e0d1-47f8-a1cd-08b893d0089c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:34:49 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:34:49.271118 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(4c2c9960205e5e2fb0921ecb6d748ebdc36d4ba1e6203371d736bc491d0311a6): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:34:49 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:34:49.271168 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(4c2c9960205e5e2fb0921ecb6d748ebdc36d4ba1e6203371d736bc491d0311a6): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:34:49 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:34:49.271194 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(4c2c9960205e5e2fb0921ecb6d748ebdc36d4ba1e6203371d736bc491d0311a6): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:34:49 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:34:49.271270 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(4c2c9960205e5e2fb0921ecb6d748ebdc36d4ba1e6203371d736bc491d0311a6): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 18:34:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:50.247754399Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(905e1776e5523639c43fdb269157251a47efc414ecd84cd6fb1a4ad3f5076c17): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=9e2f0579-a7be-4ada-8520-b5ea4bfb822b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:34:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:50.247801920Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 905e1776e5523639c43fdb269157251a47efc414ecd84cd6fb1a4ad3f5076c17" id=9e2f0579-a7be-4ada-8520-b5ea4bfb822b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:34:50 ip-10-0-136-68 systemd[1]: run-utsns-d04a1090\x2de1ec\x2d41c8\x2dba0e\x2d41f696d6c5b3.mount: Deactivated successfully. Feb 23 18:34:50 ip-10-0-136-68 systemd[1]: run-ipcns-d04a1090\x2de1ec\x2d41c8\x2dba0e\x2d41f696d6c5b3.mount: Deactivated successfully. Feb 23 18:34:50 ip-10-0-136-68 systemd[1]: run-netns-d04a1090\x2de1ec\x2d41c8\x2dba0e\x2d41f696d6c5b3.mount: Deactivated successfully. 
Feb 23 18:34:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:50.261318624Z" level=info msg="runSandbox: deleting pod ID 905e1776e5523639c43fdb269157251a47efc414ecd84cd6fb1a4ad3f5076c17 from idIndex" id=9e2f0579-a7be-4ada-8520-b5ea4bfb822b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:34:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:50.261354379Z" level=info msg="runSandbox: removing pod sandbox 905e1776e5523639c43fdb269157251a47efc414ecd84cd6fb1a4ad3f5076c17" id=9e2f0579-a7be-4ada-8520-b5ea4bfb822b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:34:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:50.261379303Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 905e1776e5523639c43fdb269157251a47efc414ecd84cd6fb1a4ad3f5076c17" id=9e2f0579-a7be-4ada-8520-b5ea4bfb822b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:34:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:50.261391271Z" level=info msg="runSandbox: unmounting shmPath for sandbox 905e1776e5523639c43fdb269157251a47efc414ecd84cd6fb1a4ad3f5076c17" id=9e2f0579-a7be-4ada-8520-b5ea4bfb822b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:34:50 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-905e1776e5523639c43fdb269157251a47efc414ecd84cd6fb1a4ad3f5076c17-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:34:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:50.270288045Z" level=info msg="runSandbox: removing pod sandbox from storage: 905e1776e5523639c43fdb269157251a47efc414ecd84cd6fb1a4ad3f5076c17" id=9e2f0579-a7be-4ada-8520-b5ea4bfb822b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:34:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:50.271761735Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=9e2f0579-a7be-4ada-8520-b5ea4bfb822b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:34:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:50.271788207Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=9e2f0579-a7be-4ada-8520-b5ea4bfb822b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:34:50 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:34:50.271950 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(905e1776e5523639c43fdb269157251a47efc414ecd84cd6fb1a4ad3f5076c17): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:34:50 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:34:50.272013 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(905e1776e5523639c43fdb269157251a47efc414ecd84cd6fb1a4ad3f5076c17): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:34:50 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:34:50.272053 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(905e1776e5523639c43fdb269157251a47efc414ecd84cd6fb1a4ad3f5076c17): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:34:50 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:34:50.272131 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(905e1776e5523639c43fdb269157251a47efc414ecd84cd6fb1a4ad3f5076c17): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 18:34:51 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:34:51.217187 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 18:34:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:51.217574426Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=d07f43ab-47e4-474f-b4b6-f88ac33872fa name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:34:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:51.217643802Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:34:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:51.223003578Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:05a13332569cee49afcf7fc109aec75a545a17ed525df3e97bf5198cd804f71c UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/7235f409-24ad-435e-b142-7069ea0f836d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:34:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:51.223040226Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:34:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:51.244834487Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(745e4bccbd34a7bfc9c0c8f0dae5b9bebd9a1666ccf3ebac7515cf532cf59262): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=eee2205f-7a2f-492f-a10a-238164387d5a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:34:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:51.244876127Z" level=info 
msg="runSandbox: cleaning up namespaces after failing to run sandbox 745e4bccbd34a7bfc9c0c8f0dae5b9bebd9a1666ccf3ebac7515cf532cf59262" id=eee2205f-7a2f-492f-a10a-238164387d5a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:34:51 ip-10-0-136-68 systemd[1]: run-utsns-a6975d81\x2d77c6\x2d4efd\x2d97c2\x2dd673b5c1fc13.mount: Deactivated successfully. Feb 23 18:34:51 ip-10-0-136-68 systemd[1]: run-ipcns-a6975d81\x2d77c6\x2d4efd\x2d97c2\x2dd673b5c1fc13.mount: Deactivated successfully. Feb 23 18:34:51 ip-10-0-136-68 systemd[1]: run-netns-a6975d81\x2d77c6\x2d4efd\x2d97c2\x2dd673b5c1fc13.mount: Deactivated successfully. Feb 23 18:34:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:51.275330512Z" level=info msg="runSandbox: deleting pod ID 745e4bccbd34a7bfc9c0c8f0dae5b9bebd9a1666ccf3ebac7515cf532cf59262 from idIndex" id=eee2205f-7a2f-492f-a10a-238164387d5a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:34:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:51.275370214Z" level=info msg="runSandbox: removing pod sandbox 745e4bccbd34a7bfc9c0c8f0dae5b9bebd9a1666ccf3ebac7515cf532cf59262" id=eee2205f-7a2f-492f-a10a-238164387d5a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:34:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:51.275401687Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 745e4bccbd34a7bfc9c0c8f0dae5b9bebd9a1666ccf3ebac7515cf532cf59262" id=eee2205f-7a2f-492f-a10a-238164387d5a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:34:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:51.275461388Z" level=info msg="runSandbox: unmounting shmPath for sandbox 745e4bccbd34a7bfc9c0c8f0dae5b9bebd9a1666ccf3ebac7515cf532cf59262" id=eee2205f-7a2f-492f-a10a-238164387d5a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:34:51 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-745e4bccbd34a7bfc9c0c8f0dae5b9bebd9a1666ccf3ebac7515cf532cf59262-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:34:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:51.279298870Z" level=info msg="runSandbox: removing pod sandbox from storage: 745e4bccbd34a7bfc9c0c8f0dae5b9bebd9a1666ccf3ebac7515cf532cf59262" id=eee2205f-7a2f-492f-a10a-238164387d5a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:34:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:51.280813913Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=eee2205f-7a2f-492f-a10a-238164387d5a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:34:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:34:51.280846671Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=eee2205f-7a2f-492f-a10a-238164387d5a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:34:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:34:51.281043 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(745e4bccbd34a7bfc9c0c8f0dae5b9bebd9a1666ccf3ebac7515cf532cf59262): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:34:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:34:51.281104 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(745e4bccbd34a7bfc9c0c8f0dae5b9bebd9a1666ccf3ebac7515cf532cf59262): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:34:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:34:51.281139 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(745e4bccbd34a7bfc9c0c8f0dae5b9bebd9a1666ccf3ebac7515cf532cf59262): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:34:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:34:51.281230 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(745e4bccbd34a7bfc9c0c8f0dae5b9bebd9a1666ccf3ebac7515cf532cf59262): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 18:34:53 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:34:53.216801 2199 scope.go:115] "RemoveContainer" containerID="4ee59bdc119c54bc103dc357cd329f52672ee2104c1542ccc63e94991eb93bcb" Feb 23 18:34:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:34:53.217157 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:34:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:34:56.292056 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:34:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:34:56.292327 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:34:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:34:56.292536 2199 remote_runtime.go:479] "ExecSync 
cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:34:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:34:56.292565 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:35:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:35:00.244536395Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(86bc61e410f4753d18589caf7cd991c4daf0369e21d724dc309dc5d2316df1be): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=3f1697ef-2d44-4d54-99e8-76087926042e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:35:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:35:00.244587707Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 86bc61e410f4753d18589caf7cd991c4daf0369e21d724dc309dc5d2316df1be" 
id=3f1697ef-2d44-4d54-99e8-76087926042e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:35:00 ip-10-0-136-68 systemd[1]: run-utsns-561828d1\x2d3a27\x2d428c\x2d96cc\x2d5bc5acd8ab83.mount: Deactivated successfully. Feb 23 18:35:00 ip-10-0-136-68 systemd[1]: run-ipcns-561828d1\x2d3a27\x2d428c\x2d96cc\x2d5bc5acd8ab83.mount: Deactivated successfully. Feb 23 18:35:00 ip-10-0-136-68 systemd[1]: run-netns-561828d1\x2d3a27\x2d428c\x2d96cc\x2d5bc5acd8ab83.mount: Deactivated successfully. Feb 23 18:35:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:35:00.264326212Z" level=info msg="runSandbox: deleting pod ID 86bc61e410f4753d18589caf7cd991c4daf0369e21d724dc309dc5d2316df1be from idIndex" id=3f1697ef-2d44-4d54-99e8-76087926042e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:35:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:35:00.264359932Z" level=info msg="runSandbox: removing pod sandbox 86bc61e410f4753d18589caf7cd991c4daf0369e21d724dc309dc5d2316df1be" id=3f1697ef-2d44-4d54-99e8-76087926042e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:35:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:35:00.264388588Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 86bc61e410f4753d18589caf7cd991c4daf0369e21d724dc309dc5d2316df1be" id=3f1697ef-2d44-4d54-99e8-76087926042e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:35:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:35:00.264401208Z" level=info msg="runSandbox: unmounting shmPath for sandbox 86bc61e410f4753d18589caf7cd991c4daf0369e21d724dc309dc5d2316df1be" id=3f1697ef-2d44-4d54-99e8-76087926042e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:35:00 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-86bc61e410f4753d18589caf7cd991c4daf0369e21d724dc309dc5d2316df1be-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:35:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:35:00.268316730Z" level=info msg="runSandbox: removing pod sandbox from storage: 86bc61e410f4753d18589caf7cd991c4daf0369e21d724dc309dc5d2316df1be" id=3f1697ef-2d44-4d54-99e8-76087926042e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:35:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:35:00.269774158Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=3f1697ef-2d44-4d54-99e8-76087926042e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:35:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:35:00.269805780Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=3f1697ef-2d44-4d54-99e8-76087926042e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:35:00 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:35:00.270017 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(86bc61e410f4753d18589caf7cd991c4daf0369e21d724dc309dc5d2316df1be): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:35:00 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:35:00.270068 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(86bc61e410f4753d18589caf7cd991c4daf0369e21d724dc309dc5d2316df1be): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:35:00 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:35:00.270097 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(86bc61e410f4753d18589caf7cd991c4daf0369e21d724dc309dc5d2316df1be): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:35:00 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:35:00.270152 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(86bc61e410f4753d18589caf7cd991c4daf0369e21d724dc309dc5d2316df1be): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 18:35:02 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:35:02.217120 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:35:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:35:02.217630566Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=7ac2cf7e-66c7-48a5-8fd8-5f836e746d68 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:35:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:35:02.217706711Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:35:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:35:02.223440991Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:cd514f4e53e2f2ab81c6bedb4e153806b36180977ca2db71e660956f46cfe48e UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/89850953-8032-43c9-a83c-f69c9a1857ae Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:35:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:35:02.223479254Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:35:03 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:35:03.216416 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:35:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:35:03.216819797Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=db43ee92-dcd6-4e44-a1b1-2ef1ece0581b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:35:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:35:03.216883039Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:35:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:35:03.222037398Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:b5707933b514c3d1000045e308ef55a2f74f0be9c967775a5ed67f6519410efa UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/3c2a3ba7-8d93-4742-98ac-ee6ff42f569c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:35:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:35:03.222072181Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:35:05 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:35:05.216601 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:35:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:35:05.216985813Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=2e9c854d-43a8-4043-bbd5-9508e44b8d43 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:35:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:35:05.217053503Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:35:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:35:05.222657146Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:3cb1feb2edd0fed955df0a2e58076c73e27aeb06c8af230f0ab24128a38e2a45 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/fe028ffd-71f3-43ce-aebd-be06eb4edca8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:35:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:35:05.222683372Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:35:07 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:35:07.216840 2199 scope.go:115] "RemoveContainer" containerID="4ee59bdc119c54bc103dc357cd329f52672ee2104c1542ccc63e94991eb93bcb" Feb 23 18:35:07 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:35:07.217222 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:35:11 ip-10-0-136-68 kubenswrapper[2199]: I0223 
18:35:11.217341 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:35:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:35:11.217747014Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=04b3464b-8b60-4aa1-aeb3-b9b112c7f5bb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:35:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:35:11.217797200Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:35:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:35:11.223469657Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:9358c2868f9273c406751bcd1c7128ea1518e617ea7c0000a79f0d63e67f16e1 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/2f2d41f8-64a1-4870-935c-4027f87e6797 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:35:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:35:11.223506846Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:35:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:35:20.194576618Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3" id=81d1812a-374b-46e3-9122-ed1434b3c82a name=/runtime.v1.ImageService/ImageStatus Feb 23 18:35:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:35:20.194770656Z" level=info msg="Image status: 
&ImageStatusResponse{Image:&Image{Id:51cb7555d0a960fc0daae74a5b5c47c19fd1ae7bd31e2616ebc88f696f511e4d,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3],Size_:351828271,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=81d1812a-374b-46e3-9122-ed1434b3c82a name=/runtime.v1.ImageService/ImageStatus Feb 23 18:35:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:35:21.216549 2199 scope.go:115] "RemoveContainer" containerID="4ee59bdc119c54bc103dc357cd329f52672ee2104c1542ccc63e94991eb93bcb" Feb 23 18:35:21 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:35:21.216952 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:35:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:35:26.291986 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:35:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:35:26.292382 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is 
running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:35:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:35:26.292595 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:35:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:35:26.292634 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:35:34 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:35:34.217570 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:35:34 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:35:34.217949 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is 
not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:35:34 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:35:34.218197 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:35:34 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:35:34.218234 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:35:35 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:35:35.216982 2199 scope.go:115] "RemoveContainer" containerID="4ee59bdc119c54bc103dc357cd329f52672ee2104c1542ccc63e94991eb93bcb" Feb 23 18:35:35 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:35:35.217381 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" 
pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:35:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:35:36.235238149Z" level=info msg="NetworkStart: stopping network for sandbox 05a13332569cee49afcf7fc109aec75a545a17ed525df3e97bf5198cd804f71c" id=d07f43ab-47e4-474f-b4b6-f88ac33872fa name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:35:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:35:36.235369000Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:05a13332569cee49afcf7fc109aec75a545a17ed525df3e97bf5198cd804f71c UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/7235f409-24ad-435e-b142-7069ea0f836d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:35:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:35:36.235399694Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:35:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:35:36.235407371Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:35:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:35:36.235414439Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:35:46 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:35:46.216707 2199 scope.go:115] "RemoveContainer" containerID="4ee59bdc119c54bc103dc357cd329f52672ee2104c1542ccc63e94991eb93bcb" Feb 23 18:35:46 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:35:46.217293 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" 
pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:35:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:35:47.234504504Z" level=info msg="NetworkStart: stopping network for sandbox cd514f4e53e2f2ab81c6bedb4e153806b36180977ca2db71e660956f46cfe48e" id=7ac2cf7e-66c7-48a5-8fd8-5f836e746d68 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:35:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:35:47.234642503Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:cd514f4e53e2f2ab81c6bedb4e153806b36180977ca2db71e660956f46cfe48e UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/89850953-8032-43c9-a83c-f69c9a1857ae Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:35:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:35:47.234683471Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:35:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:35:47.234694773Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:35:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:35:47.234704061Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:35:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:35:48.233975424Z" level=info msg="NetworkStart: stopping network for sandbox b5707933b514c3d1000045e308ef55a2f74f0be9c967775a5ed67f6519410efa" id=db43ee92-dcd6-4e44-a1b1-2ef1ece0581b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:35:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:35:48.234082277Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:b5707933b514c3d1000045e308ef55a2f74f0be9c967775a5ed67f6519410efa UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 
NetNS:/var/run/netns/3c2a3ba7-8d93-4742-98ac-ee6ff42f569c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:35:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:35:48.234109852Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:35:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:35:48.234116673Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:35:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:35:48.234123962Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:35:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:35:50.233778006Z" level=info msg="NetworkStart: stopping network for sandbox 3cb1feb2edd0fed955df0a2e58076c73e27aeb06c8af230f0ab24128a38e2a45" id=2e9c854d-43a8-4043-bbd5-9508e44b8d43 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:35:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:35:50.233887041Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:3cb1feb2edd0fed955df0a2e58076c73e27aeb06c8af230f0ab24128a38e2a45 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/fe028ffd-71f3-43ce-aebd-be06eb4edca8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:35:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:35:50.233915080Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:35:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:35:50.233922452Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:35:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:35:50.233930419Z" level=info msg="Deleting pod 
openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:35:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:35:56.236886588Z" level=info msg="NetworkStart: stopping network for sandbox 9358c2868f9273c406751bcd1c7128ea1518e617ea7c0000a79f0d63e67f16e1" id=04b3464b-8b60-4aa1-aeb3-b9b112c7f5bb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:35:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:35:56.237019338Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:9358c2868f9273c406751bcd1c7128ea1518e617ea7c0000a79f0d63e67f16e1 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/2f2d41f8-64a1-4870-935c-4027f87e6797 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:35:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:35:56.237061498Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:35:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:35:56.237073294Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:35:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:35:56.237083161Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:35:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:35:56.291828 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 
18:35:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:35:56.292102 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:35:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:35:56.292381 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:35:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:35:56.292415 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 18:36:01 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:36:01.216681 2199 scope.go:115] "RemoveContainer" containerID="4ee59bdc119c54bc103dc357cd329f52672ee2104c1542ccc63e94991eb93bcb"
Feb 23 18:36:01 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:36:01.217046 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 18:36:12 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:36:12.218218 2199 scope.go:115] "RemoveContainer" containerID="4ee59bdc119c54bc103dc357cd329f52672ee2104c1542ccc63e94991eb93bcb"
Feb 23 18:36:12 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:36:12.218820 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 18:36:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:21.245453338Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(05a13332569cee49afcf7fc109aec75a545a17ed525df3e97bf5198cd804f71c): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d07f43ab-47e4-474f-b4b6-f88ac33872fa name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:36:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:21.245499076Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 05a13332569cee49afcf7fc109aec75a545a17ed525df3e97bf5198cd804f71c" id=d07f43ab-47e4-474f-b4b6-f88ac33872fa name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:36:21 ip-10-0-136-68 systemd[1]: run-utsns-7235f409\x2d24ad\x2d435e\x2db142\x2d7069ea0f836d.mount: Deactivated successfully.
Feb 23 18:36:21 ip-10-0-136-68 systemd[1]: run-ipcns-7235f409\x2d24ad\x2d435e\x2db142\x2d7069ea0f836d.mount: Deactivated successfully.
Feb 23 18:36:21 ip-10-0-136-68 systemd[1]: run-netns-7235f409\x2d24ad\x2d435e\x2db142\x2d7069ea0f836d.mount: Deactivated successfully.
Feb 23 18:36:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:21.277340967Z" level=info msg="runSandbox: deleting pod ID 05a13332569cee49afcf7fc109aec75a545a17ed525df3e97bf5198cd804f71c from idIndex" id=d07f43ab-47e4-474f-b4b6-f88ac33872fa name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:36:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:21.277374902Z" level=info msg="runSandbox: removing pod sandbox 05a13332569cee49afcf7fc109aec75a545a17ed525df3e97bf5198cd804f71c" id=d07f43ab-47e4-474f-b4b6-f88ac33872fa name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:36:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:21.277409285Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 05a13332569cee49afcf7fc109aec75a545a17ed525df3e97bf5198cd804f71c" id=d07f43ab-47e4-474f-b4b6-f88ac33872fa name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:36:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:21.277434096Z" level=info msg="runSandbox: unmounting shmPath for sandbox 05a13332569cee49afcf7fc109aec75a545a17ed525df3e97bf5198cd804f71c" id=d07f43ab-47e4-474f-b4b6-f88ac33872fa name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:36:21 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-05a13332569cee49afcf7fc109aec75a545a17ed525df3e97bf5198cd804f71c-userdata-shm.mount: Deactivated successfully.
Feb 23 18:36:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:21.284306455Z" level=info msg="runSandbox: removing pod sandbox from storage: 05a13332569cee49afcf7fc109aec75a545a17ed525df3e97bf5198cd804f71c" id=d07f43ab-47e4-474f-b4b6-f88ac33872fa name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:36:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:21.285811290Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=d07f43ab-47e4-474f-b4b6-f88ac33872fa name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:36:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:21.285838097Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=d07f43ab-47e4-474f-b4b6-f88ac33872fa name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:36:21 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:36:21.286030 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(05a13332569cee49afcf7fc109aec75a545a17ed525df3e97bf5198cd804f71c): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 18:36:21 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:36:21.286084 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(05a13332569cee49afcf7fc109aec75a545a17ed525df3e97bf5198cd804f71c): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4"
Feb 23 18:36:21 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:36:21.286111 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(05a13332569cee49afcf7fc109aec75a545a17ed525df3e97bf5198cd804f71c): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4"
Feb 23 18:36:21 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:36:21.286165 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(05a13332569cee49afcf7fc109aec75a545a17ed525df3e97bf5198cd804f71c): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f
Feb 23 18:36:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:36:24.217103 2199 scope.go:115] "RemoveContainer" containerID="4ee59bdc119c54bc103dc357cd329f52672ee2104c1542ccc63e94991eb93bcb"
Feb 23 18:36:24 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:36:24.217520 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 18:36:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:36:26.292149 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:36:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:36:26.292438 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:36:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:36:26.292621 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:36:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:36:26.292652 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 18:36:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:32.243703060Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(cd514f4e53e2f2ab81c6bedb4e153806b36180977ca2db71e660956f46cfe48e): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=7ac2cf7e-66c7-48a5-8fd8-5f836e746d68 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:36:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:32.243748952Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox cd514f4e53e2f2ab81c6bedb4e153806b36180977ca2db71e660956f46cfe48e" id=7ac2cf7e-66c7-48a5-8fd8-5f836e746d68 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:36:32 ip-10-0-136-68 systemd[1]: run-utsns-89850953\x2d8032\x2d43c9\x2da83c\x2df69c9a1857ae.mount: Deactivated successfully.
Feb 23 18:36:32 ip-10-0-136-68 systemd[1]: run-ipcns-89850953\x2d8032\x2d43c9\x2da83c\x2df69c9a1857ae.mount: Deactivated successfully.
Feb 23 18:36:32 ip-10-0-136-68 systemd[1]: run-netns-89850953\x2d8032\x2d43c9\x2da83c\x2df69c9a1857ae.mount: Deactivated successfully.
Feb 23 18:36:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:32.265330371Z" level=info msg="runSandbox: deleting pod ID cd514f4e53e2f2ab81c6bedb4e153806b36180977ca2db71e660956f46cfe48e from idIndex" id=7ac2cf7e-66c7-48a5-8fd8-5f836e746d68 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:36:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:32.265374500Z" level=info msg="runSandbox: removing pod sandbox cd514f4e53e2f2ab81c6bedb4e153806b36180977ca2db71e660956f46cfe48e" id=7ac2cf7e-66c7-48a5-8fd8-5f836e746d68 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:36:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:32.265404778Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox cd514f4e53e2f2ab81c6bedb4e153806b36180977ca2db71e660956f46cfe48e" id=7ac2cf7e-66c7-48a5-8fd8-5f836e746d68 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:36:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:32.265418522Z" level=info msg="runSandbox: unmounting shmPath for sandbox cd514f4e53e2f2ab81c6bedb4e153806b36180977ca2db71e660956f46cfe48e" id=7ac2cf7e-66c7-48a5-8fd8-5f836e746d68 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:36:32 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-cd514f4e53e2f2ab81c6bedb4e153806b36180977ca2db71e660956f46cfe48e-userdata-shm.mount: Deactivated successfully.
Feb 23 18:36:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:32.271325241Z" level=info msg="runSandbox: removing pod sandbox from storage: cd514f4e53e2f2ab81c6bedb4e153806b36180977ca2db71e660956f46cfe48e" id=7ac2cf7e-66c7-48a5-8fd8-5f836e746d68 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:36:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:32.272866835Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=7ac2cf7e-66c7-48a5-8fd8-5f836e746d68 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:36:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:32.272897040Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=7ac2cf7e-66c7-48a5-8fd8-5f836e746d68 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:36:32 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:36:32.273148 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(cd514f4e53e2f2ab81c6bedb4e153806b36180977ca2db71e660956f46cfe48e): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 18:36:32 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:36:32.273219 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(cd514f4e53e2f2ab81c6bedb4e153806b36180977ca2db71e660956f46cfe48e): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz"
Feb 23 18:36:32 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:36:32.273277 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(cd514f4e53e2f2ab81c6bedb4e153806b36180977ca2db71e660956f46cfe48e): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz"
Feb 23 18:36:32 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:36:32.273364 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(cd514f4e53e2f2ab81c6bedb4e153806b36180977ca2db71e660956f46cfe48e): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7
Feb 23 18:36:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:33.244010120Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(b5707933b514c3d1000045e308ef55a2f74f0be9c967775a5ed67f6519410efa): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=db43ee92-dcd6-4e44-a1b1-2ef1ece0581b name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:36:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:33.244065359Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox b5707933b514c3d1000045e308ef55a2f74f0be9c967775a5ed67f6519410efa" id=db43ee92-dcd6-4e44-a1b1-2ef1ece0581b name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:36:33 ip-10-0-136-68 systemd[1]: run-utsns-3c2a3ba7\x2d8d93\x2d4742\x2d98ac\x2dee6ff42f569c.mount: Deactivated successfully.
Feb 23 18:36:33 ip-10-0-136-68 systemd[1]: run-ipcns-3c2a3ba7\x2d8d93\x2d4742\x2d98ac\x2dee6ff42f569c.mount: Deactivated successfully.
Feb 23 18:36:33 ip-10-0-136-68 systemd[1]: run-netns-3c2a3ba7\x2d8d93\x2d4742\x2d98ac\x2dee6ff42f569c.mount: Deactivated successfully.
Feb 23 18:36:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:33.268331722Z" level=info msg="runSandbox: deleting pod ID b5707933b514c3d1000045e308ef55a2f74f0be9c967775a5ed67f6519410efa from idIndex" id=db43ee92-dcd6-4e44-a1b1-2ef1ece0581b name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:36:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:33.268378264Z" level=info msg="runSandbox: removing pod sandbox b5707933b514c3d1000045e308ef55a2f74f0be9c967775a5ed67f6519410efa" id=db43ee92-dcd6-4e44-a1b1-2ef1ece0581b name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:36:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:33.268426666Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox b5707933b514c3d1000045e308ef55a2f74f0be9c967775a5ed67f6519410efa" id=db43ee92-dcd6-4e44-a1b1-2ef1ece0581b name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:36:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:33.268440833Z" level=info msg="runSandbox: unmounting shmPath for sandbox b5707933b514c3d1000045e308ef55a2f74f0be9c967775a5ed67f6519410efa" id=db43ee92-dcd6-4e44-a1b1-2ef1ece0581b name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:36:33 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-b5707933b514c3d1000045e308ef55a2f74f0be9c967775a5ed67f6519410efa-userdata-shm.mount: Deactivated successfully.
Feb 23 18:36:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:33.280316304Z" level=info msg="runSandbox: removing pod sandbox from storage: b5707933b514c3d1000045e308ef55a2f74f0be9c967775a5ed67f6519410efa" id=db43ee92-dcd6-4e44-a1b1-2ef1ece0581b name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:36:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:33.281802175Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=db43ee92-dcd6-4e44-a1b1-2ef1ece0581b name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:36:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:33.281834185Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=db43ee92-dcd6-4e44-a1b1-2ef1ece0581b name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:36:33 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:36:33.282062 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(b5707933b514c3d1000045e308ef55a2f74f0be9c967775a5ed67f6519410efa): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 18:36:33 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:36:33.282123 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(b5707933b514c3d1000045e308ef55a2f74f0be9c967775a5ed67f6519410efa): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk"
Feb 23 18:36:33 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:36:33.282146 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(b5707933b514c3d1000045e308ef55a2f74f0be9c967775a5ed67f6519410efa): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk"
Feb 23 18:36:33 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:36:33.282208 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(b5707933b514c3d1000045e308ef55a2f74f0be9c967775a5ed67f6519410efa): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687
Feb 23 18:36:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:36:34.216452 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-657v4"
Feb 23 18:36:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:34.216831092Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=0dad9533-eae5-4aa4-9d86-15caa6654d53 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:36:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:34.216909374Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 18:36:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:34.222565950Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:21171c3b4fa99841020039a24dbff7f7c3a4d7102900ed8b8976c797c13d05f8 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/66063061-5bb7-4fb5-9412-8d862e0c5818 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:36:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:34.222588589Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:36:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:35.244065710Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(3cb1feb2edd0fed955df0a2e58076c73e27aeb06c8af230f0ab24128a38e2a45): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=2e9c854d-43a8-4043-bbd5-9508e44b8d43 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:36:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:35.244121437Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 3cb1feb2edd0fed955df0a2e58076c73e27aeb06c8af230f0ab24128a38e2a45" id=2e9c854d-43a8-4043-bbd5-9508e44b8d43 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:36:35 ip-10-0-136-68 systemd[1]: run-utsns-fe028ffd\x2d71f3\x2d43ce\x2daebd\x2dbe06eb4edca8.mount: Deactivated successfully.
Feb 23 18:36:35 ip-10-0-136-68 systemd[1]: run-ipcns-fe028ffd\x2d71f3\x2d43ce\x2daebd\x2dbe06eb4edca8.mount: Deactivated successfully.
Feb 23 18:36:35 ip-10-0-136-68 systemd[1]: run-netns-fe028ffd\x2d71f3\x2d43ce\x2daebd\x2dbe06eb4edca8.mount: Deactivated successfully.
Feb 23 18:36:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:35.281373907Z" level=info msg="runSandbox: deleting pod ID 3cb1feb2edd0fed955df0a2e58076c73e27aeb06c8af230f0ab24128a38e2a45 from idIndex" id=2e9c854d-43a8-4043-bbd5-9508e44b8d43 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:36:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:35.281436798Z" level=info msg="runSandbox: removing pod sandbox 3cb1feb2edd0fed955df0a2e58076c73e27aeb06c8af230f0ab24128a38e2a45" id=2e9c854d-43a8-4043-bbd5-9508e44b8d43 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:36:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:35.281467655Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 3cb1feb2edd0fed955df0a2e58076c73e27aeb06c8af230f0ab24128a38e2a45" id=2e9c854d-43a8-4043-bbd5-9508e44b8d43 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:36:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:35.281483138Z" level=info msg="runSandbox: unmounting shmPath for sandbox 3cb1feb2edd0fed955df0a2e58076c73e27aeb06c8af230f0ab24128a38e2a45" id=2e9c854d-43a8-4043-bbd5-9508e44b8d43 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:36:35 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-3cb1feb2edd0fed955df0a2e58076c73e27aeb06c8af230f0ab24128a38e2a45-userdata-shm.mount: Deactivated successfully.
Feb 23 18:36:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:35.287315667Z" level=info msg="runSandbox: removing pod sandbox from storage: 3cb1feb2edd0fed955df0a2e58076c73e27aeb06c8af230f0ab24128a38e2a45" id=2e9c854d-43a8-4043-bbd5-9508e44b8d43 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:36:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:35.288926514Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=2e9c854d-43a8-4043-bbd5-9508e44b8d43 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:36:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:35.288955792Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=2e9c854d-43a8-4043-bbd5-9508e44b8d43 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:36:35 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:36:35.289182 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(3cb1feb2edd0fed955df0a2e58076c73e27aeb06c8af230f0ab24128a38e2a45): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 18:36:35 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:36:35.289276 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(3cb1feb2edd0fed955df0a2e58076c73e27aeb06c8af230f0ab24128a38e2a45): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 18:36:35 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:36:35.289319 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(3cb1feb2edd0fed955df0a2e58076c73e27aeb06c8af230f0ab24128a38e2a45): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 18:36:35 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:36:35.289411 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(3cb1feb2edd0fed955df0a2e58076c73e27aeb06c8af230f0ab24128a38e2a45): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 18:36:36 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:36:36.216535 2199 scope.go:115] "RemoveContainer" containerID="4ee59bdc119c54bc103dc357cd329f52672ee2104c1542ccc63e94991eb93bcb" Feb 23 18:36:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:36:36.216990 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:36:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:41.247524182Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(9358c2868f9273c406751bcd1c7128ea1518e617ea7c0000a79f0d63e67f16e1): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=04b3464b-8b60-4aa1-aeb3-b9b112c7f5bb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:36:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:41.247579640Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9358c2868f9273c406751bcd1c7128ea1518e617ea7c0000a79f0d63e67f16e1" id=04b3464b-8b60-4aa1-aeb3-b9b112c7f5bb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:36:41 
ip-10-0-136-68 systemd[1]: run-utsns-2f2d41f8\x2d64a1\x2d4870\x2d935c\x2d4027f87e6797.mount: Deactivated successfully. Feb 23 18:36:41 ip-10-0-136-68 systemd[1]: run-ipcns-2f2d41f8\x2d64a1\x2d4870\x2d935c\x2d4027f87e6797.mount: Deactivated successfully. Feb 23 18:36:41 ip-10-0-136-68 systemd[1]: run-netns-2f2d41f8\x2d64a1\x2d4870\x2d935c\x2d4027f87e6797.mount: Deactivated successfully. Feb 23 18:36:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:41.264323379Z" level=info msg="runSandbox: deleting pod ID 9358c2868f9273c406751bcd1c7128ea1518e617ea7c0000a79f0d63e67f16e1 from idIndex" id=04b3464b-8b60-4aa1-aeb3-b9b112c7f5bb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:36:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:41.264366438Z" level=info msg="runSandbox: removing pod sandbox 9358c2868f9273c406751bcd1c7128ea1518e617ea7c0000a79f0d63e67f16e1" id=04b3464b-8b60-4aa1-aeb3-b9b112c7f5bb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:36:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:41.264406830Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9358c2868f9273c406751bcd1c7128ea1518e617ea7c0000a79f0d63e67f16e1" id=04b3464b-8b60-4aa1-aeb3-b9b112c7f5bb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:36:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:41.264429038Z" level=info msg="runSandbox: unmounting shmPath for sandbox 9358c2868f9273c406751bcd1c7128ea1518e617ea7c0000a79f0d63e67f16e1" id=04b3464b-8b60-4aa1-aeb3-b9b112c7f5bb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:36:41 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-9358c2868f9273c406751bcd1c7128ea1518e617ea7c0000a79f0d63e67f16e1-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:36:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:41.269297380Z" level=info msg="runSandbox: removing pod sandbox from storage: 9358c2868f9273c406751bcd1c7128ea1518e617ea7c0000a79f0d63e67f16e1" id=04b3464b-8b60-4aa1-aeb3-b9b112c7f5bb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:36:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:41.270824808Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=04b3464b-8b60-4aa1-aeb3-b9b112c7f5bb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:36:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:41.270853821Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=04b3464b-8b60-4aa1-aeb3-b9b112c7f5bb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:36:41 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:36:41.271066 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(9358c2868f9273c406751bcd1c7128ea1518e617ea7c0000a79f0d63e67f16e1): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:36:41 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:36:41.271137 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(9358c2868f9273c406751bcd1c7128ea1518e617ea7c0000a79f0d63e67f16e1): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:36:41 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:36:41.271176 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(9358c2868f9273c406751bcd1c7128ea1518e617ea7c0000a79f0d63e67f16e1): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:36:41 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:36:41.271287 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(9358c2868f9273c406751bcd1c7128ea1518e617ea7c0000a79f0d63e67f16e1): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 18:36:45 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:36:45.216373 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:36:45 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:36:45.216399 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:36:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:45.216770819Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=d71aa69b-f0e8-4b29-9993-e649fde2bd74 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:36:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:45.216830146Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:36:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:45.216770199Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=ba5d9ad9-a75b-4fc2-b30e-cbdabb69dbf9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:36:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:45.216945895Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:36:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:45.223851582Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:8cf0f83b1013a47adee6a045f53e345d4c2543bf15bd1fc03615d8fb5f7a6759 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/59829183-342d-4d74-bd70-b62acb0f74d7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:36:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:45.223878337Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:36:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:45.224043884Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:bc6c7b3bc49aa648361565bc4beb7f88e839b1b5da6046cedf387852ab5d6608 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/c682d241-cc44-44a1-84e2-b9fa128be414 Networks:[] 
RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:36:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:45.224076247Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:36:47 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:36:47.217196 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:36:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:47.217644915Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=9a0d0ea3-043f-4952-b196-4740af5d0b10 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:36:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:47.217888399Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:36:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:47.223196397Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:acf0bf8d9148e5d8663db5ad6bea3684780a06edc0703e716736384bc2a54fe8 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/bbb0d878-0444-45b8-bbc8-54826a581737 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:36:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:47.223222749Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:36:48 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:36:48.217555 2199 scope.go:115] "RemoveContainer" containerID="4ee59bdc119c54bc103dc357cd329f52672ee2104c1542ccc63e94991eb93bcb" Feb 23 18:36:48 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:36:48.218232 2199 pod_workers.go:965] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:36:56 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:36:56.217117 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:36:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:56.217566824Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=7db9e3c1-2307-4cb4-8ee8-985aae32c80f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:36:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:56.217633600Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:36:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:56.223393960Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:3d545cceced5961ba70dd89be8f19036d7294c1b85de908000aa5f833d9432ad UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/8c8d0718-4bdb-47fa-8ee2-9f9106668410 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:36:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:36:56.223427844Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:36:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:36:56.292278 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:36:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:36:56.292482 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:36:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:36:56.292698 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:36:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:36:56.292726 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:37:03 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:37:03.217157 2199 scope.go:115] "RemoveContainer" 
containerID="4ee59bdc119c54bc103dc357cd329f52672ee2104c1542ccc63e94991eb93bcb" Feb 23 18:37:03 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:37:03.217448 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:37:03 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:37:03.217757 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:37:03 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:37:03.218044 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:37:03 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:37:03.218082 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such 
file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:37:03 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:37:03.218167 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:37:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:37:14.216602 2199 scope.go:115] "RemoveContainer" containerID="4ee59bdc119c54bc103dc357cd329f52672ee2104c1542ccc63e94991eb93bcb" Feb 23 18:37:14 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:37:14.217006 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:37:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:37:19.233604633Z" level=info msg="NetworkStart: stopping network for sandbox 21171c3b4fa99841020039a24dbff7f7c3a4d7102900ed8b8976c797c13d05f8" id=0dad9533-eae5-4aa4-9d86-15caa6654d53 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:37:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:37:19.233736103Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:21171c3b4fa99841020039a24dbff7f7c3a4d7102900ed8b8976c797c13d05f8 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/66063061-5bb7-4fb5-9412-8d862e0c5818 Networks:[] 
RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:37:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:37:19.233770690Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:37:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:37:19.233781234Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:37:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:37:19.233790254Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:37:25 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:37:25.216476 2199 scope.go:115] "RemoveContainer" containerID="4ee59bdc119c54bc103dc357cd329f52672ee2104c1542ccc63e94991eb93bcb" Feb 23 18:37:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:37:25.217219242Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=ab8e8c6c-1c46-462e-8ea6-78b787a9075d name=/runtime.v1.ImageService/ImageStatus Feb 23 18:37:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:37:25.217480079Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=ab8e8c6c-1c46-462e-8ea6-78b787a9075d name=/runtime.v1.ImageService/ImageStatus Feb 23 18:37:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:37:25.218046982Z" level=info msg="Checking image status: 
registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=23c4790c-5d76-4b3c-9bf2-358ced8ca7a1 name=/runtime.v1.ImageService/ImageStatus Feb 23 18:37:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:37:25.218172186Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=23c4790c-5d76-4b3c-9bf2-358ced8ca7a1 name=/runtime.v1.ImageService/ImageStatus Feb 23 18:37:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:37:25.218814711Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=7acd3dbc-2a67-45a5-9f89-236aa743e410 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 18:37:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:37:25.218915491Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:37:25 ip-10-0-136-68 systemd[1]: Started crio-conmon-7954533880ae48edf98c84549b1378a8d2d162cc1235b2df68a262917201e33e.scope. Feb 23 18:37:25 ip-10-0-136-68 systemd[1]: Started libcontainer container 7954533880ae48edf98c84549b1378a8d2d162cc1235b2df68a262917201e33e. Feb 23 18:37:25 ip-10-0-136-68 conmon[9989]: conmon 7954533880ae48edf98c : Failed to write to cgroup.event_control Operation not supported Feb 23 18:37:25 ip-10-0-136-68 systemd[1]: crio-conmon-7954533880ae48edf98c84549b1378a8d2d162cc1235b2df68a262917201e33e.scope: Deactivated successfully. 
Feb 23 18:37:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:37:25.367659450Z" level=info msg="Created container 7954533880ae48edf98c84549b1378a8d2d162cc1235b2df68a262917201e33e: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=7acd3dbc-2a67-45a5-9f89-236aa743e410 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 18:37:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:37:25.368028388Z" level=info msg="Starting container: 7954533880ae48edf98c84549b1378a8d2d162cc1235b2df68a262917201e33e" id=d2d6c067-6302-473a-a940-cc738c63355e name=/runtime.v1.RuntimeService/StartContainer Feb 23 18:37:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:37:25.374849153Z" level=info msg="Started container" PID=10001 containerID=7954533880ae48edf98c84549b1378a8d2d162cc1235b2df68a262917201e33e description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver id=d2d6c067-6302-473a-a940-cc738c63355e name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4ac357af36e7dd4792f93249425e93fd84aed45840e9fbb3d0ae6c0af964798 Feb 23 18:37:25 ip-10-0-136-68 systemd[1]: crio-7954533880ae48edf98c84549b1378a8d2d162cc1235b2df68a262917201e33e.scope: Deactivated successfully. 
Feb 23 18:37:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:37:26.292459 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:37:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:37:26.292865 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:37:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:37:26.293089 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:37:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:37:26.293118 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" 
pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:37:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:37:29.264955197Z" level=warning msg="Failed to find container exit file for 4ee59bdc119c54bc103dc357cd329f52672ee2104c1542ccc63e94991eb93bcb: timed out waiting for the condition" id=7ce8e124-7198-4aed-ab1f-ff24c191062d name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:37:29 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:37:29.265937 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerStarted Data:7954533880ae48edf98c84549b1378a8d2d162cc1235b2df68a262917201e33e} Feb 23 18:37:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:37:30.236705380Z" level=info msg="NetworkStart: stopping network for sandbox 8cf0f83b1013a47adee6a045f53e345d4c2543bf15bd1fc03615d8fb5f7a6759" id=ba5d9ad9-a75b-4fc2-b30e-cbdabb69dbf9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:37:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:37:30.236825094Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:8cf0f83b1013a47adee6a045f53e345d4c2543bf15bd1fc03615d8fb5f7a6759 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/59829183-342d-4d74-bd70-b62acb0f74d7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:37:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:37:30.236861577Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:37:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:37:30.236871507Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:37:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:37:30.236883964Z" level=info msg="Deleting pod 
openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:37:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:37:30.237292227Z" level=info msg="NetworkStart: stopping network for sandbox bc6c7b3bc49aa648361565bc4beb7f88e839b1b5da6046cedf387852ab5d6608" id=d71aa69b-f0e8-4b29-9993-e649fde2bd74 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:37:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:37:30.237381621Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:bc6c7b3bc49aa648361565bc4beb7f88e839b1b5da6046cedf387852ab5d6608 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/c682d241-cc44-44a1-84e2-b9fa128be414 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:37:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:37:30.237406362Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:37:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:37:30.237413146Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:37:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:37:30.237419989Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:37:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:37:32.234780558Z" level=info msg="NetworkStart: stopping network for sandbox acf0bf8d9148e5d8663db5ad6bea3684780a06edc0703e716736384bc2a54fe8" id=9a0d0ea3-043f-4952-b196-4740af5d0b10 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:37:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:37:32.234917763Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:acf0bf8d9148e5d8663db5ad6bea3684780a06edc0703e716736384bc2a54fe8 
UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/bbb0d878-0444-45b8-bbc8-54826a581737 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:37:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:37:32.234959084Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:37:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:37:32.234974406Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:37:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:37:32.234983689Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:37:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:37:41.234834660Z" level=info msg="NetworkStart: stopping network for sandbox 3d545cceced5961ba70dd89be8f19036d7294c1b85de908000aa5f833d9432ad" id=7db9e3c1-2307-4cb4-8ee8-985aae32c80f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:37:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:37:41.234977285Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:3d545cceced5961ba70dd89be8f19036d7294c1b85de908000aa5f833d9432ad UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/8c8d0718-4bdb-47fa-8ee2-9f9106668410 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:37:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:37:41.235019286Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:37:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:37:41.235030610Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:37:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:37:41.235042666Z" level=info 
msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:37:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:37:44.872072 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:37:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:37:44.872129 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 18:37:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:37:54.873132 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:37:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:37:54.873199 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 18:37:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:37:56.291702 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or 
directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:37:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:37:56.291929 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:37:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:37:56.292179 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:37:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:37:56.292212 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:38:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:04.244062902Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox 
k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(21171c3b4fa99841020039a24dbff7f7c3a4d7102900ed8b8976c797c13d05f8): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=0dad9533-eae5-4aa4-9d86-15caa6654d53 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:38:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:04.244111422Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 21171c3b4fa99841020039a24dbff7f7c3a4d7102900ed8b8976c797c13d05f8" id=0dad9533-eae5-4aa4-9d86-15caa6654d53 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:38:04 ip-10-0-136-68 systemd[1]: run-utsns-66063061\x2d5bb7\x2d4fb5\x2d9412\x2d8d862e0c5818.mount: Deactivated successfully. Feb 23 18:38:04 ip-10-0-136-68 systemd[1]: run-ipcns-66063061\x2d5bb7\x2d4fb5\x2d9412\x2d8d862e0c5818.mount: Deactivated successfully. Feb 23 18:38:04 ip-10-0-136-68 systemd[1]: run-netns-66063061\x2d5bb7\x2d4fb5\x2d9412\x2d8d862e0c5818.mount: Deactivated successfully. 
Feb 23 18:38:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:04.265315335Z" level=info msg="runSandbox: deleting pod ID 21171c3b4fa99841020039a24dbff7f7c3a4d7102900ed8b8976c797c13d05f8 from idIndex" id=0dad9533-eae5-4aa4-9d86-15caa6654d53 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:38:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:04.265347244Z" level=info msg="runSandbox: removing pod sandbox 21171c3b4fa99841020039a24dbff7f7c3a4d7102900ed8b8976c797c13d05f8" id=0dad9533-eae5-4aa4-9d86-15caa6654d53 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:38:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:04.265379340Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 21171c3b4fa99841020039a24dbff7f7c3a4d7102900ed8b8976c797c13d05f8" id=0dad9533-eae5-4aa4-9d86-15caa6654d53 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:38:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:04.265395701Z" level=info msg="runSandbox: unmounting shmPath for sandbox 21171c3b4fa99841020039a24dbff7f7c3a4d7102900ed8b8976c797c13d05f8" id=0dad9533-eae5-4aa4-9d86-15caa6654d53 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:38:04 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-21171c3b4fa99841020039a24dbff7f7c3a4d7102900ed8b8976c797c13d05f8-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:38:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:04.271303358Z" level=info msg="runSandbox: removing pod sandbox from storage: 21171c3b4fa99841020039a24dbff7f7c3a4d7102900ed8b8976c797c13d05f8" id=0dad9533-eae5-4aa4-9d86-15caa6654d53 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:38:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:04.272921390Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=0dad9533-eae5-4aa4-9d86-15caa6654d53 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:38:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:04.272951020Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=0dad9533-eae5-4aa4-9d86-15caa6654d53 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:38:04 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:38:04.273166 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(21171c3b4fa99841020039a24dbff7f7c3a4d7102900ed8b8976c797c13d05f8): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:38:04 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:38:04.273226 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(21171c3b4fa99841020039a24dbff7f7c3a4d7102900ed8b8976c797c13d05f8): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:38:04 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:38:04.273271 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(21171c3b4fa99841020039a24dbff7f7c3a4d7102900ed8b8976c797c13d05f8): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:38:04 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:38:04.273331 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(21171c3b4fa99841020039a24dbff7f7c3a4d7102900ed8b8976c797c13d05f8): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 18:38:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:38:04.872054 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:38:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:38:04.872117 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 18:38:07 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:38:07.217526 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:38:07 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:38:07.217830 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:38:07 ip-10-0-136-68 
kubenswrapper[2199]: E0223 18:38:07.218108 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:38:07 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:38:07.218135 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:38:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:38:14.872597 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:38:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:38:14.872654 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 18:38:15 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:38:15.217162 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 18:38:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:15.217560043Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=1adc5e5b-86bc-4fac-b5d6-99d37bbc15a0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:38:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:15.217635664Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:38:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:15.223385646Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:a1fb810bca018d148491f668f1522c3a63a929c6fbb5d65f6c15d53cc7b2e697 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/2a0f7eca-2b00-4b80-8ed4-8bb7add4260c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:38:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:15.223411508Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:38:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:15.246204396Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(8cf0f83b1013a47adee6a045f53e345d4c2543bf15bd1fc03615d8fb5f7a6759): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=ba5d9ad9-a75b-4fc2-b30e-cbdabb69dbf9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:38:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:15.246267998Z" level=info 
msg="runSandbox: cleaning up namespaces after failing to run sandbox 8cf0f83b1013a47adee6a045f53e345d4c2543bf15bd1fc03615d8fb5f7a6759" id=ba5d9ad9-a75b-4fc2-b30e-cbdabb69dbf9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:38:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:15.247299871Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(bc6c7b3bc49aa648361565bc4beb7f88e839b1b5da6046cedf387852ab5d6608): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d71aa69b-f0e8-4b29-9993-e649fde2bd74 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:38:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:15.247333803Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox bc6c7b3bc49aa648361565bc4beb7f88e839b1b5da6046cedf387852ab5d6608" id=d71aa69b-f0e8-4b29-9993-e649fde2bd74 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:38:15 ip-10-0-136-68 systemd[1]: run-utsns-59829183\x2d342d\x2d4d74\x2dbd70\x2db62acb0f74d7.mount: Deactivated successfully. Feb 23 18:38:15 ip-10-0-136-68 systemd[1]: run-utsns-c682d241\x2dcc44\x2d44a1\x2d84e2\x2db9fa128be414.mount: Deactivated successfully. Feb 23 18:38:15 ip-10-0-136-68 systemd[1]: run-ipcns-59829183\x2d342d\x2d4d74\x2dbd70\x2db62acb0f74d7.mount: Deactivated successfully. 
Feb 23 18:38:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:15.272329382Z" level=info msg="runSandbox: deleting pod ID bc6c7b3bc49aa648361565bc4beb7f88e839b1b5da6046cedf387852ab5d6608 from idIndex" id=d71aa69b-f0e8-4b29-9993-e649fde2bd74 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:38:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:15.272369435Z" level=info msg="runSandbox: removing pod sandbox bc6c7b3bc49aa648361565bc4beb7f88e839b1b5da6046cedf387852ab5d6608" id=d71aa69b-f0e8-4b29-9993-e649fde2bd74 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:38:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:15.272408257Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox bc6c7b3bc49aa648361565bc4beb7f88e839b1b5da6046cedf387852ab5d6608" id=d71aa69b-f0e8-4b29-9993-e649fde2bd74 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:38:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:15.272429508Z" level=info msg="runSandbox: unmounting shmPath for sandbox bc6c7b3bc49aa648361565bc4beb7f88e839b1b5da6046cedf387852ab5d6608" id=d71aa69b-f0e8-4b29-9993-e649fde2bd74 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:38:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:15.272343059Z" level=info msg="runSandbox: deleting pod ID 8cf0f83b1013a47adee6a045f53e345d4c2543bf15bd1fc03615d8fb5f7a6759 from idIndex" id=ba5d9ad9-a75b-4fc2-b30e-cbdabb69dbf9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:38:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:15.272485699Z" level=info msg="runSandbox: removing pod sandbox 8cf0f83b1013a47adee6a045f53e345d4c2543bf15bd1fc03615d8fb5f7a6759" id=ba5d9ad9-a75b-4fc2-b30e-cbdabb69dbf9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:38:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:15.272513700Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 8cf0f83b1013a47adee6a045f53e345d4c2543bf15bd1fc03615d8fb5f7a6759" 
id=ba5d9ad9-a75b-4fc2-b30e-cbdabb69dbf9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:38:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:15.272530075Z" level=info msg="runSandbox: unmounting shmPath for sandbox 8cf0f83b1013a47adee6a045f53e345d4c2543bf15bd1fc03615d8fb5f7a6759" id=ba5d9ad9-a75b-4fc2-b30e-cbdabb69dbf9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:38:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:15.276304038Z" level=info msg="runSandbox: removing pod sandbox from storage: 8cf0f83b1013a47adee6a045f53e345d4c2543bf15bd1fc03615d8fb5f7a6759" id=ba5d9ad9-a75b-4fc2-b30e-cbdabb69dbf9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:38:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:15.277307552Z" level=info msg="runSandbox: removing pod sandbox from storage: bc6c7b3bc49aa648361565bc4beb7f88e839b1b5da6046cedf387852ab5d6608" id=d71aa69b-f0e8-4b29-9993-e649fde2bd74 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:38:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:15.277821875Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=ba5d9ad9-a75b-4fc2-b30e-cbdabb69dbf9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:38:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:15.277846450Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=ba5d9ad9-a75b-4fc2-b30e-cbdabb69dbf9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:38:15 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:38:15.278046 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(8cf0f83b1013a47adee6a045f53e345d4c2543bf15bd1fc03615d8fb5f7a6759): error adding pod 
openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Feb 23 18:38:15 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:38:15.278201 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(8cf0f83b1013a47adee6a045f53e345d4c2543bf15bd1fc03615d8fb5f7a6759): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:38:15 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:38:15.278272 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(8cf0f83b1013a47adee6a045f53e345d4c2543bf15bd1fc03615d8fb5f7a6759): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:38:15 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:38:15.278352 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(8cf0f83b1013a47adee6a045f53e345d4c2543bf15bd1fc03615d8fb5f7a6759): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 18:38:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:15.279370904Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=d71aa69b-f0e8-4b29-9993-e649fde2bd74 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:38:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:15.279396404Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=d71aa69b-f0e8-4b29-9993-e649fde2bd74 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:38:15 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:38:15.279549 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(bc6c7b3bc49aa648361565bc4beb7f88e839b1b5da6046cedf387852ab5d6608): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:38:15 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:38:15.279606 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(bc6c7b3bc49aa648361565bc4beb7f88e839b1b5da6046cedf387852ab5d6608): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:38:15 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:38:15.279642 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(bc6c7b3bc49aa648361565bc4beb7f88e839b1b5da6046cedf387852ab5d6608): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:38:15 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:38:15.279721 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(bc6c7b3bc49aa648361565bc4beb7f88e839b1b5da6046cedf387852ab5d6608): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 18:38:16 ip-10-0-136-68 systemd[1]: run-netns-c682d241\x2dcc44\x2d44a1\x2d84e2\x2db9fa128be414.mount: Deactivated successfully. Feb 23 18:38:16 ip-10-0-136-68 systemd[1]: run-ipcns-c682d241\x2dcc44\x2d44a1\x2d84e2\x2db9fa128be414.mount: Deactivated successfully. Feb 23 18:38:16 ip-10-0-136-68 systemd[1]: run-netns-59829183\x2d342d\x2d4d74\x2dbd70\x2db62acb0f74d7.mount: Deactivated successfully. Feb 23 18:38:16 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-bc6c7b3bc49aa648361565bc4beb7f88e839b1b5da6046cedf387852ab5d6608-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:38:16 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-8cf0f83b1013a47adee6a045f53e345d4c2543bf15bd1fc03615d8fb5f7a6759-userdata-shm.mount: Deactivated successfully. Feb 23 18:38:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:17.245084178Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(acf0bf8d9148e5d8663db5ad6bea3684780a06edc0703e716736384bc2a54fe8): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=9a0d0ea3-043f-4952-b196-4740af5d0b10 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:38:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:17.245137255Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox acf0bf8d9148e5d8663db5ad6bea3684780a06edc0703e716736384bc2a54fe8" id=9a0d0ea3-043f-4952-b196-4740af5d0b10 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:38:17 ip-10-0-136-68 systemd[1]: run-utsns-bbb0d878\x2d0444\x2d45b8\x2dbbc8\x2d54826a581737.mount: Deactivated successfully. Feb 23 18:38:17 ip-10-0-136-68 systemd[1]: run-ipcns-bbb0d878\x2d0444\x2d45b8\x2dbbc8\x2d54826a581737.mount: Deactivated successfully. Feb 23 18:38:17 ip-10-0-136-68 systemd[1]: run-netns-bbb0d878\x2d0444\x2d45b8\x2dbbc8\x2d54826a581737.mount: Deactivated successfully. 
Feb 23 18:38:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:17.274334003Z" level=info msg="runSandbox: deleting pod ID acf0bf8d9148e5d8663db5ad6bea3684780a06edc0703e716736384bc2a54fe8 from idIndex" id=9a0d0ea3-043f-4952-b196-4740af5d0b10 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:38:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:17.274376125Z" level=info msg="runSandbox: removing pod sandbox acf0bf8d9148e5d8663db5ad6bea3684780a06edc0703e716736384bc2a54fe8" id=9a0d0ea3-043f-4952-b196-4740af5d0b10 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:38:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:17.274425082Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox acf0bf8d9148e5d8663db5ad6bea3684780a06edc0703e716736384bc2a54fe8" id=9a0d0ea3-043f-4952-b196-4740af5d0b10 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:38:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:17.274439634Z" level=info msg="runSandbox: unmounting shmPath for sandbox acf0bf8d9148e5d8663db5ad6bea3684780a06edc0703e716736384bc2a54fe8" id=9a0d0ea3-043f-4952-b196-4740af5d0b10 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:38:17 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-acf0bf8d9148e5d8663db5ad6bea3684780a06edc0703e716736384bc2a54fe8-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:38:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:17.288321940Z" level=info msg="runSandbox: removing pod sandbox from storage: acf0bf8d9148e5d8663db5ad6bea3684780a06edc0703e716736384bc2a54fe8" id=9a0d0ea3-043f-4952-b196-4740af5d0b10 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:38:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:17.289845359Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=9a0d0ea3-043f-4952-b196-4740af5d0b10 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:38:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:17.289875894Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=9a0d0ea3-043f-4952-b196-4740af5d0b10 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:38:17 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:38:17.290094 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(acf0bf8d9148e5d8663db5ad6bea3684780a06edc0703e716736384bc2a54fe8): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:38:17 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:38:17.290151 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(acf0bf8d9148e5d8663db5ad6bea3684780a06edc0703e716736384bc2a54fe8): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:38:17 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:38:17.290177 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(acf0bf8d9148e5d8663db5ad6bea3684780a06edc0703e716736384bc2a54fe8): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:38:17 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:38:17.290318 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(acf0bf8d9148e5d8663db5ad6bea3684780a06edc0703e716736384bc2a54fe8): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 18:38:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:38:24.872820 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:38:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:38:24.872887 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 18:38:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:38:24.872916 2199 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 18:38:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:38:24.873404 2199 kuberuntime_manager.go:659] "Message for Container of pod" containerName="csi-driver" containerStatusID={Type:cri-o ID:7954533880ae48edf98c84549b1378a8d2d162cc1235b2df68a262917201e33e} pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" containerMessage="Container csi-driver failed liveness probe, will be restarted" Feb 23 18:38:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:38:24.873557 2199 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" containerID="cri-o://7954533880ae48edf98c84549b1378a8d2d162cc1235b2df68a262917201e33e" gracePeriod=30 Feb 23 18:38:24 ip-10-0-136-68 crio[2158]: 
time="2023-02-23 18:38:24.873797269Z" level=info msg="Stopping container: 7954533880ae48edf98c84549b1378a8d2d162cc1235b2df68a262917201e33e (timeout: 30s)" id=f6d48f6a-25a9-4d4a-a7f1-9a44f3291ef6 name=/runtime.v1.RuntimeService/StopContainer Feb 23 18:38:26 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:38:26.216664 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:38:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:26.217011388Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=0be6d866-fef6-4584-8e58-fad82807bf18 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:38:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:26.217070302Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:38:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:26.222845238Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:a403cd17e089fe6de0c07946010860b8d241db1095e5908c4f525e8679a62eb3 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/2a68b266-31a7-4026-a2f5-8895dd82aa79 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:38:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:26.222880000Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:38:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:26.245068602Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(3d545cceced5961ba70dd89be8f19036d7294c1b85de908000aa5f833d9432ad): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network 
\"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=7db9e3c1-2307-4cb4-8ee8-985aae32c80f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:38:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:26.245099052Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 3d545cceced5961ba70dd89be8f19036d7294c1b85de908000aa5f833d9432ad" id=7db9e3c1-2307-4cb4-8ee8-985aae32c80f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:38:26 ip-10-0-136-68 systemd[1]: run-utsns-8c8d0718\x2d4bdb\x2d47fa\x2d8ee2\x2d9f9106668410.mount: Deactivated successfully. Feb 23 18:38:26 ip-10-0-136-68 systemd[1]: run-ipcns-8c8d0718\x2d4bdb\x2d47fa\x2d8ee2\x2d9f9106668410.mount: Deactivated successfully. Feb 23 18:38:26 ip-10-0-136-68 systemd[1]: run-netns-8c8d0718\x2d4bdb\x2d47fa\x2d8ee2\x2d9f9106668410.mount: Deactivated successfully. 
Feb 23 18:38:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:26.277340422Z" level=info msg="runSandbox: deleting pod ID 3d545cceced5961ba70dd89be8f19036d7294c1b85de908000aa5f833d9432ad from idIndex" id=7db9e3c1-2307-4cb4-8ee8-985aae32c80f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:38:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:26.277369593Z" level=info msg="runSandbox: removing pod sandbox 3d545cceced5961ba70dd89be8f19036d7294c1b85de908000aa5f833d9432ad" id=7db9e3c1-2307-4cb4-8ee8-985aae32c80f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:38:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:26.277393828Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 3d545cceced5961ba70dd89be8f19036d7294c1b85de908000aa5f833d9432ad" id=7db9e3c1-2307-4cb4-8ee8-985aae32c80f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:38:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:26.277412017Z" level=info msg="runSandbox: unmounting shmPath for sandbox 3d545cceced5961ba70dd89be8f19036d7294c1b85de908000aa5f833d9432ad" id=7db9e3c1-2307-4cb4-8ee8-985aae32c80f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:38:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:26.281303669Z" level=info msg="runSandbox: removing pod sandbox from storage: 3d545cceced5961ba70dd89be8f19036d7294c1b85de908000aa5f833d9432ad" id=7db9e3c1-2307-4cb4-8ee8-985aae32c80f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:38:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:26.282650991Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=7db9e3c1-2307-4cb4-8ee8-985aae32c80f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:38:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:26.282677760Z" level=info msg="runSandbox: releasing pod sandbox name: 
k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=7db9e3c1-2307-4cb4-8ee8-985aae32c80f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:38:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:38:26.282839 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(3d545cceced5961ba70dd89be8f19036d7294c1b85de908000aa5f833d9432ad): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Feb 23 18:38:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:38:26.282888 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(3d545cceced5961ba70dd89be8f19036d7294c1b85de908000aa5f833d9432ad): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:38:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:38:26.282912 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(3d545cceced5961ba70dd89be8f19036d7294c1b85de908000aa5f833d9432ad): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:38:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:38:26.282969 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(3d545cceced5961ba70dd89be8f19036d7294c1b85de908000aa5f833d9432ad): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: 
[openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 18:38:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:38:26.291661 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:38:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:38:26.291914 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:38:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:38:26.292144 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f 
/etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:38:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:38:26.292171 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:38:27 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:38:27.216925 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:38:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:27.217365084Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=a68cb394-a200-4480-9674-f7734972de9c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:38:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:27.217431298Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:38:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:27.222445798Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:aa35e41eb2133e797b8dd026304f7e2f5edba84644f35836c2d631d005bd1646 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/5d5e1f52-b1ff-4c7a-8749-5e4b4dd1b23f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:38:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:27.222472962Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:38:27 ip-10-0-136-68 systemd[1]: 
run-containers-storage-overlay\x2dcontainers-3d545cceced5961ba70dd89be8f19036d7294c1b85de908000aa5f833d9432ad-userdata-shm.mount: Deactivated successfully. Feb 23 18:38:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:28.635117123Z" level=warning msg="Failed to find container exit file for 7954533880ae48edf98c84549b1378a8d2d162cc1235b2df68a262917201e33e: timed out waiting for the condition" id=f6d48f6a-25a9-4d4a-a7f1-9a44f3291ef6 name=/runtime.v1.RuntimeService/StopContainer Feb 23 18:38:28 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-181fd866e09c515c05249c8f9fb235e0a93719f23c90ac2e5ada742af5391cf5-merged.mount: Deactivated successfully. Feb 23 18:38:32 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:38:32.216592 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:38:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:32.216995590Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=a6f323e9-68bd-4b23-ac4f-31a330b63a05 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:38:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:32.217060542Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:38:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:32.222909985Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:de7e55945942e6148ae24f167a52c8e28ffe6f000f654279e1be73cec50be9ec UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/ce04758e-6a14-42e6-b105-7c52059078c5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:38:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:32.222933993Z" level=info msg="Adding pod 
openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:38:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:32.419951109Z" level=warning msg="Failed to find container exit file for 7954533880ae48edf98c84549b1378a8d2d162cc1235b2df68a262917201e33e: timed out waiting for the condition" id=f6d48f6a-25a9-4d4a-a7f1-9a44f3291ef6 name=/runtime.v1.RuntimeService/StopContainer Feb 23 18:38:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:32.421884166Z" level=info msg="Stopped container 7954533880ae48edf98c84549b1378a8d2d162cc1235b2df68a262917201e33e: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=f6d48f6a-25a9-4d4a-a7f1-9a44f3291ef6 name=/runtime.v1.RuntimeService/StopContainer Feb 23 18:38:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:32.422523530Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=8fea61ba-c32c-47f5-9b60-571144dd0834 name=/runtime.v1.ImageService/ImageStatus Feb 23 18:38:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:32.422708392Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=8fea61ba-c32c-47f5-9b60-571144dd0834 name=/runtime.v1.ImageService/ImageStatus Feb 23 18:38:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:32.423319082Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" 
id=aadb2381-4754-4dc9-85ec-43096753c492 name=/runtime.v1.ImageService/ImageStatus Feb 23 18:38:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:32.423473227Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=aadb2381-4754-4dc9-85ec-43096753c492 name=/runtime.v1.ImageService/ImageStatus Feb 23 18:38:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:32.424100172Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=372c10cf-a0b0-43d8-903c-5c6414fea125 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 18:38:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:32.424210265Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:38:32 ip-10-0-136-68 systemd[1]: Started crio-conmon-5977cd64f014617e9eba0befb4f52e9466055d78e4fd24c64d02b28776b270fe.scope. Feb 23 18:38:32 ip-10-0-136-68 systemd[1]: Started libcontainer container 5977cd64f014617e9eba0befb4f52e9466055d78e4fd24c64d02b28776b270fe. Feb 23 18:38:32 ip-10-0-136-68 conmon[10164]: conmon 5977cd64f014617e9eba : Failed to write to cgroup.event_control Operation not supported Feb 23 18:38:32 ip-10-0-136-68 systemd[1]: crio-conmon-5977cd64f014617e9eba0befb4f52e9466055d78e4fd24c64d02b28776b270fe.scope: Deactivated successfully. 
Feb 23 18:38:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:32.546939821Z" level=info msg="Created container 5977cd64f014617e9eba0befb4f52e9466055d78e4fd24c64d02b28776b270fe: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=372c10cf-a0b0-43d8-903c-5c6414fea125 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 18:38:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:32.547589002Z" level=info msg="Starting container: 5977cd64f014617e9eba0befb4f52e9466055d78e4fd24c64d02b28776b270fe" id=1a0a5760-b1ee-4ec7-b5e9-9fb4315264e3 name=/runtime.v1.RuntimeService/StartContainer Feb 23 18:38:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:32.554771072Z" level=info msg="Started container" PID=10176 containerID=5977cd64f014617e9eba0befb4f52e9466055d78e4fd24c64d02b28776b270fe description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver id=1a0a5760-b1ee-4ec7-b5e9-9fb4315264e3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4ac357af36e7dd4792f93249425e93fd84aed45840e9fbb3d0ae6c0af964798 Feb 23 18:38:32 ip-10-0-136-68 systemd[1]: crio-5977cd64f014617e9eba0befb4f52e9466055d78e4fd24c64d02b28776b270fe.scope: Deactivated successfully. 
Feb 23 18:38:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:33.117015256Z" level=warning msg="Failed to find container exit file for 7954533880ae48edf98c84549b1378a8d2d162cc1235b2df68a262917201e33e: timed out waiting for the condition" id=37fc9b03-e8c2-40ea-a469-a926885ec1a1 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:38:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:36.867111610Z" level=warning msg="Failed to find container exit file for 4ee59bdc119c54bc103dc357cd329f52672ee2104c1542ccc63e94991eb93bcb: timed out waiting for the condition" id=7bb7234d-b134-4c1c-a3e9-9ddca4398b5a name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:38:36 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:38:36.868072 2199 generic.go:332] "Generic (PLEG): container finished" podID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerID="7954533880ae48edf98c84549b1378a8d2d162cc1235b2df68a262917201e33e" exitCode=-1 Feb 23 18:38:36 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:38:36.868121 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerDied Data:7954533880ae48edf98c84549b1378a8d2d162cc1235b2df68a262917201e33e} Feb 23 18:38:36 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:38:36.868168 2199 scope.go:115] "RemoveContainer" containerID="4ee59bdc119c54bc103dc357cd329f52672ee2104c1542ccc63e94991eb93bcb" Feb 23 18:38:39 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:38:39.216804 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:38:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:39.217209062Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=8a514375-dfa1-4d39-824b-bc87e06e6d4f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:38:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:39.217300252Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:38:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:39.222608574Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:fba524c47b0fa0d52c9a1456f6e55483e9e35d8fbbaf28ff62092b3391acc985 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/52c325e8-0479-43bf-bdef-cd05dc2d05ab Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:38:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:39.222643149Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:38:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:40.629034609Z" level=warning msg="Failed to find container exit file for 4ee59bdc119c54bc103dc357cd329f52672ee2104c1542ccc63e94991eb93bcb: timed out waiting for the condition" id=5275baae-ce25-45b6-b926-75d38b6e96eb name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:38:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:41.634357411Z" level=warning msg="Failed to find container exit file for 7954533880ae48edf98c84549b1378a8d2d162cc1235b2df68a262917201e33e: timed out waiting for the condition" id=12410756-0771-4816-aef9-c88252899b55 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:38:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:44.378141668Z" level=warning msg="Failed to find 
container exit file for 4ee59bdc119c54bc103dc357cd329f52672ee2104c1542ccc63e94991eb93bcb: timed out waiting for the condition" id=50f11639-baba-4fff-94c6-32549e847c7f name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:38:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:44.378629386Z" level=info msg="Removing container: 4ee59bdc119c54bc103dc357cd329f52672ee2104c1542ccc63e94991eb93bcb" id=00d6b1f8-cd41-4b86-855d-4d03df15399a name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 18:38:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:45.373273863Z" level=warning msg="Failed to find container exit file for 4ee59bdc119c54bc103dc357cd329f52672ee2104c1542ccc63e94991eb93bcb: timed out waiting for the condition" id=9783970c-3bbb-4fe1-98ff-cd977b5686e5 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:38:45 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:38:45.374148 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerStarted Data:5977cd64f014617e9eba0befb4f52e9466055d78e4fd24c64d02b28776b270fe} Feb 23 18:38:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:48.127300567Z" level=warning msg="Failed to find container exit file for 4ee59bdc119c54bc103dc357cd329f52672ee2104c1542ccc63e94991eb93bcb: timed out waiting for the condition" id=00d6b1f8-cd41-4b86-855d-4d03df15399a name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 18:38:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:48.151689622Z" level=info msg="Removed container 4ee59bdc119c54bc103dc357cd329f52672ee2104c1542ccc63e94991eb93bcb: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=00d6b1f8-cd41-4b86-855d-4d03df15399a name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 18:38:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:38:52.128969000Z" level=warning msg="Failed to find container exit file for 
7954533880ae48edf98c84549b1378a8d2d162cc1235b2df68a262917201e33e: timed out waiting for the condition" id=07bc6a27-600a-43e3-b023-26ca069f48e1 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:38:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:38:54.872757 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:38:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:38:54.872817 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 18:38:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:38:56.292616 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:38:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:38:56.292819 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f 
/etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:38:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:38:56.293044 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:38:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:38:56.293068 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:39:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:00.235513927Z" level=info msg="NetworkStart: stopping network for sandbox a1fb810bca018d148491f668f1522c3a63a929c6fbb5d65f6c15d53cc7b2e697" id=1adc5e5b-86bc-4fac-b5d6-99d37bbc15a0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:39:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:00.235614625Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:a1fb810bca018d148491f668f1522c3a63a929c6fbb5d65f6c15d53cc7b2e697 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/2a0f7eca-2b00-4b80-8ed4-8bb7add4260c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:39:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:00.235646734Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in 
CNI cache" Feb 23 18:39:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:00.235654344Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:39:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:00.235660601Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:39:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:39:04.872281 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:39:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:39:04.872340 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 18:39:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:11.234887653Z" level=info msg="NetworkStart: stopping network for sandbox a403cd17e089fe6de0c07946010860b8d241db1095e5908c4f525e8679a62eb3" id=0be6d866-fef6-4584-8e58-fad82807bf18 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:39:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:11.234995770Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:a403cd17e089fe6de0c07946010860b8d241db1095e5908c4f525e8679a62eb3 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/2a68b266-31a7-4026-a2f5-8895dd82aa79 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:39:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:11.235024189Z" level=error msg="error loading cached network 
config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:39:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:11.235030858Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:39:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:11.235040113Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:39:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:12.233931944Z" level=info msg="NetworkStart: stopping network for sandbox aa35e41eb2133e797b8dd026304f7e2f5edba84644f35836c2d631d005bd1646" id=a68cb394-a200-4480-9674-f7734972de9c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:39:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:12.234038110Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:aa35e41eb2133e797b8dd026304f7e2f5edba84644f35836c2d631d005bd1646 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/5d5e1f52-b1ff-4c7a-8749-5e4b4dd1b23f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:39:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:12.234064961Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:39:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:12.234071870Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:39:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:12.234077917Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:39:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:39:14.872871 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get 
\"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:39:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:39:14.872927 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 18:39:16 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:39:16.217091 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:39:16 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:39:16.217956 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:39:16 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:39:16.218221 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" 
containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:39:16 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:39:16.218391 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:39:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:17.236434825Z" level=info msg="NetworkStart: stopping network for sandbox de7e55945942e6148ae24f167a52c8e28ffe6f000f654279e1be73cec50be9ec" id=a6f323e9-68bd-4b23-ac4f-31a330b63a05 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:39:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:17.236571172Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:de7e55945942e6148ae24f167a52c8e28ffe6f000f654279e1be73cec50be9ec UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/ce04758e-6a14-42e6-b105-7c52059078c5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:39:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:17.236611034Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:39:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:17.236623233Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:39:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:17.236633498Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 
23 18:39:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:24.234993932Z" level=info msg="NetworkStart: stopping network for sandbox fba524c47b0fa0d52c9a1456f6e55483e9e35d8fbbaf28ff62092b3391acc985" id=8a514375-dfa1-4d39-824b-bc87e06e6d4f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:39:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:24.235106017Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:fba524c47b0fa0d52c9a1456f6e55483e9e35d8fbbaf28ff62092b3391acc985 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/52c325e8-0479-43bf-bdef-cd05dc2d05ab Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:39:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:24.235134120Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:39:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:24.235141023Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:39:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:24.235147961Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:39:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:39:24.872361 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:39:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:39:24.872417 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get 
\"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 18:39:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:39:26.291860 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:39:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:39:26.292124 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:39:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:39:26.292379 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:39:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:39:26.292407 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open 
/proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:39:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:39:34.872533 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:39:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:39:34.872604 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 18:39:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:39:34.872635 2199 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 18:39:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:39:34.873155 2199 kuberuntime_manager.go:659] "Message for Container of pod" containerName="csi-driver" containerStatusID={Type:cri-o ID:5977cd64f014617e9eba0befb4f52e9466055d78e4fd24c64d02b28776b270fe} pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" containerMessage="Container csi-driver failed liveness probe, will be restarted" Feb 23 18:39:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:39:34.873349 2199 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" containerID="cri-o://5977cd64f014617e9eba0befb4f52e9466055d78e4fd24c64d02b28776b270fe" gracePeriod=30 Feb 23 
18:39:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:34.873626564Z" level=info msg="Stopping container: 5977cd64f014617e9eba0befb4f52e9466055d78e4fd24c64d02b28776b270fe (timeout: 30s)" id=b008a6f4-6490-4a25-8260-d21c6a98fabc name=/runtime.v1.RuntimeService/StopContainer Feb 23 18:39:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:38.635138526Z" level=warning msg="Failed to find container exit file for 5977cd64f014617e9eba0befb4f52e9466055d78e4fd24c64d02b28776b270fe: timed out waiting for the condition" id=b008a6f4-6490-4a25-8260-d21c6a98fabc name=/runtime.v1.RuntimeService/StopContainer Feb 23 18:39:38 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-75cfd22b63985df9b1595824cf116a1eeaa506e7cd20d465c920100f9f7787e7-merged.mount: Deactivated successfully. Feb 23 18:39:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:42.421947584Z" level=warning msg="Failed to find container exit file for 5977cd64f014617e9eba0befb4f52e9466055d78e4fd24c64d02b28776b270fe: timed out waiting for the condition" id=b008a6f4-6490-4a25-8260-d21c6a98fabc name=/runtime.v1.RuntimeService/StopContainer Feb 23 18:39:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:42.423476998Z" level=info msg="Stopped container 5977cd64f014617e9eba0befb4f52e9466055d78e4fd24c64d02b28776b270fe: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=b008a6f4-6490-4a25-8260-d21c6a98fabc name=/runtime.v1.RuntimeService/StopContainer Feb 23 18:39:42 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:39:42.423974 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:39:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 
18:39:42.943431919Z" level=warning msg="Failed to find container exit file for 5977cd64f014617e9eba0befb4f52e9466055d78e4fd24c64d02b28776b270fe: timed out waiting for the condition" id=764d777c-23f7-4b4a-93c0-1ba437ce7968 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:39:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:45.245878513Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(a1fb810bca018d148491f668f1522c3a63a929c6fbb5d65f6c15d53cc7b2e697): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=1adc5e5b-86bc-4fac-b5d6-99d37bbc15a0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:39:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:45.245932759Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a1fb810bca018d148491f668f1522c3a63a929c6fbb5d65f6c15d53cc7b2e697" id=1adc5e5b-86bc-4fac-b5d6-99d37bbc15a0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:39:45 ip-10-0-136-68 systemd[1]: run-utsns-2a0f7eca\x2d2b00\x2d4b80\x2d8ed4\x2d8bb7add4260c.mount: Deactivated successfully. Feb 23 18:39:45 ip-10-0-136-68 systemd[1]: run-ipcns-2a0f7eca\x2d2b00\x2d4b80\x2d8ed4\x2d8bb7add4260c.mount: Deactivated successfully. Feb 23 18:39:45 ip-10-0-136-68 systemd[1]: run-netns-2a0f7eca\x2d2b00\x2d4b80\x2d8ed4\x2d8bb7add4260c.mount: Deactivated successfully. 
Feb 23 18:39:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:45.267331546Z" level=info msg="runSandbox: deleting pod ID a1fb810bca018d148491f668f1522c3a63a929c6fbb5d65f6c15d53cc7b2e697 from idIndex" id=1adc5e5b-86bc-4fac-b5d6-99d37bbc15a0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:39:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:45.267367113Z" level=info msg="runSandbox: removing pod sandbox a1fb810bca018d148491f668f1522c3a63a929c6fbb5d65f6c15d53cc7b2e697" id=1adc5e5b-86bc-4fac-b5d6-99d37bbc15a0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:39:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:45.267397450Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a1fb810bca018d148491f668f1522c3a63a929c6fbb5d65f6c15d53cc7b2e697" id=1adc5e5b-86bc-4fac-b5d6-99d37bbc15a0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:39:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:45.267425709Z" level=info msg="runSandbox: unmounting shmPath for sandbox a1fb810bca018d148491f668f1522c3a63a929c6fbb5d65f6c15d53cc7b2e697" id=1adc5e5b-86bc-4fac-b5d6-99d37bbc15a0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:39:45 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-a1fb810bca018d148491f668f1522c3a63a929c6fbb5d65f6c15d53cc7b2e697-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:39:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:45.272296317Z" level=info msg="runSandbox: removing pod sandbox from storage: a1fb810bca018d148491f668f1522c3a63a929c6fbb5d65f6c15d53cc7b2e697" id=1adc5e5b-86bc-4fac-b5d6-99d37bbc15a0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:39:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:45.273834309Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=1adc5e5b-86bc-4fac-b5d6-99d37bbc15a0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:39:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:45.273861671Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=1adc5e5b-86bc-4fac-b5d6-99d37bbc15a0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:39:45 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:39:45.274048 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(a1fb810bca018d148491f668f1522c3a63a929c6fbb5d65f6c15d53cc7b2e697): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:39:45 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:39:45.274215 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(a1fb810bca018d148491f668f1522c3a63a929c6fbb5d65f6c15d53cc7b2e697): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:39:45 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:39:45.274236 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(a1fb810bca018d148491f668f1522c3a63a929c6fbb5d65f6c15d53cc7b2e697): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:39:45 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:39:45.274329 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(a1fb810bca018d148491f668f1522c3a63a929c6fbb5d65f6c15d53cc7b2e697): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 18:39:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:46.692908633Z" level=warning msg="Failed to find container exit file for 7954533880ae48edf98c84549b1378a8d2d162cc1235b2df68a262917201e33e: timed out waiting for the condition" id=9d7bf891-888b-44e5-a637-d3299b082db1 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:39:46 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:39:46.693843 2199 generic.go:332] "Generic (PLEG): container finished" podID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerID="5977cd64f014617e9eba0befb4f52e9466055d78e4fd24c64d02b28776b270fe" exitCode=-1 Feb 23 18:39:46 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:39:46.693882 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerDied Data:5977cd64f014617e9eba0befb4f52e9466055d78e4fd24c64d02b28776b270fe} Feb 23 18:39:46 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:39:46.693915 2199 scope.go:115] "RemoveContainer" containerID="7954533880ae48edf98c84549b1378a8d2d162cc1235b2df68a262917201e33e" Feb 23 18:39:47 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:39:47.695368 2199 scope.go:115] "RemoveContainer" containerID="5977cd64f014617e9eba0befb4f52e9466055d78e4fd24c64d02b28776b270fe" Feb 23 18:39:47 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:39:47.695753 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:39:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:50.454048042Z" level=warning msg="Failed to find container exit file for 7954533880ae48edf98c84549b1378a8d2d162cc1235b2df68a262917201e33e: timed out waiting for the condition" id=14033338-ee7e-4755-9d13-dc3c80c5ce4b name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:39:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:54.203896129Z" level=warning msg="Failed to find container exit file for 7954533880ae48edf98c84549b1378a8d2d162cc1235b2df68a262917201e33e: timed out waiting for the condition" id=f01754da-f65e-474e-ac4d-774b747ab580 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:39:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:54.204504122Z" level=info msg="Removing container: 7954533880ae48edf98c84549b1378a8d2d162cc1235b2df68a262917201e33e" id=e3c2c05b-1c0a-4d7d-80ce-01063b502f70 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 18:39:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:56.243791285Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(a403cd17e089fe6de0c07946010860b8d241db1095e5908c4f525e8679a62eb3): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=0be6d866-fef6-4584-8e58-fad82807bf18 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:39:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:56.243840536Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a403cd17e089fe6de0c07946010860b8d241db1095e5908c4f525e8679a62eb3" id=0be6d866-fef6-4584-8e58-fad82807bf18 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:39:56 ip-10-0-136-68 systemd[1]: run-utsns-2a68b266\x2d31a7\x2d4026\x2da2f5\x2d8895dd82aa79.mount: Deactivated successfully. Feb 23 18:39:56 ip-10-0-136-68 systemd[1]: run-ipcns-2a68b266\x2d31a7\x2d4026\x2da2f5\x2d8895dd82aa79.mount: Deactivated successfully. Feb 23 18:39:56 ip-10-0-136-68 systemd[1]: run-netns-2a68b266\x2d31a7\x2d4026\x2da2f5\x2d8895dd82aa79.mount: Deactivated successfully. Feb 23 18:39:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:56.268319610Z" level=info msg="runSandbox: deleting pod ID a403cd17e089fe6de0c07946010860b8d241db1095e5908c4f525e8679a62eb3 from idIndex" id=0be6d866-fef6-4584-8e58-fad82807bf18 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:39:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:56.268357236Z" level=info msg="runSandbox: removing pod sandbox a403cd17e089fe6de0c07946010860b8d241db1095e5908c4f525e8679a62eb3" id=0be6d866-fef6-4584-8e58-fad82807bf18 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:39:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:56.268385986Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a403cd17e089fe6de0c07946010860b8d241db1095e5908c4f525e8679a62eb3" id=0be6d866-fef6-4584-8e58-fad82807bf18 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:39:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:56.268398353Z" level=info msg="runSandbox: unmounting shmPath for sandbox a403cd17e089fe6de0c07946010860b8d241db1095e5908c4f525e8679a62eb3" id=0be6d866-fef6-4584-8e58-fad82807bf18 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:39:56 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-a403cd17e089fe6de0c07946010860b8d241db1095e5908c4f525e8679a62eb3-userdata-shm.mount: Deactivated successfully.
Feb 23 18:39:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:56.274316221Z" level=info msg="runSandbox: removing pod sandbox from storage: a403cd17e089fe6de0c07946010860b8d241db1095e5908c4f525e8679a62eb3" id=0be6d866-fef6-4584-8e58-fad82807bf18 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:39:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:56.275898705Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=0be6d866-fef6-4584-8e58-fad82807bf18 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:39:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:56.275933359Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=0be6d866-fef6-4584-8e58-fad82807bf18 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:39:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:39:56.276161 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(a403cd17e089fe6de0c07946010860b8d241db1095e5908c4f525e8679a62eb3): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:39:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:39:56.276230 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(a403cd17e089fe6de0c07946010860b8d241db1095e5908c4f525e8679a62eb3): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:39:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:39:56.276290 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(a403cd17e089fe6de0c07946010860b8d241db1095e5908c4f525e8679a62eb3): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:39:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:39:56.276375 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(a403cd17e089fe6de0c07946010860b8d241db1095e5908c4f525e8679a62eb3): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 18:39:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:39:56.292535 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:39:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:39:56.292829 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:39:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:39:56.293098 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:39:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:39:56.293128 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:39:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:57.244018690Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(aa35e41eb2133e797b8dd026304f7e2f5edba84644f35836c2d631d005bd1646): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a68cb394-a200-4480-9674-f7734972de9c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:39:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:57.244077184Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox aa35e41eb2133e797b8dd026304f7e2f5edba84644f35836c2d631d005bd1646" id=a68cb394-a200-4480-9674-f7734972de9c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:39:57 ip-10-0-136-68 systemd[1]: run-utsns-5d5e1f52\x2db1ff\x2d4c7a\x2d8749\x2d5e4b4dd1b23f.mount: Deactivated successfully. Feb 23 18:39:57 ip-10-0-136-68 systemd[1]: run-ipcns-5d5e1f52\x2db1ff\x2d4c7a\x2d8749\x2d5e4b4dd1b23f.mount: Deactivated successfully. Feb 23 18:39:57 ip-10-0-136-68 systemd[1]: run-netns-5d5e1f52\x2db1ff\x2d4c7a\x2d8749\x2d5e4b4dd1b23f.mount: Deactivated successfully.
Feb 23 18:39:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:57.265318090Z" level=info msg="runSandbox: deleting pod ID aa35e41eb2133e797b8dd026304f7e2f5edba84644f35836c2d631d005bd1646 from idIndex" id=a68cb394-a200-4480-9674-f7734972de9c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:39:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:57.265362296Z" level=info msg="runSandbox: removing pod sandbox aa35e41eb2133e797b8dd026304f7e2f5edba84644f35836c2d631d005bd1646" id=a68cb394-a200-4480-9674-f7734972de9c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:39:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:57.265401684Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox aa35e41eb2133e797b8dd026304f7e2f5edba84644f35836c2d631d005bd1646" id=a68cb394-a200-4480-9674-f7734972de9c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:39:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:57.265424364Z" level=info msg="runSandbox: unmounting shmPath for sandbox aa35e41eb2133e797b8dd026304f7e2f5edba84644f35836c2d631d005bd1646" id=a68cb394-a200-4480-9674-f7734972de9c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:39:57 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-aa35e41eb2133e797b8dd026304f7e2f5edba84644f35836c2d631d005bd1646-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:39:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:57.271317606Z" level=info msg="runSandbox: removing pod sandbox from storage: aa35e41eb2133e797b8dd026304f7e2f5edba84644f35836c2d631d005bd1646" id=a68cb394-a200-4480-9674-f7734972de9c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:39:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:57.272866187Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=a68cb394-a200-4480-9674-f7734972de9c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:39:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:57.272898570Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=a68cb394-a200-4480-9674-f7734972de9c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:39:57 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:39:57.273134 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(aa35e41eb2133e797b8dd026304f7e2f5edba84644f35836c2d631d005bd1646): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:39:57 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:39:57.273191 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(aa35e41eb2133e797b8dd026304f7e2f5edba84644f35836c2d631d005bd1646): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:39:57 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:39:57.273215 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(aa35e41eb2133e797b8dd026304f7e2f5edba84644f35836c2d631d005bd1646): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:39:57 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:39:57.273356 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(aa35e41eb2133e797b8dd026304f7e2f5edba84644f35836c2d631d005bd1646): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 18:39:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:57.966009882Z" level=warning msg="Failed to find container exit file for 7954533880ae48edf98c84549b1378a8d2d162cc1235b2df68a262917201e33e: timed out waiting for the condition" id=e3c2c05b-1c0a-4d7d-80ce-01063b502f70 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 18:39:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:57.990581317Z" level=info msg="Removed container 7954533880ae48edf98c84549b1378a8d2d162cc1235b2df68a262917201e33e: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=e3c2c05b-1c0a-4d7d-80ce-01063b502f70 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 18:39:59 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:39:59.216878 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 18:39:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:59.217321500Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=fd8b3749-c955-404f-9e93-3382c42fdfa7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:39:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:59.217375916Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:39:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:59.225079543Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:be68e116476a44e85e3cb09ee3f686cf07ac3977f4bc7ba11885b5188164407e UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/34cfc729-e6b1-42d6-bb1a-278678749c14 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:39:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:39:59.225113877Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:40:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:40:02.245961591Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(de7e55945942e6148ae24f167a52c8e28ffe6f000f654279e1be73cec50be9ec): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a6f323e9-68bd-4b23-ac4f-31a330b63a05 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:40:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:40:02.246010478Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox de7e55945942e6148ae24f167a52c8e28ffe6f000f654279e1be73cec50be9ec" id=a6f323e9-68bd-4b23-ac4f-31a330b63a05 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:40:02 ip-10-0-136-68 systemd[1]: run-utsns-ce04758e\x2d6a14\x2d42e6\x2db105\x2d7c52059078c5.mount: Deactivated successfully. Feb 23 18:40:02 ip-10-0-136-68 systemd[1]: run-ipcns-ce04758e\x2d6a14\x2d42e6\x2db105\x2d7c52059078c5.mount: Deactivated successfully. Feb 23 18:40:02 ip-10-0-136-68 systemd[1]: run-netns-ce04758e\x2d6a14\x2d42e6\x2db105\x2d7c52059078c5.mount: Deactivated successfully.
Feb 23 18:40:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:40:02.272336970Z" level=info msg="runSandbox: deleting pod ID de7e55945942e6148ae24f167a52c8e28ffe6f000f654279e1be73cec50be9ec from idIndex" id=a6f323e9-68bd-4b23-ac4f-31a330b63a05 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:40:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:40:02.272374485Z" level=info msg="runSandbox: removing pod sandbox de7e55945942e6148ae24f167a52c8e28ffe6f000f654279e1be73cec50be9ec" id=a6f323e9-68bd-4b23-ac4f-31a330b63a05 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:40:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:40:02.272414080Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox de7e55945942e6148ae24f167a52c8e28ffe6f000f654279e1be73cec50be9ec" id=a6f323e9-68bd-4b23-ac4f-31a330b63a05 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:40:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:40:02.272430083Z" level=info msg="runSandbox: unmounting shmPath for sandbox de7e55945942e6148ae24f167a52c8e28ffe6f000f654279e1be73cec50be9ec" id=a6f323e9-68bd-4b23-ac4f-31a330b63a05 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:40:02 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-de7e55945942e6148ae24f167a52c8e28ffe6f000f654279e1be73cec50be9ec-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:40:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:40:02.278307155Z" level=info msg="runSandbox: removing pod sandbox from storage: de7e55945942e6148ae24f167a52c8e28ffe6f000f654279e1be73cec50be9ec" id=a6f323e9-68bd-4b23-ac4f-31a330b63a05 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:40:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:40:02.279773898Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=a6f323e9-68bd-4b23-ac4f-31a330b63a05 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:40:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:40:02.279802691Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=a6f323e9-68bd-4b23-ac4f-31a330b63a05 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:40:02 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:40:02.279966 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(de7e55945942e6148ae24f167a52c8e28ffe6f000f654279e1be73cec50be9ec): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:40:02 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:40:02.280020 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(de7e55945942e6148ae24f167a52c8e28ffe6f000f654279e1be73cec50be9ec): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:40:02 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:40:02.280044 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(de7e55945942e6148ae24f167a52c8e28ffe6f000f654279e1be73cec50be9ec): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:40:02 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:40:02.280107 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(de7e55945942e6148ae24f167a52c8e28ffe6f000f654279e1be73cec50be9ec): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 18:40:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:40:02.463915240Z" level=warning msg="Failed to find container exit file for 5977cd64f014617e9eba0befb4f52e9466055d78e4fd24c64d02b28776b270fe: timed out waiting for the condition" id=11a4afbb-1a82-443f-ba2b-b9adfc269aec name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:40:03 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:40:03.216842 2199 scope.go:115] "RemoveContainer" containerID="5977cd64f014617e9eba0befb4f52e9466055d78e4fd24c64d02b28776b270fe" Feb 23 18:40:03 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:40:03.217351 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:40:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:40:09.245286770Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(fba524c47b0fa0d52c9a1456f6e55483e9e35d8fbbaf28ff62092b3391acc985): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=8a514375-dfa1-4d39-824b-bc87e06e6d4f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:40:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:40:09.245331458Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox fba524c47b0fa0d52c9a1456f6e55483e9e35d8fbbaf28ff62092b3391acc985" id=8a514375-dfa1-4d39-824b-bc87e06e6d4f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:40:09 ip-10-0-136-68 systemd[1]: run-utsns-52c325e8\x2d0479\x2d43bf\x2dbdef\x2dcd05dc2d05ab.mount: Deactivated successfully. Feb 23 18:40:09 ip-10-0-136-68 systemd[1]: run-ipcns-52c325e8\x2d0479\x2d43bf\x2dbdef\x2dcd05dc2d05ab.mount: Deactivated successfully. Feb 23 18:40:09 ip-10-0-136-68 systemd[1]: run-netns-52c325e8\x2d0479\x2d43bf\x2dbdef\x2dcd05dc2d05ab.mount: Deactivated successfully. Feb 23 18:40:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:40:09.269321644Z" level=info msg="runSandbox: deleting pod ID fba524c47b0fa0d52c9a1456f6e55483e9e35d8fbbaf28ff62092b3391acc985 from idIndex" id=8a514375-dfa1-4d39-824b-bc87e06e6d4f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:40:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:40:09.269355657Z" level=info msg="runSandbox: removing pod sandbox fba524c47b0fa0d52c9a1456f6e55483e9e35d8fbbaf28ff62092b3391acc985" id=8a514375-dfa1-4d39-824b-bc87e06e6d4f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:40:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:40:09.269388318Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox fba524c47b0fa0d52c9a1456f6e55483e9e35d8fbbaf28ff62092b3391acc985" id=8a514375-dfa1-4d39-824b-bc87e06e6d4f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:40:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:40:09.269415738Z" level=info msg="runSandbox: unmounting shmPath for sandbox fba524c47b0fa0d52c9a1456f6e55483e9e35d8fbbaf28ff62092b3391acc985" id=8a514375-dfa1-4d39-824b-bc87e06e6d4f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:40:09 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-fba524c47b0fa0d52c9a1456f6e55483e9e35d8fbbaf28ff62092b3391acc985-userdata-shm.mount: Deactivated successfully. Feb 23 18:40:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:40:09.275300506Z" level=info msg="runSandbox: removing pod sandbox from storage: fba524c47b0fa0d52c9a1456f6e55483e9e35d8fbbaf28ff62092b3391acc985" id=8a514375-dfa1-4d39-824b-bc87e06e6d4f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:40:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:40:09.276815079Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=8a514375-dfa1-4d39-824b-bc87e06e6d4f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:40:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:40:09.276843455Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=8a514375-dfa1-4d39-824b-bc87e06e6d4f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:40:09 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:40:09.277063 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(fba524c47b0fa0d52c9a1456f6e55483e9e35d8fbbaf28ff62092b3391acc985): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" Feb 23 18:40:09 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:40:09.277129 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(fba524c47b0fa0d52c9a1456f6e55483e9e35d8fbbaf28ff62092b3391acc985): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:40:09 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:40:09.277153 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(fba524c47b0fa0d52c9a1456f6e55483e9e35d8fbbaf28ff62092b3391acc985): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:40:09 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:40:09.277219 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(fba524c47b0fa0d52c9a1456f6e55483e9e35d8fbbaf28ff62092b3391acc985): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 18:40:10 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:40:10.216667 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:40:10 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:40:10.216817 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:40:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:40:10.217118061Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=78a7a69c-cf8f-4e78-a021-7bba27d3f41f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:40:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:40:10.217186195Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:40:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:40:10.217197380Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=b1cd6692-3574-4e6d-8b54-400db793e526 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:40:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:40:10.217268713Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:40:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:40:10.224889716Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:dbe2ee427e96b5fa777da4c8fd8b5acea77940df8682363c2eb7438820de5955 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/fc22a3e3-82b6-43e8-8a0e-4319109493cc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:40:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:40:10.224925807Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:40:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:40:10.225309007Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:7eb44b40fda20b1507f1e51f7e728d77352519e805e37221403e5b4a635da4ea UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/fc7e1ee1-f77f-42a1-8952-1234cf018b95 Networks:[] 
RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:40:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:40:10.225402259Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:40:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:40:14.216879 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:40:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:40:14.217444800Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=0988b8bc-6143-4672-b869-a487efb89e24 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:40:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:40:14.217518600Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:40:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:40:14.222973756Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:aab9ac16aa8bd170cb17dbee0e33d70809e4e76c6564ec02431d51bf238c6395 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/1d45d2c4-dbcc-43d3-aa3c-1cb1288683cc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:40:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:40:14.222996656Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:40:16 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:40:16.217656 2199 scope.go:115] "RemoveContainer" containerID="5977cd64f014617e9eba0befb4f52e9466055d78e4fd24c64d02b28776b270fe" Feb 23 18:40:16 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:40:16.218289 2199 pod_workers.go:965] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:40:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:40:20.197818112Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3" id=70b944c6-f738-4638-a684-3b5738637bd2 name=/runtime.v1.ImageService/ImageStatus Feb 23 18:40:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:40:20.198006745Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:51cb7555d0a960fc0daae74a5b5c47c19fd1ae7bd31e2616ebc88f696f511e4d,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3],Size_:351828271,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=70b944c6-f738-4638-a684-3b5738637bd2 name=/runtime.v1.ImageService/ImageStatus Feb 23 18:40:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:40:21.216795 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:40:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:40:21.217288436Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=f3ebf673-f800-45f3-afc0-5f9065e0555b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:40:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:40:21.217356420Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:40:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:40:21.222576130Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:bd1a50c92cfa26806c45e0776d356d11300f2400d15438600ded0a6a3d035e04 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/cf1a7a1d-9a21-44a1-8051-d2ba5d44f3ce Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:40:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:40:21.222610057Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:40:22 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:40:22.217131 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:40:22 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:40:22.217477 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:40:22 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:40:22.217819 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:40:22 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:40:22.217863 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:40:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:40:26.292660 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:40:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:40:26.292945 2199 remote_runtime.go:479] "ExecSync cmd from runtime 
service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:40:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:40:26.293155 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:40:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:40:26.293210 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:40:28 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:40:28.216921 2199 scope.go:115] "RemoveContainer" containerID="5977cd64f014617e9eba0befb4f52e9466055d78e4fd24c64d02b28776b270fe" Feb 23 18:40:28 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:40:28.217526 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver 
pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:40:42 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:40:42.216578 2199 scope.go:115] "RemoveContainer" containerID="5977cd64f014617e9eba0befb4f52e9466055d78e4fd24c64d02b28776b270fe" Feb 23 18:40:42 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:40:42.216986 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:40:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:40:44.237112352Z" level=info msg="NetworkStart: stopping network for sandbox be68e116476a44e85e3cb09ee3f686cf07ac3977f4bc7ba11885b5188164407e" id=fd8b3749-c955-404f-9e93-3382c42fdfa7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:40:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:40:44.237264495Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:be68e116476a44e85e3cb09ee3f686cf07ac3977f4bc7ba11885b5188164407e UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/34cfc729-e6b1-42d6-bb1a-278678749c14 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:40:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:40:44.237308971Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:40:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:40:44.237322978Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:40:44 ip-10-0-136-68 
crio[2158]: time="2023-02-23 18:40:44.237332893Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:40:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:40:54.216790 2199 scope.go:115] "RemoveContainer" containerID="5977cd64f014617e9eba0befb4f52e9466055d78e4fd24c64d02b28776b270fe" Feb 23 18:40:54 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:40:54.217430 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:40:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:40:55.237856670Z" level=info msg="NetworkStart: stopping network for sandbox dbe2ee427e96b5fa777da4c8fd8b5acea77940df8682363c2eb7438820de5955" id=b1cd6692-3574-4e6d-8b54-400db793e526 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:40:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:40:55.238028398Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:dbe2ee427e96b5fa777da4c8fd8b5acea77940df8682363c2eb7438820de5955 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/fc22a3e3-82b6-43e8-8a0e-4319109493cc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:40:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:40:55.238074216Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:40:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:40:55.238087615Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:40:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 
18:40:55.238098452Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:40:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:40:55.238148729Z" level=info msg="NetworkStart: stopping network for sandbox 7eb44b40fda20b1507f1e51f7e728d77352519e805e37221403e5b4a635da4ea" id=78a7a69c-cf8f-4e78-a021-7bba27d3f41f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:40:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:40:55.238225877Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:7eb44b40fda20b1507f1e51f7e728d77352519e805e37221403e5b4a635da4ea UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/fc7e1ee1-f77f-42a1-8952-1234cf018b95 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:40:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:40:55.238280355Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:40:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:40:55.238292795Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:40:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:40:55.238302279Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:40:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:40:56.291977 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 
18:40:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:40:56.292214 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:40:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:40:56.292454 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:40:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:40:56.292492 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:40:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:40:59.234112128Z" level=info msg="NetworkStart: stopping network for sandbox aab9ac16aa8bd170cb17dbee0e33d70809e4e76c6564ec02431d51bf238c6395" id=0988b8bc-6143-4672-b869-a487efb89e24 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:40:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:40:59.234233123Z" level=info msg="Got pod network 
&{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:aab9ac16aa8bd170cb17dbee0e33d70809e4e76c6564ec02431d51bf238c6395 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/1d45d2c4-dbcc-43d3-aa3c-1cb1288683cc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:40:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:40:59.234288387Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:40:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:40:59.234297156Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:40:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:40:59.234304370Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:41:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:06.234307488Z" level=info msg="NetworkStart: stopping network for sandbox bd1a50c92cfa26806c45e0776d356d11300f2400d15438600ded0a6a3d035e04" id=f3ebf673-f800-45f3-afc0-5f9065e0555b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:41:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:06.234406794Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:bd1a50c92cfa26806c45e0776d356d11300f2400d15438600ded0a6a3d035e04 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/cf1a7a1d-9a21-44a1-8051-d2ba5d44f3ce Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:41:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:06.234435797Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:41:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:06.234445635Z" level=warning 
msg="falling back to loading from existing plugins on disk" Feb 23 18:41:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:06.234452693Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:41:09 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:41:09.217402 2199 scope.go:115] "RemoveContainer" containerID="5977cd64f014617e9eba0befb4f52e9466055d78e4fd24c64d02b28776b270fe" Feb 23 18:41:09 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:41:09.217796 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:41:22 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:41:22.217072 2199 scope.go:115] "RemoveContainer" containerID="5977cd64f014617e9eba0befb4f52e9466055d78e4fd24c64d02b28776b270fe" Feb 23 18:41:22 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:41:22.217662 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:41:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:41:26.292148 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container 
process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:41:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:41:26.292409 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:41:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:41:26.292637 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:41:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:41:26.292663 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:41:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:29.246524680Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox 
k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(be68e116476a44e85e3cb09ee3f686cf07ac3977f4bc7ba11885b5188164407e): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=fd8b3749-c955-404f-9e93-3382c42fdfa7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:41:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:29.246580171Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox be68e116476a44e85e3cb09ee3f686cf07ac3977f4bc7ba11885b5188164407e" id=fd8b3749-c955-404f-9e93-3382c42fdfa7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:41:29 ip-10-0-136-68 systemd[1]: run-utsns-34cfc729\x2de6b1\x2d42d6\x2dbb1a\x2d278678749c14.mount: Deactivated successfully. Feb 23 18:41:29 ip-10-0-136-68 systemd[1]: run-ipcns-34cfc729\x2de6b1\x2d42d6\x2dbb1a\x2d278678749c14.mount: Deactivated successfully. Feb 23 18:41:29 ip-10-0-136-68 systemd[1]: run-netns-34cfc729\x2de6b1\x2d42d6\x2dbb1a\x2d278678749c14.mount: Deactivated successfully. 
Feb 23 18:41:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:29.272334422Z" level=info msg="runSandbox: deleting pod ID be68e116476a44e85e3cb09ee3f686cf07ac3977f4bc7ba11885b5188164407e from idIndex" id=fd8b3749-c955-404f-9e93-3382c42fdfa7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:41:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:29.272376468Z" level=info msg="runSandbox: removing pod sandbox be68e116476a44e85e3cb09ee3f686cf07ac3977f4bc7ba11885b5188164407e" id=fd8b3749-c955-404f-9e93-3382c42fdfa7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:41:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:29.272421274Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox be68e116476a44e85e3cb09ee3f686cf07ac3977f4bc7ba11885b5188164407e" id=fd8b3749-c955-404f-9e93-3382c42fdfa7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:41:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:29.272437671Z" level=info msg="runSandbox: unmounting shmPath for sandbox be68e116476a44e85e3cb09ee3f686cf07ac3977f4bc7ba11885b5188164407e" id=fd8b3749-c955-404f-9e93-3382c42fdfa7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:41:29 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-be68e116476a44e85e3cb09ee3f686cf07ac3977f4bc7ba11885b5188164407e-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:41:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:29.278309339Z" level=info msg="runSandbox: removing pod sandbox from storage: be68e116476a44e85e3cb09ee3f686cf07ac3977f4bc7ba11885b5188164407e" id=fd8b3749-c955-404f-9e93-3382c42fdfa7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:41:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:29.280093578Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=fd8b3749-c955-404f-9e93-3382c42fdfa7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:41:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:29.280124918Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=fd8b3749-c955-404f-9e93-3382c42fdfa7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:41:29 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:41:29.280413 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(be68e116476a44e85e3cb09ee3f686cf07ac3977f4bc7ba11885b5188164407e): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:41:29 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:41:29.280484 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(be68e116476a44e85e3cb09ee3f686cf07ac3977f4bc7ba11885b5188164407e): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:41:29 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:41:29.280541 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(be68e116476a44e85e3cb09ee3f686cf07ac3977f4bc7ba11885b5188164407e): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:41:29 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:41:29.280613 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(be68e116476a44e85e3cb09ee3f686cf07ac3977f4bc7ba11885b5188164407e): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 18:41:33 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:41:33.217233 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:41:33 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:41:33.218031 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:41:33 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:41:33.218321 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:41:33 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:41:33.218358 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:41:37 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:41:37.216738 2199 scope.go:115] "RemoveContainer" containerID="5977cd64f014617e9eba0befb4f52e9466055d78e4fd24c64d02b28776b270fe" Feb 23 18:41:37 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:41:37.217125 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:41:40 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:41:40.219327 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 18:41:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:40.220549916Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=7672b06d-40ff-4d4f-b856-12d427dee68a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:41:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:40.220628356Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:41:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:40.227859759Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:3a2f63d422843c71945eaab9abc67199c2fbf6d02244357502e4be8e90681994 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/b07ddec5-fcb8-4f0f-9894-3d59cdd2a131 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:41:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:40.227892774Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:41:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:40.249411200Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(dbe2ee427e96b5fa777da4c8fd8b5acea77940df8682363c2eb7438820de5955): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b1cd6692-3574-4e6d-8b54-400db793e526 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:41:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:40.249455144Z" level=info 
msg="runSandbox: cleaning up namespaces after failing to run sandbox dbe2ee427e96b5fa777da4c8fd8b5acea77940df8682363c2eb7438820de5955" id=b1cd6692-3574-4e6d-8b54-400db793e526 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:41:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:40.249563959Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(7eb44b40fda20b1507f1e51f7e728d77352519e805e37221403e5b4a635da4ea): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=78a7a69c-cf8f-4e78-a021-7bba27d3f41f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:41:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:40.249605549Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 7eb44b40fda20b1507f1e51f7e728d77352519e805e37221403e5b4a635da4ea" id=78a7a69c-cf8f-4e78-a021-7bba27d3f41f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:41:40 ip-10-0-136-68 systemd[1]: run-utsns-fc22a3e3\x2d82b6\x2d43e8\x2d8a0e\x2d4319109493cc.mount: Deactivated successfully. Feb 23 18:41:40 ip-10-0-136-68 systemd[1]: run-utsns-fc7e1ee1\x2df77f\x2d42a1\x2d8952\x2d1234cf018b95.mount: Deactivated successfully. Feb 23 18:41:40 ip-10-0-136-68 systemd[1]: run-ipcns-fc22a3e3\x2d82b6\x2d43e8\x2d8a0e\x2d4319109493cc.mount: Deactivated successfully. Feb 23 18:41:40 ip-10-0-136-68 systemd[1]: run-ipcns-fc7e1ee1\x2df77f\x2d42a1\x2d8952\x2d1234cf018b95.mount: Deactivated successfully. 
Feb 23 18:41:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:40.265319459Z" level=info msg="runSandbox: deleting pod ID dbe2ee427e96b5fa777da4c8fd8b5acea77940df8682363c2eb7438820de5955 from idIndex" id=b1cd6692-3574-4e6d-8b54-400db793e526 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:41:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:40.265348576Z" level=info msg="runSandbox: removing pod sandbox dbe2ee427e96b5fa777da4c8fd8b5acea77940df8682363c2eb7438820de5955" id=b1cd6692-3574-4e6d-8b54-400db793e526 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:41:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:40.265371854Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox dbe2ee427e96b5fa777da4c8fd8b5acea77940df8682363c2eb7438820de5955" id=b1cd6692-3574-4e6d-8b54-400db793e526 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:41:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:40.265385139Z" level=info msg="runSandbox: unmounting shmPath for sandbox dbe2ee427e96b5fa777da4c8fd8b5acea77940df8682363c2eb7438820de5955" id=b1cd6692-3574-4e6d-8b54-400db793e526 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:41:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:40.268306792Z" level=info msg="runSandbox: deleting pod ID 7eb44b40fda20b1507f1e51f7e728d77352519e805e37221403e5b4a635da4ea from idIndex" id=78a7a69c-cf8f-4e78-a021-7bba27d3f41f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:41:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:40.268338886Z" level=info msg="runSandbox: removing pod sandbox 7eb44b40fda20b1507f1e51f7e728d77352519e805e37221403e5b4a635da4ea" id=78a7a69c-cf8f-4e78-a021-7bba27d3f41f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:41:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:40.268371439Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 7eb44b40fda20b1507f1e51f7e728d77352519e805e37221403e5b4a635da4ea" 
id=78a7a69c-cf8f-4e78-a021-7bba27d3f41f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:41:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:40.268395539Z" level=info msg="runSandbox: unmounting shmPath for sandbox 7eb44b40fda20b1507f1e51f7e728d77352519e805e37221403e5b4a635da4ea" id=78a7a69c-cf8f-4e78-a021-7bba27d3f41f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:41:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:40.272304078Z" level=info msg="runSandbox: removing pod sandbox from storage: dbe2ee427e96b5fa777da4c8fd8b5acea77940df8682363c2eb7438820de5955" id=b1cd6692-3574-4e6d-8b54-400db793e526 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:41:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:40.273726060Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=b1cd6692-3574-4e6d-8b54-400db793e526 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:41:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:40.273753690Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=b1cd6692-3574-4e6d-8b54-400db793e526 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:41:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:41:40.273950 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(dbe2ee427e96b5fa777da4c8fd8b5acea77940df8682363c2eb7438820de5955): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Feb 23 18:41:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:41:40.274001 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(dbe2ee427e96b5fa777da4c8fd8b5acea77940df8682363c2eb7438820de5955): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:41:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:41:40.274030 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(dbe2ee427e96b5fa777da4c8fd8b5acea77940df8682363c2eb7438820de5955): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:41:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:41:40.274084 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(dbe2ee427e96b5fa777da4c8fd8b5acea77940df8682363c2eb7438820de5955): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 18:41:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:40.275328030Z" level=info msg="runSandbox: removing pod sandbox from storage: 7eb44b40fda20b1507f1e51f7e728d77352519e805e37221403e5b4a635da4ea" id=78a7a69c-cf8f-4e78-a021-7bba27d3f41f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:41:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:40.276728030Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=78a7a69c-cf8f-4e78-a021-7bba27d3f41f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:41:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:40.276754391Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=78a7a69c-cf8f-4e78-a021-7bba27d3f41f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:41:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:41:40.276916 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(7eb44b40fda20b1507f1e51f7e728d77352519e805e37221403e5b4a635da4ea): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:41:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:41:40.276970 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(7eb44b40fda20b1507f1e51f7e728d77352519e805e37221403e5b4a635da4ea): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:41:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:41:40.276994 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(7eb44b40fda20b1507f1e51f7e728d77352519e805e37221403e5b4a635da4ea): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:41:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:41:40.277048 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(7eb44b40fda20b1507f1e51f7e728d77352519e805e37221403e5b4a635da4ea): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 18:41:41 ip-10-0-136-68 systemd[1]: run-netns-fc22a3e3\x2d82b6\x2d43e8\x2d8a0e\x2d4319109493cc.mount: Deactivated successfully. Feb 23 18:41:41 ip-10-0-136-68 systemd[1]: run-netns-fc7e1ee1\x2df77f\x2d42a1\x2d8952\x2d1234cf018b95.mount: Deactivated successfully. Feb 23 18:41:41 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-dbe2ee427e96b5fa777da4c8fd8b5acea77940df8682363c2eb7438820de5955-userdata-shm.mount: Deactivated successfully. Feb 23 18:41:41 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-7eb44b40fda20b1507f1e51f7e728d77352519e805e37221403e5b4a635da4ea-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:41:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:44.243311505Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(aab9ac16aa8bd170cb17dbee0e33d70809e4e76c6564ec02431d51bf238c6395): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=0988b8bc-6143-4672-b869-a487efb89e24 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:41:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:44.243359415Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox aab9ac16aa8bd170cb17dbee0e33d70809e4e76c6564ec02431d51bf238c6395" id=0988b8bc-6143-4672-b869-a487efb89e24 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:41:44 ip-10-0-136-68 systemd[1]: run-utsns-1d45d2c4\x2ddbcc\x2d43d3\x2daa3c\x2d1cb1288683cc.mount: Deactivated successfully. Feb 23 18:41:44 ip-10-0-136-68 systemd[1]: run-ipcns-1d45d2c4\x2ddbcc\x2d43d3\x2daa3c\x2d1cb1288683cc.mount: Deactivated successfully. Feb 23 18:41:44 ip-10-0-136-68 systemd[1]: run-netns-1d45d2c4\x2ddbcc\x2d43d3\x2daa3c\x2d1cb1288683cc.mount: Deactivated successfully. 
Feb 23 18:41:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:44.280321844Z" level=info msg="runSandbox: deleting pod ID aab9ac16aa8bd170cb17dbee0e33d70809e4e76c6564ec02431d51bf238c6395 from idIndex" id=0988b8bc-6143-4672-b869-a487efb89e24 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:41:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:44.280356594Z" level=info msg="runSandbox: removing pod sandbox aab9ac16aa8bd170cb17dbee0e33d70809e4e76c6564ec02431d51bf238c6395" id=0988b8bc-6143-4672-b869-a487efb89e24 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:41:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:44.280390518Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox aab9ac16aa8bd170cb17dbee0e33d70809e4e76c6564ec02431d51bf238c6395" id=0988b8bc-6143-4672-b869-a487efb89e24 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:41:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:44.280418188Z" level=info msg="runSandbox: unmounting shmPath for sandbox aab9ac16aa8bd170cb17dbee0e33d70809e4e76c6564ec02431d51bf238c6395" id=0988b8bc-6143-4672-b869-a487efb89e24 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:41:44 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-aab9ac16aa8bd170cb17dbee0e33d70809e4e76c6564ec02431d51bf238c6395-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:41:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:44.286305151Z" level=info msg="runSandbox: removing pod sandbox from storage: aab9ac16aa8bd170cb17dbee0e33d70809e4e76c6564ec02431d51bf238c6395" id=0988b8bc-6143-4672-b869-a487efb89e24 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:41:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:44.287808655Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=0988b8bc-6143-4672-b869-a487efb89e24 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:41:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:44.287837424Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=0988b8bc-6143-4672-b869-a487efb89e24 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:41:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:41:44.287990 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(aab9ac16aa8bd170cb17dbee0e33d70809e4e76c6564ec02431d51bf238c6395): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:41:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:41:44.288043 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(aab9ac16aa8bd170cb17dbee0e33d70809e4e76c6564ec02431d51bf238c6395): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:41:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:41:44.288065 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(aab9ac16aa8bd170cb17dbee0e33d70809e4e76c6564ec02431d51bf238c6395): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:41:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:41:44.288126 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(aab9ac16aa8bd170cb17dbee0e33d70809e4e76c6564ec02431d51bf238c6395): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 18:41:49 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:41:49.216598 2199 scope.go:115] "RemoveContainer" containerID="5977cd64f014617e9eba0befb4f52e9466055d78e4fd24c64d02b28776b270fe" Feb 23 18:41:49 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:41:49.217138 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:41:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:51.243365008Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(bd1a50c92cfa26806c45e0776d356d11300f2400d15438600ded0a6a3d035e04): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f3ebf673-f800-45f3-afc0-5f9065e0555b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:41:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:51.243410657Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox bd1a50c92cfa26806c45e0776d356d11300f2400d15438600ded0a6a3d035e04" id=f3ebf673-f800-45f3-afc0-5f9065e0555b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:41:51 
ip-10-0-136-68 systemd[1]: run-utsns-cf1a7a1d\x2d9a21\x2d44a1\x2d8051\x2dd2ba5d44f3ce.mount: Deactivated successfully. Feb 23 18:41:51 ip-10-0-136-68 systemd[1]: run-ipcns-cf1a7a1d\x2d9a21\x2d44a1\x2d8051\x2dd2ba5d44f3ce.mount: Deactivated successfully. Feb 23 18:41:51 ip-10-0-136-68 systemd[1]: run-netns-cf1a7a1d\x2d9a21\x2d44a1\x2d8051\x2dd2ba5d44f3ce.mount: Deactivated successfully. Feb 23 18:41:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:51.273344874Z" level=info msg="runSandbox: deleting pod ID bd1a50c92cfa26806c45e0776d356d11300f2400d15438600ded0a6a3d035e04 from idIndex" id=f3ebf673-f800-45f3-afc0-5f9065e0555b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:41:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:51.273388868Z" level=info msg="runSandbox: removing pod sandbox bd1a50c92cfa26806c45e0776d356d11300f2400d15438600ded0a6a3d035e04" id=f3ebf673-f800-45f3-afc0-5f9065e0555b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:41:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:51.273434672Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox bd1a50c92cfa26806c45e0776d356d11300f2400d15438600ded0a6a3d035e04" id=f3ebf673-f800-45f3-afc0-5f9065e0555b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:41:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:51.273450632Z" level=info msg="runSandbox: unmounting shmPath for sandbox bd1a50c92cfa26806c45e0776d356d11300f2400d15438600ded0a6a3d035e04" id=f3ebf673-f800-45f3-afc0-5f9065e0555b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:41:51 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-bd1a50c92cfa26806c45e0776d356d11300f2400d15438600ded0a6a3d035e04-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:41:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:51.281309584Z" level=info msg="runSandbox: removing pod sandbox from storage: bd1a50c92cfa26806c45e0776d356d11300f2400d15438600ded0a6a3d035e04" id=f3ebf673-f800-45f3-afc0-5f9065e0555b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:41:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:51.282745738Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=f3ebf673-f800-45f3-afc0-5f9065e0555b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:41:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:51.282776544Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=f3ebf673-f800-45f3-afc0-5f9065e0555b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:41:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:41:51.282966 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(bd1a50c92cfa26806c45e0776d356d11300f2400d15438600ded0a6a3d035e04): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:41:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:41:51.283018 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(bd1a50c92cfa26806c45e0776d356d11300f2400d15438600ded0a6a3d035e04): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:41:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:41:51.283039 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(bd1a50c92cfa26806c45e0776d356d11300f2400d15438600ded0a6a3d035e04): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:41:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:41:51.283102 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(bd1a50c92cfa26806c45e0776d356d11300f2400d15438600ded0a6a3d035e04): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 18:41:53 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:41:53.216864 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:41:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:53.217287511Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=f4f0857e-71c4-418c-bf96-dd5fa8cad7bb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:41:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:53.217353745Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:41:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:53.222862460Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:89cceb8e285b55f5c35647ab36db9a3e1976a7023b7893619d8a4e972f3cb487 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/c0c083ea-d3d8-4d97-abee-dc965371b1a8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:41:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:53.222903153Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:41:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:41:54.216973 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:41:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:54.217448396Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=0e20b408-3c03-4b16-a852-1d0159350b38 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:41:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:54.217529023Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:41:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:54.222906542Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:dd237735817ee1be92bff8e2e16ceace8b5fa67262aa49887a9457821cc4cb0a UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/f4574d34-78b4-4226-8b5c-2fbd95de9da2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:41:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:54.222932375Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:41:56 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:41:56.216842 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:41:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:56.217193849Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=ce32a2cb-9bed-40bf-bbca-59176d82a778 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:41:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:56.217288043Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:41:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:56.222812279Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:7b10f315a97f3ba00e45be0f835b2ed25d2b0639a07180ca3b2d343e009e384e UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/8a6d7079-7ea1-42d0-a070-186ceed4579a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:41:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:41:56.222848493Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:41:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:41:56.292382 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:41:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:41:56.292654 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not 
created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:41:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:41:56.292894 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:41:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:41:56.292921 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:42:03 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:42:03.216702 2199 scope.go:115] "RemoveContainer" containerID="5977cd64f014617e9eba0befb4f52e9466055d78e4fd24c64d02b28776b270fe" Feb 23 18:42:03 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:42:03.217083 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" 
podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:42:06 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:42:06.217063 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:42:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:42:06.217435397Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=5669d9fc-5c78-45dc-bc0e-c992574d9b6f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:42:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:42:06.217503128Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:42:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:42:06.223304121Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:e89dc168c4688772da3d37e6f4e14cc8b6a2c03cde6d6d0f793c154437255114 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/d3f60816-6353-4c24-8898-891d708432ee Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:42:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:42:06.223331333Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:42:15 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:42:15.216793 2199 scope.go:115] "RemoveContainer" containerID="5977cd64f014617e9eba0befb4f52e9466055d78e4fd24c64d02b28776b270fe" Feb 23 18:42:15 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:42:15.217194 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" 
pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:42:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:42:25.239261881Z" level=info msg="NetworkStart: stopping network for sandbox 3a2f63d422843c71945eaab9abc67199c2fbf6d02244357502e4be8e90681994" id=7672b06d-40ff-4d4f-b856-12d427dee68a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:42:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:42:25.239379634Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:3a2f63d422843c71945eaab9abc67199c2fbf6d02244357502e4be8e90681994 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/b07ddec5-fcb8-4f0f-9894-3d59cdd2a131 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:42:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:42:25.239406809Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:42:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:42:25.239416326Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:42:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:42:25.239423064Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:42:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:42:26.292019 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:42:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:42:26.292285 
2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:42:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:42:26.292491 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:42:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:42:26.292538 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:42:30 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:42:30.217378 2199 scope.go:115] "RemoveContainer" containerID="5977cd64f014617e9eba0befb4f52e9466055d78e4fd24c64d02b28776b270fe" Feb 23 18:42:30 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:42:30.217951 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver 
pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:42:37 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:42:37.217329 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:42:37 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:42:37.217920 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:42:37 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:42:37.218186 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:42:37 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:42:37.218216 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: 
checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:42:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:42:38.234413794Z" level=info msg="NetworkStart: stopping network for sandbox 89cceb8e285b55f5c35647ab36db9a3e1976a7023b7893619d8a4e972f3cb487" id=f4f0857e-71c4-418c-bf96-dd5fa8cad7bb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:42:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:42:38.234550930Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:89cceb8e285b55f5c35647ab36db9a3e1976a7023b7893619d8a4e972f3cb487 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/c0c083ea-d3d8-4d97-abee-dc965371b1a8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:42:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:42:38.234591787Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:42:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:42:38.234602354Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:42:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:42:38.234612363Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:42:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:42:39.236425309Z" level=info msg="NetworkStart: stopping network for sandbox dd237735817ee1be92bff8e2e16ceace8b5fa67262aa49887a9457821cc4cb0a" id=0e20b408-3c03-4b16-a852-1d0159350b38 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:42:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 
18:42:39.236568905Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:dd237735817ee1be92bff8e2e16ceace8b5fa67262aa49887a9457821cc4cb0a UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/f4574d34-78b4-4226-8b5c-2fbd95de9da2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:42:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:42:39.236609595Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:42:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:42:39.236620617Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:42:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:42:39.236630834Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:42:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:42:41.234282161Z" level=info msg="NetworkStart: stopping network for sandbox 7b10f315a97f3ba00e45be0f835b2ed25d2b0639a07180ca3b2d343e009e384e" id=ce32a2cb-9bed-40bf-bbca-59176d82a778 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:42:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:42:41.234421705Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:7b10f315a97f3ba00e45be0f835b2ed25d2b0639a07180ca3b2d343e009e384e UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/8a6d7079-7ea1-42d0-a070-186ceed4579a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:42:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:42:41.234460593Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:42:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:42:41.234471940Z" 
level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:42:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:42:41.234485490Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:42:43 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:42:43.217074 2199 scope.go:115] "RemoveContainer" containerID="5977cd64f014617e9eba0befb4f52e9466055d78e4fd24c64d02b28776b270fe" Feb 23 18:42:43 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:42:43.217643 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:42:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:42:51.234831989Z" level=info msg="NetworkStart: stopping network for sandbox e89dc168c4688772da3d37e6f4e14cc8b6a2c03cde6d6d0f793c154437255114" id=5669d9fc-5c78-45dc-bc0e-c992574d9b6f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:42:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:42:51.234949747Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:e89dc168c4688772da3d37e6f4e14cc8b6a2c03cde6d6d0f793c154437255114 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/d3f60816-6353-4c24-8898-891d708432ee Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:42:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:42:51.234978408Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:42:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 
18:42:51.234985205Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:42:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:42:51.234991798Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:42:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:42:56.292493 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:42:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:42:56.292688 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:42:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:42:56.292871 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:42:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:42:56.292894 2199 
prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:42:57 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:42:57.217307 2199 scope.go:115] "RemoveContainer" containerID="5977cd64f014617e9eba0befb4f52e9466055d78e4fd24c64d02b28776b270fe" Feb 23 18:42:57 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:42:57.217730 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:43:08 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:43:08.217062 2199 scope.go:115] "RemoveContainer" containerID="5977cd64f014617e9eba0befb4f52e9466055d78e4fd24c64d02b28776b270fe" Feb 23 18:43:08 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:43:08.217528 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:43:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:10.248162822Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox 
k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(3a2f63d422843c71945eaab9abc67199c2fbf6d02244357502e4be8e90681994): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=7672b06d-40ff-4d4f-b856-12d427dee68a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:43:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:10.248210329Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 3a2f63d422843c71945eaab9abc67199c2fbf6d02244357502e4be8e90681994" id=7672b06d-40ff-4d4f-b856-12d427dee68a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:43:10 ip-10-0-136-68 systemd[1]: run-utsns-b07ddec5\x2dfcb8\x2d4f0f\x2d9894\x2d3d59cdd2a131.mount: Deactivated successfully. Feb 23 18:43:10 ip-10-0-136-68 systemd[1]: run-ipcns-b07ddec5\x2dfcb8\x2d4f0f\x2d9894\x2d3d59cdd2a131.mount: Deactivated successfully. Feb 23 18:43:10 ip-10-0-136-68 systemd[1]: run-netns-b07ddec5\x2dfcb8\x2d4f0f\x2d9894\x2d3d59cdd2a131.mount: Deactivated successfully. 
Feb 23 18:43:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:10.275340455Z" level=info msg="runSandbox: deleting pod ID 3a2f63d422843c71945eaab9abc67199c2fbf6d02244357502e4be8e90681994 from idIndex" id=7672b06d-40ff-4d4f-b856-12d427dee68a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:43:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:10.275375313Z" level=info msg="runSandbox: removing pod sandbox 3a2f63d422843c71945eaab9abc67199c2fbf6d02244357502e4be8e90681994" id=7672b06d-40ff-4d4f-b856-12d427dee68a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:43:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:10.275410872Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 3a2f63d422843c71945eaab9abc67199c2fbf6d02244357502e4be8e90681994" id=7672b06d-40ff-4d4f-b856-12d427dee68a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:43:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:10.275434349Z" level=info msg="runSandbox: unmounting shmPath for sandbox 3a2f63d422843c71945eaab9abc67199c2fbf6d02244357502e4be8e90681994" id=7672b06d-40ff-4d4f-b856-12d427dee68a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:43:10 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-3a2f63d422843c71945eaab9abc67199c2fbf6d02244357502e4be8e90681994-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:43:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:10.281307542Z" level=info msg="runSandbox: removing pod sandbox from storage: 3a2f63d422843c71945eaab9abc67199c2fbf6d02244357502e4be8e90681994" id=7672b06d-40ff-4d4f-b856-12d427dee68a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:43:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:10.282858119Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=7672b06d-40ff-4d4f-b856-12d427dee68a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:43:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:10.282886203Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=7672b06d-40ff-4d4f-b856-12d427dee68a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:43:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:43:10.283067 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(3a2f63d422843c71945eaab9abc67199c2fbf6d02244357502e4be8e90681994): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:43:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:43:10.283117 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(3a2f63d422843c71945eaab9abc67199c2fbf6d02244357502e4be8e90681994): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:43:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:43:10.283140 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(3a2f63d422843c71945eaab9abc67199c2fbf6d02244357502e4be8e90681994): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:43:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:43:10.283188 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(3a2f63d422843c71945eaab9abc67199c2fbf6d02244357502e4be8e90681994): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 18:43:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:43:19.216944 2199 scope.go:115] "RemoveContainer" containerID="5977cd64f014617e9eba0befb4f52e9466055d78e4fd24c64d02b28776b270fe" Feb 23 18:43:19 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:43:19.217511 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:43:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:23.244375595Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(89cceb8e285b55f5c35647ab36db9a3e1976a7023b7893619d8a4e972f3cb487): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f4f0857e-71c4-418c-bf96-dd5fa8cad7bb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:43:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:23.244428900Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 89cceb8e285b55f5c35647ab36db9a3e1976a7023b7893619d8a4e972f3cb487" id=f4f0857e-71c4-418c-bf96-dd5fa8cad7bb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:43:23 ip-10-0-136-68 systemd[1]: 
run-utsns-c0c083ea\x2dd3d8\x2d4d97\x2dabee\x2ddc965371b1a8.mount: Deactivated successfully. Feb 23 18:43:23 ip-10-0-136-68 systemd[1]: run-ipcns-c0c083ea\x2dd3d8\x2d4d97\x2dabee\x2ddc965371b1a8.mount: Deactivated successfully. Feb 23 18:43:23 ip-10-0-136-68 systemd[1]: run-netns-c0c083ea\x2dd3d8\x2d4d97\x2dabee\x2ddc965371b1a8.mount: Deactivated successfully. Feb 23 18:43:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:23.283351889Z" level=info msg="runSandbox: deleting pod ID 89cceb8e285b55f5c35647ab36db9a3e1976a7023b7893619d8a4e972f3cb487 from idIndex" id=f4f0857e-71c4-418c-bf96-dd5fa8cad7bb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:43:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:23.283425001Z" level=info msg="runSandbox: removing pod sandbox 89cceb8e285b55f5c35647ab36db9a3e1976a7023b7893619d8a4e972f3cb487" id=f4f0857e-71c4-418c-bf96-dd5fa8cad7bb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:43:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:23.283463181Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 89cceb8e285b55f5c35647ab36db9a3e1976a7023b7893619d8a4e972f3cb487" id=f4f0857e-71c4-418c-bf96-dd5fa8cad7bb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:43:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:23.283481852Z" level=info msg="runSandbox: unmounting shmPath for sandbox 89cceb8e285b55f5c35647ab36db9a3e1976a7023b7893619d8a4e972f3cb487" id=f4f0857e-71c4-418c-bf96-dd5fa8cad7bb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:43:23 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-89cceb8e285b55f5c35647ab36db9a3e1976a7023b7893619d8a4e972f3cb487-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:43:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:23.289322333Z" level=info msg="runSandbox: removing pod sandbox from storage: 89cceb8e285b55f5c35647ab36db9a3e1976a7023b7893619d8a4e972f3cb487" id=f4f0857e-71c4-418c-bf96-dd5fa8cad7bb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:43:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:23.290870676Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=f4f0857e-71c4-418c-bf96-dd5fa8cad7bb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:43:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:23.290902613Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=f4f0857e-71c4-418c-bf96-dd5fa8cad7bb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:43:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:43:23.291145 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(89cceb8e285b55f5c35647ab36db9a3e1976a7023b7893619d8a4e972f3cb487): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:43:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:43:23.291219 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(89cceb8e285b55f5c35647ab36db9a3e1976a7023b7893619d8a4e972f3cb487): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:43:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:43:23.291283 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(89cceb8e285b55f5c35647ab36db9a3e1976a7023b7893619d8a4e972f3cb487): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:43:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:43:23.291369 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(89cceb8e285b55f5c35647ab36db9a3e1976a7023b7893619d8a4e972f3cb487): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 18:43:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:43:24.217465 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 18:43:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:24.217880113Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=58cdb0da-d7d7-4b01-9756-f5e72f591bb8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:43:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:24.217941478Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:43:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:24.223627086Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:b5ed5b60c843ba3e348fbd413b51113cd4d849ac0d9fefaa9d549ab2bfb6670c UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/2c9976d1-e87c-42a3-85e3-67dde86f8b81 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:43:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:24.223663263Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:43:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:24.247005087Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(dd237735817ee1be92bff8e2e16ceace8b5fa67262aa49887a9457821cc4cb0a): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=0e20b408-3c03-4b16-a852-1d0159350b38 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:43:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:24.247060596Z" level=info 
msg="runSandbox: cleaning up namespaces after failing to run sandbox dd237735817ee1be92bff8e2e16ceace8b5fa67262aa49887a9457821cc4cb0a" id=0e20b408-3c03-4b16-a852-1d0159350b38 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:43:24 ip-10-0-136-68 systemd[1]: run-utsns-f4574d34\x2d78b4\x2d4226\x2d8b5c\x2d2fbd95de9da2.mount: Deactivated successfully. Feb 23 18:43:24 ip-10-0-136-68 systemd[1]: run-ipcns-f4574d34\x2d78b4\x2d4226\x2d8b5c\x2d2fbd95de9da2.mount: Deactivated successfully. Feb 23 18:43:24 ip-10-0-136-68 systemd[1]: run-netns-f4574d34\x2d78b4\x2d4226\x2d8b5c\x2d2fbd95de9da2.mount: Deactivated successfully. Feb 23 18:43:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:24.283344064Z" level=info msg="runSandbox: deleting pod ID dd237735817ee1be92bff8e2e16ceace8b5fa67262aa49887a9457821cc4cb0a from idIndex" id=0e20b408-3c03-4b16-a852-1d0159350b38 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:43:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:24.283387622Z" level=info msg="runSandbox: removing pod sandbox dd237735817ee1be92bff8e2e16ceace8b5fa67262aa49887a9457821cc4cb0a" id=0e20b408-3c03-4b16-a852-1d0159350b38 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:43:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:24.283434284Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox dd237735817ee1be92bff8e2e16ceace8b5fa67262aa49887a9457821cc4cb0a" id=0e20b408-3c03-4b16-a852-1d0159350b38 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:43:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:24.283451313Z" level=info msg="runSandbox: unmounting shmPath for sandbox dd237735817ee1be92bff8e2e16ceace8b5fa67262aa49887a9457821cc4cb0a" id=0e20b408-3c03-4b16-a852-1d0159350b38 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:43:24 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-dd237735817ee1be92bff8e2e16ceace8b5fa67262aa49887a9457821cc4cb0a-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:43:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:24.292309501Z" level=info msg="runSandbox: removing pod sandbox from storage: dd237735817ee1be92bff8e2e16ceace8b5fa67262aa49887a9457821cc4cb0a" id=0e20b408-3c03-4b16-a852-1d0159350b38 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:43:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:24.293872048Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=0e20b408-3c03-4b16-a852-1d0159350b38 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:43:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:24.293901772Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=0e20b408-3c03-4b16-a852-1d0159350b38 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:43:24 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:43:24.294115 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(dd237735817ee1be92bff8e2e16ceace8b5fa67262aa49887a9457821cc4cb0a): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:43:24 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:43:24.294181 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(dd237735817ee1be92bff8e2e16ceace8b5fa67262aa49887a9457821cc4cb0a): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:43:24 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:43:24.294205 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(dd237735817ee1be92bff8e2e16ceace8b5fa67262aa49887a9457821cc4cb0a): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:43:24 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:43:24.294286 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(dd237735817ee1be92bff8e2e16ceace8b5fa67262aa49887a9457821cc4cb0a): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 18:43:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:26.244103173Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(7b10f315a97f3ba00e45be0f835b2ed25d2b0639a07180ca3b2d343e009e384e): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=ce32a2cb-9bed-40bf-bbca-59176d82a778 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:43:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:26.244148510Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 7b10f315a97f3ba00e45be0f835b2ed25d2b0639a07180ca3b2d343e009e384e" id=ce32a2cb-9bed-40bf-bbca-59176d82a778 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:43:26 ip-10-0-136-68 systemd[1]: run-utsns-8a6d7079\x2d7ea1\x2d42d0\x2da070\x2d186ceed4579a.mount: Deactivated successfully. Feb 23 18:43:26 ip-10-0-136-68 systemd[1]: run-ipcns-8a6d7079\x2d7ea1\x2d42d0\x2da070\x2d186ceed4579a.mount: Deactivated successfully. Feb 23 18:43:26 ip-10-0-136-68 systemd[1]: run-netns-8a6d7079\x2d7ea1\x2d42d0\x2da070\x2d186ceed4579a.mount: Deactivated successfully. 
Feb 23 18:43:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:26.262341528Z" level=info msg="runSandbox: deleting pod ID 7b10f315a97f3ba00e45be0f835b2ed25d2b0639a07180ca3b2d343e009e384e from idIndex" id=ce32a2cb-9bed-40bf-bbca-59176d82a778 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:43:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:26.262379837Z" level=info msg="runSandbox: removing pod sandbox 7b10f315a97f3ba00e45be0f835b2ed25d2b0639a07180ca3b2d343e009e384e" id=ce32a2cb-9bed-40bf-bbca-59176d82a778 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:43:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:26.262413827Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 7b10f315a97f3ba00e45be0f835b2ed25d2b0639a07180ca3b2d343e009e384e" id=ce32a2cb-9bed-40bf-bbca-59176d82a778 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:43:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:26.262437820Z" level=info msg="runSandbox: unmounting shmPath for sandbox 7b10f315a97f3ba00e45be0f835b2ed25d2b0639a07180ca3b2d343e009e384e" id=ce32a2cb-9bed-40bf-bbca-59176d82a778 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:43:26 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-7b10f315a97f3ba00e45be0f835b2ed25d2b0639a07180ca3b2d343e009e384e-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:43:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:26.267308879Z" level=info msg="runSandbox: removing pod sandbox from storage: 7b10f315a97f3ba00e45be0f835b2ed25d2b0639a07180ca3b2d343e009e384e" id=ce32a2cb-9bed-40bf-bbca-59176d82a778 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:43:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:26.268771994Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=ce32a2cb-9bed-40bf-bbca-59176d82a778 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:43:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:26.268801249Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=ce32a2cb-9bed-40bf-bbca-59176d82a778 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:43:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:43:26.269005 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(7b10f315a97f3ba00e45be0f835b2ed25d2b0639a07180ca3b2d343e009e384e): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:43:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:43:26.269078 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(7b10f315a97f3ba00e45be0f835b2ed25d2b0639a07180ca3b2d343e009e384e): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:43:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:43:26.269120 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(7b10f315a97f3ba00e45be0f835b2ed25d2b0639a07180ca3b2d343e009e384e): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:43:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:43:26.269221 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(7b10f315a97f3ba00e45be0f835b2ed25d2b0639a07180ca3b2d343e009e384e): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 18:43:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:43:26.292542 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:43:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:43:26.292768 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:43:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:43:26.292987 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:43:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:43:26.293026 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:43:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:43:34.217467 2199 scope.go:115] "RemoveContainer" containerID="5977cd64f014617e9eba0befb4f52e9466055d78e4fd24c64d02b28776b270fe" Feb 23 18:43:34 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:43:34.217884 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:43:35 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:43:35.216565 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:43:35 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:43:35.216605 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:43:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:35.216955806Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=5ba8d59d-d86c-4e77-8d06-ceaad0c3d9e9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:43:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:35.217025682Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:43:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:35.216959072Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=f70d907e-d1ac-4990-87dd-0229f697a531 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:43:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:35.217156203Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:43:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:35.224404886Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:84d1cd757ff0c3f1d0c5b208c8a2e7c48032e23ed7623f379ff90e709c2d1e71 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/13df7a68-2597-47b0-8fad-d266c9a7aa04 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:43:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:35.224438482Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:43:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:35.224521250Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:0115f29e21ab308b72a15c75943472c835aaad63b7b226803e758a6b6dc46fc5 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/77b55f5f-c5ee-4622-9c6d-30f964e90bce Networks:[] 
RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:43:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:35.224540743Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:43:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:36.243780081Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(e89dc168c4688772da3d37e6f4e14cc8b6a2c03cde6d6d0f793c154437255114): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=5669d9fc-5c78-45dc-bc0e-c992574d9b6f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:43:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:36.243823350Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e89dc168c4688772da3d37e6f4e14cc8b6a2c03cde6d6d0f793c154437255114" id=5669d9fc-5c78-45dc-bc0e-c992574d9b6f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:43:36 ip-10-0-136-68 systemd[1]: run-utsns-d3f60816\x2d6353\x2d4c24\x2d8898\x2d891d708432ee.mount: Deactivated successfully. Feb 23 18:43:36 ip-10-0-136-68 systemd[1]: run-ipcns-d3f60816\x2d6353\x2d4c24\x2d8898\x2d891d708432ee.mount: Deactivated successfully. Feb 23 18:43:36 ip-10-0-136-68 systemd[1]: run-netns-d3f60816\x2d6353\x2d4c24\x2d8898\x2d891d708432ee.mount: Deactivated successfully. 
Feb 23 18:43:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:36.267321963Z" level=info msg="runSandbox: deleting pod ID e89dc168c4688772da3d37e6f4e14cc8b6a2c03cde6d6d0f793c154437255114 from idIndex" id=5669d9fc-5c78-45dc-bc0e-c992574d9b6f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:43:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:36.267353104Z" level=info msg="runSandbox: removing pod sandbox e89dc168c4688772da3d37e6f4e14cc8b6a2c03cde6d6d0f793c154437255114" id=5669d9fc-5c78-45dc-bc0e-c992574d9b6f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:43:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:36.267378025Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e89dc168c4688772da3d37e6f4e14cc8b6a2c03cde6d6d0f793c154437255114" id=5669d9fc-5c78-45dc-bc0e-c992574d9b6f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:43:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:36.267389050Z" level=info msg="runSandbox: unmounting shmPath for sandbox e89dc168c4688772da3d37e6f4e14cc8b6a2c03cde6d6d0f793c154437255114" id=5669d9fc-5c78-45dc-bc0e-c992574d9b6f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:43:36 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-e89dc168c4688772da3d37e6f4e14cc8b6a2c03cde6d6d0f793c154437255114-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:43:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:36.273337447Z" level=info msg="runSandbox: removing pod sandbox from storage: e89dc168c4688772da3d37e6f4e14cc8b6a2c03cde6d6d0f793c154437255114" id=5669d9fc-5c78-45dc-bc0e-c992574d9b6f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:43:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:36.274944054Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=5669d9fc-5c78-45dc-bc0e-c992574d9b6f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:43:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:36.274977921Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=5669d9fc-5c78-45dc-bc0e-c992574d9b6f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:43:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:43:36.275175 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(e89dc168c4688772da3d37e6f4e14cc8b6a2c03cde6d6d0f793c154437255114): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:43:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:43:36.275235 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(e89dc168c4688772da3d37e6f4e14cc8b6a2c03cde6d6d0f793c154437255114): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:43:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:43:36.275295 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(e89dc168c4688772da3d37e6f4e14cc8b6a2c03cde6d6d0f793c154437255114): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:43:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:43:36.275385 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(e89dc168c4688772da3d37e6f4e14cc8b6a2c03cde6d6d0f793c154437255114): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 18:43:39 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:43:39.217366 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:43:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:39.217741010Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=86c76abb-027f-4c93-84b0-2cd607036aa5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:43:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:39.217800624Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:43:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:39.223075184Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:47a9a982fe72310c88f58fae2b54da2084041bcdf02aa5284f97857570dec33e UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/2622e4a1-0748-40b0-8b36-3e981197ee93 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:43:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:39.223098983Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:43:49 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:43:49.217034 2199 scope.go:115] "RemoveContainer" containerID="5977cd64f014617e9eba0befb4f52e9466055d78e4fd24c64d02b28776b270fe" Feb 23 18:43:49 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:43:49.221602 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:43:50 ip-10-0-136-68 kubenswrapper[2199]: I0223 
18:43:50.217155 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:43:50 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:43:50.217588 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:43:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:50.218049534Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=3c9e6b34-fe23-4d6c-a00a-ea623c653f6a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:43:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:50.218118476Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:43:50 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:43:50.218534 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:43:50 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:43:50.218899 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open 
/proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:43:50 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:43:50.218941 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:43:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:50.224004892Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:d22eb7b0605175ed4aa35b5029f34049a108a5fde603329986e5e12a493fafba UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/1cf17b2e-1ace-4796-971c-53704cca6464 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:43:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:43:50.224038176Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:43:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:43:56.292473 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:43:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 
18:43:56.292757 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:43:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:43:56.292987 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:43:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:43:56.293016 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:44:00 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:44:00.217559 2199 scope.go:115] "RemoveContainer" containerID="5977cd64f014617e9eba0befb4f52e9466055d78e4fd24c64d02b28776b270fe" Feb 23 18:44:00 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:44:00.217952 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver 
pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:44:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:44:09.234896072Z" level=info msg="NetworkStart: stopping network for sandbox b5ed5b60c843ba3e348fbd413b51113cd4d849ac0d9fefaa9d549ab2bfb6670c" id=58cdb0da-d7d7-4b01-9756-f5e72f591bb8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:44:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:44:09.235024766Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:b5ed5b60c843ba3e348fbd413b51113cd4d849ac0d9fefaa9d549ab2bfb6670c UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/2c9976d1-e87c-42a3-85e3-67dde86f8b81 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:44:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:44:09.235058638Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:44:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:44:09.235069871Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:44:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:44:09.235081053Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:44:15 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:44:15.216865 2199 scope.go:115] "RemoveContainer" containerID="5977cd64f014617e9eba0befb4f52e9466055d78e4fd24c64d02b28776b270fe" Feb 23 18:44:15 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:44:15.217278 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver 
pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:44:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:44:20.238598378Z" level=info msg="NetworkStart: stopping network for sandbox 0115f29e21ab308b72a15c75943472c835aaad63b7b226803e758a6b6dc46fc5" id=5ba8d59d-d86c-4e77-8d06-ceaad0c3d9e9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:44:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:44:20.238703258Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:0115f29e21ab308b72a15c75943472c835aaad63b7b226803e758a6b6dc46fc5 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/77b55f5f-c5ee-4622-9c6d-30f964e90bce Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:44:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:44:20.238732535Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:44:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:44:20.238739637Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:44:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:44:20.238747998Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:44:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:44:20.238849598Z" level=info msg="NetworkStart: stopping network for sandbox 84d1cd757ff0c3f1d0c5b208c8a2e7c48032e23ed7623f379ff90e709c2d1e71" id=f70d907e-d1ac-4990-87dd-0229f697a531 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:44:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:44:20.238914150Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary 
ID:84d1cd757ff0c3f1d0c5b208c8a2e7c48032e23ed7623f379ff90e709c2d1e71 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/13df7a68-2597-47b0-8fad-d266c9a7aa04 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:44:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:44:20.238941720Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:44:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:44:20.238952022Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:44:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:44:20.238961269Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:44:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:44:24.234697900Z" level=info msg="NetworkStart: stopping network for sandbox 47a9a982fe72310c88f58fae2b54da2084041bcdf02aa5284f97857570dec33e" id=86c76abb-027f-4c93-84b0-2cd607036aa5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:44:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:44:24.234801644Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:47a9a982fe72310c88f58fae2b54da2084041bcdf02aa5284f97857570dec33e UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/2622e4a1-0748-40b0-8b36-3e981197ee93 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:44:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:44:24.234833617Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:44:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:44:24.234841147Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:44:24 ip-10-0-136-68 crio[2158]: 
time="2023-02-23 18:44:24.234847700Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:44:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:44:26.292603 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:44:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:44:26.292832 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:44:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:44:26.293052 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:44:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:44:26.293094 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:44:28 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:44:28.216564 2199 scope.go:115] "RemoveContainer" containerID="5977cd64f014617e9eba0befb4f52e9466055d78e4fd24c64d02b28776b270fe" Feb 23 18:44:28 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:44:28.217148 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:44:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:44:35.237865791Z" level=info msg="NetworkStart: stopping network for sandbox d22eb7b0605175ed4aa35b5029f34049a108a5fde603329986e5e12a493fafba" id=3c9e6b34-fe23-4d6c-a00a-ea623c653f6a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:44:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:44:35.237995917Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:d22eb7b0605175ed4aa35b5029f34049a108a5fde603329986e5e12a493fafba UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/1cf17b2e-1ace-4796-971c-53704cca6464 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:44:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:44:35.238023958Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:44:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 
18:44:35.238031098Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:44:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:44:35.238039504Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:44:40 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:44:40.217526 2199 scope.go:115] "RemoveContainer" containerID="5977cd64f014617e9eba0befb4f52e9466055d78e4fd24c64d02b28776b270fe" Feb 23 18:44:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:44:40.218383123Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=c6488c1b-f227-41a5-877b-1e5b6d15212e name=/runtime.v1.ImageService/ImageStatus Feb 23 18:44:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:44:40.218660467Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=c6488c1b-f227-41a5-877b-1e5b6d15212e name=/runtime.v1.ImageService/ImageStatus Feb 23 18:44:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:44:40.219320006Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=b933bf37-b18b-4670-8fff-16fab9ee1df6 name=/runtime.v1.ImageService/ImageStatus Feb 23 18:44:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:44:40.219511971Z" level=info msg="Image status: 
&ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=b933bf37-b18b-4670-8fff-16fab9ee1df6 name=/runtime.v1.ImageService/ImageStatus Feb 23 18:44:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:44:40.220155997Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=af34a517-9849-4d0b-b295-b90c8606d386 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 18:44:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:44:40.220282242Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:44:40 ip-10-0-136-68 systemd[1]: Started crio-conmon-81c200251645cf6845a55e86f3b5ace4fc477483a5462066c323e09c598cf6d9.scope. Feb 23 18:44:40 ip-10-0-136-68 systemd[1]: Started libcontainer container 81c200251645cf6845a55e86f3b5ace4fc477483a5462066c323e09c598cf6d9. Feb 23 18:44:40 ip-10-0-136-68 conmon[10865]: conmon 81c200251645cf6845a5 : Failed to write to cgroup.event_control Operation not supported Feb 23 18:44:40 ip-10-0-136-68 systemd[1]: crio-conmon-81c200251645cf6845a55e86f3b5ace4fc477483a5462066c323e09c598cf6d9.scope: Deactivated successfully. 
Feb 23 18:44:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:44:40.363993200Z" level=info msg="Created container 81c200251645cf6845a55e86f3b5ace4fc477483a5462066c323e09c598cf6d9: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=af34a517-9849-4d0b-b295-b90c8606d386 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 18:44:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:44:40.364696333Z" level=info msg="Starting container: 81c200251645cf6845a55e86f3b5ace4fc477483a5462066c323e09c598cf6d9" id=2eb20ca2-7784-4fc0-925b-110bb076c447 name=/runtime.v1.RuntimeService/StartContainer Feb 23 18:44:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:44:40.372197940Z" level=info msg="Started container" PID=10877 containerID=81c200251645cf6845a55e86f3b5ace4fc477483a5462066c323e09c598cf6d9 description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver id=2eb20ca2-7784-4fc0-925b-110bb076c447 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4ac357af36e7dd4792f93249425e93fd84aed45840e9fbb3d0ae6c0af964798 Feb 23 18:44:40 ip-10-0-136-68 systemd[1]: crio-81c200251645cf6845a55e86f3b5ace4fc477483a5462066c323e09c598cf6d9.scope: Deactivated successfully. 
Feb 23 18:44:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:44:44.635012962Z" level=warning msg="Failed to find container exit file for 5977cd64f014617e9eba0befb4f52e9466055d78e4fd24c64d02b28776b270fe: timed out waiting for the condition" id=a298a160-c082-497d-a8a9-0f71b636512f name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:44:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:44:44.635952 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerStarted Data:81c200251645cf6845a55e86f3b5ace4fc477483a5462066c323e09c598cf6d9} Feb 23 18:44:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:44:54.244425174Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(b5ed5b60c843ba3e348fbd413b51113cd4d849ac0d9fefaa9d549ab2bfb6670c): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=58cdb0da-d7d7-4b01-9756-f5e72f591bb8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:44:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:44:54.244476455Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox b5ed5b60c843ba3e348fbd413b51113cd4d849ac0d9fefaa9d549ab2bfb6670c" id=58cdb0da-d7d7-4b01-9756-f5e72f591bb8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:44:54 ip-10-0-136-68 systemd[1]: run-utsns-2c9976d1\x2de87c\x2d42a3\x2d85e3\x2d67dde86f8b81.mount: Deactivated successfully. Feb 23 18:44:54 ip-10-0-136-68 systemd[1]: run-ipcns-2c9976d1\x2de87c\x2d42a3\x2d85e3\x2d67dde86f8b81.mount: Deactivated successfully. 
Feb 23 18:44:54 ip-10-0-136-68 systemd[1]: run-netns-2c9976d1\x2de87c\x2d42a3\x2d85e3\x2d67dde86f8b81.mount: Deactivated successfully. Feb 23 18:44:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:44:54.260329498Z" level=info msg="runSandbox: deleting pod ID b5ed5b60c843ba3e348fbd413b51113cd4d849ac0d9fefaa9d549ab2bfb6670c from idIndex" id=58cdb0da-d7d7-4b01-9756-f5e72f591bb8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:44:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:44:54.260367220Z" level=info msg="runSandbox: removing pod sandbox b5ed5b60c843ba3e348fbd413b51113cd4d849ac0d9fefaa9d549ab2bfb6670c" id=58cdb0da-d7d7-4b01-9756-f5e72f591bb8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:44:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:44:54.260398241Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox b5ed5b60c843ba3e348fbd413b51113cd4d849ac0d9fefaa9d549ab2bfb6670c" id=58cdb0da-d7d7-4b01-9756-f5e72f591bb8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:44:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:44:54.260413031Z" level=info msg="runSandbox: unmounting shmPath for sandbox b5ed5b60c843ba3e348fbd413b51113cd4d849ac0d9fefaa9d549ab2bfb6670c" id=58cdb0da-d7d7-4b01-9756-f5e72f591bb8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:44:54 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-b5ed5b60c843ba3e348fbd413b51113cd4d849ac0d9fefaa9d549ab2bfb6670c-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:44:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:44:54.268314388Z" level=info msg="runSandbox: removing pod sandbox from storage: b5ed5b60c843ba3e348fbd413b51113cd4d849ac0d9fefaa9d549ab2bfb6670c" id=58cdb0da-d7d7-4b01-9756-f5e72f591bb8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:44:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:44:54.269983329Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=58cdb0da-d7d7-4b01-9756-f5e72f591bb8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:44:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:44:54.270017113Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=58cdb0da-d7d7-4b01-9756-f5e72f591bb8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:44:54 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:44:54.270235 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(b5ed5b60c843ba3e348fbd413b51113cd4d849ac0d9fefaa9d549ab2bfb6670c): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:44:54 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:44:54.270316 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(b5ed5b60c843ba3e348fbd413b51113cd4d849ac0d9fefaa9d549ab2bfb6670c): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:44:54 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:44:54.270340 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(b5ed5b60c843ba3e348fbd413b51113cd4d849ac0d9fefaa9d549ab2bfb6670c): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:44:54 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:44:54.270398 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(b5ed5b60c843ba3e348fbd413b51113cd4d849ac0d9fefaa9d549ab2bfb6670c): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 18:44:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:44:54.873079 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:44:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:44:54.873135 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 18:44:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:44:56.292452 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:44:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:44:56.292787 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:44:56 ip-10-0-136-68 
kubenswrapper[2199]: E0223 18:44:56.293025 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:44:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:44:56.293064 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:45:00 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:45:00.217320 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:45:00 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:45:00.217876 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" 
containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:45:00 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:45:00.218108 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:45:00 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:45:00.218147 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:45:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:45:04.872476 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:45:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:45:04.872539 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 18:45:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 
18:45:05.250709367Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(84d1cd757ff0c3f1d0c5b208c8a2e7c48032e23ed7623f379ff90e709c2d1e71): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f70d907e-d1ac-4990-87dd-0229f697a531 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:45:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:05.250765276Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 84d1cd757ff0c3f1d0c5b208c8a2e7c48032e23ed7623f379ff90e709c2d1e71" id=f70d907e-d1ac-4990-87dd-0229f697a531 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:45:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:05.250752135Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(0115f29e21ab308b72a15c75943472c835aaad63b7b226803e758a6b6dc46fc5): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=5ba8d59d-d86c-4e77-8d06-ceaad0c3d9e9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:45:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:05.250845183Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 
0115f29e21ab308b72a15c75943472c835aaad63b7b226803e758a6b6dc46fc5" id=5ba8d59d-d86c-4e77-8d06-ceaad0c3d9e9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:45:05 ip-10-0-136-68 systemd[1]: run-utsns-77b55f5f\x2dc5ee\x2d4622\x2d9c6d\x2d30f964e90bce.mount: Deactivated successfully. Feb 23 18:45:05 ip-10-0-136-68 systemd[1]: run-utsns-13df7a68\x2d2597\x2d47b0\x2d8fad\x2dd266c9a7aa04.mount: Deactivated successfully. Feb 23 18:45:05 ip-10-0-136-68 systemd[1]: run-ipcns-77b55f5f\x2dc5ee\x2d4622\x2d9c6d\x2d30f964e90bce.mount: Deactivated successfully. Feb 23 18:45:05 ip-10-0-136-68 systemd[1]: run-ipcns-13df7a68\x2d2597\x2d47b0\x2d8fad\x2dd266c9a7aa04.mount: Deactivated successfully. Feb 23 18:45:05 ip-10-0-136-68 systemd[1]: run-netns-13df7a68\x2d2597\x2d47b0\x2d8fad\x2dd266c9a7aa04.mount: Deactivated successfully. Feb 23 18:45:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:05.266339941Z" level=info msg="runSandbox: deleting pod ID 84d1cd757ff0c3f1d0c5b208c8a2e7c48032e23ed7623f379ff90e709c2d1e71 from idIndex" id=f70d907e-d1ac-4990-87dd-0229f697a531 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:45:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:05.266394984Z" level=info msg="runSandbox: removing pod sandbox 84d1cd757ff0c3f1d0c5b208c8a2e7c48032e23ed7623f379ff90e709c2d1e71" id=f70d907e-d1ac-4990-87dd-0229f697a531 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:45:05 ip-10-0-136-68 systemd[1]: run-netns-77b55f5f\x2dc5ee\x2d4622\x2d9c6d\x2d30f964e90bce.mount: Deactivated successfully. 
Feb 23 18:45:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:05.266438561Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 84d1cd757ff0c3f1d0c5b208c8a2e7c48032e23ed7623f379ff90e709c2d1e71" id=f70d907e-d1ac-4990-87dd-0229f697a531 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:45:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:05.266459448Z" level=info msg="runSandbox: unmounting shmPath for sandbox 84d1cd757ff0c3f1d0c5b208c8a2e7c48032e23ed7623f379ff90e709c2d1e71" id=f70d907e-d1ac-4990-87dd-0229f697a531 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:45:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:05.266350796Z" level=info msg="runSandbox: deleting pod ID 0115f29e21ab308b72a15c75943472c835aaad63b7b226803e758a6b6dc46fc5 from idIndex" id=5ba8d59d-d86c-4e77-8d06-ceaad0c3d9e9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:45:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:05.266555244Z" level=info msg="runSandbox: removing pod sandbox 0115f29e21ab308b72a15c75943472c835aaad63b7b226803e758a6b6dc46fc5" id=5ba8d59d-d86c-4e77-8d06-ceaad0c3d9e9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:45:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:05.266579032Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 0115f29e21ab308b72a15c75943472c835aaad63b7b226803e758a6b6dc46fc5" id=5ba8d59d-d86c-4e77-8d06-ceaad0c3d9e9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:45:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:05.266597815Z" level=info msg="runSandbox: unmounting shmPath for sandbox 0115f29e21ab308b72a15c75943472c835aaad63b7b226803e758a6b6dc46fc5" id=5ba8d59d-d86c-4e77-8d06-ceaad0c3d9e9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:45:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:05.270333882Z" level=info msg="runSandbox: removing pod sandbox from storage: 84d1cd757ff0c3f1d0c5b208c8a2e7c48032e23ed7623f379ff90e709c2d1e71" 
id=f70d907e-d1ac-4990-87dd-0229f697a531 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:45:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:05.270338264Z" level=info msg="runSandbox: removing pod sandbox from storage: 0115f29e21ab308b72a15c75943472c835aaad63b7b226803e758a6b6dc46fc5" id=5ba8d59d-d86c-4e77-8d06-ceaad0c3d9e9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:45:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:05.272005796Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=f70d907e-d1ac-4990-87dd-0229f697a531 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:45:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:05.272043323Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=f70d907e-d1ac-4990-87dd-0229f697a531 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:45:05 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:45:05.272399 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(84d1cd757ff0c3f1d0c5b208c8a2e7c48032e23ed7623f379ff90e709c2d1e71): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:45:05 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:45:05.272509 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(84d1cd757ff0c3f1d0c5b208c8a2e7c48032e23ed7623f379ff90e709c2d1e71): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:45:05 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:45:05.272545 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(84d1cd757ff0c3f1d0c5b208c8a2e7c48032e23ed7623f379ff90e709c2d1e71): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:45:05 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:45:05.272643 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(84d1cd757ff0c3f1d0c5b208c8a2e7c48032e23ed7623f379ff90e709c2d1e71): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 18:45:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:05.273488680Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=5ba8d59d-d86c-4e77-8d06-ceaad0c3d9e9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:45:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:05.273515768Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=5ba8d59d-d86c-4e77-8d06-ceaad0c3d9e9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:45:05 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:45:05.273690 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(0115f29e21ab308b72a15c75943472c835aaad63b7b226803e758a6b6dc46fc5): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:45:05 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:45:05.273744 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(0115f29e21ab308b72a15c75943472c835aaad63b7b226803e758a6b6dc46fc5): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:45:05 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:45:05.273776 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(0115f29e21ab308b72a15c75943472c835aaad63b7b226803e758a6b6dc46fc5): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:45:05 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:45:05.273851 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(0115f29e21ab308b72a15c75943472c835aaad63b7b226803e758a6b6dc46fc5): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 18:45:06 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:45:06.216613 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 18:45:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:06.217003584Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=e555088a-d0c1-4b6f-bd65-f579a6fa775c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:45:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:06.217062831Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:45:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:06.222684869Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:d890a27ed0b88066551dd05f0207174259831736647021a1bdbcca16b3ce105c UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/5e7932f1-e834-4af0-b7d2-8b3fad4b4cb8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:45:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:06.222710326Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:45:06 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-0115f29e21ab308b72a15c75943472c835aaad63b7b226803e758a6b6dc46fc5-userdata-shm.mount: Deactivated successfully. Feb 23 18:45:06 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-84d1cd757ff0c3f1d0c5b208c8a2e7c48032e23ed7623f379ff90e709c2d1e71-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:45:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:09.244932311Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(47a9a982fe72310c88f58fae2b54da2084041bcdf02aa5284f97857570dec33e): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=86c76abb-027f-4c93-84b0-2cd607036aa5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:45:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:09.244978361Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 47a9a982fe72310c88f58fae2b54da2084041bcdf02aa5284f97857570dec33e" id=86c76abb-027f-4c93-84b0-2cd607036aa5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:45:09 ip-10-0-136-68 systemd[1]: run-utsns-2622e4a1\x2d0748\x2d40b0\x2d8b36\x2d3e981197ee93.mount: Deactivated successfully. Feb 23 18:45:09 ip-10-0-136-68 systemd[1]: run-ipcns-2622e4a1\x2d0748\x2d40b0\x2d8b36\x2d3e981197ee93.mount: Deactivated successfully. Feb 23 18:45:09 ip-10-0-136-68 systemd[1]: run-netns-2622e4a1\x2d0748\x2d40b0\x2d8b36\x2d3e981197ee93.mount: Deactivated successfully. 
Feb 23 18:45:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:09.271330884Z" level=info msg="runSandbox: deleting pod ID 47a9a982fe72310c88f58fae2b54da2084041bcdf02aa5284f97857570dec33e from idIndex" id=86c76abb-027f-4c93-84b0-2cd607036aa5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:45:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:09.271376025Z" level=info msg="runSandbox: removing pod sandbox 47a9a982fe72310c88f58fae2b54da2084041bcdf02aa5284f97857570dec33e" id=86c76abb-027f-4c93-84b0-2cd607036aa5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:45:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:09.271427756Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 47a9a982fe72310c88f58fae2b54da2084041bcdf02aa5284f97857570dec33e" id=86c76abb-027f-4c93-84b0-2cd607036aa5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:45:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:09.271448512Z" level=info msg="runSandbox: unmounting shmPath for sandbox 47a9a982fe72310c88f58fae2b54da2084041bcdf02aa5284f97857570dec33e" id=86c76abb-027f-4c93-84b0-2cd607036aa5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:45:09 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-47a9a982fe72310c88f58fae2b54da2084041bcdf02aa5284f97857570dec33e-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:45:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:09.277312842Z" level=info msg="runSandbox: removing pod sandbox from storage: 47a9a982fe72310c88f58fae2b54da2084041bcdf02aa5284f97857570dec33e" id=86c76abb-027f-4c93-84b0-2cd607036aa5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:45:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:09.279038534Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=86c76abb-027f-4c93-84b0-2cd607036aa5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:45:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:09.279069152Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=86c76abb-027f-4c93-84b0-2cd607036aa5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:45:09 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:45:09.279311 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(47a9a982fe72310c88f58fae2b54da2084041bcdf02aa5284f97857570dec33e): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:45:09 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:45:09.279387 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(47a9a982fe72310c88f58fae2b54da2084041bcdf02aa5284f97857570dec33e): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:45:09 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:45:09.279433 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(47a9a982fe72310c88f58fae2b54da2084041bcdf02aa5284f97857570dec33e): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:45:09 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:45:09.279535 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(47a9a982fe72310c88f58fae2b54da2084041bcdf02aa5284f97857570dec33e): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 18:45:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:45:14.873038 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:45:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:45:14.873094 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 18:45:16 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:45:16.216444 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:45:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:16.216833652Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=c1c313f4-7522-4dfc-abdf-3c212ad65228 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:45:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:16.216889037Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:45:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:16.222143822Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:ef7e96a15cfda83d4cedd3bb230d5a58fc95ecbdb27ec1295868ed854f6d14ee UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/c8c3600e-54f8-465f-bd94-5f77c748c118 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:45:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:16.222170698Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:45:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:45:19.216975 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:45:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:19.217418417Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=4f90e569-497d-45ee-b11b-24f0392dcba9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:45:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:19.217481411Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:45:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:19.223154919Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:97185eca3e812aaae87d4d8090ec2088c5ad46c96ea913b2a8a5d46080d7c044 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/52431699-a249-44ea-9d1c-05853676142c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:45:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:19.223188838Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:45:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:20.200367937Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3" id=cee8f439-de52-40c1-a3df-9683557b0c49 name=/runtime.v1.ImageService/ImageStatus Feb 23 18:45:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:20.200575288Z" level=info msg="Image status: 
&ImageStatusResponse{Image:&Image{Id:51cb7555d0a960fc0daae74a5b5c47c19fd1ae7bd31e2616ebc88f696f511e4d,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3],Size_:351828271,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=cee8f439-de52-40c1-a3df-9683557b0c49 name=/runtime.v1.ImageService/ImageStatus Feb 23 18:45:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:20.247116265Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(d22eb7b0605175ed4aa35b5029f34049a108a5fde603329986e5e12a493fafba): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=3c9e6b34-fe23-4d6c-a00a-ea623c653f6a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:45:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:20.247173167Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox d22eb7b0605175ed4aa35b5029f34049a108a5fde603329986e5e12a493fafba" id=3c9e6b34-fe23-4d6c-a00a-ea623c653f6a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:45:20 ip-10-0-136-68 systemd[1]: run-utsns-1cf17b2e\x2d1ace\x2d4796\x2d971c\x2d53704cca6464.mount: Deactivated successfully. Feb 23 18:45:20 ip-10-0-136-68 systemd[1]: run-ipcns-1cf17b2e\x2d1ace\x2d4796\x2d971c\x2d53704cca6464.mount: Deactivated successfully. 
Feb 23 18:45:20 ip-10-0-136-68 systemd[1]: run-netns-1cf17b2e\x2d1ace\x2d4796\x2d971c\x2d53704cca6464.mount: Deactivated successfully. Feb 23 18:45:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:20.270318247Z" level=info msg="runSandbox: deleting pod ID d22eb7b0605175ed4aa35b5029f34049a108a5fde603329986e5e12a493fafba from idIndex" id=3c9e6b34-fe23-4d6c-a00a-ea623c653f6a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:45:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:20.270352964Z" level=info msg="runSandbox: removing pod sandbox d22eb7b0605175ed4aa35b5029f34049a108a5fde603329986e5e12a493fafba" id=3c9e6b34-fe23-4d6c-a00a-ea623c653f6a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:45:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:20.270386543Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox d22eb7b0605175ed4aa35b5029f34049a108a5fde603329986e5e12a493fafba" id=3c9e6b34-fe23-4d6c-a00a-ea623c653f6a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:45:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:20.270405067Z" level=info msg="runSandbox: unmounting shmPath for sandbox d22eb7b0605175ed4aa35b5029f34049a108a5fde603329986e5e12a493fafba" id=3c9e6b34-fe23-4d6c-a00a-ea623c653f6a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:45:20 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-d22eb7b0605175ed4aa35b5029f34049a108a5fde603329986e5e12a493fafba-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:45:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:20.277300977Z" level=info msg="runSandbox: removing pod sandbox from storage: d22eb7b0605175ed4aa35b5029f34049a108a5fde603329986e5e12a493fafba" id=3c9e6b34-fe23-4d6c-a00a-ea623c653f6a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:45:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:20.278863645Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=3c9e6b34-fe23-4d6c-a00a-ea623c653f6a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:45:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:20.278892719Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=3c9e6b34-fe23-4d6c-a00a-ea623c653f6a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:45:20 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:45:20.279067 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(d22eb7b0605175ed4aa35b5029f34049a108a5fde603329986e5e12a493fafba): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:45:20 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:45:20.279113 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(d22eb7b0605175ed4aa35b5029f34049a108a5fde603329986e5e12a493fafba): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:45:20 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:45:20.279140 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(d22eb7b0605175ed4aa35b5029f34049a108a5fde603329986e5e12a493fafba): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:45:20 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:45:20.279194 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(d22eb7b0605175ed4aa35b5029f34049a108a5fde603329986e5e12a493fafba): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 18:45:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:21.191493360Z" level=info msg="cleanup sandbox network" Feb 23 18:45:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:21.191684640Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:aa2f6c1cfe2015e661750ad0d84d67c8d8f79e105601e0a761db6a83ba33b3f1 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS: Networks:[{Name:multus-cni-network Ifname:eth0}] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:45:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:21.191820136Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:45:23 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:45:23.216480 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:45:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:23.216869296Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=129f183f-0b41-4484-8926-6969cbb25e17 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:45:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:23.216925143Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:45:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:23.222518579Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:54ed571cc7589c4a46231bc97be5dcdf38792f6a4462b1b80579768e8cb48591 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/a68e6c7f-e2c4-4299-838e-5bcd58d3a740 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:45:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:23.222553659Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:45:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:45:24.872024 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:45:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:45:24.872072 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: 
connection refused" Feb 23 18:45:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:45:26.291592 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:45:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:45:26.291851 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:45:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:45:26.292041 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:45:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:45:26.292067 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" 
probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:45:33 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:45:33.217126 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:45:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:33.217558170Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=af044e51-76e8-4165-9fa8-1562e7b42bb4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:45:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:33.217619734Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:45:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:45:34.872222 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:45:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:45:34.872310 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 18:45:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:45:34.872344 2199 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 18:45:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:45:34.872944 2199 kuberuntime_manager.go:659] "Message for Container of pod" containerName="csi-driver" containerStatusID={Type:cri-o 
ID:81c200251645cf6845a55e86f3b5ace4fc477483a5462066c323e09c598cf6d9} pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" containerMessage="Container csi-driver failed liveness probe, will be restarted" Feb 23 18:45:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:45:34.873130 2199 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" containerID="cri-o://81c200251645cf6845a55e86f3b5ace4fc477483a5462066c323e09c598cf6d9" gracePeriod=30 Feb 23 18:45:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:34.873379820Z" level=info msg="Stopping container: 81c200251645cf6845a55e86f3b5ace4fc477483a5462066c323e09c598cf6d9 (timeout: 30s)" id=392b9952-1c78-4592-8460-98512add2965 name=/runtime.v1.RuntimeService/StopContainer Feb 23 18:45:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:38.635182908Z" level=warning msg="Failed to find container exit file for 81c200251645cf6845a55e86f3b5ace4fc477483a5462066c323e09c598cf6d9: timed out waiting for the condition" id=392b9952-1c78-4592-8460-98512add2965 name=/runtime.v1.RuntimeService/StopContainer Feb 23 18:45:38 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-ba011d0989528c786bd7933422e5b1c7d4dbe8a58dfd81537aa50222b16d8a1e-merged.mount: Deactivated successfully. 
Feb 23 18:45:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:42.411925443Z" level=warning msg="Failed to find container exit file for 81c200251645cf6845a55e86f3b5ace4fc477483a5462066c323e09c598cf6d9: timed out waiting for the condition" id=392b9952-1c78-4592-8460-98512add2965 name=/runtime.v1.RuntimeService/StopContainer Feb 23 18:45:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:42.414017088Z" level=info msg="Stopped container 81c200251645cf6845a55e86f3b5ace4fc477483a5462066c323e09c598cf6d9: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=392b9952-1c78-4592-8460-98512add2965 name=/runtime.v1.RuntimeService/StopContainer Feb 23 18:45:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:42.414610803Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=6c01eccb-87d2-4978-b546-4fed0af67a66 name=/runtime.v1.ImageService/ImageStatus Feb 23 18:45:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:42.414782864Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=6c01eccb-87d2-4978-b546-4fed0af67a66 name=/runtime.v1.ImageService/ImageStatus Feb 23 18:45:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:42.415298552Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=f250f2ba-9fcf-4c37-9238-0a071516b974 name=/runtime.v1.ImageService/ImageStatus Feb 23 18:45:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 
18:45:42.415450073Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=f250f2ba-9fcf-4c37-9238-0a071516b974 name=/runtime.v1.ImageService/ImageStatus Feb 23 18:45:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:42.416058962Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=79f32d10-8e32-4aea-9da0-7412171c0494 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 18:45:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:42.416162123Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:45:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:42.478753566Z" level=warning msg="Failed to find container exit file for 81c200251645cf6845a55e86f3b5ace4fc477483a5462066c323e09c598cf6d9: timed out waiting for the condition" id=c1623c34-efc4-4ccd-8d1a-2fe830bb5b21 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:45:42 ip-10-0-136-68 systemd[1]: Started crio-conmon-5cf54fc47fba5b00fb32ff6e4f9437f388d6a27bc4444e09b8eec51a57779837.scope. Feb 23 18:45:42 ip-10-0-136-68 systemd[1]: Started libcontainer container 5cf54fc47fba5b00fb32ff6e4f9437f388d6a27bc4444e09b8eec51a57779837. Feb 23 18:45:42 ip-10-0-136-68 conmon[11027]: conmon 5cf54fc47fba5b00fb32 : Failed to write to cgroup.event_control Operation not supported Feb 23 18:45:42 ip-10-0-136-68 systemd[1]: crio-conmon-5cf54fc47fba5b00fb32ff6e4f9437f388d6a27bc4444e09b8eec51a57779837.scope: Deactivated successfully. 
Feb 23 18:45:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:42.540116667Z" level=info msg="Created container 5cf54fc47fba5b00fb32ff6e4f9437f388d6a27bc4444e09b8eec51a57779837: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=79f32d10-8e32-4aea-9da0-7412171c0494 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 18:45:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:42.540539812Z" level=info msg="Starting container: 5cf54fc47fba5b00fb32ff6e4f9437f388d6a27bc4444e09b8eec51a57779837" id=c2579b69-01c3-47a5-acf0-ae3fb0d6ba20 name=/runtime.v1.RuntimeService/StartContainer Feb 23 18:45:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:42.547060278Z" level=info msg="Started container" PID=11039 containerID=5cf54fc47fba5b00fb32ff6e4f9437f388d6a27bc4444e09b8eec51a57779837 description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver id=c2579b69-01c3-47a5-acf0-ae3fb0d6ba20 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4ac357af36e7dd4792f93249425e93fd84aed45840e9fbb3d0ae6c0af964798 Feb 23 18:45:42 ip-10-0-136-68 systemd[1]: crio-5cf54fc47fba5b00fb32ff6e4f9437f388d6a27bc4444e09b8eec51a57779837.scope: Deactivated successfully. 
Feb 23 18:45:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:46.230140280Z" level=warning msg="Failed to find container exit file for 5977cd64f014617e9eba0befb4f52e9466055d78e4fd24c64d02b28776b270fe: timed out waiting for the condition" id=e30ebad2-ff1e-451f-890a-53246f268b47 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:45:46 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:45:46.230902 2199 generic.go:332] "Generic (PLEG): container finished" podID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerID="81c200251645cf6845a55e86f3b5ace4fc477483a5462066c323e09c598cf6d9" exitCode=-1 Feb 23 18:45:46 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:45:46.230939 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerDied Data:81c200251645cf6845a55e86f3b5ace4fc477483a5462066c323e09c598cf6d9} Feb 23 18:45:46 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:45:46.230968 2199 scope.go:115] "RemoveContainer" containerID="5977cd64f014617e9eba0befb4f52e9466055d78e4fd24c64d02b28776b270fe" Feb 23 18:45:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:49.978920365Z" level=warning msg="Failed to find container exit file for 5977cd64f014617e9eba0befb4f52e9466055d78e4fd24c64d02b28776b270fe: timed out waiting for the condition" id=d7bbac29-2ed2-45f6-9160-919a607fcebc name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:45:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:50.994046448Z" level=warning msg="Failed to find container exit file for 81c200251645cf6845a55e86f3b5ace4fc477483a5462066c323e09c598cf6d9: timed out waiting for the condition" id=b66ffd1f-9766-44d7-a482-c926ce580f41 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:45:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:51.234773968Z" level=info msg="NetworkStart: stopping network for sandbox d890a27ed0b88066551dd05f0207174259831736647021a1bdbcca16b3ce105c" 
id=e555088a-d0c1-4b6f-bd65-f579a6fa775c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:45:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:51.234891825Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:d890a27ed0b88066551dd05f0207174259831736647021a1bdbcca16b3ce105c UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/5e7932f1-e834-4af0-b7d2-8b3fad4b4cb8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:45:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:51.234919589Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:45:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:51.234928608Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:45:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:51.234938735Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:45:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:53.740011076Z" level=warning msg="Failed to find container exit file for 5977cd64f014617e9eba0befb4f52e9466055d78e4fd24c64d02b28776b270fe: timed out waiting for the condition" id=9fbbbda8-e316-4a23-87ff-13b26bd49f1a name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:45:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:53.740563226Z" level=info msg="Removing container: 5977cd64f014617e9eba0befb4f52e9466055d78e4fd24c64d02b28776b270fe" id=7907c03a-21f9-4a4f-8e1d-e104d4bfc60e name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 18:45:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:54.731572304Z" level=warning msg="Failed to find container exit file for 5977cd64f014617e9eba0befb4f52e9466055d78e4fd24c64d02b28776b270fe: timed out waiting for the condition" id=ad32a57c-bc1d-4930-9c27-b1b4ddd3971e 
name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:45:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:45:54.732339 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerStarted Data:5cf54fc47fba5b00fb32ff6e4f9437f388d6a27bc4444e09b8eec51a57779837} Feb 23 18:45:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:45:54.872215 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:45:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:45:54.872286 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 18:45:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:45:56.292496 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:45:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:45:56.292798 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open 
/proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:45:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:45:56.293039 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:45:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:45:56.293082 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:45:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:57.489110747Z" level=warning msg="Failed to find container exit file for 5977cd64f014617e9eba0befb4f52e9466055d78e4fd24c64d02b28776b270fe: timed out waiting for the condition" id=7907c03a-21f9-4a4f-8e1d-e104d4bfc60e name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 18:45:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:45:57.501543981Z" level=info msg="Removed container 5977cd64f014617e9eba0befb4f52e9466055d78e4fd24c64d02b28776b270fe: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=7907c03a-21f9-4a4f-8e1d-e104d4bfc60e name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 18:46:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 
18:46:01.235138389Z" level=info msg="NetworkStart: stopping network for sandbox ef7e96a15cfda83d4cedd3bb230d5a58fc95ecbdb27ec1295868ed854f6d14ee" id=c1c313f4-7522-4dfc-abdf-3c212ad65228 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:46:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:01.235302081Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:ef7e96a15cfda83d4cedd3bb230d5a58fc95ecbdb27ec1295868ed854f6d14ee UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/c8c3600e-54f8-465f-bd94-5f77c748c118 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:46:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:01.235363301Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:46:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:01.235375390Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:46:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:01.235385228Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:46:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:01.501065560Z" level=warning msg="Failed to find container exit file for 81c200251645cf6845a55e86f3b5ace4fc477483a5462066c323e09c598cf6d9: timed out waiting for the condition" id=776584fe-4243-4c73-8adb-1083eb798a95 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:46:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:04.236495671Z" level=info msg="NetworkStart: stopping network for sandbox 97185eca3e812aaae87d4d8090ec2088c5ad46c96ea913b2a8a5d46080d7c044" id=4f90e569-497d-45ee-b11b-24f0392dcba9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:46:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:04.236612995Z" level=info msg="Got pod network 
&{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:97185eca3e812aaae87d4d8090ec2088c5ad46c96ea913b2a8a5d46080d7c044 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/52431699-a249-44ea-9d1c-05853676142c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:46:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:04.236650011Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:46:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:04.236662908Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:46:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:04.236674964Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:46:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:46:04.872492 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:46:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:46:04.872546 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 18:46:05 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:46:05.217131 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open 
/proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:46:05 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:46:05.217415 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:46:05 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:46:05.217663 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:46:05 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:46:05.217714 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:46:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:06.201850854Z" level=error msg="Failed to cleanup (probably retrying): failed to destroy network for pod sandbox 
k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(aa2f6c1cfe2015e661750ad0d84d67c8d8f79e105601e0a761db6a83ba33b3f1): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" Feb 23 18:46:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:06.201899234Z" level=error msg="Retried cleanup function \"cleanup sandbox network\" too often, giving up" Feb 23 18:46:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:06.201911283Z" level=error msg="Cleanup during server startup failed: wait on retry: timed out waiting for the condition" Feb 23 18:46:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:06.202544683Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:ec94853d75c8e660ec50b72f6ed04be766313f29131b2b5fbb84694f86c4b1d7 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/a73ca07b-4a5f-461c-ac1a-334b08c4642b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:46:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:06.202571466Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:46:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:08.235635441Z" level=info msg="NetworkStart: stopping network for sandbox 54ed571cc7589c4a46231bc97be5dcdf38792f6a4462b1b80579768e8cb48591" id=129f183f-0b41-4484-8926-6969cbb25e17 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:46:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:08.235753943Z" level=info 
msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:54ed571cc7589c4a46231bc97be5dcdf38792f6a4462b1b80579768e8cb48591 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/a68e6c7f-e2c4-4299-838e-5bcd58d3a740 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:46:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:08.235792097Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:46:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:08.235804637Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:46:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:08.235815540Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:46:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:46:14.872233 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:46:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:46:14.872328 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 18:46:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:46:24.872314 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get 
\"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:46:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:46:24.872384 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 18:46:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:46:26.291681 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:46:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:46:26.291932 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:46:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:46:26.292148 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" 
containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:46:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:46:26.292195 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:46:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:46:34.872765 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:46:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:46:34.872830 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 18:46:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:46:34.872857 2199 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 18:46:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:46:34.873432 2199 kuberuntime_manager.go:659] "Message for Container of pod" containerName="csi-driver" containerStatusID={Type:cri-o ID:5cf54fc47fba5b00fb32ff6e4f9437f388d6a27bc4444e09b8eec51a57779837} pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" containerMessage="Container csi-driver failed liveness 
probe, will be restarted" Feb 23 18:46:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:46:34.873607 2199 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" containerID="cri-o://5cf54fc47fba5b00fb32ff6e4f9437f388d6a27bc4444e09b8eec51a57779837" gracePeriod=30 Feb 23 18:46:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:34.873849362Z" level=info msg="Stopping container: 5cf54fc47fba5b00fb32ff6e4f9437f388d6a27bc4444e09b8eec51a57779837 (timeout: 30s)" id=921e4a87-dafc-4098-81d5-cd89468211dd name=/runtime.v1.RuntimeService/StopContainer Feb 23 18:46:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:36.245091043Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(d890a27ed0b88066551dd05f0207174259831736647021a1bdbcca16b3ce105c): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e555088a-d0c1-4b6f-bd65-f579a6fa775c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:46:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:36.245141727Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox d890a27ed0b88066551dd05f0207174259831736647021a1bdbcca16b3ce105c" id=e555088a-d0c1-4b6f-bd65-f579a6fa775c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:46:36 ip-10-0-136-68 systemd[1]: run-utsns-5e7932f1\x2de834\x2d4af0\x2db7d2\x2d8b3fad4b4cb8.mount: Deactivated successfully. 
Feb 23 18:46:36 ip-10-0-136-68 systemd[1]: run-ipcns-5e7932f1\x2de834\x2d4af0\x2db7d2\x2d8b3fad4b4cb8.mount: Deactivated successfully. Feb 23 18:46:36 ip-10-0-136-68 systemd[1]: run-netns-5e7932f1\x2de834\x2d4af0\x2db7d2\x2d8b3fad4b4cb8.mount: Deactivated successfully. Feb 23 18:46:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:36.271332190Z" level=info msg="runSandbox: deleting pod ID d890a27ed0b88066551dd05f0207174259831736647021a1bdbcca16b3ce105c from idIndex" id=e555088a-d0c1-4b6f-bd65-f579a6fa775c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:46:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:36.271370930Z" level=info msg="runSandbox: removing pod sandbox d890a27ed0b88066551dd05f0207174259831736647021a1bdbcca16b3ce105c" id=e555088a-d0c1-4b6f-bd65-f579a6fa775c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:46:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:36.271404394Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox d890a27ed0b88066551dd05f0207174259831736647021a1bdbcca16b3ce105c" id=e555088a-d0c1-4b6f-bd65-f579a6fa775c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:46:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:36.271430023Z" level=info msg="runSandbox: unmounting shmPath for sandbox d890a27ed0b88066551dd05f0207174259831736647021a1bdbcca16b3ce105c" id=e555088a-d0c1-4b6f-bd65-f579a6fa775c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:46:36 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-d890a27ed0b88066551dd05f0207174259831736647021a1bdbcca16b3ce105c-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:46:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:36.277305822Z" level=info msg="runSandbox: removing pod sandbox from storage: d890a27ed0b88066551dd05f0207174259831736647021a1bdbcca16b3ce105c" id=e555088a-d0c1-4b6f-bd65-f579a6fa775c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:46:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:36.278873702Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=e555088a-d0c1-4b6f-bd65-f579a6fa775c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:46:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:36.278902683Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=e555088a-d0c1-4b6f-bd65-f579a6fa775c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:46:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:46:36.279137 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(d890a27ed0b88066551dd05f0207174259831736647021a1bdbcca16b3ce105c): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:46:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:46:36.279204 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(d890a27ed0b88066551dd05f0207174259831736647021a1bdbcca16b3ce105c): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:46:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:46:36.279265 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(d890a27ed0b88066551dd05f0207174259831736647021a1bdbcca16b3ce105c): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:46:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:46:36.279343 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(d890a27ed0b88066551dd05f0207174259831736647021a1bdbcca16b3ce105c): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 18:46:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:38.636093142Z" level=warning msg="Failed to find container exit file for 5cf54fc47fba5b00fb32ff6e4f9437f388d6a27bc4444e09b8eec51a57779837: timed out waiting for the condition" id=921e4a87-dafc-4098-81d5-cd89468211dd name=/runtime.v1.RuntimeService/StopContainer Feb 23 18:46:38 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-2cd07eda250392b760b4f1dae9977406729dac4ffef37f84b7e9beb9d27f3bb7-merged.mount: Deactivated successfully. 
Feb 23 18:46:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:42.429913010Z" level=warning msg="Failed to find container exit file for 5cf54fc47fba5b00fb32ff6e4f9437f388d6a27bc4444e09b8eec51a57779837: timed out waiting for the condition" id=921e4a87-dafc-4098-81d5-cd89468211dd name=/runtime.v1.RuntimeService/StopContainer Feb 23 18:46:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:42.431817078Z" level=info msg="Stopped container 5cf54fc47fba5b00fb32ff6e4f9437f388d6a27bc4444e09b8eec51a57779837: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=921e4a87-dafc-4098-81d5-cd89468211dd name=/runtime.v1.RuntimeService/StopContainer Feb 23 18:46:42 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:46:42.432383 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:46:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:43.309906376Z" level=warning msg="Failed to find container exit file for 5cf54fc47fba5b00fb32ff6e4f9437f388d6a27bc4444e09b8eec51a57779837: timed out waiting for the condition" id=36096951-241a-4ab5-a3fe-3adb59451e69 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:46:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:46.245514244Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(ef7e96a15cfda83d4cedd3bb230d5a58fc95ecbdb27ec1295868ed854f6d14ee): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: 
[openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c1c313f4-7522-4dfc-abdf-3c212ad65228 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:46:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:46.245560747Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox ef7e96a15cfda83d4cedd3bb230d5a58fc95ecbdb27ec1295868ed854f6d14ee" id=c1c313f4-7522-4dfc-abdf-3c212ad65228 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:46:46 ip-10-0-136-68 systemd[1]: run-utsns-c8c3600e\x2d54f8\x2d465f\x2dbd94\x2d5f77c748c118.mount: Deactivated successfully. Feb 23 18:46:46 ip-10-0-136-68 systemd[1]: run-ipcns-c8c3600e\x2d54f8\x2d465f\x2dbd94\x2d5f77c748c118.mount: Deactivated successfully. Feb 23 18:46:46 ip-10-0-136-68 systemd[1]: run-netns-c8c3600e\x2d54f8\x2d465f\x2dbd94\x2d5f77c748c118.mount: Deactivated successfully. Feb 23 18:46:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:46.262316712Z" level=info msg="runSandbox: deleting pod ID ef7e96a15cfda83d4cedd3bb230d5a58fc95ecbdb27ec1295868ed854f6d14ee from idIndex" id=c1c313f4-7522-4dfc-abdf-3c212ad65228 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:46:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:46.262347323Z" level=info msg="runSandbox: removing pod sandbox ef7e96a15cfda83d4cedd3bb230d5a58fc95ecbdb27ec1295868ed854f6d14ee" id=c1c313f4-7522-4dfc-abdf-3c212ad65228 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:46:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:46.262371124Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox ef7e96a15cfda83d4cedd3bb230d5a58fc95ecbdb27ec1295868ed854f6d14ee" id=c1c313f4-7522-4dfc-abdf-3c212ad65228 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:46:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:46.262383180Z" level=info msg="runSandbox: unmounting shmPath 
for sandbox ef7e96a15cfda83d4cedd3bb230d5a58fc95ecbdb27ec1295868ed854f6d14ee" id=c1c313f4-7522-4dfc-abdf-3c212ad65228 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:46:46 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-ef7e96a15cfda83d4cedd3bb230d5a58fc95ecbdb27ec1295868ed854f6d14ee-userdata-shm.mount: Deactivated successfully. Feb 23 18:46:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:46.269350010Z" level=info msg="runSandbox: removing pod sandbox from storage: ef7e96a15cfda83d4cedd3bb230d5a58fc95ecbdb27ec1295868ed854f6d14ee" id=c1c313f4-7522-4dfc-abdf-3c212ad65228 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:46:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:46.270843145Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=c1c313f4-7522-4dfc-abdf-3c212ad65228 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:46:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:46.270873607Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=c1c313f4-7522-4dfc-abdf-3c212ad65228 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:46:46 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:46:46.271049 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(ef7e96a15cfda83d4cedd3bb230d5a58fc95ecbdb27ec1295868ed854f6d14ee): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Feb 23 18:46:46 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:46:46.271108 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(ef7e96a15cfda83d4cedd3bb230d5a58fc95ecbdb27ec1295868ed854f6d14ee): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:46:46 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:46:46.271145 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(ef7e96a15cfda83d4cedd3bb230d5a58fc95ecbdb27ec1295868ed854f6d14ee): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:46:46 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:46:46.271226 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(ef7e96a15cfda83d4cedd3bb230d5a58fc95ecbdb27ec1295868ed854f6d14ee): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 18:46:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:47.061073161Z" level=warning msg="Failed to find container exit file for 81c200251645cf6845a55e86f3b5ace4fc477483a5462066c323e09c598cf6d9: timed out waiting for the condition" id=e0230c02-a89b-4cee-ac7c-051a68109489 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:46:47 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:46:47.061969 2199 generic.go:332] "Generic (PLEG): container finished" podID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerID="5cf54fc47fba5b00fb32ff6e4f9437f388d6a27bc4444e09b8eec51a57779837" exitCode=-1 Feb 23 18:46:47 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:46:47.062008 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerDied Data:5cf54fc47fba5b00fb32ff6e4f9437f388d6a27bc4444e09b8eec51a57779837} Feb 23 18:46:47 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:46:47.062038 2199 scope.go:115] "RemoveContainer" containerID="81c200251645cf6845a55e86f3b5ace4fc477483a5462066c323e09c598cf6d9" Feb 23 18:46:48 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:46:48.063621 2199 scope.go:115] "RemoveContainer" containerID="5cf54fc47fba5b00fb32ff6e4f9437f388d6a27bc4444e09b8eec51a57779837" Feb 23 18:46:48 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:46:48.064144 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:46:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 
18:46:49.246547777Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(97185eca3e812aaae87d4d8090ec2088c5ad46c96ea913b2a8a5d46080d7c044): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=4f90e569-497d-45ee-b11b-24f0392dcba9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:46:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:49.246590781Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 97185eca3e812aaae87d4d8090ec2088c5ad46c96ea913b2a8a5d46080d7c044" id=4f90e569-497d-45ee-b11b-24f0392dcba9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:46:49 ip-10-0-136-68 systemd[1]: run-utsns-52431699\x2da249\x2d44ea\x2d9d1c\x2d05853676142c.mount: Deactivated successfully. Feb 23 18:46:49 ip-10-0-136-68 systemd[1]: run-ipcns-52431699\x2da249\x2d44ea\x2d9d1c\x2d05853676142c.mount: Deactivated successfully. Feb 23 18:46:49 ip-10-0-136-68 systemd[1]: run-netns-52431699\x2da249\x2d44ea\x2d9d1c\x2d05853676142c.mount: Deactivated successfully. 
Feb 23 18:46:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:49.267332481Z" level=info msg="runSandbox: deleting pod ID 97185eca3e812aaae87d4d8090ec2088c5ad46c96ea913b2a8a5d46080d7c044 from idIndex" id=4f90e569-497d-45ee-b11b-24f0392dcba9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:46:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:49.267372595Z" level=info msg="runSandbox: removing pod sandbox 97185eca3e812aaae87d4d8090ec2088c5ad46c96ea913b2a8a5d46080d7c044" id=4f90e569-497d-45ee-b11b-24f0392dcba9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:46:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:49.267401394Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 97185eca3e812aaae87d4d8090ec2088c5ad46c96ea913b2a8a5d46080d7c044" id=4f90e569-497d-45ee-b11b-24f0392dcba9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:46:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:49.267415751Z" level=info msg="runSandbox: unmounting shmPath for sandbox 97185eca3e812aaae87d4d8090ec2088c5ad46c96ea913b2a8a5d46080d7c044" id=4f90e569-497d-45ee-b11b-24f0392dcba9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:46:49 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-97185eca3e812aaae87d4d8090ec2088c5ad46c96ea913b2a8a5d46080d7c044-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:46:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:49.279311661Z" level=info msg="runSandbox: removing pod sandbox from storage: 97185eca3e812aaae87d4d8090ec2088c5ad46c96ea913b2a8a5d46080d7c044" id=4f90e569-497d-45ee-b11b-24f0392dcba9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:46:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:49.280928059Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=4f90e569-497d-45ee-b11b-24f0392dcba9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:46:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:49.280962290Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=4f90e569-497d-45ee-b11b-24f0392dcba9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:46:49 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:46:49.281173 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(97185eca3e812aaae87d4d8090ec2088c5ad46c96ea913b2a8a5d46080d7c044): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:46:49 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:46:49.281239 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(97185eca3e812aaae87d4d8090ec2088c5ad46c96ea913b2a8a5d46080d7c044): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:46:49 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:46:49.281323 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(97185eca3e812aaae87d4d8090ec2088c5ad46c96ea913b2a8a5d46080d7c044): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:46:49 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:46:49.281380 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(97185eca3e812aaae87d4d8090ec2088c5ad46c96ea913b2a8a5d46080d7c044): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 18:46:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:50.822983825Z" level=warning msg="Failed to find container exit file for 81c200251645cf6845a55e86f3b5ace4fc477483a5462066c323e09c598cf6d9: timed out waiting for the condition" id=41ae3857-ed88-4b9c-9d5d-36c620e8936a name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:46:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:51.213745103Z" level=info msg="NetworkStart: stopping network for sandbox ec94853d75c8e660ec50b72f6ed04be766313f29131b2b5fbb84694f86c4b1d7" id=af044e51-76e8-4165-9fa8-1562e7b42bb4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:46:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:51.213858069Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:ec94853d75c8e660ec50b72f6ed04be766313f29131b2b5fbb84694f86c4b1d7 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/a73ca07b-4a5f-461c-ac1a-334b08c4642b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:46:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:51.213887726Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:46:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:51.213899518Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:46:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:51.213908883Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:46:51 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:46:51.216881 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 18:46:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:51.217371402Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=d60ee230-5a52-4892-b8f0-ea5a6701634b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:46:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:51.217408721Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:46:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:51.223515749Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:ceda480adfa9134bac8bb76a47a676e10f3bd097f6f3a078e6aa79b573a5efb1 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/b8232229-3aff-403f-87cd-8cfa1f39f596 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:46:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:51.223540106Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:46:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:53.245863930Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(54ed571cc7589c4a46231bc97be5dcdf38792f6a4462b1b80579768e8cb48591): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=129f183f-0b41-4484-8926-6969cbb25e17 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:46:53 
ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:53.245917097Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 54ed571cc7589c4a46231bc97be5dcdf38792f6a4462b1b80579768e8cb48591" id=129f183f-0b41-4484-8926-6969cbb25e17 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:46:53 ip-10-0-136-68 systemd[1]: run-utsns-a68e6c7f\x2de2c4\x2d4299\x2d838e\x2d5bcd58d3a740.mount: Deactivated successfully. Feb 23 18:46:53 ip-10-0-136-68 systemd[1]: run-ipcns-a68e6c7f\x2de2c4\x2d4299\x2d838e\x2d5bcd58d3a740.mount: Deactivated successfully. Feb 23 18:46:53 ip-10-0-136-68 systemd[1]: run-netns-a68e6c7f\x2de2c4\x2d4299\x2d838e\x2d5bcd58d3a740.mount: Deactivated successfully. Feb 23 18:46:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:53.278361856Z" level=info msg="runSandbox: deleting pod ID 54ed571cc7589c4a46231bc97be5dcdf38792f6a4462b1b80579768e8cb48591 from idIndex" id=129f183f-0b41-4484-8926-6969cbb25e17 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:46:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:53.278410064Z" level=info msg="runSandbox: removing pod sandbox 54ed571cc7589c4a46231bc97be5dcdf38792f6a4462b1b80579768e8cb48591" id=129f183f-0b41-4484-8926-6969cbb25e17 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:46:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:53.278460716Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 54ed571cc7589c4a46231bc97be5dcdf38792f6a4462b1b80579768e8cb48591" id=129f183f-0b41-4484-8926-6969cbb25e17 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:46:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:53.278482067Z" level=info msg="runSandbox: unmounting shmPath for sandbox 54ed571cc7589c4a46231bc97be5dcdf38792f6a4462b1b80579768e8cb48591" id=129f183f-0b41-4484-8926-6969cbb25e17 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:46:53 ip-10-0-136-68 systemd[1]: 
run-containers-storage-overlay\x2dcontainers-54ed571cc7589c4a46231bc97be5dcdf38792f6a4462b1b80579768e8cb48591-userdata-shm.mount: Deactivated successfully. Feb 23 18:46:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:53.283309038Z" level=info msg="runSandbox: removing pod sandbox from storage: 54ed571cc7589c4a46231bc97be5dcdf38792f6a4462b1b80579768e8cb48591" id=129f183f-0b41-4484-8926-6969cbb25e17 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:46:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:53.284895703Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=129f183f-0b41-4484-8926-6969cbb25e17 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:46:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:53.284927202Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=129f183f-0b41-4484-8926-6969cbb25e17 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:46:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:46:53.285153 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(54ed571cc7589c4a46231bc97be5dcdf38792f6a4462b1b80579768e8cb48591): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:46:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:46:53.285230 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(54ed571cc7589c4a46231bc97be5dcdf38792f6a4462b1b80579768e8cb48591): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:46:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:46:53.285315 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(54ed571cc7589c4a46231bc97be5dcdf38792f6a4462b1b80579768e8cb48591): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:46:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:46:53.285422 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(54ed571cc7589c4a46231bc97be5dcdf38792f6a4462b1b80579768e8cb48591): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 18:46:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:54.583935442Z" level=warning msg="Failed to find container exit file for 81c200251645cf6845a55e86f3b5ace4fc477483a5462066c323e09c598cf6d9: timed out waiting for the condition" id=b1ea1669-07a9-460d-a185-56104a74ea3f name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:46:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:54.584481553Z" level=info msg="Removing container: 81c200251645cf6845a55e86f3b5ace4fc477483a5462066c323e09c598cf6d9" id=a465aebe-f6fc-449f-b459-97777ac5f6d6 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 18:46:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:46:56.292370 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:46:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:46:56.292654 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:46:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:46:56.292915 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc 
error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:46:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:46:56.292947 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:46:57 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:46:57.217339 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:46:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:57.217739745Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=b12718f9-e83a-433e-aad0-7407ff38943c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:46:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:57.217796031Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:46:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:57.222848590Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:8383bcde66610edf95805c200e8b940bc0f17df5fd25234a88c0131342b0ee60 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/dc56dd89-ab23-4a21-af24-9e941c03ab41 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:46:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:57.222898155Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:46:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:58.344945302Z" level=warning msg="Failed to find container exit file for 81c200251645cf6845a55e86f3b5ace4fc477483a5462066c323e09c598cf6d9: timed out waiting for the condition" id=a465aebe-f6fc-449f-b459-97777ac5f6d6 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 18:46:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:46:58.358137217Z" level=info msg="Removed container 81c200251645cf6845a55e86f3b5ace4fc477483a5462066c323e09c598cf6d9: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=a465aebe-f6fc-449f-b459-97777ac5f6d6 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 18:47:02 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:47:02.217316 2199 scope.go:115] "RemoveContainer" 
containerID="5cf54fc47fba5b00fb32ff6e4f9437f388d6a27bc4444e09b8eec51a57779837" Feb 23 18:47:02 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:47:02.217911 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:47:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:47:02.844046516Z" level=warning msg="Failed to find container exit file for 5cf54fc47fba5b00fb32ff6e4f9437f388d6a27bc4444e09b8eec51a57779837: timed out waiting for the condition" id=0f8577e9-2494-4f76-b3c6-b551c166e09a name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:47:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:47:04.216685 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:47:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:47:04.217081764Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=76e179f4-b97f-4934-9c36-081d8f1cb77e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:47:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:47:04.217134385Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:47:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:47:04.222888004Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:2729eb1e97a733dc87d08b4cc33a99d22df950fc1759d668b4987825e93af971 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/acdfde08-3dee-4916-aae0-8c6d73f93d88 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:47:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:47:04.222921057Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:47:06 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:47:06.216751 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:47:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:47:06.217136697Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=b0c4fe88-70c8-4be8-97f9-9192ebb34fc8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:47:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:47:06.217201587Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:47:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:47:06.226533104Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:87317b844e8e3abd9c467ad9231114902302a24f65326ac07ff600b403382145 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/2e14d549-2182-4cdd-9e15-f147ae4343b9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:47:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:47:06.226559780Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:47:15 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:47:15.216487 2199 scope.go:115] "RemoveContainer" containerID="5cf54fc47fba5b00fb32ff6e4f9437f388d6a27bc4444e09b8eec51a57779837" Feb 23 18:47:15 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:47:15.216937 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:47:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 
18:47:25.217420 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:47:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:47:25.217737 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:47:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:47:25.217991 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:47:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:47:25.218036 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" 
podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:47:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:47:26.291666 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:47:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:47:26.291932 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:47:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:47:26.292173 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:47:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:47:26.292202 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or 
directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:47:30 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:47:30.217702 2199 scope.go:115] "RemoveContainer" containerID="5cf54fc47fba5b00fb32ff6e4f9437f388d6a27bc4444e09b8eec51a57779837" Feb 23 18:47:30 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:47:30.218235 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:47:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:47:36.224851331Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(ec94853d75c8e660ec50b72f6ed04be766313f29131b2b5fbb84694f86c4b1d7): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=af044e51-76e8-4165-9fa8-1562e7b42bb4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:47:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:47:36.224897260Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox ec94853d75c8e660ec50b72f6ed04be766313f29131b2b5fbb84694f86c4b1d7" id=af044e51-76e8-4165-9fa8-1562e7b42bb4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:47:36 
ip-10-0-136-68 systemd[1]: run-utsns-a73ca07b\x2d4a5f\x2d461c\x2dac1a\x2d334b08c4642b.mount: Deactivated successfully. Feb 23 18:47:36 ip-10-0-136-68 systemd[1]: run-ipcns-a73ca07b\x2d4a5f\x2d461c\x2dac1a\x2d334b08c4642b.mount: Deactivated successfully. Feb 23 18:47:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:47:36.235894790Z" level=info msg="NetworkStart: stopping network for sandbox ceda480adfa9134bac8bb76a47a676e10f3bd097f6f3a078e6aa79b573a5efb1" id=d60ee230-5a52-4892-b8f0-ea5a6701634b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:47:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:47:36.236000904Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:ceda480adfa9134bac8bb76a47a676e10f3bd097f6f3a078e6aa79b573a5efb1 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/b8232229-3aff-403f-87cd-8cfa1f39f596 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:47:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:47:36.236041253Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:47:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:47:36.236052973Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:47:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:47:36.236063544Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:47:36 ip-10-0-136-68 systemd[1]: run-netns-a73ca07b\x2d4a5f\x2d461c\x2dac1a\x2d334b08c4642b.mount: Deactivated successfully. 
Feb 23 18:47:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:47:36.244360232Z" level=info msg="runSandbox: deleting pod ID ec94853d75c8e660ec50b72f6ed04be766313f29131b2b5fbb84694f86c4b1d7 from idIndex" id=af044e51-76e8-4165-9fa8-1562e7b42bb4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:47:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:47:36.244490120Z" level=info msg="runSandbox: removing pod sandbox ec94853d75c8e660ec50b72f6ed04be766313f29131b2b5fbb84694f86c4b1d7" id=af044e51-76e8-4165-9fa8-1562e7b42bb4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:47:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:47:36.244565144Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox ec94853d75c8e660ec50b72f6ed04be766313f29131b2b5fbb84694f86c4b1d7" id=af044e51-76e8-4165-9fa8-1562e7b42bb4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:47:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:47:36.244589590Z" level=info msg="runSandbox: unmounting shmPath for sandbox ec94853d75c8e660ec50b72f6ed04be766313f29131b2b5fbb84694f86c4b1d7" id=af044e51-76e8-4165-9fa8-1562e7b42bb4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:47:36 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-ec94853d75c8e660ec50b72f6ed04be766313f29131b2b5fbb84694f86c4b1d7-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:47:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:47:36.248300038Z" level=info msg="runSandbox: removing pod sandbox from storage: ec94853d75c8e660ec50b72f6ed04be766313f29131b2b5fbb84694f86c4b1d7" id=af044e51-76e8-4165-9fa8-1562e7b42bb4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:47:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:47:36.249844976Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=af044e51-76e8-4165-9fa8-1562e7b42bb4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:47:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:47:36.249872802Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=af044e51-76e8-4165-9fa8-1562e7b42bb4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:47:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:47:36.250045 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(ec94853d75c8e660ec50b72f6ed04be766313f29131b2b5fbb84694f86c4b1d7): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:47:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:47:36.250106 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(ec94853d75c8e660ec50b72f6ed04be766313f29131b2b5fbb84694f86c4b1d7): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:47:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:47:36.250144 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(ec94853d75c8e660ec50b72f6ed04be766313f29131b2b5fbb84694f86c4b1d7): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:47:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:47:36.250227 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(ec94853d75c8e660ec50b72f6ed04be766313f29131b2b5fbb84694f86c4b1d7): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 18:47:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:47:42.234317336Z" level=info msg="NetworkStart: stopping network for sandbox 8383bcde66610edf95805c200e8b940bc0f17df5fd25234a88c0131342b0ee60" id=b12718f9-e83a-433e-aad0-7407ff38943c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:47:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:47:42.234437866Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:8383bcde66610edf95805c200e8b940bc0f17df5fd25234a88c0131342b0ee60 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/dc56dd89-ab23-4a21-af24-9e941c03ab41 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:47:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:47:42.234478543Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:47:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:47:42.234489256Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:47:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:47:42.234499270Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:47:43 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:47:43.216729 2199 scope.go:115] "RemoveContainer" containerID="5cf54fc47fba5b00fb32ff6e4f9437f388d6a27bc4444e09b8eec51a57779837" Feb 23 18:47:43 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:47:43.217125 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver 
pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:47:49 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:47:49.217151 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:47:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:47:49.217616899Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=5e506fda-de20-44c2-93b3-d618ff291e64 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:47:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:47:49.217684834Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:47:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:47:49.222741349Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:13afba62f2311d89acd9dbd26ef1994bfa7590d7d7d50e2a6a9bef7ab66c6e8b UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/c2486631-d9c1-41f6-9f82-02f01a7f9eee Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:47:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:47:49.222768211Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:47:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:47:49.234952222Z" level=info msg="NetworkStart: stopping network for sandbox 2729eb1e97a733dc87d08b4cc33a99d22df950fc1759d668b4987825e93af971" id=76e179f4-b97f-4934-9c36-081d8f1cb77e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:47:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:47:49.235039518Z" level=info msg="Got pod network 
&{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:2729eb1e97a733dc87d08b4cc33a99d22df950fc1759d668b4987825e93af971 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/acdfde08-3dee-4916-aae0-8c6d73f93d88 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:47:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:47:49.235066325Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:47:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:47:49.235074879Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:47:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:47:49.235081131Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:47:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:47:51.237745262Z" level=info msg="NetworkStart: stopping network for sandbox 87317b844e8e3abd9c467ad9231114902302a24f65326ac07ff600b403382145" id=b0c4fe88-70c8-4be8-97f9-9192ebb34fc8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:47:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:47:51.237882104Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:87317b844e8e3abd9c467ad9231114902302a24f65326ac07ff600b403382145 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/2e14d549-2182-4cdd-9e15-f147ae4343b9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:47:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:47:51.237925338Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:47:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:47:51.237940051Z" level=warning msg="falling back to loading from 
existing plugins on disk" Feb 23 18:47:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:47:51.237950269Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:47:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:47:56.292423 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:47:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:47:56.292691 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:47:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:47:56.292947 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:47:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:47:56.292994 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound 
desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:47:57 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:47:57.217163 2199 scope.go:115] "RemoveContainer" containerID="5cf54fc47fba5b00fb32ff6e4f9437f388d6a27bc4444e09b8eec51a57779837" Feb 23 18:47:57 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:47:57.217574 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:48:11 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:48:11.216716 2199 scope.go:115] "RemoveContainer" containerID="5cf54fc47fba5b00fb32ff6e4f9437f388d6a27bc4444e09b8eec51a57779837" Feb 23 18:48:11 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:48:11.217308 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:48:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:21.246208201Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox 
k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(ceda480adfa9134bac8bb76a47a676e10f3bd097f6f3a078e6aa79b573a5efb1): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d60ee230-5a52-4892-b8f0-ea5a6701634b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:48:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:21.246278262Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox ceda480adfa9134bac8bb76a47a676e10f3bd097f6f3a078e6aa79b573a5efb1" id=d60ee230-5a52-4892-b8f0-ea5a6701634b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:48:21 ip-10-0-136-68 systemd[1]: run-utsns-b8232229\x2d3aff\x2d403f\x2d87cd\x2d8cfa1f39f596.mount: Deactivated successfully. Feb 23 18:48:21 ip-10-0-136-68 systemd[1]: run-ipcns-b8232229\x2d3aff\x2d403f\x2d87cd\x2d8cfa1f39f596.mount: Deactivated successfully. Feb 23 18:48:21 ip-10-0-136-68 systemd[1]: run-netns-b8232229\x2d3aff\x2d403f\x2d87cd\x2d8cfa1f39f596.mount: Deactivated successfully. 
Feb 23 18:48:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:21.271318819Z" level=info msg="runSandbox: deleting pod ID ceda480adfa9134bac8bb76a47a676e10f3bd097f6f3a078e6aa79b573a5efb1 from idIndex" id=d60ee230-5a52-4892-b8f0-ea5a6701634b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:48:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:21.271348684Z" level=info msg="runSandbox: removing pod sandbox ceda480adfa9134bac8bb76a47a676e10f3bd097f6f3a078e6aa79b573a5efb1" id=d60ee230-5a52-4892-b8f0-ea5a6701634b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:48:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:21.271379854Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox ceda480adfa9134bac8bb76a47a676e10f3bd097f6f3a078e6aa79b573a5efb1" id=d60ee230-5a52-4892-b8f0-ea5a6701634b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:48:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:21.271406087Z" level=info msg="runSandbox: unmounting shmPath for sandbox ceda480adfa9134bac8bb76a47a676e10f3bd097f6f3a078e6aa79b573a5efb1" id=d60ee230-5a52-4892-b8f0-ea5a6701634b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:48:21 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-ceda480adfa9134bac8bb76a47a676e10f3bd097f6f3a078e6aa79b573a5efb1-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:48:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:21.277289265Z" level=info msg="runSandbox: removing pod sandbox from storage: ceda480adfa9134bac8bb76a47a676e10f3bd097f6f3a078e6aa79b573a5efb1" id=d60ee230-5a52-4892-b8f0-ea5a6701634b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:48:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:21.278763923Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=d60ee230-5a52-4892-b8f0-ea5a6701634b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:48:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:21.278793598Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=d60ee230-5a52-4892-b8f0-ea5a6701634b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:48:21 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:48:21.278988 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(ceda480adfa9134bac8bb76a47a676e10f3bd097f6f3a078e6aa79b573a5efb1): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:48:21 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:48:21.279055 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(ceda480adfa9134bac8bb76a47a676e10f3bd097f6f3a078e6aa79b573a5efb1): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:48:21 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:48:21.279095 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(ceda480adfa9134bac8bb76a47a676e10f3bd097f6f3a078e6aa79b573a5efb1): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:48:21 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:48:21.279172 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(ceda480adfa9134bac8bb76a47a676e10f3bd097f6f3a078e6aa79b573a5efb1): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 18:48:23 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:48:23.216872 2199 scope.go:115] "RemoveContainer" containerID="5cf54fc47fba5b00fb32ff6e4f9437f388d6a27bc4444e09b8eec51a57779837" Feb 23 18:48:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:48:23.217448 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:48:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:48:26.292240 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:48:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:48:26.292578 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:48:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:48:26.292820 2199 remote_runtime.go:479] "ExecSync cmd from 
runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:48:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:48:26.292861 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:48:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:27.244725711Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(8383bcde66610edf95805c200e8b940bc0f17df5fd25234a88c0131342b0ee60): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b12718f9-e83a-433e-aad0-7407ff38943c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:48:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:27.244781187Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 8383bcde66610edf95805c200e8b940bc0f17df5fd25234a88c0131342b0ee60" id=b12718f9-e83a-433e-aad0-7407ff38943c 
name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:48:27 ip-10-0-136-68 systemd[1]: run-utsns-dc56dd89\x2dab23\x2d4a21\x2daf24\x2d9e941c03ab41.mount: Deactivated successfully. Feb 23 18:48:27 ip-10-0-136-68 systemd[1]: run-ipcns-dc56dd89\x2dab23\x2d4a21\x2daf24\x2d9e941c03ab41.mount: Deactivated successfully. Feb 23 18:48:27 ip-10-0-136-68 systemd[1]: run-netns-dc56dd89\x2dab23\x2d4a21\x2daf24\x2d9e941c03ab41.mount: Deactivated successfully. Feb 23 18:48:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:27.268330826Z" level=info msg="runSandbox: deleting pod ID 8383bcde66610edf95805c200e8b940bc0f17df5fd25234a88c0131342b0ee60 from idIndex" id=b12718f9-e83a-433e-aad0-7407ff38943c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:48:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:27.268373110Z" level=info msg="runSandbox: removing pod sandbox 8383bcde66610edf95805c200e8b940bc0f17df5fd25234a88c0131342b0ee60" id=b12718f9-e83a-433e-aad0-7407ff38943c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:48:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:27.268403095Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 8383bcde66610edf95805c200e8b940bc0f17df5fd25234a88c0131342b0ee60" id=b12718f9-e83a-433e-aad0-7407ff38943c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:48:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:27.268415153Z" level=info msg="runSandbox: unmounting shmPath for sandbox 8383bcde66610edf95805c200e8b940bc0f17df5fd25234a88c0131342b0ee60" id=b12718f9-e83a-433e-aad0-7407ff38943c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:48:27 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-8383bcde66610edf95805c200e8b940bc0f17df5fd25234a88c0131342b0ee60-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:48:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:27.274323307Z" level=info msg="runSandbox: removing pod sandbox from storage: 8383bcde66610edf95805c200e8b940bc0f17df5fd25234a88c0131342b0ee60" id=b12718f9-e83a-433e-aad0-7407ff38943c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:48:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:27.277024336Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=b12718f9-e83a-433e-aad0-7407ff38943c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:48:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:27.277070462Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=b12718f9-e83a-433e-aad0-7407ff38943c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:48:27 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:48:27.278590 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(8383bcde66610edf95805c200e8b940bc0f17df5fd25234a88c0131342b0ee60): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:48:27 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:48:27.278753 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(8383bcde66610edf95805c200e8b940bc0f17df5fd25234a88c0131342b0ee60): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:48:27 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:48:27.278789 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(8383bcde66610edf95805c200e8b940bc0f17df5fd25234a88c0131342b0ee60): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:48:27 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:48:27.278876 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(8383bcde66610edf95805c200e8b940bc0f17df5fd25234a88c0131342b0ee60): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 18:48:32 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:48:32.216500 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 18:48:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:32.216809571Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=278c71ac-7589-47b2-8cc3-58e21df2f767 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:48:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:32.216873222Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:48:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:32.222415080Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:2b51f68c21a5ef41a1fe8b0c47c922c85c12fda0a13dee9317243112a9dd8093 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/2467c6f9-11cb-461d-9cb9-8f65fc6d2743 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:48:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:32.222459084Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:48:33 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:48:33.217513 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:48:33 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:48:33.217752 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: 
no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:48:33 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:48:33.217943 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:48:33 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:48:33.217965 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:48:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:48:34.217349 2199 scope.go:115] "RemoveContainer" containerID="5cf54fc47fba5b00fb32ff6e4f9437f388d6a27bc4444e09b8eec51a57779837" Feb 23 18:48:34 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:48:34.217934 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:48:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:34.234405004Z" level=info 
msg="NetworkStart: stopping network for sandbox 13afba62f2311d89acd9dbd26ef1994bfa7590d7d7d50e2a6a9bef7ab66c6e8b" id=5e506fda-de20-44c2-93b3-d618ff291e64 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:48:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:34.234505874Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:13afba62f2311d89acd9dbd26ef1994bfa7590d7d7d50e2a6a9bef7ab66c6e8b UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/c2486631-d9c1-41f6-9f82-02f01a7f9eee Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:48:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:34.234534596Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:48:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:34.234545566Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:48:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:34.234555419Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:48:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:34.243965188Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(2729eb1e97a733dc87d08b4cc33a99d22df950fc1759d668b4987825e93af971): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=76e179f4-b97f-4934-9c36-081d8f1cb77e 
name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:48:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:34.244002899Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 2729eb1e97a733dc87d08b4cc33a99d22df950fc1759d668b4987825e93af971" id=76e179f4-b97f-4934-9c36-081d8f1cb77e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:48:34 ip-10-0-136-68 systemd[1]: run-utsns-acdfde08\x2d3dee\x2d4916\x2daae0\x2d8c6d73f93d88.mount: Deactivated successfully. Feb 23 18:48:34 ip-10-0-136-68 systemd[1]: run-ipcns-acdfde08\x2d3dee\x2d4916\x2daae0\x2d8c6d73f93d88.mount: Deactivated successfully. Feb 23 18:48:34 ip-10-0-136-68 systemd[1]: run-netns-acdfde08\x2d3dee\x2d4916\x2daae0\x2d8c6d73f93d88.mount: Deactivated successfully. Feb 23 18:48:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:34.260323425Z" level=info msg="runSandbox: deleting pod ID 2729eb1e97a733dc87d08b4cc33a99d22df950fc1759d668b4987825e93af971 from idIndex" id=76e179f4-b97f-4934-9c36-081d8f1cb77e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:48:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:34.260355704Z" level=info msg="runSandbox: removing pod sandbox 2729eb1e97a733dc87d08b4cc33a99d22df950fc1759d668b4987825e93af971" id=76e179f4-b97f-4934-9c36-081d8f1cb77e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:48:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:34.260380650Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 2729eb1e97a733dc87d08b4cc33a99d22df950fc1759d668b4987825e93af971" id=76e179f4-b97f-4934-9c36-081d8f1cb77e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:48:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:34.260402094Z" level=info msg="runSandbox: unmounting shmPath for sandbox 2729eb1e97a733dc87d08b4cc33a99d22df950fc1759d668b4987825e93af971" id=76e179f4-b97f-4934-9c36-081d8f1cb77e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:48:34 ip-10-0-136-68 systemd[1]: 
run-containers-storage-overlay\x2dcontainers-2729eb1e97a733dc87d08b4cc33a99d22df950fc1759d668b4987825e93af971-userdata-shm.mount: Deactivated successfully. Feb 23 18:48:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:34.266306520Z" level=info msg="runSandbox: removing pod sandbox from storage: 2729eb1e97a733dc87d08b4cc33a99d22df950fc1759d668b4987825e93af971" id=76e179f4-b97f-4934-9c36-081d8f1cb77e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:48:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:34.267745112Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=76e179f4-b97f-4934-9c36-081d8f1cb77e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:48:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:34.267773540Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=76e179f4-b97f-4934-9c36-081d8f1cb77e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:48:34 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:48:34.267939 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(2729eb1e97a733dc87d08b4cc33a99d22df950fc1759d668b4987825e93af971): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:48:34 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:48:34.267980 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(2729eb1e97a733dc87d08b4cc33a99d22df950fc1759d668b4987825e93af971): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:48:34 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:48:34.268005 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(2729eb1e97a733dc87d08b4cc33a99d22df950fc1759d668b4987825e93af971): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:48:34 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:48:34.268058 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(2729eb1e97a733dc87d08b4cc33a99d22df950fc1759d668b4987825e93af971): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 18:48:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:36.246902076Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(87317b844e8e3abd9c467ad9231114902302a24f65326ac07ff600b403382145): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b0c4fe88-70c8-4be8-97f9-9192ebb34fc8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:48:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:36.246958675Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 87317b844e8e3abd9c467ad9231114902302a24f65326ac07ff600b403382145" id=b0c4fe88-70c8-4be8-97f9-9192ebb34fc8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:48:36 ip-10-0-136-68 systemd[1]: run-utsns-2e14d549\x2d2182\x2d4cdd\x2d9e15\x2df147ae4343b9.mount: Deactivated successfully. Feb 23 18:48:36 ip-10-0-136-68 systemd[1]: run-ipcns-2e14d549\x2d2182\x2d4cdd\x2d9e15\x2df147ae4343b9.mount: Deactivated successfully. Feb 23 18:48:36 ip-10-0-136-68 systemd[1]: run-netns-2e14d549\x2d2182\x2d4cdd\x2d9e15\x2df147ae4343b9.mount: Deactivated successfully. 
Feb 23 18:48:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:36.279342857Z" level=info msg="runSandbox: deleting pod ID 87317b844e8e3abd9c467ad9231114902302a24f65326ac07ff600b403382145 from idIndex" id=b0c4fe88-70c8-4be8-97f9-9192ebb34fc8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:48:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:36.279383682Z" level=info msg="runSandbox: removing pod sandbox 87317b844e8e3abd9c467ad9231114902302a24f65326ac07ff600b403382145" id=b0c4fe88-70c8-4be8-97f9-9192ebb34fc8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:48:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:36.279428997Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 87317b844e8e3abd9c467ad9231114902302a24f65326ac07ff600b403382145" id=b0c4fe88-70c8-4be8-97f9-9192ebb34fc8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:48:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:36.279446270Z" level=info msg="runSandbox: unmounting shmPath for sandbox 87317b844e8e3abd9c467ad9231114902302a24f65326ac07ff600b403382145" id=b0c4fe88-70c8-4be8-97f9-9192ebb34fc8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:48:36 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-87317b844e8e3abd9c467ad9231114902302a24f65326ac07ff600b403382145-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:48:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:36.285307602Z" level=info msg="runSandbox: removing pod sandbox from storage: 87317b844e8e3abd9c467ad9231114902302a24f65326ac07ff600b403382145" id=b0c4fe88-70c8-4be8-97f9-9192ebb34fc8 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:48:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:36.286758157Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=b0c4fe88-70c8-4be8-97f9-9192ebb34fc8 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:48:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:36.286784429Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=b0c4fe88-70c8-4be8-97f9-9192ebb34fc8 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:48:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:48:36.287024 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(87317b844e8e3abd9c467ad9231114902302a24f65326ac07ff600b403382145): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 18:48:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:48:36.287084 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(87317b844e8e3abd9c467ad9231114902302a24f65326ac07ff600b403382145): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 18:48:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:48:36.287108 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(87317b844e8e3abd9c467ad9231114902302a24f65326ac07ff600b403382145): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 18:48:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:48:36.287173 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(87317b844e8e3abd9c467ad9231114902302a24f65326ac07ff600b403382145): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77
Feb 23 18:48:42 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:48:42.216809 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk"
Feb 23 18:48:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:42.217089575Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=9211da4f-39b7-4236-9623-77675294154a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:48:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:42.217153497Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 18:48:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:42.222586770Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:25d236b706465fa69b07f74e06582370e9cede0842e3e48d881efd54b16bc833 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/d7b7f3d5-ae21-4b7f-a303-2bb1ae466092 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:48:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:42.222610767Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:48:45 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:48:45.216373 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz"
Feb 23 18:48:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:45.216749026Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=46bf6269-44d5-4120-9fa6-1a605c976bd5 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:48:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:45.216813411Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 18:48:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:45.222685452Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:5095fa161ab6859137e868209b7b61a844bd9b7800abbf5378012ebce41170c1 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/c3ed7294-2862-40c8-9283-dd71477ffdfd Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:48:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:45.222723227Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:48:47 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:48:47.217085 2199 scope.go:115] "RemoveContainer" containerID="5cf54fc47fba5b00fb32ff6e4f9437f388d6a27bc4444e09b8eec51a57779837"
Feb 23 18:48:47 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:48:47.217531 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 18:48:50 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:48:50.216819 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 18:48:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:50.217221947Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=96d36f58-2cf4-4d86-a48a-6fef45056dca name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:48:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:50.217314283Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 18:48:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:50.222983585Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:35d8c8c3d2f404d65fac334412de152eeb26dcee8e4a476679536ee12101cc12 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/aafeaa8a-92a2-42f5-ab20-f092369a2e8c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:48:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:48:50.223022496Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:48:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:48:56.292411 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:48:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:48:56.292683 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:48:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:48:56.292885 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:48:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:48:56.292906 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 18:48:58 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:48:58.216787 2199 scope.go:115] "RemoveContainer" containerID="5cf54fc47fba5b00fb32ff6e4f9437f388d6a27bc4444e09b8eec51a57779837"
Feb 23 18:48:58 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:48:58.217194 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 18:49:11 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:49:11.216848 2199 scope.go:115] "RemoveContainer" containerID="5cf54fc47fba5b00fb32ff6e4f9437f388d6a27bc4444e09b8eec51a57779837"
Feb 23 18:49:11 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:49:11.217224 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 18:49:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:49:17.236436981Z" level=info msg="NetworkStart: stopping network for sandbox 2b51f68c21a5ef41a1fe8b0c47c922c85c12fda0a13dee9317243112a9dd8093" id=278c71ac-7589-47b2-8cc3-58e21df2f767 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:49:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:49:17.236551650Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:2b51f68c21a5ef41a1fe8b0c47c922c85c12fda0a13dee9317243112a9dd8093 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/2467c6f9-11cb-461d-9cb9-8f65fc6d2743 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:49:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:49:17.236578570Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 18:49:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:49:17.236589144Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 18:49:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:49:17.236598061Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:49:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:49:19.243892890Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(13afba62f2311d89acd9dbd26ef1994bfa7590d7d7d50e2a6a9bef7ab66c6e8b): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=5e506fda-de20-44c2-93b3-d618ff291e64 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:49:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:49:19.243947385Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 13afba62f2311d89acd9dbd26ef1994bfa7590d7d7d50e2a6a9bef7ab66c6e8b" id=5e506fda-de20-44c2-93b3-d618ff291e64 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:49:19 ip-10-0-136-68 systemd[1]: run-utsns-c2486631\x2dd9c1\x2d41f6\x2d9f82\x2d02f01a7f9eee.mount: Deactivated successfully.
Feb 23 18:49:19 ip-10-0-136-68 systemd[1]: run-ipcns-c2486631\x2dd9c1\x2d41f6\x2d9f82\x2d02f01a7f9eee.mount: Deactivated successfully.
Feb 23 18:49:19 ip-10-0-136-68 systemd[1]: run-netns-c2486631\x2dd9c1\x2d41f6\x2d9f82\x2d02f01a7f9eee.mount: Deactivated successfully.
Feb 23 18:49:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:49:19.273335837Z" level=info msg="runSandbox: deleting pod ID 13afba62f2311d89acd9dbd26ef1994bfa7590d7d7d50e2a6a9bef7ab66c6e8b from idIndex" id=5e506fda-de20-44c2-93b3-d618ff291e64 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:49:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:49:19.273366943Z" level=info msg="runSandbox: removing pod sandbox 13afba62f2311d89acd9dbd26ef1994bfa7590d7d7d50e2a6a9bef7ab66c6e8b" id=5e506fda-de20-44c2-93b3-d618ff291e64 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:49:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:49:19.273405004Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 13afba62f2311d89acd9dbd26ef1994bfa7590d7d7d50e2a6a9bef7ab66c6e8b" id=5e506fda-de20-44c2-93b3-d618ff291e64 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:49:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:49:19.273427696Z" level=info msg="runSandbox: unmounting shmPath for sandbox 13afba62f2311d89acd9dbd26ef1994bfa7590d7d7d50e2a6a9bef7ab66c6e8b" id=5e506fda-de20-44c2-93b3-d618ff291e64 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:49:19 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-13afba62f2311d89acd9dbd26ef1994bfa7590d7d7d50e2a6a9bef7ab66c6e8b-userdata-shm.mount: Deactivated successfully.
Feb 23 18:49:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:49:19.277302222Z" level=info msg="runSandbox: removing pod sandbox from storage: 13afba62f2311d89acd9dbd26ef1994bfa7590d7d7d50e2a6a9bef7ab66c6e8b" id=5e506fda-de20-44c2-93b3-d618ff291e64 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:49:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:49:19.278878045Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=5e506fda-de20-44c2-93b3-d618ff291e64 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:49:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:49:19.278912362Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=5e506fda-de20-44c2-93b3-d618ff291e64 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:49:19 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:49:19.279123 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(13afba62f2311d89acd9dbd26ef1994bfa7590d7d7d50e2a6a9bef7ab66c6e8b): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 18:49:19 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:49:19.279175 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(13afba62f2311d89acd9dbd26ef1994bfa7590d7d7d50e2a6a9bef7ab66c6e8b): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 18:49:19 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:49:19.279205 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(13afba62f2311d89acd9dbd26ef1994bfa7590d7d7d50e2a6a9bef7ab66c6e8b): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 18:49:19 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:49:19.279338 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(13afba62f2311d89acd9dbd26ef1994bfa7590d7d7d50e2a6a9bef7ab66c6e8b): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1
Feb 23 18:49:23 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:49:23.216615 2199 scope.go:115] "RemoveContainer" containerID="5cf54fc47fba5b00fb32ff6e4f9437f388d6a27bc4444e09b8eec51a57779837"
Feb 23 18:49:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:49:23.217188 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 18:49:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:49:26.292424 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:49:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:49:26.292675 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:49:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:49:26.292881 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:49:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:49:26.292922 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 18:49:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:49:27.236409736Z" level=info msg="NetworkStart: stopping network for sandbox 25d236b706465fa69b07f74e06582370e9cede0842e3e48d881efd54b16bc833" id=9211da4f-39b7-4236-9623-77675294154a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:49:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:49:27.236547319Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:25d236b706465fa69b07f74e06582370e9cede0842e3e48d881efd54b16bc833 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/d7b7f3d5-ae21-4b7f-a303-2bb1ae466092 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:49:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:49:27.236588078Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 18:49:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:49:27.236598271Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 18:49:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:49:27.236608433Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:49:30 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:49:30.217133 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 18:49:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:49:30.217565696Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=f07bc475-a8b3-4c18-b6aa-1b9506d06a31 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:49:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:49:30.217630458Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 18:49:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:49:30.223150018Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:abcf08c2a4de3b8014a92379a9d50c6ba12d462b79f92be4db9804005b8729cd UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/fb3f888d-b439-47b7-9fca-8fbc0e0770c9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:49:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:49:30.223212796Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:49:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:49:30.234405420Z" level=info msg="NetworkStart: stopping network for sandbox 5095fa161ab6859137e868209b7b61a844bd9b7800abbf5378012ebce41170c1" id=46bf6269-44d5-4120-9fa6-1a605c976bd5 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:49:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:49:30.234502143Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:5095fa161ab6859137e868209b7b61a844bd9b7800abbf5378012ebce41170c1 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/c3ed7294-2862-40c8-9283-dd71477ffdfd Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:49:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:49:30.234538225Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 18:49:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:49:30.234549647Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 18:49:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:49:30.234560221Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:49:35 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:49:35.217358 2199 scope.go:115] "RemoveContainer" containerID="5cf54fc47fba5b00fb32ff6e4f9437f388d6a27bc4444e09b8eec51a57779837"
Feb 23 18:49:35 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:49:35.217910 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 18:49:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:49:35.234383816Z" level=info msg="NetworkStart: stopping network for sandbox 35d8c8c3d2f404d65fac334412de152eeb26dcee8e4a476679536ee12101cc12" id=96d36f58-2cf4-4d86-a48a-6fef45056dca name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:49:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:49:35.234519327Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:35d8c8c3d2f404d65fac334412de152eeb26dcee8e4a476679536ee12101cc12 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/aafeaa8a-92a2-42f5-ab20-f092369a2e8c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:49:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:49:35.234558723Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 18:49:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:49:35.234571465Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 18:49:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:49:35.234582107Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:49:38 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:49:38.217708 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:49:38 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:49:38.218038 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:49:38 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:49:38.218309 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:49:38 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:49:38.218344 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 18:49:50 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:49:50.217328 2199 scope.go:115] "RemoveContainer" containerID="5cf54fc47fba5b00fb32ff6e4f9437f388d6a27bc4444e09b8eec51a57779837"
Feb 23 18:49:50 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:49:50.217899 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 18:49:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:49:56.291724 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:49:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:49:56.291951 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:49:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:49:56.292207 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:49:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:49:56.292268 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 18:50:01 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:50:01.217279 2199 scope.go:115] "RemoveContainer" containerID="5cf54fc47fba5b00fb32ff6e4f9437f388d6a27bc4444e09b8eec51a57779837"
Feb 23 18:50:01 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:50:01.217707 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 18:50:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:02.246875194Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(2b51f68c21a5ef41a1fe8b0c47c922c85c12fda0a13dee9317243112a9dd8093): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=278c71ac-7589-47b2-8cc3-58e21df2f767 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:50:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:02.246922648Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 2b51f68c21a5ef41a1fe8b0c47c922c85c12fda0a13dee9317243112a9dd8093" id=278c71ac-7589-47b2-8cc3-58e21df2f767 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:50:02 ip-10-0-136-68 systemd[1]: run-utsns-2467c6f9\x2d11cb\x2d461d\x2d9cb9\x2d8f65fc6d2743.mount: Deactivated successfully.
Feb 23 18:50:02 ip-10-0-136-68 systemd[1]: run-ipcns-2467c6f9\x2d11cb\x2d461d\x2d9cb9\x2d8f65fc6d2743.mount: Deactivated successfully.
Feb 23 18:50:02 ip-10-0-136-68 systemd[1]: run-netns-2467c6f9\x2d11cb\x2d461d\x2d9cb9\x2d8f65fc6d2743.mount: Deactivated successfully.
Feb 23 18:50:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:02.265315387Z" level=info msg="runSandbox: deleting pod ID 2b51f68c21a5ef41a1fe8b0c47c922c85c12fda0a13dee9317243112a9dd8093 from idIndex" id=278c71ac-7589-47b2-8cc3-58e21df2f767 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:50:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:02.265349370Z" level=info msg="runSandbox: removing pod sandbox 2b51f68c21a5ef41a1fe8b0c47c922c85c12fda0a13dee9317243112a9dd8093" id=278c71ac-7589-47b2-8cc3-58e21df2f767 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:50:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:02.265373645Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 2b51f68c21a5ef41a1fe8b0c47c922c85c12fda0a13dee9317243112a9dd8093" id=278c71ac-7589-47b2-8cc3-58e21df2f767 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:50:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:02.265385873Z" level=info msg="runSandbox: unmounting shmPath for sandbox 2b51f68c21a5ef41a1fe8b0c47c922c85c12fda0a13dee9317243112a9dd8093" id=278c71ac-7589-47b2-8cc3-58e21df2f767 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:50:02 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-2b51f68c21a5ef41a1fe8b0c47c922c85c12fda0a13dee9317243112a9dd8093-userdata-shm.mount: Deactivated successfully.
Feb 23 18:50:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:02.277302929Z" level=info msg="runSandbox: removing pod sandbox from storage: 2b51f68c21a5ef41a1fe8b0c47c922c85c12fda0a13dee9317243112a9dd8093" id=278c71ac-7589-47b2-8cc3-58e21df2f767 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:50:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:02.278813105Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=278c71ac-7589-47b2-8cc3-58e21df2f767 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:50:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:02.278838663Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=278c71ac-7589-47b2-8cc3-58e21df2f767 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:50:02 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:50:02.278993 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(2b51f68c21a5ef41a1fe8b0c47c922c85c12fda0a13dee9317243112a9dd8093): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 18:50:02 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:50:02.279044 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(2b51f68c21a5ef41a1fe8b0c47c922c85c12fda0a13dee9317243112a9dd8093): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4"
Feb 23 18:50:02 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:50:02.279067 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(2b51f68c21a5ef41a1fe8b0c47c922c85c12fda0a13dee9317243112a9dd8093): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4"
Feb 23 18:50:02 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:50:02.279120 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(2b51f68c21a5ef41a1fe8b0c47c922c85c12fda0a13dee9317243112a9dd8093): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f
Feb 23 18:50:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:12.245719477Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(25d236b706465fa69b07f74e06582370e9cede0842e3e48d881efd54b16bc833): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=9211da4f-39b7-4236-9623-77675294154a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:50:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:12.245759149Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 25d236b706465fa69b07f74e06582370e9cede0842e3e48d881efd54b16bc833" id=9211da4f-39b7-4236-9623-77675294154a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:50:12 ip-10-0-136-68 systemd[1]: run-utsns-d7b7f3d5\x2dae21\x2d4b7f\x2da303\x2d2bb1ae466092.mount: Deactivated successfully.
Feb 23 18:50:12 ip-10-0-136-68 systemd[1]: run-ipcns-d7b7f3d5\x2dae21\x2d4b7f\x2da303\x2d2bb1ae466092.mount: Deactivated successfully.
Feb 23 18:50:12 ip-10-0-136-68 systemd[1]: run-netns-d7b7f3d5\x2dae21\x2d4b7f\x2da303\x2d2bb1ae466092.mount: Deactivated successfully.
Feb 23 18:50:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:12.271335502Z" level=info msg="runSandbox: deleting pod ID 25d236b706465fa69b07f74e06582370e9cede0842e3e48d881efd54b16bc833 from idIndex" id=9211da4f-39b7-4236-9623-77675294154a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:50:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:12.271374791Z" level=info msg="runSandbox: removing pod sandbox 25d236b706465fa69b07f74e06582370e9cede0842e3e48d881efd54b16bc833" id=9211da4f-39b7-4236-9623-77675294154a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:50:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:12.271416061Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 25d236b706465fa69b07f74e06582370e9cede0842e3e48d881efd54b16bc833" id=9211da4f-39b7-4236-9623-77675294154a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:50:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:12.271439160Z" level=info msg="runSandbox: unmounting shmPath for sandbox 25d236b706465fa69b07f74e06582370e9cede0842e3e48d881efd54b16bc833" id=9211da4f-39b7-4236-9623-77675294154a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:50:12 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-25d236b706465fa69b07f74e06582370e9cede0842e3e48d881efd54b16bc833-userdata-shm.mount: Deactivated successfully.
Feb 23 18:50:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:12.279303559Z" level=info msg="runSandbox: removing pod sandbox from storage: 25d236b706465fa69b07f74e06582370e9cede0842e3e48d881efd54b16bc833" id=9211da4f-39b7-4236-9623-77675294154a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:50:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:12.280771096Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=9211da4f-39b7-4236-9623-77675294154a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:50:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:12.280799414Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=9211da4f-39b7-4236-9623-77675294154a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:50:12 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:50:12.280945 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(25d236b706465fa69b07f74e06582370e9cede0842e3e48d881efd54b16bc833): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 18:50:12 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:50:12.280992 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(25d236b706465fa69b07f74e06582370e9cede0842e3e48d881efd54b16bc833): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk"
Feb 23 18:50:12 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:50:12.281014 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(25d236b706465fa69b07f74e06582370e9cede0842e3e48d881efd54b16bc833): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk"
Feb 23 18:50:12 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:50:12.281064 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(25d236b706465fa69b07f74e06582370e9cede0842e3e48d881efd54b16bc833): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687
Feb 23 18:50:15 ip-10-0-136-68 NetworkManager[1177]: [1677178215.0005] dhcp4 (br-ex): state changed new lease, address=10.0.136.68
Feb 23 18:50:15 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:50:15.216340 2199 scope.go:115] "RemoveContainer" containerID="5cf54fc47fba5b00fb32ff6e4f9437f388d6a27bc4444e09b8eec51a57779837"
Feb 23 18:50:15 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:50:15.216715 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 18:50:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:15.237782328Z" level=info msg="NetworkStart: stopping network for sandbox abcf08c2a4de3b8014a92379a9d50c6ba12d462b79f92be4db9804005b8729cd" id=f07bc475-a8b3-4c18-b6aa-1b9506d06a31 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:50:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:15.237909980Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:abcf08c2a4de3b8014a92379a9d50c6ba12d462b79f92be4db9804005b8729cd UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/fb3f888d-b439-47b7-9fca-8fbc0e0770c9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:50:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:15.237981589Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 18:50:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:15.237992914Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 18:50:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:15.238002041Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:50:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:15.242976898Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(5095fa161ab6859137e868209b7b61a844bd9b7800abbf5378012ebce41170c1): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=46bf6269-44d5-4120-9fa6-1a605c976bd5 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:50:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:15.243013603Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 5095fa161ab6859137e868209b7b61a844bd9b7800abbf5378012ebce41170c1" id=46bf6269-44d5-4120-9fa6-1a605c976bd5 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:50:15 ip-10-0-136-68 systemd[1]: run-utsns-c3ed7294\x2d2862\x2d40c8\x2d9283\x2ddd71477ffdfd.mount: Deactivated successfully.
Feb 23 18:50:15 ip-10-0-136-68 systemd[1]: run-ipcns-c3ed7294\x2d2862\x2d40c8\x2d9283\x2ddd71477ffdfd.mount: Deactivated successfully.
Feb 23 18:50:15 ip-10-0-136-68 systemd[1]: run-netns-c3ed7294\x2d2862\x2d40c8\x2d9283\x2ddd71477ffdfd.mount: Deactivated successfully.
Feb 23 18:50:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:15.258321864Z" level=info msg="runSandbox: deleting pod ID 5095fa161ab6859137e868209b7b61a844bd9b7800abbf5378012ebce41170c1 from idIndex" id=46bf6269-44d5-4120-9fa6-1a605c976bd5 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:50:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:15.258359551Z" level=info msg="runSandbox: removing pod sandbox 5095fa161ab6859137e868209b7b61a844bd9b7800abbf5378012ebce41170c1" id=46bf6269-44d5-4120-9fa6-1a605c976bd5 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:50:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:15.258393973Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 5095fa161ab6859137e868209b7b61a844bd9b7800abbf5378012ebce41170c1" id=46bf6269-44d5-4120-9fa6-1a605c976bd5 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:50:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:15.258413205Z" level=info msg="runSandbox: unmounting shmPath for sandbox 5095fa161ab6859137e868209b7b61a844bd9b7800abbf5378012ebce41170c1" id=46bf6269-44d5-4120-9fa6-1a605c976bd5 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:50:15 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-5095fa161ab6859137e868209b7b61a844bd9b7800abbf5378012ebce41170c1-userdata-shm.mount: Deactivated successfully.
Feb 23 18:50:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:15.263317952Z" level=info msg="runSandbox: removing pod sandbox from storage: 5095fa161ab6859137e868209b7b61a844bd9b7800abbf5378012ebce41170c1" id=46bf6269-44d5-4120-9fa6-1a605c976bd5 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:50:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:15.265074326Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=46bf6269-44d5-4120-9fa6-1a605c976bd5 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:50:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:15.265103419Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=46bf6269-44d5-4120-9fa6-1a605c976bd5 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:50:15 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:50:15.265300 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(5095fa161ab6859137e868209b7b61a844bd9b7800abbf5378012ebce41170c1): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 18:50:15 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:50:15.265351 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(5095fa161ab6859137e868209b7b61a844bd9b7800abbf5378012ebce41170c1): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz"
Feb 23 18:50:15 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:50:15.265375 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(5095fa161ab6859137e868209b7b61a844bd9b7800abbf5378012ebce41170c1): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz"
Feb 23 18:50:15 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:50:15.265432 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(5095fa161ab6859137e868209b7b61a844bd9b7800abbf5378012ebce41170c1): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7
Feb 23 18:50:18 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:50:18.216756 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-657v4"
Feb 23 18:50:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:18.217074247Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=9eda74d7-8d78-4bf6-ab96-3c69e1af9471 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:50:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:18.217137774Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 18:50:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:18.223686014Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:f5e695eb07a1b7c1e2f39baf451cafbc2fe34ccc5d7727a369c69cea54b1e750 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/bfab49a9-23d8-43dc-9cb2-60c67110db56 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:50:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:18.223719505Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:50:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:20.203086048Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3" id=6313ce9f-4bf5-4faa-8f4d-6ca157b20033 name=/runtime.v1.ImageService/ImageStatus
Feb 23 18:50:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:20.203312000Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:51cb7555d0a960fc0daae74a5b5c47c19fd1ae7bd31e2616ebc88f696f511e4d,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3],Size_:351828271,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=6313ce9f-4bf5-4faa-8f4d-6ca157b20033 name=/runtime.v1.ImageService/ImageStatus
Feb 23 18:50:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:20.244587624Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(35d8c8c3d2f404d65fac334412de152eeb26dcee8e4a476679536ee12101cc12): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=96d36f58-2cf4-4d86-a48a-6fef45056dca name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:50:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:20.244633846Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 35d8c8c3d2f404d65fac334412de152eeb26dcee8e4a476679536ee12101cc12" id=96d36f58-2cf4-4d86-a48a-6fef45056dca name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:50:20 ip-10-0-136-68 systemd[1]: run-utsns-aafeaa8a\x2d92a2\x2d42f5\x2dab20\x2df092369a2e8c.mount: Deactivated successfully.
Feb 23 18:50:20 ip-10-0-136-68 systemd[1]: run-ipcns-aafeaa8a\x2d92a2\x2d42f5\x2dab20\x2df092369a2e8c.mount: Deactivated successfully.
Feb 23 18:50:20 ip-10-0-136-68 systemd[1]: run-netns-aafeaa8a\x2d92a2\x2d42f5\x2dab20\x2df092369a2e8c.mount: Deactivated successfully.
Feb 23 18:50:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:20.273327821Z" level=info msg="runSandbox: deleting pod ID 35d8c8c3d2f404d65fac334412de152eeb26dcee8e4a476679536ee12101cc12 from idIndex" id=96d36f58-2cf4-4d86-a48a-6fef45056dca name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:50:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:20.273362119Z" level=info msg="runSandbox: removing pod sandbox 35d8c8c3d2f404d65fac334412de152eeb26dcee8e4a476679536ee12101cc12" id=96d36f58-2cf4-4d86-a48a-6fef45056dca name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:50:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:20.273393096Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 35d8c8c3d2f404d65fac334412de152eeb26dcee8e4a476679536ee12101cc12" id=96d36f58-2cf4-4d86-a48a-6fef45056dca name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:50:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:20.273425724Z" level=info msg="runSandbox: unmounting shmPath for sandbox 35d8c8c3d2f404d65fac334412de152eeb26dcee8e4a476679536ee12101cc12" id=96d36f58-2cf4-4d86-a48a-6fef45056dca name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:50:20 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-35d8c8c3d2f404d65fac334412de152eeb26dcee8e4a476679536ee12101cc12-userdata-shm.mount: Deactivated successfully.
Feb 23 18:50:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:20.280342440Z" level=info msg="runSandbox: removing pod sandbox from storage: 35d8c8c3d2f404d65fac334412de152eeb26dcee8e4a476679536ee12101cc12" id=96d36f58-2cf4-4d86-a48a-6fef45056dca name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:50:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:20.281799734Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=96d36f58-2cf4-4d86-a48a-6fef45056dca name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:50:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:20.281831225Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=96d36f58-2cf4-4d86-a48a-6fef45056dca name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:50:20 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:50:20.282012 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(35d8c8c3d2f404d65fac334412de152eeb26dcee8e4a476679536ee12101cc12): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 18:50:20 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:50:20.282074 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(35d8c8c3d2f404d65fac334412de152eeb26dcee8e4a476679536ee12101cc12): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 18:50:20 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:50:20.282116 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(35d8c8c3d2f404d65fac334412de152eeb26dcee8e4a476679536ee12101cc12): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 18:50:20 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:50:20.282203 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(35d8c8c3d2f404d65fac334412de152eeb26dcee8e4a476679536ee12101cc12): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77
Feb 23 18:50:25 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:50:25.216820 2199 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:50:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:25.217129602Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=89aa9138-d350-4217-8941-602242d75a39 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:50:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:25.217193663Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:50:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:25.222610772Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:68bd3cc74f9ae5c32ec7b06dbc47b4e9a1d2c4fdb85fd3afba497eb5bd6a39d3 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/d1597142-53b4-4e0b-b256-f489d7960496 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:50:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:25.222645924Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:50:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:50:26.291760 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:50:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:50:26.292005 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:50:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:50:26.292280 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:50:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:50:26.292302 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:50:27 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:50:27.216891 2199 scope.go:115] "RemoveContainer" containerID="5cf54fc47fba5b00fb32ff6e4f9437f388d6a27bc4444e09b8eec51a57779837" Feb 23 18:50:27 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:50:27.217346 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" 
podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:50:29 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:50:29.217088 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:50:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:29.217450532Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=ead3de2d-7f8d-4243-bbb7-d0683499261d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:50:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:29.217507274Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:50:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:29.223066190Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:544649dc06d6bf447453024fb340302d5e438d22d1a59c670bf08392e3c04bf8 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/6a670a78-97fd-49f4-824b-cefb8ff7da05 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:50:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:29.223102449Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:50:33 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:50:33.217108 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:50:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:33.217557968Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=9bc87465-eb11-4d9e-8061-d224d2a29fc6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:50:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:33.217625849Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:50:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:33.223021236Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:f404a81e19bc2e97d39f519e837bb08a9cdee2ba317ceec960fadc9b69a8512a UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/959a145b-aaf9-4eb2-89e3-4d9da2beb14a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:50:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:50:33.223327019Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:50:42 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:50:42.217418 2199 scope.go:115] "RemoveContainer" containerID="5cf54fc47fba5b00fb32ff6e4f9437f388d6a27bc4444e09b8eec51a57779837" Feb 23 18:50:42 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:50:42.217966 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:50:49 ip-10-0-136-68 kubenswrapper[2199]: E0223 
18:50:49.217437 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:50:49 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:50:49.217704 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:50:49 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:50:49.217904 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:50:49 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:50:49.217947 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" 
podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:50:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:50:54.217327 2199 scope.go:115] "RemoveContainer" containerID="5cf54fc47fba5b00fb32ff6e4f9437f388d6a27bc4444e09b8eec51a57779837" Feb 23 18:50:54 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:50:54.217902 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:50:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:50:56.292601 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:50:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:50:56.292860 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:50:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:50:56.293075 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container 
is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:50:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:50:56.293108 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:51:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:00.248029411Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(abcf08c2a4de3b8014a92379a9d50c6ba12d462b79f92be4db9804005b8729cd): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f07bc475-a8b3-4c18-b6aa-1b9506d06a31 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:51:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:00.248080785Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox abcf08c2a4de3b8014a92379a9d50c6ba12d462b79f92be4db9804005b8729cd" id=f07bc475-a8b3-4c18-b6aa-1b9506d06a31 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:51:00 
ip-10-0-136-68 systemd[1]: run-utsns-fb3f888d\x2db439\x2d47b7\x2d9fca\x2d8fbc0e0770c9.mount: Deactivated successfully. Feb 23 18:51:00 ip-10-0-136-68 systemd[1]: run-ipcns-fb3f888d\x2db439\x2d47b7\x2d9fca\x2d8fbc0e0770c9.mount: Deactivated successfully. Feb 23 18:51:00 ip-10-0-136-68 systemd[1]: run-netns-fb3f888d\x2db439\x2d47b7\x2d9fca\x2d8fbc0e0770c9.mount: Deactivated successfully. Feb 23 18:51:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:00.272325013Z" level=info msg="runSandbox: deleting pod ID abcf08c2a4de3b8014a92379a9d50c6ba12d462b79f92be4db9804005b8729cd from idIndex" id=f07bc475-a8b3-4c18-b6aa-1b9506d06a31 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:51:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:00.272363367Z" level=info msg="runSandbox: removing pod sandbox abcf08c2a4de3b8014a92379a9d50c6ba12d462b79f92be4db9804005b8729cd" id=f07bc475-a8b3-4c18-b6aa-1b9506d06a31 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:51:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:00.272392341Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox abcf08c2a4de3b8014a92379a9d50c6ba12d462b79f92be4db9804005b8729cd" id=f07bc475-a8b3-4c18-b6aa-1b9506d06a31 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:51:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:00.272405165Z" level=info msg="runSandbox: unmounting shmPath for sandbox abcf08c2a4de3b8014a92379a9d50c6ba12d462b79f92be4db9804005b8729cd" id=f07bc475-a8b3-4c18-b6aa-1b9506d06a31 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:51:00 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-abcf08c2a4de3b8014a92379a9d50c6ba12d462b79f92be4db9804005b8729cd-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:51:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:00.279312812Z" level=info msg="runSandbox: removing pod sandbox from storage: abcf08c2a4de3b8014a92379a9d50c6ba12d462b79f92be4db9804005b8729cd" id=f07bc475-a8b3-4c18-b6aa-1b9506d06a31 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:51:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:00.280919506Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=f07bc475-a8b3-4c18-b6aa-1b9506d06a31 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:51:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:00.280953207Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=f07bc475-a8b3-4c18-b6aa-1b9506d06a31 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:51:00 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:51:00.281194 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(abcf08c2a4de3b8014a92379a9d50c6ba12d462b79f92be4db9804005b8729cd): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:51:00 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:51:00.281364 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(abcf08c2a4de3b8014a92379a9d50c6ba12d462b79f92be4db9804005b8729cd): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:51:00 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:51:00.281405 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(abcf08c2a4de3b8014a92379a9d50c6ba12d462b79f92be4db9804005b8729cd): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:51:00 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:51:00.281496 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(abcf08c2a4de3b8014a92379a9d50c6ba12d462b79f92be4db9804005b8729cd): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 18:51:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:03.236278207Z" level=info msg="NetworkStart: stopping network for sandbox f5e695eb07a1b7c1e2f39baf451cafbc2fe34ccc5d7727a369c69cea54b1e750" id=9eda74d7-8d78-4bf6-ab96-3c69e1af9471 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:51:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:03.236410637Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:f5e695eb07a1b7c1e2f39baf451cafbc2fe34ccc5d7727a369c69cea54b1e750 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/bfab49a9-23d8-43dc-9cb2-60c67110db56 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:51:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:03.236447905Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:51:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:03.236459529Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:51:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:03.236469173Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:51:05 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:51:05.217081 2199 scope.go:115] "RemoveContainer" containerID="5cf54fc47fba5b00fb32ff6e4f9437f388d6a27bc4444e09b8eec51a57779837" Feb 23 18:51:05 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:51:05.217489 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver 
pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:51:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:10.234114996Z" level=info msg="NetworkStart: stopping network for sandbox 68bd3cc74f9ae5c32ec7b06dbc47b4e9a1d2c4fdb85fd3afba497eb5bd6a39d3" id=89aa9138-d350-4217-8941-602242d75a39 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:51:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:10.234225037Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:68bd3cc74f9ae5c32ec7b06dbc47b4e9a1d2c4fdb85fd3afba497eb5bd6a39d3 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/d1597142-53b4-4e0b-b256-f489d7960496 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:51:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:10.234290200Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:51:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:10.234318145Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:51:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:10.234328676Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:51:11 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:51:11.217305 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:51:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:11.217717685Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=b356e65c-d387-4ad6-acde-c9bee73ff13b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:51:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:11.217783495Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:51:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:11.222844438Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:2f27ba798937fbdaf2039c7fec587a96c99504a830ff55c9cae5a152ee7b7080 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/1663b604-5aec-4aad-8b16-432e08981d79 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:51:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:11.222871212Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:51:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:14.235612190Z" level=info msg="NetworkStart: stopping network for sandbox 544649dc06d6bf447453024fb340302d5e438d22d1a59c670bf08392e3c04bf8" id=ead3de2d-7f8d-4243-bbb7-d0683499261d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:51:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:14.235720286Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:544649dc06d6bf447453024fb340302d5e438d22d1a59c670bf08392e3c04bf8 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/6a670a78-97fd-49f4-824b-cefb8ff7da05 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:51:14 
ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:14.235747715Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 18:51:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:14.235754570Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 18:51:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:14.235761037Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:51:18 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:51:18.216766 2199 scope.go:115] "RemoveContainer" containerID="5cf54fc47fba5b00fb32ff6e4f9437f388d6a27bc4444e09b8eec51a57779837"
Feb 23 18:51:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:51:18.217226 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 18:51:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:18.235626610Z" level=info msg="NetworkStart: stopping network for sandbox f404a81e19bc2e97d39f519e837bb08a9cdee2ba317ceec960fadc9b69a8512a" id=9bc87465-eb11-4d9e-8061-d224d2a29fc6 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:51:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:18.235738586Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:f404a81e19bc2e97d39f519e837bb08a9cdee2ba317ceec960fadc9b69a8512a UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/959a145b-aaf9-4eb2-89e3-4d9da2beb14a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:51:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:18.235766538Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 18:51:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:18.235773251Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 18:51:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:18.235779464Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:51:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:51:26.291640 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:51:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:51:26.291880 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:51:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:51:26.292082 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:51:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:51:26.292111 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 18:51:30 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:51:30.217515 2199 scope.go:115] "RemoveContainer" containerID="5cf54fc47fba5b00fb32ff6e4f9437f388d6a27bc4444e09b8eec51a57779837"
Feb 23 18:51:30 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:51:30.218113 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 18:51:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:51:44.216981 2199 scope.go:115] "RemoveContainer" containerID="5cf54fc47fba5b00fb32ff6e4f9437f388d6a27bc4444e09b8eec51a57779837"
Feb 23 18:51:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:44.217818825Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=993f47ce-79b2-40a2-aa1e-3277ae14d3eb name=/runtime.v1.ImageService/ImageStatus
Feb 23 18:51:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:44.218032470Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=993f47ce-79b2-40a2-aa1e-3277ae14d3eb name=/runtime.v1.ImageService/ImageStatus
Feb 23 18:51:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:44.218641253Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=ecf287c4-4d4c-4d4c-90ac-0b301661aa32 name=/runtime.v1.ImageService/ImageStatus
Feb 23 18:51:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:44.218817254Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=ecf287c4-4d4c-4d4c-90ac-0b301661aa32 name=/runtime.v1.ImageService/ImageStatus
Feb 23 18:51:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:44.219424984Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=4de993b8-bcec-40b4-ac36-24aeaa24792b name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 18:51:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:44.219531545Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 18:51:44 ip-10-0-136-68 systemd[1]: Started crio-conmon-94809f8942162e9ea97950283f837b32199cf6a0926883bb55ddcdf0e7a771ab.scope.
Feb 23 18:51:44 ip-10-0-136-68 systemd[1]: Started libcontainer container 94809f8942162e9ea97950283f837b32199cf6a0926883bb55ddcdf0e7a771ab.
Feb 23 18:51:44 ip-10-0-136-68 conmon[11713]: conmon 94809f8942162e9ea979 : Failed to write to cgroup.event_control Operation not supported
Feb 23 18:51:44 ip-10-0-136-68 systemd[1]: crio-conmon-94809f8942162e9ea97950283f837b32199cf6a0926883bb55ddcdf0e7a771ab.scope: Deactivated successfully.
Feb 23 18:51:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:44.364427045Z" level=info msg="Created container 94809f8942162e9ea97950283f837b32199cf6a0926883bb55ddcdf0e7a771ab: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=4de993b8-bcec-40b4-ac36-24aeaa24792b name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 18:51:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:44.364908524Z" level=info msg="Starting container: 94809f8942162e9ea97950283f837b32199cf6a0926883bb55ddcdf0e7a771ab" id=05afd909-9e45-4eb0-b4f0-6431f343ca1d name=/runtime.v1.RuntimeService/StartContainer
Feb 23 18:51:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:44.371618543Z" level=info msg="Started container" PID=11724 containerID=94809f8942162e9ea97950283f837b32199cf6a0926883bb55ddcdf0e7a771ab description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver id=05afd909-9e45-4eb0-b4f0-6431f343ca1d name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4ac357af36e7dd4792f93249425e93fd84aed45840e9fbb3d0ae6c0af964798
Feb 23 18:51:44 ip-10-0-136-68 systemd[1]: crio-94809f8942162e9ea97950283f837b32199cf6a0926883bb55ddcdf0e7a771ab.scope: Deactivated successfully.
Feb 23 18:51:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:48.247846787Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(f5e695eb07a1b7c1e2f39baf451cafbc2fe34ccc5d7727a369c69cea54b1e750): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=9eda74d7-8d78-4bf6-ab96-3c69e1af9471 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:51:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:48.247897794Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f5e695eb07a1b7c1e2f39baf451cafbc2fe34ccc5d7727a369c69cea54b1e750" id=9eda74d7-8d78-4bf6-ab96-3c69e1af9471 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:51:48 ip-10-0-136-68 systemd[1]: run-utsns-bfab49a9\x2d23d8\x2d43dc\x2d9cb2\x2d60c67110db56.mount: Deactivated successfully.
Feb 23 18:51:48 ip-10-0-136-68 systemd[1]: run-ipcns-bfab49a9\x2d23d8\x2d43dc\x2d9cb2\x2d60c67110db56.mount: Deactivated successfully.
Feb 23 18:51:48 ip-10-0-136-68 systemd[1]: run-netns-bfab49a9\x2d23d8\x2d43dc\x2d9cb2\x2d60c67110db56.mount: Deactivated successfully.
Feb 23 18:51:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:48.266331748Z" level=info msg="runSandbox: deleting pod ID f5e695eb07a1b7c1e2f39baf451cafbc2fe34ccc5d7727a369c69cea54b1e750 from idIndex" id=9eda74d7-8d78-4bf6-ab96-3c69e1af9471 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:51:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:48.266361407Z" level=info msg="runSandbox: removing pod sandbox f5e695eb07a1b7c1e2f39baf451cafbc2fe34ccc5d7727a369c69cea54b1e750" id=9eda74d7-8d78-4bf6-ab96-3c69e1af9471 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:51:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:48.266412703Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f5e695eb07a1b7c1e2f39baf451cafbc2fe34ccc5d7727a369c69cea54b1e750" id=9eda74d7-8d78-4bf6-ab96-3c69e1af9471 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:51:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:48.266437626Z" level=info msg="runSandbox: unmounting shmPath for sandbox f5e695eb07a1b7c1e2f39baf451cafbc2fe34ccc5d7727a369c69cea54b1e750" id=9eda74d7-8d78-4bf6-ab96-3c69e1af9471 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:51:48 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-f5e695eb07a1b7c1e2f39baf451cafbc2fe34ccc5d7727a369c69cea54b1e750-userdata-shm.mount: Deactivated successfully.
Feb 23 18:51:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:48.271313799Z" level=info msg="runSandbox: removing pod sandbox from storage: f5e695eb07a1b7c1e2f39baf451cafbc2fe34ccc5d7727a369c69cea54b1e750" id=9eda74d7-8d78-4bf6-ab96-3c69e1af9471 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:51:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:48.272836043Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=9eda74d7-8d78-4bf6-ab96-3c69e1af9471 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:51:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:48.272871233Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=9eda74d7-8d78-4bf6-ab96-3c69e1af9471 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:51:48 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:51:48.273036 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(f5e695eb07a1b7c1e2f39baf451cafbc2fe34ccc5d7727a369c69cea54b1e750): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 18:51:48 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:51:48.273192 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(f5e695eb07a1b7c1e2f39baf451cafbc2fe34ccc5d7727a369c69cea54b1e750): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4"
Feb 23 18:51:48 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:51:48.273216 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(f5e695eb07a1b7c1e2f39baf451cafbc2fe34ccc5d7727a369c69cea54b1e750): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4"
Feb 23 18:51:48 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:51:48.273337 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(f5e695eb07a1b7c1e2f39baf451cafbc2fe34ccc5d7727a369c69cea54b1e750): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f
Feb 23 18:51:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:49.026977891Z" level=warning msg="Failed to find container exit file for 5cf54fc47fba5b00fb32ff6e4f9437f388d6a27bc4444e09b8eec51a57779837: timed out waiting for the condition" id=44ede8a5-40a4-4874-8b06-93d4258bfdba name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 18:51:49 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:51:49.027921 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerStarted Data:94809f8942162e9ea97950283f837b32199cf6a0926883bb55ddcdf0e7a771ab}
Feb 23 18:51:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:51:54.872107 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 18:51:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:51:54.872167 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 18:51:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:55.244270197Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(68bd3cc74f9ae5c32ec7b06dbc47b4e9a1d2c4fdb85fd3afba497eb5bd6a39d3): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=89aa9138-d350-4217-8941-602242d75a39 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:51:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:55.244332524Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 68bd3cc74f9ae5c32ec7b06dbc47b4e9a1d2c4fdb85fd3afba497eb5bd6a39d3" id=89aa9138-d350-4217-8941-602242d75a39 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:51:55 ip-10-0-136-68 systemd[1]: run-utsns-d1597142\x2d53b4\x2d4e0b\x2db256\x2df489d7960496.mount: Deactivated successfully.
Feb 23 18:51:55 ip-10-0-136-68 systemd[1]: run-ipcns-d1597142\x2d53b4\x2d4e0b\x2db256\x2df489d7960496.mount: Deactivated successfully.
Feb 23 18:51:55 ip-10-0-136-68 systemd[1]: run-netns-d1597142\x2d53b4\x2d4e0b\x2db256\x2df489d7960496.mount: Deactivated successfully.
Feb 23 18:51:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:55.268319110Z" level=info msg="runSandbox: deleting pod ID 68bd3cc74f9ae5c32ec7b06dbc47b4e9a1d2c4fdb85fd3afba497eb5bd6a39d3 from idIndex" id=89aa9138-d350-4217-8941-602242d75a39 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:51:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:55.268354946Z" level=info msg="runSandbox: removing pod sandbox 68bd3cc74f9ae5c32ec7b06dbc47b4e9a1d2c4fdb85fd3afba497eb5bd6a39d3" id=89aa9138-d350-4217-8941-602242d75a39 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:51:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:55.268380729Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 68bd3cc74f9ae5c32ec7b06dbc47b4e9a1d2c4fdb85fd3afba497eb5bd6a39d3" id=89aa9138-d350-4217-8941-602242d75a39 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:51:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:55.268393737Z" level=info msg="runSandbox: unmounting shmPath for sandbox 68bd3cc74f9ae5c32ec7b06dbc47b4e9a1d2c4fdb85fd3afba497eb5bd6a39d3" id=89aa9138-d350-4217-8941-602242d75a39 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:51:55 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-68bd3cc74f9ae5c32ec7b06dbc47b4e9a1d2c4fdb85fd3afba497eb5bd6a39d3-userdata-shm.mount: Deactivated successfully.
Feb 23 18:51:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:55.291294380Z" level=info msg="runSandbox: removing pod sandbox from storage: 68bd3cc74f9ae5c32ec7b06dbc47b4e9a1d2c4fdb85fd3afba497eb5bd6a39d3" id=89aa9138-d350-4217-8941-602242d75a39 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:51:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:55.292880730Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=89aa9138-d350-4217-8941-602242d75a39 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:51:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:55.292911048Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=89aa9138-d350-4217-8941-602242d75a39 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:51:55 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:51:55.293102 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(68bd3cc74f9ae5c32ec7b06dbc47b4e9a1d2c4fdb85fd3afba497eb5bd6a39d3): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 18:51:55 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:51:55.293167 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(68bd3cc74f9ae5c32ec7b06dbc47b4e9a1d2c4fdb85fd3afba497eb5bd6a39d3): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk"
Feb 23 18:51:55 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:51:55.293203 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(68bd3cc74f9ae5c32ec7b06dbc47b4e9a1d2c4fdb85fd3afba497eb5bd6a39d3): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk"
Feb 23 18:51:55 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:51:55.293302 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(68bd3cc74f9ae5c32ec7b06dbc47b4e9a1d2c4fdb85fd3afba497eb5bd6a39d3): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687
Feb 23 18:51:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:51:56.217094 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:51:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:51:56.217419 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:51:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:51:56.217635 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:51:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:51:56.217688 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 18:51:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:56.236371340Z" level=info msg="NetworkStart: stopping network for sandbox 2f27ba798937fbdaf2039c7fec587a96c99504a830ff55c9cae5a152ee7b7080" id=b356e65c-d387-4ad6-acde-c9bee73ff13b name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:51:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:56.236470524Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:2f27ba798937fbdaf2039c7fec587a96c99504a830ff55c9cae5a152ee7b7080 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/1663b604-5aec-4aad-8b16-432e08981d79 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:51:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:56.236496556Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 18:51:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:56.236507458Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 18:51:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:56.236514214Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:51:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:51:56.292691 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:51:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:51:56.292943 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:51:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:51:56.293193 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:51:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:51:56.293218 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 18:51:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:59.244725492Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(544649dc06d6bf447453024fb340302d5e438d22d1a59c670bf08392e3c04bf8): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=ead3de2d-7f8d-4243-bbb7-d0683499261d name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:51:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:59.244781152Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 544649dc06d6bf447453024fb340302d5e438d22d1a59c670bf08392e3c04bf8" id=ead3de2d-7f8d-4243-bbb7-d0683499261d name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:51:59 ip-10-0-136-68 systemd[1]: run-utsns-6a670a78\x2d97fd\x2d49f4\x2d824b\x2dcefb8ff7da05.mount: Deactivated successfully.
Feb 23 18:51:59 ip-10-0-136-68 systemd[1]: run-ipcns-6a670a78\x2d97fd\x2d49f4\x2d824b\x2dcefb8ff7da05.mount: Deactivated successfully.
Feb 23 18:51:59 ip-10-0-136-68 systemd[1]: run-netns-6a670a78\x2d97fd\x2d49f4\x2d824b\x2dcefb8ff7da05.mount: Deactivated successfully.
Feb 23 18:51:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:59.270327182Z" level=info msg="runSandbox: deleting pod ID 544649dc06d6bf447453024fb340302d5e438d22d1a59c670bf08392e3c04bf8 from idIndex" id=ead3de2d-7f8d-4243-bbb7-d0683499261d name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:51:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:59.270370849Z" level=info msg="runSandbox: removing pod sandbox 544649dc06d6bf447453024fb340302d5e438d22d1a59c670bf08392e3c04bf8" id=ead3de2d-7f8d-4243-bbb7-d0683499261d name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:51:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:59.270417975Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 544649dc06d6bf447453024fb340302d5e438d22d1a59c670bf08392e3c04bf8" id=ead3de2d-7f8d-4243-bbb7-d0683499261d name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:51:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:59.270438514Z" level=info msg="runSandbox: unmounting shmPath for sandbox 544649dc06d6bf447453024fb340302d5e438d22d1a59c670bf08392e3c04bf8" id=ead3de2d-7f8d-4243-bbb7-d0683499261d name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:51:59 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-544649dc06d6bf447453024fb340302d5e438d22d1a59c670bf08392e3c04bf8-userdata-shm.mount: Deactivated successfully.
Feb 23 18:51:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:59.277299993Z" level=info msg="runSandbox: removing pod sandbox from storage: 544649dc06d6bf447453024fb340302d5e438d22d1a59c670bf08392e3c04bf8" id=ead3de2d-7f8d-4243-bbb7-d0683499261d name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:51:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:59.278866807Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=ead3de2d-7f8d-4243-bbb7-d0683499261d name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:51:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:51:59.278897992Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=ead3de2d-7f8d-4243-bbb7-d0683499261d name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:51:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:51:59.279104 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(544649dc06d6bf447453024fb340302d5e438d22d1a59c670bf08392e3c04bf8): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 18:51:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:51:59.279159 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(544649dc06d6bf447453024fb340302d5e438d22d1a59c670bf08392e3c04bf8): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz"
Feb 23 18:51:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:51:59.279186 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(544649dc06d6bf447453024fb340302d5e438d22d1a59c670bf08392e3c04bf8): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz"
Feb 23 18:51:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:51:59.279271 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(544649dc06d6bf447453024fb340302d5e438d22d1a59c670bf08392e3c04bf8): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7
Feb 23 18:52:01 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:52:01.216376 2199 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 18:52:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:01.216752089Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=39b881f2-8c06-476c-86b2-9be51d380ba0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:52:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:01.216819306Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:52:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:01.222660176Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:dcc472ac348fbc3e291ffe08566703c28588a4d9c568bcfc976c5dc35cef25a2 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/434102d4-0262-43f0-ad51-02c7453fa2f5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:52:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:01.222691088Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:52:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:03.246001734Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(f404a81e19bc2e97d39f519e837bb08a9cdee2ba317ceec960fadc9b69a8512a): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=9bc87465-eb11-4d9e-8061-d224d2a29fc6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:52:03 
ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:03.246056899Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f404a81e19bc2e97d39f519e837bb08a9cdee2ba317ceec960fadc9b69a8512a" id=9bc87465-eb11-4d9e-8061-d224d2a29fc6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:52:03 ip-10-0-136-68 systemd[1]: run-utsns-959a145b\x2daaf9\x2d4eb2\x2d89e3\x2d4d9da2beb14a.mount: Deactivated successfully. Feb 23 18:52:03 ip-10-0-136-68 systemd[1]: run-ipcns-959a145b\x2daaf9\x2d4eb2\x2d89e3\x2d4d9da2beb14a.mount: Deactivated successfully. Feb 23 18:52:03 ip-10-0-136-68 systemd[1]: run-netns-959a145b\x2daaf9\x2d4eb2\x2d89e3\x2d4d9da2beb14a.mount: Deactivated successfully. Feb 23 18:52:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:03.268318877Z" level=info msg="runSandbox: deleting pod ID f404a81e19bc2e97d39f519e837bb08a9cdee2ba317ceec960fadc9b69a8512a from idIndex" id=9bc87465-eb11-4d9e-8061-d224d2a29fc6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:52:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:03.268358219Z" level=info msg="runSandbox: removing pod sandbox f404a81e19bc2e97d39f519e837bb08a9cdee2ba317ceec960fadc9b69a8512a" id=9bc87465-eb11-4d9e-8061-d224d2a29fc6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:52:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:03.268387731Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f404a81e19bc2e97d39f519e837bb08a9cdee2ba317ceec960fadc9b69a8512a" id=9bc87465-eb11-4d9e-8061-d224d2a29fc6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:52:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:03.268404919Z" level=info msg="runSandbox: unmounting shmPath for sandbox f404a81e19bc2e97d39f519e837bb08a9cdee2ba317ceec960fadc9b69a8512a" id=9bc87465-eb11-4d9e-8061-d224d2a29fc6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:52:03 ip-10-0-136-68 systemd[1]: 
run-containers-storage-overlay\x2dcontainers-f404a81e19bc2e97d39f519e837bb08a9cdee2ba317ceec960fadc9b69a8512a-userdata-shm.mount: Deactivated successfully. Feb 23 18:52:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:03.273326235Z" level=info msg="runSandbox: removing pod sandbox from storage: f404a81e19bc2e97d39f519e837bb08a9cdee2ba317ceec960fadc9b69a8512a" id=9bc87465-eb11-4d9e-8061-d224d2a29fc6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:52:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:03.274979860Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=9bc87465-eb11-4d9e-8061-d224d2a29fc6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:52:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:03.275008492Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=9bc87465-eb11-4d9e-8061-d224d2a29fc6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:52:03 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:52:03.275238 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(f404a81e19bc2e97d39f519e837bb08a9cdee2ba317ceec960fadc9b69a8512a): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:52:03 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:52:03.275354 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(f404a81e19bc2e97d39f519e837bb08a9cdee2ba317ceec960fadc9b69a8512a): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:52:03 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:52:03.275396 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(f404a81e19bc2e97d39f519e837bb08a9cdee2ba317ceec960fadc9b69a8512a): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:52:03 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:52:03.275492 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(f404a81e19bc2e97d39f519e837bb08a9cdee2ba317ceec960fadc9b69a8512a): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 18:52:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:52:04.872271 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:52:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:52:04.872333 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 18:52:06 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:52:06.216911 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:52:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:06.217332732Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=a88b31ac-ddd9-4a29-a1c5-9a1e0ddd48c3 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:52:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:06.217405105Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:52:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:06.222882496Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:f176869610eeebda2aa6153a09d6b8cafd1ee22867dd05972bda274ab28aa46f UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/675378b0-4406-4bd0-915c-99d95d4ff355 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:52:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:06.222917697Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:52:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:52:14.217305 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:52:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:14.217732775Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=9f8f055b-4350-4b94-bbf9-854223e6acad name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:52:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:14.217797318Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:52:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:14.223620788Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:a9493aa615c85b3f878d3af2b06d10ec4d935b0169301fb27432ce88178becc4 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/2c284c09-2708-42f8-9eb6-33b6f534841b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:52:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:14.223655749Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:52:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:52:14.872720 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:52:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:52:14.872785 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 18:52:18 ip-10-0-136-68 kubenswrapper[2199]: I0223 
18:52:18.216568 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:52:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:18.216954345Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=acf39568-4144-4ec4-bb07-fa0644112a7a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:52:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:18.217016532Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:52:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:18.222740361Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:2fb1c14bf9753066aec855b33367d2f8d234a34c2f095aa33faa456420ccda88 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/1558fdac-5ce0-47cd-9bac-ae225a182eee Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:52:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:18.222765164Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:52:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:52:24.872578 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:52:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:52:24.872639 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get 
\"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 18:52:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:52:26.292188 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:52:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:52:26.292466 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:52:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:52:26.292759 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:52:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:52:26.292784 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open 
/proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:52:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:52:34.872013 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:52:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:52:34.872076 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 18:52:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:52:34.872102 2199 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 18:52:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:52:34.872587 2199 kuberuntime_manager.go:659] "Message for Container of pod" containerName="csi-driver" containerStatusID={Type:cri-o ID:94809f8942162e9ea97950283f837b32199cf6a0926883bb55ddcdf0e7a771ab} pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" containerMessage="Container csi-driver failed liveness probe, will be restarted" Feb 23 18:52:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:52:34.872739 2199 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" containerID="cri-o://94809f8942162e9ea97950283f837b32199cf6a0926883bb55ddcdf0e7a771ab" gracePeriod=30 Feb 23 
18:52:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:34.872993336Z" level=info msg="Stopping container: 94809f8942162e9ea97950283f837b32199cf6a0926883bb55ddcdf0e7a771ab (timeout: 30s)" id=794540cc-6653-4067-ab7c-1d71ab27c865 name=/runtime.v1.RuntimeService/StopContainer Feb 23 18:52:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:38.635057446Z" level=warning msg="Failed to find container exit file for 94809f8942162e9ea97950283f837b32199cf6a0926883bb55ddcdf0e7a771ab: timed out waiting for the condition" id=794540cc-6653-4067-ab7c-1d71ab27c865 name=/runtime.v1.RuntimeService/StopContainer Feb 23 18:52:38 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-1035f321d05a22150b1e3a6b652a1380c01866489e7a6dee825c8a8e81b5848e-merged.mount: Deactivated successfully. Feb 23 18:52:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:41.247136739Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(2f27ba798937fbdaf2039c7fec587a96c99504a830ff55c9cae5a152ee7b7080): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b356e65c-d387-4ad6-acde-c9bee73ff13b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:52:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:41.247190141Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 2f27ba798937fbdaf2039c7fec587a96c99504a830ff55c9cae5a152ee7b7080" id=b356e65c-d387-4ad6-acde-c9bee73ff13b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:52:41 ip-10-0-136-68 systemd[1]: 
run-utsns-1663b604\x2d5aec\x2d4aad\x2d8b16\x2d432e08981d79.mount: Deactivated successfully. Feb 23 18:52:41 ip-10-0-136-68 systemd[1]: run-ipcns-1663b604\x2d5aec\x2d4aad\x2d8b16\x2d432e08981d79.mount: Deactivated successfully. Feb 23 18:52:41 ip-10-0-136-68 systemd[1]: run-netns-1663b604\x2d5aec\x2d4aad\x2d8b16\x2d432e08981d79.mount: Deactivated successfully. Feb 23 18:52:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:41.276331030Z" level=info msg="runSandbox: deleting pod ID 2f27ba798937fbdaf2039c7fec587a96c99504a830ff55c9cae5a152ee7b7080 from idIndex" id=b356e65c-d387-4ad6-acde-c9bee73ff13b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:52:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:41.276378296Z" level=info msg="runSandbox: removing pod sandbox 2f27ba798937fbdaf2039c7fec587a96c99504a830ff55c9cae5a152ee7b7080" id=b356e65c-d387-4ad6-acde-c9bee73ff13b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:52:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:41.276423291Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 2f27ba798937fbdaf2039c7fec587a96c99504a830ff55c9cae5a152ee7b7080" id=b356e65c-d387-4ad6-acde-c9bee73ff13b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:52:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:41.276437457Z" level=info msg="runSandbox: unmounting shmPath for sandbox 2f27ba798937fbdaf2039c7fec587a96c99504a830ff55c9cae5a152ee7b7080" id=b356e65c-d387-4ad6-acde-c9bee73ff13b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:52:41 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-2f27ba798937fbdaf2039c7fec587a96c99504a830ff55c9cae5a152ee7b7080-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:52:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:41.282293515Z" level=info msg="runSandbox: removing pod sandbox from storage: 2f27ba798937fbdaf2039c7fec587a96c99504a830ff55c9cae5a152ee7b7080" id=b356e65c-d387-4ad6-acde-c9bee73ff13b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:52:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:41.283892431Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=b356e65c-d387-4ad6-acde-c9bee73ff13b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:52:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:41.283925496Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=b356e65c-d387-4ad6-acde-c9bee73ff13b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:52:41 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:52:41.284160 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(2f27ba798937fbdaf2039c7fec587a96c99504a830ff55c9cae5a152ee7b7080): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:52:41 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:52:41.284224 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(2f27ba798937fbdaf2039c7fec587a96c99504a830ff55c9cae5a152ee7b7080): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:52:41 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:52:41.284277 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(2f27ba798937fbdaf2039c7fec587a96c99504a830ff55c9cae5a152ee7b7080): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:52:41 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:52:41.284346 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(2f27ba798937fbdaf2039c7fec587a96c99504a830ff55c9cae5a152ee7b7080): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 18:52:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:42.409068408Z" level=warning msg="Failed to find container exit file for 94809f8942162e9ea97950283f837b32199cf6a0926883bb55ddcdf0e7a771ab: timed out waiting for the condition" id=794540cc-6653-4067-ab7c-1d71ab27c865 name=/runtime.v1.RuntimeService/StopContainer Feb 23 18:52:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:42.411566797Z" level=info msg="Stopped container 94809f8942162e9ea97950283f837b32199cf6a0926883bb55ddcdf0e7a771ab: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=794540cc-6653-4067-ab7c-1d71ab27c865 name=/runtime.v1.RuntimeService/StopContainer Feb 23 18:52:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:42.412327016Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=4a0fff0e-e5c8-4f6e-a834-78a182ab302a name=/runtime.v1.ImageService/ImageStatus Feb 23 18:52:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:42.412490216Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=4a0fff0e-e5c8-4f6e-a834-78a182ab302a name=/runtime.v1.ImageService/ImageStatus Feb 23 18:52:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:42.413066383Z" level=info msg="Checking image status: 
registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=41d04ca4-9f51-4f5c-9d7b-43eae1aabe01 name=/runtime.v1.ImageService/ImageStatus Feb 23 18:52:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:42.413240442Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=41d04ca4-9f51-4f5c-9d7b-43eae1aabe01 name=/runtime.v1.ImageService/ImageStatus Feb 23 18:52:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:42.413917057Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=190ecd6f-e5c6-4fb8-b10b-3e9e399a7529 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 18:52:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:42.414031359Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:52:42 ip-10-0-136-68 systemd[1]: Started crio-conmon-7d582d34db04db5ed219f6c9b86fc9d1aee44c82e16953b002f30500c4ca50cb.scope. Feb 23 18:52:42 ip-10-0-136-68 systemd[1]: Started libcontainer container 7d582d34db04db5ed219f6c9b86fc9d1aee44c82e16953b002f30500c4ca50cb. Feb 23 18:52:42 ip-10-0-136-68 conmon[11858]: conmon 7d582d34db04db5ed219 : Failed to write to cgroup.event_control Operation not supported Feb 23 18:52:42 ip-10-0-136-68 systemd[1]: crio-conmon-7d582d34db04db5ed219f6c9b86fc9d1aee44c82e16953b002f30500c4ca50cb.scope: Deactivated successfully. 
Feb 23 18:52:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:42.552181232Z" level=info msg="Created container 7d582d34db04db5ed219f6c9b86fc9d1aee44c82e16953b002f30500c4ca50cb: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=190ecd6f-e5c6-4fb8-b10b-3e9e399a7529 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 18:52:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:42.552814168Z" level=info msg="Starting container: 7d582d34db04db5ed219f6c9b86fc9d1aee44c82e16953b002f30500c4ca50cb" id=b7876865-ab99-4d86-8a87-ab2a5e26e396 name=/runtime.v1.RuntimeService/StartContainer Feb 23 18:52:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:42.560715316Z" level=info msg="Started container" PID=11870 containerID=7d582d34db04db5ed219f6c9b86fc9d1aee44c82e16953b002f30500c4ca50cb description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver id=b7876865-ab99-4d86-8a87-ab2a5e26e396 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4ac357af36e7dd4792f93249425e93fd84aed45840e9fbb3d0ae6c0af964798 Feb 23 18:52:42 ip-10-0-136-68 systemd[1]: crio-7d582d34db04db5ed219f6c9b86fc9d1aee44c82e16953b002f30500c4ca50cb.scope: Deactivated successfully. 
Feb 23 18:52:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:42.859668503Z" level=warning msg="Failed to find container exit file for 94809f8942162e9ea97950283f837b32199cf6a0926883bb55ddcdf0e7a771ab: timed out waiting for the condition" id=8b74b66a-1030-42e0-918f-77e01739cc0d name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:52:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:46.234080199Z" level=info msg="NetworkStart: stopping network for sandbox dcc472ac348fbc3e291ffe08566703c28588a4d9c568bcfc976c5dc35cef25a2" id=39b881f2-8c06-476c-86b2-9be51d380ba0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:52:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:46.234199676Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:dcc472ac348fbc3e291ffe08566703c28588a4d9c568bcfc976c5dc35cef25a2 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/434102d4-0262-43f0-ad51-02c7453fa2f5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:52:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:46.234237792Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:52:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:46.234272751Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:52:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:46.234282756Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:52:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:46.608935349Z" level=warning msg="Failed to find container exit file for 5cf54fc47fba5b00fb32ff6e4f9437f388d6a27bc4444e09b8eec51a57779837: timed out waiting for the condition" id=13a1c65c-775a-4f7e-a191-5d979420cc7e name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:52:46 ip-10-0-136-68 kubenswrapper[2199]: I0223 
18:52:46.609919 2199 generic.go:332] "Generic (PLEG): container finished" podID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerID="94809f8942162e9ea97950283f837b32199cf6a0926883bb55ddcdf0e7a771ab" exitCode=-1 Feb 23 18:52:46 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:52:46.609960 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerDied Data:94809f8942162e9ea97950283f837b32199cf6a0926883bb55ddcdf0e7a771ab} Feb 23 18:52:46 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:52:46.609996 2199 scope.go:115] "RemoveContainer" containerID="5cf54fc47fba5b00fb32ff6e4f9437f388d6a27bc4444e09b8eec51a57779837" Feb 23 18:52:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:50.367953910Z" level=warning msg="Failed to find container exit file for 5cf54fc47fba5b00fb32ff6e4f9437f388d6a27bc4444e09b8eec51a57779837: timed out waiting for the condition" id=ca995da5-c7f0-4e7a-a5af-bc6f990c90db name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:52:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:51.234495893Z" level=info msg="NetworkStart: stopping network for sandbox f176869610eeebda2aa6153a09d6b8cafd1ee22867dd05972bda274ab28aa46f" id=a88b31ac-ddd9-4a29-a1c5-9a1e0ddd48c3 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:52:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:51.234617089Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:f176869610eeebda2aa6153a09d6b8cafd1ee22867dd05972bda274ab28aa46f UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/675378b0-4406-4bd0-915c-99d95d4ff355 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:52:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:51.234643465Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI 
cache" Feb 23 18:52:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:51.234651313Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:52:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:51.234660282Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:52:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:51.363029486Z" level=warning msg="Failed to find container exit file for 94809f8942162e9ea97950283f837b32199cf6a0926883bb55ddcdf0e7a771ab: timed out waiting for the condition" id=f1ef6fa6-d206-4112-8e6b-b8169739810e name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:52:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:54.118044435Z" level=warning msg="Failed to find container exit file for 5cf54fc47fba5b00fb32ff6e4f9437f388d6a27bc4444e09b8eec51a57779837: timed out waiting for the condition" id=47bfd72e-4101-4f48-baa8-80b801dbfd0d name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:52:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:54.118575542Z" level=info msg="Removing container: 5cf54fc47fba5b00fb32ff6e4f9437f388d6a27bc4444e09b8eec51a57779837" id=58d76fc2-43ff-424c-9a33-0359a7d29068 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 18:52:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:55.101436582Z" level=warning msg="Failed to find container exit file for 5cf54fc47fba5b00fb32ff6e4f9437f388d6a27bc4444e09b8eec51a57779837: timed out waiting for the condition" id=edb0eb11-b899-4950-b167-24505273f5d5 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:52:55 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:52:55.102343 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerStarted Data:7d582d34db04db5ed219f6c9b86fc9d1aee44c82e16953b002f30500c4ca50cb} Feb 23 18:52:55 
ip-10-0-136-68 kubenswrapper[2199]: I0223 18:52:55.217204 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:52:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:55.217640415Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=bf6cddb5-2e25-41eb-9142-c14c520386dc name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:52:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:55.217704678Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:52:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:55.223072241Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:b5d1c35994d9223fa286b1b0698511e3211fd080c29b41b23be917e7377a7183 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/f2b4882f-b6a5-4888-b20a-a1fc13131cfc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:52:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:55.223107060Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:52:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:52:56.291930 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:52:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:52:56.292364 2199 remote_runtime.go:479] "ExecSync cmd from runtime service 
failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:52:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:52:56.292581 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:52:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:52:56.292604 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:52:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:57.867941623Z" level=warning msg="Failed to find container exit file for 5cf54fc47fba5b00fb32ff6e4f9437f388d6a27bc4444e09b8eec51a57779837: timed out waiting for the condition" id=58d76fc2-43ff-424c-9a33-0359a7d29068 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 18:52:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:57.880436463Z" level=info msg="Removed container 5cf54fc47fba5b00fb32ff6e4f9437f388d6a27bc4444e09b8eec51a57779837: 
openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=58d76fc2-43ff-424c-9a33-0359a7d29068 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 18:52:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:59.235375067Z" level=info msg="NetworkStart: stopping network for sandbox a9493aa615c85b3f878d3af2b06d10ec4d935b0169301fb27432ce88178becc4" id=9f8f055b-4350-4b94-bbf9-854223e6acad name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:52:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:59.235499850Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:a9493aa615c85b3f878d3af2b06d10ec4d935b0169301fb27432ce88178becc4 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/2c284c09-2708-42f8-9eb6-33b6f534841b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:52:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:59.235541346Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:52:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:59.235554453Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:52:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:52:59.235565517Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:53:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:01.856118193Z" level=warning msg="Failed to find container exit file for 94809f8942162e9ea97950283f837b32199cf6a0926883bb55ddcdf0e7a771ab: timed out waiting for the condition" id=8078fbfe-314a-4374-8df1-585c074f6578 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:53:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:03.234858533Z" level=info msg="NetworkStart: stopping network for sandbox 
2fb1c14bf9753066aec855b33367d2f8d234a34c2f095aa33faa456420ccda88" id=acf39568-4144-4ec4-bb07-fa0644112a7a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:53:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:03.234984856Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:2fb1c14bf9753066aec855b33367d2f8d234a34c2f095aa33faa456420ccda88 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/1558fdac-5ce0-47cd-9bac-ae225a182eee Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:53:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:03.235013089Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:53:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:03.235020805Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:53:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:03.235027405Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:53:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:53:04.872307 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:53:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:53:04.872367 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 18:53:14 ip-10-0-136-68 
kubenswrapper[2199]: I0223 18:53:14.872053 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:53:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:53:14.872111 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 18:53:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:53:24.872804 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:53:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:53:24.872865 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 18:53:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:53:26.217344 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f 
/etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:53:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:53:26.217746 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:53:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:53:26.218061 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:53:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:53:26.218100 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:53:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:53:26.292208 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container 
process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:53:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:53:26.292420 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:53:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:53:26.292666 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:53:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:53:26.292691 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:53:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:31.243702746Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox 
k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(dcc472ac348fbc3e291ffe08566703c28588a4d9c568bcfc976c5dc35cef25a2): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=39b881f2-8c06-476c-86b2-9be51d380ba0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:53:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:31.243751006Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox dcc472ac348fbc3e291ffe08566703c28588a4d9c568bcfc976c5dc35cef25a2" id=39b881f2-8c06-476c-86b2-9be51d380ba0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:53:31 ip-10-0-136-68 systemd[1]: run-utsns-434102d4\x2d0262\x2d43f0\x2dad51\x2d02c7453fa2f5.mount: Deactivated successfully. Feb 23 18:53:31 ip-10-0-136-68 systemd[1]: run-ipcns-434102d4\x2d0262\x2d43f0\x2dad51\x2d02c7453fa2f5.mount: Deactivated successfully. Feb 23 18:53:31 ip-10-0-136-68 systemd[1]: run-netns-434102d4\x2d0262\x2d43f0\x2dad51\x2d02c7453fa2f5.mount: Deactivated successfully. 
Feb 23 18:53:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:31.262330416Z" level=info msg="runSandbox: deleting pod ID dcc472ac348fbc3e291ffe08566703c28588a4d9c568bcfc976c5dc35cef25a2 from idIndex" id=39b881f2-8c06-476c-86b2-9be51d380ba0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:53:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:31.262378061Z" level=info msg="runSandbox: removing pod sandbox dcc472ac348fbc3e291ffe08566703c28588a4d9c568bcfc976c5dc35cef25a2" id=39b881f2-8c06-476c-86b2-9be51d380ba0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:53:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:31.262408381Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox dcc472ac348fbc3e291ffe08566703c28588a4d9c568bcfc976c5dc35cef25a2" id=39b881f2-8c06-476c-86b2-9be51d380ba0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:53:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:31.262421777Z" level=info msg="runSandbox: unmounting shmPath for sandbox dcc472ac348fbc3e291ffe08566703c28588a4d9c568bcfc976c5dc35cef25a2" id=39b881f2-8c06-476c-86b2-9be51d380ba0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:53:31 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-dcc472ac348fbc3e291ffe08566703c28588a4d9c568bcfc976c5dc35cef25a2-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:53:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:31.268305471Z" level=info msg="runSandbox: removing pod sandbox from storage: dcc472ac348fbc3e291ffe08566703c28588a4d9c568bcfc976c5dc35cef25a2" id=39b881f2-8c06-476c-86b2-9be51d380ba0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:53:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:31.269844770Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=39b881f2-8c06-476c-86b2-9be51d380ba0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:53:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:31.269872184Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=39b881f2-8c06-476c-86b2-9be51d380ba0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:53:31 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:53:31.270102 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(dcc472ac348fbc3e291ffe08566703c28588a4d9c568bcfc976c5dc35cef25a2): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:53:31 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:53:31.270160 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(dcc472ac348fbc3e291ffe08566703c28588a4d9c568bcfc976c5dc35cef25a2): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:53:31 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:53:31.270187 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(dcc472ac348fbc3e291ffe08566703c28588a4d9c568bcfc976c5dc35cef25a2): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:53:31 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:53:31.270293 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(dcc472ac348fbc3e291ffe08566703c28588a4d9c568bcfc976c5dc35cef25a2): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 18:53:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:53:34.872197 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:53:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:53:34.872283 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 18:53:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:36.244564395Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(f176869610eeebda2aa6153a09d6b8cafd1ee22867dd05972bda274ab28aa46f): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a88b31ac-ddd9-4a29-a1c5-9a1e0ddd48c3 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:53:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:36.244614088Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f176869610eeebda2aa6153a09d6b8cafd1ee22867dd05972bda274ab28aa46f" id=a88b31ac-ddd9-4a29-a1c5-9a1e0ddd48c3 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 
23 18:53:36 ip-10-0-136-68 systemd[1]: run-utsns-675378b0\x2d4406\x2d4bd0\x2d915c\x2d99d95d4ff355.mount: Deactivated successfully. Feb 23 18:53:36 ip-10-0-136-68 systemd[1]: run-ipcns-675378b0\x2d4406\x2d4bd0\x2d915c\x2d99d95d4ff355.mount: Deactivated successfully. Feb 23 18:53:36 ip-10-0-136-68 systemd[1]: run-netns-675378b0\x2d4406\x2d4bd0\x2d915c\x2d99d95d4ff355.mount: Deactivated successfully. Feb 23 18:53:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:36.271327953Z" level=info msg="runSandbox: deleting pod ID f176869610eeebda2aa6153a09d6b8cafd1ee22867dd05972bda274ab28aa46f from idIndex" id=a88b31ac-ddd9-4a29-a1c5-9a1e0ddd48c3 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:53:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:36.271361056Z" level=info msg="runSandbox: removing pod sandbox f176869610eeebda2aa6153a09d6b8cafd1ee22867dd05972bda274ab28aa46f" id=a88b31ac-ddd9-4a29-a1c5-9a1e0ddd48c3 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:53:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:36.271387140Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f176869610eeebda2aa6153a09d6b8cafd1ee22867dd05972bda274ab28aa46f" id=a88b31ac-ddd9-4a29-a1c5-9a1e0ddd48c3 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:53:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:36.271407109Z" level=info msg="runSandbox: unmounting shmPath for sandbox f176869610eeebda2aa6153a09d6b8cafd1ee22867dd05972bda274ab28aa46f" id=a88b31ac-ddd9-4a29-a1c5-9a1e0ddd48c3 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:53:36 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-f176869610eeebda2aa6153a09d6b8cafd1ee22867dd05972bda274ab28aa46f-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:53:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:36.278306828Z" level=info msg="runSandbox: removing pod sandbox from storage: f176869610eeebda2aa6153a09d6b8cafd1ee22867dd05972bda274ab28aa46f" id=a88b31ac-ddd9-4a29-a1c5-9a1e0ddd48c3 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:53:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:36.279755415Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=a88b31ac-ddd9-4a29-a1c5-9a1e0ddd48c3 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:53:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:36.279782895Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=a88b31ac-ddd9-4a29-a1c5-9a1e0ddd48c3 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:53:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:53:36.279958 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(f176869610eeebda2aa6153a09d6b8cafd1ee22867dd05972bda274ab28aa46f): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:53:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:53:36.280005 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(f176869610eeebda2aa6153a09d6b8cafd1ee22867dd05972bda274ab28aa46f): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:53:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:53:36.280029 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(f176869610eeebda2aa6153a09d6b8cafd1ee22867dd05972bda274ab28aa46f): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:53:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:53:36.280085 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(f176869610eeebda2aa6153a09d6b8cafd1ee22867dd05972bda274ab28aa46f): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 18:53:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:40.235305758Z" level=info msg="NetworkStart: stopping network for sandbox b5d1c35994d9223fa286b1b0698511e3211fd080c29b41b23be917e7377a7183" id=bf6cddb5-2e25-41eb-9142-c14c520386dc name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:53:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:40.235425251Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:b5d1c35994d9223fa286b1b0698511e3211fd080c29b41b23be917e7377a7183 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/f2b4882f-b6a5-4888-b20a-a1fc13131cfc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:53:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:40.235466172Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:53:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:40.235481600Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:53:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:40.235491783Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:53:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:44.245558516Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(a9493aa615c85b3f878d3af2b06d10ec4d935b0169301fb27432ce88178becc4): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: 
[openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=9f8f055b-4350-4b94-bbf9-854223e6acad name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:53:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:44.245612778Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a9493aa615c85b3f878d3af2b06d10ec4d935b0169301fb27432ce88178becc4" id=9f8f055b-4350-4b94-bbf9-854223e6acad name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:53:44 ip-10-0-136-68 systemd[1]: run-utsns-2c284c09\x2d2708\x2d42f8\x2d9eb6\x2d33b6f534841b.mount: Deactivated successfully. Feb 23 18:53:44 ip-10-0-136-68 systemd[1]: run-ipcns-2c284c09\x2d2708\x2d42f8\x2d9eb6\x2d33b6f534841b.mount: Deactivated successfully. Feb 23 18:53:44 ip-10-0-136-68 systemd[1]: run-netns-2c284c09\x2d2708\x2d42f8\x2d9eb6\x2d33b6f534841b.mount: Deactivated successfully. Feb 23 18:53:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:44.267322673Z" level=info msg="runSandbox: deleting pod ID a9493aa615c85b3f878d3af2b06d10ec4d935b0169301fb27432ce88178becc4 from idIndex" id=9f8f055b-4350-4b94-bbf9-854223e6acad name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:53:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:44.267359273Z" level=info msg="runSandbox: removing pod sandbox a9493aa615c85b3f878d3af2b06d10ec4d935b0169301fb27432ce88178becc4" id=9f8f055b-4350-4b94-bbf9-854223e6acad name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:53:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:44.267384728Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a9493aa615c85b3f878d3af2b06d10ec4d935b0169301fb27432ce88178becc4" id=9f8f055b-4350-4b94-bbf9-854223e6acad name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:53:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:44.267399171Z" level=info msg="runSandbox: unmounting shmPath 
for sandbox a9493aa615c85b3f878d3af2b06d10ec4d935b0169301fb27432ce88178becc4" id=9f8f055b-4350-4b94-bbf9-854223e6acad name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:53:44 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-a9493aa615c85b3f878d3af2b06d10ec4d935b0169301fb27432ce88178becc4-userdata-shm.mount: Deactivated successfully. Feb 23 18:53:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:44.273319784Z" level=info msg="runSandbox: removing pod sandbox from storage: a9493aa615c85b3f878d3af2b06d10ec4d935b0169301fb27432ce88178becc4" id=9f8f055b-4350-4b94-bbf9-854223e6acad name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:53:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:44.274886109Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=9f8f055b-4350-4b94-bbf9-854223e6acad name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:53:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:44.274914134Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=9f8f055b-4350-4b94-bbf9-854223e6acad name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:53:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:53:44.275114 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(a9493aa615c85b3f878d3af2b06d10ec4d935b0169301fb27432ce88178becc4): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Feb 23 18:53:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:53:44.275166 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(a9493aa615c85b3f878d3af2b06d10ec4d935b0169301fb27432ce88178becc4): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:53:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:53:44.275197 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(a9493aa615c85b3f878d3af2b06d10ec4d935b0169301fb27432ce88178becc4): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:53:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:53:44.275326 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(a9493aa615c85b3f878d3af2b06d10ec4d935b0169301fb27432ce88178becc4): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 18:53:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:53:44.872913 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 18:53:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:53:44.872974 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 18:53:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:53:44.873008 2199 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 18:53:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:53:44.873540 2199 kuberuntime_manager.go:659] "Message for Container of pod" containerName="csi-driver" containerStatusID={Type:cri-o ID:7d582d34db04db5ed219f6c9b86fc9d1aee44c82e16953b002f30500c4ca50cb} pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" containerMessage="Container csi-driver failed liveness probe, will be restarted" Feb 23 18:53:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:53:44.873747 2199 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" containerID="cri-o://7d582d34db04db5ed219f6c9b86fc9d1aee44c82e16953b002f30500c4ca50cb" gracePeriod=30 Feb 23 18:53:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 
18:53:44.873965281Z" level=info msg="Stopping container: 7d582d34db04db5ed219f6c9b86fc9d1aee44c82e16953b002f30500c4ca50cb (timeout: 30s)" id=a4e6223c-b468-434b-a6c6-7c3460601d5e name=/runtime.v1.RuntimeService/StopContainer Feb 23 18:53:45 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:53:45.216916 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 18:53:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:45.217346150Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=c8264f57-2a85-4201-9b94-d95b364ccdf7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:53:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:45.217412329Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:53:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:45.223047850Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:51b5712a3e9b723907d8f62666ce4532e8e46d3865751d7b2720bf2067b981c3 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/b96831cd-6a84-47a3-8a3d-f31744e40abe Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:53:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:45.223074708Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:53:48 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:53:48.217231 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:53:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:48.217795140Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=6ffe6a35-7b78-4e99-b22c-b911423c5d57 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:53:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:48.217862890Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:53:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:48.227701416Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:2e9d66b4b6ed9e0ce1deecfc0f09d8b357e4b96cbce3e0dd0008e6cf8d21403f UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/0eb47b53-e83e-424a-9665-84750be1ae72 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:53:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:48.227727679Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:53:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:48.244049282Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(2fb1c14bf9753066aec855b33367d2f8d234a34c2f095aa33faa456420ccda88): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=acf39568-4144-4ec4-bb07-fa0644112a7a 
name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:53:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:48.244087095Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 2fb1c14bf9753066aec855b33367d2f8d234a34c2f095aa33faa456420ccda88" id=acf39568-4144-4ec4-bb07-fa0644112a7a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:53:48 ip-10-0-136-68 systemd[1]: run-utsns-1558fdac\x2d5ce0\x2d47cd\x2d9bac\x2dae225a182eee.mount: Deactivated successfully. Feb 23 18:53:48 ip-10-0-136-68 systemd[1]: run-ipcns-1558fdac\x2d5ce0\x2d47cd\x2d9bac\x2dae225a182eee.mount: Deactivated successfully. Feb 23 18:53:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:48.267327990Z" level=info msg="runSandbox: deleting pod ID 2fb1c14bf9753066aec855b33367d2f8d234a34c2f095aa33faa456420ccda88 from idIndex" id=acf39568-4144-4ec4-bb07-fa0644112a7a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:53:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:48.267357494Z" level=info msg="runSandbox: removing pod sandbox 2fb1c14bf9753066aec855b33367d2f8d234a34c2f095aa33faa456420ccda88" id=acf39568-4144-4ec4-bb07-fa0644112a7a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:53:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:48.267378386Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 2fb1c14bf9753066aec855b33367d2f8d234a34c2f095aa33faa456420ccda88" id=acf39568-4144-4ec4-bb07-fa0644112a7a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:53:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:48.267392883Z" level=info msg="runSandbox: unmounting shmPath for sandbox 2fb1c14bf9753066aec855b33367d2f8d234a34c2f095aa33faa456420ccda88" id=acf39568-4144-4ec4-bb07-fa0644112a7a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:53:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:48.275287899Z" level=info msg="runSandbox: removing pod sandbox from storage: 
2fb1c14bf9753066aec855b33367d2f8d234a34c2f095aa33faa456420ccda88" id=acf39568-4144-4ec4-bb07-fa0644112a7a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:53:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:48.276612481Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=acf39568-4144-4ec4-bb07-fa0644112a7a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:53:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:48.276644116Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=acf39568-4144-4ec4-bb07-fa0644112a7a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:53:48 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:53:48.276845 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(2fb1c14bf9753066aec855b33367d2f8d234a34c2f095aa33faa456420ccda88): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:53:48 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:53:48.276929 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(2fb1c14bf9753066aec855b33367d2f8d234a34c2f095aa33faa456420ccda88): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:53:48 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:53:48.276979 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(2fb1c14bf9753066aec855b33367d2f8d234a34c2f095aa33faa456420ccda88): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:53:48 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:53:48.277095 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(2fb1c14bf9753066aec855b33367d2f8d234a34c2f095aa33faa456420ccda88): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 18:53:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:48.634919275Z" level=warning msg="Failed to find container exit file for 7d582d34db04db5ed219f6c9b86fc9d1aee44c82e16953b002f30500c4ca50cb: timed out waiting for the condition" id=a4e6223c-b468-434b-a6c6-7c3460601d5e name=/runtime.v1.RuntimeService/StopContainer Feb 23 18:53:49 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-07500565951fc4c22fe031a43ae8b0874197cb8a89360d080a904204296b4115-merged.mount: Deactivated successfully. 
Feb 23 18:53:49 ip-10-0-136-68 systemd[1]: run-netns-1558fdac\x2d5ce0\x2d47cd\x2d9bac\x2dae225a182eee.mount: Deactivated successfully. Feb 23 18:53:49 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-2fb1c14bf9753066aec855b33367d2f8d234a34c2f095aa33faa456420ccda88-userdata-shm.mount: Deactivated successfully. Feb 23 18:53:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:52.409167973Z" level=warning msg="Failed to find container exit file for 7d582d34db04db5ed219f6c9b86fc9d1aee44c82e16953b002f30500c4ca50cb: timed out waiting for the condition" id=a4e6223c-b468-434b-a6c6-7c3460601d5e name=/runtime.v1.RuntimeService/StopContainer Feb 23 18:53:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:52.411915849Z" level=info msg="Stopped container 7d582d34db04db5ed219f6c9b86fc9d1aee44c82e16953b002f30500c4ca50cb: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=a4e6223c-b468-434b-a6c6-7c3460601d5e name=/runtime.v1.RuntimeService/StopContainer Feb 23 18:53:52 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:53:52.412355 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:53:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:52.679910921Z" level=warning msg="Failed to find container exit file for 7d582d34db04db5ed219f6c9b86fc9d1aee44c82e16953b002f30500c4ca50cb: timed out waiting for the condition" id=8caa5482-0cb4-42b9-a0ea-acffef4efeec name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:53:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:53:56.292541 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container 
is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:53:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:53:56.292837 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:53:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:53:56.293048 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:53:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:53:56.293076 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:53:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:56.430088281Z" 
level=warning msg="Failed to find container exit file for 94809f8942162e9ea97950283f837b32199cf6a0926883bb55ddcdf0e7a771ab: timed out waiting for the condition" id=6aa91250-f163-4029-9554-2fc7738c9e2d name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:53:56 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:53:56.430989 2199 generic.go:332] "Generic (PLEG): container finished" podID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerID="7d582d34db04db5ed219f6c9b86fc9d1aee44c82e16953b002f30500c4ca50cb" exitCode=-1 Feb 23 18:53:56 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:53:56.431042 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerDied Data:7d582d34db04db5ed219f6c9b86fc9d1aee44c82e16953b002f30500c4ca50cb} Feb 23 18:53:56 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:53:56.431083 2199 scope.go:115] "RemoveContainer" containerID="94809f8942162e9ea97950283f837b32199cf6a0926883bb55ddcdf0e7a771ab" Feb 23 18:53:57 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:53:57.433009 2199 scope.go:115] "RemoveContainer" containerID="7d582d34db04db5ed219f6c9b86fc9d1aee44c82e16953b002f30500c4ca50cb" Feb 23 18:53:57 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:53:57.433493 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:53:59 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:53:59.216648 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:53:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:59.217069913Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=17321afb-2094-4cd1-a4b9-25a472140944 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:53:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:59.217135414Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:53:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:59.222507326Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:9b81ae75e00456e5734ad474f265846eb3626f4ac7fc19dd749e46fa82cc1622 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/d4fe97c2-cd19-42f8-9d47-8f58eb21ddd3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:53:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:53:59.222543119Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:54:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:54:00.179025794Z" level=warning msg="Failed to find container exit file for 94809f8942162e9ea97950283f837b32199cf6a0926883bb55ddcdf0e7a771ab: timed out waiting for the condition" id=1849bc20-b9ce-4653-ba1f-7228ae22a6b1 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:54:02 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:54:02.217453 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:54:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:54:02.217942962Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=62c947f9-b806-4c1f-9518-7d1d63869c6e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:54:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:54:02.218011804Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:54:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:54:02.223831688Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:a34129609d14f265fe1fd97061ff2e74cd815fb7e9d6b383520933e92af30943 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/38095596-e154-48b0-bf49-e735ef687e3a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:54:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:54:02.223866642Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:54:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:54:03.939022793Z" level=warning msg="Failed to find container exit file for 94809f8942162e9ea97950283f837b32199cf6a0926883bb55ddcdf0e7a771ab: timed out waiting for the condition" id=67de9484-9db8-4108-aadb-53df320d9e6b name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:54:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:54:03.939534059Z" level=info msg="Removing container: 94809f8942162e9ea97950283f837b32199cf6a0926883bb55ddcdf0e7a771ab" id=5b9f1e8c-a584-4704-b49a-6a109da80590 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 18:54:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:54:07.700905760Z" level=warning msg="Failed to find container exit file 
for 94809f8942162e9ea97950283f837b32199cf6a0926883bb55ddcdf0e7a771ab: timed out waiting for the condition" id=5b9f1e8c-a584-4704-b49a-6a109da80590 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 18:54:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:54:07.712625686Z" level=info msg="Removed container 94809f8942162e9ea97950283f837b32199cf6a0926883bb55ddcdf0e7a771ab: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=5b9f1e8c-a584-4704-b49a-6a109da80590 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 18:54:09 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:54:09.216944 2199 scope.go:115] "RemoveContainer" containerID="7d582d34db04db5ed219f6c9b86fc9d1aee44c82e16953b002f30500c4ca50cb" Feb 23 18:54:09 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:54:09.217522 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:54:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:54:12.210979803Z" level=warning msg="Failed to find container exit file for 7d582d34db04db5ed219f6c9b86fc9d1aee44c82e16953b002f30500c4ca50cb: timed out waiting for the condition" id=c9a8d0a4-5cfe-4b4d-b9ec-105777f4bb64 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 18:54:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:54:20.216886 2199 scope.go:115] "RemoveContainer" containerID="7d582d34db04db5ed219f6c9b86fc9d1aee44c82e16953b002f30500c4ca50cb" Feb 23 18:54:20 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:54:20.217468 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver 
pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:54:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:54:25.245160377Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(b5d1c35994d9223fa286b1b0698511e3211fd080c29b41b23be917e7377a7183): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=bf6cddb5-2e25-41eb-9142-c14c520386dc name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:54:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:54:25.245213936Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox b5d1c35994d9223fa286b1b0698511e3211fd080c29b41b23be917e7377a7183" id=bf6cddb5-2e25-41eb-9142-c14c520386dc name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:54:25 ip-10-0-136-68 systemd[1]: run-utsns-f2b4882f\x2db6a5\x2d4888\x2db20a\x2da1fc13131cfc.mount: Deactivated successfully. Feb 23 18:54:25 ip-10-0-136-68 systemd[1]: run-ipcns-f2b4882f\x2db6a5\x2d4888\x2db20a\x2da1fc13131cfc.mount: Deactivated successfully. Feb 23 18:54:25 ip-10-0-136-68 systemd[1]: run-netns-f2b4882f\x2db6a5\x2d4888\x2db20a\x2da1fc13131cfc.mount: Deactivated successfully. 
Feb 23 18:54:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:54:25.269317631Z" level=info msg="runSandbox: deleting pod ID b5d1c35994d9223fa286b1b0698511e3211fd080c29b41b23be917e7377a7183 from idIndex" id=bf6cddb5-2e25-41eb-9142-c14c520386dc name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:54:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:54:25.269351664Z" level=info msg="runSandbox: removing pod sandbox b5d1c35994d9223fa286b1b0698511e3211fd080c29b41b23be917e7377a7183" id=bf6cddb5-2e25-41eb-9142-c14c520386dc name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:54:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:54:25.269379774Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox b5d1c35994d9223fa286b1b0698511e3211fd080c29b41b23be917e7377a7183" id=bf6cddb5-2e25-41eb-9142-c14c520386dc name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:54:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:54:25.269393512Z" level=info msg="runSandbox: unmounting shmPath for sandbox b5d1c35994d9223fa286b1b0698511e3211fd080c29b41b23be917e7377a7183" id=bf6cddb5-2e25-41eb-9142-c14c520386dc name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:54:25 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-b5d1c35994d9223fa286b1b0698511e3211fd080c29b41b23be917e7377a7183-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:54:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:54:25.275315165Z" level=info msg="runSandbox: removing pod sandbox from storage: b5d1c35994d9223fa286b1b0698511e3211fd080c29b41b23be917e7377a7183" id=bf6cddb5-2e25-41eb-9142-c14c520386dc name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:54:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:54:25.276964555Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=bf6cddb5-2e25-41eb-9142-c14c520386dc name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:54:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:54:25.276995515Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=bf6cddb5-2e25-41eb-9142-c14c520386dc name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:54:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:54:25.277203 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(b5d1c35994d9223fa286b1b0698511e3211fd080c29b41b23be917e7377a7183): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:54:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:54:25.277287 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(b5d1c35994d9223fa286b1b0698511e3211fd080c29b41b23be917e7377a7183): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:54:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:54:25.277311 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(b5d1c35994d9223fa286b1b0698511e3211fd080c29b41b23be917e7377a7183): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:54:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:54:25.277372 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(b5d1c35994d9223fa286b1b0698511e3211fd080c29b41b23be917e7377a7183): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 18:54:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:54:26.291607 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:54:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:54:26.291880 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:54:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:54:26.292094 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:54:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:54:26.292125 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:54:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:54:30.235407607Z" level=info msg="NetworkStart: stopping network for sandbox 51b5712a3e9b723907d8f62666ce4532e8e46d3865751d7b2720bf2067b981c3" id=c8264f57-2a85-4201-9b94-d95b364ccdf7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:54:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:54:30.235539180Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:51b5712a3e9b723907d8f62666ce4532e8e46d3865751d7b2720bf2067b981c3 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/b96831cd-6a84-47a3-8a3d-f31744e40abe Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:54:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:54:30.235580227Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:54:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:54:30.235590620Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:54:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:54:30.235600782Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:54:33 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:54:33.217609 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" 
containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:54:33 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:54:33.217878 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:54:33 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:54:33.218094 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:54:33 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:54:33.218124 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:54:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:54:33.239239698Z" level=info msg="NetworkStart: stopping network for sandbox 2e9d66b4b6ed9e0ce1deecfc0f09d8b357e4b96cbce3e0dd0008e6cf8d21403f" id=6ffe6a35-7b78-4e99-b22c-b911423c5d57 
name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:54:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:54:33.239385129Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:2e9d66b4b6ed9e0ce1deecfc0f09d8b357e4b96cbce3e0dd0008e6cf8d21403f UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/0eb47b53-e83e-424a-9665-84750be1ae72 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:54:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:54:33.239425025Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:54:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:54:33.239436240Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:54:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:54:33.239447394Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:54:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:54:34.216692 2199 scope.go:115] "RemoveContainer" containerID="7d582d34db04db5ed219f6c9b86fc9d1aee44c82e16953b002f30500c4ca50cb" Feb 23 18:54:34 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:54:34.217415 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:54:36 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:54:36.217265 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:54:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:54:36.217690920Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=74788b00-cb87-493c-939e-a66ba1e3aee0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:54:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:54:36.217763063Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:54:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:54:36.223438575Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:adec9e792f93ca5bbea00eba6213bbb5e33ae1db7d335baadde2cdf55dc99baa UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/f8960b13-27c8-4994-b55c-f43ee80cd66a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:54:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:54:36.223464791Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:54:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:54:44.234274869Z" level=info msg="NetworkStart: stopping network for sandbox 9b81ae75e00456e5734ad474f265846eb3626f4ac7fc19dd749e46fa82cc1622" id=17321afb-2094-4cd1-a4b9-25a472140944 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:54:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:54:44.234400029Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:9b81ae75e00456e5734ad474f265846eb3626f4ac7fc19dd749e46fa82cc1622 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/d4fe97c2-cd19-42f8-9d47-8f58eb21ddd3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:54:44 
ip-10-0-136-68 crio[2158]: time="2023-02-23 18:54:44.234435051Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:54:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:54:44.234446192Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:54:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:54:44.234455375Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:54:45 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:54:45.217348 2199 scope.go:115] "RemoveContainer" containerID="7d582d34db04db5ed219f6c9b86fc9d1aee44c82e16953b002f30500c4ca50cb" Feb 23 18:54:45 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:54:45.217900 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:54:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:54:47.235445345Z" level=info msg="NetworkStart: stopping network for sandbox a34129609d14f265fe1fd97061ff2e74cd815fb7e9d6b383520933e92af30943" id=62c947f9-b806-4c1f-9518-7d1d63869c6e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:54:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:54:47.235584685Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:a34129609d14f265fe1fd97061ff2e74cd815fb7e9d6b383520933e92af30943 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/38095596-e154-48b0-bf49-e735ef687e3a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:54:47 
ip-10-0-136-68 crio[2158]: time="2023-02-23 18:54:47.235613892Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:54:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:54:47.235621402Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:54:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:54:47.235631689Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:54:56 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:54:56.217208 2199 scope.go:115] "RemoveContainer" containerID="7d582d34db04db5ed219f6c9b86fc9d1aee44c82e16953b002f30500c4ca50cb" Feb 23 18:54:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:54:56.217842 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:54:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:54:56.292116 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:54:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:54:56.292355 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:54:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:54:56.292592 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:54:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:54:56.292620 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:55:09 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:55:09.216468 2199 scope.go:115] "RemoveContainer" containerID="7d582d34db04db5ed219f6c9b86fc9d1aee44c82e16953b002f30500c4ca50cb" Feb 23 18:55:09 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:55:09.216883 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" 
podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:55:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:15.245856866Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(51b5712a3e9b723907d8f62666ce4532e8e46d3865751d7b2720bf2067b981c3): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c8264f57-2a85-4201-9b94-d95b364ccdf7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:55:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:15.245909897Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 51b5712a3e9b723907d8f62666ce4532e8e46d3865751d7b2720bf2067b981c3" id=c8264f57-2a85-4201-9b94-d95b364ccdf7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:55:15 ip-10-0-136-68 systemd[1]: run-utsns-b96831cd\x2d6a84\x2d47a3\x2d8a3d\x2df31744e40abe.mount: Deactivated successfully. Feb 23 18:55:15 ip-10-0-136-68 systemd[1]: run-ipcns-b96831cd\x2d6a84\x2d47a3\x2d8a3d\x2df31744e40abe.mount: Deactivated successfully. Feb 23 18:55:15 ip-10-0-136-68 systemd[1]: run-netns-b96831cd\x2d6a84\x2d47a3\x2d8a3d\x2df31744e40abe.mount: Deactivated successfully. 
Feb 23 18:55:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:15.264345577Z" level=info msg="runSandbox: deleting pod ID 51b5712a3e9b723907d8f62666ce4532e8e46d3865751d7b2720bf2067b981c3 from idIndex" id=c8264f57-2a85-4201-9b94-d95b364ccdf7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:55:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:15.264375730Z" level=info msg="runSandbox: removing pod sandbox 51b5712a3e9b723907d8f62666ce4532e8e46d3865751d7b2720bf2067b981c3" id=c8264f57-2a85-4201-9b94-d95b364ccdf7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:55:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:15.264415024Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 51b5712a3e9b723907d8f62666ce4532e8e46d3865751d7b2720bf2067b981c3" id=c8264f57-2a85-4201-9b94-d95b364ccdf7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:55:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:15.264428068Z" level=info msg="runSandbox: unmounting shmPath for sandbox 51b5712a3e9b723907d8f62666ce4532e8e46d3865751d7b2720bf2067b981c3" id=c8264f57-2a85-4201-9b94-d95b364ccdf7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:55:15 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-51b5712a3e9b723907d8f62666ce4532e8e46d3865751d7b2720bf2067b981c3-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:55:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:15.270312299Z" level=info msg="runSandbox: removing pod sandbox from storage: 51b5712a3e9b723907d8f62666ce4532e8e46d3865751d7b2720bf2067b981c3" id=c8264f57-2a85-4201-9b94-d95b364ccdf7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:55:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:15.271857370Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=c8264f57-2a85-4201-9b94-d95b364ccdf7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:55:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:15.271884751Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=c8264f57-2a85-4201-9b94-d95b364ccdf7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:55:15 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:55:15.272075 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(51b5712a3e9b723907d8f62666ce4532e8e46d3865751d7b2720bf2067b981c3): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:55:15 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:55:15.272128 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(51b5712a3e9b723907d8f62666ce4532e8e46d3865751d7b2720bf2067b981c3): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:55:15 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:55:15.272151 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(51b5712a3e9b723907d8f62666ce4532e8e46d3865751d7b2720bf2067b981c3): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:55:15 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:55:15.272202 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(51b5712a3e9b723907d8f62666ce4532e8e46d3865751d7b2720bf2067b981c3): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 18:55:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:18.249538314Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(2e9d66b4b6ed9e0ce1deecfc0f09d8b357e4b96cbce3e0dd0008e6cf8d21403f): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=6ffe6a35-7b78-4e99-b22c-b911423c5d57 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:55:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:18.249582607Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 2e9d66b4b6ed9e0ce1deecfc0f09d8b357e4b96cbce3e0dd0008e6cf8d21403f" id=6ffe6a35-7b78-4e99-b22c-b911423c5d57 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:55:18 ip-10-0-136-68 systemd[1]: run-utsns-0eb47b53\x2de83e\x2d424a\x2d9665\x2d84750be1ae72.mount: Deactivated successfully. Feb 23 18:55:18 ip-10-0-136-68 systemd[1]: run-ipcns-0eb47b53\x2de83e\x2d424a\x2d9665\x2d84750be1ae72.mount: Deactivated successfully. Feb 23 18:55:18 ip-10-0-136-68 systemd[1]: run-netns-0eb47b53\x2de83e\x2d424a\x2d9665\x2d84750be1ae72.mount: Deactivated successfully. 
Feb 23 18:55:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:18.269324419Z" level=info msg="runSandbox: deleting pod ID 2e9d66b4b6ed9e0ce1deecfc0f09d8b357e4b96cbce3e0dd0008e6cf8d21403f from idIndex" id=6ffe6a35-7b78-4e99-b22c-b911423c5d57 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:55:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:18.269358767Z" level=info msg="runSandbox: removing pod sandbox 2e9d66b4b6ed9e0ce1deecfc0f09d8b357e4b96cbce3e0dd0008e6cf8d21403f" id=6ffe6a35-7b78-4e99-b22c-b911423c5d57 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:55:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:18.269381424Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 2e9d66b4b6ed9e0ce1deecfc0f09d8b357e4b96cbce3e0dd0008e6cf8d21403f" id=6ffe6a35-7b78-4e99-b22c-b911423c5d57 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:55:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:18.269395531Z" level=info msg="runSandbox: unmounting shmPath for sandbox 2e9d66b4b6ed9e0ce1deecfc0f09d8b357e4b96cbce3e0dd0008e6cf8d21403f" id=6ffe6a35-7b78-4e99-b22c-b911423c5d57 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:55:18 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-2e9d66b4b6ed9e0ce1deecfc0f09d8b357e4b96cbce3e0dd0008e6cf8d21403f-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:55:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:18.275318337Z" level=info msg="runSandbox: removing pod sandbox from storage: 2e9d66b4b6ed9e0ce1deecfc0f09d8b357e4b96cbce3e0dd0008e6cf8d21403f" id=6ffe6a35-7b78-4e99-b22c-b911423c5d57 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:55:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:18.276840200Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=6ffe6a35-7b78-4e99-b22c-b911423c5d57 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:55:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:18.276872669Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=6ffe6a35-7b78-4e99-b22c-b911423c5d57 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:55:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:55:18.277051 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(2e9d66b4b6ed9e0ce1deecfc0f09d8b357e4b96cbce3e0dd0008e6cf8d21403f): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:55:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:55:18.277099 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(2e9d66b4b6ed9e0ce1deecfc0f09d8b357e4b96cbce3e0dd0008e6cf8d21403f): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:55:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:55:18.277120 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(2e9d66b4b6ed9e0ce1deecfc0f09d8b357e4b96cbce3e0dd0008e6cf8d21403f): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:55:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:55:18.277173 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(2e9d66b4b6ed9e0ce1deecfc0f09d8b357e4b96cbce3e0dd0008e6cf8d21403f): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 18:55:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:20.206736939Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3" id=acb3b6d8-4b4e-450f-a16b-9216a471465e name=/runtime.v1.ImageService/ImageStatus Feb 23 18:55:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:20.206948040Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:51cb7555d0a960fc0daae74a5b5c47c19fd1ae7bd31e2616ebc88f696f511e4d,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3],Size_:351828271,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=acb3b6d8-4b4e-450f-a16b-9216a471465e name=/runtime.v1.ImageService/ImageStatus Feb 23 18:55:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:55:20.216852 2199 scope.go:115] "RemoveContainer" containerID="7d582d34db04db5ed219f6c9b86fc9d1aee44c82e16953b002f30500c4ca50cb" Feb 23 18:55:20 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:55:20.217298 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:55:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:21.236793444Z" level=info msg="NetworkStart: stopping network for sandbox adec9e792f93ca5bbea00eba6213bbb5e33ae1db7d335baadde2cdf55dc99baa" id=74788b00-cb87-493c-939e-a66ba1e3aee0 
name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:55:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:21.236904852Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:adec9e792f93ca5bbea00eba6213bbb5e33ae1db7d335baadde2cdf55dc99baa UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/f8960b13-27c8-4994-b55c-f43ee80cd66a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:55:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:21.236931287Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:55:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:21.236938137Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:55:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:21.236946874Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:55:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:55:26.292468 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:55:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:55:26.292755 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container 
process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:55:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:55:26.292993 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:55:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:55:26.293019 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:55:29 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:55:29.217611 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:55:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:29.218100234Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=4d8fde7b-7a07-4362-aa03-5bc96f7cfd83 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:55:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:29.218175996Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:55:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:29.223470452Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:e5cf5c783db188c146040901e4614cd95bee58b85762db0c16ef2855f70ffd1e UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/5eac3de5-16f0-4fe1-9c75-ec839466d914 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:55:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:29.223505899Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:55:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:29.244054293Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(9b81ae75e00456e5734ad474f265846eb3626f4ac7fc19dd749e46fa82cc1622): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=17321afb-2094-4cd1-a4b9-25a472140944 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:55:29 ip-10-0-136-68 
crio[2158]: time="2023-02-23 18:55:29.244095012Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9b81ae75e00456e5734ad474f265846eb3626f4ac7fc19dd749e46fa82cc1622" id=17321afb-2094-4cd1-a4b9-25a472140944 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:55:29 ip-10-0-136-68 systemd[1]: run-utsns-d4fe97c2\x2dcd19\x2d42f8\x2d9d47\x2d8f58eb21ddd3.mount: Deactivated successfully. Feb 23 18:55:29 ip-10-0-136-68 systemd[1]: run-ipcns-d4fe97c2\x2dcd19\x2d42f8\x2d9d47\x2d8f58eb21ddd3.mount: Deactivated successfully. Feb 23 18:55:29 ip-10-0-136-68 systemd[1]: run-netns-d4fe97c2\x2dcd19\x2d42f8\x2d9d47\x2d8f58eb21ddd3.mount: Deactivated successfully. Feb 23 18:55:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:29.269316530Z" level=info msg="runSandbox: deleting pod ID 9b81ae75e00456e5734ad474f265846eb3626f4ac7fc19dd749e46fa82cc1622 from idIndex" id=17321afb-2094-4cd1-a4b9-25a472140944 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:55:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:29.269354835Z" level=info msg="runSandbox: removing pod sandbox 9b81ae75e00456e5734ad474f265846eb3626f4ac7fc19dd749e46fa82cc1622" id=17321afb-2094-4cd1-a4b9-25a472140944 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:55:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:29.269393793Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9b81ae75e00456e5734ad474f265846eb3626f4ac7fc19dd749e46fa82cc1622" id=17321afb-2094-4cd1-a4b9-25a472140944 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:55:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:29.269423738Z" level=info msg="runSandbox: unmounting shmPath for sandbox 9b81ae75e00456e5734ad474f265846eb3626f4ac7fc19dd749e46fa82cc1622" id=17321afb-2094-4cd1-a4b9-25a472140944 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:55:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:29.275294328Z" level=info msg="runSandbox: removing pod sandbox from 
storage: 9b81ae75e00456e5734ad474f265846eb3626f4ac7fc19dd749e46fa82cc1622" id=17321afb-2094-4cd1-a4b9-25a472140944 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:55:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:29.276845979Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=17321afb-2094-4cd1-a4b9-25a472140944 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:55:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:29.276876800Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=17321afb-2094-4cd1-a4b9-25a472140944 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:55:29 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:55:29.277106 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(9b81ae75e00456e5734ad474f265846eb3626f4ac7fc19dd749e46fa82cc1622): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:55:29 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:55:29.277180 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(9b81ae75e00456e5734ad474f265846eb3626f4ac7fc19dd749e46fa82cc1622): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:55:29 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:55:29.277214 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(9b81ae75e00456e5734ad474f265846eb3626f4ac7fc19dd749e46fa82cc1622): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:55:29 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:55:29.277353 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(9b81ae75e00456e5734ad474f265846eb3626f4ac7fc19dd749e46fa82cc1622): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 18:55:30 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:55:30.216652 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 18:55:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:30.217036183Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=9f4102cb-51d9-4a2e-bf54-7e46b7ecded7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:55:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:30.217096325Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:55:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:30.222727090Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:50101602a83ae134b134e72088621c0cceb026dd8656d838ddcc7a12255a7fed UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/90ac02c4-5669-410b-a006-9811c6a8baf5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:55:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:30.222761196Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:55:30 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-9b81ae75e00456e5734ad474f265846eb3626f4ac7fc19dd749e46fa82cc1622-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:55:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:32.245727993Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(a34129609d14f265fe1fd97061ff2e74cd815fb7e9d6b383520933e92af30943): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=62c947f9-b806-4c1f-9518-7d1d63869c6e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:55:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:32.245775551Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a34129609d14f265fe1fd97061ff2e74cd815fb7e9d6b383520933e92af30943" id=62c947f9-b806-4c1f-9518-7d1d63869c6e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:55:32 ip-10-0-136-68 systemd[1]: run-utsns-38095596\x2de154\x2d48b0\x2dbf49\x2de735ef687e3a.mount: Deactivated successfully. Feb 23 18:55:32 ip-10-0-136-68 systemd[1]: run-ipcns-38095596\x2de154\x2d48b0\x2dbf49\x2de735ef687e3a.mount: Deactivated successfully. Feb 23 18:55:32 ip-10-0-136-68 systemd[1]: run-netns-38095596\x2de154\x2d48b0\x2dbf49\x2de735ef687e3a.mount: Deactivated successfully. 
Feb 23 18:55:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:32.274332522Z" level=info msg="runSandbox: deleting pod ID a34129609d14f265fe1fd97061ff2e74cd815fb7e9d6b383520933e92af30943 from idIndex" id=62c947f9-b806-4c1f-9518-7d1d63869c6e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:55:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:32.274371893Z" level=info msg="runSandbox: removing pod sandbox a34129609d14f265fe1fd97061ff2e74cd815fb7e9d6b383520933e92af30943" id=62c947f9-b806-4c1f-9518-7d1d63869c6e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:55:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:32.274415038Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a34129609d14f265fe1fd97061ff2e74cd815fb7e9d6b383520933e92af30943" id=62c947f9-b806-4c1f-9518-7d1d63869c6e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:55:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:32.274433420Z" level=info msg="runSandbox: unmounting shmPath for sandbox a34129609d14f265fe1fd97061ff2e74cd815fb7e9d6b383520933e92af30943" id=62c947f9-b806-4c1f-9518-7d1d63869c6e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:55:32 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-a34129609d14f265fe1fd97061ff2e74cd815fb7e9d6b383520933e92af30943-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:55:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:32.281298704Z" level=info msg="runSandbox: removing pod sandbox from storage: a34129609d14f265fe1fd97061ff2e74cd815fb7e9d6b383520933e92af30943" id=62c947f9-b806-4c1f-9518-7d1d63869c6e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:55:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:32.283685039Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=62c947f9-b806-4c1f-9518-7d1d63869c6e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:55:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:32.283799983Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=62c947f9-b806-4c1f-9518-7d1d63869c6e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:55:32 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:55:32.284050 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(a34129609d14f265fe1fd97061ff2e74cd815fb7e9d6b383520933e92af30943): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:55:32 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:55:32.284116 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(a34129609d14f265fe1fd97061ff2e74cd815fb7e9d6b383520933e92af30943): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:55:32 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:55:32.284155 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(a34129609d14f265fe1fd97061ff2e74cd815fb7e9d6b383520933e92af30943): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:55:32 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:55:32.284230 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(a34129609d14f265fe1fd97061ff2e74cd815fb7e9d6b383520933e92af30943): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 18:55:33 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:55:33.217171 2199 scope.go:115] "RemoveContainer" containerID="7d582d34db04db5ed219f6c9b86fc9d1aee44c82e16953b002f30500c4ca50cb" Feb 23 18:55:33 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:55:33.217687 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:55:41 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:55:41.217386 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:55:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:41.217784544Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=8c66dbbc-c538-48b3-8509-50376a94f08a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:55:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:41.217838236Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:55:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:41.223703640Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:bffc864c95b6a1c3b2661466531a95aee6a64f4c9f4d9b5b7b0bfad6acf75462 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/89cf7dc5-1023-4667-ab9b-a07b55b74bbd Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:55:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 
18:55:41.223740238Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:55:45 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:55:45.217331 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:55:45 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:55:45.217377 2199 scope.go:115] "RemoveContainer" containerID="7d582d34db04db5ed219f6c9b86fc9d1aee44c82e16953b002f30500c4ca50cb" Feb 23 18:55:45 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:55:45.217890 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:55:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:45.217798150Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=8584bff8-43c5-4e27-8ddf-e155815f62a8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:55:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:45.217868379Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:55:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:45.223342977Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:48023b1df4c1e9a1bd2ac2d5a20ea074faef38dff673403dc65565b3ecb19ce9 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/8eafb514-3c73-4bcc-ad54-473194ae01fe Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:55:45 
ip-10-0-136-68 crio[2158]: time="2023-02-23 18:55:45.223380569Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:55:50 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:55:50.217867 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:55:50 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:55:50.218562 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:55:50 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:55:50.218855 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:55:50 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:55:50.218922 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: 
checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:55:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:55:56.292018 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:55:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:55:56.292305 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:55:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:55:56.292528 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:55:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:55:56.292567 
2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:56:00 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:56:00.216609 2199 scope.go:115] "RemoveContainer" containerID="7d582d34db04db5ed219f6c9b86fc9d1aee44c82e16953b002f30500c4ca50cb" Feb 23 18:56:00 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:56:00.217139 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:56:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:56:06.246352115Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(adec9e792f93ca5bbea00eba6213bbb5e33ae1db7d335baadde2cdf55dc99baa): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=74788b00-cb87-493c-939e-a66ba1e3aee0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:56:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 
18:56:06.246401658Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox adec9e792f93ca5bbea00eba6213bbb5e33ae1db7d335baadde2cdf55dc99baa" id=74788b00-cb87-493c-939e-a66ba1e3aee0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:56:06 ip-10-0-136-68 systemd[1]: run-utsns-f8960b13\x2d27c8\x2d4994\x2db55c\x2df43ee80cd66a.mount: Deactivated successfully. Feb 23 18:56:06 ip-10-0-136-68 systemd[1]: run-ipcns-f8960b13\x2d27c8\x2d4994\x2db55c\x2df43ee80cd66a.mount: Deactivated successfully. Feb 23 18:56:06 ip-10-0-136-68 systemd[1]: run-netns-f8960b13\x2d27c8\x2d4994\x2db55c\x2df43ee80cd66a.mount: Deactivated successfully. Feb 23 18:56:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:56:06.291338681Z" level=info msg="runSandbox: deleting pod ID adec9e792f93ca5bbea00eba6213bbb5e33ae1db7d335baadde2cdf55dc99baa from idIndex" id=74788b00-cb87-493c-939e-a66ba1e3aee0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:56:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:56:06.291374661Z" level=info msg="runSandbox: removing pod sandbox adec9e792f93ca5bbea00eba6213bbb5e33ae1db7d335baadde2cdf55dc99baa" id=74788b00-cb87-493c-939e-a66ba1e3aee0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:56:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:56:06.291400857Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox adec9e792f93ca5bbea00eba6213bbb5e33ae1db7d335baadde2cdf55dc99baa" id=74788b00-cb87-493c-939e-a66ba1e3aee0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:56:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:56:06.291417691Z" level=info msg="runSandbox: unmounting shmPath for sandbox adec9e792f93ca5bbea00eba6213bbb5e33ae1db7d335baadde2cdf55dc99baa" id=74788b00-cb87-493c-939e-a66ba1e3aee0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:56:06 ip-10-0-136-68 systemd[1]: 
run-containers-storage-overlay\x2dcontainers-adec9e792f93ca5bbea00eba6213bbb5e33ae1db7d335baadde2cdf55dc99baa-userdata-shm.mount: Deactivated successfully. Feb 23 18:56:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:56:06.295324501Z" level=info msg="runSandbox: removing pod sandbox from storage: adec9e792f93ca5bbea00eba6213bbb5e33ae1db7d335baadde2cdf55dc99baa" id=74788b00-cb87-493c-939e-a66ba1e3aee0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:56:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:56:06.296962818Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=74788b00-cb87-493c-939e-a66ba1e3aee0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:56:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:56:06.296994725Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=74788b00-cb87-493c-939e-a66ba1e3aee0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:56:06 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:56:06.297196 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(adec9e792f93ca5bbea00eba6213bbb5e33ae1db7d335baadde2cdf55dc99baa): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:56:06 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:56:06.297305 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(adec9e792f93ca5bbea00eba6213bbb5e33ae1db7d335baadde2cdf55dc99baa): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:56:06 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:56:06.297329 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(adec9e792f93ca5bbea00eba6213bbb5e33ae1db7d335baadde2cdf55dc99baa): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:56:06 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:56:06.297402 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(adec9e792f93ca5bbea00eba6213bbb5e33ae1db7d335baadde2cdf55dc99baa): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 18:56:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:56:14.235202097Z" level=info msg="NetworkStart: stopping network for sandbox e5cf5c783db188c146040901e4614cd95bee58b85762db0c16ef2855f70ffd1e" id=4d8fde7b-7a07-4362-aa03-5bc96f7cfd83 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:56:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:56:14.235344910Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:e5cf5c783db188c146040901e4614cd95bee58b85762db0c16ef2855f70ffd1e UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/5eac3de5-16f0-4fe1-9c75-ec839466d914 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:56:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:56:14.235382166Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:56:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:56:14.235392733Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:56:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:56:14.235401972Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:56:15 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:56:15.216686 2199 scope.go:115] "RemoveContainer" containerID="7d582d34db04db5ed219f6c9b86fc9d1aee44c82e16953b002f30500c4ca50cb" Feb 23 18:56:15 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:56:15.217093 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver 
pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:56:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:56:15.236603874Z" level=info msg="NetworkStart: stopping network for sandbox 50101602a83ae134b134e72088621c0cceb026dd8656d838ddcc7a12255a7fed" id=9f4102cb-51d9-4a2e-bf54-7e46b7ecded7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:56:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:56:15.236718644Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:50101602a83ae134b134e72088621c0cceb026dd8656d838ddcc7a12255a7fed UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/90ac02c4-5669-410b-a006-9811c6a8baf5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:56:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:56:15.236758365Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:56:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:56:15.236770529Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:56:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:56:15.236777761Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:56:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:56:20.216759 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:56:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:56:20.217185930Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=9687c06d-5188-4895-855f-e0798f9e2d2a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:56:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:56:20.217284831Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:56:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:56:20.222678332Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:ae77977ebf1f480788c29a0fd70eefcafecfced18bf774054ef43512b9f17f3d UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/846da77f-a7c8-4cc0-8748-f87dcbb2f8f9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:56:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:56:20.222713308Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:56:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:56:26.235064989Z" level=info msg="NetworkStart: stopping network for sandbox bffc864c95b6a1c3b2661466531a95aee6a64f4c9f4d9b5b7b0bfad6acf75462" id=8c66dbbc-c538-48b3-8509-50376a94f08a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:56:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:56:26.235183602Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:bffc864c95b6a1c3b2661466531a95aee6a64f4c9f4d9b5b7b0bfad6acf75462 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/89cf7dc5-1023-4667-ab9b-a07b55b74bbd Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:56:26 
ip-10-0-136-68 crio[2158]: time="2023-02-23 18:56:26.235210431Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:56:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:56:26.235218390Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:56:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:56:26.235225089Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:56:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:56:26.292106 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:56:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:56:26.292369 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:56:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:56:26.292594 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process 
not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:56:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:56:26.292632 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:56:30 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:56:30.216832 2199 scope.go:115] "RemoveContainer" containerID="7d582d34db04db5ed219f6c9b86fc9d1aee44c82e16953b002f30500c4ca50cb" Feb 23 18:56:30 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:56:30.217401 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:56:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:56:30.234849727Z" level=info msg="NetworkStart: stopping network for sandbox 48023b1df4c1e9a1bd2ac2d5a20ea074faef38dff673403dc65565b3ecb19ce9" id=8584bff8-43c5-4e27-8ddf-e155815f62a8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:56:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:56:30.234987725Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:48023b1df4c1e9a1bd2ac2d5a20ea074faef38dff673403dc65565b3ecb19ce9 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/8eafb514-3c73-4bcc-ad54-473194ae01fe Networks:[] 
RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:56:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:56:30.235029938Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:56:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:56:30.235040879Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:56:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:56:30.235070945Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:56:43 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:56:43.217331 2199 scope.go:115] "RemoveContainer" containerID="7d582d34db04db5ed219f6c9b86fc9d1aee44c82e16953b002f30500c4ca50cb" Feb 23 18:56:43 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:56:43.217905 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:56:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:56:54.216559 2199 scope.go:115] "RemoveContainer" containerID="7d582d34db04db5ed219f6c9b86fc9d1aee44c82e16953b002f30500c4ca50cb" Feb 23 18:56:54 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:56:54.217156 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" 
podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:56:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:56:56.292656 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:56:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:56:56.292939 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:56:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:56:56.293168 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:56:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:56:56.293203 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process 
not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:56:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:56:59.244501321Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(e5cf5c783db188c146040901e4614cd95bee58b85762db0c16ef2855f70ffd1e): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=4d8fde7b-7a07-4362-aa03-5bc96f7cfd83 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:56:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:56:59.244552673Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e5cf5c783db188c146040901e4614cd95bee58b85762db0c16ef2855f70ffd1e" id=4d8fde7b-7a07-4362-aa03-5bc96f7cfd83 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:56:59 ip-10-0-136-68 systemd[1]: run-utsns-5eac3de5\x2d16f0\x2d4fe1\x2d9c75\x2dec839466d914.mount: Deactivated successfully. Feb 23 18:56:59 ip-10-0-136-68 systemd[1]: run-ipcns-5eac3de5\x2d16f0\x2d4fe1\x2d9c75\x2dec839466d914.mount: Deactivated successfully. Feb 23 18:56:59 ip-10-0-136-68 systemd[1]: run-netns-5eac3de5\x2d16f0\x2d4fe1\x2d9c75\x2dec839466d914.mount: Deactivated successfully. 
Feb 23 18:56:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:56:59.266364629Z" level=info msg="runSandbox: deleting pod ID e5cf5c783db188c146040901e4614cd95bee58b85762db0c16ef2855f70ffd1e from idIndex" id=4d8fde7b-7a07-4362-aa03-5bc96f7cfd83 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:56:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:56:59.266405969Z" level=info msg="runSandbox: removing pod sandbox e5cf5c783db188c146040901e4614cd95bee58b85762db0c16ef2855f70ffd1e" id=4d8fde7b-7a07-4362-aa03-5bc96f7cfd83 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:56:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:56:59.266433127Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e5cf5c783db188c146040901e4614cd95bee58b85762db0c16ef2855f70ffd1e" id=4d8fde7b-7a07-4362-aa03-5bc96f7cfd83 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:56:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:56:59.266445996Z" level=info msg="runSandbox: unmounting shmPath for sandbox e5cf5c783db188c146040901e4614cd95bee58b85762db0c16ef2855f70ffd1e" id=4d8fde7b-7a07-4362-aa03-5bc96f7cfd83 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:56:59 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-e5cf5c783db188c146040901e4614cd95bee58b85762db0c16ef2855f70ffd1e-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:56:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:56:59.272318320Z" level=info msg="runSandbox: removing pod sandbox from storage: e5cf5c783db188c146040901e4614cd95bee58b85762db0c16ef2855f70ffd1e" id=4d8fde7b-7a07-4362-aa03-5bc96f7cfd83 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:56:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:56:59.273864134Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=4d8fde7b-7a07-4362-aa03-5bc96f7cfd83 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:56:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:56:59.273891016Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=4d8fde7b-7a07-4362-aa03-5bc96f7cfd83 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:56:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:56:59.274104 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(e5cf5c783db188c146040901e4614cd95bee58b85762db0c16ef2855f70ffd1e): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:56:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:56:59.274161 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(e5cf5c783db188c146040901e4614cd95bee58b85762db0c16ef2855f70ffd1e): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:56:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:56:59.274189 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(e5cf5c783db188c146040901e4614cd95bee58b85762db0c16ef2855f70ffd1e): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:56:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:56:59.274272 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(e5cf5c783db188c146040901e4614cd95bee58b85762db0c16ef2855f70ffd1e): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 18:57:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:00.246620384Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(50101602a83ae134b134e72088621c0cceb026dd8656d838ddcc7a12255a7fed): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=9f4102cb-51d9-4a2e-bf54-7e46b7ecded7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:57:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:00.246669313Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 50101602a83ae134b134e72088621c0cceb026dd8656d838ddcc7a12255a7fed" id=9f4102cb-51d9-4a2e-bf54-7e46b7ecded7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:57:00 ip-10-0-136-68 systemd[1]: run-utsns-90ac02c4\x2d5669\x2d410b\x2da006\x2d9811c6a8baf5.mount: Deactivated successfully. Feb 23 18:57:00 ip-10-0-136-68 systemd[1]: run-ipcns-90ac02c4\x2d5669\x2d410b\x2da006\x2d9811c6a8baf5.mount: Deactivated successfully. Feb 23 18:57:00 ip-10-0-136-68 systemd[1]: run-netns-90ac02c4\x2d5669\x2d410b\x2da006\x2d9811c6a8baf5.mount: Deactivated successfully. 
Feb 23 18:57:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:00.281348244Z" level=info msg="runSandbox: deleting pod ID 50101602a83ae134b134e72088621c0cceb026dd8656d838ddcc7a12255a7fed from idIndex" id=9f4102cb-51d9-4a2e-bf54-7e46b7ecded7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:57:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:00.281395217Z" level=info msg="runSandbox: removing pod sandbox 50101602a83ae134b134e72088621c0cceb026dd8656d838ddcc7a12255a7fed" id=9f4102cb-51d9-4a2e-bf54-7e46b7ecded7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:57:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:00.281446982Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 50101602a83ae134b134e72088621c0cceb026dd8656d838ddcc7a12255a7fed" id=9f4102cb-51d9-4a2e-bf54-7e46b7ecded7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:57:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:00.281465580Z" level=info msg="runSandbox: unmounting shmPath for sandbox 50101602a83ae134b134e72088621c0cceb026dd8656d838ddcc7a12255a7fed" id=9f4102cb-51d9-4a2e-bf54-7e46b7ecded7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:57:00 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-50101602a83ae134b134e72088621c0cceb026dd8656d838ddcc7a12255a7fed-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:57:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:00.288315140Z" level=info msg="runSandbox: removing pod sandbox from storage: 50101602a83ae134b134e72088621c0cceb026dd8656d838ddcc7a12255a7fed" id=9f4102cb-51d9-4a2e-bf54-7e46b7ecded7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:57:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:00.289921825Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=9f4102cb-51d9-4a2e-bf54-7e46b7ecded7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:57:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:00.289956847Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=9f4102cb-51d9-4a2e-bf54-7e46b7ecded7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:57:00 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:57:00.290197 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(50101602a83ae134b134e72088621c0cceb026dd8656d838ddcc7a12255a7fed): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:57:00 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:57:00.290306 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(50101602a83ae134b134e72088621c0cceb026dd8656d838ddcc7a12255a7fed): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:57:00 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:57:00.290343 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(50101602a83ae134b134e72088621c0cceb026dd8656d838ddcc7a12255a7fed): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:57:00 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:57:00.290423 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(50101602a83ae134b134e72088621c0cceb026dd8656d838ddcc7a12255a7fed): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 18:57:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:05.236508944Z" level=info msg="NetworkStart: stopping network for sandbox ae77977ebf1f480788c29a0fd70eefcafecfced18bf774054ef43512b9f17f3d" id=9687c06d-5188-4895-855f-e0798f9e2d2a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:57:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:05.236649565Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:ae77977ebf1f480788c29a0fd70eefcafecfced18bf774054ef43512b9f17f3d UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/846da77f-a7c8-4cc0-8748-f87dcbb2f8f9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:57:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:05.236692241Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:57:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:05.236703783Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:57:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:05.236713978Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:57:09 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:57:09.217210 2199 scope.go:115] "RemoveContainer" containerID="7d582d34db04db5ed219f6c9b86fc9d1aee44c82e16953b002f30500c4ca50cb" Feb 23 18:57:09 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:57:09.217776 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver 
pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:57:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:11.245124070Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(bffc864c95b6a1c3b2661466531a95aee6a64f4c9f4d9b5b7b0bfad6acf75462): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=8c66dbbc-c538-48b3-8509-50376a94f08a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:57:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:11.245176879Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox bffc864c95b6a1c3b2661466531a95aee6a64f4c9f4d9b5b7b0bfad6acf75462" id=8c66dbbc-c538-48b3-8509-50376a94f08a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:57:11 ip-10-0-136-68 systemd[1]: run-utsns-89cf7dc5\x2d1023\x2d4667\x2dab9b\x2da07b55b74bbd.mount: Deactivated successfully. Feb 23 18:57:11 ip-10-0-136-68 systemd[1]: run-ipcns-89cf7dc5\x2d1023\x2d4667\x2dab9b\x2da07b55b74bbd.mount: Deactivated successfully. Feb 23 18:57:11 ip-10-0-136-68 systemd[1]: run-netns-89cf7dc5\x2d1023\x2d4667\x2dab9b\x2da07b55b74bbd.mount: Deactivated successfully. 
Feb 23 18:57:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:11.267322343Z" level=info msg="runSandbox: deleting pod ID bffc864c95b6a1c3b2661466531a95aee6a64f4c9f4d9b5b7b0bfad6acf75462 from idIndex" id=8c66dbbc-c538-48b3-8509-50376a94f08a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:57:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:11.267362303Z" level=info msg="runSandbox: removing pod sandbox bffc864c95b6a1c3b2661466531a95aee6a64f4c9f4d9b5b7b0bfad6acf75462" id=8c66dbbc-c538-48b3-8509-50376a94f08a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:57:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:11.267392817Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox bffc864c95b6a1c3b2661466531a95aee6a64f4c9f4d9b5b7b0bfad6acf75462" id=8c66dbbc-c538-48b3-8509-50376a94f08a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:57:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:11.267405509Z" level=info msg="runSandbox: unmounting shmPath for sandbox bffc864c95b6a1c3b2661466531a95aee6a64f4c9f4d9b5b7b0bfad6acf75462" id=8c66dbbc-c538-48b3-8509-50376a94f08a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:57:11 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-bffc864c95b6a1c3b2661466531a95aee6a64f4c9f4d9b5b7b0bfad6acf75462-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:57:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:11.274321390Z" level=info msg="runSandbox: removing pod sandbox from storage: bffc864c95b6a1c3b2661466531a95aee6a64f4c9f4d9b5b7b0bfad6acf75462" id=8c66dbbc-c538-48b3-8509-50376a94f08a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:57:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:11.275964073Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=8c66dbbc-c538-48b3-8509-50376a94f08a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:57:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:11.275998906Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=8c66dbbc-c538-48b3-8509-50376a94f08a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:57:11 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:57:11.276203 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(bffc864c95b6a1c3b2661466531a95aee6a64f4c9f4d9b5b7b0bfad6acf75462): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:57:11 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:57:11.276319 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(bffc864c95b6a1c3b2661466531a95aee6a64f4c9f4d9b5b7b0bfad6acf75462): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:57:11 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:57:11.276355 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(bffc864c95b6a1c3b2661466531a95aee6a64f4c9f4d9b5b7b0bfad6acf75462): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:57:11 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:57:11.276436 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(bffc864c95b6a1c3b2661466531a95aee6a64f4c9f4d9b5b7b0bfad6acf75462): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 18:57:13 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:57:13.217472 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 18:57:13 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:57:13.217881 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:57:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:13.217851398Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=a43c7d45-8424-487f-a859-bd74127fc2ab name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:57:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:13.217913915Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:57:13 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:57:13.218126 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:57:13 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:57:13.218353 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" 
containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:57:13 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:57:13.218386 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:57:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:13.223560013Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:5025e9c09f4baeff5dd9622ae568f491ffcba889d3412333bc09f3391503c74e UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/67fbaa5d-4e3b-426a-9ece-9da219ca6000 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:57:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:13.223586346Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:57:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:57:14.217475 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:57:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:14.217924322Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=11522118-765e-4ff1-9998-9cad926c97a9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:57:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:14.217993712Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:57:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:14.223486793Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:bfdf41d647de43c50cdb064c3bf4dda4d575f45affb29f73d68488b35bef7268 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/1aa0f391-3005-4891-994a-5348e4cc65cc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:57:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:14.223521000Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:57:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:15.244088364Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(48023b1df4c1e9a1bd2ac2d5a20ea074faef38dff673403dc65565b3ecb19ce9): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=8584bff8-43c5-4e27-8ddf-e155815f62a8 
name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:57:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:15.244134935Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 48023b1df4c1e9a1bd2ac2d5a20ea074faef38dff673403dc65565b3ecb19ce9" id=8584bff8-43c5-4e27-8ddf-e155815f62a8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:57:15 ip-10-0-136-68 systemd[1]: run-utsns-8eafb514\x2d3c73\x2d4bcc\x2dad54\x2d473194ae01fe.mount: Deactivated successfully. Feb 23 18:57:15 ip-10-0-136-68 systemd[1]: run-ipcns-8eafb514\x2d3c73\x2d4bcc\x2dad54\x2d473194ae01fe.mount: Deactivated successfully. Feb 23 18:57:15 ip-10-0-136-68 systemd[1]: run-netns-8eafb514\x2d3c73\x2d4bcc\x2dad54\x2d473194ae01fe.mount: Deactivated successfully. Feb 23 18:57:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:15.270329019Z" level=info msg="runSandbox: deleting pod ID 48023b1df4c1e9a1bd2ac2d5a20ea074faef38dff673403dc65565b3ecb19ce9 from idIndex" id=8584bff8-43c5-4e27-8ddf-e155815f62a8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:57:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:15.270360780Z" level=info msg="runSandbox: removing pod sandbox 48023b1df4c1e9a1bd2ac2d5a20ea074faef38dff673403dc65565b3ecb19ce9" id=8584bff8-43c5-4e27-8ddf-e155815f62a8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:57:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:15.270391590Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 48023b1df4c1e9a1bd2ac2d5a20ea074faef38dff673403dc65565b3ecb19ce9" id=8584bff8-43c5-4e27-8ddf-e155815f62a8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:57:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:15.270410265Z" level=info msg="runSandbox: unmounting shmPath for sandbox 48023b1df4c1e9a1bd2ac2d5a20ea074faef38dff673403dc65565b3ecb19ce9" id=8584bff8-43c5-4e27-8ddf-e155815f62a8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:57:15 ip-10-0-136-68 systemd[1]: 
run-containers-storage-overlay\x2dcontainers-48023b1df4c1e9a1bd2ac2d5a20ea074faef38dff673403dc65565b3ecb19ce9-userdata-shm.mount: Deactivated successfully. Feb 23 18:57:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:15.280300338Z" level=info msg="runSandbox: removing pod sandbox from storage: 48023b1df4c1e9a1bd2ac2d5a20ea074faef38dff673403dc65565b3ecb19ce9" id=8584bff8-43c5-4e27-8ddf-e155815f62a8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:57:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:15.281861923Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=8584bff8-43c5-4e27-8ddf-e155815f62a8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:57:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:15.281893824Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=8584bff8-43c5-4e27-8ddf-e155815f62a8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:57:15 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:57:15.282089 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(48023b1df4c1e9a1bd2ac2d5a20ea074faef38dff673403dc65565b3ecb19ce9): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:57:15 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:57:15.282159 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(48023b1df4c1e9a1bd2ac2d5a20ea074faef38dff673403dc65565b3ecb19ce9): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:57:15 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:57:15.282199 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(48023b1df4c1e9a1bd2ac2d5a20ea074faef38dff673403dc65565b3ecb19ce9): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:57:15 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:57:15.282315 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(48023b1df4c1e9a1bd2ac2d5a20ea074faef38dff673403dc65565b3ecb19ce9): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 18:57:23 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:57:23.217374 2199 scope.go:115] "RemoveContainer" containerID="7d582d34db04db5ed219f6c9b86fc9d1aee44c82e16953b002f30500c4ca50cb" Feb 23 18:57:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:57:23.217934 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:57:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:57:24.217353 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:57:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:24.217790387Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=1e3afdf9-c6b1-4c2f-81c8-80b09dcfb869 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:57:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:24.217861334Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:57:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:24.223591068Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:f80f57239c8c32d617a29c58678135f36575a34c5b34e4944d360d6299a42831 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/5df408db-db0d-45c9-b232-9599c8dee799 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:57:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 
18:57:24.223616456Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:57:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:57:26.292616 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:57:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:57:26.292852 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:57:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:57:26.293079 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:57:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:57:26.293110 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:57:28 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:57:28.217424 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 18:57:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:28.217866374Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=88bb3b7e-32e5-46c6-98e1-c77e0b3ceb87 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:57:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:28.217942809Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:57:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:28.224026227Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:41e9102bea0be51f92f02c1b0862624c11553619e2ee9de31dc06be6a4fdc918 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/dbcd8fcc-c34b-4b1e-839a-e65079924282 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:57:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:28.224051273Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:57:37 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:57:37.216883 2199 scope.go:115] "RemoveContainer" containerID="7d582d34db04db5ed219f6c9b86fc9d1aee44c82e16953b002f30500c4ca50cb" Feb 23 18:57:37 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:57:37.217323 2199 pod_workers.go:965] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:57:49 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:57:49.217235 2199 scope.go:115] "RemoveContainer" containerID="7d582d34db04db5ed219f6c9b86fc9d1aee44c82e16953b002f30500c4ca50cb" Feb 23 18:57:49 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:57:49.217660 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:57:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:50.246097685Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(ae77977ebf1f480788c29a0fd70eefcafecfced18bf774054ef43512b9f17f3d): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=9687c06d-5188-4895-855f-e0798f9e2d2a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:57:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:50.246357290Z" level=info msg="runSandbox: cleaning up namespaces after failing to run 
sandbox ae77977ebf1f480788c29a0fd70eefcafecfced18bf774054ef43512b9f17f3d" id=9687c06d-5188-4895-855f-e0798f9e2d2a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:57:50 ip-10-0-136-68 systemd[1]: run-utsns-846da77f\x2da7c8\x2d4cc0\x2d8748\x2df87dcbb2f8f9.mount: Deactivated successfully. Feb 23 18:57:50 ip-10-0-136-68 systemd[1]: run-ipcns-846da77f\x2da7c8\x2d4cc0\x2d8748\x2df87dcbb2f8f9.mount: Deactivated successfully. Feb 23 18:57:50 ip-10-0-136-68 systemd[1]: run-netns-846da77f\x2da7c8\x2d4cc0\x2d8748\x2df87dcbb2f8f9.mount: Deactivated successfully. Feb 23 18:57:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:50.281347102Z" level=info msg="runSandbox: deleting pod ID ae77977ebf1f480788c29a0fd70eefcafecfced18bf774054ef43512b9f17f3d from idIndex" id=9687c06d-5188-4895-855f-e0798f9e2d2a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:57:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:50.281393532Z" level=info msg="runSandbox: removing pod sandbox ae77977ebf1f480788c29a0fd70eefcafecfced18bf774054ef43512b9f17f3d" id=9687c06d-5188-4895-855f-e0798f9e2d2a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:57:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:50.281458294Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox ae77977ebf1f480788c29a0fd70eefcafecfced18bf774054ef43512b9f17f3d" id=9687c06d-5188-4895-855f-e0798f9e2d2a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:57:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:50.281475019Z" level=info msg="runSandbox: unmounting shmPath for sandbox ae77977ebf1f480788c29a0fd70eefcafecfced18bf774054ef43512b9f17f3d" id=9687c06d-5188-4895-855f-e0798f9e2d2a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:57:50 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-ae77977ebf1f480788c29a0fd70eefcafecfced18bf774054ef43512b9f17f3d-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:57:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:50.285324874Z" level=info msg="runSandbox: removing pod sandbox from storage: ae77977ebf1f480788c29a0fd70eefcafecfced18bf774054ef43512b9f17f3d" id=9687c06d-5188-4895-855f-e0798f9e2d2a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:57:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:50.286967983Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=9687c06d-5188-4895-855f-e0798f9e2d2a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:57:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:50.286997799Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=9687c06d-5188-4895-855f-e0798f9e2d2a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:57:50 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:57:50.287212 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(ae77977ebf1f480788c29a0fd70eefcafecfced18bf774054ef43512b9f17f3d): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:57:50 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:57:50.287298 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(ae77977ebf1f480788c29a0fd70eefcafecfced18bf774054ef43512b9f17f3d): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:57:50 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:57:50.287324 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(ae77977ebf1f480788c29a0fd70eefcafecfced18bf774054ef43512b9f17f3d): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:57:50 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:57:50.287397 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(ae77977ebf1f480788c29a0fd70eefcafecfced18bf774054ef43512b9f17f3d): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 18:57:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:57:56.291945 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:57:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:57:56.292238 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:57:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:57:56.292522 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:57:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:57:56.292552 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:57:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:58.234879400Z" level=info msg="NetworkStart: stopping network for sandbox 5025e9c09f4baeff5dd9622ae568f491ffcba889d3412333bc09f3391503c74e" id=a43c7d45-8424-487f-a859-bd74127fc2ab name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:57:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:58.234993113Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:5025e9c09f4baeff5dd9622ae568f491ffcba889d3412333bc09f3391503c74e UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/67fbaa5d-4e3b-426a-9ece-9da219ca6000 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:57:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:58.235020638Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:57:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:58.235028892Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:57:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:58.235035669Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:57:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:59.235756386Z" level=info msg="NetworkStart: stopping network for sandbox bfdf41d647de43c50cdb064c3bf4dda4d575f45affb29f73d68488b35bef7268" id=11522118-765e-4ff1-9998-9cad926c97a9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:57:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:59.235878312Z" level=info msg="Got pod network 
&{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:bfdf41d647de43c50cdb064c3bf4dda4d575f45affb29f73d68488b35bef7268 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/1aa0f391-3005-4891-994a-5348e4cc65cc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:57:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:59.235907005Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:57:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:59.235916250Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:57:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:57:59.235923696Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:58:02 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:58:02.217180 2199 scope.go:115] "RemoveContainer" containerID="7d582d34db04db5ed219f6c9b86fc9d1aee44c82e16953b002f30500c4ca50cb" Feb 23 18:58:02 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:58:02.217180 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 18:58:02 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:58:02.217718 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:58:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:02.217832109Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=dba64007-5de8-4fbc-bb68-134b812d72d9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:58:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:02.217939415Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:58:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:02.223532699Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:7efe86806477ed9c26ef7a0242be7960ed567627790317dc4dbd05813a08f8c8 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/60fda874-530c-4e38-a7df-14010757d383 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:58:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:02.223560374Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:58:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:09.235335201Z" level=info msg="NetworkStart: stopping network for sandbox f80f57239c8c32d617a29c58678135f36575a34c5b34e4944d360d6299a42831" id=1e3afdf9-c6b1-4c2f-81c8-80b09dcfb869 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 
18:58:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:09.235457485Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:f80f57239c8c32d617a29c58678135f36575a34c5b34e4944d360d6299a42831 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/5df408db-db0d-45c9-b232-9599c8dee799 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:58:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:09.235484584Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:58:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:09.235492219Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:58:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:09.235500325Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:58:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:13.235736594Z" level=info msg="NetworkStart: stopping network for sandbox 41e9102bea0be51f92f02c1b0862624c11553619e2ee9de31dc06be6a4fdc918" id=88bb3b7e-32e5-46c6-98e1-c77e0b3ceb87 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:58:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:13.235858714Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:41e9102bea0be51f92f02c1b0862624c11553619e2ee9de31dc06be6a4fdc918 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/dbcd8fcc-c34b-4b1e-839a-e65079924282 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:58:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:13.235890689Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:58:13 
ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:13.235903311Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:58:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:13.235913222Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:58:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:58:14.216538 2199 scope.go:115] "RemoveContainer" containerID="7d582d34db04db5ed219f6c9b86fc9d1aee44c82e16953b002f30500c4ca50cb" Feb 23 18:58:14 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:58:14.217542 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:58:14 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:58:14.217734 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:58:14 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:58:14.217898 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" 
containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:58:14 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:58:14.218195 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:58:14 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:58:14.218232 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:58:25 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:58:25.216423 2199 scope.go:115] "RemoveContainer" containerID="7d582d34db04db5ed219f6c9b86fc9d1aee44c82e16953b002f30500c4ca50cb" Feb 23 18:58:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:58:25.216814 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:58:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:58:26.292683 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc 
error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:58:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:58:26.292928 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:58:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:58:26.293154 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:58:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:58:26.293194 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:58:40 ip-10-0-136-68 
kubenswrapper[2199]: I0223 18:58:40.217118 2199 scope.go:115] "RemoveContainer" containerID="7d582d34db04db5ed219f6c9b86fc9d1aee44c82e16953b002f30500c4ca50cb" Feb 23 18:58:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:58:40.217738 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 18:58:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:43.244206309Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(5025e9c09f4baeff5dd9622ae568f491ffcba889d3412333bc09f3391503c74e): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a43c7d45-8424-487f-a859-bd74127fc2ab name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:58:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:43.244287773Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 5025e9c09f4baeff5dd9622ae568f491ffcba889d3412333bc09f3391503c74e" id=a43c7d45-8424-487f-a859-bd74127fc2ab name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:58:43 ip-10-0-136-68 systemd[1]: run-utsns-67fbaa5d\x2d4e3b\x2d426a\x2d9ece\x2d9da219ca6000.mount: Deactivated successfully. Feb 23 18:58:43 ip-10-0-136-68 systemd[1]: run-ipcns-67fbaa5d\x2d4e3b\x2d426a\x2d9ece\x2d9da219ca6000.mount: Deactivated successfully. 
Feb 23 18:58:43 ip-10-0-136-68 systemd[1]: run-netns-67fbaa5d\x2d4e3b\x2d426a\x2d9ece\x2d9da219ca6000.mount: Deactivated successfully. Feb 23 18:58:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:43.268345612Z" level=info msg="runSandbox: deleting pod ID 5025e9c09f4baeff5dd9622ae568f491ffcba889d3412333bc09f3391503c74e from idIndex" id=a43c7d45-8424-487f-a859-bd74127fc2ab name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:58:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:43.268392380Z" level=info msg="runSandbox: removing pod sandbox 5025e9c09f4baeff5dd9622ae568f491ffcba889d3412333bc09f3391503c74e" id=a43c7d45-8424-487f-a859-bd74127fc2ab name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:58:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:43.268426926Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 5025e9c09f4baeff5dd9622ae568f491ffcba889d3412333bc09f3391503c74e" id=a43c7d45-8424-487f-a859-bd74127fc2ab name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:58:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:43.268438739Z" level=info msg="runSandbox: unmounting shmPath for sandbox 5025e9c09f4baeff5dd9622ae568f491ffcba889d3412333bc09f3391503c74e" id=a43c7d45-8424-487f-a859-bd74127fc2ab name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:58:43 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-5025e9c09f4baeff5dd9622ae568f491ffcba889d3412333bc09f3391503c74e-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:58:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:43.275317105Z" level=info msg="runSandbox: removing pod sandbox from storage: 5025e9c09f4baeff5dd9622ae568f491ffcba889d3412333bc09f3391503c74e" id=a43c7d45-8424-487f-a859-bd74127fc2ab name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:58:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:43.276969011Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=a43c7d45-8424-487f-a859-bd74127fc2ab name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:58:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:43.277005123Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=a43c7d45-8424-487f-a859-bd74127fc2ab name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:58:43 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:58:43.277240 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(5025e9c09f4baeff5dd9622ae568f491ffcba889d3412333bc09f3391503c74e): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:58:43 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:58:43.277334 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(5025e9c09f4baeff5dd9622ae568f491ffcba889d3412333bc09f3391503c74e): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:58:43 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:58:43.277375 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(5025e9c09f4baeff5dd9622ae568f491ffcba889d3412333bc09f3391503c74e): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 18:58:43 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:58:43.277452 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(5025e9c09f4baeff5dd9622ae568f491ffcba889d3412333bc09f3391503c74e): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 18:58:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:44.244860586Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(bfdf41d647de43c50cdb064c3bf4dda4d575f45affb29f73d68488b35bef7268): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=11522118-765e-4ff1-9998-9cad926c97a9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:58:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:44.244912346Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox bfdf41d647de43c50cdb064c3bf4dda4d575f45affb29f73d68488b35bef7268" id=11522118-765e-4ff1-9998-9cad926c97a9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:58:44 ip-10-0-136-68 systemd[1]: run-utsns-1aa0f391\x2d3005\x2d4891\x2d994a\x2d5348e4cc65cc.mount: Deactivated successfully. Feb 23 18:58:44 ip-10-0-136-68 systemd[1]: run-ipcns-1aa0f391\x2d3005\x2d4891\x2d994a\x2d5348e4cc65cc.mount: Deactivated successfully. Feb 23 18:58:44 ip-10-0-136-68 systemd[1]: run-netns-1aa0f391\x2d3005\x2d4891\x2d994a\x2d5348e4cc65cc.mount: Deactivated successfully. 
Feb 23 18:58:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:44.272331547Z" level=info msg="runSandbox: deleting pod ID bfdf41d647de43c50cdb064c3bf4dda4d575f45affb29f73d68488b35bef7268 from idIndex" id=11522118-765e-4ff1-9998-9cad926c97a9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:58:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:44.272377443Z" level=info msg="runSandbox: removing pod sandbox bfdf41d647de43c50cdb064c3bf4dda4d575f45affb29f73d68488b35bef7268" id=11522118-765e-4ff1-9998-9cad926c97a9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:58:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:44.272425392Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox bfdf41d647de43c50cdb064c3bf4dda4d575f45affb29f73d68488b35bef7268" id=11522118-765e-4ff1-9998-9cad926c97a9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:58:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:44.272448098Z" level=info msg="runSandbox: unmounting shmPath for sandbox bfdf41d647de43c50cdb064c3bf4dda4d575f45affb29f73d68488b35bef7268" id=11522118-765e-4ff1-9998-9cad926c97a9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:58:44 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-bfdf41d647de43c50cdb064c3bf4dda4d575f45affb29f73d68488b35bef7268-userdata-shm.mount: Deactivated successfully. 
Feb 23 18:58:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:44.280304351Z" level=info msg="runSandbox: removing pod sandbox from storage: bfdf41d647de43c50cdb064c3bf4dda4d575f45affb29f73d68488b35bef7268" id=11522118-765e-4ff1-9998-9cad926c97a9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:58:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:44.281764644Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=11522118-765e-4ff1-9998-9cad926c97a9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:58:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:44.281795971Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=11522118-765e-4ff1-9998-9cad926c97a9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:58:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:58:44.281979 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(bfdf41d647de43c50cdb064c3bf4dda4d575f45affb29f73d68488b35bef7268): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 18:58:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:58:44.282033 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(bfdf41d647de43c50cdb064c3bf4dda4d575f45affb29f73d68488b35bef7268): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:58:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:58:44.282058 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(bfdf41d647de43c50cdb064c3bf4dda4d575f45affb29f73d68488b35bef7268): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 18:58:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:58:44.282117 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(bfdf41d647de43c50cdb064c3bf4dda4d575f45affb29f73d68488b35bef7268): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 18:58:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:47.235210314Z" level=info msg="NetworkStart: stopping network for sandbox 7efe86806477ed9c26ef7a0242be7960ed567627790317dc4dbd05813a08f8c8" id=dba64007-5de8-4fbc-bb68-134b812d72d9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:58:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:47.235354719Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:7efe86806477ed9c26ef7a0242be7960ed567627790317dc4dbd05813a08f8c8 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/60fda874-530c-4e38-a7df-14010757d383 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:58:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:47.235382180Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:58:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:47.235393376Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:58:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:47.235403962Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:58:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:54.245233315Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(f80f57239c8c32d617a29c58678135f36575a34c5b34e4944d360d6299a42831): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: 
[openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=1e3afdf9-c6b1-4c2f-81c8-80b09dcfb869 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:58:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:54.245292785Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f80f57239c8c32d617a29c58678135f36575a34c5b34e4944d360d6299a42831" id=1e3afdf9-c6b1-4c2f-81c8-80b09dcfb869 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:58:54 ip-10-0-136-68 systemd[1]: run-utsns-5df408db\x2ddb0d\x2d45c9\x2db232\x2d9599c8dee799.mount: Deactivated successfully. Feb 23 18:58:54 ip-10-0-136-68 systemd[1]: run-ipcns-5df408db\x2ddb0d\x2d45c9\x2db232\x2d9599c8dee799.mount: Deactivated successfully. Feb 23 18:58:54 ip-10-0-136-68 systemd[1]: run-netns-5df408db\x2ddb0d\x2d45c9\x2db232\x2d9599c8dee799.mount: Deactivated successfully. Feb 23 18:58:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:54.273339951Z" level=info msg="runSandbox: deleting pod ID f80f57239c8c32d617a29c58678135f36575a34c5b34e4944d360d6299a42831 from idIndex" id=1e3afdf9-c6b1-4c2f-81c8-80b09dcfb869 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:58:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:54.273379117Z" level=info msg="runSandbox: removing pod sandbox f80f57239c8c32d617a29c58678135f36575a34c5b34e4944d360d6299a42831" id=1e3afdf9-c6b1-4c2f-81c8-80b09dcfb869 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:58:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:54.273406182Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f80f57239c8c32d617a29c58678135f36575a34c5b34e4944d360d6299a42831" id=1e3afdf9-c6b1-4c2f-81c8-80b09dcfb869 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:58:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:54.273435970Z" level=info msg="runSandbox: unmounting shmPath 
for sandbox f80f57239c8c32d617a29c58678135f36575a34c5b34e4944d360d6299a42831" id=1e3afdf9-c6b1-4c2f-81c8-80b09dcfb869 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:58:54 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-f80f57239c8c32d617a29c58678135f36575a34c5b34e4944d360d6299a42831-userdata-shm.mount: Deactivated successfully. Feb 23 18:58:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:54.280318684Z" level=info msg="runSandbox: removing pod sandbox from storage: f80f57239c8c32d617a29c58678135f36575a34c5b34e4944d360d6299a42831" id=1e3afdf9-c6b1-4c2f-81c8-80b09dcfb869 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:58:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:54.281825067Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=1e3afdf9-c6b1-4c2f-81c8-80b09dcfb869 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:58:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:54.281854135Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=1e3afdf9-c6b1-4c2f-81c8-80b09dcfb869 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:58:54 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:58:54.282064 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(f80f57239c8c32d617a29c58678135f36575a34c5b34e4944d360d6299a42831): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Feb 23 18:58:54 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:58:54.282136 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(f80f57239c8c32d617a29c58678135f36575a34c5b34e4944d360d6299a42831): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:58:54 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:58:54.282176 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(f80f57239c8c32d617a29c58678135f36575a34c5b34e4944d360d6299a42831): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 18:58:54 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:58:54.282284 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(f80f57239c8c32d617a29c58678135f36575a34c5b34e4944d360d6299a42831): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 18:58:55 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:58:55.217210 2199 scope.go:115] "RemoveContainer" containerID="7d582d34db04db5ed219f6c9b86fc9d1aee44c82e16953b002f30500c4ca50cb" Feb 23 18:58:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:55.217864312Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=072bad90-04f3-43d8-8f7a-bc0f51694ab4 name=/runtime.v1.ImageService/ImageStatus Feb 23 18:58:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:55.218060975Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=072bad90-04f3-43d8-8f7a-bc0f51694ab4 name=/runtime.v1.ImageService/ImageStatus Feb 23 18:58:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:55.218695605Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=81567fc7-2351-41a0-8425-adf1a49cf2b0 name=/runtime.v1.ImageService/ImageStatus Feb 23 18:58:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:55.218846397Z" level=info msg="Image status: 
&ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=81567fc7-2351-41a0-8425-adf1a49cf2b0 name=/runtime.v1.ImageService/ImageStatus Feb 23 18:58:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:55.219496201Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=4a152b83-016b-46f3-b1ed-91e469e5a8df name=/runtime.v1.RuntimeService/CreateContainer Feb 23 18:58:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:55.219610072Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 18:58:55 ip-10-0-136-68 systemd[1]: Started crio-conmon-c845fbcdc86ac730860d4c23091b11690e8b6c016667e3a910c72ed483b9f2b1.scope. Feb 23 18:58:55 ip-10-0-136-68 systemd[1]: Started libcontainer container c845fbcdc86ac730860d4c23091b11690e8b6c016667e3a910c72ed483b9f2b1. Feb 23 18:58:55 ip-10-0-136-68 conmon[12581]: conmon c845fbcdc86ac730860d : Failed to write to cgroup.event_control Operation not supported Feb 23 18:58:55 ip-10-0-136-68 systemd[1]: crio-conmon-c845fbcdc86ac730860d4c23091b11690e8b6c016667e3a910c72ed483b9f2b1.scope: Deactivated successfully. 
Feb 23 18:58:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:55.361608083Z" level=info msg="Created container c845fbcdc86ac730860d4c23091b11690e8b6c016667e3a910c72ed483b9f2b1: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=4a152b83-016b-46f3-b1ed-91e469e5a8df name=/runtime.v1.RuntimeService/CreateContainer Feb 23 18:58:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:55.362059193Z" level=info msg="Starting container: c845fbcdc86ac730860d4c23091b11690e8b6c016667e3a910c72ed483b9f2b1" id=e339d8c3-3585-45af-9caa-5f77e5ae9e0f name=/runtime.v1.RuntimeService/StartContainer Feb 23 18:58:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:55.369063125Z" level=info msg="Started container" PID=12593 containerID=c845fbcdc86ac730860d4c23091b11690e8b6c016667e3a910c72ed483b9f2b1 description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver id=e339d8c3-3585-45af-9caa-5f77e5ae9e0f name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4ac357af36e7dd4792f93249425e93fd84aed45840e9fbb3d0ae6c0af964798 Feb 23 18:58:55 ip-10-0-136-68 systemd[1]: crio-c845fbcdc86ac730860d4c23091b11690e8b6c016667e3a910c72ed483b9f2b1.scope: Deactivated successfully. 
Feb 23 18:58:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:58:56.292027 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:58:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:58:56.292467 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:58:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:58:56.292716 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:58:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:58:56.292741 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 18:58:58 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:58:58.217119 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-657v4"
Feb 23 18:58:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:58.217647784Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=45fdc266-b54c-4608-8bd1-e64781792e51 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:58:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:58.217713869Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 18:58:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:58.223977411Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:8c9a76ad9202c65ac36594c9f2ee9c4dfe06e3b1f9ad08b8eec14628ada3257e UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/2bf99eea-744b-4fbb-9165-da695e24d6d7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:58:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:58.224006337Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:58:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:58.245114632Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(41e9102bea0be51f92f02c1b0862624c11553619e2ee9de31dc06be6a4fdc918): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=88bb3b7e-32e5-46c6-98e1-c77e0b3ceb87 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:58:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:58.245169085Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 41e9102bea0be51f92f02c1b0862624c11553619e2ee9de31dc06be6a4fdc918" id=88bb3b7e-32e5-46c6-98e1-c77e0b3ceb87 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:58:58 ip-10-0-136-68 systemd[1]: run-utsns-dbcd8fcc\x2dc34b\x2d4b1e\x2d839a\x2de65079924282.mount: Deactivated successfully.
Feb 23 18:58:58 ip-10-0-136-68 systemd[1]: run-ipcns-dbcd8fcc\x2dc34b\x2d4b1e\x2d839a\x2de65079924282.mount: Deactivated successfully.
Feb 23 18:58:58 ip-10-0-136-68 systemd[1]: run-netns-dbcd8fcc\x2dc34b\x2d4b1e\x2d839a\x2de65079924282.mount: Deactivated successfully.
Feb 23 18:58:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:58.275341735Z" level=info msg="runSandbox: deleting pod ID 41e9102bea0be51f92f02c1b0862624c11553619e2ee9de31dc06be6a4fdc918 from idIndex" id=88bb3b7e-32e5-46c6-98e1-c77e0b3ceb87 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:58:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:58.275383691Z" level=info msg="runSandbox: removing pod sandbox 41e9102bea0be51f92f02c1b0862624c11553619e2ee9de31dc06be6a4fdc918" id=88bb3b7e-32e5-46c6-98e1-c77e0b3ceb87 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:58:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:58.275427716Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 41e9102bea0be51f92f02c1b0862624c11553619e2ee9de31dc06be6a4fdc918" id=88bb3b7e-32e5-46c6-98e1-c77e0b3ceb87 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:58:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:58.275441378Z" level=info msg="runSandbox: unmounting shmPath for sandbox 41e9102bea0be51f92f02c1b0862624c11553619e2ee9de31dc06be6a4fdc918" id=88bb3b7e-32e5-46c6-98e1-c77e0b3ceb87 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:58:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:58.280311569Z" level=info msg="runSandbox: removing pod sandbox from storage: 41e9102bea0be51f92f02c1b0862624c11553619e2ee9de31dc06be6a4fdc918" id=88bb3b7e-32e5-46c6-98e1-c77e0b3ceb87 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:58:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:58.281937256Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=88bb3b7e-32e5-46c6-98e1-c77e0b3ceb87 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:58:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:58.281966012Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=88bb3b7e-32e5-46c6-98e1-c77e0b3ceb87 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:58:58 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:58:58.282191 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(41e9102bea0be51f92f02c1b0862624c11553619e2ee9de31dc06be6a4fdc918): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 18:58:58 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:58:58.282271 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(41e9102bea0be51f92f02c1b0862624c11553619e2ee9de31dc06be6a4fdc918): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 18:58:58 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:58:58.282306 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(41e9102bea0be51f92f02c1b0862624c11553619e2ee9de31dc06be6a4fdc918): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 18:58:58 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:58:58.282371 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(41e9102bea0be51f92f02c1b0862624c11553619e2ee9de31dc06be6a4fdc918): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77
Feb 23 18:58:59 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:58:59.216905 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk"
Feb 23 18:58:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:59.217318709Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=f926f839-836b-4bc0-8fa6-d9edf1167a21 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:58:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:59.217375144Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 18:58:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:59.222691316Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:e1aade43eb0249fce63a30c6a39d96d079cbc538e4ac6f85c0947b55904e0948 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/950d3bca-a2a3-4094-8c84-7646520d055e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:58:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:59.222724588Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:58:59 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-41e9102bea0be51f92f02c1b0862624c11553619e2ee9de31dc06be6a4fdc918-userdata-shm.mount: Deactivated successfully.
Feb 23 18:58:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:58:59.407106498Z" level=warning msg="Failed to find container exit file for 7d582d34db04db5ed219f6c9b86fc9d1aee44c82e16953b002f30500c4ca50cb: timed out waiting for the condition" id=51eb4bdd-81f1-4b44-9465-7b86cd0e34fa name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 18:58:59 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:58:59.407913 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerStarted Data:c845fbcdc86ac730860d4c23091b11690e8b6c016667e3a910c72ed483b9f2b1}
Feb 23 18:59:06 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:59:06.216529 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz"
Feb 23 18:59:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:59:06.216985530Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=7f919216-5f09-4e87-abfd-95ceda3df4f3 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:59:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:59:06.217049381Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 18:59:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:59:06.222842604Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:127e64eafd69a25251aaac3d15dffe95e5f92b2199ecb8dd7c4eb961949b87e6 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/8123ec05-ccd4-4e8b-a729-8c305eef8bf1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:59:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:59:06.222877072Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:59:10 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:59:10.216951 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 18:59:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:59:10.217501179Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=bd400ad8-e016-49fa-aa9a-2bf4ca0b6044 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:59:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:59:10.217566553Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 18:59:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:59:10.223144592Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:a826b1bbafae750f082c5c15f9e9239a16389b1cecf87cd82437e789ac81c36f UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/08974bd1-e598-49a5-ac4b-2342503c4d8d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:59:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:59:10.223173697Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:59:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:59:14.873096 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 18:59:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:59:14.873165 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 18:59:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:59:24.872478 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 18:59:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:59:24.872540 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 18:59:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:59:25.217327 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:59:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:59:25.217635 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:59:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:59:25.217885 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:59:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:59:25.217923 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 18:59:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:59:26.292289 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:59:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:59:26.292468 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:59:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:59:26.292660 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 18:59:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:59:26.292697 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 18:59:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:59:32.244673600Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(7efe86806477ed9c26ef7a0242be7960ed567627790317dc4dbd05813a08f8c8): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=dba64007-5de8-4fbc-bb68-134b812d72d9 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:59:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:59:32.244723178Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 7efe86806477ed9c26ef7a0242be7960ed567627790317dc4dbd05813a08f8c8" id=dba64007-5de8-4fbc-bb68-134b812d72d9 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:59:32 ip-10-0-136-68 systemd[1]: run-utsns-60fda874\x2d530c\x2d4e38\x2da7df\x2d14010757d383.mount: Deactivated successfully.
Feb 23 18:59:32 ip-10-0-136-68 systemd[1]: run-ipcns-60fda874\x2d530c\x2d4e38\x2da7df\x2d14010757d383.mount: Deactivated successfully.
Feb 23 18:59:32 ip-10-0-136-68 systemd[1]: run-netns-60fda874\x2d530c\x2d4e38\x2da7df\x2d14010757d383.mount: Deactivated successfully.
Feb 23 18:59:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:59:32.285367377Z" level=info msg="runSandbox: deleting pod ID 7efe86806477ed9c26ef7a0242be7960ed567627790317dc4dbd05813a08f8c8 from idIndex" id=dba64007-5de8-4fbc-bb68-134b812d72d9 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:59:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:59:32.285417063Z" level=info msg="runSandbox: removing pod sandbox 7efe86806477ed9c26ef7a0242be7960ed567627790317dc4dbd05813a08f8c8" id=dba64007-5de8-4fbc-bb68-134b812d72d9 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:59:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:59:32.285471494Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 7efe86806477ed9c26ef7a0242be7960ed567627790317dc4dbd05813a08f8c8" id=dba64007-5de8-4fbc-bb68-134b812d72d9 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:59:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:59:32.285492319Z" level=info msg="runSandbox: unmounting shmPath for sandbox 7efe86806477ed9c26ef7a0242be7960ed567627790317dc4dbd05813a08f8c8" id=dba64007-5de8-4fbc-bb68-134b812d72d9 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:59:32 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-7efe86806477ed9c26ef7a0242be7960ed567627790317dc4dbd05813a08f8c8-userdata-shm.mount: Deactivated successfully.
Feb 23 18:59:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:59:32.291320841Z" level=info msg="runSandbox: removing pod sandbox from storage: 7efe86806477ed9c26ef7a0242be7960ed567627790317dc4dbd05813a08f8c8" id=dba64007-5de8-4fbc-bb68-134b812d72d9 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:59:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:59:32.292963095Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=dba64007-5de8-4fbc-bb68-134b812d72d9 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:59:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:59:32.292995549Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=dba64007-5de8-4fbc-bb68-134b812d72d9 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:59:32 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:59:32.293509 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(7efe86806477ed9c26ef7a0242be7960ed567627790317dc4dbd05813a08f8c8): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 18:59:32 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:59:32.293564 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(7efe86806477ed9c26ef7a0242be7960ed567627790317dc4dbd05813a08f8c8): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 18:59:32 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:59:32.293602 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(7efe86806477ed9c26ef7a0242be7960ed567627790317dc4dbd05813a08f8c8): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 18:59:32 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:59:32.293663 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(7efe86806477ed9c26ef7a0242be7960ed567627790317dc4dbd05813a08f8c8): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1
Feb 23 18:59:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:59:34.872035 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 18:59:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:59:34.872098 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 18:59:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:59:43.236005037Z" level=info msg="NetworkStart: stopping network for sandbox 8c9a76ad9202c65ac36594c9f2ee9c4dfe06e3b1f9ad08b8eec14628ada3257e" id=45fdc266-b54c-4608-8bd1-e64781792e51 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:59:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:59:43.236128902Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:8c9a76ad9202c65ac36594c9f2ee9c4dfe06e3b1f9ad08b8eec14628ada3257e UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/2bf99eea-744b-4fbb-9165-da695e24d6d7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:59:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:59:43.236178073Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 18:59:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:59:43.236190045Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 18:59:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:59:43.236200345Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:59:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:59:44.234036940Z" level=info msg="NetworkStart: stopping network for sandbox e1aade43eb0249fce63a30c6a39d96d079cbc538e4ac6f85c0947b55904e0948" id=f926f839-836b-4bc0-8fa6-d9edf1167a21 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:59:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:59:44.234149676Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:e1aade43eb0249fce63a30c6a39d96d079cbc538e4ac6f85c0947b55904e0948 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/950d3bca-a2a3-4094-8c84-7646520d055e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:59:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:59:44.234178734Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 18:59:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:59:44.234190345Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 18:59:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:59:44.234201671Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:59:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:59:44.872929 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 18:59:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:59:44.872991 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 18:59:46 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:59:46.217199 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 18:59:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:59:46.217661013Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=93857e16-42db-47b3-bf00-cd4771fa7ab5 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:59:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:59:46.217733965Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 18:59:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:59:46.223409913Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:273e1410d343690899bca7f0e569817667d8b5c0bec01c4a14e1ca1cb2c54c2d UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/e104aecb-24cd-4727-917d-a4f5fd85b87d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:59:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:59:46.223436499Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:59:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:59:51.234681137Z" level=info msg="NetworkStart: stopping network for sandbox 127e64eafd69a25251aaac3d15dffe95e5f92b2199ecb8dd7c4eb961949b87e6" id=7f919216-5f09-4e87-abfd-95ceda3df4f3 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 18:59:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:59:51.234807010Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:127e64eafd69a25251aaac3d15dffe95e5f92b2199ecb8dd7c4eb961949b87e6 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/8123ec05-ccd4-4e8b-a729-8c305eef8bf1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 18:59:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:59:51.234835639Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 18:59:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:59:51.234847410Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 18:59:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:59:51.234856112Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 18:59:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:59:54.872196 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 18:59:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:59:54.872279 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 18:59:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:59:54.872312 2199 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7"
Feb 23 18:59:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:59:54.872900 2199 kuberuntime_manager.go:659]
"Message for Container of pod" containerName="csi-driver" containerStatusID={Type:cri-o ID:c845fbcdc86ac730860d4c23091b11690e8b6c016667e3a910c72ed483b9f2b1} pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" containerMessage="Container csi-driver failed liveness probe, will be restarted" Feb 23 18:59:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 18:59:54.873083 2199 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" containerID="cri-o://c845fbcdc86ac730860d4c23091b11690e8b6c016667e3a910c72ed483b9f2b1" gracePeriod=30 Feb 23 18:59:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:59:54.873353070Z" level=info msg="Stopping container: c845fbcdc86ac730860d4c23091b11690e8b6c016667e3a910c72ed483b9f2b1 (timeout: 30s)" id=12c29c9d-6d68-4738-989a-e1d3854e0cec name=/runtime.v1.RuntimeService/StopContainer Feb 23 18:59:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:59:55.236540931Z" level=info msg="NetworkStart: stopping network for sandbox a826b1bbafae750f082c5c15f9e9239a16389b1cecf87cd82437e789ac81c36f" id=bd400ad8-e016-49fa-aa9a-2bf4ca0b6044 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 18:59:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:59:55.236659847Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:a826b1bbafae750f082c5c15f9e9239a16389b1cecf87cd82437e789ac81c36f UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/08974bd1-e598-49a5-ac4b-2342503c4d8d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 18:59:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:59:55.236691193Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 18:59:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 
18:59:55.236698906Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 18:59:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:59:55.236707063Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 18:59:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:59:56.292448 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:59:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:59:56.292694 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:59:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 18:59:56.292923 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 18:59:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 
18:59:56.292946 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 18:59:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 18:59:58.634084351Z" level=warning msg="Failed to find container exit file for c845fbcdc86ac730860d4c23091b11690e8b6c016667e3a910c72ed483b9f2b1: timed out waiting for the condition" id=12c29c9d-6d68-4738-989a-e1d3854e0cec name=/runtime.v1.RuntimeService/StopContainer Feb 23 18:59:58 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-fdac1359b9640a53bf3b8e79d819460a3d120b246f18419cc7e385139a7c2ed2-merged.mount: Deactivated successfully. Feb 23 19:00:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:02.405113679Z" level=warning msg="Failed to find container exit file for c845fbcdc86ac730860d4c23091b11690e8b6c016667e3a910c72ed483b9f2b1: timed out waiting for the condition" id=12c29c9d-6d68-4738-989a-e1d3854e0cec name=/runtime.v1.RuntimeService/StopContainer Feb 23 19:00:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:02.407391267Z" level=info msg="Stopped container c845fbcdc86ac730860d4c23091b11690e8b6c016667e3a910c72ed483b9f2b1: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=12c29c9d-6d68-4738-989a-e1d3854e0cec name=/runtime.v1.RuntimeService/StopContainer Feb 23 19:00:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:02.408094821Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=275c791c-7263-4def-9df8-cbae0111205b name=/runtime.v1.ImageService/ImageStatus Feb 23 19:00:02 ip-10-0-136-68 
crio[2158]: time="2023-02-23 19:00:02.408292011Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=275c791c-7263-4def-9df8-cbae0111205b name=/runtime.v1.ImageService/ImageStatus Feb 23 19:00:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:02.408845041Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=2c0085c8-221a-40fd-9920-612120530795 name=/runtime.v1.ImageService/ImageStatus Feb 23 19:00:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:02.409016227Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=2c0085c8-221a-40fd-9920-612120530795 name=/runtime.v1.ImageService/ImageStatus Feb 23 19:00:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:02.409681281Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=572166f6-006d-447c-a66a-5e01ce622fbb name=/runtime.v1.RuntimeService/CreateContainer Feb 23 19:00:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:02.409800013Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:00:02 ip-10-0-136-68 systemd[1]: 
Started crio-conmon-ca58b4954068c8d41522c57668c94f1c38badae07372357f30f411ebf47158b2.scope. Feb 23 19:00:02 ip-10-0-136-68 systemd[1]: Started libcontainer container ca58b4954068c8d41522c57668c94f1c38badae07372357f30f411ebf47158b2. Feb 23 19:00:02 ip-10-0-136-68 conmon[12757]: conmon ca58b4954068c8d41522 : Failed to write to cgroup.event_control Operation not supported Feb 23 19:00:02 ip-10-0-136-68 systemd[1]: crio-conmon-ca58b4954068c8d41522c57668c94f1c38badae07372357f30f411ebf47158b2.scope: Deactivated successfully. Feb 23 19:00:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:02.559933052Z" level=info msg="Created container ca58b4954068c8d41522c57668c94f1c38badae07372357f30f411ebf47158b2: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=572166f6-006d-447c-a66a-5e01ce622fbb name=/runtime.v1.RuntimeService/CreateContainer Feb 23 19:00:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:02.560490298Z" level=info msg="Starting container: ca58b4954068c8d41522c57668c94f1c38badae07372357f30f411ebf47158b2" id=a2eff635-bcfa-452a-8722-6c0f4327a361 name=/runtime.v1.RuntimeService/StartContainer Feb 23 19:00:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:02.567500072Z" level=info msg="Started container" PID=12769 containerID=ca58b4954068c8d41522c57668c94f1c38badae07372357f30f411ebf47158b2 description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver id=a2eff635-bcfa-452a-8722-6c0f4327a361 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4ac357af36e7dd4792f93249425e93fd84aed45840e9fbb3d0ae6c0af964798 Feb 23 19:00:02 ip-10-0-136-68 systemd[1]: crio-ca58b4954068c8d41522c57668c94f1c38badae07372357f30f411ebf47158b2.scope: Deactivated successfully. 
Feb 23 19:00:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:03.250443615Z" level=warning msg="Failed to find container exit file for c845fbcdc86ac730860d4c23091b11690e8b6c016667e3a910c72ed483b9f2b1: timed out waiting for the condition" id=16964c8e-15b2-4f76-9d0e-ce8fe50e5ec3 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:00:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:07.000134041Z" level=warning msg="Failed to find container exit file for 7d582d34db04db5ed219f6c9b86fc9d1aee44c82e16953b002f30500c4ca50cb: timed out waiting for the condition" id=abf2f9b4-89f4-4fb6-a2b7-77b566e6b881 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:00:07 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:00:07.001017 2199 generic.go:332] "Generic (PLEG): container finished" podID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerID="c845fbcdc86ac730860d4c23091b11690e8b6c016667e3a910c72ed483b9f2b1" exitCode=-1 Feb 23 19:00:07 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:00:07.001064 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerDied Data:c845fbcdc86ac730860d4c23091b11690e8b6c016667e3a910c72ed483b9f2b1} Feb 23 19:00:07 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:00:07.001096 2199 scope.go:115] "RemoveContainer" containerID="7d582d34db04db5ed219f6c9b86fc9d1aee44c82e16953b002f30500c4ca50cb" Feb 23 19:00:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:10.762193696Z" level=warning msg="Failed to find container exit file for 7d582d34db04db5ed219f6c9b86fc9d1aee44c82e16953b002f30500c4ca50cb: timed out waiting for the condition" id=41ed20e1-3813-4d1f-a433-181b4e0f2fa9 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:00:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:11.742352864Z" level=warning msg="Failed to find container exit file for c845fbcdc86ac730860d4c23091b11690e8b6c016667e3a910c72ed483b9f2b1: timed out 
waiting for the condition" id=976ed0ab-d223-41d3-a605-1e18606dba62 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:00:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:14.511985677Z" level=warning msg="Failed to find container exit file for 7d582d34db04db5ed219f6c9b86fc9d1aee44c82e16953b002f30500c4ca50cb: timed out waiting for the condition" id=d8f41b71-0117-438b-98e1-d628dd5a37e5 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:00:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:14.512594227Z" level=info msg="Removing container: 7d582d34db04db5ed219f6c9b86fc9d1aee44c82e16953b002f30500c4ca50cb" id=de089a09-ba1a-493a-b96c-c92d7704dde9 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 19:00:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:15.492032597Z" level=warning msg="Failed to find container exit file for 7d582d34db04db5ed219f6c9b86fc9d1aee44c82e16953b002f30500c4ca50cb: timed out waiting for the condition" id=9f22f548-b386-40c1-b47a-3c3321e2c09b name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:00:15 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:00:15.493021 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerStarted Data:ca58b4954068c8d41522c57668c94f1c38badae07372357f30f411ebf47158b2} Feb 23 19:00:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:18.260994973Z" level=warning msg="Failed to find container exit file for 7d582d34db04db5ed219f6c9b86fc9d1aee44c82e16953b002f30500c4ca50cb: timed out waiting for the condition" id=de089a09-ba1a-493a-b96c-c92d7704dde9 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 19:00:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:18.285325590Z" level=info msg="Removed container 7d582d34db04db5ed219f6c9b86fc9d1aee44c82e16953b002f30500c4ca50cb: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" 
id=de089a09-ba1a-493a-b96c-c92d7704dde9 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 19:00:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:20.210397448Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3" id=855a904a-96a9-4aa9-9887-6491c6915c91 name=/runtime.v1.ImageService/ImageStatus Feb 23 19:00:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:20.210584702Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:51cb7555d0a960fc0daae74a5b5c47c19fd1ae7bd31e2616ebc88f696f511e4d,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3],Size_:351828271,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=855a904a-96a9-4aa9-9887-6491c6915c91 name=/runtime.v1.ImageService/ImageStatus Feb 23 19:00:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:22.258913355Z" level=warning msg="Failed to find container exit file for c845fbcdc86ac730860d4c23091b11690e8b6c016667e3a910c72ed483b9f2b1: timed out waiting for the condition" id=69b223f2-b9f0-4e0c-9959-ebb06dcc0719 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:00:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:00:24.872928 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:00:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:00:24.873014 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure 
output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:00:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:00:26.291771 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:00:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:00:26.291999 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:00:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:00:26.292290 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:00:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:00:26.292315 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open 
/proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:00:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:28.245505329Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(8c9a76ad9202c65ac36594c9f2ee9c4dfe06e3b1f9ad08b8eec14628ada3257e): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=45fdc266-b54c-4608-8bd1-e64781792e51 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:00:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:28.245552498Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 8c9a76ad9202c65ac36594c9f2ee9c4dfe06e3b1f9ad08b8eec14628ada3257e" id=45fdc266-b54c-4608-8bd1-e64781792e51 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:00:28 ip-10-0-136-68 systemd[1]: run-utsns-2bf99eea\x2d744b\x2d4fbb\x2d9165\x2dda695e24d6d7.mount: Deactivated successfully. Feb 23 19:00:28 ip-10-0-136-68 systemd[1]: run-ipcns-2bf99eea\x2d744b\x2d4fbb\x2d9165\x2dda695e24d6d7.mount: Deactivated successfully. Feb 23 19:00:28 ip-10-0-136-68 systemd[1]: run-netns-2bf99eea\x2d744b\x2d4fbb\x2d9165\x2dda695e24d6d7.mount: Deactivated successfully. 
Feb 23 19:00:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:28.270340954Z" level=info msg="runSandbox: deleting pod ID 8c9a76ad9202c65ac36594c9f2ee9c4dfe06e3b1f9ad08b8eec14628ada3257e from idIndex" id=45fdc266-b54c-4608-8bd1-e64781792e51 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:00:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:28.270373857Z" level=info msg="runSandbox: removing pod sandbox 8c9a76ad9202c65ac36594c9f2ee9c4dfe06e3b1f9ad08b8eec14628ada3257e" id=45fdc266-b54c-4608-8bd1-e64781792e51 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:00:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:28.270396579Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 8c9a76ad9202c65ac36594c9f2ee9c4dfe06e3b1f9ad08b8eec14628ada3257e" id=45fdc266-b54c-4608-8bd1-e64781792e51 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:00:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:28.270414036Z" level=info msg="runSandbox: unmounting shmPath for sandbox 8c9a76ad9202c65ac36594c9f2ee9c4dfe06e3b1f9ad08b8eec14628ada3257e" id=45fdc266-b54c-4608-8bd1-e64781792e51 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:00:28 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-8c9a76ad9202c65ac36594c9f2ee9c4dfe06e3b1f9ad08b8eec14628ada3257e-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:00:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:28.276301999Z" level=info msg="runSandbox: removing pod sandbox from storage: 8c9a76ad9202c65ac36594c9f2ee9c4dfe06e3b1f9ad08b8eec14628ada3257e" id=45fdc266-b54c-4608-8bd1-e64781792e51 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:00:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:28.277875093Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=45fdc266-b54c-4608-8bd1-e64781792e51 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:00:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:28.277911124Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=45fdc266-b54c-4608-8bd1-e64781792e51 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:00:28 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:00:28.278116 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(8c9a76ad9202c65ac36594c9f2ee9c4dfe06e3b1f9ad08b8eec14628ada3257e): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:00:28 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:00:28.278178 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(8c9a76ad9202c65ac36594c9f2ee9c4dfe06e3b1f9ad08b8eec14628ada3257e): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:00:28 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:00:28.278214 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(8c9a76ad9202c65ac36594c9f2ee9c4dfe06e3b1f9ad08b8eec14628ada3257e): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:00:28 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:00:28.278313 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(8c9a76ad9202c65ac36594c9f2ee9c4dfe06e3b1f9ad08b8eec14628ada3257e): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 19:00:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:29.243197710Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(e1aade43eb0249fce63a30c6a39d96d079cbc538e4ac6f85c0947b55904e0948): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f926f839-836b-4bc0-8fa6-d9edf1167a21 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:00:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:29.243303152Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e1aade43eb0249fce63a30c6a39d96d079cbc538e4ac6f85c0947b55904e0948" id=f926f839-836b-4bc0-8fa6-d9edf1167a21 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:00:29 ip-10-0-136-68 systemd[1]: run-utsns-950d3bca\x2da2a3\x2d4094\x2d8c84\x2d7646520d055e.mount: Deactivated successfully. Feb 23 19:00:29 ip-10-0-136-68 systemd[1]: run-ipcns-950d3bca\x2da2a3\x2d4094\x2d8c84\x2d7646520d055e.mount: Deactivated successfully. Feb 23 19:00:29 ip-10-0-136-68 systemd[1]: run-netns-950d3bca\x2da2a3\x2d4094\x2d8c84\x2d7646520d055e.mount: Deactivated successfully. 
Feb 23 19:00:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:29.271331870Z" level=info msg="runSandbox: deleting pod ID e1aade43eb0249fce63a30c6a39d96d079cbc538e4ac6f85c0947b55904e0948 from idIndex" id=f926f839-836b-4bc0-8fa6-d9edf1167a21 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:00:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:29.271367672Z" level=info msg="runSandbox: removing pod sandbox e1aade43eb0249fce63a30c6a39d96d079cbc538e4ac6f85c0947b55904e0948" id=f926f839-836b-4bc0-8fa6-d9edf1167a21 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:00:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:29.271394888Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e1aade43eb0249fce63a30c6a39d96d079cbc538e4ac6f85c0947b55904e0948" id=f926f839-836b-4bc0-8fa6-d9edf1167a21 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:00:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:29.271407073Z" level=info msg="runSandbox: unmounting shmPath for sandbox e1aade43eb0249fce63a30c6a39d96d079cbc538e4ac6f85c0947b55904e0948" id=f926f839-836b-4bc0-8fa6-d9edf1167a21 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:00:29 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-e1aade43eb0249fce63a30c6a39d96d079cbc538e4ac6f85c0947b55904e0948-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:00:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:29.275311642Z" level=info msg="runSandbox: removing pod sandbox from storage: e1aade43eb0249fce63a30c6a39d96d079cbc538e4ac6f85c0947b55904e0948" id=f926f839-836b-4bc0-8fa6-d9edf1167a21 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:00:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:29.276900071Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=f926f839-836b-4bc0-8fa6-d9edf1167a21 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:00:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:29.276928097Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=f926f839-836b-4bc0-8fa6-d9edf1167a21 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:00:29 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:00:29.277084 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(e1aade43eb0249fce63a30c6a39d96d079cbc538e4ac6f85c0947b55904e0948): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:00:29 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:00:29.277137 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(e1aade43eb0249fce63a30c6a39d96d079cbc538e4ac6f85c0947b55904e0948): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:00:29 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:00:29.277159 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(e1aade43eb0249fce63a30c6a39d96d079cbc538e4ac6f85c0947b55904e0948): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:00:29 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:00:29.277213 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(e1aade43eb0249fce63a30c6a39d96d079cbc538e4ac6f85c0947b55904e0948): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 19:00:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:31.237536077Z" level=info msg="NetworkStart: stopping network for sandbox 273e1410d343690899bca7f0e569817667d8b5c0bec01c4a14e1ca1cb2c54c2d" id=93857e16-42db-47b3-bf00-cd4771fa7ab5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:00:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:31.237659692Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:273e1410d343690899bca7f0e569817667d8b5c0bec01c4a14e1ca1cb2c54c2d UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/e104aecb-24cd-4727-917d-a4f5fd85b87d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:00:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:31.237686979Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:00:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:31.237698221Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:00:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:31.237707686Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:00:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:00:34.872767 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:00:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:00:34.872822 2199 prober.go:109] "Probe failed" probeType="Liveness" 
pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:00:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:00:36.216969 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:00:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:00:36.217321 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:00:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:00:36.217565 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:00:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:00:36.217653 2199 prober.go:106] "Probe errored" err="rpc error: code = 
NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:00:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:36.244545602Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(127e64eafd69a25251aaac3d15dffe95e5f92b2199ecb8dd7c4eb961949b87e6): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=7f919216-5f09-4e87-abfd-95ceda3df4f3 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:00:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:36.244588675Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 127e64eafd69a25251aaac3d15dffe95e5f92b2199ecb8dd7c4eb961949b87e6" id=7f919216-5f09-4e87-abfd-95ceda3df4f3 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:00:36 ip-10-0-136-68 systemd[1]: run-utsns-8123ec05\x2dccd4\x2d4e8b\x2da729\x2d8c305eef8bf1.mount: Deactivated successfully. Feb 23 19:00:36 ip-10-0-136-68 systemd[1]: run-ipcns-8123ec05\x2dccd4\x2d4e8b\x2da729\x2d8c305eef8bf1.mount: Deactivated successfully. Feb 23 19:00:36 ip-10-0-136-68 systemd[1]: run-netns-8123ec05\x2dccd4\x2d4e8b\x2da729\x2d8c305eef8bf1.mount: Deactivated successfully. 
Feb 23 19:00:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:36.267326975Z" level=info msg="runSandbox: deleting pod ID 127e64eafd69a25251aaac3d15dffe95e5f92b2199ecb8dd7c4eb961949b87e6 from idIndex" id=7f919216-5f09-4e87-abfd-95ceda3df4f3 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:00:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:36.267358899Z" level=info msg="runSandbox: removing pod sandbox 127e64eafd69a25251aaac3d15dffe95e5f92b2199ecb8dd7c4eb961949b87e6" id=7f919216-5f09-4e87-abfd-95ceda3df4f3 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:00:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:36.267384289Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 127e64eafd69a25251aaac3d15dffe95e5f92b2199ecb8dd7c4eb961949b87e6" id=7f919216-5f09-4e87-abfd-95ceda3df4f3 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:00:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:36.267398192Z" level=info msg="runSandbox: unmounting shmPath for sandbox 127e64eafd69a25251aaac3d15dffe95e5f92b2199ecb8dd7c4eb961949b87e6" id=7f919216-5f09-4e87-abfd-95ceda3df4f3 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:00:36 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-127e64eafd69a25251aaac3d15dffe95e5f92b2199ecb8dd7c4eb961949b87e6-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:00:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:36.274307647Z" level=info msg="runSandbox: removing pod sandbox from storage: 127e64eafd69a25251aaac3d15dffe95e5f92b2199ecb8dd7c4eb961949b87e6" id=7f919216-5f09-4e87-abfd-95ceda3df4f3 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:00:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:36.275844313Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=7f919216-5f09-4e87-abfd-95ceda3df4f3 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:00:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:36.275878686Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=7f919216-5f09-4e87-abfd-95ceda3df4f3 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:00:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:00:36.276079 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(127e64eafd69a25251aaac3d15dffe95e5f92b2199ecb8dd7c4eb961949b87e6): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:00:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:00:36.276131 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(127e64eafd69a25251aaac3d15dffe95e5f92b2199ecb8dd7c4eb961949b87e6): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:00:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:00:36.276161 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(127e64eafd69a25251aaac3d15dffe95e5f92b2199ecb8dd7c4eb961949b87e6): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:00:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:00:36.276220 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(127e64eafd69a25251aaac3d15dffe95e5f92b2199ecb8dd7c4eb961949b87e6): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 19:00:40 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:00:40.217494 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:00:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:40.218191598Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=3a5eea45-6bf1-46a7-b829-4f6769d4084c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:00:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:40.218289509Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:00:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:40.224291958Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:cc7ddc8366607cdc72ebfd79ece05a64264668159151b55934f45cba4434909e UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/178f03ca-0cca-44c8-b3c1-81be09f56b44 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:00:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:40.224330997Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:00:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:40.246468451Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(a826b1bbafae750f082c5c15f9e9239a16389b1cecf87cd82437e789ac81c36f): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=bd400ad8-e016-49fa-aa9a-2bf4ca0b6044 
name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:00:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:40.246506379Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a826b1bbafae750f082c5c15f9e9239a16389b1cecf87cd82437e789ac81c36f" id=bd400ad8-e016-49fa-aa9a-2bf4ca0b6044 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:00:40 ip-10-0-136-68 systemd[1]: run-utsns-08974bd1\x2de598\x2d49a5\x2dac4b\x2d2342503c4d8d.mount: Deactivated successfully. Feb 23 19:00:40 ip-10-0-136-68 systemd[1]: run-ipcns-08974bd1\x2de598\x2d49a5\x2dac4b\x2d2342503c4d8d.mount: Deactivated successfully. Feb 23 19:00:40 ip-10-0-136-68 systemd[1]: run-netns-08974bd1\x2de598\x2d49a5\x2dac4b\x2d2342503c4d8d.mount: Deactivated successfully. Feb 23 19:00:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:40.274324200Z" level=info msg="runSandbox: deleting pod ID a826b1bbafae750f082c5c15f9e9239a16389b1cecf87cd82437e789ac81c36f from idIndex" id=bd400ad8-e016-49fa-aa9a-2bf4ca0b6044 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:00:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:40.274355058Z" level=info msg="runSandbox: removing pod sandbox a826b1bbafae750f082c5c15f9e9239a16389b1cecf87cd82437e789ac81c36f" id=bd400ad8-e016-49fa-aa9a-2bf4ca0b6044 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:00:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:40.274384653Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a826b1bbafae750f082c5c15f9e9239a16389b1cecf87cd82437e789ac81c36f" id=bd400ad8-e016-49fa-aa9a-2bf4ca0b6044 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:00:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:40.274402319Z" level=info msg="runSandbox: unmounting shmPath for sandbox a826b1bbafae750f082c5c15f9e9239a16389b1cecf87cd82437e789ac81c36f" id=bd400ad8-e016-49fa-aa9a-2bf4ca0b6044 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:00:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 
19:00:40.280300851Z" level=info msg="runSandbox: removing pod sandbox from storage: a826b1bbafae750f082c5c15f9e9239a16389b1cecf87cd82437e789ac81c36f" id=bd400ad8-e016-49fa-aa9a-2bf4ca0b6044 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:00:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:40.281899994Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=bd400ad8-e016-49fa-aa9a-2bf4ca0b6044 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:00:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:40.281930233Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=bd400ad8-e016-49fa-aa9a-2bf4ca0b6044 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:00:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:00:40.282149 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(a826b1bbafae750f082c5c15f9e9239a16389b1cecf87cd82437e789ac81c36f): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:00:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:00:40.282370 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(a826b1bbafae750f082c5c15f9e9239a16389b1cecf87cd82437e789ac81c36f): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:00:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:00:40.282407 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(a826b1bbafae750f082c5c15f9e9239a16389b1cecf87cd82437e789ac81c36f): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:00:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:00:40.282497 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(a826b1bbafae750f082c5c15f9e9239a16389b1cecf87cd82437e789ac81c36f): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 19:00:41 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-a826b1bbafae750f082c5c15f9e9239a16389b1cecf87cd82437e789ac81c36f-userdata-shm.mount: Deactivated successfully. Feb 23 19:00:43 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:00:43.217419 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 19:00:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:43.217836185Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=ffc5c2ea-9647-407d-b884-3307a493af5d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:00:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:43.217909662Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:00:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:43.223290737Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:b4db80d2912add64c172b8a17cfd2ea373bd5add7617360627c2f1f6a2adc084 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/183c9896-e5cc-4eec-aeef-99d55abceb6e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:00:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:43.223317292Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:00:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:00:44.872127 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:00:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:00:44.872194 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:00:48 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:00:48.217418 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:00:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:48.217874150Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=a552bf11-d04c-463d-af5a-1cca014b7b63 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:00:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:48.217938245Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:00:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:48.223642869Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:823a39f8472f8cf8296aad8d773ac7efabef54a003e73215964183a1b32a816d UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/40928b19-8aed-4534-8e61-73604e1cf561 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:00:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:48.223669067Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:00:53 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:00:53.217227 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:00:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:53.217676531Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=2a725d24-df87-4895-a413-260b313e669c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:00:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:53.217731436Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:00:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:53.223233742Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:b05f8d4f48363235e412ceb98991480af6dfbb357c0652dc9e311516a603a96d UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/4e9f31db-678a-438f-ac50-1c47fb5f1f5f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:00:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:00:53.223293968Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:00:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:00:54.872498 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:00:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:00:54.872557 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: 
connection refused" Feb 23 19:00:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:00:56.291991 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:00:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:00:56.292278 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:00:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:00:56.292528 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:00:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:00:56.292570 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" 
probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:01:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:01:04.872434 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:01:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:01:04.872493 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:01:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:01:04.872521 2199 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 19:01:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:01:04.873025 2199 kuberuntime_manager.go:659] "Message for Container of pod" containerName="csi-driver" containerStatusID={Type:cri-o ID:ca58b4954068c8d41522c57668c94f1c38badae07372357f30f411ebf47158b2} pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" containerMessage="Container csi-driver failed liveness probe, will be restarted" Feb 23 19:01:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:01:04.873189 2199 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" containerID="cri-o://ca58b4954068c8d41522c57668c94f1c38badae07372357f30f411ebf47158b2" gracePeriod=30 Feb 23 19:01:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:01:04.873454698Z" 
level=info msg="Stopping container: ca58b4954068c8d41522c57668c94f1c38badae07372357f30f411ebf47158b2 (timeout: 30s)" id=bc5b57e0-e69a-4b67-a999-b19ae16ed26e name=/runtime.v1.RuntimeService/StopContainer Feb 23 19:01:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:01:08.635000604Z" level=warning msg="Failed to find container exit file for ca58b4954068c8d41522c57668c94f1c38badae07372357f30f411ebf47158b2: timed out waiting for the condition" id=bc5b57e0-e69a-4b67-a999-b19ae16ed26e name=/runtime.v1.RuntimeService/StopContainer Feb 23 19:01:08 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-99173810e0e6302750ecbddeceb86cadbb93db43ad4d013b338bdf3563fb797d-merged.mount: Deactivated successfully. Feb 23 19:01:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:01:12.421060486Z" level=warning msg="Failed to find container exit file for ca58b4954068c8d41522c57668c94f1c38badae07372357f30f411ebf47158b2: timed out waiting for the condition" id=bc5b57e0-e69a-4b67-a999-b19ae16ed26e name=/runtime.v1.RuntimeService/StopContainer Feb 23 19:01:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:01:12.422702996Z" level=info msg="Stopped container ca58b4954068c8d41522c57668c94f1c38badae07372357f30f411ebf47158b2: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=bc5b57e0-e69a-4b67-a999-b19ae16ed26e name=/runtime.v1.RuntimeService/StopContainer Feb 23 19:01:12 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:01:12.423306 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:01:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:01:13.082008730Z" level=warning msg="Failed to find container exit file for 
ca58b4954068c8d41522c57668c94f1c38badae07372357f30f411ebf47158b2: timed out waiting for the condition" id=e3239c8e-a86f-4539-af5d-1e0ec3fb6850 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:01:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:01:16.247997374Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(273e1410d343690899bca7f0e569817667d8b5c0bec01c4a14e1ca1cb2c54c2d): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=93857e16-42db-47b3-bf00-cd4771fa7ab5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:01:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:01:16.248046575Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 273e1410d343690899bca7f0e569817667d8b5c0bec01c4a14e1ca1cb2c54c2d" id=93857e16-42db-47b3-bf00-cd4771fa7ab5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:01:16 ip-10-0-136-68 systemd[1]: run-utsns-e104aecb\x2d24cd\x2d4727\x2d917d\x2da4f5fd85b87d.mount: Deactivated successfully. Feb 23 19:01:16 ip-10-0-136-68 systemd[1]: run-ipcns-e104aecb\x2d24cd\x2d4727\x2d917d\x2da4f5fd85b87d.mount: Deactivated successfully. Feb 23 19:01:16 ip-10-0-136-68 systemd[1]: run-netns-e104aecb\x2d24cd\x2d4727\x2d917d\x2da4f5fd85b87d.mount: Deactivated successfully. 
Feb 23 19:01:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:01:16.272328100Z" level=info msg="runSandbox: deleting pod ID 273e1410d343690899bca7f0e569817667d8b5c0bec01c4a14e1ca1cb2c54c2d from idIndex" id=93857e16-42db-47b3-bf00-cd4771fa7ab5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:01:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:01:16.272381217Z" level=info msg="runSandbox: removing pod sandbox 273e1410d343690899bca7f0e569817667d8b5c0bec01c4a14e1ca1cb2c54c2d" id=93857e16-42db-47b3-bf00-cd4771fa7ab5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:01:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:01:16.272406958Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 273e1410d343690899bca7f0e569817667d8b5c0bec01c4a14e1ca1cb2c54c2d" id=93857e16-42db-47b3-bf00-cd4771fa7ab5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:01:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:01:16.272423499Z" level=info msg="runSandbox: unmounting shmPath for sandbox 273e1410d343690899bca7f0e569817667d8b5c0bec01c4a14e1ca1cb2c54c2d" id=93857e16-42db-47b3-bf00-cd4771fa7ab5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:01:16 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-273e1410d343690899bca7f0e569817667d8b5c0bec01c4a14e1ca1cb2c54c2d-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:01:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:01:16.278308031Z" level=info msg="runSandbox: removing pod sandbox from storage: 273e1410d343690899bca7f0e569817667d8b5c0bec01c4a14e1ca1cb2c54c2d" id=93857e16-42db-47b3-bf00-cd4771fa7ab5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:01:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:01:16.279946151Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=93857e16-42db-47b3-bf00-cd4771fa7ab5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:01:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:01:16.279982035Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=93857e16-42db-47b3-bf00-cd4771fa7ab5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:01:16 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:01:16.280190 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(273e1410d343690899bca7f0e569817667d8b5c0bec01c4a14e1ca1cb2c54c2d): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:01:16 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:01:16.280335 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(273e1410d343690899bca7f0e569817667d8b5c0bec01c4a14e1ca1cb2c54c2d): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:01:16 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:01:16.280377 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(273e1410d343690899bca7f0e569817667d8b5c0bec01c4a14e1ca1cb2c54c2d): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:01:16 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:01:16.280469 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(273e1410d343690899bca7f0e569817667d8b5c0bec01c4a14e1ca1cb2c54c2d): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 19:01:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:01:16.842982578Z" level=warning msg="Failed to find container exit file for c845fbcdc86ac730860d4c23091b11690e8b6c016667e3a910c72ed483b9f2b1: timed out waiting for the condition" id=f134f634-0be4-4b93-ba62-b608f68c09f2 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:01:16 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:01:16.843955 2199 generic.go:332] "Generic (PLEG): container finished" podID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerID="ca58b4954068c8d41522c57668c94f1c38badae07372357f30f411ebf47158b2" exitCode=-1 Feb 23 19:01:16 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:01:16.843995 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerDied Data:ca58b4954068c8d41522c57668c94f1c38badae07372357f30f411ebf47158b2} Feb 23 19:01:16 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:01:16.844030 2199 scope.go:115] "RemoveContainer" containerID="c845fbcdc86ac730860d4c23091b11690e8b6c016667e3a910c72ed483b9f2b1" Feb 23 19:01:17 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:01:17.846157 2199 scope.go:115] "RemoveContainer" containerID="ca58b4954068c8d41522c57668c94f1c38badae07372357f30f411ebf47158b2" Feb 23 19:01:17 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:01:17.846582 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:01:20 ip-10-0-136-68 crio[2158]: 
time="2023-02-23 19:01:20.602946949Z" level=warning msg="Failed to find container exit file for c845fbcdc86ac730860d4c23091b11690e8b6c016667e3a910c72ed483b9f2b1: timed out waiting for the condition" id=2d2338de-6da3-4ad1-92f9-5ac1754a0b5a name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:01:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:01:24.351999258Z" level=warning msg="Failed to find container exit file for c845fbcdc86ac730860d4c23091b11690e8b6c016667e3a910c72ed483b9f2b1: timed out waiting for the condition" id=7bdd6268-50ac-473d-a8b5-14b73243e02c name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:01:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:01:24.352530966Z" level=info msg="Removing container: c845fbcdc86ac730860d4c23091b11690e8b6c016667e3a910c72ed483b9f2b1" id=cbb2e8f2-2989-45bc-99f2-2f82524d3e9f name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 19:01:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:01:25.236274161Z" level=info msg="NetworkStart: stopping network for sandbox cc7ddc8366607cdc72ebfd79ece05a64264668159151b55934f45cba4434909e" id=3a5eea45-6bf1-46a7-b829-4f6769d4084c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:01:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:01:25.236415126Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:cc7ddc8366607cdc72ebfd79ece05a64264668159151b55934f45cba4434909e UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/178f03ca-0cca-44c8-b3c1-81be09f56b44 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:01:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:01:25.236451611Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:01:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:01:25.236462982Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:01:25 
ip-10-0-136-68 crio[2158]: time="2023-02-23 19:01:25.236475099Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:01:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:01:26.292011 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:01:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:01:26.292303 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:01:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:01:26.292542 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:01:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:01:26.292583 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:01:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:01:28.101005933Z" level=warning msg="Failed to find container exit file for c845fbcdc86ac730860d4c23091b11690e8b6c016667e3a910c72ed483b9f2b1: timed out waiting for the condition" id=cbb2e8f2-2989-45bc-99f2-2f82524d3e9f name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 19:01:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:01:28.126830642Z" level=info msg="Removed container c845fbcdc86ac730860d4c23091b11690e8b6c016667e3a910c72ed483b9f2b1: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=cbb2e8f2-2989-45bc-99f2-2f82524d3e9f name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 19:01:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:01:28.236731157Z" level=info msg="NetworkStart: stopping network for sandbox b4db80d2912add64c172b8a17cfd2ea373bd5add7617360627c2f1f6a2adc084" id=ffc5c2ea-9647-407d-b884-3307a493af5d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:01:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:01:28.236841001Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:b4db80d2912add64c172b8a17cfd2ea373bd5add7617360627c2f1f6a2adc084 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/183c9896-e5cc-4eec-aeef-99d55abceb6e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:01:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:01:28.236869730Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:01:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:01:28.236878067Z" level=warning 
msg="falling back to loading from existing plugins on disk" Feb 23 19:01:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:01:28.236885206Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:01:31 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:01:31.216412 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:01:31 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:01:31.216504 2199 scope.go:115] "RemoveContainer" containerID="ca58b4954068c8d41522c57668c94f1c38badae07372357f30f411ebf47158b2" Feb 23 19:01:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:01:31.216846297Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=00b41602-b444-48d1-8575-13b1325fe583 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:01:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:01:31.216914309Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:01:31 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:01:31.216997 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:01:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:01:31.222022943Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:7f1903509aa89839f53acea7c5898ac5b4785b6563930157c6044b5c2ee97d36 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/4d811255-8012-4f62-9d97-c4aa4cc2810c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: 
PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:01:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:01:31.222048129Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:01:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:01:32.613928102Z" level=warning msg="Failed to find container exit file for ca58b4954068c8d41522c57668c94f1c38badae07372357f30f411ebf47158b2: timed out waiting for the condition" id=5f445ec4-799a-44a2-bf77-bc4dfb0d0160 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:01:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:01:33.235746140Z" level=info msg="NetworkStart: stopping network for sandbox 823a39f8472f8cf8296aad8d773ac7efabef54a003e73215964183a1b32a816d" id=a552bf11-d04c-463d-af5a-1cca014b7b63 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:01:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:01:33.235880375Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:823a39f8472f8cf8296aad8d773ac7efabef54a003e73215964183a1b32a816d UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/40928b19-8aed-4534-8e61-73604e1cf561 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:01:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:01:33.235919727Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:01:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:01:33.235930449Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:01:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:01:33.235939666Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:01:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:01:38.235209077Z" 
level=info msg="NetworkStart: stopping network for sandbox b05f8d4f48363235e412ceb98991480af6dfbb357c0652dc9e311516a603a96d" id=2a725d24-df87-4895-a413-260b313e669c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:01:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:01:38.235344820Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:b05f8d4f48363235e412ceb98991480af6dfbb357c0652dc9e311516a603a96d UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/4e9f31db-678a-438f-ac50-1c47fb5f1f5f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:01:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:01:38.235372996Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:01:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:01:38.235379820Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:01:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:01:38.235386052Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:01:39 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:01:39.217514 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:01:39 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:01:39.217867 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is 
not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:01:39 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:01:39.218113 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:01:39 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:01:39.218166 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:01:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:01:44.217066 2199 scope.go:115] "RemoveContainer" containerID="ca58b4954068c8d41522c57668c94f1c38badae07372357f30f411ebf47158b2" Feb 23 19:01:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:01:44.217711 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" 
pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:01:55 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:01:55.216478 2199 scope.go:115] "RemoveContainer" containerID="ca58b4954068c8d41522c57668c94f1c38badae07372357f30f411ebf47158b2" Feb 23 19:01:55 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:01:55.216838 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:01:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:01:56.292147 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:01:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:01:56.292490 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:01:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:01:56.292714 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc 
error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:01:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:01:56.292750 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:02:08 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:02:08.217377 2199 scope.go:115] "RemoveContainer" containerID="ca58b4954068c8d41522c57668c94f1c38badae07372357f30f411ebf47158b2" Feb 23 19:02:08 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:02:08.218001 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:02:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:10.245667912Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(cc7ddc8366607cdc72ebfd79ece05a64264668159151b55934f45cba4434909e): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin 
type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=3a5eea45-6bf1-46a7-b829-4f6769d4084c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:02:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:10.245714159Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox cc7ddc8366607cdc72ebfd79ece05a64264668159151b55934f45cba4434909e" id=3a5eea45-6bf1-46a7-b829-4f6769d4084c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:02:10 ip-10-0-136-68 systemd[1]: run-utsns-178f03ca\x2d0cca\x2d44c8\x2db3c1\x2d81be09f56b44.mount: Deactivated successfully. Feb 23 19:02:10 ip-10-0-136-68 systemd[1]: run-ipcns-178f03ca\x2d0cca\x2d44c8\x2db3c1\x2d81be09f56b44.mount: Deactivated successfully. Feb 23 19:02:10 ip-10-0-136-68 systemd[1]: run-netns-178f03ca\x2d0cca\x2d44c8\x2db3c1\x2d81be09f56b44.mount: Deactivated successfully. 
Feb 23 19:02:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:10.266321267Z" level=info msg="runSandbox: deleting pod ID cc7ddc8366607cdc72ebfd79ece05a64264668159151b55934f45cba4434909e from idIndex" id=3a5eea45-6bf1-46a7-b829-4f6769d4084c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:02:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:10.266361733Z" level=info msg="runSandbox: removing pod sandbox cc7ddc8366607cdc72ebfd79ece05a64264668159151b55934f45cba4434909e" id=3a5eea45-6bf1-46a7-b829-4f6769d4084c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:02:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:10.266399078Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox cc7ddc8366607cdc72ebfd79ece05a64264668159151b55934f45cba4434909e" id=3a5eea45-6bf1-46a7-b829-4f6769d4084c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:02:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:10.266420485Z" level=info msg="runSandbox: unmounting shmPath for sandbox cc7ddc8366607cdc72ebfd79ece05a64264668159151b55934f45cba4434909e" id=3a5eea45-6bf1-46a7-b829-4f6769d4084c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:02:10 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-cc7ddc8366607cdc72ebfd79ece05a64264668159151b55934f45cba4434909e-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:02:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:10.273313286Z" level=info msg="runSandbox: removing pod sandbox from storage: cc7ddc8366607cdc72ebfd79ece05a64264668159151b55934f45cba4434909e" id=3a5eea45-6bf1-46a7-b829-4f6769d4084c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:02:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:10.274908317Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=3a5eea45-6bf1-46a7-b829-4f6769d4084c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:02:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:10.274936284Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=3a5eea45-6bf1-46a7-b829-4f6769d4084c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:02:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:02:10.275113 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(cc7ddc8366607cdc72ebfd79ece05a64264668159151b55934f45cba4434909e): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:02:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:02:10.275165 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(cc7ddc8366607cdc72ebfd79ece05a64264668159151b55934f45cba4434909e): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:02:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:02:10.275188 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(cc7ddc8366607cdc72ebfd79ece05a64264668159151b55934f45cba4434909e): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:02:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:02:10.275266 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(cc7ddc8366607cdc72ebfd79ece05a64264668159151b55934f45cba4434909e): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 19:02:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:13.247036463Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(b4db80d2912add64c172b8a17cfd2ea373bd5add7617360627c2f1f6a2adc084): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=ffc5c2ea-9647-407d-b884-3307a493af5d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:02:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:13.247087664Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox b4db80d2912add64c172b8a17cfd2ea373bd5add7617360627c2f1f6a2adc084" id=ffc5c2ea-9647-407d-b884-3307a493af5d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:02:13 ip-10-0-136-68 systemd[1]: run-utsns-183c9896\x2de5cc\x2d4eec\x2daeef\x2d99d55abceb6e.mount: Deactivated successfully. Feb 23 19:02:13 ip-10-0-136-68 systemd[1]: run-ipcns-183c9896\x2de5cc\x2d4eec\x2daeef\x2d99d55abceb6e.mount: Deactivated successfully. Feb 23 19:02:13 ip-10-0-136-68 systemd[1]: run-netns-183c9896\x2de5cc\x2d4eec\x2daeef\x2d99d55abceb6e.mount: Deactivated successfully. 
Feb 23 19:02:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:13.271342530Z" level=info msg="runSandbox: deleting pod ID b4db80d2912add64c172b8a17cfd2ea373bd5add7617360627c2f1f6a2adc084 from idIndex" id=ffc5c2ea-9647-407d-b884-3307a493af5d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:02:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:13.271388915Z" level=info msg="runSandbox: removing pod sandbox b4db80d2912add64c172b8a17cfd2ea373bd5add7617360627c2f1f6a2adc084" id=ffc5c2ea-9647-407d-b884-3307a493af5d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:02:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:13.271441832Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox b4db80d2912add64c172b8a17cfd2ea373bd5add7617360627c2f1f6a2adc084" id=ffc5c2ea-9647-407d-b884-3307a493af5d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:02:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:13.271461475Z" level=info msg="runSandbox: unmounting shmPath for sandbox b4db80d2912add64c172b8a17cfd2ea373bd5add7617360627c2f1f6a2adc084" id=ffc5c2ea-9647-407d-b884-3307a493af5d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:02:13 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-b4db80d2912add64c172b8a17cfd2ea373bd5add7617360627c2f1f6a2adc084-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:02:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:13.278326935Z" level=info msg="runSandbox: removing pod sandbox from storage: b4db80d2912add64c172b8a17cfd2ea373bd5add7617360627c2f1f6a2adc084" id=ffc5c2ea-9647-407d-b884-3307a493af5d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:02:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:13.279885297Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=ffc5c2ea-9647-407d-b884-3307a493af5d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:02:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:13.279913612Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=ffc5c2ea-9647-407d-b884-3307a493af5d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:02:13 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:02:13.280112 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(b4db80d2912add64c172b8a17cfd2ea373bd5add7617360627c2f1f6a2adc084): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:02:13 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:02:13.280169 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(b4db80d2912add64c172b8a17cfd2ea373bd5add7617360627c2f1f6a2adc084): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:02:13 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:02:13.280195 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(b4db80d2912add64c172b8a17cfd2ea373bd5add7617360627c2f1f6a2adc084): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:02:13 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:02:13.280272 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(b4db80d2912add64c172b8a17cfd2ea373bd5add7617360627c2f1f6a2adc084): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 19:02:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:16.233407886Z" level=info msg="NetworkStart: stopping network for sandbox 7f1903509aa89839f53acea7c5898ac5b4785b6563930157c6044b5c2ee97d36" id=00b41602-b444-48d1-8575-13b1325fe583 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:02:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:16.233515111Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:7f1903509aa89839f53acea7c5898ac5b4785b6563930157c6044b5c2ee97d36 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/4d811255-8012-4f62-9d97-c4aa4cc2810c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:02:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:16.233550595Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:02:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:16.233558740Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:02:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:16.233565100Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:02:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:18.245930140Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(823a39f8472f8cf8296aad8d773ac7efabef54a003e73215964183a1b32a816d): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: 
[openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a552bf11-d04c-463d-af5a-1cca014b7b63 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:02:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:18.245980063Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 823a39f8472f8cf8296aad8d773ac7efabef54a003e73215964183a1b32a816d" id=a552bf11-d04c-463d-af5a-1cca014b7b63 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:02:18 ip-10-0-136-68 systemd[1]: run-utsns-40928b19\x2d8aed\x2d4534\x2d8e61\x2d73604e1cf561.mount: Deactivated successfully. Feb 23 19:02:18 ip-10-0-136-68 systemd[1]: run-ipcns-40928b19\x2d8aed\x2d4534\x2d8e61\x2d73604e1cf561.mount: Deactivated successfully. Feb 23 19:02:18 ip-10-0-136-68 systemd[1]: run-netns-40928b19\x2d8aed\x2d4534\x2d8e61\x2d73604e1cf561.mount: Deactivated successfully. Feb 23 19:02:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:18.273336340Z" level=info msg="runSandbox: deleting pod ID 823a39f8472f8cf8296aad8d773ac7efabef54a003e73215964183a1b32a816d from idIndex" id=a552bf11-d04c-463d-af5a-1cca014b7b63 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:02:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:18.273373997Z" level=info msg="runSandbox: removing pod sandbox 823a39f8472f8cf8296aad8d773ac7efabef54a003e73215964183a1b32a816d" id=a552bf11-d04c-463d-af5a-1cca014b7b63 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:02:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:18.273421909Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 823a39f8472f8cf8296aad8d773ac7efabef54a003e73215964183a1b32a816d" id=a552bf11-d04c-463d-af5a-1cca014b7b63 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:02:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:18.273445597Z" level=info msg="runSandbox: unmounting shmPath 
for sandbox 823a39f8472f8cf8296aad8d773ac7efabef54a003e73215964183a1b32a816d" id=a552bf11-d04c-463d-af5a-1cca014b7b63 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:02:18 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-823a39f8472f8cf8296aad8d773ac7efabef54a003e73215964183a1b32a816d-userdata-shm.mount: Deactivated successfully. Feb 23 19:02:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:18.279305864Z" level=info msg="runSandbox: removing pod sandbox from storage: 823a39f8472f8cf8296aad8d773ac7efabef54a003e73215964183a1b32a816d" id=a552bf11-d04c-463d-af5a-1cca014b7b63 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:02:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:18.280797951Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=a552bf11-d04c-463d-af5a-1cca014b7b63 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:02:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:18.280825668Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=a552bf11-d04c-463d-af5a-1cca014b7b63 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:02:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:02:18.281025 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(823a39f8472f8cf8296aad8d773ac7efabef54a003e73215964183a1b32a816d): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Feb 23 19:02:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:02:18.281091 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(823a39f8472f8cf8296aad8d773ac7efabef54a003e73215964183a1b32a816d): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:02:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:02:18.281132 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(823a39f8472f8cf8296aad8d773ac7efabef54a003e73215964183a1b32a816d): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:02:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:02:18.281212 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(823a39f8472f8cf8296aad8d773ac7efabef54a003e73215964183a1b32a816d): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 19:02:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:02:21.217362 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:02:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:21.217741256Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=333d355b-2931-407d-8515-50d90889ce45 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:02:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:21.217804520Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:02:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:21.223341547Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:a7d9d939e3899659e7d88097283056bdde4988f6ccbad465681589030bdad53c UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/737d6edb-cefc-48ef-919c-c174c78e87dc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:02:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:21.223377095Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:02:22 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:02:22.216848 2199 scope.go:115] "RemoveContainer" containerID="ca58b4954068c8d41522c57668c94f1c38badae07372357f30f411ebf47158b2" Feb 23 19:02:22 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:02:22.217331 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:02:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:23.245853511Z" level=error msg="Error stopping network on cleanup: failed to destroy network 
for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(b05f8d4f48363235e412ceb98991480af6dfbb357c0652dc9e311516a603a96d): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=2a725d24-df87-4895-a413-260b313e669c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:02:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:23.245912283Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox b05f8d4f48363235e412ceb98991480af6dfbb357c0652dc9e311516a603a96d" id=2a725d24-df87-4895-a413-260b313e669c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:02:23 ip-10-0-136-68 systemd[1]: run-utsns-4e9f31db\x2d678a\x2d438f\x2dac50\x2d1c47fb5f1f5f.mount: Deactivated successfully. Feb 23 19:02:23 ip-10-0-136-68 systemd[1]: run-ipcns-4e9f31db\x2d678a\x2d438f\x2dac50\x2d1c47fb5f1f5f.mount: Deactivated successfully. Feb 23 19:02:23 ip-10-0-136-68 systemd[1]: run-netns-4e9f31db\x2d678a\x2d438f\x2dac50\x2d1c47fb5f1f5f.mount: Deactivated successfully. 
Feb 23 19:02:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:23.276348611Z" level=info msg="runSandbox: deleting pod ID b05f8d4f48363235e412ceb98991480af6dfbb357c0652dc9e311516a603a96d from idIndex" id=2a725d24-df87-4895-a413-260b313e669c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:02:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:23.276395946Z" level=info msg="runSandbox: removing pod sandbox b05f8d4f48363235e412ceb98991480af6dfbb357c0652dc9e311516a603a96d" id=2a725d24-df87-4895-a413-260b313e669c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:02:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:23.276448180Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox b05f8d4f48363235e412ceb98991480af6dfbb357c0652dc9e311516a603a96d" id=2a725d24-df87-4895-a413-260b313e669c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:02:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:23.276473479Z" level=info msg="runSandbox: unmounting shmPath for sandbox b05f8d4f48363235e412ceb98991480af6dfbb357c0652dc9e311516a603a96d" id=2a725d24-df87-4895-a413-260b313e669c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:02:23 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-b05f8d4f48363235e412ceb98991480af6dfbb357c0652dc9e311516a603a96d-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:02:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:23.283307457Z" level=info msg="runSandbox: removing pod sandbox from storage: b05f8d4f48363235e412ceb98991480af6dfbb357c0652dc9e311516a603a96d" id=2a725d24-df87-4895-a413-260b313e669c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:02:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:23.284874921Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=2a725d24-df87-4895-a413-260b313e669c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:02:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:23.284903730Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=2a725d24-df87-4895-a413-260b313e669c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:02:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:02:23.285135 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(b05f8d4f48363235e412ceb98991480af6dfbb357c0652dc9e311516a603a96d): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:02:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:02:23.285199 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(b05f8d4f48363235e412ceb98991480af6dfbb357c0652dc9e311516a603a96d): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:02:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:02:23.285227 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(b05f8d4f48363235e412ceb98991480af6dfbb357c0652dc9e311516a603a96d): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:02:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:02:23.285326 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(b05f8d4f48363235e412ceb98991480af6dfbb357c0652dc9e311516a603a96d): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 19:02:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:02:26.292026 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:02:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:02:26.292335 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:02:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:02:26.292563 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:02:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:02:26.292586 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:02:27 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:02:27.217065 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 19:02:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:27.217505415Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=e6320b42-1937-43ce-babc-76642c5f1d1c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:02:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:27.217576042Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:02:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:27.223200200Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:fb3fcedacac4e546104cc6b74d4d595d9b6e59e56e6dae1f396c04a68cdaf17f UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/c7b93518-2f30-4f42-a080-5a635a57143c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:02:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:27.223234835Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:02:29 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:02:29.216994 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:02:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:29.217438116Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=d8d7e8eb-9b93-444b-a290-87cd81ec46dd name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:02:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:29.217494399Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:02:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:29.222920394Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:7b6e40189cb678395f647a231269d38577aa9bc3599a1b47f9ea4e2bce0759f9 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/71f670c4-77ba-45f1-8e07-cb10d8563b41 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:02:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:29.222949042Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:02:33 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:02:33.217180 2199 scope.go:115] "RemoveContainer" containerID="ca58b4954068c8d41522c57668c94f1c38badae07372357f30f411ebf47158b2" Feb 23 19:02:33 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:02:33.217779 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:02:35 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:02:35.216630 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:02:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:35.217050364Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=5dddcbac-bfc3-4bd8-b877-6e4e22276918 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:02:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:35.217115736Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:02:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:35.222488080Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:8f8723aaa204c973862dbe7f6b6195cabb5870618cd52535a3448ec754bbd9f6 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/e1e79451-f6bc-4184-94c2-ecd484fab29c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:02:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:02:35.222513088Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:02:48 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:02:48.216636 2199 scope.go:115] "RemoveContainer" containerID="ca58b4954068c8d41522c57668c94f1c38badae07372357f30f411ebf47158b2" Feb 23 19:02:48 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:02:48.217229 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:02:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 
19:02:56.292527 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:02:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:02:56.292763 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:02:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:02:56.292959 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:02:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:02:56.292981 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" 
podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:02:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:02:59.217315 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:02:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:02:59.217620 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:02:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:02:59.218269 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:02:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:02:59.218430 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or 
directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:03:01 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:03:01.217105 2199 scope.go:115] "RemoveContainer" containerID="ca58b4954068c8d41522c57668c94f1c38badae07372357f30f411ebf47158b2" Feb 23 19:03:01 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:03:01.217714 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:03:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:01.243435870Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(7f1903509aa89839f53acea7c5898ac5b4785b6563930157c6044b5c2ee97d36): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=00b41602-b444-48d1-8575-13b1325fe583 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:03:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:01.243489723Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 7f1903509aa89839f53acea7c5898ac5b4785b6563930157c6044b5c2ee97d36" id=00b41602-b444-48d1-8575-13b1325fe583 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:03:01 
ip-10-0-136-68 systemd[1]: run-utsns-4d811255\x2d8012\x2d4f62\x2d9d97\x2dc4aa4cc2810c.mount: Deactivated successfully. Feb 23 19:03:01 ip-10-0-136-68 systemd[1]: run-ipcns-4d811255\x2d8012\x2d4f62\x2d9d97\x2dc4aa4cc2810c.mount: Deactivated successfully. Feb 23 19:03:01 ip-10-0-136-68 systemd[1]: run-netns-4d811255\x2d8012\x2d4f62\x2d9d97\x2dc4aa4cc2810c.mount: Deactivated successfully. Feb 23 19:03:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:01.273344970Z" level=info msg="runSandbox: deleting pod ID 7f1903509aa89839f53acea7c5898ac5b4785b6563930157c6044b5c2ee97d36 from idIndex" id=00b41602-b444-48d1-8575-13b1325fe583 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:03:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:01.273392840Z" level=info msg="runSandbox: removing pod sandbox 7f1903509aa89839f53acea7c5898ac5b4785b6563930157c6044b5c2ee97d36" id=00b41602-b444-48d1-8575-13b1325fe583 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:03:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:01.273437665Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 7f1903509aa89839f53acea7c5898ac5b4785b6563930157c6044b5c2ee97d36" id=00b41602-b444-48d1-8575-13b1325fe583 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:03:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:01.273451803Z" level=info msg="runSandbox: unmounting shmPath for sandbox 7f1903509aa89839f53acea7c5898ac5b4785b6563930157c6044b5c2ee97d36" id=00b41602-b444-48d1-8575-13b1325fe583 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:03:01 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-7f1903509aa89839f53acea7c5898ac5b4785b6563930157c6044b5c2ee97d36-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:03:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:01.282304410Z" level=info msg="runSandbox: removing pod sandbox from storage: 7f1903509aa89839f53acea7c5898ac5b4785b6563930157c6044b5c2ee97d36" id=00b41602-b444-48d1-8575-13b1325fe583 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:03:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:01.284181506Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=00b41602-b444-48d1-8575-13b1325fe583 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:03:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:01.284212641Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=00b41602-b444-48d1-8575-13b1325fe583 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:03:01 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:03:01.284449 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(7f1903509aa89839f53acea7c5898ac5b4785b6563930157c6044b5c2ee97d36): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:03:01 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:03:01.284507 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(7f1903509aa89839f53acea7c5898ac5b4785b6563930157c6044b5c2ee97d36): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:03:01 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:03:01.284534 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(7f1903509aa89839f53acea7c5898ac5b4785b6563930157c6044b5c2ee97d36): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:03:01 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:03:01.284604 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(7f1903509aa89839f53acea7c5898ac5b4785b6563930157c6044b5c2ee97d36): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 19:03:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:06.235342923Z" level=info msg="NetworkStart: stopping network for sandbox a7d9d939e3899659e7d88097283056bdde4988f6ccbad465681589030bdad53c" id=333d355b-2931-407d-8515-50d90889ce45 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:03:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:06.235452348Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:a7d9d939e3899659e7d88097283056bdde4988f6ccbad465681589030bdad53c UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/737d6edb-cefc-48ef-919c-c174c78e87dc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:03:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:06.235481721Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:03:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:06.235490656Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:03:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:06.235497023Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:03:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:12.235561644Z" level=info msg="NetworkStart: stopping network for sandbox fb3fcedacac4e546104cc6b74d4d595d9b6e59e56e6dae1f396c04a68cdaf17f" id=e6320b42-1937-43ce-babc-76642c5f1d1c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:03:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:12.235702619Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:fb3fcedacac4e546104cc6b74d4d595d9b6e59e56e6dae1f396c04a68cdaf17f 
UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/c7b93518-2f30-4f42-a080-5a635a57143c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:03:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:12.235742754Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:03:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:12.235754102Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:03:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:12.235763637Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:03:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:14.234549578Z" level=info msg="NetworkStart: stopping network for sandbox 7b6e40189cb678395f647a231269d38577aa9bc3599a1b47f9ea4e2bce0759f9" id=d8d7e8eb-9b93-444b-a290-87cd81ec46dd name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:03:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:14.234654902Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:7b6e40189cb678395f647a231269d38577aa9bc3599a1b47f9ea4e2bce0759f9 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/71f670c4-77ba-45f1-8e07-cb10d8563b41 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:03:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:14.234681144Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:03:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:14.234688004Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:03:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:14.234694544Z" level=info msg="Deleting pod 
openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:03:16 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:03:16.217016 2199 scope.go:115] "RemoveContainer" containerID="ca58b4954068c8d41522c57668c94f1c38badae07372357f30f411ebf47158b2" Feb 23 19:03:16 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:03:16.217155 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:03:16 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:03:16.217638 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:03:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:16.217723702Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=6bd33d3a-64e7-4558-99ad-9217ffb5ac7d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:03:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:16.217808471Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:03:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:16.223524267Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:5330aae39f3c43b4c47e51f9cb55b230284ac0047ee4bab3b56118f55bc4135c UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/98029232-e0d8-48bd-8154-9e6f36d151c6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:03:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:16.223549398Z" level=info msg="Adding 
pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:03:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:20.235740625Z" level=info msg="NetworkStart: stopping network for sandbox 8f8723aaa204c973862dbe7f6b6195cabb5870618cd52535a3448ec754bbd9f6" id=5dddcbac-bfc3-4bd8-b877-6e4e22276918 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:03:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:20.235861760Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:8f8723aaa204c973862dbe7f6b6195cabb5870618cd52535a3448ec754bbd9f6 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/e1e79451-f6bc-4184-94c2-ecd484fab29c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:03:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:20.235901002Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:03:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:20.235911923Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:03:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:20.235922436Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:03:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:03:26.292382 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] 
Feb 23 19:03:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:03:26.292670 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:03:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:03:26.292896 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:03:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:03:26.292919 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:03:29 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:03:29.217324 2199 scope.go:115] "RemoveContainer" containerID="ca58b4954068c8d41522c57668c94f1c38badae07372357f30f411ebf47158b2" Feb 23 19:03:29 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:03:29.217873 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:03:40 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:03:40.217712 2199 scope.go:115] "RemoveContainer" containerID="ca58b4954068c8d41522c57668c94f1c38badae07372357f30f411ebf47158b2" Feb 23 19:03:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:03:40.218274 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:03:51 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:03:51.217076 2199 scope.go:115] "RemoveContainer" containerID="ca58b4954068c8d41522c57668c94f1c38badae07372357f30f411ebf47158b2" Feb 23 19:03:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:03:51.217677 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:03:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:51.245145449Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(a7d9d939e3899659e7d88097283056bdde4988f6ccbad465681589030bdad53c): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network 
\"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=333d355b-2931-407d-8515-50d90889ce45 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:03:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:51.245193035Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a7d9d939e3899659e7d88097283056bdde4988f6ccbad465681589030bdad53c" id=333d355b-2931-407d-8515-50d90889ce45 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:03:51 ip-10-0-136-68 systemd[1]: run-utsns-737d6edb\x2dcefc\x2d48ef\x2d919c\x2dc174c78e87dc.mount: Deactivated successfully. Feb 23 19:03:51 ip-10-0-136-68 systemd[1]: run-ipcns-737d6edb\x2dcefc\x2d48ef\x2d919c\x2dc174c78e87dc.mount: Deactivated successfully. Feb 23 19:03:51 ip-10-0-136-68 systemd[1]: run-netns-737d6edb\x2dcefc\x2d48ef\x2d919c\x2dc174c78e87dc.mount: Deactivated successfully. 
Feb 23 19:03:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:51.264330485Z" level=info msg="runSandbox: deleting pod ID a7d9d939e3899659e7d88097283056bdde4988f6ccbad465681589030bdad53c from idIndex" id=333d355b-2931-407d-8515-50d90889ce45 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:03:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:51.264369105Z" level=info msg="runSandbox: removing pod sandbox a7d9d939e3899659e7d88097283056bdde4988f6ccbad465681589030bdad53c" id=333d355b-2931-407d-8515-50d90889ce45 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:03:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:51.264398081Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a7d9d939e3899659e7d88097283056bdde4988f6ccbad465681589030bdad53c" id=333d355b-2931-407d-8515-50d90889ce45 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:03:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:51.264412657Z" level=info msg="runSandbox: unmounting shmPath for sandbox a7d9d939e3899659e7d88097283056bdde4988f6ccbad465681589030bdad53c" id=333d355b-2931-407d-8515-50d90889ce45 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:03:51 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-a7d9d939e3899659e7d88097283056bdde4988f6ccbad465681589030bdad53c-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:03:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:51.268305635Z" level=info msg="runSandbox: removing pod sandbox from storage: a7d9d939e3899659e7d88097283056bdde4988f6ccbad465681589030bdad53c" id=333d355b-2931-407d-8515-50d90889ce45 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:03:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:51.269800730Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=333d355b-2931-407d-8515-50d90889ce45 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:03:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:51.269830705Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=333d355b-2931-407d-8515-50d90889ce45 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:03:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:03:51.270033 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(a7d9d939e3899659e7d88097283056bdde4988f6ccbad465681589030bdad53c): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:03:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:03:51.270091 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(a7d9d939e3899659e7d88097283056bdde4988f6ccbad465681589030bdad53c): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:03:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:03:51.270119 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(a7d9d939e3899659e7d88097283056bdde4988f6ccbad465681589030bdad53c): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:03:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:03:51.270174 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(a7d9d939e3899659e7d88097283056bdde4988f6ccbad465681589030bdad53c): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 19:03:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:03:56.292433 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:03:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:03:56.292712 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:03:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:03:56.292926 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:03:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:03:56.292956 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:03:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:57.244693946Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(fb3fcedacac4e546104cc6b74d4d595d9b6e59e56e6dae1f396c04a68cdaf17f): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e6320b42-1937-43ce-babc-76642c5f1d1c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:03:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:57.244748207Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox fb3fcedacac4e546104cc6b74d4d595d9b6e59e56e6dae1f396c04a68cdaf17f" id=e6320b42-1937-43ce-babc-76642c5f1d1c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:03:57 ip-10-0-136-68 systemd[1]: run-utsns-c7b93518\x2d2f30\x2d4f42\x2da080\x2d5a635a57143c.mount: Deactivated successfully. Feb 23 19:03:57 ip-10-0-136-68 systemd[1]: run-ipcns-c7b93518\x2d2f30\x2d4f42\x2da080\x2d5a635a57143c.mount: Deactivated successfully. Feb 23 19:03:57 ip-10-0-136-68 systemd[1]: run-netns-c7b93518\x2d2f30\x2d4f42\x2da080\x2d5a635a57143c.mount: Deactivated successfully. 
Feb 23 19:03:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:57.280338482Z" level=info msg="runSandbox: deleting pod ID fb3fcedacac4e546104cc6b74d4d595d9b6e59e56e6dae1f396c04a68cdaf17f from idIndex" id=e6320b42-1937-43ce-babc-76642c5f1d1c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:03:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:57.280383489Z" level=info msg="runSandbox: removing pod sandbox fb3fcedacac4e546104cc6b74d4d595d9b6e59e56e6dae1f396c04a68cdaf17f" id=e6320b42-1937-43ce-babc-76642c5f1d1c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:03:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:57.280429436Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox fb3fcedacac4e546104cc6b74d4d595d9b6e59e56e6dae1f396c04a68cdaf17f" id=e6320b42-1937-43ce-babc-76642c5f1d1c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:03:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:57.280442801Z" level=info msg="runSandbox: unmounting shmPath for sandbox fb3fcedacac4e546104cc6b74d4d595d9b6e59e56e6dae1f396c04a68cdaf17f" id=e6320b42-1937-43ce-babc-76642c5f1d1c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:03:57 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-fb3fcedacac4e546104cc6b74d4d595d9b6e59e56e6dae1f396c04a68cdaf17f-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:03:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:57.288312707Z" level=info msg="runSandbox: removing pod sandbox from storage: fb3fcedacac4e546104cc6b74d4d595d9b6e59e56e6dae1f396c04a68cdaf17f" id=e6320b42-1937-43ce-babc-76642c5f1d1c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:03:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:57.289854879Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=e6320b42-1937-43ce-babc-76642c5f1d1c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:03:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:57.289890596Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=e6320b42-1937-43ce-babc-76642c5f1d1c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:03:57 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:03:57.290122 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(fb3fcedacac4e546104cc6b74d4d595d9b6e59e56e6dae1f396c04a68cdaf17f): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:03:57 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:03:57.290191 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(fb3fcedacac4e546104cc6b74d4d595d9b6e59e56e6dae1f396c04a68cdaf17f): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:03:57 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:03:57.290238 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(fb3fcedacac4e546104cc6b74d4d595d9b6e59e56e6dae1f396c04a68cdaf17f): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:03:57 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:03:57.290333 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(fb3fcedacac4e546104cc6b74d4d595d9b6e59e56e6dae1f396c04a68cdaf17f): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 19:03:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:59.243731382Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(7b6e40189cb678395f647a231269d38577aa9bc3599a1b47f9ea4e2bce0759f9): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d8d7e8eb-9b93-444b-a290-87cd81ec46dd name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:03:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:59.243778464Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 7b6e40189cb678395f647a231269d38577aa9bc3599a1b47f9ea4e2bce0759f9" id=d8d7e8eb-9b93-444b-a290-87cd81ec46dd name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:03:59 ip-10-0-136-68 systemd[1]: run-utsns-71f670c4\x2d77ba\x2d45f1\x2d8e07\x2dcb10d8563b41.mount: Deactivated successfully. Feb 23 19:03:59 ip-10-0-136-68 systemd[1]: run-ipcns-71f670c4\x2d77ba\x2d45f1\x2d8e07\x2dcb10d8563b41.mount: Deactivated successfully. Feb 23 19:03:59 ip-10-0-136-68 systemd[1]: run-netns-71f670c4\x2d77ba\x2d45f1\x2d8e07\x2dcb10d8563b41.mount: Deactivated successfully. 
Feb 23 19:03:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:59.269316640Z" level=info msg="runSandbox: deleting pod ID 7b6e40189cb678395f647a231269d38577aa9bc3599a1b47f9ea4e2bce0759f9 from idIndex" id=d8d7e8eb-9b93-444b-a290-87cd81ec46dd name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:03:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:59.269347326Z" level=info msg="runSandbox: removing pod sandbox 7b6e40189cb678395f647a231269d38577aa9bc3599a1b47f9ea4e2bce0759f9" id=d8d7e8eb-9b93-444b-a290-87cd81ec46dd name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:03:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:59.269380775Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 7b6e40189cb678395f647a231269d38577aa9bc3599a1b47f9ea4e2bce0759f9" id=d8d7e8eb-9b93-444b-a290-87cd81ec46dd name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:03:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:59.269409300Z" level=info msg="runSandbox: unmounting shmPath for sandbox 7b6e40189cb678395f647a231269d38577aa9bc3599a1b47f9ea4e2bce0759f9" id=d8d7e8eb-9b93-444b-a290-87cd81ec46dd name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:03:59 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-7b6e40189cb678395f647a231269d38577aa9bc3599a1b47f9ea4e2bce0759f9-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:03:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:59.273312264Z" level=info msg="runSandbox: removing pod sandbox from storage: 7b6e40189cb678395f647a231269d38577aa9bc3599a1b47f9ea4e2bce0759f9" id=d8d7e8eb-9b93-444b-a290-87cd81ec46dd name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:03:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:59.274804895Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=d8d7e8eb-9b93-444b-a290-87cd81ec46dd name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:03:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:03:59.274833492Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=d8d7e8eb-9b93-444b-a290-87cd81ec46dd name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:03:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:03:59.275023 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(7b6e40189cb678395f647a231269d38577aa9bc3599a1b47f9ea4e2bce0759f9): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:03:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:03:59.275078 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(7b6e40189cb678395f647a231269d38577aa9bc3599a1b47f9ea4e2bce0759f9): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:03:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:03:59.275099 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(7b6e40189cb678395f647a231269d38577aa9bc3599a1b47f9ea4e2bce0759f9): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:03:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:03:59.275164 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(7b6e40189cb678395f647a231269d38577aa9bc3599a1b47f9ea4e2bce0759f9): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 19:04:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:01.235160912Z" level=info msg="NetworkStart: stopping network for sandbox 5330aae39f3c43b4c47e51f9cb55b230284ac0047ee4bab3b56118f55bc4135c" id=6bd33d3a-64e7-4558-99ad-9217ffb5ac7d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:04:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:01.235304707Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:5330aae39f3c43b4c47e51f9cb55b230284ac0047ee4bab3b56118f55bc4135c UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/98029232-e0d8-48bd-8154-9e6f36d151c6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:04:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:01.235337337Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:04:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:01.235345077Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:04:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:01.235353287Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:04:03 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:04:03.216675 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:04:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:03.217101446Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=84253131-d3b5-4e2c-a630-48d7039be4a5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:04:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:03.217159789Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:04:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:03.222385254Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:1b77ee52562fc936238b1c1e4c4435eedd234a1bf52b1e36ba1b8d52724b860e UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/414d1e19-cb75-427e-a95b-851675eddc97 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:04:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:03.222410714Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:04:05 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:04:05.217281 2199 scope.go:115] "RemoveContainer" containerID="ca58b4954068c8d41522c57668c94f1c38badae07372357f30f411ebf47158b2" Feb 23 19:04:05 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:04:05.217697 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:04:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:05.245373345Z" level=error msg="Error stopping network on cleanup: failed to destroy network 
for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(8f8723aaa204c973862dbe7f6b6195cabb5870618cd52535a3448ec754bbd9f6): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=5dddcbac-bfc3-4bd8-b877-6e4e22276918 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:04:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:05.245422594Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 8f8723aaa204c973862dbe7f6b6195cabb5870618cd52535a3448ec754bbd9f6" id=5dddcbac-bfc3-4bd8-b877-6e4e22276918 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:04:05 ip-10-0-136-68 systemd[1]: run-utsns-e1e79451\x2df6bc\x2d4184\x2d94c2\x2decd484fab29c.mount: Deactivated successfully. Feb 23 19:04:05 ip-10-0-136-68 systemd[1]: run-ipcns-e1e79451\x2df6bc\x2d4184\x2d94c2\x2decd484fab29c.mount: Deactivated successfully. Feb 23 19:04:05 ip-10-0-136-68 systemd[1]: run-netns-e1e79451\x2df6bc\x2d4184\x2d94c2\x2decd484fab29c.mount: Deactivated successfully. 
Feb 23 19:04:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:05.267329999Z" level=info msg="runSandbox: deleting pod ID 8f8723aaa204c973862dbe7f6b6195cabb5870618cd52535a3448ec754bbd9f6 from idIndex" id=5dddcbac-bfc3-4bd8-b877-6e4e22276918 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:04:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:05.267368051Z" level=info msg="runSandbox: removing pod sandbox 8f8723aaa204c973862dbe7f6b6195cabb5870618cd52535a3448ec754bbd9f6" id=5dddcbac-bfc3-4bd8-b877-6e4e22276918 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:04:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:05.267397720Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 8f8723aaa204c973862dbe7f6b6195cabb5870618cd52535a3448ec754bbd9f6" id=5dddcbac-bfc3-4bd8-b877-6e4e22276918 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:04:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:05.267415421Z" level=info msg="runSandbox: unmounting shmPath for sandbox 8f8723aaa204c973862dbe7f6b6195cabb5870618cd52535a3448ec754bbd9f6" id=5dddcbac-bfc3-4bd8-b877-6e4e22276918 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:04:05 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-8f8723aaa204c973862dbe7f6b6195cabb5870618cd52535a3448ec754bbd9f6-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:04:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:05.274319460Z" level=info msg="runSandbox: removing pod sandbox from storage: 8f8723aaa204c973862dbe7f6b6195cabb5870618cd52535a3448ec754bbd9f6" id=5dddcbac-bfc3-4bd8-b877-6e4e22276918 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:04:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:05.275935688Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=5dddcbac-bfc3-4bd8-b877-6e4e22276918 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:04:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:05.275968196Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=5dddcbac-bfc3-4bd8-b877-6e4e22276918 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:04:05 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:04:05.276201 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(8f8723aaa204c973862dbe7f6b6195cabb5870618cd52535a3448ec754bbd9f6): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:04:05 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:04:05.276287 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(8f8723aaa204c973862dbe7f6b6195cabb5870618cd52535a3448ec754bbd9f6): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:04:05 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:04:05.276329 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(8f8723aaa204c973862dbe7f6b6195cabb5870618cd52535a3448ec754bbd9f6): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:04:05 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:04:05.276390 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(8f8723aaa204c973862dbe7f6b6195cabb5870618cd52535a3448ec754bbd9f6): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 19:04:12 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:04:12.217235 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-657v4"
Feb 23 19:04:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:12.217680889Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=c842dacc-c474-4ca3-ac49-45f9e3fab1a7 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:04:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:12.217747423Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 19:04:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:12.223775116Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:98ccc1cc39866c2510c271ed0e41121fc9db5dbc465f9a0f7556df895a32c61f UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/38265218-1ead-4af9-9819-6afa0f623205 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:04:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:12.223800564Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:04:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:04:14.217203 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz"
Feb 23 19:04:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:14.217635411Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=4b6238a7-9463-4220-a457-4b96617fa781 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:04:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:14.217702276Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 19:04:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:14.223635535Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:f7258fc2da8e43d3574f20aeea0ef470f954eb1031cc760eb1faf159846c910b UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/0a1f76d5-7fed-4afd-9199-a7643b90317d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:04:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:14.223691879Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:04:17 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:04:17.217304 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 19:04:17 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:04:17.217412 2199 scope.go:115] "RemoveContainer" containerID="ca58b4954068c8d41522c57668c94f1c38badae07372357f30f411ebf47158b2"
Feb 23 19:04:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:17.217786919Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=268381cf-a25d-49d5-b7a4-d1671f809ffb name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:04:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:17.217878146Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 19:04:17 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:04:17.217912 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 19:04:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:17.223387765Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:f475986c6ca448303d0593d9e0f9996d84d67a1fb2cbe7760d042e342895ce0d UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/5fd49aa8-ff9b-44ca-a748-0842d90d50d2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:04:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:17.223426803Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:04:21 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:04:21.217200 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:04:21 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:04:21.217520 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:04:21 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:04:21.217802 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:04:21 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:04:21.217837 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 19:04:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:04:26.292680 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:04:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:04:26.292984 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:04:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:04:26.293215 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:04:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:04:26.293275 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 19:04:29 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:04:29.216808 2199 scope.go:115] "RemoveContainer" containerID="ca58b4954068c8d41522c57668c94f1c38badae07372357f30f411ebf47158b2"
Feb 23 19:04:29 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:04:29.217203 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 19:04:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:04:44.217005 2199 scope.go:115] "RemoveContainer" containerID="ca58b4954068c8d41522c57668c94f1c38badae07372357f30f411ebf47158b2"
Feb 23 19:04:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:04:44.217462 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 19:04:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:46.245199880Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(5330aae39f3c43b4c47e51f9cb55b230284ac0047ee4bab3b56118f55bc4135c): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=6bd33d3a-64e7-4558-99ad-9217ffb5ac7d name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:04:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:46.245266550Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 5330aae39f3c43b4c47e51f9cb55b230284ac0047ee4bab3b56118f55bc4135c" id=6bd33d3a-64e7-4558-99ad-9217ffb5ac7d name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:04:46 ip-10-0-136-68 systemd[1]: run-utsns-98029232\x2de0d8\x2d48bd\x2d8154\x2d9e6f36d151c6.mount: Deactivated successfully.
Feb 23 19:04:46 ip-10-0-136-68 systemd[1]: run-ipcns-98029232\x2de0d8\x2d48bd\x2d8154\x2d9e6f36d151c6.mount: Deactivated successfully.
Feb 23 19:04:46 ip-10-0-136-68 systemd[1]: run-netns-98029232\x2de0d8\x2d48bd\x2d8154\x2d9e6f36d151c6.mount: Deactivated successfully.
Feb 23 19:04:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:46.279339200Z" level=info msg="runSandbox: deleting pod ID 5330aae39f3c43b4c47e51f9cb55b230284ac0047ee4bab3b56118f55bc4135c from idIndex" id=6bd33d3a-64e7-4558-99ad-9217ffb5ac7d name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:04:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:46.279376287Z" level=info msg="runSandbox: removing pod sandbox 5330aae39f3c43b4c47e51f9cb55b230284ac0047ee4bab3b56118f55bc4135c" id=6bd33d3a-64e7-4558-99ad-9217ffb5ac7d name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:04:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:46.279408910Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 5330aae39f3c43b4c47e51f9cb55b230284ac0047ee4bab3b56118f55bc4135c" id=6bd33d3a-64e7-4558-99ad-9217ffb5ac7d name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:04:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:46.279429518Z" level=info msg="runSandbox: unmounting shmPath for sandbox 5330aae39f3c43b4c47e51f9cb55b230284ac0047ee4bab3b56118f55bc4135c" id=6bd33d3a-64e7-4558-99ad-9217ffb5ac7d name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:04:46 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-5330aae39f3c43b4c47e51f9cb55b230284ac0047ee4bab3b56118f55bc4135c-userdata-shm.mount: Deactivated successfully.
Feb 23 19:04:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:46.287311200Z" level=info msg="runSandbox: removing pod sandbox from storage: 5330aae39f3c43b4c47e51f9cb55b230284ac0047ee4bab3b56118f55bc4135c" id=6bd33d3a-64e7-4558-99ad-9217ffb5ac7d name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:04:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:46.288863276Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=6bd33d3a-64e7-4558-99ad-9217ffb5ac7d name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:04:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:46.288894395Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=6bd33d3a-64e7-4558-99ad-9217ffb5ac7d name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:04:46 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:04:46.289080 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(5330aae39f3c43b4c47e51f9cb55b230284ac0047ee4bab3b56118f55bc4135c): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 19:04:46 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:04:46.289131 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(5330aae39f3c43b4c47e51f9cb55b230284ac0047ee4bab3b56118f55bc4135c): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 19:04:46 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:04:46.289172 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(5330aae39f3c43b4c47e51f9cb55b230284ac0047ee4bab3b56118f55bc4135c): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 19:04:46 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:04:46.289227 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(5330aae39f3c43b4c47e51f9cb55b230284ac0047ee4bab3b56118f55bc4135c): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1
Feb 23 19:04:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:48.235752613Z" level=info msg="NetworkStart: stopping network for sandbox 1b77ee52562fc936238b1c1e4c4435eedd234a1bf52b1e36ba1b8d52724b860e" id=84253131-d3b5-4e2c-a630-48d7039be4a5 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:04:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:48.235868558Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:1b77ee52562fc936238b1c1e4c4435eedd234a1bf52b1e36ba1b8d52724b860e UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/414d1e19-cb75-427e-a95b-851675eddc97 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:04:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:48.235905853Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 19:04:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:48.235917323Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 19:04:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:48.235930070Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:04:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:04:56.291765 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:04:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:04:56.292062 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:04:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:04:56.292315 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:04:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:04:56.292347 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 19:04:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:57.236909451Z" level=info msg="NetworkStart: stopping network for sandbox 98ccc1cc39866c2510c271ed0e41121fc9db5dbc465f9a0f7556df895a32c61f" id=c842dacc-c474-4ca3-ac49-45f9e3fab1a7 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:04:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:57.237055527Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:98ccc1cc39866c2510c271ed0e41121fc9db5dbc465f9a0f7556df895a32c61f UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/38265218-1ead-4af9-9819-6afa0f623205 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:04:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:57.237096453Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 19:04:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:57.237107879Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 19:04:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:57.237118818Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:04:58 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:04:58.216662 2199 scope.go:115] "RemoveContainer" containerID="ca58b4954068c8d41522c57668c94f1c38badae07372357f30f411ebf47158b2"
Feb 23 19:04:58 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:04:58.216729 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 19:04:58 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:04:58.217233 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 19:04:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:58.217218545Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=1d7b4221-513c-425f-a404-b89b616e6837 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:04:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:58.217489703Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 19:04:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:58.223399205Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:e7979e44fda33a60f103cab422bcee28a663e2a2c573a04e13c08005510196f8 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/7422ada4-bea7-4a5e-9423-e34b59466a2f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:04:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:58.223422624Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:04:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:59.235210041Z" level=info msg="NetworkStart: stopping network for sandbox f7258fc2da8e43d3574f20aeea0ef470f954eb1031cc760eb1faf159846c910b" id=4b6238a7-9463-4220-a457-4b96617fa781 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:04:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:59.235380228Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:f7258fc2da8e43d3574f20aeea0ef470f954eb1031cc760eb1faf159846c910b UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/0a1f76d5-7fed-4afd-9199-a7643b90317d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:04:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:59.235448208Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 19:04:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:59.235462779Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 19:04:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:04:59.235472631Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:05:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:02.236980783Z" level=info msg="NetworkStart: stopping network for sandbox f475986c6ca448303d0593d9e0f9996d84d67a1fb2cbe7760d042e342895ce0d" id=268381cf-a25d-49d5-b7a4-d1671f809ffb name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:05:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:02.237112867Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:f475986c6ca448303d0593d9e0f9996d84d67a1fb2cbe7760d042e342895ce0d UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/5fd49aa8-ff9b-44ca-a748-0842d90d50d2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:05:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:02.237151783Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 19:05:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:02.237165590Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 19:05:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:02.237174974Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:05:09 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:05:09.217353 2199 scope.go:115] "RemoveContainer" containerID="ca58b4954068c8d41522c57668c94f1c38badae07372357f30f411ebf47158b2"
Feb 23 19:05:09 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:05:09.217927 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 19:05:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:20.213382042Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3" id=4577ebaf-9605-4490-8cfa-fe1dff0c70be name=/runtime.v1.ImageService/ImageStatus
Feb 23 19:05:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:20.213589605Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:51cb7555d0a960fc0daae74a5b5c47c19fd1ae7bd31e2616ebc88f696f511e4d,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3],Size_:351828271,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=4577ebaf-9605-4490-8cfa-fe1dff0c70be name=/runtime.v1.ImageService/ImageStatus
Feb 23 19:05:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:05:24.217032 2199 scope.go:115] "RemoveContainer" containerID="ca58b4954068c8d41522c57668c94f1c38badae07372357f30f411ebf47158b2"
Feb 23 19:05:24 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:05:24.217604 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 19:05:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:05:26.292115 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:05:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:05:26.292405 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:05:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:05:26.292615 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:05:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:05:26.292640 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 19:05:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:33.245753114Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(1b77ee52562fc936238b1c1e4c4435eedd234a1bf52b1e36ba1b8d52724b860e): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=84253131-d3b5-4e2c-a630-48d7039be4a5 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:05:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:33.245806409Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 1b77ee52562fc936238b1c1e4c4435eedd234a1bf52b1e36ba1b8d52724b860e" id=84253131-d3b5-4e2c-a630-48d7039be4a5 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:05:33 ip-10-0-136-68 systemd[1]: run-utsns-414d1e19\x2dcb75\x2d427e\x2da95b\x2d851675eddc97.mount: Deactivated successfully.
Feb 23 19:05:33 ip-10-0-136-68 systemd[1]: run-ipcns-414d1e19\x2dcb75\x2d427e\x2da95b\x2d851675eddc97.mount: Deactivated successfully.
Feb 23 19:05:33 ip-10-0-136-68 systemd[1]: run-netns-414d1e19\x2dcb75\x2d427e\x2da95b\x2d851675eddc97.mount: Deactivated successfully.
Feb 23 19:05:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:33.270346543Z" level=info msg="runSandbox: deleting pod ID 1b77ee52562fc936238b1c1e4c4435eedd234a1bf52b1e36ba1b8d52724b860e from idIndex" id=84253131-d3b5-4e2c-a630-48d7039be4a5 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:05:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:33.270387149Z" level=info msg="runSandbox: removing pod sandbox 1b77ee52562fc936238b1c1e4c4435eedd234a1bf52b1e36ba1b8d52724b860e" id=84253131-d3b5-4e2c-a630-48d7039be4a5 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:05:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:33.270433922Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 1b77ee52562fc936238b1c1e4c4435eedd234a1bf52b1e36ba1b8d52724b860e" id=84253131-d3b5-4e2c-a630-48d7039be4a5 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:05:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:33.270448173Z" level=info msg="runSandbox: unmounting shmPath for sandbox 1b77ee52562fc936238b1c1e4c4435eedd234a1bf52b1e36ba1b8d52724b860e" id=84253131-d3b5-4e2c-a630-48d7039be4a5 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:05:33 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-1b77ee52562fc936238b1c1e4c4435eedd234a1bf52b1e36ba1b8d52724b860e-userdata-shm.mount: Deactivated successfully.
Feb 23 19:05:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:33.276320086Z" level=info msg="runSandbox: removing pod sandbox from storage: 1b77ee52562fc936238b1c1e4c4435eedd234a1bf52b1e36ba1b8d52724b860e" id=84253131-d3b5-4e2c-a630-48d7039be4a5 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:05:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:33.277887949Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=84253131-d3b5-4e2c-a630-48d7039be4a5 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:05:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:33.277917335Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=84253131-d3b5-4e2c-a630-48d7039be4a5 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:05:33 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:05:33.278143 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(1b77ee52562fc936238b1c1e4c4435eedd234a1bf52b1e36ba1b8d52724b860e): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 19:05:33 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:05:33.278209 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(1b77ee52562fc936238b1c1e4c4435eedd234a1bf52b1e36ba1b8d52724b860e): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk"
Feb 23 19:05:33 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:05:33.278231 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(1b77ee52562fc936238b1c1e4c4435eedd234a1bf52b1e36ba1b8d52724b860e): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk"
Feb 23 19:05:33 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:05:33.278327 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(1b77ee52562fc936238b1c1e4c4435eedd234a1bf52b1e36ba1b8d52724b860e): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 19:05:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:05:36.217286 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:05:36 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:05:36.217337 2199 scope.go:115] "RemoveContainer" containerID="ca58b4954068c8d41522c57668c94f1c38badae07372357f30f411ebf47158b2" Feb 23 19:05:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:05:36.218421 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:05:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:05:36.218613 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:05:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:05:36.218951 2199 remote_runtime.go:479] "ExecSync 
cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:05:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:05:36.218984 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:05:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:42.247153361Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(98ccc1cc39866c2510c271ed0e41121fc9db5dbc465f9a0f7556df895a32c61f): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c842dacc-c474-4ca3-ac49-45f9e3fab1a7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:05:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:42.247193376Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 98ccc1cc39866c2510c271ed0e41121fc9db5dbc465f9a0f7556df895a32c61f" id=c842dacc-c474-4ca3-ac49-45f9e3fab1a7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 
19:05:42 ip-10-0-136-68 systemd[1]: run-utsns-38265218\x2d1ead\x2d4af9\x2d9819\x2d6afa0f623205.mount: Deactivated successfully. Feb 23 19:05:42 ip-10-0-136-68 systemd[1]: run-ipcns-38265218\x2d1ead\x2d4af9\x2d9819\x2d6afa0f623205.mount: Deactivated successfully. Feb 23 19:05:42 ip-10-0-136-68 systemd[1]: run-netns-38265218\x2d1ead\x2d4af9\x2d9819\x2d6afa0f623205.mount: Deactivated successfully. Feb 23 19:05:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:42.271320601Z" level=info msg="runSandbox: deleting pod ID 98ccc1cc39866c2510c271ed0e41121fc9db5dbc465f9a0f7556df895a32c61f from idIndex" id=c842dacc-c474-4ca3-ac49-45f9e3fab1a7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:05:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:42.271354046Z" level=info msg="runSandbox: removing pod sandbox 98ccc1cc39866c2510c271ed0e41121fc9db5dbc465f9a0f7556df895a32c61f" id=c842dacc-c474-4ca3-ac49-45f9e3fab1a7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:05:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:42.271379421Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 98ccc1cc39866c2510c271ed0e41121fc9db5dbc465f9a0f7556df895a32c61f" id=c842dacc-c474-4ca3-ac49-45f9e3fab1a7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:05:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:42.271391012Z" level=info msg="runSandbox: unmounting shmPath for sandbox 98ccc1cc39866c2510c271ed0e41121fc9db5dbc465f9a0f7556df895a32c61f" id=c842dacc-c474-4ca3-ac49-45f9e3fab1a7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:05:42 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-98ccc1cc39866c2510c271ed0e41121fc9db5dbc465f9a0f7556df895a32c61f-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:05:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:42.277312926Z" level=info msg="runSandbox: removing pod sandbox from storage: 98ccc1cc39866c2510c271ed0e41121fc9db5dbc465f9a0f7556df895a32c61f" id=c842dacc-c474-4ca3-ac49-45f9e3fab1a7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:05:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:42.278866022Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=c842dacc-c474-4ca3-ac49-45f9e3fab1a7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:05:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:42.278896724Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=c842dacc-c474-4ca3-ac49-45f9e3fab1a7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:05:42 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:05:42.279085 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(98ccc1cc39866c2510c271ed0e41121fc9db5dbc465f9a0f7556df895a32c61f): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:05:42 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:05:42.279131 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(98ccc1cc39866c2510c271ed0e41121fc9db5dbc465f9a0f7556df895a32c61f): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:05:42 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:05:42.279160 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(98ccc1cc39866c2510c271ed0e41121fc9db5dbc465f9a0f7556df895a32c61f): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:05:42 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:05:42.279217 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(98ccc1cc39866c2510c271ed0e41121fc9db5dbc465f9a0f7556df895a32c61f): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 19:05:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:43.236795282Z" level=info msg="NetworkStart: stopping network for sandbox e7979e44fda33a60f103cab422bcee28a663e2a2c573a04e13c08005510196f8" id=1d7b4221-513c-425f-a404-b89b616e6837 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:05:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:43.236922321Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:e7979e44fda33a60f103cab422bcee28a663e2a2c573a04e13c08005510196f8 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/7422ada4-bea7-4a5e-9423-e34b59466a2f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:05:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:43.236953692Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:05:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:43.236965897Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:05:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:43.236973503Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:05:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:44.244531340Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(f7258fc2da8e43d3574f20aeea0ef470f954eb1031cc760eb1faf159846c910b): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: 
[openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=4b6238a7-9463-4220-a457-4b96617fa781 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:05:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:44.244581631Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f7258fc2da8e43d3574f20aeea0ef470f954eb1031cc760eb1faf159846c910b" id=4b6238a7-9463-4220-a457-4b96617fa781 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:05:44 ip-10-0-136-68 systemd[1]: run-utsns-0a1f76d5\x2d7fed\x2d4afd\x2d9199\x2da7643b90317d.mount: Deactivated successfully. Feb 23 19:05:44 ip-10-0-136-68 systemd[1]: run-ipcns-0a1f76d5\x2d7fed\x2d4afd\x2d9199\x2da7643b90317d.mount: Deactivated successfully. Feb 23 19:05:44 ip-10-0-136-68 systemd[1]: run-netns-0a1f76d5\x2d7fed\x2d4afd\x2d9199\x2da7643b90317d.mount: Deactivated successfully. Feb 23 19:05:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:44.274339883Z" level=info msg="runSandbox: deleting pod ID f7258fc2da8e43d3574f20aeea0ef470f954eb1031cc760eb1faf159846c910b from idIndex" id=4b6238a7-9463-4220-a457-4b96617fa781 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:05:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:44.274373775Z" level=info msg="runSandbox: removing pod sandbox f7258fc2da8e43d3574f20aeea0ef470f954eb1031cc760eb1faf159846c910b" id=4b6238a7-9463-4220-a457-4b96617fa781 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:05:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:44.274400015Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f7258fc2da8e43d3574f20aeea0ef470f954eb1031cc760eb1faf159846c910b" id=4b6238a7-9463-4220-a457-4b96617fa781 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:05:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:44.274426015Z" level=info msg="runSandbox: unmounting shmPath 
for sandbox f7258fc2da8e43d3574f20aeea0ef470f954eb1031cc760eb1faf159846c910b" id=4b6238a7-9463-4220-a457-4b96617fa781 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:05:44 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-f7258fc2da8e43d3574f20aeea0ef470f954eb1031cc760eb1faf159846c910b-userdata-shm.mount: Deactivated successfully. Feb 23 19:05:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:44.280304928Z" level=info msg="runSandbox: removing pod sandbox from storage: f7258fc2da8e43d3574f20aeea0ef470f954eb1031cc760eb1faf159846c910b" id=4b6238a7-9463-4220-a457-4b96617fa781 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:05:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:44.281738062Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=4b6238a7-9463-4220-a457-4b96617fa781 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:05:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:44.281765713Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=4b6238a7-9463-4220-a457-4b96617fa781 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:05:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:05:44.281941 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(f7258fc2da8e43d3574f20aeea0ef470f954eb1031cc760eb1faf159846c910b): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Feb 23 19:05:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:05:44.281989 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(f7258fc2da8e43d3574f20aeea0ef470f954eb1031cc760eb1faf159846c910b): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:05:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:05:44.282011 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(f7258fc2da8e43d3574f20aeea0ef470f954eb1031cc760eb1faf159846c910b): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:05:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:05:44.282069 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(f7258fc2da8e43d3574f20aeea0ef470f954eb1031cc760eb1faf159846c910b): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 19:05:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:47.247688752Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(f475986c6ca448303d0593d9e0f9996d84d67a1fb2cbe7760d042e342895ce0d): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=268381cf-a25d-49d5-b7a4-d1671f809ffb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:05:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:47.247740973Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f475986c6ca448303d0593d9e0f9996d84d67a1fb2cbe7760d042e342895ce0d" id=268381cf-a25d-49d5-b7a4-d1671f809ffb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:05:47 ip-10-0-136-68 systemd[1]: run-utsns-5fd49aa8\x2dff9b\x2d44ca\x2da748\x2d0842d90d50d2.mount: Deactivated successfully. Feb 23 19:05:47 ip-10-0-136-68 systemd[1]: run-ipcns-5fd49aa8\x2dff9b\x2d44ca\x2da748\x2d0842d90d50d2.mount: Deactivated successfully. Feb 23 19:05:47 ip-10-0-136-68 systemd[1]: run-netns-5fd49aa8\x2dff9b\x2d44ca\x2da748\x2d0842d90d50d2.mount: Deactivated successfully. 
Feb 23 19:05:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:47.266314542Z" level=info msg="runSandbox: deleting pod ID f475986c6ca448303d0593d9e0f9996d84d67a1fb2cbe7760d042e342895ce0d from idIndex" id=268381cf-a25d-49d5-b7a4-d1671f809ffb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:05:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:47.266352582Z" level=info msg="runSandbox: removing pod sandbox f475986c6ca448303d0593d9e0f9996d84d67a1fb2cbe7760d042e342895ce0d" id=268381cf-a25d-49d5-b7a4-d1671f809ffb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:05:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:47.266395522Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f475986c6ca448303d0593d9e0f9996d84d67a1fb2cbe7760d042e342895ce0d" id=268381cf-a25d-49d5-b7a4-d1671f809ffb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:05:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:47.266415209Z" level=info msg="runSandbox: unmounting shmPath for sandbox f475986c6ca448303d0593d9e0f9996d84d67a1fb2cbe7760d042e342895ce0d" id=268381cf-a25d-49d5-b7a4-d1671f809ffb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:05:47 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-f475986c6ca448303d0593d9e0f9996d84d67a1fb2cbe7760d042e342895ce0d-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:05:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:47.273306288Z" level=info msg="runSandbox: removing pod sandbox from storage: f475986c6ca448303d0593d9e0f9996d84d67a1fb2cbe7760d042e342895ce0d" id=268381cf-a25d-49d5-b7a4-d1671f809ffb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:05:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:47.274846284Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=268381cf-a25d-49d5-b7a4-d1671f809ffb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:05:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:47.274879371Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=268381cf-a25d-49d5-b7a4-d1671f809ffb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:05:47 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:05:47.275087 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(f475986c6ca448303d0593d9e0f9996d84d67a1fb2cbe7760d042e342895ce0d): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:05:47 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:05:47.275146 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(f475986c6ca448303d0593d9e0f9996d84d67a1fb2cbe7760d042e342895ce0d): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:05:47 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:05:47.275170 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(f475986c6ca448303d0593d9e0f9996d84d67a1fb2cbe7760d042e342895ce0d): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:05:47 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:05:47.275225 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(f475986c6ca448303d0593d9e0f9996d84d67a1fb2cbe7760d042e342895ce0d): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 19:05:48 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:05:48.216751 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:05:48 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:05:48.216927 2199 scope.go:115] "RemoveContainer" containerID="ca58b4954068c8d41522c57668c94f1c38badae07372357f30f411ebf47158b2" Feb 23 19:05:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:48.217183479Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=1e65060a-01c2-48e0-b2e8-3ed360975ae1 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:05:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:48.217294960Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:05:48 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:05:48.217499 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:05:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:48.222636946Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:c4fd0ed27eade299a0aa9190187512f3c2b585bd647e0359320e16a42d45e3fa UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/296800c9-58ac-4fb1-be50-b4221b524e86 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:05:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:48.222670441Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:05:55 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:05:55.217186 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 19:05:55 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:05:55.217186 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:05:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:55.217600156Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=87bc5d38-36b8-4004-b32c-8f4d277c90cd name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:05:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:55.217655888Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:05:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:55.217600055Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=8043d1d9-7eb8-47ee-a3e4-314b3f67b2a2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:05:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:55.217725393Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:05:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:55.225273753Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:014c08534ed0765c541339241d8dc7d21201fd3a20a40f04af10fe0cf201a2a6 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/d34a91d0-00e3-41df-b567-277769cd2c9f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:05:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:55.225484698Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:05:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:55.225790978Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:65f54023129e995ec9d54ffd01bad872d6875a260d53e2e375cfb3294bbf6927 
UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/508ff9ba-58ce-473c-af3c-e7e2a3a7a146 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:05:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:55.225821276Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:05:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:05:56.292512 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:05:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:05:56.292771 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:05:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:05:56.293034 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f 
/etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:05:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:05:56.293074 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:05:59 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:05:59.216937 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:05:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:59.217319724Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=14f5fff4-7a71-4894-a45c-f18cd35bc985 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:05:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:59.217379188Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:05:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:59.222992380Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:3d695f4db28cf46d99810188441a98abbead5289e9ed4a4320f89fe5b98d5288 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/80a579a5-74e9-429e-b9be-6b1a08aa19c5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:05:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:05:59.223049782Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:06:02 ip-10-0-136-68 kubenswrapper[2199]: I0223 
19:06:02.217617 2199 scope.go:115] "RemoveContainer" containerID="ca58b4954068c8d41522c57668c94f1c38badae07372357f30f411ebf47158b2" Feb 23 19:06:02 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:06:02.218214 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:06:13 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:06:13.216732 2199 scope.go:115] "RemoveContainer" containerID="ca58b4954068c8d41522c57668c94f1c38badae07372357f30f411ebf47158b2" Feb 23 19:06:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:06:13.217431455Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=a4ba02ee-667e-4cd6-a587-03c78325f547 name=/runtime.v1.ImageService/ImageStatus Feb 23 19:06:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:06:13.217664883Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=a4ba02ee-667e-4cd6-a587-03c78325f547 name=/runtime.v1.ImageService/ImageStatus Feb 23 19:06:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:06:13.218239974Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" 
id=aa9c06d9-45ca-4653-ba73-e74da769b506 name=/runtime.v1.ImageService/ImageStatus Feb 23 19:06:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:06:13.218398052Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=aa9c06d9-45ca-4653-ba73-e74da769b506 name=/runtime.v1.ImageService/ImageStatus Feb 23 19:06:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:06:13.219021159Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=d0fdc01c-5313-4742-880c-1ac2ec444bc6 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 19:06:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:06:13.219130149Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:06:13 ip-10-0-136-68 systemd[1]: Started crio-conmon-3fb847b5c8f698160e020deb7787a3ddaab0f6df197a1d46c78923a2acab37ab.scope. Feb 23 19:06:13 ip-10-0-136-68 systemd[1]: Started libcontainer container 3fb847b5c8f698160e020deb7787a3ddaab0f6df197a1d46c78923a2acab37ab. Feb 23 19:06:13 ip-10-0-136-68 conmon[13456]: conmon 3fb847b5c8f698160e02 : Failed to write to cgroup.event_control Operation not supported Feb 23 19:06:13 ip-10-0-136-68 systemd[1]: crio-conmon-3fb847b5c8f698160e020deb7787a3ddaab0f6df197a1d46c78923a2acab37ab.scope: Deactivated successfully. 
Feb 23 19:06:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:06:13.356219078Z" level=info msg="Created container 3fb847b5c8f698160e020deb7787a3ddaab0f6df197a1d46c78923a2acab37ab: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=d0fdc01c-5313-4742-880c-1ac2ec444bc6 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 19:06:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:06:13.356633014Z" level=info msg="Starting container: 3fb847b5c8f698160e020deb7787a3ddaab0f6df197a1d46c78923a2acab37ab" id=ca653957-e16b-476d-969e-a98a5ffe0bcc name=/runtime.v1.RuntimeService/StartContainer Feb 23 19:06:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:06:13.374430935Z" level=info msg="Started container" PID=13468 containerID=3fb847b5c8f698160e020deb7787a3ddaab0f6df197a1d46c78923a2acab37ab description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver id=ca653957-e16b-476d-969e-a98a5ffe0bcc name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4ac357af36e7dd4792f93249425e93fd84aed45840e9fbb3d0ae6c0af964798 Feb 23 19:06:13 ip-10-0-136-68 systemd[1]: crio-3fb847b5c8f698160e020deb7787a3ddaab0f6df197a1d46c78923a2acab37ab.scope: Deactivated successfully. 
Feb 23 19:06:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:06:17.858089772Z" level=warning msg="Failed to find container exit file for ca58b4954068c8d41522c57668c94f1c38badae07372357f30f411ebf47158b2: timed out waiting for the condition" id=4a2bdbc3-6c80-44a7-8e62-6e9fc506c792 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:06:17 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:06:17.858939 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerStarted Data:3fb847b5c8f698160e020deb7787a3ddaab0f6df197a1d46c78923a2acab37ab} Feb 23 19:06:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:06:24.872124 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:06:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:06:24.872176 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:06:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:06:26.292829 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:06:26 ip-10-0-136-68 
kubenswrapper[2199]: E0223 19:06:26.293167 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:06:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:06:26.293436 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:06:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:06:26.293477 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:06:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:06:28.246380012Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(e7979e44fda33a60f103cab422bcee28a663e2a2c573a04e13c08005510196f8): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network 
\"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=1d7b4221-513c-425f-a404-b89b616e6837 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:06:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:06:28.246431098Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e7979e44fda33a60f103cab422bcee28a663e2a2c573a04e13c08005510196f8" id=1d7b4221-513c-425f-a404-b89b616e6837 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:06:28 ip-10-0-136-68 systemd[1]: run-utsns-7422ada4\x2dbea7\x2d4a5e\x2d9423\x2de34b59466a2f.mount: Deactivated successfully. Feb 23 19:06:28 ip-10-0-136-68 systemd[1]: run-ipcns-7422ada4\x2dbea7\x2d4a5e\x2d9423\x2de34b59466a2f.mount: Deactivated successfully. Feb 23 19:06:28 ip-10-0-136-68 systemd[1]: run-netns-7422ada4\x2dbea7\x2d4a5e\x2d9423\x2de34b59466a2f.mount: Deactivated successfully. 
Feb 23 19:06:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:06:28.280358063Z" level=info msg="runSandbox: deleting pod ID e7979e44fda33a60f103cab422bcee28a663e2a2c573a04e13c08005510196f8 from idIndex" id=1d7b4221-513c-425f-a404-b89b616e6837 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:06:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:06:28.280401580Z" level=info msg="runSandbox: removing pod sandbox e7979e44fda33a60f103cab422bcee28a663e2a2c573a04e13c08005510196f8" id=1d7b4221-513c-425f-a404-b89b616e6837 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:06:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:06:28.280452960Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e7979e44fda33a60f103cab422bcee28a663e2a2c573a04e13c08005510196f8" id=1d7b4221-513c-425f-a404-b89b616e6837 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:06:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:06:28.280471634Z" level=info msg="runSandbox: unmounting shmPath for sandbox e7979e44fda33a60f103cab422bcee28a663e2a2c573a04e13c08005510196f8" id=1d7b4221-513c-425f-a404-b89b616e6837 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:06:28 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-e7979e44fda33a60f103cab422bcee28a663e2a2c573a04e13c08005510196f8-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:06:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:06:28.285310761Z" level=info msg="runSandbox: removing pod sandbox from storage: e7979e44fda33a60f103cab422bcee28a663e2a2c573a04e13c08005510196f8" id=1d7b4221-513c-425f-a404-b89b616e6837 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:06:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:06:28.286909265Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=1d7b4221-513c-425f-a404-b89b616e6837 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:06:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:06:28.286944655Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=1d7b4221-513c-425f-a404-b89b616e6837 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:06:28 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:06:28.287194 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(e7979e44fda33a60f103cab422bcee28a663e2a2c573a04e13c08005510196f8): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:06:28 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:06:28.287314 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(e7979e44fda33a60f103cab422bcee28a663e2a2c573a04e13c08005510196f8): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:06:28 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:06:28.287346 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(e7979e44fda33a60f103cab422bcee28a663e2a2c573a04e13c08005510196f8): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:06:28 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:06:28.287407 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(e7979e44fda33a60f103cab422bcee28a663e2a2c573a04e13c08005510196f8): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 19:06:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:06:33.234625700Z" level=info msg="NetworkStart: stopping network for sandbox c4fd0ed27eade299a0aa9190187512f3c2b585bd647e0359320e16a42d45e3fa" id=1e65060a-01c2-48e0-b2e8-3ed360975ae1 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:06:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:06:33.234748750Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:c4fd0ed27eade299a0aa9190187512f3c2b585bd647e0359320e16a42d45e3fa UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/296800c9-58ac-4fb1-be50-b4221b524e86 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:06:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:06:33.234778640Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:06:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:06:33.234790108Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:06:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:06:33.234797391Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:06:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:06:34.872274 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:06:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:06:34.872336 2199 prober.go:109] "Probe failed" probeType="Liveness" 
pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:06:40 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:06:40.217738 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:06:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:06:40.218159294Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=d268a9c0-5dd4-46e3-b4cf-7ab2216cd1d2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:06:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:06:40.218224189Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:06:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:06:40.223790869Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:a71afa4903e010061ab94fb9bc4a5df20369884c6a92620f593155de7fa0c2b5 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/15ff2b63-d25b-4c55-beea-1a089634880b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:06:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:06:40.223822808Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:06:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:06:40.238558083Z" level=info msg="NetworkStart: stopping network for sandbox 014c08534ed0765c541339241d8dc7d21201fd3a20a40f04af10fe0cf201a2a6" id=8043d1d9-7eb8-47ee-a3e4-314b3f67b2a2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:06:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:06:40.238659372Z" level=info 
msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:014c08534ed0765c541339241d8dc7d21201fd3a20a40f04af10fe0cf201a2a6 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/d34a91d0-00e3-41df-b567-277769cd2c9f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:06:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:06:40.238698958Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:06:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:06:40.238711377Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:06:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:06:40.238721640Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:06:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:06:40.239079008Z" level=info msg="NetworkStart: stopping network for sandbox 65f54023129e995ec9d54ffd01bad872d6875a260d53e2e375cfb3294bbf6927" id=87bc5d38-36b8-4004-b32c-8f4d277c90cd name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:06:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:06:40.239177323Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:65f54023129e995ec9d54ffd01bad872d6875a260d53e2e375cfb3294bbf6927 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/508ff9ba-58ce-473c-af3c-e7e2a3a7a146 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:06:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:06:40.239214152Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:06:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:06:40.239226665Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 
19:06:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:06:40.239236342Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:06:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:06:44.234688632Z" level=info msg="NetworkStart: stopping network for sandbox 3d695f4db28cf46d99810188441a98abbead5289e9ed4a4320f89fe5b98d5288" id=14f5fff4-7a71-4894-a45c-f18cd35bc985 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:06:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:06:44.234811064Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:3d695f4db28cf46d99810188441a98abbead5289e9ed4a4320f89fe5b98d5288 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/80a579a5-74e9-429e-b9be-6b1a08aa19c5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:06:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:06:44.234852297Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 19:06:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:06:44.234863864Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 19:06:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:06:44.234874249Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:06:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:06:44.872599 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 19:06:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:06:44.872670 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 19:06:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:06:54.872650 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 19:06:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:06:54.872798 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 19:06:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:06:56.291960 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:06:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:06:56.292296 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:06:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:06:56.292498 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:06:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:06:56.292529 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 19:07:01 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:07:01.217593 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:07:01 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:07:01.217894 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:07:01 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:07:01.218136 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:07:01 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:07:01.218163 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 19:07:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:07:04.873064 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 19:07:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:07:04.873118 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 19:07:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:07:04.873147 2199 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7"
Feb 23 19:07:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:07:04.873723 2199 kuberuntime_manager.go:659] "Message for Container of pod" containerName="csi-driver" containerStatusID={Type:cri-o ID:3fb847b5c8f698160e020deb7787a3ddaab0f6df197a1d46c78923a2acab37ab} pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" containerMessage="Container csi-driver failed liveness probe, will be restarted"
Feb 23 19:07:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:07:04.873880 2199 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" containerID="cri-o://3fb847b5c8f698160e020deb7787a3ddaab0f6df197a1d46c78923a2acab37ab" gracePeriod=30
Feb 23 19:07:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:04.874125192Z" level=info msg="Stopping container: 3fb847b5c8f698160e020deb7787a3ddaab0f6df197a1d46c78923a2acab37ab (timeout: 30s)" id=51b58847-67fe-4d09-8b89-64dac0da28de name=/runtime.v1.RuntimeService/StopContainer
Feb 23 19:07:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:08.634216648Z" level=warning msg="Failed to find container exit file for 3fb847b5c8f698160e020deb7787a3ddaab0f6df197a1d46c78923a2acab37ab: timed out waiting for the condition" id=51b58847-67fe-4d09-8b89-64dac0da28de name=/runtime.v1.RuntimeService/StopContainer
Feb 23 19:07:08 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-1e80d58df30557ae91865da85474ebb030c131e1ce48b2ad38f3a464ba55ed7c-merged.mount: Deactivated successfully.
Feb 23 19:07:08 ip-10-0-136-68 sshd[13591]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 19:07:10 ip-10-0-136-68 sshd[13591]: Accepted publickey for core from 10.0.182.221 port 41250 ssh2: RSA SHA256:Ez+JFROVIkSQ/eAziisgy16VY49IFSr8A84gQk7WcPc
Feb 23 19:07:10 ip-10-0-136-68 systemd[1]: Created slice User Slice of UID 1000.
Feb 23 19:07:10 ip-10-0-136-68 systemd[1]: Starting User Runtime Directory /run/user/1000...
Feb 23 19:07:10 ip-10-0-136-68 systemd-logind[985]: New session 1 of user core.
Feb 23 19:07:10 ip-10-0-136-68 systemd[1]: Finished User Runtime Directory /run/user/1000.
Feb 23 19:07:10 ip-10-0-136-68 systemd[1]: Starting User Manager for UID 1000...
Feb 23 19:07:10 ip-10-0-136-68 systemd[13603]: pam_unix(systemd-user:session): session opened for user core(uid=1000) by (uid=0)
Feb 23 19:07:10 ip-10-0-136-68 systemd[13609]: /usr/lib/systemd/user-generators/podman-user-generator failed with exit status 1.
Feb 23 19:07:10 ip-10-0-136-68 systemd[13603]: Queued start job for default target Main User Target.
Feb 23 19:07:10 ip-10-0-136-68 systemd[13603]: Created slice User Application Slice.
Feb 23 19:07:10 ip-10-0-136-68 systemd[13603]: Started Daily Cleanup of User's Temporary Directories.
Feb 23 19:07:10 ip-10-0-136-68 systemd[13603]: Reached target Paths.
Feb 23 19:07:10 ip-10-0-136-68 systemd[13603]: Reached target Timers.
Feb 23 19:07:10 ip-10-0-136-68 systemd[13603]: Starting D-Bus User Message Bus Socket...
Feb 23 19:07:10 ip-10-0-136-68 systemd[13603]: Starting Create User's Volatile Files and Directories...
Feb 23 19:07:10 ip-10-0-136-68 systemd[13603]: Listening on D-Bus User Message Bus Socket.
Feb 23 19:07:10 ip-10-0-136-68 systemd[13603]: Reached target Sockets.
Feb 23 19:07:10 ip-10-0-136-68 systemd[13603]: Finished Create User's Volatile Files and Directories.
Feb 23 19:07:10 ip-10-0-136-68 systemd[13603]: Reached target Basic System.
Feb 23 19:07:10 ip-10-0-136-68 systemd[1]: Started User Manager for UID 1000.
Feb 23 19:07:10 ip-10-0-136-68 systemd[1]: Started Session 1 of User core.
Feb 23 19:07:10 ip-10-0-136-68 systemd[13603]: Reached target Main User Target.
Feb 23 19:07:10 ip-10-0-136-68 systemd[13603]: Startup finished in 123ms.
Feb 23 19:07:10 ip-10-0-136-68 sshd[13591]: pam_unix(sshd:session): session opened for user core(uid=1000) by (uid=0)
Feb 23 19:07:10 ip-10-0-136-68 sudo[13622]: core : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/bash
Feb 23 19:07:10 ip-10-0-136-68 sudo[13622]: pam_unix(sudo-i:session): session opened for user root(uid=0) by core(uid=1000)
Feb 23 19:07:10 ip-10-0-136-68 systemd[1]: Starting Hostname Service...
Feb 23 19:07:10 ip-10-0-136-68 systemd[1]: Started Hostname Service.
Feb 23 19:07:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:12.414732348Z" level=warning msg="Failed to find container exit file for 3fb847b5c8f698160e020deb7787a3ddaab0f6df197a1d46c78923a2acab37ab: timed out waiting for the condition" id=51b58847-67fe-4d09-8b89-64dac0da28de name=/runtime.v1.RuntimeService/StopContainer
Feb 23 19:07:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:12.417971543Z" level=info msg="Stopped container 3fb847b5c8f698160e020deb7787a3ddaab0f6df197a1d46c78923a2acab37ab: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=51b58847-67fe-4d09-8b89-64dac0da28de name=/runtime.v1.RuntimeService/StopContainer
Feb 23 19:07:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:12.418702845Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=a87aacf0-dd19-4f5d-b722-c5a5522f0c3f name=/runtime.v1.ImageService/ImageStatus
Feb 23 19:07:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:12.418934735Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=a87aacf0-dd19-4f5d-b722-c5a5522f0c3f name=/runtime.v1.ImageService/ImageStatus
Feb 23 19:07:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:12.419522233Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=bff616e3-f28a-4c02-b0c8-5012597cbaf7 name=/runtime.v1.ImageService/ImageStatus
Feb 23 19:07:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:12.419651861Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=bff616e3-f28a-4c02-b0c8-5012597cbaf7 name=/runtime.v1.ImageService/ImageStatus
Feb 23 19:07:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:12.420482064Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=09b91216-b81f-482a-b9ca-a0e699801524 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 19:07:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:12.420571433Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 19:07:12 ip-10-0-136-68 systemd[1]: Started crio-conmon-90ab87dda457b2284c2571d0f3c7c38feccc954681d2a6889cee3bfa56705742.scope.
Feb 23 19:07:12 ip-10-0-136-68 systemd[1]: Started libcontainer container 90ab87dda457b2284c2571d0f3c7c38feccc954681d2a6889cee3bfa56705742.
Feb 23 19:07:12 ip-10-0-136-68 conmon[13676]: conmon 90ab87dda457b2284c25 : Failed to write to cgroup.event_control Operation not supported
Feb 23 19:07:12 ip-10-0-136-68 systemd[1]: crio-conmon-90ab87dda457b2284c2571d0f3c7c38feccc954681d2a6889cee3bfa56705742.scope: Deactivated successfully.
Feb 23 19:07:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:12.543196719Z" level=info msg="Created container 90ab87dda457b2284c2571d0f3c7c38feccc954681d2a6889cee3bfa56705742: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=09b91216-b81f-482a-b9ca-a0e699801524 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 19:07:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:12.543600607Z" level=info msg="Starting container: 90ab87dda457b2284c2571d0f3c7c38feccc954681d2a6889cee3bfa56705742" id=ff7bb9fe-fcaf-428c-bf48-98a02a7fcdd6 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 19:07:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:12.550627830Z" level=info msg="Started container" PID=13688 containerID=90ab87dda457b2284c2571d0f3c7c38feccc954681d2a6889cee3bfa56705742 description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver id=ff7bb9fe-fcaf-428c-bf48-98a02a7fcdd6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4ac357af36e7dd4792f93249425e93fd84aed45840e9fbb3d0ae6c0af964798
Feb 23 19:07:12 ip-10-0-136-68 systemd[1]: crio-90ab87dda457b2284c2571d0f3c7c38feccc954681d2a6889cee3bfa56705742.scope: Deactivated successfully.
Feb 23 19:07:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:12.700395469Z" level=warning msg="Failed to find container exit file for 3fb847b5c8f698160e020deb7787a3ddaab0f6df197a1d46c78923a2acab37ab: timed out waiting for the condition" id=13e6cf79-5b10-4374-aa04-17ab4cd4da62 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 19:07:15 ip-10-0-136-68 systemd[1]: Starting rpm-ostree System Management Daemon...
Feb 23 19:07:15 ip-10-0-136-68 rpm-ostree[13736]: Reading config file '/etc/rpm-ostreed.conf'
Feb 23 19:07:15 ip-10-0-136-68 rpm-ostree[13736]: failed to query container image base metadata: Missing base image ref ostree/container/blob/sha256_3A_4f92d094360fb582b58beaa7fd99fcdcab8b2af5cfe78b5cc5d9b36be254c3b7
Feb 23 19:07:15 ip-10-0-136-68 rpm-ostree[13736]: failed to query container image base metadata: Missing base image ref ostree/container/blob/sha256_3A_4f92d094360fb582b58beaa7fd99fcdcab8b2af5cfe78b5cc5d9b36be254c3b7
Feb 23 19:07:15 ip-10-0-136-68 rpm-ostree[13736]: In idle state; will auto-exit in 61 seconds
Feb 23 19:07:15 ip-10-0-136-68 systemd[1]: Started rpm-ostree System Management Daemon.
Feb 23 19:07:15 ip-10-0-136-68 rpm-ostree[13736]: client(id:cli dbus:1.271 unit:session-1.scope uid:0) added; new total=1
Feb 23 19:07:15 ip-10-0-136-68 rpm-ostree[13736]: failed to query container image base metadata: Missing base image ref ostree/container/blob/sha256_3A_4f92d094360fb582b58beaa7fd99fcdcab8b2af5cfe78b5cc5d9b36be254c3b7
Feb 23 19:07:15 ip-10-0-136-68 rpm-ostree[13736]: client(id:cli dbus:1.271 unit:session-1.scope uid:0) vanished; remaining=0
Feb 23 19:07:15 ip-10-0-136-68 rpm-ostree[13736]: In idle state; will auto-exit in 63 seconds
Feb 23 19:07:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:16.451438370Z" level=warning msg="Failed to find container exit file for ca58b4954068c8d41522c57668c94f1c38badae07372357f30f411ebf47158b2: timed out waiting for the condition" id=afc5e834-5c96-44bc-beec-ad39cb605cdd name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 19:07:16 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:07:16.452518 2199 generic.go:332] "Generic (PLEG): container finished" podID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerID="3fb847b5c8f698160e020deb7787a3ddaab0f6df197a1d46c78923a2acab37ab" exitCode=-1
Feb 23 19:07:16 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:07:16.452561 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerDied Data:3fb847b5c8f698160e020deb7787a3ddaab0f6df197a1d46c78923a2acab37ab}
Feb 23 19:07:16 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:07:16.452751 2199 scope.go:115] "RemoveContainer" containerID="ca58b4954068c8d41522c57668c94f1c38badae07372357f30f411ebf47158b2"
Feb 23 19:07:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:18.244617610Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(c4fd0ed27eade299a0aa9190187512f3c2b585bd647e0359320e16a42d45e3fa): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=1e65060a-01c2-48e0-b2e8-3ed360975ae1 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:07:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:18.244667024Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox c4fd0ed27eade299a0aa9190187512f3c2b585bd647e0359320e16a42d45e3fa" id=1e65060a-01c2-48e0-b2e8-3ed360975ae1 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:07:18 ip-10-0-136-68 systemd[1]: run-utsns-296800c9\x2d58ac\x2d4fb1\x2dbe50\x2db4221b524e86.mount: Deactivated successfully.
Feb 23 19:07:18 ip-10-0-136-68 systemd[1]: run-ipcns-296800c9\x2d58ac\x2d4fb1\x2dbe50\x2db4221b524e86.mount: Deactivated successfully.
Feb 23 19:07:18 ip-10-0-136-68 systemd[1]: run-netns-296800c9\x2d58ac\x2d4fb1\x2dbe50\x2db4221b524e86.mount: Deactivated successfully.
Feb 23 19:07:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:18.262362132Z" level=info msg="runSandbox: deleting pod ID c4fd0ed27eade299a0aa9190187512f3c2b585bd647e0359320e16a42d45e3fa from idIndex" id=1e65060a-01c2-48e0-b2e8-3ed360975ae1 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:07:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:18.262393853Z" level=info msg="runSandbox: removing pod sandbox c4fd0ed27eade299a0aa9190187512f3c2b585bd647e0359320e16a42d45e3fa" id=1e65060a-01c2-48e0-b2e8-3ed360975ae1 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:07:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:18.262419088Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox c4fd0ed27eade299a0aa9190187512f3c2b585bd647e0359320e16a42d45e3fa" id=1e65060a-01c2-48e0-b2e8-3ed360975ae1 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:07:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:18.262436134Z" level=info msg="runSandbox: unmounting shmPath for sandbox c4fd0ed27eade299a0aa9190187512f3c2b585bd647e0359320e16a42d45e3fa" id=1e65060a-01c2-48e0-b2e8-3ed360975ae1 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:07:18 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-c4fd0ed27eade299a0aa9190187512f3c2b585bd647e0359320e16a42d45e3fa-userdata-shm.mount: Deactivated successfully.
Feb 23 19:07:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:18.271331932Z" level=info msg="runSandbox: removing pod sandbox from storage: c4fd0ed27eade299a0aa9190187512f3c2b585bd647e0359320e16a42d45e3fa" id=1e65060a-01c2-48e0-b2e8-3ed360975ae1 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:07:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:18.272893350Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=1e65060a-01c2-48e0-b2e8-3ed360975ae1 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:07:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:18.272922175Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=1e65060a-01c2-48e0-b2e8-3ed360975ae1 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:07:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:07:18.273106 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(c4fd0ed27eade299a0aa9190187512f3c2b585bd647e0359320e16a42d45e3fa): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 19:07:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:07:18.273158 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(c4fd0ed27eade299a0aa9190187512f3c2b585bd647e0359320e16a42d45e3fa): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk"
Feb 23 19:07:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:07:18.273186 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(c4fd0ed27eade299a0aa9190187512f3c2b585bd647e0359320e16a42d45e3fa): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk"
Feb 23 19:07:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:07:18.273239 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(c4fd0ed27eade299a0aa9190187512f3c2b585bd647e0359320e16a42d45e3fa): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687
Feb 23 19:07:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:20.201931102Z" level=warning msg="Failed to find container exit file for ca58b4954068c8d41522c57668c94f1c38badae07372357f30f411ebf47158b2: timed out waiting for the condition" id=96c2a0bd-7146-435c-a686-e4c396dbdabf name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 19:07:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:21.217445348Z" level=warning msg="Failed to find container exit file for 3fb847b5c8f698160e020deb7787a3ddaab0f6df197a1d46c78923a2acab37ab: timed out waiting for the condition" id=e2ec7074-9fa4-4a93-b158-e2e1487f084e name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 19:07:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:23.951999855Z" level=warning msg="Failed to find container exit file for ca58b4954068c8d41522c57668c94f1c38badae07372357f30f411ebf47158b2: timed out waiting for the condition" id=2790a828-2ae3-4afa-a579-df5edade936c name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 19:07:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:23.952610930Z" level=info msg="Removing container: ca58b4954068c8d41522c57668c94f1c38badae07372357f30f411ebf47158b2" id=040dd3d3-e53d-4321-8640-4a0014f4e809 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 19:07:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:24.966996842Z" level=warning msg="Failed to find container exit file for ca58b4954068c8d41522c57668c94f1c38badae07372357f30f411ebf47158b2: timed out waiting for the condition" id=e8d04e47-f227-4881-910a-eb543f56e8ff name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 19:07:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:07:24.968031 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerStarted Data:90ab87dda457b2284c2571d0f3c7c38feccc954681d2a6889cee3bfa56705742}
Feb 23 19:07:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:25.235607556Z" level=info msg="NetworkStart: stopping network for sandbox a71afa4903e010061ab94fb9bc4a5df20369884c6a92620f593155de7fa0c2b5" id=d268a9c0-5dd4-46e3-b4cf-7ab2216cd1d2 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:07:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:25.235729510Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:a71afa4903e010061ab94fb9bc4a5df20369884c6a92620f593155de7fa0c2b5 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/15ff2b63-d25b-4c55-beea-1a089634880b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:07:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:25.235758843Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 19:07:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:25.235770103Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 19:07:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:25.235777880Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:07:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:25.248236954Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(65f54023129e995ec9d54ffd01bad872d6875a260d53e2e375cfb3294bbf6927): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=87bc5d38-36b8-4004-b32c-8f4d277c90cd name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:07:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:25.248307575Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 65f54023129e995ec9d54ffd01bad872d6875a260d53e2e375cfb3294bbf6927" id=87bc5d38-36b8-4004-b32c-8f4d277c90cd name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:07:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:25.248517686Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(014c08534ed0765c541339241d8dc7d21201fd3a20a40f04af10fe0cf201a2a6): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=8043d1d9-7eb8-47ee-a3e4-314b3f67b2a2 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:07:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:25.248549374Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 014c08534ed0765c541339241d8dc7d21201fd3a20a40f04af10fe0cf201a2a6" id=8043d1d9-7eb8-47ee-a3e4-314b3f67b2a2 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:07:25 ip-10-0-136-68 systemd[1]: run-utsns-508ff9ba\x2d58ce\x2d473c\x2daf3c\x2de7e2a3a7a146.mount: Deactivated successfully.
Feb 23 19:07:25 ip-10-0-136-68 systemd[1]: run-utsns-d34a91d0\x2d00e3\x2d41df\x2db567\x2d277769cd2c9f.mount: Deactivated successfully.
Feb 23 19:07:25 ip-10-0-136-68 systemd[1]: run-ipcns-508ff9ba\x2d58ce\x2d473c\x2daf3c\x2de7e2a3a7a146.mount: Deactivated successfully.
Feb 23 19:07:25 ip-10-0-136-68 systemd[1]: run-ipcns-d34a91d0\x2d00e3\x2d41df\x2db567\x2d277769cd2c9f.mount: Deactivated successfully.
Feb 23 19:07:25 ip-10-0-136-68 systemd[1]: run-netns-508ff9ba\x2d58ce\x2d473c\x2daf3c\x2de7e2a3a7a146.mount: Deactivated successfully.
Feb 23 19:07:25 ip-10-0-136-68 systemd[1]: run-netns-d34a91d0\x2d00e3\x2d41df\x2db567\x2d277769cd2c9f.mount: Deactivated successfully.
Feb 23 19:07:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:25.269346510Z" level=info msg="runSandbox: deleting pod ID 65f54023129e995ec9d54ffd01bad872d6875a260d53e2e375cfb3294bbf6927 from idIndex" id=87bc5d38-36b8-4004-b32c-8f4d277c90cd name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:07:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:25.269392815Z" level=info msg="runSandbox: removing pod sandbox 65f54023129e995ec9d54ffd01bad872d6875a260d53e2e375cfb3294bbf6927" id=87bc5d38-36b8-4004-b32c-8f4d277c90cd name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:07:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:25.269373511Z" level=info msg="runSandbox: deleting pod ID 014c08534ed0765c541339241d8dc7d21201fd3a20a40f04af10fe0cf201a2a6 from idIndex" id=8043d1d9-7eb8-47ee-a3e4-314b3f67b2a2 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:07:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:25.269432793Z" level=info msg="runSandbox: removing pod sandbox 014c08534ed0765c541339241d8dc7d21201fd3a20a40f04af10fe0cf201a2a6" id=8043d1d9-7eb8-47ee-a3e4-314b3f67b2a2 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:07:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:25.269471859Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 65f54023129e995ec9d54ffd01bad872d6875a260d53e2e375cfb3294bbf6927" id=87bc5d38-36b8-4004-b32c-8f4d277c90cd name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:07:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:25.269488777Z" level=info msg="runSandbox: unmounting shmPath for sandbox 65f54023129e995ec9d54ffd01bad872d6875a260d53e2e375cfb3294bbf6927" id=87bc5d38-36b8-4004-b32c-8f4d277c90cd name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:07:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:25.269495554Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 014c08534ed0765c541339241d8dc7d21201fd3a20a40f04af10fe0cf201a2a6" id=8043d1d9-7eb8-47ee-a3e4-314b3f67b2a2 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:07:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:25.269536230Z" level=info msg="runSandbox: unmounting shmPath for sandbox 014c08534ed0765c541339241d8dc7d21201fd3a20a40f04af10fe0cf201a2a6" id=8043d1d9-7eb8-47ee-a3e4-314b3f67b2a2 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:07:25 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-65f54023129e995ec9d54ffd01bad872d6875a260d53e2e375cfb3294bbf6927-userdata-shm.mount: Deactivated successfully.
Feb 23 19:07:25 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-014c08534ed0765c541339241d8dc7d21201fd3a20a40f04af10fe0cf201a2a6-userdata-shm.mount: Deactivated successfully.
Feb 23 19:07:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:25.278298152Z" level=info msg="runSandbox: removing pod sandbox from storage: 014c08534ed0765c541339241d8dc7d21201fd3a20a40f04af10fe0cf201a2a6" id=8043d1d9-7eb8-47ee-a3e4-314b3f67b2a2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:07:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:25.278407108Z" level=info msg="runSandbox: removing pod sandbox from storage: 65f54023129e995ec9d54ffd01bad872d6875a260d53e2e375cfb3294bbf6927" id=87bc5d38-36b8-4004-b32c-8f4d277c90cd name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:07:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:25.279928693Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=8043d1d9-7eb8-47ee-a3e4-314b3f67b2a2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:07:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:25.279956014Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=8043d1d9-7eb8-47ee-a3e4-314b3f67b2a2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:07:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:07:25.280167 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(014c08534ed0765c541339241d8dc7d21201fd3a20a40f04af10fe0cf201a2a6): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:07:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:07:25.280346 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(014c08534ed0765c541339241d8dc7d21201fd3a20a40f04af10fe0cf201a2a6): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:07:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:07:25.280386 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(014c08534ed0765c541339241d8dc7d21201fd3a20a40f04af10fe0cf201a2a6): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:07:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:07:25.280464 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(014c08534ed0765c541339241d8dc7d21201fd3a20a40f04af10fe0cf201a2a6): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 19:07:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:25.281584479Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=87bc5d38-36b8-4004-b32c-8f4d277c90cd name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:07:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:25.281614843Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=87bc5d38-36b8-4004-b32c-8f4d277c90cd name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:07:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:07:25.281789 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(65f54023129e995ec9d54ffd01bad872d6875a260d53e2e375cfb3294bbf6927): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:07:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:07:25.281833 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(65f54023129e995ec9d54ffd01bad872d6875a260d53e2e375cfb3294bbf6927): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:07:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:07:25.281854 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(65f54023129e995ec9d54ffd01bad872d6875a260d53e2e375cfb3294bbf6927): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:07:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:07:25.281908 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(65f54023129e995ec9d54ffd01bad872d6875a260d53e2e375cfb3294bbf6927): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 19:07:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:07:26.291825 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:07:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:07:26.292147 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:07:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:07:26.292404 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:07:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:07:26.292430 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:07:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:27.702131163Z" level=warning msg="Failed to find container exit file for ca58b4954068c8d41522c57668c94f1c38badae07372357f30f411ebf47158b2: timed out waiting for the condition" id=040dd3d3-e53d-4321-8640-4a0014f4e809 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 19:07:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:27.715846191Z" level=info msg="Removed container ca58b4954068c8d41522c57668c94f1c38badae07372357f30f411ebf47158b2: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=040dd3d3-e53d-4321-8640-4a0014f4e809 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 19:07:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:29.244001277Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(3d695f4db28cf46d99810188441a98abbead5289e9ed4a4320f89fe5b98d5288): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=14f5fff4-7a71-4894-a45c-f18cd35bc985 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:07:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:29.244050281Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 3d695f4db28cf46d99810188441a98abbead5289e9ed4a4320f89fe5b98d5288" id=14f5fff4-7a71-4894-a45c-f18cd35bc985 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:07:29 ip-10-0-136-68 systemd[1]: run-utsns-80a579a5\x2d74e9\x2d429e\x2db9be\x2d6b1a08aa19c5.mount: Deactivated successfully. Feb 23 19:07:29 ip-10-0-136-68 systemd[1]: run-ipcns-80a579a5\x2d74e9\x2d429e\x2db9be\x2d6b1a08aa19c5.mount: Deactivated successfully. Feb 23 19:07:29 ip-10-0-136-68 systemd[1]: run-netns-80a579a5\x2d74e9\x2d429e\x2db9be\x2d6b1a08aa19c5.mount: Deactivated successfully. Feb 23 19:07:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:29.266329608Z" level=info msg="runSandbox: deleting pod ID 3d695f4db28cf46d99810188441a98abbead5289e9ed4a4320f89fe5b98d5288 from idIndex" id=14f5fff4-7a71-4894-a45c-f18cd35bc985 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:07:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:29.266362846Z" level=info msg="runSandbox: removing pod sandbox 3d695f4db28cf46d99810188441a98abbead5289e9ed4a4320f89fe5b98d5288" id=14f5fff4-7a71-4894-a45c-f18cd35bc985 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:07:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:29.266393998Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 3d695f4db28cf46d99810188441a98abbead5289e9ed4a4320f89fe5b98d5288" id=14f5fff4-7a71-4894-a45c-f18cd35bc985 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:07:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:29.266406534Z" level=info msg="runSandbox: unmounting shmPath for sandbox 3d695f4db28cf46d99810188441a98abbead5289e9ed4a4320f89fe5b98d5288" id=14f5fff4-7a71-4894-a45c-f18cd35bc985 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:07:29 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-3d695f4db28cf46d99810188441a98abbead5289e9ed4a4320f89fe5b98d5288-userdata-shm.mount: Deactivated successfully.
Feb 23 19:07:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:29.272334947Z" level=info msg="runSandbox: removing pod sandbox from storage: 3d695f4db28cf46d99810188441a98abbead5289e9ed4a4320f89fe5b98d5288" id=14f5fff4-7a71-4894-a45c-f18cd35bc985 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:07:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:29.273990781Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=14f5fff4-7a71-4894-a45c-f18cd35bc985 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:07:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:29.274023555Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=14f5fff4-7a71-4894-a45c-f18cd35bc985 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:07:29 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:07:29.274214 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(3d695f4db28cf46d99810188441a98abbead5289e9ed4a4320f89fe5b98d5288): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:07:29 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:07:29.274304 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(3d695f4db28cf46d99810188441a98abbead5289e9ed4a4320f89fe5b98d5288): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:07:29 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:07:29.274330 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(3d695f4db28cf46d99810188441a98abbead5289e9ed4a4320f89fe5b98d5288): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:07:29 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:07:29.274393 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(3d695f4db28cf46d99810188441a98abbead5289e9ed4a4320f89fe5b98d5288): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77
Feb 23 19:07:30 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:07:30.216874 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:07:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:30.217296296Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=9b090614-2e4f-4e2b-b45e-bfe7cd347322 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:07:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:30.217525596Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:07:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:30.223716489Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:4c85d6b401c6df86fd68faa1fd523e61e8b5076e05a908f91a5ad8d9e81cfb7a UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/9eecfd98-fe52-4118-a464-034c3f709a97 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:07:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:30.223742493Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:07:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:31.735026921Z" level=warning msg="Failed to find container exit file for 3fb847b5c8f698160e020deb7787a3ddaab0f6df197a1d46c78923a2acab37ab: timed out waiting for the condition" id=5865911a-1d86-4437-b58f-82e53ad281f6 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:07:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:07:34.872426 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 19:07:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:07:34.872486 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:07:40 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:07:40.217323 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:07:40 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:07:40.217439 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 19:07:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:40.217761147Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=230a4c90-20c9-4bdf-abb8-dcca959b4ea3 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:07:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:40.217825230Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:07:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:40.217765227Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=b0b1ade4-6ea2-431f-89df-162ce474ddc7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:07:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:40.217887400Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:07:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:40.225719760Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:3b733e232bde31ee346f921159e245cf78705f1b8c56cd7be5129727a1e4c027 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/cb9ad857-9d88-4c4a-a017-f8fbc7cf3bb7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:07:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:40.225751685Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:07:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:40.225724625Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:506752f9ede730393edc58b82c2af7781cc105a4577e8dca917c2e00d7c729cf UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/d4e6265f-9cb8-4749-be1c-d1914adedecc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:07:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:40.225982952Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:07:40 ip-10-0-136-68 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 23 19:07:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:07:44.217001 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:07:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:44.217676475Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=7e774b14-a731-4771-a078-0ffe43723bd4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:07:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:44.217746296Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:07:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:44.224083513Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:0d4756088a60fd457ef5d708e3a7ee5ed0e6c249d4492825d24488db0ca504d7 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/cb3a720d-5a8f-4333-93f7-fd467e4e5f47 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:07:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:07:44.224111461Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:07:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:07:44.872504 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 19:07:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:07:44.872561 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:07:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:07:54.873025 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:07:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:07:54.873084 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:07:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:07:56.292100 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:07:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:07:56.292381 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:07:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:07:56.292582 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:07:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:07:56.292622 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:08:03 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:08:03.217327 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:08:03 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:08:03.217682 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:08:03 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:08:03.217968 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:08:03 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:08:03.218010 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:08:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:08:04.881624 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:08:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:08:04.881686 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 19:08:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:08:10.246495905Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(a71afa4903e010061ab94fb9bc4a5df20369884c6a92620f593155de7fa0c2b5): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d268a9c0-5dd4-46e3-b4cf-7ab2216cd1d2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:08:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:08:10.246540155Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a71afa4903e010061ab94fb9bc4a5df20369884c6a92620f593155de7fa0c2b5" id=d268a9c0-5dd4-46e3-b4cf-7ab2216cd1d2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:08:10 ip-10-0-136-68 systemd[1]: run-utsns-15ff2b63\x2dd25b\x2d4c55\x2dbeea\x2d1a089634880b.mount: Deactivated successfully. Feb 23 19:08:10 ip-10-0-136-68 systemd[1]: run-ipcns-15ff2b63\x2dd25b\x2d4c55\x2dbeea\x2d1a089634880b.mount: Deactivated successfully. Feb 23 19:08:10 ip-10-0-136-68 systemd[1]: run-netns-15ff2b63\x2dd25b\x2d4c55\x2dbeea\x2d1a089634880b.mount: Deactivated successfully.
Feb 23 19:08:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:08:10.269339687Z" level=info msg="runSandbox: deleting pod ID a71afa4903e010061ab94fb9bc4a5df20369884c6a92620f593155de7fa0c2b5 from idIndex" id=d268a9c0-5dd4-46e3-b4cf-7ab2216cd1d2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:08:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:08:10.269384681Z" level=info msg="runSandbox: removing pod sandbox a71afa4903e010061ab94fb9bc4a5df20369884c6a92620f593155de7fa0c2b5" id=d268a9c0-5dd4-46e3-b4cf-7ab2216cd1d2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:08:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:08:10.269431924Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a71afa4903e010061ab94fb9bc4a5df20369884c6a92620f593155de7fa0c2b5" id=d268a9c0-5dd4-46e3-b4cf-7ab2216cd1d2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:08:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:08:10.269454183Z" level=info msg="runSandbox: unmounting shmPath for sandbox a71afa4903e010061ab94fb9bc4a5df20369884c6a92620f593155de7fa0c2b5" id=d268a9c0-5dd4-46e3-b4cf-7ab2216cd1d2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:08:10 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-a71afa4903e010061ab94fb9bc4a5df20369884c6a92620f593155de7fa0c2b5-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:08:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:08:10.274314475Z" level=info msg="runSandbox: removing pod sandbox from storage: a71afa4903e010061ab94fb9bc4a5df20369884c6a92620f593155de7fa0c2b5" id=d268a9c0-5dd4-46e3-b4cf-7ab2216cd1d2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:08:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:08:10.275883711Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=d268a9c0-5dd4-46e3-b4cf-7ab2216cd1d2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:08:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:08:10.275913663Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=d268a9c0-5dd4-46e3-b4cf-7ab2216cd1d2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:08:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:08:10.276117 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(a71afa4903e010061ab94fb9bc4a5df20369884c6a92620f593155de7fa0c2b5): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:08:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:08:10.276173 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(a71afa4903e010061ab94fb9bc4a5df20369884c6a92620f593155de7fa0c2b5): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:08:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:08:10.276201 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(a71afa4903e010061ab94fb9bc4a5df20369884c6a92620f593155de7fa0c2b5): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:08:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:08:10.276295 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(a71afa4903e010061ab94fb9bc4a5df20369884c6a92620f593155de7fa0c2b5): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 19:08:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:08:14.872585 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:08:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:08:14.872650 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:08:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:08:14.872675 2199 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 19:08:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:08:14.873207 2199 kuberuntime_manager.go:659] "Message for Container of pod" containerName="csi-driver" containerStatusID={Type:cri-o ID:90ab87dda457b2284c2571d0f3c7c38feccc954681d2a6889cee3bfa56705742} pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" containerMessage="Container csi-driver failed liveness probe, will be restarted" Feb 23 19:08:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:08:14.873411 2199 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" containerID="cri-o://90ab87dda457b2284c2571d0f3c7c38feccc954681d2a6889cee3bfa56705742" gracePeriod=30 Feb 23 19:08:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 
19:08:14.873638653Z" level=info msg="Stopping container: 90ab87dda457b2284c2571d0f3c7c38feccc954681d2a6889cee3bfa56705742 (timeout: 30s)" id=34d16032-81f0-464e-bbfa-652313595129 name=/runtime.v1.RuntimeService/StopContainer Feb 23 19:08:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:08:15.236120550Z" level=info msg="NetworkStart: stopping network for sandbox 4c85d6b401c6df86fd68faa1fd523e61e8b5076e05a908f91a5ad8d9e81cfb7a" id=9b090614-2e4f-4e2b-b45e-bfe7cd347322 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:08:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:08:15.236281030Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:4c85d6b401c6df86fd68faa1fd523e61e8b5076e05a908f91a5ad8d9e81cfb7a UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/9eecfd98-fe52-4118-a464-034c3f709a97 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:08:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:08:15.236316543Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:08:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:08:15.236327263Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:08:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:08:15.236336614Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:08:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:08:18.635080713Z" level=warning msg="Failed to find container exit file for 90ab87dda457b2284c2571d0f3c7c38feccc954681d2a6889cee3bfa56705742: timed out waiting for the condition" id=34d16032-81f0-464e-bbfa-652313595129 name=/runtime.v1.RuntimeService/StopContainer Feb 23 19:08:18 ip-10-0-136-68 systemd[1]: 
var-lib-containers-storage-overlay-e271fbd757df2f610d9f7e1b4516f58e2e9649b957a02c1ed3dac24677b32510-merged.mount: Deactivated successfully. Feb 23 19:08:19 ip-10-0-136-68 systemd[1]: rpm-ostreed.service: Deactivated successfully. Feb 23 19:08:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:08:21.216431 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:08:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:08:21.216862157Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=87a479f8-06cb-4773-8772-b6c20038d11b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:08:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:08:21.217109853Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:08:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:08:21.223405637Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:1eb5555423dd2df0202145e41217819dcd1b5f694f82c8644b62c7a7e0af5df5 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/78a1c064-05b1-44c8-9241-4a6d351c2fd3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:08:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:08:21.223441823Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:08:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:08:22.433972713Z" level=warning msg="Failed to find container exit file for 90ab87dda457b2284c2571d0f3c7c38feccc954681d2a6889cee3bfa56705742: timed out waiting for the condition" id=34d16032-81f0-464e-bbfa-652313595129 name=/runtime.v1.RuntimeService/StopContainer Feb 23 19:08:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:08:22.436669517Z" level=info 
msg="Stopped container 90ab87dda457b2284c2571d0f3c7c38feccc954681d2a6889cee3bfa56705742: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=34d16032-81f0-464e-bbfa-652313595129 name=/runtime.v1.RuntimeService/StopContainer Feb 23 19:08:22 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:08:22.437275 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:08:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:08:22.555235798Z" level=warning msg="Failed to find container exit file for 90ab87dda457b2284c2571d0f3c7c38feccc954681d2a6889cee3bfa56705742: timed out waiting for the condition" id=84c92d0f-a49f-4689-a166-762c1f896954 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:08:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:08:25.241989718Z" level=info msg="NetworkStart: stopping network for sandbox 3b733e232bde31ee346f921159e245cf78705f1b8c56cd7be5129727a1e4c027" id=230a4c90-20c9-4bdf-abb8-dcca959b4ea3 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:08:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:08:25.242073749Z" level=info msg="NetworkStart: stopping network for sandbox 506752f9ede730393edc58b82c2af7781cc105a4577e8dca917c2e00d7c729cf" id=b0b1ade4-6ea2-431f-89df-162ce474ddc7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:08:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:08:25.242128592Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:3b733e232bde31ee346f921159e245cf78705f1b8c56cd7be5129727a1e4c027 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/cb9ad857-9d88-4c4a-a017-f8fbc7cf3bb7 Networks:[] 
RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:08:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:08:25.242168225Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:506752f9ede730393edc58b82c2af7781cc105a4577e8dca917c2e00d7c729cf UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/d4e6265f-9cb8-4749-be1c-d1914adedecc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:08:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:08:25.242170602Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:08:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:08:25.242215553Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:08:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:08:25.242229265Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:08:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:08:25.242202555Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:08:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:08:25.242323325Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:08:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:08:25.242331758Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:08:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:08:26.292479 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open 
/proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:08:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:08:26.292755 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:08:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:08:26.292987 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:08:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:08:26.293024 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:08:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:08:26.304934472Z" level=warning msg="Failed to find container exit file for 3fb847b5c8f698160e020deb7787a3ddaab0f6df197a1d46c78923a2acab37ab: timed out 
waiting for the condition" id=6eb96218-0f64-4828-a6e1-1922f1893690 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:08:26 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:08:26.305665 2199 generic.go:332] "Generic (PLEG): container finished" podID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerID="90ab87dda457b2284c2571d0f3c7c38feccc954681d2a6889cee3bfa56705742" exitCode=-1 Feb 23 19:08:26 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:08:26.305700 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerDied Data:90ab87dda457b2284c2571d0f3c7c38feccc954681d2a6889cee3bfa56705742} Feb 23 19:08:26 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:08:26.305732 2199 scope.go:115] "RemoveContainer" containerID="3fb847b5c8f698160e020deb7787a3ddaab0f6df197a1d46c78923a2acab37ab" Feb 23 19:08:27 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:08:27.307199 2199 scope.go:115] "RemoveContainer" containerID="90ab87dda457b2284c2571d0f3c7c38feccc954681d2a6889cee3bfa56705742" Feb 23 19:08:27 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:08:27.307641 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:08:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:08:29.235706460Z" level=info msg="NetworkStart: stopping network for sandbox 0d4756088a60fd457ef5d708e3a7ee5ed0e6c249d4492825d24488db0ca504d7" id=7e774b14-a731-4771-a078-0ffe43723bd4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:08:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:08:29.235842684Z" level=info msg="Got pod network 
&{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:0d4756088a60fd457ef5d708e3a7ee5ed0e6c249d4492825d24488db0ca504d7 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/cb3a720d-5a8f-4333-93f7-fd467e4e5f47 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:08:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:08:29.235871853Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:08:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:08:29.235882066Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:08:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:08:29.235892009Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:08:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:08:30.066140176Z" level=warning msg="Failed to find container exit file for 3fb847b5c8f698160e020deb7787a3ddaab0f6df197a1d46c78923a2acab37ab: timed out waiting for the condition" id=2af62e0f-ec48-46f1-a29f-d0973dd9b055 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:08:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:08:33.816062866Z" level=warning msg="Failed to find container exit file for 3fb847b5c8f698160e020deb7787a3ddaab0f6df197a1d46c78923a2acab37ab: timed out waiting for the condition" id=d937be5a-102a-49b8-a6ac-366af4de4299 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:08:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:08:33.816641711Z" level=info msg="Removing container: 3fb847b5c8f698160e020deb7787a3ddaab0f6df197a1d46c78923a2acab37ab" id=0096cc80-33d3-4859-9365-7665b8bb544a name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 19:08:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:08:37.579182911Z" level=warning msg="Failed to find 
container exit file for 3fb847b5c8f698160e020deb7787a3ddaab0f6df197a1d46c78923a2acab37ab: timed out waiting for the condition" id=0096cc80-33d3-4859-9365-7665b8bb544a name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 19:08:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:08:37.604481891Z" level=info msg="Removed container 3fb847b5c8f698160e020deb7787a3ddaab0f6df197a1d46c78923a2acab37ab: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=0096cc80-33d3-4859-9365-7665b8bb544a name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 19:08:39 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:08:39.216707 2199 scope.go:115] "RemoveContainer" containerID="90ab87dda457b2284c2571d0f3c7c38feccc954681d2a6889cee3bfa56705742" Feb 23 19:08:39 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:08:39.217238 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:08:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:08:42.085100786Z" level=warning msg="Failed to find container exit file for 90ab87dda457b2284c2571d0f3c7c38feccc954681d2a6889cee3bfa56705742: timed out waiting for the condition" id=d2b26461-3a25-4a09-addc-925c96678c5c name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:08:50 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:08:50.217745 2199 scope.go:115] "RemoveContainer" containerID="90ab87dda457b2284c2571d0f3c7c38feccc954681d2a6889cee3bfa56705742" Feb 23 19:08:50 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:08:50.218352 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:08:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:08:56.292225 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:08:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:08:56.292525 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:08:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:08:56.292772 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:08:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:08:56.292799 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is 
not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:09:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:00.246127746Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(4c85d6b401c6df86fd68faa1fd523e61e8b5076e05a908f91a5ad8d9e81cfb7a): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=9b090614-2e4f-4e2b-b45e-bfe7cd347322 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:09:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:00.246174343Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 4c85d6b401c6df86fd68faa1fd523e61e8b5076e05a908f91a5ad8d9e81cfb7a" id=9b090614-2e4f-4e2b-b45e-bfe7cd347322 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:09:00 ip-10-0-136-68 systemd[1]: run-utsns-9eecfd98\x2dfe52\x2d4118\x2da464\x2d034c3f709a97.mount: Deactivated successfully. Feb 23 19:09:00 ip-10-0-136-68 systemd[1]: run-ipcns-9eecfd98\x2dfe52\x2d4118\x2da464\x2d034c3f709a97.mount: Deactivated successfully. Feb 23 19:09:00 ip-10-0-136-68 systemd[1]: run-netns-9eecfd98\x2dfe52\x2d4118\x2da464\x2d034c3f709a97.mount: Deactivated successfully. 
Feb 23 19:09:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:00.270335167Z" level=info msg="runSandbox: deleting pod ID 4c85d6b401c6df86fd68faa1fd523e61e8b5076e05a908f91a5ad8d9e81cfb7a from idIndex" id=9b090614-2e4f-4e2b-b45e-bfe7cd347322 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:09:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:00.270378411Z" level=info msg="runSandbox: removing pod sandbox 4c85d6b401c6df86fd68faa1fd523e61e8b5076e05a908f91a5ad8d9e81cfb7a" id=9b090614-2e4f-4e2b-b45e-bfe7cd347322 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:09:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:00.270410246Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 4c85d6b401c6df86fd68faa1fd523e61e8b5076e05a908f91a5ad8d9e81cfb7a" id=9b090614-2e4f-4e2b-b45e-bfe7cd347322 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:09:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:00.270421863Z" level=info msg="runSandbox: unmounting shmPath for sandbox 4c85d6b401c6df86fd68faa1fd523e61e8b5076e05a908f91a5ad8d9e81cfb7a" id=9b090614-2e4f-4e2b-b45e-bfe7cd347322 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:09:00 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-4c85d6b401c6df86fd68faa1fd523e61e8b5076e05a908f91a5ad8d9e81cfb7a-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:09:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:00.275319704Z" level=info msg="runSandbox: removing pod sandbox from storage: 4c85d6b401c6df86fd68faa1fd523e61e8b5076e05a908f91a5ad8d9e81cfb7a" id=9b090614-2e4f-4e2b-b45e-bfe7cd347322 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:09:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:00.276895453Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=9b090614-2e4f-4e2b-b45e-bfe7cd347322 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:09:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:00.276924863Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=9b090614-2e4f-4e2b-b45e-bfe7cd347322 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:09:00 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:09:00.277154 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(4c85d6b401c6df86fd68faa1fd523e61e8b5076e05a908f91a5ad8d9e81cfb7a): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:09:00 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:09:00.277214 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(4c85d6b401c6df86fd68faa1fd523e61e8b5076e05a908f91a5ad8d9e81cfb7a): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:09:00 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:09:00.277239 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(4c85d6b401c6df86fd68faa1fd523e61e8b5076e05a908f91a5ad8d9e81cfb7a): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:09:00 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:09:00.277337 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(4c85d6b401c6df86fd68faa1fd523e61e8b5076e05a908f91a5ad8d9e81cfb7a): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 19:09:01 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:09:01.216955 2199 scope.go:115] "RemoveContainer" containerID="90ab87dda457b2284c2571d0f3c7c38feccc954681d2a6889cee3bfa56705742" Feb 23 19:09:01 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:09:01.217385 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:09:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:06.237047961Z" level=info msg="NetworkStart: stopping network for sandbox 1eb5555423dd2df0202145e41217819dcd1b5f694f82c8644b62c7a7e0af5df5" id=87a479f8-06cb-4773-8772-b6c20038d11b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:09:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:06.237165728Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:1eb5555423dd2df0202145e41217819dcd1b5f694f82c8644b62c7a7e0af5df5 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/78a1c064-05b1-44c8-9241-4a6d351c2fd3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:09:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:06.237194148Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:09:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:06.237202092Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:09:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 
19:09:06.237209146Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:09:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:10.251814551Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(3b733e232bde31ee346f921159e245cf78705f1b8c56cd7be5129727a1e4c027): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=230a4c90-20c9-4bdf-abb8-dcca959b4ea3 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:09:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:10.251868787Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 3b733e232bde31ee346f921159e245cf78705f1b8c56cd7be5129727a1e4c027" id=230a4c90-20c9-4bdf-abb8-dcca959b4ea3 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:09:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:10.252351623Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(506752f9ede730393edc58b82c2af7781cc105a4577e8dca917c2e00d7c729cf): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b0b1ade4-6ea2-431f-89df-162ce474ddc7 
name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:09:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:10.252413411Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 506752f9ede730393edc58b82c2af7781cc105a4577e8dca917c2e00d7c729cf" id=b0b1ade4-6ea2-431f-89df-162ce474ddc7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:09:10 ip-10-0-136-68 systemd[1]: run-utsns-cb9ad857\x2d9d88\x2d4c4a\x2da017\x2df8fbc7cf3bb7.mount: Deactivated successfully. Feb 23 19:09:10 ip-10-0-136-68 systemd[1]: run-utsns-d4e6265f\x2d9cb8\x2d4749\x2dbe1c\x2dd1914adedecc.mount: Deactivated successfully. Feb 23 19:09:10 ip-10-0-136-68 systemd[1]: run-ipcns-cb9ad857\x2d9d88\x2d4c4a\x2da017\x2df8fbc7cf3bb7.mount: Deactivated successfully. Feb 23 19:09:10 ip-10-0-136-68 systemd[1]: run-ipcns-d4e6265f\x2d9cb8\x2d4749\x2dbe1c\x2dd1914adedecc.mount: Deactivated successfully. Feb 23 19:09:10 ip-10-0-136-68 systemd[1]: run-netns-cb9ad857\x2d9d88\x2d4c4a\x2da017\x2df8fbc7cf3bb7.mount: Deactivated successfully. Feb 23 19:09:10 ip-10-0-136-68 systemd[1]: run-netns-d4e6265f\x2d9cb8\x2d4749\x2dbe1c\x2dd1914adedecc.mount: Deactivated successfully. 
Feb 23 19:09:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:10.273322780Z" level=info msg="runSandbox: deleting pod ID 506752f9ede730393edc58b82c2af7781cc105a4577e8dca917c2e00d7c729cf from idIndex" id=b0b1ade4-6ea2-431f-89df-162ce474ddc7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:09:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:10.273347976Z" level=info msg="runSandbox: deleting pod ID 3b733e232bde31ee346f921159e245cf78705f1b8c56cd7be5129727a1e4c027 from idIndex" id=230a4c90-20c9-4bdf-abb8-dcca959b4ea3 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:09:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:10.273386445Z" level=info msg="runSandbox: removing pod sandbox 3b733e232bde31ee346f921159e245cf78705f1b8c56cd7be5129727a1e4c027" id=230a4c90-20c9-4bdf-abb8-dcca959b4ea3 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:09:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:10.273416287Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 3b733e232bde31ee346f921159e245cf78705f1b8c56cd7be5129727a1e4c027" id=230a4c90-20c9-4bdf-abb8-dcca959b4ea3 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:09:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:10.273430038Z" level=info msg="runSandbox: unmounting shmPath for sandbox 3b733e232bde31ee346f921159e245cf78705f1b8c56cd7be5129727a1e4c027" id=230a4c90-20c9-4bdf-abb8-dcca959b4ea3 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:09:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:10.273365973Z" level=info msg="runSandbox: removing pod sandbox 506752f9ede730393edc58b82c2af7781cc105a4577e8dca917c2e00d7c729cf" id=b0b1ade4-6ea2-431f-89df-162ce474ddc7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:09:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:10.273490376Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 506752f9ede730393edc58b82c2af7781cc105a4577e8dca917c2e00d7c729cf" 
id=b0b1ade4-6ea2-431f-89df-162ce474ddc7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:09:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:10.273510180Z" level=info msg="runSandbox: unmounting shmPath for sandbox 506752f9ede730393edc58b82c2af7781cc105a4577e8dca917c2e00d7c729cf" id=b0b1ade4-6ea2-431f-89df-162ce474ddc7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:09:10 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-3b733e232bde31ee346f921159e245cf78705f1b8c56cd7be5129727a1e4c027-userdata-shm.mount: Deactivated successfully. Feb 23 19:09:10 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-506752f9ede730393edc58b82c2af7781cc105a4577e8dca917c2e00d7c729cf-userdata-shm.mount: Deactivated successfully. Feb 23 19:09:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:10.278344465Z" level=info msg="runSandbox: removing pod sandbox from storage: 506752f9ede730393edc58b82c2af7781cc105a4577e8dca917c2e00d7c729cf" id=b0b1ade4-6ea2-431f-89df-162ce474ddc7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:09:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:10.278349910Z" level=info msg="runSandbox: removing pod sandbox from storage: 3b733e232bde31ee346f921159e245cf78705f1b8c56cd7be5129727a1e4c027" id=230a4c90-20c9-4bdf-abb8-dcca959b4ea3 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:09:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:10.279947842Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=b0b1ade4-6ea2-431f-89df-162ce474ddc7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:09:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:10.279980662Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=b0b1ade4-6ea2-431f-89df-162ce474ddc7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 
23 19:09:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:09:10.280304 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(506752f9ede730393edc58b82c2af7781cc105a4577e8dca917c2e00d7c729cf): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Feb 23 19:09:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:09:10.280378 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(506752f9ede730393edc58b82c2af7781cc105a4577e8dca917c2e00d7c729cf): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:09:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:09:10.280418 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(506752f9ede730393edc58b82c2af7781cc105a4577e8dca917c2e00d7c729cf): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:09:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:09:10.280488 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(506752f9ede730393edc58b82c2af7781cc105a4577e8dca917c2e00d7c729cf): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 19:09:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:10.281595791Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=230a4c90-20c9-4bdf-abb8-dcca959b4ea3 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:09:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:10.281624954Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=230a4c90-20c9-4bdf-abb8-dcca959b4ea3 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:09:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:09:10.281773 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(3b733e232bde31ee346f921159e245cf78705f1b8c56cd7be5129727a1e4c027): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:09:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:09:10.281815 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(3b733e232bde31ee346f921159e245cf78705f1b8c56cd7be5129727a1e4c027): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:09:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:09:10.281838 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(3b733e232bde31ee346f921159e245cf78705f1b8c56cd7be5129727a1e4c027): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:09:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:09:10.281889 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(3b733e232bde31ee346f921159e245cf78705f1b8c56cd7be5129727a1e4c027): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 19:09:13 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:09:13.216740 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:09:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:13.217139049Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=94899e22-4b21-4464-8484-cf100a8afed1 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:09:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:13.217207850Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:09:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:13.222847575Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:bff6dc01f6bf8fb790f889436f4363af4e329030cf21a1847c8c13e88839fa84 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/80ae916e-26df-4d5c-82e1-078483fdaeda Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:09:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:13.222882345Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:09:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:09:14.217635 2199 scope.go:115] "RemoveContainer" containerID="90ab87dda457b2284c2571d0f3c7c38feccc954681d2a6889cee3bfa56705742" Feb 23 19:09:14 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:09:14.218214 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:09:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:14.245707490Z" level=error msg="Error stopping network on cleanup: failed to destroy network 
for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(0d4756088a60fd457ef5d708e3a7ee5ed0e6c249d4492825d24488db0ca504d7): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=7e774b14-a731-4771-a078-0ffe43723bd4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:09:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:14.245763835Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 0d4756088a60fd457ef5d708e3a7ee5ed0e6c249d4492825d24488db0ca504d7" id=7e774b14-a731-4771-a078-0ffe43723bd4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:09:14 ip-10-0-136-68 systemd[1]: run-utsns-cb3a720d\x2d5a8f\x2d4333\x2d93f7\x2dfd467e4e5f47.mount: Deactivated successfully. Feb 23 19:09:14 ip-10-0-136-68 systemd[1]: run-ipcns-cb3a720d\x2d5a8f\x2d4333\x2d93f7\x2dfd467e4e5f47.mount: Deactivated successfully. Feb 23 19:09:14 ip-10-0-136-68 systemd[1]: run-netns-cb3a720d\x2d5a8f\x2d4333\x2d93f7\x2dfd467e4e5f47.mount: Deactivated successfully. 
Feb 23 19:09:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:14.275343185Z" level=info msg="runSandbox: deleting pod ID 0d4756088a60fd457ef5d708e3a7ee5ed0e6c249d4492825d24488db0ca504d7 from idIndex" id=7e774b14-a731-4771-a078-0ffe43723bd4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:09:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:14.275380657Z" level=info msg="runSandbox: removing pod sandbox 0d4756088a60fd457ef5d708e3a7ee5ed0e6c249d4492825d24488db0ca504d7" id=7e774b14-a731-4771-a078-0ffe43723bd4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:09:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:14.275421994Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 0d4756088a60fd457ef5d708e3a7ee5ed0e6c249d4492825d24488db0ca504d7" id=7e774b14-a731-4771-a078-0ffe43723bd4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:09:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:14.275436859Z" level=info msg="runSandbox: unmounting shmPath for sandbox 0d4756088a60fd457ef5d708e3a7ee5ed0e6c249d4492825d24488db0ca504d7" id=7e774b14-a731-4771-a078-0ffe43723bd4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:09:14 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-0d4756088a60fd457ef5d708e3a7ee5ed0e6c249d4492825d24488db0ca504d7-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:09:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:14.279312850Z" level=info msg="runSandbox: removing pod sandbox from storage: 0d4756088a60fd457ef5d708e3a7ee5ed0e6c249d4492825d24488db0ca504d7" id=7e774b14-a731-4771-a078-0ffe43723bd4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:09:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:14.280773407Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=7e774b14-a731-4771-a078-0ffe43723bd4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:09:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:14.280803730Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=7e774b14-a731-4771-a078-0ffe43723bd4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:09:14 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:09:14.281013 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(0d4756088a60fd457ef5d708e3a7ee5ed0e6c249d4492825d24488db0ca504d7): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:09:14 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:09:14.281073 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(0d4756088a60fd457ef5d708e3a7ee5ed0e6c249d4492825d24488db0ca504d7): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:09:14 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:09:14.281098 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(0d4756088a60fd457ef5d708e3a7ee5ed0e6c249d4492825d24488db0ca504d7): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:09:14 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:09:14.281161 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(0d4756088a60fd457ef5d708e3a7ee5ed0e6c249d4492825d24488db0ca504d7): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 19:09:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:09:23.216951 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:09:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:09:23.217386 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:09:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:09:23.217589 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:09:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:09:23.217636 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:09:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:09:24.217460 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:09:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:24.217850496Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=ee9c9053-44ea-4d9a-b2ca-1d653b65de3d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:09:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:24.217913822Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:09:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:24.223684035Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:ec19ec1c3eaf352151c26e9fc97f5c3a2970f30751a1da637238ec2a33c49d42 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/6d7d7694-c989-45f2-9b17-8dc030703e0b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:09:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:24.223710149Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:09:25 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:09:25.216663 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 19:09:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:25.217072394Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=5717b9c8-c105-4962-869f-9dd861677d2d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:09:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:25.217124160Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:09:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:25.223069449Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:4633fb0c499543b447aea4f41acbd563e9682caa169735086127bf8da65b3ab6 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/f8b84179-fdfd-452e-8f4f-e1227fac98e2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:09:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:25.223107267Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:09:26 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:09:26.217025 2199 scope.go:115] "RemoveContainer" containerID="90ab87dda457b2284c2571d0f3c7c38feccc954681d2a6889cee3bfa56705742" Feb 23 19:09:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:09:26.217471 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:09:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:09:26.292224 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or 
running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:09:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:09:26.292519 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:09:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:09:26.292744 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:09:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:09:26.292780 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:09:29 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:09:29.216773 2199 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:09:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:29.217099592Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=3116f58d-c37a-4273-b60c-3eb7c3a7318f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:09:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:29.217164854Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:09:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:29.223266774Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:b8bf48c19a58f86fa74e521a826c35c1678e24189f104ddaeedfe636f5e97904 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/f5246508-0805-44d6-8484-7f3a494097a4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:09:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:29.223301438Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:09:38 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:09:38.216967 2199 scope.go:115] "RemoveContainer" containerID="90ab87dda457b2284c2571d0f3c7c38feccc954681d2a6889cee3bfa56705742" Feb 23 19:09:38 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:09:38.217580 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:09:51 ip-10-0-136-68 
crio[2158]: time="2023-02-23 19:09:51.247278210Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(1eb5555423dd2df0202145e41217819dcd1b5f694f82c8644b62c7a7e0af5df5): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=87a479f8-06cb-4773-8772-b6c20038d11b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:09:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:51.247323880Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 1eb5555423dd2df0202145e41217819dcd1b5f694f82c8644b62c7a7e0af5df5" id=87a479f8-06cb-4773-8772-b6c20038d11b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:09:51 ip-10-0-136-68 systemd[1]: run-utsns-78a1c064\x2d05b1\x2d44c8\x2d9241\x2d4a6d351c2fd3.mount: Deactivated successfully. Feb 23 19:09:51 ip-10-0-136-68 systemd[1]: run-ipcns-78a1c064\x2d05b1\x2d44c8\x2d9241\x2d4a6d351c2fd3.mount: Deactivated successfully. Feb 23 19:09:51 ip-10-0-136-68 systemd[1]: run-netns-78a1c064\x2d05b1\x2d44c8\x2d9241\x2d4a6d351c2fd3.mount: Deactivated successfully. 
Feb 23 19:09:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:51.269331677Z" level=info msg="runSandbox: deleting pod ID 1eb5555423dd2df0202145e41217819dcd1b5f694f82c8644b62c7a7e0af5df5 from idIndex" id=87a479f8-06cb-4773-8772-b6c20038d11b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:09:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:51.269365601Z" level=info msg="runSandbox: removing pod sandbox 1eb5555423dd2df0202145e41217819dcd1b5f694f82c8644b62c7a7e0af5df5" id=87a479f8-06cb-4773-8772-b6c20038d11b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:09:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:51.269390987Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 1eb5555423dd2df0202145e41217819dcd1b5f694f82c8644b62c7a7e0af5df5" id=87a479f8-06cb-4773-8772-b6c20038d11b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:09:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:51.269427995Z" level=info msg="runSandbox: unmounting shmPath for sandbox 1eb5555423dd2df0202145e41217819dcd1b5f694f82c8644b62c7a7e0af5df5" id=87a479f8-06cb-4773-8772-b6c20038d11b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:09:51 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-1eb5555423dd2df0202145e41217819dcd1b5f694f82c8644b62c7a7e0af5df5-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:09:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:51.279305950Z" level=info msg="runSandbox: removing pod sandbox from storage: 1eb5555423dd2df0202145e41217819dcd1b5f694f82c8644b62c7a7e0af5df5" id=87a479f8-06cb-4773-8772-b6c20038d11b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:09:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:51.280815990Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=87a479f8-06cb-4773-8772-b6c20038d11b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:09:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:51.280846570Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=87a479f8-06cb-4773-8772-b6c20038d11b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:09:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:09:51.281019 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(1eb5555423dd2df0202145e41217819dcd1b5f694f82c8644b62c7a7e0af5df5): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:09:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:09:51.281068 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(1eb5555423dd2df0202145e41217819dcd1b5f694f82c8644b62c7a7e0af5df5): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:09:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:09:51.281095 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(1eb5555423dd2df0202145e41217819dcd1b5f694f82c8644b62c7a7e0af5df5): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:09:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:09:51.281152 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(1eb5555423dd2df0202145e41217819dcd1b5f694f82c8644b62c7a7e0af5df5): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 19:09:53 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:09:53.217009 2199 scope.go:115] "RemoveContainer" containerID="90ab87dda457b2284c2571d0f3c7c38feccc954681d2a6889cee3bfa56705742" Feb 23 19:09:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:09:53.217479 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:09:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:09:56.292335 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:09:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:09:56.292596 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:09:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:09:56.292793 2199 remote_runtime.go:479] 
"ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:09:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:09:56.292821 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:09:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:58.236671704Z" level=info msg="NetworkStart: stopping network for sandbox bff6dc01f6bf8fb790f889436f4363af4e329030cf21a1847c8c13e88839fa84" id=94899e22-4b21-4464-8484-cf100a8afed1 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:09:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:58.236780553Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:bff6dc01f6bf8fb790f889436f4363af4e329030cf21a1847c8c13e88839fa84 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/80ae916e-26df-4d5c-82e1-078483fdaeda Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:09:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:58.236808131Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:09:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:58.236815285Z" level=warning msg="falling back to 
loading from existing plugins on disk" Feb 23 19:09:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:09:58.236821926Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:10:05 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:10:05.216319 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:10:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:05.216615423Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=a4fb8d0d-f07b-4c9e-a927-1489b6e28bed name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:10:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:05.216679506Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:10:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:05.222476327Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:48204a70daec5cf63ec9966e5399b723439c9ffbf6d0b45adba3fc9b8636e93a UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/2f4921f2-d838-4d6b-92e5-1ea9b932b9f2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:10:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:05.222512722Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:10:06 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:10:06.216986 2199 scope.go:115] "RemoveContainer" containerID="90ab87dda457b2284c2571d0f3c7c38feccc954681d2a6889cee3bfa56705742" Feb 23 19:10:06 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:10:06.217540 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:10:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:09.235623807Z" level=info msg="NetworkStart: stopping network for sandbox ec19ec1c3eaf352151c26e9fc97f5c3a2970f30751a1da637238ec2a33c49d42" id=ee9c9053-44ea-4d9a-b2ca-1d653b65de3d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:10:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:09.235739397Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:ec19ec1c3eaf352151c26e9fc97f5c3a2970f30751a1da637238ec2a33c49d42 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/6d7d7694-c989-45f2-9b17-8dc030703e0b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:10:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:09.235766902Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:10:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:09.235774011Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:10:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:09.235780102Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:10:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:10.236766887Z" level=info msg="NetworkStart: stopping network for sandbox 4633fb0c499543b447aea4f41acbd563e9682caa169735086127bf8da65b3ab6" id=5717b9c8-c105-4962-869f-9dd861677d2d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:10:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:10.236909386Z" level=info msg="Got pod network 
&{Name:dns-default-657v4 Namespace:openshift-dns ID:4633fb0c499543b447aea4f41acbd563e9682caa169735086127bf8da65b3ab6 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/f8b84179-fdfd-452e-8f4f-e1227fac98e2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:10:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:10.236953140Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:10:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:10.236964398Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:10:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:10.236973786Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:10:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:14.237175051Z" level=info msg="NetworkStart: stopping network for sandbox b8bf48c19a58f86fa74e521a826c35c1678e24189f104ddaeedfe636f5e97904" id=3116f58d-c37a-4273-b60c-3eb7c3a7318f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:10:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:14.237312346Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:b8bf48c19a58f86fa74e521a826c35c1678e24189f104ddaeedfe636f5e97904 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/f5246508-0805-44d6-8484-7f3a494097a4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:10:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:14.237340867Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:10:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:14.237349649Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 
19:10:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:14.237358938Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:10:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:20.216656105Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3" id=79650f97-e32f-4091-8d8c-1f0ed65db0ad name=/runtime.v1.ImageService/ImageStatus Feb 23 19:10:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:20.216882701Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:51cb7555d0a960fc0daae74a5b5c47c19fd1ae7bd31e2616ebc88f696f511e4d,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3],Size_:351828271,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=79650f97-e32f-4091-8d8c-1f0ed65db0ad name=/runtime.v1.ImageService/ImageStatus Feb 23 19:10:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:10:21.216578 2199 scope.go:115] "RemoveContainer" containerID="90ab87dda457b2284c2571d0f3c7c38feccc954681d2a6889cee3bfa56705742" Feb 23 19:10:21 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:10:21.216987 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:10:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:10:26.292660 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound 
desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:10:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:10:26.292880 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:10:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:10:26.293098 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:10:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:10:26.293133 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:10:29 ip-10-0-136-68 kubenswrapper[2199]: E0223 
19:10:29.216662 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:10:29 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:10:29.216960 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:10:29 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:10:29.217183 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:10:29 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:10:29.217225 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" 
podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:10:33 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:10:33.217378 2199 scope.go:115] "RemoveContainer" containerID="90ab87dda457b2284c2571d0f3c7c38feccc954681d2a6889cee3bfa56705742" Feb 23 19:10:33 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:10:33.217942 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:10:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:43.246780263Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(bff6dc01f6bf8fb790f889436f4363af4e329030cf21a1847c8c13e88839fa84): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=94899e22-4b21-4464-8484-cf100a8afed1 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:10:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:43.246830977Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox bff6dc01f6bf8fb790f889436f4363af4e329030cf21a1847c8c13e88839fa84" id=94899e22-4b21-4464-8484-cf100a8afed1 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:10:43 ip-10-0-136-68 systemd[1]: run-utsns-80ae916e\x2d26df\x2d4d5c\x2d82e1\x2d078483fdaeda.mount: Deactivated successfully. 
Feb 23 19:10:43 ip-10-0-136-68 systemd[1]: run-ipcns-80ae916e\x2d26df\x2d4d5c\x2d82e1\x2d078483fdaeda.mount: Deactivated successfully. Feb 23 19:10:43 ip-10-0-136-68 systemd[1]: run-netns-80ae916e\x2d26df\x2d4d5c\x2d82e1\x2d078483fdaeda.mount: Deactivated successfully. Feb 23 19:10:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:43.265326403Z" level=info msg="runSandbox: deleting pod ID bff6dc01f6bf8fb790f889436f4363af4e329030cf21a1847c8c13e88839fa84 from idIndex" id=94899e22-4b21-4464-8484-cf100a8afed1 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:10:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:43.265360833Z" level=info msg="runSandbox: removing pod sandbox bff6dc01f6bf8fb790f889436f4363af4e329030cf21a1847c8c13e88839fa84" id=94899e22-4b21-4464-8484-cf100a8afed1 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:10:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:43.265383653Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox bff6dc01f6bf8fb790f889436f4363af4e329030cf21a1847c8c13e88839fa84" id=94899e22-4b21-4464-8484-cf100a8afed1 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:10:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:43.265397413Z" level=info msg="runSandbox: unmounting shmPath for sandbox bff6dc01f6bf8fb790f889436f4363af4e329030cf21a1847c8c13e88839fa84" id=94899e22-4b21-4464-8484-cf100a8afed1 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:10:43 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-bff6dc01f6bf8fb790f889436f4363af4e329030cf21a1847c8c13e88839fa84-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:10:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:43.271312503Z" level=info msg="runSandbox: removing pod sandbox from storage: bff6dc01f6bf8fb790f889436f4363af4e329030cf21a1847c8c13e88839fa84" id=94899e22-4b21-4464-8484-cf100a8afed1 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:10:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:43.272855000Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=94899e22-4b21-4464-8484-cf100a8afed1 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:10:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:43.272883923Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=94899e22-4b21-4464-8484-cf100a8afed1 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:10:43 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:10:43.273064 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(bff6dc01f6bf8fb790f889436f4363af4e329030cf21a1847c8c13e88839fa84): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:10:43 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:10:43.273115 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(bff6dc01f6bf8fb790f889436f4363af4e329030cf21a1847c8c13e88839fa84): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:10:43 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:10:43.273143 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(bff6dc01f6bf8fb790f889436f4363af4e329030cf21a1847c8c13e88839fa84): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:10:43 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:10:43.273199 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(bff6dc01f6bf8fb790f889436f4363af4e329030cf21a1847c8c13e88839fa84): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 19:10:45 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:10:45.216785 2199 scope.go:115] "RemoveContainer" containerID="90ab87dda457b2284c2571d0f3c7c38feccc954681d2a6889cee3bfa56705742" Feb 23 19:10:45 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:10:45.217146 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:10:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:50.235727480Z" level=info msg="NetworkStart: stopping network for sandbox 48204a70daec5cf63ec9966e5399b723439c9ffbf6d0b45adba3fc9b8636e93a" id=a4fb8d0d-f07b-4c9e-a927-1489b6e28bed name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:10:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:50.235834628Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:48204a70daec5cf63ec9966e5399b723439c9ffbf6d0b45adba3fc9b8636e93a UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/2f4921f2-d838-4d6b-92e5-1ea9b932b9f2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:10:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:50.235862616Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:10:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:50.235869396Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:10:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 
19:10:50.235876623Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:10:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:10:54.216722 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:10:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:54.217191705Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=46f5a4b2-3eb2-438e-82bf-ce8aace44b5e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:10:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:54.217299906Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:10:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:54.223918989Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:52c5b3388d3bf0804581439086bc3900a21f35737b9a8266ebbc65223ea0fce7 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/4e0530d5-a1a1-4ab2-bb06-e1c8efa38a47 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:10:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:54.223954689Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:10:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:54.245699039Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(ec19ec1c3eaf352151c26e9fc97f5c3a2970f30751a1da637238ec2a33c49d42): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: 
[openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=ee9c9053-44ea-4d9a-b2ca-1d653b65de3d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:10:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:54.245739824Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox ec19ec1c3eaf352151c26e9fc97f5c3a2970f30751a1da637238ec2a33c49d42" id=ee9c9053-44ea-4d9a-b2ca-1d653b65de3d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:10:54 ip-10-0-136-68 systemd[1]: run-utsns-6d7d7694\x2dc989\x2d45f2\x2d9b17\x2d8dc030703e0b.mount: Deactivated successfully. Feb 23 19:10:54 ip-10-0-136-68 systemd[1]: run-ipcns-6d7d7694\x2dc989\x2d45f2\x2d9b17\x2d8dc030703e0b.mount: Deactivated successfully. Feb 23 19:10:54 ip-10-0-136-68 systemd[1]: run-netns-6d7d7694\x2dc989\x2d45f2\x2d9b17\x2d8dc030703e0b.mount: Deactivated successfully. Feb 23 19:10:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:54.265335078Z" level=info msg="runSandbox: deleting pod ID ec19ec1c3eaf352151c26e9fc97f5c3a2970f30751a1da637238ec2a33c49d42 from idIndex" id=ee9c9053-44ea-4d9a-b2ca-1d653b65de3d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:10:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:54.265370394Z" level=info msg="runSandbox: removing pod sandbox ec19ec1c3eaf352151c26e9fc97f5c3a2970f30751a1da637238ec2a33c49d42" id=ee9c9053-44ea-4d9a-b2ca-1d653b65de3d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:10:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:54.265394670Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox ec19ec1c3eaf352151c26e9fc97f5c3a2970f30751a1da637238ec2a33c49d42" id=ee9c9053-44ea-4d9a-b2ca-1d653b65de3d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:10:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:54.265407220Z" level=info msg="runSandbox: unmounting shmPath 
for sandbox ec19ec1c3eaf352151c26e9fc97f5c3a2970f30751a1da637238ec2a33c49d42" id=ee9c9053-44ea-4d9a-b2ca-1d653b65de3d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:10:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:54.269301665Z" level=info msg="runSandbox: removing pod sandbox from storage: ec19ec1c3eaf352151c26e9fc97f5c3a2970f30751a1da637238ec2a33c49d42" id=ee9c9053-44ea-4d9a-b2ca-1d653b65de3d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:10:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:54.270683277Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=ee9c9053-44ea-4d9a-b2ca-1d653b65de3d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:10:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:54.270709560Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=ee9c9053-44ea-4d9a-b2ca-1d653b65de3d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:10:54 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:10:54.270904 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(ec19ec1c3eaf352151c26e9fc97f5c3a2970f30751a1da637238ec2a33c49d42): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:10:54 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:10:54.270973 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(ec19ec1c3eaf352151c26e9fc97f5c3a2970f30751a1da637238ec2a33c49d42): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:10:54 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:10:54.271010 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(ec19ec1c3eaf352151c26e9fc97f5c3a2970f30751a1da637238ec2a33c49d42): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:10:54 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:10:54.271092 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(ec19ec1c3eaf352151c26e9fc97f5c3a2970f30751a1da637238ec2a33c49d42): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 19:10:55 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-ec19ec1c3eaf352151c26e9fc97f5c3a2970f30751a1da637238ec2a33c49d42-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:10:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:55.246751090Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(4633fb0c499543b447aea4f41acbd563e9682caa169735086127bf8da65b3ab6): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=5717b9c8-c105-4962-869f-9dd861677d2d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:10:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:55.246805330Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 4633fb0c499543b447aea4f41acbd563e9682caa169735086127bf8da65b3ab6" id=5717b9c8-c105-4962-869f-9dd861677d2d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:10:55 ip-10-0-136-68 systemd[1]: run-utsns-f8b84179\x2dfdfd\x2d452e\x2d8f4f\x2de1227fac98e2.mount: Deactivated successfully. Feb 23 19:10:55 ip-10-0-136-68 systemd[1]: run-ipcns-f8b84179\x2dfdfd\x2d452e\x2d8f4f\x2de1227fac98e2.mount: Deactivated successfully. Feb 23 19:10:55 ip-10-0-136-68 systemd[1]: run-netns-f8b84179\x2dfdfd\x2d452e\x2d8f4f\x2de1227fac98e2.mount: Deactivated successfully. 
Feb 23 19:10:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:55.264346556Z" level=info msg="runSandbox: deleting pod ID 4633fb0c499543b447aea4f41acbd563e9682caa169735086127bf8da65b3ab6 from idIndex" id=5717b9c8-c105-4962-869f-9dd861677d2d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:10:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:55.264389124Z" level=info msg="runSandbox: removing pod sandbox 4633fb0c499543b447aea4f41acbd563e9682caa169735086127bf8da65b3ab6" id=5717b9c8-c105-4962-869f-9dd861677d2d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:10:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:55.264438966Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 4633fb0c499543b447aea4f41acbd563e9682caa169735086127bf8da65b3ab6" id=5717b9c8-c105-4962-869f-9dd861677d2d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:10:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:55.264453735Z" level=info msg="runSandbox: unmounting shmPath for sandbox 4633fb0c499543b447aea4f41acbd563e9682caa169735086127bf8da65b3ab6" id=5717b9c8-c105-4962-869f-9dd861677d2d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:10:55 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-4633fb0c499543b447aea4f41acbd563e9682caa169735086127bf8da65b3ab6-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:10:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:55.270321065Z" level=info msg="runSandbox: removing pod sandbox from storage: 4633fb0c499543b447aea4f41acbd563e9682caa169735086127bf8da65b3ab6" id=5717b9c8-c105-4962-869f-9dd861677d2d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:10:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:55.271927223Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=5717b9c8-c105-4962-869f-9dd861677d2d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:10:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:55.271955708Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=5717b9c8-c105-4962-869f-9dd861677d2d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:10:55 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:10:55.272193 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(4633fb0c499543b447aea4f41acbd563e9682caa169735086127bf8da65b3ab6): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:10:55 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:10:55.272293 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(4633fb0c499543b447aea4f41acbd563e9682caa169735086127bf8da65b3ab6): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:10:55 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:10:55.272330 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(4633fb0c499543b447aea4f41acbd563e9682caa169735086127bf8da65b3ab6): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:10:55 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:10:55.272406 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(4633fb0c499543b447aea4f41acbd563e9682caa169735086127bf8da65b3ab6): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 19:10:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:10:56.291872 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:10:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:10:56.292147 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:10:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:10:56.292418 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:10:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:10:56.292449 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:10:58 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:10:58.217007 2199 scope.go:115] "RemoveContainer" containerID="90ab87dda457b2284c2571d0f3c7c38feccc954681d2a6889cee3bfa56705742" Feb 23 19:10:58 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:10:58.217614 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:10:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:59.247869693Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(b8bf48c19a58f86fa74e521a826c35c1678e24189f104ddaeedfe636f5e97904): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=3116f58d-c37a-4273-b60c-3eb7c3a7318f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:10:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:59.247914717Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 
b8bf48c19a58f86fa74e521a826c35c1678e24189f104ddaeedfe636f5e97904" id=3116f58d-c37a-4273-b60c-3eb7c3a7318f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:10:59 ip-10-0-136-68 systemd[1]: run-utsns-f5246508\x2d0805\x2d44d6\x2d8484\x2d7f3a494097a4.mount: Deactivated successfully. Feb 23 19:10:59 ip-10-0-136-68 systemd[1]: run-ipcns-f5246508\x2d0805\x2d44d6\x2d8484\x2d7f3a494097a4.mount: Deactivated successfully. Feb 23 19:10:59 ip-10-0-136-68 systemd[1]: run-netns-f5246508\x2d0805\x2d44d6\x2d8484\x2d7f3a494097a4.mount: Deactivated successfully. Feb 23 19:10:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:59.277333079Z" level=info msg="runSandbox: deleting pod ID b8bf48c19a58f86fa74e521a826c35c1678e24189f104ddaeedfe636f5e97904 from idIndex" id=3116f58d-c37a-4273-b60c-3eb7c3a7318f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:10:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:59.277372821Z" level=info msg="runSandbox: removing pod sandbox b8bf48c19a58f86fa74e521a826c35c1678e24189f104ddaeedfe636f5e97904" id=3116f58d-c37a-4273-b60c-3eb7c3a7318f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:10:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:59.277416842Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox b8bf48c19a58f86fa74e521a826c35c1678e24189f104ddaeedfe636f5e97904" id=3116f58d-c37a-4273-b60c-3eb7c3a7318f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:10:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:59.277432283Z" level=info msg="runSandbox: unmounting shmPath for sandbox b8bf48c19a58f86fa74e521a826c35c1678e24189f104ddaeedfe636f5e97904" id=3116f58d-c37a-4273-b60c-3eb7c3a7318f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:10:59 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-b8bf48c19a58f86fa74e521a826c35c1678e24189f104ddaeedfe636f5e97904-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:10:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:59.282316281Z" level=info msg="runSandbox: removing pod sandbox from storage: b8bf48c19a58f86fa74e521a826c35c1678e24189f104ddaeedfe636f5e97904" id=3116f58d-c37a-4273-b60c-3eb7c3a7318f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:10:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:59.283850362Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=3116f58d-c37a-4273-b60c-3eb7c3a7318f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:10:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:10:59.283879536Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=3116f58d-c37a-4273-b60c-3eb7c3a7318f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:10:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:10:59.284095 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(b8bf48c19a58f86fa74e521a826c35c1678e24189f104ddaeedfe636f5e97904): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:10:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:10:59.284153 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(b8bf48c19a58f86fa74e521a826c35c1678e24189f104ddaeedfe636f5e97904): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:10:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:10:59.284178 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(b8bf48c19a58f86fa74e521a826c35c1678e24189f104ddaeedfe636f5e97904): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:10:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:10:59.284285 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(b8bf48c19a58f86fa74e521a826c35c1678e24189f104ddaeedfe636f5e97904): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 19:11:05 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:11:05.217367 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:11:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:11:05.217767328Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=15667066-94cb-474c-bced-026fc23e4612 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:11:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:11:05.217823827Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:11:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:11:05.223127048Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:0fd295bbdb552d424e1a03c60cf3cb726715aa17ba8106aa20469ec0a37d3b4b UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/c1c195fb-8286-4fc2-8ca3-062fd5c232a6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:11:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:11:05.223155169Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:11:10 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:11:10.217371 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:11:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:11:10.217792704Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=67abd74d-518c-412e-8a4f-2e2e6a0ef1db name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:11:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:11:10.217863708Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:11:10 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:11:10.218214 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 19:11:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:11:10.218528729Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=a0ef6373-78c5-4614-9ef2-569454965d44 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:11:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:11:10.218577650Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:11:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:11:10.225836703Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:d538b060e85a6c77907d504f8943d6f8511a7b9f846a83b681f0d50da95deb93 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/fbc853d6-9011-45d6-8041-4ee6aaaf3cb7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:11:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:11:10.225948127Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:11:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:11:10.226365659Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:5d2563f2adc120c8daf2c4bc1eb4839df33293fee13791419820d48b22452c4e UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/4a611f23-068b-4536-b53b-28bdf895550e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:11:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:11:10.226499272Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:11:12 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:11:12.216646 2199 scope.go:115] "RemoveContainer" 
containerID="90ab87dda457b2284c2571d0f3c7c38feccc954681d2a6889cee3bfa56705742" Feb 23 19:11:12 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:11:12.217060 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:11:26 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:11:26.217473 2199 scope.go:115] "RemoveContainer" containerID="90ab87dda457b2284c2571d0f3c7c38feccc954681d2a6889cee3bfa56705742" Feb 23 19:11:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:11:26.218053 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:11:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:11:26.292228 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:11:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:11:26.292481 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:11:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:11:26.292701 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:11:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:11:26.292744 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:11:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:11:26.630823 2199 kubelet_node_status.go:567] "Error updating node status, will retry" err="error getting node \"ip-10-0-136-68.us-west-2.compute.internal\": Get \"https://api-int.mnguyen-rt.devcluster.openshift.com:6443/api/v1/nodes/ip-10-0-136-68.us-west-2.compute.internal?resourceVersion=0&timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 19:11:34 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:11:34.217141 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or 
running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:11:34 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:11:34.217452 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:11:34 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:11:34.217639 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:11:34 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:11:34.217671 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:11:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:11:35.245779682Z" level=error 
msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(48204a70daec5cf63ec9966e5399b723439c9ffbf6d0b45adba3fc9b8636e93a): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a4fb8d0d-f07b-4c9e-a927-1489b6e28bed name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:11:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:11:35.245834267Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 48204a70daec5cf63ec9966e5399b723439c9ffbf6d0b45adba3fc9b8636e93a" id=a4fb8d0d-f07b-4c9e-a927-1489b6e28bed name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:11:35 ip-10-0-136-68 systemd[1]: run-utsns-2f4921f2\x2dd838\x2d4d6b\x2d92e5\x2d1ea9b932b9f2.mount: Deactivated successfully. Feb 23 19:11:35 ip-10-0-136-68 systemd[1]: run-ipcns-2f4921f2\x2dd838\x2d4d6b\x2d92e5\x2d1ea9b932b9f2.mount: Deactivated successfully. Feb 23 19:11:35 ip-10-0-136-68 systemd[1]: run-netns-2f4921f2\x2dd838\x2d4d6b\x2d92e5\x2d1ea9b932b9f2.mount: Deactivated successfully. 
Feb 23 19:11:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:11:35.265367267Z" level=info msg="runSandbox: deleting pod ID 48204a70daec5cf63ec9966e5399b723439c9ffbf6d0b45adba3fc9b8636e93a from idIndex" id=a4fb8d0d-f07b-4c9e-a927-1489b6e28bed name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:11:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:11:35.265410380Z" level=info msg="runSandbox: removing pod sandbox 48204a70daec5cf63ec9966e5399b723439c9ffbf6d0b45adba3fc9b8636e93a" id=a4fb8d0d-f07b-4c9e-a927-1489b6e28bed name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:11:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:11:35.265437433Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 48204a70daec5cf63ec9966e5399b723439c9ffbf6d0b45adba3fc9b8636e93a" id=a4fb8d0d-f07b-4c9e-a927-1489b6e28bed name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:11:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:11:35.265452275Z" level=info msg="runSandbox: unmounting shmPath for sandbox 48204a70daec5cf63ec9966e5399b723439c9ffbf6d0b45adba3fc9b8636e93a" id=a4fb8d0d-f07b-4c9e-a927-1489b6e28bed name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:11:35 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-48204a70daec5cf63ec9966e5399b723439c9ffbf6d0b45adba3fc9b8636e93a-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:11:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:11:35.270327685Z" level=info msg="runSandbox: removing pod sandbox from storage: 48204a70daec5cf63ec9966e5399b723439c9ffbf6d0b45adba3fc9b8636e93a" id=a4fb8d0d-f07b-4c9e-a927-1489b6e28bed name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:11:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:11:35.271951383Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=a4fb8d0d-f07b-4c9e-a927-1489b6e28bed name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:11:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:11:35.271988940Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=a4fb8d0d-f07b-4c9e-a927-1489b6e28bed name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:11:35 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:11:35.272237 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(48204a70daec5cf63ec9966e5399b723439c9ffbf6d0b45adba3fc9b8636e93a): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:11:35 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:11:35.272378 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(48204a70daec5cf63ec9966e5399b723439c9ffbf6d0b45adba3fc9b8636e93a): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:11:35 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:11:35.272404 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(48204a70daec5cf63ec9966e5399b723439c9ffbf6d0b45adba3fc9b8636e93a): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:11:35 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:11:35.272466 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(48204a70daec5cf63ec9966e5399b723439c9ffbf6d0b45adba3fc9b8636e93a): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 19:11:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:11:39.237238034Z" level=info msg="NetworkStart: stopping network for sandbox 52c5b3388d3bf0804581439086bc3900a21f35737b9a8266ebbc65223ea0fce7" id=46f5a4b2-3eb2-438e-82bf-ce8aace44b5e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:11:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:11:39.237380893Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:52c5b3388d3bf0804581439086bc3900a21f35737b9a8266ebbc65223ea0fce7 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/4e0530d5-a1a1-4ab2-bb06-e1c8efa38a47 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:11:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:11:39.237414350Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:11:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:11:39.237423274Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:11:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:11:39.237430342Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:11:41 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:11:41.216954 2199 scope.go:115] "RemoveContainer" containerID="90ab87dda457b2284c2571d0f3c7c38feccc954681d2a6889cee3bfa56705742" Feb 23 19:11:41 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:11:41.217533 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver 
pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:11:49 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:11:49.216696 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:11:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:11:49.217143556Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=ba8642a2-e22a-49d0-adc9-780231eebb33 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:11:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:11:49.217416298Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:11:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:11:49.223580209Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:0a1aee95ad88859b7ec79432fc64d2a07564b9a2ec95ac82cdd71273d5c74247 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/cbdf9604-b01c-40a8-b781-eafdcd9269bb Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:11:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:11:49.223620096Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:11:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:11:50.236205475Z" level=info msg="NetworkStart: stopping network for sandbox 0fd295bbdb552d424e1a03c60cf3cb726715aa17ba8106aa20469ec0a37d3b4b" id=15667066-94cb-474c-bced-026fc23e4612 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:11:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:11:50.236347928Z" level=info msg="Got pod network 
&{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:0fd295bbdb552d424e1a03c60cf3cb726715aa17ba8106aa20469ec0a37d3b4b UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/c1c195fb-8286-4fc2-8ca3-062fd5c232a6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:11:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:11:50.236377493Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:11:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:11:50.236387810Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:11:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:11:50.236397038Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:11:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:11:54.217461 2199 scope.go:115] "RemoveContainer" containerID="90ab87dda457b2284c2571d0f3c7c38feccc954681d2a6889cee3bfa56705742" Feb 23 19:11:54 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:11:54.218065 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:11:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:11:55.242265220Z" level=info msg="NetworkStart: stopping network for sandbox 5d2563f2adc120c8daf2c4bc1eb4839df33293fee13791419820d48b22452c4e" id=a0ef6373-78c5-4614-9ef2-569454965d44 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:11:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:11:55.242269731Z" level=info msg="NetworkStart: stopping network for sandbox 
d538b060e85a6c77907d504f8943d6f8511a7b9f846a83b681f0d50da95deb93" id=67abd74d-518c-412e-8a4f-2e2e6a0ef1db name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:11:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:11:55.242409589Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:5d2563f2adc120c8daf2c4bc1eb4839df33293fee13791419820d48b22452c4e UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/4a611f23-068b-4536-b53b-28bdf895550e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:11:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:11:55.242424958Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:d538b060e85a6c77907d504f8943d6f8511a7b9f846a83b681f0d50da95deb93 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/fbc853d6-9011-45d6-8041-4ee6aaaf3cb7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:11:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:11:55.242450533Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:11:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:11:55.242463372Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:11:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:11:55.242473371Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:11:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:11:55.242476023Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:11:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:11:55.242576040Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:11:55 ip-10-0-136-68 
crio[2158]: time="2023-02-23 19:11:55.242584315Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:11:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:11:56.292404 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:11:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:11:56.292745 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:11:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:11:56.292963 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:11:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:11:56.293007 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if 
PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:12:06 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:12:06.216579 2199 scope.go:115] "RemoveContainer" containerID="90ab87dda457b2284c2571d0f3c7c38feccc954681d2a6889cee3bfa56705742" Feb 23 19:12:06 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:12:06.217164 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:12:17 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:12:17.217385 2199 scope.go:115] "RemoveContainer" containerID="90ab87dda457b2284c2571d0f3c7c38feccc954681d2a6889cee3bfa56705742" Feb 23 19:12:17 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:12:17.217847 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:12:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:12:24.246600063Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(52c5b3388d3bf0804581439086bc3900a21f35737b9a8266ebbc65223ea0fce7): error removing pod 
openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=46f5a4b2-3eb2-438e-82bf-ce8aace44b5e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:12:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:12:24.246643512Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 52c5b3388d3bf0804581439086bc3900a21f35737b9a8266ebbc65223ea0fce7" id=46f5a4b2-3eb2-438e-82bf-ce8aace44b5e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:12:24 ip-10-0-136-68 systemd[1]: run-utsns-4e0530d5\x2da1a1\x2d4ab2\x2dbb06\x2de1c8efa38a47.mount: Deactivated successfully. Feb 23 19:12:24 ip-10-0-136-68 systemd[13603]: Created slice User Background Tasks Slice. Feb 23 19:12:24 ip-10-0-136-68 systemd[13603]: Starting Cleanup of User's Temporary Files and Directories... Feb 23 19:12:24 ip-10-0-136-68 systemd[1]: run-ipcns-4e0530d5\x2da1a1\x2d4ab2\x2dbb06\x2de1c8efa38a47.mount: Deactivated successfully. Feb 23 19:12:24 ip-10-0-136-68 systemd[13603]: Finished Cleanup of User's Temporary Files and Directories. Feb 23 19:12:24 ip-10-0-136-68 systemd[1]: run-netns-4e0530d5\x2da1a1\x2d4ab2\x2dbb06\x2de1c8efa38a47.mount: Deactivated successfully. 
Feb 23 19:12:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:12:24.265394802Z" level=info msg="runSandbox: deleting pod ID 52c5b3388d3bf0804581439086bc3900a21f35737b9a8266ebbc65223ea0fce7 from idIndex" id=46f5a4b2-3eb2-438e-82bf-ce8aace44b5e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:12:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:12:24.265437191Z" level=info msg="runSandbox: removing pod sandbox 52c5b3388d3bf0804581439086bc3900a21f35737b9a8266ebbc65223ea0fce7" id=46f5a4b2-3eb2-438e-82bf-ce8aace44b5e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:12:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:12:24.265474474Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 52c5b3388d3bf0804581439086bc3900a21f35737b9a8266ebbc65223ea0fce7" id=46f5a4b2-3eb2-438e-82bf-ce8aace44b5e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:12:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:12:24.265497857Z" level=info msg="runSandbox: unmounting shmPath for sandbox 52c5b3388d3bf0804581439086bc3900a21f35737b9a8266ebbc65223ea0fce7" id=46f5a4b2-3eb2-438e-82bf-ce8aace44b5e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:12:24 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-52c5b3388d3bf0804581439086bc3900a21f35737b9a8266ebbc65223ea0fce7-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:12:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:12:24.272307621Z" level=info msg="runSandbox: removing pod sandbox from storage: 52c5b3388d3bf0804581439086bc3900a21f35737b9a8266ebbc65223ea0fce7" id=46f5a4b2-3eb2-438e-82bf-ce8aace44b5e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:12:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:12:24.273877916Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=46f5a4b2-3eb2-438e-82bf-ce8aace44b5e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:12:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:12:24.273906439Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=46f5a4b2-3eb2-438e-82bf-ce8aace44b5e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:12:24 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:12:24.274128 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(52c5b3388d3bf0804581439086bc3900a21f35737b9a8266ebbc65223ea0fce7): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:12:24 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:12:24.274183 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(52c5b3388d3bf0804581439086bc3900a21f35737b9a8266ebbc65223ea0fce7): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:12:24 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:12:24.274210 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(52c5b3388d3bf0804581439086bc3900a21f35737b9a8266ebbc65223ea0fce7): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:12:24 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:12:24.274351 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(52c5b3388d3bf0804581439086bc3900a21f35737b9a8266ebbc65223ea0fce7): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 19:12:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:12:26.292289 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:12:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:12:26.292582 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:12:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:12:26.292785 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:12:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:12:26.292810 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:12:31 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:12:31.217207 2199 scope.go:115] "RemoveContainer" containerID="90ab87dda457b2284c2571d0f3c7c38feccc954681d2a6889cee3bfa56705742" Feb 23 19:12:31 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:12:31.217698 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:12:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:12:34.236974918Z" level=info msg="NetworkStart: stopping network for sandbox 0a1aee95ad88859b7ec79432fc64d2a07564b9a2ec95ac82cdd71273d5c74247" id=ba8642a2-e22a-49d0-adc9-780231eebb33 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:12:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:12:34.237093578Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:0a1aee95ad88859b7ec79432fc64d2a07564b9a2ec95ac82cdd71273d5c74247 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/cbdf9604-b01c-40a8-b781-eafdcd9269bb Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:12:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:12:34.237124088Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:12:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 
19:12:34.237132189Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:12:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:12:34.237138678Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:12:35 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:12:35.216923 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:12:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:12:35.217359840Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=25276540-72ec-4072-a5a6-3698748e6f29 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:12:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:12:35.217428071Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:12:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:12:35.223315572Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:812389d0a4ea892037721905d05279ae8d47b713307510206e1f559fbd3280fb UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/9a6a6ad9-dffc-4548-bb5c-ba27293d8fe7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:12:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:12:35.223341786Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:12:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:12:35.246034992Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(0fd295bbdb552d424e1a03c60cf3cb726715aa17ba8106aa20469ec0a37d3b4b): error removing pod 
openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=15667066-94cb-474c-bced-026fc23e4612 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:12:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:12:35.246075817Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 0fd295bbdb552d424e1a03c60cf3cb726715aa17ba8106aa20469ec0a37d3b4b" id=15667066-94cb-474c-bced-026fc23e4612 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:12:35 ip-10-0-136-68 systemd[1]: run-utsns-c1c195fb\x2d8286\x2d4fc2\x2d8ca3\x2d062fd5c232a6.mount: Deactivated successfully. Feb 23 19:12:35 ip-10-0-136-68 systemd[1]: run-ipcns-c1c195fb\x2d8286\x2d4fc2\x2d8ca3\x2d062fd5c232a6.mount: Deactivated successfully. Feb 23 19:12:35 ip-10-0-136-68 systemd[1]: run-netns-c1c195fb\x2d8286\x2d4fc2\x2d8ca3\x2d062fd5c232a6.mount: Deactivated successfully. 
Feb 23 19:12:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:12:35.261325499Z" level=info msg="runSandbox: deleting pod ID 0fd295bbdb552d424e1a03c60cf3cb726715aa17ba8106aa20469ec0a37d3b4b from idIndex" id=15667066-94cb-474c-bced-026fc23e4612 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:12:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:12:35.261353445Z" level=info msg="runSandbox: removing pod sandbox 0fd295bbdb552d424e1a03c60cf3cb726715aa17ba8106aa20469ec0a37d3b4b" id=15667066-94cb-474c-bced-026fc23e4612 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:12:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:12:35.261381052Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 0fd295bbdb552d424e1a03c60cf3cb726715aa17ba8106aa20469ec0a37d3b4b" id=15667066-94cb-474c-bced-026fc23e4612 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:12:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:12:35.261394237Z" level=info msg="runSandbox: unmounting shmPath for sandbox 0fd295bbdb552d424e1a03c60cf3cb726715aa17ba8106aa20469ec0a37d3b4b" id=15667066-94cb-474c-bced-026fc23e4612 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:12:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:12:35.266321684Z" level=info msg="runSandbox: removing pod sandbox from storage: 0fd295bbdb552d424e1a03c60cf3cb726715aa17ba8106aa20469ec0a37d3b4b" id=15667066-94cb-474c-bced-026fc23e4612 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:12:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:12:35.267701240Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=15667066-94cb-474c-bced-026fc23e4612 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:12:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:12:35.267731916Z" level=info msg="runSandbox: releasing pod sandbox name: 
k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=15667066-94cb-474c-bced-026fc23e4612 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:12:35 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:12:35.267933 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(0fd295bbdb552d424e1a03c60cf3cb726715aa17ba8106aa20469ec0a37d3b4b): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Feb 23 19:12:35 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:12:35.267997 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(0fd295bbdb552d424e1a03c60cf3cb726715aa17ba8106aa20469ec0a37d3b4b): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:12:35 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:12:35.268034 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(0fd295bbdb552d424e1a03c60cf3cb726715aa17ba8106aa20469ec0a37d3b4b): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:12:35 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:12:35.268110 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(0fd295bbdb552d424e1a03c60cf3cb726715aa17ba8106aa20469ec0a37d3b4b): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 19:12:36 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-0fd295bbdb552d424e1a03c60cf3cb726715aa17ba8106aa20469ec0a37d3b4b-userdata-shm.mount: Deactivated successfully. Feb 23 19:12:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:12:40.255400639Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(5d2563f2adc120c8daf2c4bc1eb4839df33293fee13791419820d48b22452c4e): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a0ef6373-78c5-4614-9ef2-569454965d44 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:12:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:12:40.255455175Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 5d2563f2adc120c8daf2c4bc1eb4839df33293fee13791419820d48b22452c4e" id=a0ef6373-78c5-4614-9ef2-569454965d44 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:12:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:12:40.255405333Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(d538b060e85a6c77907d504f8943d6f8511a7b9f846a83b681f0d50da95deb93): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin 
type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=67abd74d-518c-412e-8a4f-2e2e6a0ef1db name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:12:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:12:40.255572093Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox d538b060e85a6c77907d504f8943d6f8511a7b9f846a83b681f0d50da95deb93" id=67abd74d-518c-412e-8a4f-2e2e6a0ef1db name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:12:40 ip-10-0-136-68 systemd[1]: run-utsns-4a611f23\x2d068b\x2d4536\x2db53b\x2d28bdf895550e.mount: Deactivated successfully. Feb 23 19:12:40 ip-10-0-136-68 systemd[1]: run-utsns-fbc853d6\x2d9011\x2d45d6\x2d8041\x2d4ee6aaaf3cb7.mount: Deactivated successfully. Feb 23 19:12:40 ip-10-0-136-68 systemd[1]: run-ipcns-4a611f23\x2d068b\x2d4536\x2db53b\x2d28bdf895550e.mount: Deactivated successfully. Feb 23 19:12:40 ip-10-0-136-68 systemd[1]: run-ipcns-fbc853d6\x2d9011\x2d45d6\x2d8041\x2d4ee6aaaf3cb7.mount: Deactivated successfully. Feb 23 19:12:40 ip-10-0-136-68 systemd[1]: run-netns-4a611f23\x2d068b\x2d4536\x2db53b\x2d28bdf895550e.mount: Deactivated successfully. 
Feb 23 19:12:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:12:40.273339552Z" level=info msg="runSandbox: deleting pod ID 5d2563f2adc120c8daf2c4bc1eb4839df33293fee13791419820d48b22452c4e from idIndex" id=a0ef6373-78c5-4614-9ef2-569454965d44 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:12:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:12:40.273385164Z" level=info msg="runSandbox: removing pod sandbox 5d2563f2adc120c8daf2c4bc1eb4839df33293fee13791419820d48b22452c4e" id=a0ef6373-78c5-4614-9ef2-569454965d44 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:12:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:12:40.273424635Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 5d2563f2adc120c8daf2c4bc1eb4839df33293fee13791419820d48b22452c4e" id=a0ef6373-78c5-4614-9ef2-569454965d44 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:12:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:12:40.273446684Z" level=info msg="runSandbox: unmounting shmPath for sandbox 5d2563f2adc120c8daf2c4bc1eb4839df33293fee13791419820d48b22452c4e" id=a0ef6373-78c5-4614-9ef2-569454965d44 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:12:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:12:40.276307906Z" level=info msg="runSandbox: deleting pod ID d538b060e85a6c77907d504f8943d6f8511a7b9f846a83b681f0d50da95deb93 from idIndex" id=67abd74d-518c-412e-8a4f-2e2e6a0ef1db name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:12:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:12:40.276342855Z" level=info msg="runSandbox: removing pod sandbox d538b060e85a6c77907d504f8943d6f8511a7b9f846a83b681f0d50da95deb93" id=67abd74d-518c-412e-8a4f-2e2e6a0ef1db name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:12:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:12:40.276369444Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox d538b060e85a6c77907d504f8943d6f8511a7b9f846a83b681f0d50da95deb93" 
id=67abd74d-518c-412e-8a4f-2e2e6a0ef1db name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:12:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:12:40.276384712Z" level=info msg="runSandbox: unmounting shmPath for sandbox d538b060e85a6c77907d504f8943d6f8511a7b9f846a83b681f0d50da95deb93" id=67abd74d-518c-412e-8a4f-2e2e6a0ef1db name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:12:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:12:40.280297392Z" level=info msg="runSandbox: removing pod sandbox from storage: 5d2563f2adc120c8daf2c4bc1eb4839df33293fee13791419820d48b22452c4e" id=a0ef6373-78c5-4614-9ef2-569454965d44 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:12:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:12:40.281939737Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=a0ef6373-78c5-4614-9ef2-569454965d44 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:12:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:12:40.281971071Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=a0ef6373-78c5-4614-9ef2-569454965d44 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:12:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:12:40.282177 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(5d2563f2adc120c8daf2c4bc1eb4839df33293fee13791419820d48b22452c4e): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Feb 23 19:12:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:12:40.282294 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(5d2563f2adc120c8daf2c4bc1eb4839df33293fee13791419820d48b22452c4e): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:12:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:12:40.282330 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(5d2563f2adc120c8daf2c4bc1eb4839df33293fee13791419820d48b22452c4e): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:12:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:12:40.282407 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(5d2563f2adc120c8daf2c4bc1eb4839df33293fee13791419820d48b22452c4e): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 19:12:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:12:40.283321215Z" level=info msg="runSandbox: removing pod sandbox from storage: d538b060e85a6c77907d504f8943d6f8511a7b9f846a83b681f0d50da95deb93" id=67abd74d-518c-412e-8a4f-2e2e6a0ef1db name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:12:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:12:40.284814585Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=67abd74d-518c-412e-8a4f-2e2e6a0ef1db name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:12:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:12:40.284872682Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=67abd74d-518c-412e-8a4f-2e2e6a0ef1db name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:12:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:12:40.285035 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(d538b060e85a6c77907d504f8943d6f8511a7b9f846a83b681f0d50da95deb93): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:12:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:12:40.285092 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(d538b060e85a6c77907d504f8943d6f8511a7b9f846a83b681f0d50da95deb93): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:12:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:12:40.285128 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(d538b060e85a6c77907d504f8943d6f8511a7b9f846a83b681f0d50da95deb93): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:12:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:12:40.285212 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(d538b060e85a6c77907d504f8943d6f8511a7b9f846a83b681f0d50da95deb93): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 19:12:41 ip-10-0-136-68 systemd[1]: run-netns-fbc853d6\x2d9011\x2d45d6\x2d8041\x2d4ee6aaaf3cb7.mount: Deactivated successfully. Feb 23 19:12:41 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-5d2563f2adc120c8daf2c4bc1eb4839df33293fee13791419820d48b22452c4e-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:12:41 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-d538b060e85a6c77907d504f8943d6f8511a7b9f846a83b681f0d50da95deb93-userdata-shm.mount: Deactivated successfully. Feb 23 19:12:45 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:12:45.217272 2199 scope.go:115] "RemoveContainer" containerID="90ab87dda457b2284c2571d0f3c7c38feccc954681d2a6889cee3bfa56705742" Feb 23 19:12:45 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:12:45.217681 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:12:46 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:12:46.217070 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:12:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:12:46.217437766Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=0a276619-b4c6-4ada-abe1-81f3daa8c914 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:12:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:12:46.217505230Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:12:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:12:46.223803159Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:4bc7e6ddba0ab7c02027cef783b0f4f5c1bdd819c6e6857b4e049e8f5520d724 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/47d0a03e-7bcf-420e-949d-ad348d271311 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:12:46 ip-10-0-136-68 
crio[2158]: time="2023-02-23 19:12:46.223836931Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:12:53 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:12:53.216752 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 19:12:53 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:12:53.216752 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:12:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:12:53.217195703Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=c0e9fc43-e654-42ba-a0bc-3e03975a4b9d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:12:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:12:53.217203855Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=59231cac-95d0-467e-8c16-d92b9720db91 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:12:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:12:53.217291422Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:12:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:12:53.217307671Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:12:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:12:53.225611417Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:cc2fd5c22aafc64738c5f22e06ae14772503ed67902307ddaba228fd6c8eee07 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/f70bb75e-feae-4afa-8c43-71bb7a9c74de Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:12:53 ip-10-0-136-68 crio[2158]: 
time="2023-02-23 19:12:53.225750397Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:12:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:12:53.226062915Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:1a05cb14f8d54b305cd12f88dc806a9f9c9fa613197597c3b04f7be519c0a373 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/58220164-0d83-424b-9eef-701ae0b251c7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:12:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:12:53.226097366Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:12:56 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:12:56.217135 2199 scope.go:115] "RemoveContainer" containerID="90ab87dda457b2284c2571d0f3c7c38feccc954681d2a6889cee3bfa56705742" Feb 23 19:12:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:12:56.217595 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:12:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:12:56.292111 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f 
/etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:12:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:12:56.292440 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:12:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:12:56.292639 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:12:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:12:56.292673 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:12:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:12:59.217405 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container 
process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:12:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:12:59.217709 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:12:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:12:59.217983 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:12:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:12:59.218018 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:13:11 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:13:11.217331 2199 scope.go:115] "RemoveContainer" containerID="90ab87dda457b2284c2571d0f3c7c38feccc954681d2a6889cee3bfa56705742" Feb 23 19:13:11 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:13:11.217744 2199 
pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:13:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:13:19.248007649Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(0a1aee95ad88859b7ec79432fc64d2a07564b9a2ec95ac82cdd71273d5c74247): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=ba8642a2-e22a-49d0-adc9-780231eebb33 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:13:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:13:19.248059952Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 0a1aee95ad88859b7ec79432fc64d2a07564b9a2ec95ac82cdd71273d5c74247" id=ba8642a2-e22a-49d0-adc9-780231eebb33 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:13:19 ip-10-0-136-68 systemd[1]: run-utsns-cbdf9604\x2db01c\x2d40a8\x2db781\x2deafdcd9269bb.mount: Deactivated successfully. Feb 23 19:13:19 ip-10-0-136-68 systemd[1]: run-ipcns-cbdf9604\x2db01c\x2d40a8\x2db781\x2deafdcd9269bb.mount: Deactivated successfully. Feb 23 19:13:19 ip-10-0-136-68 systemd[1]: run-netns-cbdf9604\x2db01c\x2d40a8\x2db781\x2deafdcd9269bb.mount: Deactivated successfully. 
Feb 23 19:13:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:13:19.277337585Z" level=info msg="runSandbox: deleting pod ID 0a1aee95ad88859b7ec79432fc64d2a07564b9a2ec95ac82cdd71273d5c74247 from idIndex" id=ba8642a2-e22a-49d0-adc9-780231eebb33 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:13:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:13:19.277373417Z" level=info msg="runSandbox: removing pod sandbox 0a1aee95ad88859b7ec79432fc64d2a07564b9a2ec95ac82cdd71273d5c74247" id=ba8642a2-e22a-49d0-adc9-780231eebb33 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:13:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:13:19.277417417Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 0a1aee95ad88859b7ec79432fc64d2a07564b9a2ec95ac82cdd71273d5c74247" id=ba8642a2-e22a-49d0-adc9-780231eebb33 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:13:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:13:19.277433128Z" level=info msg="runSandbox: unmounting shmPath for sandbox 0a1aee95ad88859b7ec79432fc64d2a07564b9a2ec95ac82cdd71273d5c74247" id=ba8642a2-e22a-49d0-adc9-780231eebb33 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:13:19 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-0a1aee95ad88859b7ec79432fc64d2a07564b9a2ec95ac82cdd71273d5c74247-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:13:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:13:19.285306760Z" level=info msg="runSandbox: removing pod sandbox from storage: 0a1aee95ad88859b7ec79432fc64d2a07564b9a2ec95ac82cdd71273d5c74247" id=ba8642a2-e22a-49d0-adc9-780231eebb33 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:13:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:13:19.286812172Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=ba8642a2-e22a-49d0-adc9-780231eebb33 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:13:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:13:19.286843791Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=ba8642a2-e22a-49d0-adc9-780231eebb33 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:13:19 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:13:19.287045 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(0a1aee95ad88859b7ec79432fc64d2a07564b9a2ec95ac82cdd71273d5c74247): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:13:19 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:13:19.287106 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(0a1aee95ad88859b7ec79432fc64d2a07564b9a2ec95ac82cdd71273d5c74247): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:13:19 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:13:19.287132 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(0a1aee95ad88859b7ec79432fc64d2a07564b9a2ec95ac82cdd71273d5c74247): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:13:19 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:13:19.287190 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(0a1aee95ad88859b7ec79432fc64d2a07564b9a2ec95ac82cdd71273d5c74247): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 19:13:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:13:20.235954873Z" level=info msg="NetworkStart: stopping network for sandbox 812389d0a4ea892037721905d05279ae8d47b713307510206e1f559fbd3280fb" id=25276540-72ec-4072-a5a6-3698748e6f29 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:13:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:13:20.236070364Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:812389d0a4ea892037721905d05279ae8d47b713307510206e1f559fbd3280fb UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/9a6a6ad9-dffc-4548-bb5c-ba27293d8fe7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:13:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:13:20.236095982Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:13:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:13:20.236104132Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:13:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:13:20.236113317Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:13:23 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:13:23.217101 2199 scope.go:115] "RemoveContainer" containerID="90ab87dda457b2284c2571d0f3c7c38feccc954681d2a6889cee3bfa56705742" Feb 23 19:13:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:13:23.217924990Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=a8f092c9-dcdd-4913-99bd-534094612d57 name=/runtime.v1.ImageService/ImageStatus Feb 
23 19:13:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:13:23.218119449Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=a8f092c9-dcdd-4913-99bd-534094612d57 name=/runtime.v1.ImageService/ImageStatus Feb 23 19:13:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:13:23.218691540Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=43c1b24c-3ea5-4f28-ad97-93599034ec91 name=/runtime.v1.ImageService/ImageStatus Feb 23 19:13:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:13:23.218829604Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=43c1b24c-3ea5-4f28-ad97-93599034ec91 name=/runtime.v1.ImageService/ImageStatus Feb 23 19:13:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:13:23.219473201Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=7cba1c52-955d-490b-a95a-c6bd513fd996 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 19:13:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:13:23.219569123Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:13:23 
ip-10-0-136-68 systemd[1]: Started crio-conmon-c9453d4a4d8b55e1e5cfa39905e39e761825bfcf0ee3fd4c954b21cbc1da3155.scope. Feb 23 19:13:23 ip-10-0-136-68 systemd[1]: Started libcontainer container c9453d4a4d8b55e1e5cfa39905e39e761825bfcf0ee3fd4c954b21cbc1da3155. Feb 23 19:13:23 ip-10-0-136-68 conmon[14404]: conmon c9453d4a4d8b55e1e5cf : Failed to write to cgroup.event_control Operation not supported Feb 23 19:13:23 ip-10-0-136-68 systemd[1]: crio-conmon-c9453d4a4d8b55e1e5cfa39905e39e761825bfcf0ee3fd4c954b21cbc1da3155.scope: Deactivated successfully. Feb 23 19:13:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:13:23.368905628Z" level=info msg="Created container c9453d4a4d8b55e1e5cfa39905e39e761825bfcf0ee3fd4c954b21cbc1da3155: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=7cba1c52-955d-490b-a95a-c6bd513fd996 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 19:13:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:13:23.369407219Z" level=info msg="Starting container: c9453d4a4d8b55e1e5cfa39905e39e761825bfcf0ee3fd4c954b21cbc1da3155" id=49a9c4b1-5837-402a-8fa6-6b09c96bebbf name=/runtime.v1.RuntimeService/StartContainer Feb 23 19:13:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:13:23.376499103Z" level=info msg="Started container" PID=14416 containerID=c9453d4a4d8b55e1e5cfa39905e39e761825bfcf0ee3fd4c954b21cbc1da3155 description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver id=49a9c4b1-5837-402a-8fa6-6b09c96bebbf name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4ac357af36e7dd4792f93249425e93fd84aed45840e9fbb3d0ae6c0af964798 Feb 23 19:13:23 ip-10-0-136-68 systemd[1]: crio-c9453d4a4d8b55e1e5cfa39905e39e761825bfcf0ee3fd4c954b21cbc1da3155.scope: Deactivated successfully. 
Feb 23 19:13:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:13:26.292581 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:13:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:13:26.292950 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:13:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:13:26.293209 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:13:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:13:26.293262 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" 
pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:13:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:13:27.260927242Z" level=warning msg="Failed to find container exit file for 90ab87dda457b2284c2571d0f3c7c38feccc954681d2a6889cee3bfa56705742: timed out waiting for the condition" id=324f8cf9-0890-47e0-a6ca-db5f886f91eb name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:13:27 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:13:27.261916 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerStarted Data:c9453d4a4d8b55e1e5cfa39905e39e761825bfcf0ee3fd4c954b21cbc1da3155} Feb 23 19:13:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:13:31.237413683Z" level=info msg="NetworkStart: stopping network for sandbox 4bc7e6ddba0ab7c02027cef783b0f4f5c1bdd819c6e6857b4e049e8f5520d724" id=0a276619-b4c6-4ada-abe1-81f3daa8c914 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:13:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:13:31.237523275Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:4bc7e6ddba0ab7c02027cef783b0f4f5c1bdd819c6e6857b4e049e8f5520d724 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/47d0a03e-7bcf-420e-949d-ad348d271311 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:13:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:13:31.237553993Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:13:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:13:31.237561947Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:13:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:13:31.237568093Z" level=info msg="Deleting pod 
openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:13:33 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:13:33.217170 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:13:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:13:33.217585125Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=f6ce3bb5-48f7-4cbf-9a55-49e46669203d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:13:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:13:33.217640814Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:13:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:13:33.223011054Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:838d3e3dc0703af167f0bee2e0f963a9bb4ff1850e5c7eaee7b243c01c7d13fc UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/9381dae8-d481-4b8d-b7ed-cd89f4607bdf Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:13:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:13:33.223038441Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:13:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:13:34.872551 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:13:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:13:34.872613 2199 prober.go:109] "Probe failed" probeType="Liveness" 
pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:13:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:13:38.241086430Z" level=info msg="NetworkStart: stopping network for sandbox 1a05cb14f8d54b305cd12f88dc806a9f9c9fa613197597c3b04f7be519c0a373" id=59231cac-95d0-467e-8c16-d92b9720db91 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:13:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:13:38.241215668Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:1a05cb14f8d54b305cd12f88dc806a9f9c9fa613197597c3b04f7be519c0a373 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/58220164-0d83-424b-9eef-701ae0b251c7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:13:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:13:38.241273175Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:13:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:13:38.241283883Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:13:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:13:38.241293747Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:13:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:13:38.241892809Z" level=info msg="NetworkStart: stopping network for sandbox cc2fd5c22aafc64738c5f22e06ae14772503ed67902307ddaba228fd6c8eee07" id=c0e9fc43-e654-42ba-a0bc-3e03975a4b9d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:13:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:13:38.241967081Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j 
Namespace:openshift-cluster-csi-drivers ID:cc2fd5c22aafc64738c5f22e06ae14772503ed67902307ddaba228fd6c8eee07 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/f70bb75e-feae-4afa-8c43-71bb7a9c74de Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:13:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:13:38.241993543Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:13:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:13:38.242001075Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:13:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:13:38.242007067Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:13:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:13:44.872805 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:13:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:13:44.872876 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:13:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:13:54.872734 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" 
start-of-body= Feb 23 19:13:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:13:54.872801 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:13:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:13:56.292591 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:13:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:13:56.292897 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:13:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:13:56.293117 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f 
/etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:13:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:13:56.293151 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:14:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:14:04.872598 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:14:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:14:04.872655 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:14:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:05.245925409Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(812389d0a4ea892037721905d05279ae8d47b713307510206e1f559fbd3280fb): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the 
condition" id=25276540-72ec-4072-a5a6-3698748e6f29 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:14:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:05.245976739Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 812389d0a4ea892037721905d05279ae8d47b713307510206e1f559fbd3280fb" id=25276540-72ec-4072-a5a6-3698748e6f29 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:14:05 ip-10-0-136-68 systemd[1]: run-utsns-9a6a6ad9\x2ddffc\x2d4548\x2dbb5c\x2dba27293d8fe7.mount: Deactivated successfully. Feb 23 19:14:05 ip-10-0-136-68 systemd[1]: run-ipcns-9a6a6ad9\x2ddffc\x2d4548\x2dbb5c\x2dba27293d8fe7.mount: Deactivated successfully. Feb 23 19:14:05 ip-10-0-136-68 systemd[1]: run-netns-9a6a6ad9\x2ddffc\x2d4548\x2dbb5c\x2dba27293d8fe7.mount: Deactivated successfully. Feb 23 19:14:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:05.262321760Z" level=info msg="runSandbox: deleting pod ID 812389d0a4ea892037721905d05279ae8d47b713307510206e1f559fbd3280fb from idIndex" id=25276540-72ec-4072-a5a6-3698748e6f29 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:14:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:05.262354365Z" level=info msg="runSandbox: removing pod sandbox 812389d0a4ea892037721905d05279ae8d47b713307510206e1f559fbd3280fb" id=25276540-72ec-4072-a5a6-3698748e6f29 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:14:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:05.262382591Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 812389d0a4ea892037721905d05279ae8d47b713307510206e1f559fbd3280fb" id=25276540-72ec-4072-a5a6-3698748e6f29 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:14:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:05.262397604Z" level=info msg="runSandbox: unmounting shmPath for sandbox 812389d0a4ea892037721905d05279ae8d47b713307510206e1f559fbd3280fb" id=25276540-72ec-4072-a5a6-3698748e6f29 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 
19:14:05 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-812389d0a4ea892037721905d05279ae8d47b713307510206e1f559fbd3280fb-userdata-shm.mount: Deactivated successfully. Feb 23 19:14:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:05.266315020Z" level=info msg="runSandbox: removing pod sandbox from storage: 812389d0a4ea892037721905d05279ae8d47b713307510206e1f559fbd3280fb" id=25276540-72ec-4072-a5a6-3698748e6f29 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:14:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:05.267795933Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=25276540-72ec-4072-a5a6-3698748e6f29 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:14:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:05.267823365Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=25276540-72ec-4072-a5a6-3698748e6f29 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:14:05 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:14:05.268012 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(812389d0a4ea892037721905d05279ae8d47b713307510206e1f559fbd3280fb): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:14:05 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:14:05.268064 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(812389d0a4ea892037721905d05279ae8d47b713307510206e1f559fbd3280fb): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:14:05 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:14:05.268090 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(812389d0a4ea892037721905d05279ae8d47b713307510206e1f559fbd3280fb): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:14:05 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:14:05.268142 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(812389d0a4ea892037721905d05279ae8d47b713307510206e1f559fbd3280fb): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 19:14:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:14:14.872348 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:14:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:14:14.872420 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:14:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:14:14.872453 2199 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 19:14:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:14:14.872969 2199 kuberuntime_manager.go:659] "Message for Container of pod" containerName="csi-driver" containerStatusID={Type:cri-o ID:c9453d4a4d8b55e1e5cfa39905e39e761825bfcf0ee3fd4c954b21cbc1da3155} pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" containerMessage="Container csi-driver failed liveness probe, will be restarted" Feb 23 19:14:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:14:14.873139 2199 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" containerID="cri-o://c9453d4a4d8b55e1e5cfa39905e39e761825bfcf0ee3fd4c954b21cbc1da3155" gracePeriod=30 Feb 23 19:14:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 
19:14:14.873408539Z" level=info msg="Stopping container: c9453d4a4d8b55e1e5cfa39905e39e761825bfcf0ee3fd4c954b21cbc1da3155 (timeout: 30s)" id=0a58df39-6299-4d73-812d-c45ebf138537 name=/runtime.v1.RuntimeService/StopContainer Feb 23 19:14:16 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:14:16.217113 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:14:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:16.217475601Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=3a5cc8a8-0f8d-414b-9418-f6e715558809 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:14:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:16.217537974Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:14:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:16.223472835Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:002f3342b6f60f0218c53b15f09a1dbeb114b4ac5fa1ccf33d380c1535589bb3 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/ca7637f5-584f-4202-b652-24838f0ed7c8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:14:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:16.223643850Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:14:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:16.246630594Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(4bc7e6ddba0ab7c02027cef783b0f4f5c1bdd819c6e6857b4e049e8f5520d724): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" 
name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=0a276619-b4c6-4ada-abe1-81f3daa8c914 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:14:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:16.246681990Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 4bc7e6ddba0ab7c02027cef783b0f4f5c1bdd819c6e6857b4e049e8f5520d724" id=0a276619-b4c6-4ada-abe1-81f3daa8c914 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:14:16 ip-10-0-136-68 systemd[1]: run-utsns-47d0a03e\x2d7bcf\x2d420e\x2d949d\x2dad348d271311.mount: Deactivated successfully. Feb 23 19:14:16 ip-10-0-136-68 systemd[1]: run-ipcns-47d0a03e\x2d7bcf\x2d420e\x2d949d\x2dad348d271311.mount: Deactivated successfully. Feb 23 19:14:16 ip-10-0-136-68 systemd[1]: run-netns-47d0a03e\x2d7bcf\x2d420e\x2d949d\x2dad348d271311.mount: Deactivated successfully. 
Feb 23 19:14:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:16.262318741Z" level=info msg="runSandbox: deleting pod ID 4bc7e6ddba0ab7c02027cef783b0f4f5c1bdd819c6e6857b4e049e8f5520d724 from idIndex" id=0a276619-b4c6-4ada-abe1-81f3daa8c914 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:14:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:16.262355305Z" level=info msg="runSandbox: removing pod sandbox 4bc7e6ddba0ab7c02027cef783b0f4f5c1bdd819c6e6857b4e049e8f5520d724" id=0a276619-b4c6-4ada-abe1-81f3daa8c914 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:14:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:16.262396328Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 4bc7e6ddba0ab7c02027cef783b0f4f5c1bdd819c6e6857b4e049e8f5520d724" id=0a276619-b4c6-4ada-abe1-81f3daa8c914 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:14:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:16.262417002Z" level=info msg="runSandbox: unmounting shmPath for sandbox 4bc7e6ddba0ab7c02027cef783b0f4f5c1bdd819c6e6857b4e049e8f5520d724" id=0a276619-b4c6-4ada-abe1-81f3daa8c914 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:14:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:16.269315104Z" level=info msg="runSandbox: removing pod sandbox from storage: 4bc7e6ddba0ab7c02027cef783b0f4f5c1bdd819c6e6857b4e049e8f5520d724" id=0a276619-b4c6-4ada-abe1-81f3daa8c914 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:14:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:16.270792695Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=0a276619-b4c6-4ada-abe1-81f3daa8c914 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:14:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:16.270820115Z" level=info msg="runSandbox: releasing pod sandbox name: 
k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=0a276619-b4c6-4ada-abe1-81f3daa8c914 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:14:16 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:14:16.271015 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(4bc7e6ddba0ab7c02027cef783b0f4f5c1bdd819c6e6857b4e049e8f5520d724): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Feb 23 19:14:16 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:14:16.271066 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(4bc7e6ddba0ab7c02027cef783b0f4f5c1bdd819c6e6857b4e049e8f5520d724): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:14:16 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:14:16.271096 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(4bc7e6ddba0ab7c02027cef783b0f4f5c1bdd819c6e6857b4e049e8f5520d724): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:14:16 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:14:16.271148 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(4bc7e6ddba0ab7c02027cef783b0f4f5c1bdd819c6e6857b4e049e8f5520d724): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 19:14:17 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-4bc7e6ddba0ab7c02027cef783b0f4f5c1bdd819c6e6857b4e049e8f5520d724-userdata-shm.mount: Deactivated successfully. Feb 23 19:14:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:18.234828839Z" level=info msg="NetworkStart: stopping network for sandbox 838d3e3dc0703af167f0bee2e0f963a9bb4ff1850e5c7eaee7b243c01c7d13fc" id=f6ce3bb5-48f7-4cbf-9a55-49e46669203d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:14:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:18.234961895Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:838d3e3dc0703af167f0bee2e0f963a9bb4ff1850e5c7eaee7b243c01c7d13fc UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/9381dae8-d481-4b8d-b7ed-cd89f4607bdf Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:14:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:18.234994287Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:14:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:18.235002965Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:14:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:18.235009901Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:14:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:18.634062374Z" level=warning msg="Failed to find container exit file for c9453d4a4d8b55e1e5cfa39905e39e761825bfcf0ee3fd4c954b21cbc1da3155: timed out waiting 
for the condition" id=0a58df39-6299-4d73-812d-c45ebf138537 name=/runtime.v1.RuntimeService/StopContainer Feb 23 19:14:18 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-a0285a819a072c7195c98804885924de1aa3f92cea2f581c07da86144e2c8d82-merged.mount: Deactivated successfully. Feb 23 19:14:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:22.402181659Z" level=warning msg="Failed to find container exit file for c9453d4a4d8b55e1e5cfa39905e39e761825bfcf0ee3fd4c954b21cbc1da3155: timed out waiting for the condition" id=0a58df39-6299-4d73-812d-c45ebf138537 name=/runtime.v1.RuntimeService/StopContainer Feb 23 19:14:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:22.404782309Z" level=info msg="Stopped container c9453d4a4d8b55e1e5cfa39905e39e761825bfcf0ee3fd4c954b21cbc1da3155: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=0a58df39-6299-4d73-812d-c45ebf138537 name=/runtime.v1.RuntimeService/StopContainer Feb 23 19:14:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:22.405541224Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=aa852989-bc75-4c08-9170-10853e8c59ad name=/runtime.v1.ImageService/ImageStatus Feb 23 19:14:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:22.405718647Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=aa852989-bc75-4c08-9170-10853e8c59ad name=/runtime.v1.ImageService/ImageStatus Feb 23 19:14:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:22.406310501Z" level=info msg="Checking image 
status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=531f0021-1342-46f1-af42-6d99fe0d6a9f name=/runtime.v1.ImageService/ImageStatus Feb 23 19:14:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:22.406442854Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=531f0021-1342-46f1-af42-6d99fe0d6a9f name=/runtime.v1.ImageService/ImageStatus Feb 23 19:14:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:22.407063995Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=d3469107-35c1-48e3-94e2-ea133de7131a name=/runtime.v1.RuntimeService/CreateContainer Feb 23 19:14:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:22.407164743Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:14:22 ip-10-0-136-68 systemd[1]: Started crio-conmon-c04fcb3de8db6fe843ab7c9284973fba9316ec7a530d9b4e63c81d2eef41a66d.scope. Feb 23 19:14:22 ip-10-0-136-68 systemd[1]: Started libcontainer container c04fcb3de8db6fe843ab7c9284973fba9316ec7a530d9b4e63c81d2eef41a66d. Feb 23 19:14:22 ip-10-0-136-68 conmon[14566]: conmon c04fcb3de8db6fe843ab : Failed to write to cgroup.event_control Operation not supported Feb 23 19:14:22 ip-10-0-136-68 systemd[1]: crio-conmon-c04fcb3de8db6fe843ab7c9284973fba9316ec7a530d9b4e63c81d2eef41a66d.scope: Deactivated successfully. 
Feb 23 19:14:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:22.553398755Z" level=info msg="Created container c04fcb3de8db6fe843ab7c9284973fba9316ec7a530d9b4e63c81d2eef41a66d: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=d3469107-35c1-48e3-94e2-ea133de7131a name=/runtime.v1.RuntimeService/CreateContainer Feb 23 19:14:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:22.553872462Z" level=info msg="Starting container: c04fcb3de8db6fe843ab7c9284973fba9316ec7a530d9b4e63c81d2eef41a66d" id=653a0c3e-e5ee-4fcc-b5ae-40c1163cc6d8 name=/runtime.v1.RuntimeService/StartContainer Feb 23 19:14:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:22.573763419Z" level=info msg="Started container" PID=14579 containerID=c04fcb3de8db6fe843ab7c9284973fba9316ec7a530d9b4e63c81d2eef41a66d description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver id=653a0c3e-e5ee-4fcc-b5ae-40c1163cc6d8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4ac357af36e7dd4792f93249425e93fd84aed45840e9fbb3d0ae6c0af964798 Feb 23 19:14:22 ip-10-0-136-68 systemd[1]: crio-c04fcb3de8db6fe843ab7c9284973fba9316ec7a530d9b4e63c81d2eef41a66d.scope: Deactivated successfully. 
Feb 23 19:14:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:23.077390645Z" level=warning msg="Failed to find container exit file for c9453d4a4d8b55e1e5cfa39905e39e761825bfcf0ee3fd4c954b21cbc1da3155: timed out waiting for the condition" id=1cb25db3-b729-4437-a073-9be49a705227 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:14:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:23.253077044Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(1a05cb14f8d54b305cd12f88dc806a9f9c9fa613197597c3b04f7be519c0a373): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=59231cac-95d0-467e-8c16-d92b9720db91 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:14:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:23.253165800Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 1a05cb14f8d54b305cd12f88dc806a9f9c9fa613197597c3b04f7be519c0a373" id=59231cac-95d0-467e-8c16-d92b9720db91 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:14:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:23.253210983Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(cc2fd5c22aafc64738c5f22e06ae14772503ed67902307ddaba228fd6c8eee07): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: 
[openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c0e9fc43-e654-42ba-a0bc-3e03975a4b9d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:14:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:23.253276878Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox cc2fd5c22aafc64738c5f22e06ae14772503ed67902307ddaba228fd6c8eee07" id=c0e9fc43-e654-42ba-a0bc-3e03975a4b9d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:14:23 ip-10-0-136-68 systemd[1]: run-utsns-58220164\x2d0d83\x2d424b\x2d9eef\x2d701ae0b251c7.mount: Deactivated successfully. Feb 23 19:14:23 ip-10-0-136-68 systemd[1]: run-utsns-f70bb75e\x2dfeae\x2d4afa\x2d8c43\x2d71bb7a9c74de.mount: Deactivated successfully. Feb 23 19:14:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:23.275378455Z" level=info msg="runSandbox: deleting pod ID cc2fd5c22aafc64738c5f22e06ae14772503ed67902307ddaba228fd6c8eee07 from idIndex" id=c0e9fc43-e654-42ba-a0bc-3e03975a4b9d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:14:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:23.275428211Z" level=info msg="runSandbox: removing pod sandbox cc2fd5c22aafc64738c5f22e06ae14772503ed67902307ddaba228fd6c8eee07" id=c0e9fc43-e654-42ba-a0bc-3e03975a4b9d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:14:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:23.275464639Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox cc2fd5c22aafc64738c5f22e06ae14772503ed67902307ddaba228fd6c8eee07" id=c0e9fc43-e654-42ba-a0bc-3e03975a4b9d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:14:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:23.275477763Z" level=info msg="runSandbox: unmounting shmPath for sandbox cc2fd5c22aafc64738c5f22e06ae14772503ed67902307ddaba228fd6c8eee07" 
id=c0e9fc43-e654-42ba-a0bc-3e03975a4b9d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:14:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:23.275379279Z" level=info msg="runSandbox: deleting pod ID 1a05cb14f8d54b305cd12f88dc806a9f9c9fa613197597c3b04f7be519c0a373 from idIndex" id=59231cac-95d0-467e-8c16-d92b9720db91 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:14:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:23.275539019Z" level=info msg="runSandbox: removing pod sandbox 1a05cb14f8d54b305cd12f88dc806a9f9c9fa613197597c3b04f7be519c0a373" id=59231cac-95d0-467e-8c16-d92b9720db91 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:14:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:23.275561424Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 1a05cb14f8d54b305cd12f88dc806a9f9c9fa613197597c3b04f7be519c0a373" id=59231cac-95d0-467e-8c16-d92b9720db91 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:14:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:23.275577206Z" level=info msg="runSandbox: unmounting shmPath for sandbox 1a05cb14f8d54b305cd12f88dc806a9f9c9fa613197597c3b04f7be519c0a373" id=59231cac-95d0-467e-8c16-d92b9720db91 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:14:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:23.280388720Z" level=info msg="runSandbox: removing pod sandbox from storage: 1a05cb14f8d54b305cd12f88dc806a9f9c9fa613197597c3b04f7be519c0a373" id=59231cac-95d0-467e-8c16-d92b9720db91 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:14:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:23.280389666Z" level=info msg="runSandbox: removing pod sandbox from storage: cc2fd5c22aafc64738c5f22e06ae14772503ed67902307ddaba228fd6c8eee07" id=c0e9fc43-e654-42ba-a0bc-3e03975a4b9d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:14:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:23.282096841Z" level=info msg="runSandbox: releasing container name: 
k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=59231cac-95d0-467e-8c16-d92b9720db91 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:14:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:23.282135132Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=59231cac-95d0-467e-8c16-d92b9720db91 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:14:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:14:23.282344 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(1a05cb14f8d54b305cd12f88dc806a9f9c9fa613197597c3b04f7be519c0a373): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Feb 23 19:14:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:14:23.282608 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(1a05cb14f8d54b305cd12f88dc806a9f9c9fa613197597c3b04f7be519c0a373): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:14:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:14:23.282648 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(1a05cb14f8d54b305cd12f88dc806a9f9c9fa613197597c3b04f7be519c0a373): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:14:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:14:23.282737 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(1a05cb14f8d54b305cd12f88dc806a9f9c9fa613197597c3b04f7be519c0a373): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 19:14:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:23.283594413Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=c0e9fc43-e654-42ba-a0bc-3e03975a4b9d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:14:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:23.283619729Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=c0e9fc43-e654-42ba-a0bc-3e03975a4b9d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:14:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:14:23.283752 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(cc2fd5c22aafc64738c5f22e06ae14772503ed67902307ddaba228fd6c8eee07): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:14:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:14:23.283820 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(cc2fd5c22aafc64738c5f22e06ae14772503ed67902307ddaba228fd6c8eee07): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:14:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:14:23.283857 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(cc2fd5c22aafc64738c5f22e06ae14772503ed67902307ddaba228fd6c8eee07): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:14:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:14:23.283934 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(cc2fd5c22aafc64738c5f22e06ae14772503ed67902307ddaba228fd6c8eee07): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 19:14:23 ip-10-0-136-68 systemd[1]: run-netns-58220164\x2d0d83\x2d424b\x2d9eef\x2d701ae0b251c7.mount: Deactivated successfully. Feb 23 19:14:23 ip-10-0-136-68 systemd[1]: run-ipcns-58220164\x2d0d83\x2d424b\x2d9eef\x2d701ae0b251c7.mount: Deactivated successfully. Feb 23 19:14:23 ip-10-0-136-68 systemd[1]: run-netns-f70bb75e\x2dfeae\x2d4afa\x2d8c43\x2d71bb7a9c74de.mount: Deactivated successfully. 
Feb 23 19:14:23 ip-10-0-136-68 systemd[1]: run-ipcns-f70bb75e\x2dfeae\x2d4afa\x2d8c43\x2d71bb7a9c74de.mount: Deactivated successfully. Feb 23 19:14:23 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-1a05cb14f8d54b305cd12f88dc806a9f9c9fa613197597c3b04f7be519c0a373-userdata-shm.mount: Deactivated successfully. Feb 23 19:14:23 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-cc2fd5c22aafc64738c5f22e06ae14772503ed67902307ddaba228fd6c8eee07-userdata-shm.mount: Deactivated successfully. Feb 23 19:14:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:14:25.217299 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:14:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:14:25.217608 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:14:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:14:25.217853 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not 
found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:14:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:14:25.217899 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:14:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:14:26.291984 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:14:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:14:26.292281 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:14:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:14:26.292486 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:14:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:14:26.292527 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:14:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:26.827046074Z" level=warning msg="Failed to find container exit file for 90ab87dda457b2284c2571d0f3c7c38feccc954681d2a6889cee3bfa56705742: timed out waiting for the condition" id=e29bfdbb-3445-42cd-8ce8-c3518527b7b8 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:14:26 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:14:26.827950 2199 generic.go:332] "Generic (PLEG): container finished" podID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerID="c9453d4a4d8b55e1e5cfa39905e39e761825bfcf0ee3fd4c954b21cbc1da3155" exitCode=-1 Feb 23 19:14:26 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:14:26.827990 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerDied Data:c9453d4a4d8b55e1e5cfa39905e39e761825bfcf0ee3fd4c954b21cbc1da3155} Feb 23 19:14:26 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:14:26.828021 2199 scope.go:115] "RemoveContainer" containerID="90ab87dda457b2284c2571d0f3c7c38feccc954681d2a6889cee3bfa56705742" Feb 23 19:14:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 
19:14:30.588939744Z" level=warning msg="Failed to find container exit file for 90ab87dda457b2284c2571d0f3c7c38feccc954681d2a6889cee3bfa56705742: timed out waiting for the condition" id=66f1cc2b-434b-4bd9-a432-809298784c35 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:14:31 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:14:31.216381 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:14:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:31.216721591Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=7e94ecc8-a557-4695-b44a-ac59e8ff379a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:14:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:31.216775329Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:14:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:31.223111595Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:6b41a235adcdd456cf4d535518178067a76dde6df9eed9c9e6d8ab688485c96e UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/9c7021a5-1c33-4483-8a3a-90be5e04d03d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:14:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:31.223148435Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:14:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:31.567809955Z" level=warning msg="Failed to find container exit file for c9453d4a4d8b55e1e5cfa39905e39e761825bfcf0ee3fd4c954b21cbc1da3155: timed out waiting for the condition" id=537890d6-5314-4586-81af-b6c0fc920948 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:14:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:34.351004667Z" level=warning 
msg="Failed to find container exit file for 90ab87dda457b2284c2571d0f3c7c38feccc954681d2a6889cee3bfa56705742: timed out waiting for the condition" id=624eac78-cfdd-478e-b2ea-5ce85bb8b823 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:14:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:34.351456329Z" level=info msg="Removing container: 90ab87dda457b2284c2571d0f3c7c38feccc954681d2a6889cee3bfa56705742" id=8839f8b0-a390-4cc9-895f-d413a1b36e1a name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 19:14:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:35.305838841Z" level=warning msg="Failed to find container exit file for 90ab87dda457b2284c2571d0f3c7c38feccc954681d2a6889cee3bfa56705742: timed out waiting for the condition" id=89314345-ebe1-47a3-bc5a-cb1a58d000ae name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:14:35 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:14:35.306850 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerStarted Data:c04fcb3de8db6fe843ab7c9284973fba9316ec7a530d9b4e63c81d2eef41a66d} Feb 23 19:14:36 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:14:36.217425 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:14:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:36.217841474Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=feb50c2d-acfd-4fb3-916f-a7836f7faefe name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:14:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:36.217904659Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:14:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:36.224332542Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:629ccd1886a7986f35908712f681b8667083f856015ac368f68aec3879bec6bb UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/d07ffad7-db00-45bc-9c3b-a855a60d259e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:14:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:36.224369974Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:14:37 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:14:37.216729 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 19:14:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:37.217155405Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=68e0b38e-8408-4d52-8dd7-ec2598245eaa name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:14:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:37.217207398Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:14:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:37.223311220Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:004be3fa89697c0af4ca30e194d30089b3d3e4f068bef082bd7230f89ed0a823 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/11e5950f-6568-4f0d-b068-c9544ddd1fbc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:14:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:37.223348385Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:14:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:38.111358231Z" level=warning msg="Failed to find container exit file for 90ab87dda457b2284c2571d0f3c7c38feccc954681d2a6889cee3bfa56705742: timed out waiting for the condition" id=8839f8b0-a390-4cc9-895f-d413a1b36e1a name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 19:14:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:38.136795271Z" level=info msg="Removed container 90ab87dda457b2284c2571d0f3c7c38feccc954681d2a6889cee3bfa56705742: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=8839f8b0-a390-4cc9-895f-d413a1b36e1a name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 19:14:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:14:42.061966309Z" level=warning msg="Failed to find container exit file for c9453d4a4d8b55e1e5cfa39905e39e761825bfcf0ee3fd4c954b21cbc1da3155: 
timed out waiting for the condition" id=29630f75-fddf-4267-9bb1-b521d67546f6 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:14:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:14:44.872630 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:14:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:14:44.872684 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:14:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:14:54.872239 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:14:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:14:54.872330 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:14:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:14:56.292418 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or 
directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:14:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:14:56.292706 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:14:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:14:56.292922 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:14:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:14:56.292974 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:15:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:01.237223203Z" level=info msg="NetworkStart: stopping network for sandbox 002f3342b6f60f0218c53b15f09a1dbeb114b4ac5fa1ccf33d380c1535589bb3" id=3a5cc8a8-0f8d-414b-9418-f6e715558809 
name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:15:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:01.237369392Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:002f3342b6f60f0218c53b15f09a1dbeb114b4ac5fa1ccf33d380c1535589bb3 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/ca7637f5-584f-4202-b652-24838f0ed7c8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:15:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:01.237396954Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:15:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:01.237404260Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:15:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:01.237411942Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:15:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:03.244977361Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(838d3e3dc0703af167f0bee2e0f963a9bb4ff1850e5c7eaee7b243c01c7d13fc): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f6ce3bb5-48f7-4cbf-9a55-49e46669203d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:15:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:03.245024970Z" level=info msg="runSandbox: cleaning 
up namespaces after failing to run sandbox 838d3e3dc0703af167f0bee2e0f963a9bb4ff1850e5c7eaee7b243c01c7d13fc" id=f6ce3bb5-48f7-4cbf-9a55-49e46669203d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:15:03 ip-10-0-136-68 systemd[1]: run-utsns-9381dae8\x2dd481\x2d4b8d\x2db7ed\x2dcd89f4607bdf.mount: Deactivated successfully. Feb 23 19:15:03 ip-10-0-136-68 systemd[1]: run-ipcns-9381dae8\x2dd481\x2d4b8d\x2db7ed\x2dcd89f4607bdf.mount: Deactivated successfully. Feb 23 19:15:03 ip-10-0-136-68 systemd[1]: run-netns-9381dae8\x2dd481\x2d4b8d\x2db7ed\x2dcd89f4607bdf.mount: Deactivated successfully. Feb 23 19:15:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:03.272326698Z" level=info msg="runSandbox: deleting pod ID 838d3e3dc0703af167f0bee2e0f963a9bb4ff1850e5c7eaee7b243c01c7d13fc from idIndex" id=f6ce3bb5-48f7-4cbf-9a55-49e46669203d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:15:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:03.272367610Z" level=info msg="runSandbox: removing pod sandbox 838d3e3dc0703af167f0bee2e0f963a9bb4ff1850e5c7eaee7b243c01c7d13fc" id=f6ce3bb5-48f7-4cbf-9a55-49e46669203d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:15:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:03.272415292Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 838d3e3dc0703af167f0bee2e0f963a9bb4ff1850e5c7eaee7b243c01c7d13fc" id=f6ce3bb5-48f7-4cbf-9a55-49e46669203d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:15:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:03.272435112Z" level=info msg="runSandbox: unmounting shmPath for sandbox 838d3e3dc0703af167f0bee2e0f963a9bb4ff1850e5c7eaee7b243c01c7d13fc" id=f6ce3bb5-48f7-4cbf-9a55-49e46669203d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:15:03 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-838d3e3dc0703af167f0bee2e0f963a9bb4ff1850e5c7eaee7b243c01c7d13fc-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:15:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:03.277307298Z" level=info msg="runSandbox: removing pod sandbox from storage: 838d3e3dc0703af167f0bee2e0f963a9bb4ff1850e5c7eaee7b243c01c7d13fc" id=f6ce3bb5-48f7-4cbf-9a55-49e46669203d name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:15:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:03.279307056Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=f6ce3bb5-48f7-4cbf-9a55-49e46669203d name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:15:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:03.279359365Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=f6ce3bb5-48f7-4cbf-9a55-49e46669203d name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:15:03 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:15:03.281899 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(838d3e3dc0703af167f0bee2e0f963a9bb4ff1850e5c7eaee7b243c01c7d13fc): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 19:15:03 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:15:03.282341 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(838d3e3dc0703af167f0bee2e0f963a9bb4ff1850e5c7eaee7b243c01c7d13fc): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 19:15:03 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:15:03.282417 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(838d3e3dc0703af167f0bee2e0f963a9bb4ff1850e5c7eaee7b243c01c7d13fc): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 19:15:03 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:15:03.282507 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(838d3e3dc0703af167f0bee2e0f963a9bb4ff1850e5c7eaee7b243c01c7d13fc): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1
Feb 23 19:15:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:15:04.872931 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 19:15:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:15:04.872989 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 19:15:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:15:14.872085 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 19:15:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:15:14.872147 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 19:15:15 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:15:15.216562 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 19:15:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:15.216887320Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=b027d2b8-3c87-4e0b-9980-5419bfafd958 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:15:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:15.216948329Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 19:15:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:15.223102324Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:8d56cc8c7572829757f828d47365bd0b078e6c23ee52d6662f6ee66115eb285a UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/643adcd0-aa13-42e7-a5be-8f35f25d575f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:15:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:15.223139772Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:15:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:16.236948562Z" level=info msg="NetworkStart: stopping network for sandbox 6b41a235adcdd456cf4d535518178067a76dde6df9eed9c9e6d8ab688485c96e" id=7e94ecc8-a557-4695-b44a-ac59e8ff379a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:15:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:16.237073042Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:6b41a235adcdd456cf4d535518178067a76dde6df9eed9c9e6d8ab688485c96e UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/9c7021a5-1c33-4483-8a3a-90be5e04d03d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:15:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:16.237115958Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 19:15:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:16.237129310Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 19:15:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:16.237139889Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:15:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:20.220475171Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3" id=60e228f4-f07a-45e9-93b3-b2ce502bcf05 name=/runtime.v1.ImageService/ImageStatus
Feb 23 19:15:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:20.220922274Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:51cb7555d0a960fc0daae74a5b5c47c19fd1ae7bd31e2616ebc88f696f511e4d,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3],Size_:351828271,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=60e228f4-f07a-45e9-93b3-b2ce502bcf05 name=/runtime.v1.ImageService/ImageStatus
Feb 23 19:15:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:21.238122727Z" level=info msg="NetworkStart: stopping network for sandbox 629ccd1886a7986f35908712f681b8667083f856015ac368f68aec3879bec6bb" id=feb50c2d-acfd-4fb3-916f-a7836f7faefe name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:15:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:21.238262746Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:629ccd1886a7986f35908712f681b8667083f856015ac368f68aec3879bec6bb UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/d07ffad7-db00-45bc-9c3b-a855a60d259e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:15:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:21.238295544Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 19:15:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:21.238306176Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 19:15:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:21.238314701Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:15:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:22.236468097Z" level=info msg="NetworkStart: stopping network for sandbox 004be3fa89697c0af4ca30e194d30089b3d3e4f068bef082bd7230f89ed0a823" id=68e0b38e-8408-4d52-8dd7-ec2598245eaa name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:15:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:22.236575789Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:004be3fa89697c0af4ca30e194d30089b3d3e4f068bef082bd7230f89ed0a823 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/11e5950f-6568-4f0d-b068-c9544ddd1fbc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:15:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:22.236603859Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 19:15:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:22.236610567Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 19:15:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:22.236616622Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:15:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:15:24.872362 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 19:15:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:15:24.872419 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 19:15:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:15:24.872453 2199 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7"
Feb 23 19:15:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:15:24.872930 2199 kuberuntime_manager.go:659] "Message for Container of pod" containerName="csi-driver" containerStatusID={Type:cri-o ID:c04fcb3de8db6fe843ab7c9284973fba9316ec7a530d9b4e63c81d2eef41a66d} pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" containerMessage="Container csi-driver failed liveness probe, will be restarted"
Feb 23 19:15:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:15:24.873094 2199 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" containerID="cri-o://c04fcb3de8db6fe843ab7c9284973fba9316ec7a530d9b4e63c81d2eef41a66d" gracePeriod=30
Feb 23 19:15:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:24.873325442Z" level=info msg="Stopping container: c04fcb3de8db6fe843ab7c9284973fba9316ec7a530d9b4e63c81d2eef41a66d (timeout: 30s)" id=8ae784f1-ba03-43a3-a853-47e78cb5fac0 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 19:15:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:15:26.291774 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:15:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:15:26.291986 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:15:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:15:26.292209 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:15:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:15:26.292236 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 19:15:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:28.634060617Z" level=warning msg="Failed to find container exit file for c04fcb3de8db6fe843ab7c9284973fba9316ec7a530d9b4e63c81d2eef41a66d: timed out waiting for the condition" id=8ae784f1-ba03-43a3-a853-47e78cb5fac0 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 19:15:28 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-e4cbf4264b35ffc4d84cde85b00d18cda5498efaa03806a6b0c177d1c500be82-merged.mount: Deactivated successfully.
Feb 23 19:15:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:32.416005738Z" level=warning msg="Failed to find container exit file for c04fcb3de8db6fe843ab7c9284973fba9316ec7a530d9b4e63c81d2eef41a66d: timed out waiting for the condition" id=8ae784f1-ba03-43a3-a853-47e78cb5fac0 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 19:15:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:32.417563288Z" level=info msg="Stopped container c04fcb3de8db6fe843ab7c9284973fba9316ec7a530d9b4e63c81d2eef41a66d: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=8ae784f1-ba03-43a3-a853-47e78cb5fac0 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 19:15:32 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:15:32.418148 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 19:15:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:32.885214225Z" level=warning msg="Failed to find container exit file for c04fcb3de8db6fe843ab7c9284973fba9316ec7a530d9b4e63c81d2eef41a66d: timed out waiting for the condition" id=ec076d14-1156-4d99-b35b-6d88ede7a306 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 19:15:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:36.634929373Z" level=warning msg="Failed to find container exit file for c9453d4a4d8b55e1e5cfa39905e39e761825bfcf0ee3fd4c954b21cbc1da3155: timed out waiting for the condition" id=b4efb64b-d98f-4744-96e7-8b2a587f25c0 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 19:15:36 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:15:36.635889 2199 generic.go:332] "Generic (PLEG): container finished" podID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerID="c04fcb3de8db6fe843ab7c9284973fba9316ec7a530d9b4e63c81d2eef41a66d" exitCode=-1
Feb 23 19:15:36 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:15:36.635936 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerDied Data:c04fcb3de8db6fe843ab7c9284973fba9316ec7a530d9b4e63c81d2eef41a66d}
Feb 23 19:15:36 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:15:36.635981 2199 scope.go:115] "RemoveContainer" containerID="c9453d4a4d8b55e1e5cfa39905e39e761825bfcf0ee3fd4c954b21cbc1da3155"
Feb 23 19:15:37 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:15:37.637770 2199 scope.go:115] "RemoveContainer" containerID="c04fcb3de8db6fe843ab7c9284973fba9316ec7a530d9b4e63c81d2eef41a66d"
Feb 23 19:15:37 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:15:37.638162 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 19:15:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:40.394925154Z" level=warning msg="Failed to find container exit file for c9453d4a4d8b55e1e5cfa39905e39e761825bfcf0ee3fd4c954b21cbc1da3155: timed out waiting for the condition" id=30bf35f1-9070-421c-9d93-f453cfd741ac name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 19:15:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:44.143989078Z" level=warning msg="Failed to find container exit file for c9453d4a4d8b55e1e5cfa39905e39e761825bfcf0ee3fd4c954b21cbc1da3155: timed out waiting for the condition" id=73307b07-e2b7-484c-9046-bfb5efcb114e name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 19:15:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:44.144598435Z" level=info msg="Removing container: c9453d4a4d8b55e1e5cfa39905e39e761825bfcf0ee3fd4c954b21cbc1da3155" id=0527cacc-d5a6-40ef-8c24-0828a63bb1fd name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 19:15:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:46.246423393Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(002f3342b6f60f0218c53b15f09a1dbeb114b4ac5fa1ccf33d380c1535589bb3): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=3a5cc8a8-0f8d-414b-9418-f6e715558809 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:15:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:46.246470657Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 002f3342b6f60f0218c53b15f09a1dbeb114b4ac5fa1ccf33d380c1535589bb3" id=3a5cc8a8-0f8d-414b-9418-f6e715558809 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:15:46 ip-10-0-136-68 systemd[1]: run-utsns-ca7637f5\x2d584f\x2d4202\x2db652\x2d24838f0ed7c8.mount: Deactivated successfully.
Feb 23 19:15:46 ip-10-0-136-68 systemd[1]: run-ipcns-ca7637f5\x2d584f\x2d4202\x2db652\x2d24838f0ed7c8.mount: Deactivated successfully.
Feb 23 19:15:46 ip-10-0-136-68 systemd[1]: run-netns-ca7637f5\x2d584f\x2d4202\x2db652\x2d24838f0ed7c8.mount: Deactivated successfully.
Feb 23 19:15:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:46.264333314Z" level=info msg="runSandbox: deleting pod ID 002f3342b6f60f0218c53b15f09a1dbeb114b4ac5fa1ccf33d380c1535589bb3 from idIndex" id=3a5cc8a8-0f8d-414b-9418-f6e715558809 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:15:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:46.264367092Z" level=info msg="runSandbox: removing pod sandbox 002f3342b6f60f0218c53b15f09a1dbeb114b4ac5fa1ccf33d380c1535589bb3" id=3a5cc8a8-0f8d-414b-9418-f6e715558809 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:15:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:46.264394566Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 002f3342b6f60f0218c53b15f09a1dbeb114b4ac5fa1ccf33d380c1535589bb3" id=3a5cc8a8-0f8d-414b-9418-f6e715558809 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:15:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:46.264408272Z" level=info msg="runSandbox: unmounting shmPath for sandbox 002f3342b6f60f0218c53b15f09a1dbeb114b4ac5fa1ccf33d380c1535589bb3" id=3a5cc8a8-0f8d-414b-9418-f6e715558809 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:15:46 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-002f3342b6f60f0218c53b15f09a1dbeb114b4ac5fa1ccf33d380c1535589bb3-userdata-shm.mount: Deactivated successfully.
Feb 23 19:15:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:46.269311979Z" level=info msg="runSandbox: removing pod sandbox from storage: 002f3342b6f60f0218c53b15f09a1dbeb114b4ac5fa1ccf33d380c1535589bb3" id=3a5cc8a8-0f8d-414b-9418-f6e715558809 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:15:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:46.270908800Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=3a5cc8a8-0f8d-414b-9418-f6e715558809 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:15:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:46.270939119Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=3a5cc8a8-0f8d-414b-9418-f6e715558809 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:15:46 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:15:46.271124 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(002f3342b6f60f0218c53b15f09a1dbeb114b4ac5fa1ccf33d380c1535589bb3): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 19:15:46 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:15:46.271180 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(002f3342b6f60f0218c53b15f09a1dbeb114b4ac5fa1ccf33d380c1535589bb3): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk"
Feb 23 19:15:46 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:15:46.271210 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(002f3342b6f60f0218c53b15f09a1dbeb114b4ac5fa1ccf33d380c1535589bb3): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk"
Feb 23 19:15:46 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:15:46.271379 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(002f3342b6f60f0218c53b15f09a1dbeb114b4ac5fa1ccf33d380c1535589bb3): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687
Feb 23 19:15:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:47.905980749Z" level=warning msg="Failed to find container exit file for c9453d4a4d8b55e1e5cfa39905e39e761825bfcf0ee3fd4c954b21cbc1da3155: timed out waiting for the condition" id=0527cacc-d5a6-40ef-8c24-0828a63bb1fd name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 19:15:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:47.918462279Z" level=info msg="Removed container c9453d4a4d8b55e1e5cfa39905e39e761825bfcf0ee3fd4c954b21cbc1da3155: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=0527cacc-d5a6-40ef-8c24-0828a63bb1fd name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 19:15:49 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:15:49.217657 2199 scope.go:115] "RemoveContainer" containerID="c04fcb3de8db6fe843ab7c9284973fba9316ec7a530d9b4e63c81d2eef41a66d"
Feb 23 19:15:49 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:15:49.219269 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 19:15:50 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:15:50.218235 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:15:50 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:15:50.218547 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:15:50 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:15:50.219010 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:15:50 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:15:50.219145 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 19:15:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:15:52.402948506Z" level=warning msg="Failed to find container exit file for c04fcb3de8db6fe843ab7c9284973fba9316ec7a530d9b4e63c81d2eef41a66d: timed out waiting for the condition" id=e440ebc2-3fd3-4854-8158-a3a6d64ac3d9 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 19:15:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:15:56.292698 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:15:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:15:56.292931 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:15:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:15:56.293191 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:15:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:15:56.293219 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 19:16:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:00.237213233Z" level=info msg="NetworkStart: stopping network for sandbox 8d56cc8c7572829757f828d47365bd0b078e6c23ee52d6662f6ee66115eb285a" id=b027d2b8-3c87-4e0b-9980-5419bfafd958 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:16:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:00.237355870Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:8d56cc8c7572829757f828d47365bd0b078e6c23ee52d6662f6ee66115eb285a UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/643adcd0-aa13-42e7-a5be-8f35f25d575f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:16:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:00.237395945Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 19:16:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:00.237408115Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 19:16:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:00.237419054Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:16:01 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:16:01.216596 2199 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:16:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:01.216928162Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=3e8193a4-3710-4515-8f30-aa5919f58320 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:16:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:01.216988387Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:16:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:01.223039253Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:1b668c7d9049a52713fd2c45f26e8e5c47caee9dbe71f04ba91a0f9e057f1ee1 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/1acfe2fa-f121-412d-9d6b-30d7d08ec4ff Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:16:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:01.223072164Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:16:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:01.246966038Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(6b41a235adcdd456cf4d535518178067a76dde6df9eed9c9e6d8ab688485c96e): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=7e94ecc8-a557-4695-b44a-ac59e8ff379a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:16:01 ip-10-0-136-68 
crio[2158]: time="2023-02-23 19:16:01.247004137Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 6b41a235adcdd456cf4d535518178067a76dde6df9eed9c9e6d8ab688485c96e" id=7e94ecc8-a557-4695-b44a-ac59e8ff379a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:16:01 ip-10-0-136-68 systemd[1]: run-utsns-9c7021a5\x2d1c33\x2d4483\x2d8a3a\x2d90be5e04d03d.mount: Deactivated successfully. Feb 23 19:16:01 ip-10-0-136-68 systemd[1]: run-ipcns-9c7021a5\x2d1c33\x2d4483\x2d8a3a\x2d90be5e04d03d.mount: Deactivated successfully. Feb 23 19:16:01 ip-10-0-136-68 systemd[1]: run-netns-9c7021a5\x2d1c33\x2d4483\x2d8a3a\x2d90be5e04d03d.mount: Deactivated successfully. Feb 23 19:16:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:01.270326228Z" level=info msg="runSandbox: deleting pod ID 6b41a235adcdd456cf4d535518178067a76dde6df9eed9c9e6d8ab688485c96e from idIndex" id=7e94ecc8-a557-4695-b44a-ac59e8ff379a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:16:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:01.270369004Z" level=info msg="runSandbox: removing pod sandbox 6b41a235adcdd456cf4d535518178067a76dde6df9eed9c9e6d8ab688485c96e" id=7e94ecc8-a557-4695-b44a-ac59e8ff379a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:16:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:01.270390800Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 6b41a235adcdd456cf4d535518178067a76dde6df9eed9c9e6d8ab688485c96e" id=7e94ecc8-a557-4695-b44a-ac59e8ff379a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:16:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:01.270403968Z" level=info msg="runSandbox: unmounting shmPath for sandbox 6b41a235adcdd456cf4d535518178067a76dde6df9eed9c9e6d8ab688485c96e" id=7e94ecc8-a557-4695-b44a-ac59e8ff379a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:16:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:01.275300966Z" level=info msg="runSandbox: removing pod sandbox from 
storage: 6b41a235adcdd456cf4d535518178067a76dde6df9eed9c9e6d8ab688485c96e" id=7e94ecc8-a557-4695-b44a-ac59e8ff379a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:16:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:01.276716792Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=7e94ecc8-a557-4695-b44a-ac59e8ff379a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:16:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:01.276742993Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=7e94ecc8-a557-4695-b44a-ac59e8ff379a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:16:01 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:16:01.276903 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(6b41a235adcdd456cf4d535518178067a76dde6df9eed9c9e6d8ab688485c96e): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:16:01 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:16:01.276952 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(6b41a235adcdd456cf4d535518178067a76dde6df9eed9c9e6d8ab688485c96e): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:16:01 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:16:01.276978 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(6b41a235adcdd456cf4d535518178067a76dde6df9eed9c9e6d8ab688485c96e): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:16:01 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:16:01.277033 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(6b41a235adcdd456cf4d535518178067a76dde6df9eed9c9e6d8ab688485c96e): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 19:16:02 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:16:02.217458 2199 scope.go:115] "RemoveContainer" containerID="c04fcb3de8db6fe843ab7c9284973fba9316ec7a530d9b4e63c81d2eef41a66d" Feb 23 19:16:02 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:16:02.218067 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:16:02 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-6b41a235adcdd456cf4d535518178067a76dde6df9eed9c9e6d8ab688485c96e-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:16:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:06.248634690Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(629ccd1886a7986f35908712f681b8667083f856015ac368f68aec3879bec6bb): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=feb50c2d-acfd-4fb3-916f-a7836f7faefe name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:16:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:06.248683544Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 629ccd1886a7986f35908712f681b8667083f856015ac368f68aec3879bec6bb" id=feb50c2d-acfd-4fb3-916f-a7836f7faefe name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:16:06 ip-10-0-136-68 systemd[1]: run-utsns-d07ffad7\x2ddb00\x2d45bc\x2d9c3b\x2da855a60d259e.mount: Deactivated successfully. Feb 23 19:16:06 ip-10-0-136-68 systemd[1]: run-ipcns-d07ffad7\x2ddb00\x2d45bc\x2d9c3b\x2da855a60d259e.mount: Deactivated successfully. Feb 23 19:16:06 ip-10-0-136-68 systemd[1]: run-netns-d07ffad7\x2ddb00\x2d45bc\x2d9c3b\x2da855a60d259e.mount: Deactivated successfully. 
Feb 23 19:16:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:06.269332042Z" level=info msg="runSandbox: deleting pod ID 629ccd1886a7986f35908712f681b8667083f856015ac368f68aec3879bec6bb from idIndex" id=feb50c2d-acfd-4fb3-916f-a7836f7faefe name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:16:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:06.269374626Z" level=info msg="runSandbox: removing pod sandbox 629ccd1886a7986f35908712f681b8667083f856015ac368f68aec3879bec6bb" id=feb50c2d-acfd-4fb3-916f-a7836f7faefe name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:16:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:06.269405919Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 629ccd1886a7986f35908712f681b8667083f856015ac368f68aec3879bec6bb" id=feb50c2d-acfd-4fb3-916f-a7836f7faefe name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:16:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:06.269418339Z" level=info msg="runSandbox: unmounting shmPath for sandbox 629ccd1886a7986f35908712f681b8667083f856015ac368f68aec3879bec6bb" id=feb50c2d-acfd-4fb3-916f-a7836f7faefe name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:16:06 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-629ccd1886a7986f35908712f681b8667083f856015ac368f68aec3879bec6bb-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:16:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:06.284317552Z" level=info msg="runSandbox: removing pod sandbox from storage: 629ccd1886a7986f35908712f681b8667083f856015ac368f68aec3879bec6bb" id=feb50c2d-acfd-4fb3-916f-a7836f7faefe name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:16:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:06.285942772Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=feb50c2d-acfd-4fb3-916f-a7836f7faefe name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:16:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:06.285980458Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=feb50c2d-acfd-4fb3-916f-a7836f7faefe name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:16:06 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:16:06.286182 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(629ccd1886a7986f35908712f681b8667083f856015ac368f68aec3879bec6bb): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:16:06 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:16:06.286237 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(629ccd1886a7986f35908712f681b8667083f856015ac368f68aec3879bec6bb): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:16:06 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:16:06.286330 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(629ccd1886a7986f35908712f681b8667083f856015ac368f68aec3879bec6bb): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:16:06 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:16:06.286396 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(629ccd1886a7986f35908712f681b8667083f856015ac368f68aec3879bec6bb): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 19:16:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:07.246509975Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(004be3fa89697c0af4ca30e194d30089b3d3e4f068bef082bd7230f89ed0a823): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=68e0b38e-8408-4d52-8dd7-ec2598245eaa name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:16:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:07.246577423Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 004be3fa89697c0af4ca30e194d30089b3d3e4f068bef082bd7230f89ed0a823" id=68e0b38e-8408-4d52-8dd7-ec2598245eaa name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:16:07 ip-10-0-136-68 systemd[1]: run-utsns-11e5950f\x2d6568\x2d4f0d\x2db068\x2dc9544ddd1fbc.mount: Deactivated successfully. Feb 23 19:16:07 ip-10-0-136-68 systemd[1]: run-ipcns-11e5950f\x2d6568\x2d4f0d\x2db068\x2dc9544ddd1fbc.mount: Deactivated successfully. Feb 23 19:16:07 ip-10-0-136-68 systemd[1]: run-netns-11e5950f\x2d6568\x2d4f0d\x2db068\x2dc9544ddd1fbc.mount: Deactivated successfully. 
Feb 23 19:16:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:07.264340851Z" level=info msg="runSandbox: deleting pod ID 004be3fa89697c0af4ca30e194d30089b3d3e4f068bef082bd7230f89ed0a823 from idIndex" id=68e0b38e-8408-4d52-8dd7-ec2598245eaa name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:16:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:07.264391327Z" level=info msg="runSandbox: removing pod sandbox 004be3fa89697c0af4ca30e194d30089b3d3e4f068bef082bd7230f89ed0a823" id=68e0b38e-8408-4d52-8dd7-ec2598245eaa name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:16:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:07.264432716Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 004be3fa89697c0af4ca30e194d30089b3d3e4f068bef082bd7230f89ed0a823" id=68e0b38e-8408-4d52-8dd7-ec2598245eaa name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:16:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:07.264455475Z" level=info msg="runSandbox: unmounting shmPath for sandbox 004be3fa89697c0af4ca30e194d30089b3d3e4f068bef082bd7230f89ed0a823" id=68e0b38e-8408-4d52-8dd7-ec2598245eaa name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:16:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:07.268313940Z" level=info msg="runSandbox: removing pod sandbox from storage: 004be3fa89697c0af4ca30e194d30089b3d3e4f068bef082bd7230f89ed0a823" id=68e0b38e-8408-4d52-8dd7-ec2598245eaa name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:16:07 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-004be3fa89697c0af4ca30e194d30089b3d3e4f068bef082bd7230f89ed0a823-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:16:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:07.270003476Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=68e0b38e-8408-4d52-8dd7-ec2598245eaa name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:16:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:07.270032714Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=68e0b38e-8408-4d52-8dd7-ec2598245eaa name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:16:07 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:16:07.270268 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(004be3fa89697c0af4ca30e194d30089b3d3e4f068bef082bd7230f89ed0a823): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:16:07 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:16:07.270330 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(004be3fa89697c0af4ca30e194d30089b3d3e4f068bef082bd7230f89ed0a823): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:16:07 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:16:07.270354 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(004be3fa89697c0af4ca30e194d30089b3d3e4f068bef082bd7230f89ed0a823): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:16:07 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:16:07.270414 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(004be3fa89697c0af4ca30e194d30089b3d3e4f068bef082bd7230f89ed0a823): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 19:16:13 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:16:13.217298 2199 scope.go:115] "RemoveContainer" containerID="c04fcb3de8db6fe843ab7c9284973fba9316ec7a530d9b4e63c81d2eef41a66d" Feb 23 19:16:13 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:16:13.217696 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:16:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:16:14.217327 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:16:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:14.217787038Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=8b40e231-22ce-41ce-88f2-673692cfd2d9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:16:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:14.217861052Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:16:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:14.223798427Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:c8e5ee6614223fa3b95fd66e306875629bffc5faa5492dd1bf39e7f3b9a1648a UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/9043edd0-84dd-4f88-9bb6-045fc7f55d65 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:16:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:14.223826967Z" level=info 
msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:16:18 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:16:18.216547 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 19:16:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:18.216846442Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=1abe3ba0-93fc-4fa4-b203-8d7a51abdfdb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:16:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:18.216908591Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:16:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:18.223323664Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:2a917cc639e57d5aeca33c21aa1d7c12cc76bdd60a1c80c935a736a2594099e4 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/a89caf2e-336c-45c6-ac39-468261112223 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:16:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:18.223360822Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:16:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:16:21.217095 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:16:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:21.217438853Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=69bcceb1-a44e-40f7-8336-1cd8325e22f8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:16:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:21.217509048Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:16:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:21.223705397Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:7b449c99501240f96bf50c32ce8ccef9c8a2ef603fb5da2d856dad01d0efd9d4 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/6291dc33-083e-4b2e-9229-cb5a46ea1f94 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:16:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:21.223741839Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:16:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:16:26.292718 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:16:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:16:26.293004 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not 
created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:16:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:16:26.293311 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:16:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:16:26.293347 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:16:28 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:16:28.217497 2199 scope.go:115] "RemoveContainer" containerID="c04fcb3de8db6fe843ab7c9284973fba9316ec7a530d9b4e63c81d2eef41a66d" Feb 23 19:16:28 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:16:28.218097 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" 
podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:16:42 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:16:42.217471 2199 scope.go:115] "RemoveContainer" containerID="c04fcb3de8db6fe843ab7c9284973fba9316ec7a530d9b4e63c81d2eef41a66d" Feb 23 19:16:42 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:16:42.217874 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:16:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:45.247215900Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(8d56cc8c7572829757f828d47365bd0b078e6c23ee52d6662f6ee66115eb285a): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b027d2b8-3c87-4e0b-9980-5419bfafd958 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:16:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:45.247296039Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 8d56cc8c7572829757f828d47365bd0b078e6c23ee52d6662f6ee66115eb285a" id=b027d2b8-3c87-4e0b-9980-5419bfafd958 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:16:45 ip-10-0-136-68 systemd[1]: run-utsns-643adcd0\x2daa13\x2d42e7\x2da5be\x2d8f35f25d575f.mount: Deactivated successfully. 
Feb 23 19:16:45 ip-10-0-136-68 systemd[1]: run-ipcns-643adcd0\x2daa13\x2d42e7\x2da5be\x2d8f35f25d575f.mount: Deactivated successfully. Feb 23 19:16:45 ip-10-0-136-68 systemd[1]: run-netns-643adcd0\x2daa13\x2d42e7\x2da5be\x2d8f35f25d575f.mount: Deactivated successfully. Feb 23 19:16:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:45.266343967Z" level=info msg="runSandbox: deleting pod ID 8d56cc8c7572829757f828d47365bd0b078e6c23ee52d6662f6ee66115eb285a from idIndex" id=b027d2b8-3c87-4e0b-9980-5419bfafd958 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:16:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:45.266385093Z" level=info msg="runSandbox: removing pod sandbox 8d56cc8c7572829757f828d47365bd0b078e6c23ee52d6662f6ee66115eb285a" id=b027d2b8-3c87-4e0b-9980-5419bfafd958 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:16:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:45.266416125Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 8d56cc8c7572829757f828d47365bd0b078e6c23ee52d6662f6ee66115eb285a" id=b027d2b8-3c87-4e0b-9980-5419bfafd958 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:16:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:45.266429860Z" level=info msg="runSandbox: unmounting shmPath for sandbox 8d56cc8c7572829757f828d47365bd0b078e6c23ee52d6662f6ee66115eb285a" id=b027d2b8-3c87-4e0b-9980-5419bfafd958 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:16:45 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-8d56cc8c7572829757f828d47365bd0b078e6c23ee52d6662f6ee66115eb285a-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:16:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:45.273305373Z" level=info msg="runSandbox: removing pod sandbox from storage: 8d56cc8c7572829757f828d47365bd0b078e6c23ee52d6662f6ee66115eb285a" id=b027d2b8-3c87-4e0b-9980-5419bfafd958 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:16:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:45.274874873Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=b027d2b8-3c87-4e0b-9980-5419bfafd958 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:16:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:45.274904349Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=b027d2b8-3c87-4e0b-9980-5419bfafd958 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:16:45 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:16:45.275118 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(8d56cc8c7572829757f828d47365bd0b078e6c23ee52d6662f6ee66115eb285a): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:16:45 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:16:45.275176 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(8d56cc8c7572829757f828d47365bd0b078e6c23ee52d6662f6ee66115eb285a): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:16:45 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:16:45.275199 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(8d56cc8c7572829757f828d47365bd0b078e6c23ee52d6662f6ee66115eb285a): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:16:45 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:16:45.275304 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(8d56cc8c7572829757f828d47365bd0b078e6c23ee52d6662f6ee66115eb285a): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 19:16:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:46.236733067Z" level=info msg="NetworkStart: stopping network for sandbox 1b668c7d9049a52713fd2c45f26e8e5c47caee9dbe71f04ba91a0f9e057f1ee1" id=3e8193a4-3710-4515-8f30-aa5919f58320 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:16:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:46.236897933Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:1b668c7d9049a52713fd2c45f26e8e5c47caee9dbe71f04ba91a0f9e057f1ee1 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/1acfe2fa-f121-412d-9d6b-30d7d08ec4ff Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:16:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:46.236933524Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:16:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:46.236944794Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:16:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:46.236957442Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:16:56 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:16:56.217192 2199 scope.go:115] "RemoveContainer" containerID="c04fcb3de8db6fe843ab7c9284973fba9316ec7a530d9b4e63c81d2eef41a66d" Feb 23 19:16:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:16:56.217828 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver 
pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:16:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:16:56.292464 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:16:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:16:56.292766 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:16:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:16:56.292949 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:16:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:16:56.292974 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: 
checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:16:58 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:16:58.217583 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:16:58 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:16:58.217793 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:16:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:58.218204655Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=7dcb0bf4-d78e-40be-9295-dd73b1fe7930 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:16:58 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:16:58.218386 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:16:58 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:16:58.218635 2199 remote_runtime.go:479] "ExecSync cmd from runtime 
service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:16:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:58.218644945Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:16:58 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:16:58.218674 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:16:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:58.224709646Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:53c22eb686fc6073ab823e8173b3af678470c0ea83fe5950d759f695ce03baf7 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/cacf87fc-171b-454f-9c81-094e6c1a9bd6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:16:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:58.224744143Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:16:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:59.236028333Z" level=info msg="NetworkStart: stopping network for sandbox c8e5ee6614223fa3b95fd66e306875629bffc5faa5492dd1bf39e7f3b9a1648a" 
id=8b40e231-22ce-41ce-88f2-673692cfd2d9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:16:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:59.236175408Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:c8e5ee6614223fa3b95fd66e306875629bffc5faa5492dd1bf39e7f3b9a1648a UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/9043edd0-84dd-4f88-9bb6-045fc7f55d65 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:16:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:59.236215467Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:16:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:59.236227683Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:16:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:16:59.236238052Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:17:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:03.236387503Z" level=info msg="NetworkStart: stopping network for sandbox 2a917cc639e57d5aeca33c21aa1d7c12cc76bdd60a1c80c935a736a2594099e4" id=1abe3ba0-93fc-4fa4-b203-8d7a51abdfdb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:17:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:03.236513151Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:2a917cc639e57d5aeca33c21aa1d7c12cc76bdd60a1c80c935a736a2594099e4 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/a89caf2e-336c-45c6-ac39-468261112223 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:17:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:03.236541823Z" level=error msg="error loading cached network config: network 
\"multus-cni-network\" not found in CNI cache" Feb 23 19:17:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:03.236549843Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:17:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:03.236556725Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:17:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:06.237130088Z" level=info msg="NetworkStart: stopping network for sandbox 7b449c99501240f96bf50c32ce8ccef9c8a2ef603fb5da2d856dad01d0efd9d4" id=69bcceb1-a44e-40f7-8336-1cd8325e22f8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:17:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:06.237271762Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:7b449c99501240f96bf50c32ce8ccef9c8a2ef603fb5da2d856dad01d0efd9d4 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/6291dc33-083e-4b2e-9229-cb5a46ea1f94 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:17:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:06.237301481Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:17:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:06.237309133Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:17:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:06.237315776Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:17:11 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:17:11.216911 2199 scope.go:115] "RemoveContainer" containerID="c04fcb3de8db6fe843ab7c9284973fba9316ec7a530d9b4e63c81d2eef41a66d" Feb 23 19:17:11 ip-10-0-136-68 kubenswrapper[2199]: E0223 
19:17:11.217948 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:17:26 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:17:26.217131 2199 scope.go:115] "RemoveContainer" containerID="c04fcb3de8db6fe843ab7c9284973fba9316ec7a530d9b4e63c81d2eef41a66d" Feb 23 19:17:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:17:26.217528 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:17:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:17:26.292472 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:17:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:17:26.292731 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container 
process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:17:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:17:26.292935 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:17:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:17:26.292964 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:17:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:31.247422442Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(1b668c7d9049a52713fd2c45f26e8e5c47caee9dbe71f04ba91a0f9e057f1ee1): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=3e8193a4-3710-4515-8f30-aa5919f58320 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:17:31 
ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:31.247476771Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 1b668c7d9049a52713fd2c45f26e8e5c47caee9dbe71f04ba91a0f9e057f1ee1" id=3e8193a4-3710-4515-8f30-aa5919f58320 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:17:31 ip-10-0-136-68 systemd[1]: run-utsns-1acfe2fa\x2df121\x2d412d\x2d9d6b\x2d30d7d08ec4ff.mount: Deactivated successfully. Feb 23 19:17:31 ip-10-0-136-68 systemd[1]: run-ipcns-1acfe2fa\x2df121\x2d412d\x2d9d6b\x2d30d7d08ec4ff.mount: Deactivated successfully. Feb 23 19:17:31 ip-10-0-136-68 systemd[1]: run-netns-1acfe2fa\x2df121\x2d412d\x2d9d6b\x2d30d7d08ec4ff.mount: Deactivated successfully. Feb 23 19:17:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:31.264319101Z" level=info msg="runSandbox: deleting pod ID 1b668c7d9049a52713fd2c45f26e8e5c47caee9dbe71f04ba91a0f9e057f1ee1 from idIndex" id=3e8193a4-3710-4515-8f30-aa5919f58320 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:17:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:31.264355124Z" level=info msg="runSandbox: removing pod sandbox 1b668c7d9049a52713fd2c45f26e8e5c47caee9dbe71f04ba91a0f9e057f1ee1" id=3e8193a4-3710-4515-8f30-aa5919f58320 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:17:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:31.264386711Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 1b668c7d9049a52713fd2c45f26e8e5c47caee9dbe71f04ba91a0f9e057f1ee1" id=3e8193a4-3710-4515-8f30-aa5919f58320 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:17:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:31.264400570Z" level=info msg="runSandbox: unmounting shmPath for sandbox 1b668c7d9049a52713fd2c45f26e8e5c47caee9dbe71f04ba91a0f9e057f1ee1" id=3e8193a4-3710-4515-8f30-aa5919f58320 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:17:31 ip-10-0-136-68 systemd[1]: 
run-containers-storage-overlay\x2dcontainers-1b668c7d9049a52713fd2c45f26e8e5c47caee9dbe71f04ba91a0f9e057f1ee1-userdata-shm.mount: Deactivated successfully. Feb 23 19:17:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:31.270318392Z" level=info msg="runSandbox: removing pod sandbox from storage: 1b668c7d9049a52713fd2c45f26e8e5c47caee9dbe71f04ba91a0f9e057f1ee1" id=3e8193a4-3710-4515-8f30-aa5919f58320 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:17:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:31.271785122Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=3e8193a4-3710-4515-8f30-aa5919f58320 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:17:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:31.271817777Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=3e8193a4-3710-4515-8f30-aa5919f58320 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:17:31 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:17:31.272047 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(1b668c7d9049a52713fd2c45f26e8e5c47caee9dbe71f04ba91a0f9e057f1ee1): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:17:31 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:17:31.272107 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(1b668c7d9049a52713fd2c45f26e8e5c47caee9dbe71f04ba91a0f9e057f1ee1): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:17:31 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:17:31.272132 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(1b668c7d9049a52713fd2c45f26e8e5c47caee9dbe71f04ba91a0f9e057f1ee1): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:17:31 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:17:31.272187 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(1b668c7d9049a52713fd2c45f26e8e5c47caee9dbe71f04ba91a0f9e057f1ee1): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 19:17:37 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:17:37.216588 2199 scope.go:115] "RemoveContainer" containerID="c04fcb3de8db6fe843ab7c9284973fba9316ec7a530d9b4e63c81d2eef41a66d" Feb 23 19:17:37 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:17:37.216954 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:17:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:43.238653700Z" level=info msg="NetworkStart: stopping network for sandbox 53c22eb686fc6073ab823e8173b3af678470c0ea83fe5950d759f695ce03baf7" id=7dcb0bf4-d78e-40be-9295-dd73b1fe7930 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:17:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:43.238772211Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:53c22eb686fc6073ab823e8173b3af678470c0ea83fe5950d759f695ce03baf7 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/cacf87fc-171b-454f-9c81-094e6c1a9bd6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:17:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:43.238799868Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:17:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:43.238807411Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:17:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 
19:17:43.238815225Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:17:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:44.245661856Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(c8e5ee6614223fa3b95fd66e306875629bffc5faa5492dd1bf39e7f3b9a1648a): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=8b40e231-22ce-41ce-88f2-673692cfd2d9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:17:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:44.245729465Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox c8e5ee6614223fa3b95fd66e306875629bffc5faa5492dd1bf39e7f3b9a1648a" id=8b40e231-22ce-41ce-88f2-673692cfd2d9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:17:44 ip-10-0-136-68 systemd[1]: run-utsns-9043edd0\x2d84dd\x2d4f88\x2d9bb6\x2d045fc7f55d65.mount: Deactivated successfully. Feb 23 19:17:44 ip-10-0-136-68 systemd[1]: run-ipcns-9043edd0\x2d84dd\x2d4f88\x2d9bb6\x2d045fc7f55d65.mount: Deactivated successfully. Feb 23 19:17:44 ip-10-0-136-68 systemd[1]: run-netns-9043edd0\x2d84dd\x2d4f88\x2d9bb6\x2d045fc7f55d65.mount: Deactivated successfully. 
Feb 23 19:17:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:44.265344400Z" level=info msg="runSandbox: deleting pod ID c8e5ee6614223fa3b95fd66e306875629bffc5faa5492dd1bf39e7f3b9a1648a from idIndex" id=8b40e231-22ce-41ce-88f2-673692cfd2d9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:17:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:44.265386552Z" level=info msg="runSandbox: removing pod sandbox c8e5ee6614223fa3b95fd66e306875629bffc5faa5492dd1bf39e7f3b9a1648a" id=8b40e231-22ce-41ce-88f2-673692cfd2d9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:17:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:44.265422005Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox c8e5ee6614223fa3b95fd66e306875629bffc5faa5492dd1bf39e7f3b9a1648a" id=8b40e231-22ce-41ce-88f2-673692cfd2d9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:17:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:44.265435880Z" level=info msg="runSandbox: unmounting shmPath for sandbox c8e5ee6614223fa3b95fd66e306875629bffc5faa5492dd1bf39e7f3b9a1648a" id=8b40e231-22ce-41ce-88f2-673692cfd2d9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:17:44 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-c8e5ee6614223fa3b95fd66e306875629bffc5faa5492dd1bf39e7f3b9a1648a-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:17:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:44.272307295Z" level=info msg="runSandbox: removing pod sandbox from storage: c8e5ee6614223fa3b95fd66e306875629bffc5faa5492dd1bf39e7f3b9a1648a" id=8b40e231-22ce-41ce-88f2-673692cfd2d9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:17:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:44.273913604Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=8b40e231-22ce-41ce-88f2-673692cfd2d9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:17:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:44.273946818Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=8b40e231-22ce-41ce-88f2-673692cfd2d9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:17:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:17:44.274184 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(c8e5ee6614223fa3b95fd66e306875629bffc5faa5492dd1bf39e7f3b9a1648a): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:17:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:17:44.274268 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(c8e5ee6614223fa3b95fd66e306875629bffc5faa5492dd1bf39e7f3b9a1648a): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:17:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:17:44.274326 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(c8e5ee6614223fa3b95fd66e306875629bffc5faa5492dd1bf39e7f3b9a1648a): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:17:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:17:44.274421 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(c8e5ee6614223fa3b95fd66e306875629bffc5faa5492dd1bf39e7f3b9a1648a): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 19:17:46 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:17:46.217222 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:17:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:46.217709109Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=ab655e16-648c-4fa1-9ab9-95c71f1f0c75 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:17:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:46.217781907Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:17:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:46.223849624Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:61c9fe5d3d0e4d2560850730b2d52d6b86943f54abb5bcad348584d523b29197 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/00a72582-2164-4a97-9616-48dac5590253 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:17:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:46.223883609Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:17:48 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:17:48.216701 2199 scope.go:115] "RemoveContainer" containerID="c04fcb3de8db6fe843ab7c9284973fba9316ec7a530d9b4e63c81d2eef41a66d" Feb 23 19:17:48 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:17:48.217310 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:17:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:48.246617790Z" level=error msg="Error stopping network on cleanup: failed to destroy network 
for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(2a917cc639e57d5aeca33c21aa1d7c12cc76bdd60a1c80c935a736a2594099e4): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=1abe3ba0-93fc-4fa4-b203-8d7a51abdfdb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:17:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:48.246666830Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 2a917cc639e57d5aeca33c21aa1d7c12cc76bdd60a1c80c935a736a2594099e4" id=1abe3ba0-93fc-4fa4-b203-8d7a51abdfdb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:17:48 ip-10-0-136-68 systemd[1]: run-utsns-a89caf2e\x2d336c\x2d45c6\x2dac39\x2d468261112223.mount: Deactivated successfully. Feb 23 19:17:48 ip-10-0-136-68 systemd[1]: run-ipcns-a89caf2e\x2d336c\x2d45c6\x2dac39\x2d468261112223.mount: Deactivated successfully. Feb 23 19:17:48 ip-10-0-136-68 systemd[1]: run-netns-a89caf2e\x2d336c\x2d45c6\x2dac39\x2d468261112223.mount: Deactivated successfully. 
Feb 23 19:17:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:48.265322378Z" level=info msg="runSandbox: deleting pod ID 2a917cc639e57d5aeca33c21aa1d7c12cc76bdd60a1c80c935a736a2594099e4 from idIndex" id=1abe3ba0-93fc-4fa4-b203-8d7a51abdfdb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:17:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:48.265360579Z" level=info msg="runSandbox: removing pod sandbox 2a917cc639e57d5aeca33c21aa1d7c12cc76bdd60a1c80c935a736a2594099e4" id=1abe3ba0-93fc-4fa4-b203-8d7a51abdfdb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:17:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:48.265397134Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 2a917cc639e57d5aeca33c21aa1d7c12cc76bdd60a1c80c935a736a2594099e4" id=1abe3ba0-93fc-4fa4-b203-8d7a51abdfdb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:17:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:48.265421874Z" level=info msg="runSandbox: unmounting shmPath for sandbox 2a917cc639e57d5aeca33c21aa1d7c12cc76bdd60a1c80c935a736a2594099e4" id=1abe3ba0-93fc-4fa4-b203-8d7a51abdfdb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:17:48 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-2a917cc639e57d5aeca33c21aa1d7c12cc76bdd60a1c80c935a736a2594099e4-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:17:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:48.271309154Z" level=info msg="runSandbox: removing pod sandbox from storage: 2a917cc639e57d5aeca33c21aa1d7c12cc76bdd60a1c80c935a736a2594099e4" id=1abe3ba0-93fc-4fa4-b203-8d7a51abdfdb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:17:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:48.272860430Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=1abe3ba0-93fc-4fa4-b203-8d7a51abdfdb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:17:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:48.272888922Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=1abe3ba0-93fc-4fa4-b203-8d7a51abdfdb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:17:48 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:17:48.273064 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(2a917cc639e57d5aeca33c21aa1d7c12cc76bdd60a1c80c935a736a2594099e4): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:17:48 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:17:48.273111 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(2a917cc639e57d5aeca33c21aa1d7c12cc76bdd60a1c80c935a736a2594099e4): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:17:48 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:17:48.273132 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(2a917cc639e57d5aeca33c21aa1d7c12cc76bdd60a1c80c935a736a2594099e4): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:17:48 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:17:48.273185 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(2a917cc639e57d5aeca33c21aa1d7c12cc76bdd60a1c80c935a736a2594099e4): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 19:17:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:51.247316037Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(7b449c99501240f96bf50c32ce8ccef9c8a2ef603fb5da2d856dad01d0efd9d4): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=69bcceb1-a44e-40f7-8336-1cd8325e22f8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:17:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:51.247365604Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 7b449c99501240f96bf50c32ce8ccef9c8a2ef603fb5da2d856dad01d0efd9d4" id=69bcceb1-a44e-40f7-8336-1cd8325e22f8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:17:51 ip-10-0-136-68 systemd[1]: run-utsns-6291dc33\x2d083e\x2d4b2e\x2d9229\x2dcb5a46ea1f94.mount: Deactivated successfully. Feb 23 19:17:51 ip-10-0-136-68 systemd[1]: run-ipcns-6291dc33\x2d083e\x2d4b2e\x2d9229\x2dcb5a46ea1f94.mount: Deactivated successfully. Feb 23 19:17:51 ip-10-0-136-68 systemd[1]: run-netns-6291dc33\x2d083e\x2d4b2e\x2d9229\x2dcb5a46ea1f94.mount: Deactivated successfully. 
Feb 23 19:17:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:51.285323230Z" level=info msg="runSandbox: deleting pod ID 7b449c99501240f96bf50c32ce8ccef9c8a2ef603fb5da2d856dad01d0efd9d4 from idIndex" id=69bcceb1-a44e-40f7-8336-1cd8325e22f8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:17:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:51.285354331Z" level=info msg="runSandbox: removing pod sandbox 7b449c99501240f96bf50c32ce8ccef9c8a2ef603fb5da2d856dad01d0efd9d4" id=69bcceb1-a44e-40f7-8336-1cd8325e22f8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:17:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:51.285382586Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 7b449c99501240f96bf50c32ce8ccef9c8a2ef603fb5da2d856dad01d0efd9d4" id=69bcceb1-a44e-40f7-8336-1cd8325e22f8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:17:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:51.285401843Z" level=info msg="runSandbox: unmounting shmPath for sandbox 7b449c99501240f96bf50c32ce8ccef9c8a2ef603fb5da2d856dad01d0efd9d4" id=69bcceb1-a44e-40f7-8336-1cd8325e22f8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:17:51 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-7b449c99501240f96bf50c32ce8ccef9c8a2ef603fb5da2d856dad01d0efd9d4-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:17:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:51.291305672Z" level=info msg="runSandbox: removing pod sandbox from storage: 7b449c99501240f96bf50c32ce8ccef9c8a2ef603fb5da2d856dad01d0efd9d4" id=69bcceb1-a44e-40f7-8336-1cd8325e22f8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:17:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:51.292836505Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=69bcceb1-a44e-40f7-8336-1cd8325e22f8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:17:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:51.292865908Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=69bcceb1-a44e-40f7-8336-1cd8325e22f8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:17:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:17:51.293046 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(7b449c99501240f96bf50c32ce8ccef9c8a2ef603fb5da2d856dad01d0efd9d4): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:17:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:17:51.293118 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(7b449c99501240f96bf50c32ce8ccef9c8a2ef603fb5da2d856dad01d0efd9d4): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:17:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:17:51.293156 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(7b449c99501240f96bf50c32ce8ccef9c8a2ef603fb5da2d856dad01d0efd9d4): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:17:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:17:51.293314 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(7b449c99501240f96bf50c32ce8ccef9c8a2ef603fb5da2d856dad01d0efd9d4): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 19:17:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:17:56.291980 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:17:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:17:56.292213 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:17:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:17:56.292476 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:17:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:17:56.292505 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:17:57 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:17:57.217010 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:17:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:57.217433294Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=7b400b15-e816-423c-86be-d53ef2e2394c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:17:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:57.217490601Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:17:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:57.223406743Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:d34ed24674b0c552a1663e44ca24acd6db241c557a46d12a5a653d6bea6a06b5 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/288d9c1e-d43a-443c-a43d-06e07e42fc69 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:17:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:17:57.223443589Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:17:59 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:17:59.216853 2199 scope.go:115] "RemoveContainer" containerID="c04fcb3de8db6fe843ab7c9284973fba9316ec7a530d9b4e63c81d2eef41a66d" Feb 23 19:17:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:17:59.217267 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:18:03 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:18:03.216970 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 19:18:03 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:18:03.217065 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:18:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:18:03.217376861Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=f1e29b88-bd41-4f91-9144-7c348ed3561f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:18:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:18:03.217418312Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=48c31777-c6be-4652-9482-26b94a1436e7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:18:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:18:03.217466033Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:18:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:18:03.217426347Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:18:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:18:03.225203848Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:fa748be00e839e29f8d0f6276baf31bc3fba7d622534c79df200ca99ba128498 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/81a32a13-4d24-4975-831e-10847326c108 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: 
PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:18:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:18:03.225240610Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:18:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:18:03.225502248Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:460900b98320fe36929b8be4ba2b960e0a800b2de091d4f87f0322ef62b39430 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/4f611ad7-ac9e-442a-83fd-63519d70e623 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:18:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:18:03.225599147Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:18:13 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:18:13.216688 2199 scope.go:115] "RemoveContainer" containerID="c04fcb3de8db6fe843ab7c9284973fba9316ec7a530d9b4e63c81d2eef41a66d" Feb 23 19:18:13 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:18:13.217113 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:18:21 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:18:21.217396 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" 
containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:18:21 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:18:21.217748 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:18:21 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:18:21.218012 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:18:21 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:18:21.218045 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:18:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:18:26.292664 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:18:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:18:26.293185 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:18:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:18:26.293409 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:18:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:18:26.293439 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:18:28 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:18:28.216595 2199 scope.go:115] "RemoveContainer" 
containerID="c04fcb3de8db6fe843ab7c9284973fba9316ec7a530d9b4e63c81d2eef41a66d" Feb 23 19:18:28 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:18:28.217141 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:18:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:18:28.248723647Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(53c22eb686fc6073ab823e8173b3af678470c0ea83fe5950d759f695ce03baf7): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=7dcb0bf4-d78e-40be-9295-dd73b1fe7930 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:18:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:18:28.248777377Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 53c22eb686fc6073ab823e8173b3af678470c0ea83fe5950d759f695ce03baf7" id=7dcb0bf4-d78e-40be-9295-dd73b1fe7930 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:18:28 ip-10-0-136-68 systemd[1]: run-utsns-cacf87fc\x2d171b\x2d454f\x2d9c81\x2d094e6c1a9bd6.mount: Deactivated successfully. Feb 23 19:18:28 ip-10-0-136-68 systemd[1]: run-ipcns-cacf87fc\x2d171b\x2d454f\x2d9c81\x2d094e6c1a9bd6.mount: Deactivated successfully. 
Feb 23 19:18:28 ip-10-0-136-68 systemd[1]: run-netns-cacf87fc\x2d171b\x2d454f\x2d9c81\x2d094e6c1a9bd6.mount: Deactivated successfully. Feb 23 19:18:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:18:28.270327704Z" level=info msg="runSandbox: deleting pod ID 53c22eb686fc6073ab823e8173b3af678470c0ea83fe5950d759f695ce03baf7 from idIndex" id=7dcb0bf4-d78e-40be-9295-dd73b1fe7930 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:18:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:18:28.270364062Z" level=info msg="runSandbox: removing pod sandbox 53c22eb686fc6073ab823e8173b3af678470c0ea83fe5950d759f695ce03baf7" id=7dcb0bf4-d78e-40be-9295-dd73b1fe7930 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:18:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:18:28.270401883Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 53c22eb686fc6073ab823e8173b3af678470c0ea83fe5950d759f695ce03baf7" id=7dcb0bf4-d78e-40be-9295-dd73b1fe7930 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:18:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:18:28.270430226Z" level=info msg="runSandbox: unmounting shmPath for sandbox 53c22eb686fc6073ab823e8173b3af678470c0ea83fe5950d759f695ce03baf7" id=7dcb0bf4-d78e-40be-9295-dd73b1fe7930 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:18:28 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-53c22eb686fc6073ab823e8173b3af678470c0ea83fe5950d759f695ce03baf7-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:18:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:18:28.275302871Z" level=info msg="runSandbox: removing pod sandbox from storage: 53c22eb686fc6073ab823e8173b3af678470c0ea83fe5950d759f695ce03baf7" id=7dcb0bf4-d78e-40be-9295-dd73b1fe7930 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:18:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:18:28.276849387Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=7dcb0bf4-d78e-40be-9295-dd73b1fe7930 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:18:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:18:28.276881054Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=7dcb0bf4-d78e-40be-9295-dd73b1fe7930 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:18:28 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:18:28.277081 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(53c22eb686fc6073ab823e8173b3af678470c0ea83fe5950d759f695ce03baf7): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:18:28 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:18:28.277124 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(53c22eb686fc6073ab823e8173b3af678470c0ea83fe5950d759f695ce03baf7): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:18:28 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:18:28.277155 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(53c22eb686fc6073ab823e8173b3af678470c0ea83fe5950d759f695ce03baf7): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:18:28 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:18:28.277203 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(53c22eb686fc6073ab823e8173b3af678470c0ea83fe5950d759f695ce03baf7): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 19:18:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:18:31.236630401Z" level=info msg="NetworkStart: stopping network for sandbox 61c9fe5d3d0e4d2560850730b2d52d6b86943f54abb5bcad348584d523b29197" id=ab655e16-648c-4fa1-9ab9-95c71f1f0c75 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:18:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:18:31.236750347Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:61c9fe5d3d0e4d2560850730b2d52d6b86943f54abb5bcad348584d523b29197 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/00a72582-2164-4a97-9616-48dac5590253 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:18:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:18:31.236778137Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:18:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:18:31.236785654Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:18:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:18:31.236792166Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:18:39 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:18:39.216768 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:18:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:18:39.217180612Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=694d706d-d022-4b0a-993f-f94a252febc5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:18:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:18:39.217273401Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:18:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:18:39.222911431Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:d3515ee96388f979e17a8a0b2ba610bcc236d5c39737e84aa1cc806d98185d10 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/bb2caf3d-ade6-47bb-85f7-224f7b905bca Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:18:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:18:39.222947825Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:18:42 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:18:42.217388 2199 scope.go:115] "RemoveContainer" containerID="c04fcb3de8db6fe843ab7c9284973fba9316ec7a530d9b4e63c81d2eef41a66d" Feb 23 19:18:42 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:18:42.217861 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:18:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:18:42.236601213Z" level=info msg="NetworkStart: 
stopping network for sandbox d34ed24674b0c552a1663e44ca24acd6db241c557a46d12a5a653d6bea6a06b5" id=7b400b15-e816-423c-86be-d53ef2e2394c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:18:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:18:42.236726663Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:d34ed24674b0c552a1663e44ca24acd6db241c557a46d12a5a653d6bea6a06b5 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/288d9c1e-d43a-443c-a43d-06e07e42fc69 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:18:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:18:42.236766541Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:18:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:18:42.236777518Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:18:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:18:42.236786693Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:18:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:18:48.240812459Z" level=info msg="NetworkStart: stopping network for sandbox 460900b98320fe36929b8be4ba2b960e0a800b2de091d4f87f0322ef62b39430" id=f1e29b88-bd41-4f91-9144-7c348ed3561f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:18:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:18:48.240936765Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:460900b98320fe36929b8be4ba2b960e0a800b2de091d4f87f0322ef62b39430 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/4f611ad7-ac9e-442a-83fd-63519d70e623 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:18:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 
19:18:48.240978155Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:18:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:18:48.240991098Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:18:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:18:48.241001337Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:18:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:18:48.241382276Z" level=info msg="NetworkStart: stopping network for sandbox fa748be00e839e29f8d0f6276baf31bc3fba7d622534c79df200ca99ba128498" id=48c31777-c6be-4652-9482-26b94a1436e7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:18:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:18:48.241457545Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:fa748be00e839e29f8d0f6276baf31bc3fba7d622534c79df200ca99ba128498 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/81a32a13-4d24-4975-831e-10847326c108 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:18:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:18:48.241492539Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:18:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:18:48.241500246Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:18:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:18:48.241506557Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:18:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:18:54.216748 2199 scope.go:115] "RemoveContainer" 
containerID="c04fcb3de8db6fe843ab7c9284973fba9316ec7a530d9b4e63c81d2eef41a66d" Feb 23 19:18:54 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:18:54.217171 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:18:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:18:56.292420 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:18:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:18:56.292660 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:18:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:18:56.292874 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: 
container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:18:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:18:56.292909 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:19:03 ip-10-0-136-68 sudo[13622]: pam_unix(sudo-i:session): session closed for user root Feb 23 19:19:03 ip-10-0-136-68 sshd[13621]: Received disconnect from 10.0.182.221 port 41250:11: disconnected by user Feb 23 19:19:03 ip-10-0-136-68 sshd[13621]: Disconnected from user core 10.0.182.221 port 41250 Feb 23 19:19:03 ip-10-0-136-68 sshd[13591]: pam_unix(sshd:session): session closed for user core Feb 23 19:19:03 ip-10-0-136-68 systemd-logind[985]: Session 1 logged out. Waiting for processes to exit. Feb 23 19:19:03 ip-10-0-136-68 systemd[1]: session-1.scope: Deactivated successfully. Feb 23 19:19:03 ip-10-0-136-68 systemd-logind[985]: Removed session 1. 
Feb 23 19:19:06 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:19:06.217203 2199 scope.go:115] "RemoveContainer" containerID="c04fcb3de8db6fe843ab7c9284973fba9316ec7a530d9b4e63c81d2eef41a66d" Feb 23 19:19:06 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:19:06.217803 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:19:13 ip-10-0-136-68 systemd[1]: Stopping User Manager for UID 1000... Feb 23 19:19:13 ip-10-0-136-68 systemd[13603]: Activating special unit Exit the Session... Feb 23 19:19:13 ip-10-0-136-68 systemd[13603]: Removed slice User Background Tasks Slice. Feb 23 19:19:13 ip-10-0-136-68 systemd[13603]: Stopped target Main User Target. Feb 23 19:19:13 ip-10-0-136-68 systemd[13603]: Stopped target Basic System. Feb 23 19:19:13 ip-10-0-136-68 systemd[13603]: Stopped target Paths. Feb 23 19:19:13 ip-10-0-136-68 systemd[13603]: Stopped target Sockets. Feb 23 19:19:13 ip-10-0-136-68 systemd[13603]: Stopped target Timers. Feb 23 19:19:13 ip-10-0-136-68 systemd[13603]: Stopped Daily Cleanup of User's Temporary Directories. Feb 23 19:19:13 ip-10-0-136-68 systemd[13603]: Closed D-Bus User Message Bus Socket. Feb 23 19:19:13 ip-10-0-136-68 systemd[13603]: Stopped Create User's Volatile Files and Directories. Feb 23 19:19:13 ip-10-0-136-68 systemd[13603]: Removed slice User Application Slice. Feb 23 19:19:13 ip-10-0-136-68 systemd[13603]: Reached target Shutdown. Feb 23 19:19:13 ip-10-0-136-68 systemd[13603]: Finished Exit the Session. Feb 23 19:19:13 ip-10-0-136-68 systemd[13603]: Reached target Exit the Session. Feb 23 19:19:13 ip-10-0-136-68 systemd[1]: user@1000.service: Deactivated successfully. 
Feb 23 19:19:13 ip-10-0-136-68 systemd[1]: Stopped User Manager for UID 1000. Feb 23 19:19:13 ip-10-0-136-68 systemd[1]: Stopping User Runtime Directory /run/user/1000... Feb 23 19:19:13 ip-10-0-136-68 systemd[1]: run-user-1000.mount: Deactivated successfully. Feb 23 19:19:13 ip-10-0-136-68 systemd[1]: user-runtime-dir@1000.service: Deactivated successfully. Feb 23 19:19:13 ip-10-0-136-68 systemd[1]: Stopped User Runtime Directory /run/user/1000. Feb 23 19:19:13 ip-10-0-136-68 systemd[1]: Removed slice User Slice of UID 1000. Feb 23 19:19:13 ip-10-0-136-68 systemd[1]: user-1000.slice: Consumed 1.324s CPU time. Feb 23 19:19:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:16.246997689Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(61c9fe5d3d0e4d2560850730b2d52d6b86943f54abb5bcad348584d523b29197): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=ab655e16-648c-4fa1-9ab9-95c71f1f0c75 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:19:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:16.247187732Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 61c9fe5d3d0e4d2560850730b2d52d6b86943f54abb5bcad348584d523b29197" id=ab655e16-648c-4fa1-9ab9-95c71f1f0c75 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:19:16 ip-10-0-136-68 systemd[1]: run-utsns-00a72582\x2d2164\x2d4a97\x2d9616\x2d48dac5590253.mount: Deactivated successfully. Feb 23 19:19:16 ip-10-0-136-68 systemd[1]: run-ipcns-00a72582\x2d2164\x2d4a97\x2d9616\x2d48dac5590253.mount: Deactivated successfully. 
Feb 23 19:19:16 ip-10-0-136-68 systemd[1]: run-netns-00a72582\x2d2164\x2d4a97\x2d9616\x2d48dac5590253.mount: Deactivated successfully. Feb 23 19:19:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:16.274356291Z" level=info msg="runSandbox: deleting pod ID 61c9fe5d3d0e4d2560850730b2d52d6b86943f54abb5bcad348584d523b29197 from idIndex" id=ab655e16-648c-4fa1-9ab9-95c71f1f0c75 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:19:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:16.274387829Z" level=info msg="runSandbox: removing pod sandbox 61c9fe5d3d0e4d2560850730b2d52d6b86943f54abb5bcad348584d523b29197" id=ab655e16-648c-4fa1-9ab9-95c71f1f0c75 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:19:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:16.274427047Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 61c9fe5d3d0e4d2560850730b2d52d6b86943f54abb5bcad348584d523b29197" id=ab655e16-648c-4fa1-9ab9-95c71f1f0c75 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:19:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:16.274442018Z" level=info msg="runSandbox: unmounting shmPath for sandbox 61c9fe5d3d0e4d2560850730b2d52d6b86943f54abb5bcad348584d523b29197" id=ab655e16-648c-4fa1-9ab9-95c71f1f0c75 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:19:16 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-61c9fe5d3d0e4d2560850730b2d52d6b86943f54abb5bcad348584d523b29197-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:19:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:16.279302194Z" level=info msg="runSandbox: removing pod sandbox from storage: 61c9fe5d3d0e4d2560850730b2d52d6b86943f54abb5bcad348584d523b29197" id=ab655e16-648c-4fa1-9ab9-95c71f1f0c75 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:19:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:16.281031689Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=ab655e16-648c-4fa1-9ab9-95c71f1f0c75 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:19:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:16.281058987Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=ab655e16-648c-4fa1-9ab9-95c71f1f0c75 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:19:16 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:19:16.281227 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(61c9fe5d3d0e4d2560850730b2d52d6b86943f54abb5bcad348584d523b29197): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 19:19:16 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:19:16.281434 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(61c9fe5d3d0e4d2560850730b2d52d6b86943f54abb5bcad348584d523b29197): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk"
Feb 23 19:19:16 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:19:16.281458 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(61c9fe5d3d0e4d2560850730b2d52d6b86943f54abb5bcad348584d523b29197): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk"
Feb 23 19:19:16 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:19:16.281521 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(61c9fe5d3d0e4d2560850730b2d52d6b86943f54abb5bcad348584d523b29197): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687
Feb 23 19:19:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:19:20.217468 2199 scope.go:115] "RemoveContainer" containerID="c04fcb3de8db6fe843ab7c9284973fba9316ec7a530d9b4e63c81d2eef41a66d"
Feb 23 19:19:20 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:19:20.217870 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 19:19:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:24.237582546Z" level=info msg="NetworkStart: stopping network for sandbox d3515ee96388f979e17a8a0b2ba610bcc236d5c39737e84aa1cc806d98185d10" id=694d706d-d022-4b0a-993f-f94a252febc5 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:19:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:24.237722673Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:d3515ee96388f979e17a8a0b2ba610bcc236d5c39737e84aa1cc806d98185d10 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/bb2caf3d-ade6-47bb-85f7-224f7b905bca Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:19:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:24.237764783Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 19:19:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:24.237776551Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 19:19:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:24.237786490Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:19:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:19:26.292092 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:19:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:19:26.292374 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:19:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:19:26.292588 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:19:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:19:26.292623 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 19:19:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:27.247149608Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(d34ed24674b0c552a1663e44ca24acd6db241c557a46d12a5a653d6bea6a06b5): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=7b400b15-e816-423c-86be-d53ef2e2394c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:19:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:27.247206849Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox d34ed24674b0c552a1663e44ca24acd6db241c557a46d12a5a653d6bea6a06b5" id=7b400b15-e816-423c-86be-d53ef2e2394c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:19:27 ip-10-0-136-68 systemd[1]: run-utsns-288d9c1e\x2dd43a\x2d443c\x2da43d\x2d06e07e42fc69.mount: Deactivated successfully.
Feb 23 19:19:27 ip-10-0-136-68 systemd[1]: run-ipcns-288d9c1e\x2dd43a\x2d443c\x2da43d\x2d06e07e42fc69.mount: Deactivated successfully.
Feb 23 19:19:27 ip-10-0-136-68 systemd[1]: run-netns-288d9c1e\x2dd43a\x2d443c\x2da43d\x2d06e07e42fc69.mount: Deactivated successfully.
Feb 23 19:19:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:27.267325721Z" level=info msg="runSandbox: deleting pod ID d34ed24674b0c552a1663e44ca24acd6db241c557a46d12a5a653d6bea6a06b5 from idIndex" id=7b400b15-e816-423c-86be-d53ef2e2394c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:19:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:27.267369492Z" level=info msg="runSandbox: removing pod sandbox d34ed24674b0c552a1663e44ca24acd6db241c557a46d12a5a653d6bea6a06b5" id=7b400b15-e816-423c-86be-d53ef2e2394c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:19:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:27.267410964Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox d34ed24674b0c552a1663e44ca24acd6db241c557a46d12a5a653d6bea6a06b5" id=7b400b15-e816-423c-86be-d53ef2e2394c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:19:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:27.267432478Z" level=info msg="runSandbox: unmounting shmPath for sandbox d34ed24674b0c552a1663e44ca24acd6db241c557a46d12a5a653d6bea6a06b5" id=7b400b15-e816-423c-86be-d53ef2e2394c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:19:27 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-d34ed24674b0c552a1663e44ca24acd6db241c557a46d12a5a653d6bea6a06b5-userdata-shm.mount: Deactivated successfully.
Feb 23 19:19:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:27.273332420Z" level=info msg="runSandbox: removing pod sandbox from storage: d34ed24674b0c552a1663e44ca24acd6db241c557a46d12a5a653d6bea6a06b5" id=7b400b15-e816-423c-86be-d53ef2e2394c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:19:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:27.274933058Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=7b400b15-e816-423c-86be-d53ef2e2394c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:19:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:27.274965519Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=7b400b15-e816-423c-86be-d53ef2e2394c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:19:27 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:19:27.275183 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(d34ed24674b0c552a1663e44ca24acd6db241c557a46d12a5a653d6bea6a06b5): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 19:19:27 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:19:27.275233 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(d34ed24674b0c552a1663e44ca24acd6db241c557a46d12a5a653d6bea6a06b5): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz"
Feb 23 19:19:27 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:19:27.275283 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(d34ed24674b0c552a1663e44ca24acd6db241c557a46d12a5a653d6bea6a06b5): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz"
Feb 23 19:19:27 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:19:27.275337 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(d34ed24674b0c552a1663e44ca24acd6db241c557a46d12a5a653d6bea6a06b5): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7
Feb 23 19:19:29 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:19:29.216970 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk"
Feb 23 19:19:29 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:19:29.217295 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:19:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:29.217519851Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=cbab5d09-89c6-4139-892b-5fe62e24195c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:19:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:29.217862983Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 19:19:29 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:19:29.218274 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:19:29 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:19:29.218563 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:19:29 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:19:29.218629 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 19:19:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:29.224395042Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:2c6c02eaea1d8c0ed9906e75b33dc24cea625cf41a9b1bbbdd54831a6b47b7fa UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/77358733-7bfb-419a-b1a8-7d4bd000a087 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:19:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:29.224422429Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:19:33 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:19:33.217264 2199 scope.go:115] "RemoveContainer" containerID="c04fcb3de8db6fe843ab7c9284973fba9316ec7a530d9b4e63c81d2eef41a66d"
Feb 23 19:19:33 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:19:33.217677 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 19:19:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:33.252507816Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(460900b98320fe36929b8be4ba2b960e0a800b2de091d4f87f0322ef62b39430): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f1e29b88-bd41-4f91-9144-7c348ed3561f name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:19:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:33.252565527Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 460900b98320fe36929b8be4ba2b960e0a800b2de091d4f87f0322ef62b39430" id=f1e29b88-bd41-4f91-9144-7c348ed3561f name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:19:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:33.252615568Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(fa748be00e839e29f8d0f6276baf31bc3fba7d622534c79df200ca99ba128498): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=48c31777-c6be-4652-9482-26b94a1436e7 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:19:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:33.252643605Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox fa748be00e839e29f8d0f6276baf31bc3fba7d622534c79df200ca99ba128498" id=48c31777-c6be-4652-9482-26b94a1436e7 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:19:33 ip-10-0-136-68 systemd[1]: run-utsns-4f611ad7\x2dac9e\x2d442a\x2d83fd\x2d63519d70e623.mount: Deactivated successfully.
Feb 23 19:19:33 ip-10-0-136-68 systemd[1]: run-utsns-81a32a13\x2d4d24\x2d4975\x2d831e\x2d10847326c108.mount: Deactivated successfully.
Feb 23 19:19:33 ip-10-0-136-68 systemd[1]: run-ipcns-4f611ad7\x2dac9e\x2d442a\x2d83fd\x2d63519d70e623.mount: Deactivated successfully.
Feb 23 19:19:33 ip-10-0-136-68 systemd[1]: run-ipcns-81a32a13\x2d4d24\x2d4975\x2d831e\x2d10847326c108.mount: Deactivated successfully.
Feb 23 19:19:33 ip-10-0-136-68 systemd[1]: run-netns-4f611ad7\x2dac9e\x2d442a\x2d83fd\x2d63519d70e623.mount: Deactivated successfully.
Feb 23 19:19:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:33.274329344Z" level=info msg="runSandbox: deleting pod ID 460900b98320fe36929b8be4ba2b960e0a800b2de091d4f87f0322ef62b39430 from idIndex" id=f1e29b88-bd41-4f91-9144-7c348ed3561f name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:19:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:33.274368308Z" level=info msg="runSandbox: removing pod sandbox 460900b98320fe36929b8be4ba2b960e0a800b2de091d4f87f0322ef62b39430" id=f1e29b88-bd41-4f91-9144-7c348ed3561f name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:19:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:33.274409437Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 460900b98320fe36929b8be4ba2b960e0a800b2de091d4f87f0322ef62b39430" id=f1e29b88-bd41-4f91-9144-7c348ed3561f name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:19:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:33.274427951Z" level=info msg="runSandbox: unmounting shmPath for sandbox 460900b98320fe36929b8be4ba2b960e0a800b2de091d4f87f0322ef62b39430" id=f1e29b88-bd41-4f91-9144-7c348ed3561f name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:19:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:33.280322523Z" level=info msg="runSandbox: removing pod sandbox from storage: 460900b98320fe36929b8be4ba2b960e0a800b2de091d4f87f0322ef62b39430" id=f1e29b88-bd41-4f91-9144-7c348ed3561f name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:19:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:33.280339283Z" level=info msg="runSandbox: deleting pod ID fa748be00e839e29f8d0f6276baf31bc3fba7d622534c79df200ca99ba128498 from idIndex" id=48c31777-c6be-4652-9482-26b94a1436e7 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:19:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:33.280474600Z" level=info msg="runSandbox: removing pod sandbox fa748be00e839e29f8d0f6276baf31bc3fba7d622534c79df200ca99ba128498" id=48c31777-c6be-4652-9482-26b94a1436e7 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:19:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:33.280509898Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox fa748be00e839e29f8d0f6276baf31bc3fba7d622534c79df200ca99ba128498" id=48c31777-c6be-4652-9482-26b94a1436e7 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:19:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:33.280532252Z" level=info msg="runSandbox: unmounting shmPath for sandbox fa748be00e839e29f8d0f6276baf31bc3fba7d622534c79df200ca99ba128498" id=48c31777-c6be-4652-9482-26b94a1436e7 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:19:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:33.281993465Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=f1e29b88-bd41-4f91-9144-7c348ed3561f name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:19:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:33.282021746Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=f1e29b88-bd41-4f91-9144-7c348ed3561f name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:19:33 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:19:33.282212 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(460900b98320fe36929b8be4ba2b960e0a800b2de091d4f87f0322ef62b39430): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 19:19:33 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:19:33.282293 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(460900b98320fe36929b8be4ba2b960e0a800b2de091d4f87f0322ef62b39430): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4"
Feb 23 19:19:33 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:19:33.282326 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(460900b98320fe36929b8be4ba2b960e0a800b2de091d4f87f0322ef62b39430): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4"
Feb 23 19:19:33 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:19:33.282385 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(460900b98320fe36929b8be4ba2b960e0a800b2de091d4f87f0322ef62b39430): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f
Feb 23 19:19:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:33.284318920Z" level=info msg="runSandbox: removing pod sandbox from storage: fa748be00e839e29f8d0f6276baf31bc3fba7d622534c79df200ca99ba128498" id=48c31777-c6be-4652-9482-26b94a1436e7 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:19:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:33.285649376Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=48c31777-c6be-4652-9482-26b94a1436e7 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:19:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:33.285675556Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=48c31777-c6be-4652-9482-26b94a1436e7 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:19:33 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:19:33.285852 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(fa748be00e839e29f8d0f6276baf31bc3fba7d622534c79df200ca99ba128498): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 19:19:33 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:19:33.285891 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(fa748be00e839e29f8d0f6276baf31bc3fba7d622534c79df200ca99ba128498): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 19:19:33 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:19:33.285912 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(fa748be00e839e29f8d0f6276baf31bc3fba7d622534c79df200ca99ba128498): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 19:19:33 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:19:33.285968 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(fa748be00e839e29f8d0f6276baf31bc3fba7d622534c79df200ca99ba128498): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77
Feb 23 19:19:34 ip-10-0-136-68 systemd[1]: run-netns-81a32a13\x2d4d24\x2d4975\x2d831e\x2d10847326c108.mount: Deactivated successfully.
Feb 23 19:19:34 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-460900b98320fe36929b8be4ba2b960e0a800b2de091d4f87f0322ef62b39430-userdata-shm.mount: Deactivated successfully.
Feb 23 19:19:34 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-fa748be00e839e29f8d0f6276baf31bc3fba7d622534c79df200ca99ba128498-userdata-shm.mount: Deactivated successfully. Feb 23 19:19:40 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:19:40.216887 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:19:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:40.217348183Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=7e85053f-dea2-408e-ab92-74170054ade8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:19:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:40.217761899Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:19:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:40.225374601Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:8409109f784e73f0e892e51702cfce03504f72d2c9fcd8015b0bab4d1b520b09 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/da64e2fb-039e-477d-a957-ff31c7cc16f8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:19:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:40.225406849Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:19:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:19:44.216506 2199 scope.go:115] "RemoveContainer" containerID="c04fcb3de8db6fe843ab7c9284973fba9316ec7a530d9b4e63c81d2eef41a66d" Feb 23 19:19:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:19:44.216910 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver 
pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:19:45 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:19:45.216429 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 19:19:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:45.216841100Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=25489c73-3328-4f4b-8e18-9bfcf063d031 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:19:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:45.216910717Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:19:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:45.222311005Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:8d8007f636058b5fe0bd0354b0c4abc1103003195105503838f4f25c1749d26f UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/95df9a17-015b-4293-868b-d2c1497cb44d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:19:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:45.222342804Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:19:48 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:19:48.217234 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:19:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:48.217681743Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=920dcdb1-1210-4122-85a6-d90720ac4765 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:19:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:48.217745634Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:19:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:48.223505014Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:a9a0348f2ecbedf2215392f2891d47942fd782961a79cc3f8b63e2996663ebd3 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/773f6368-51c8-4f1c-b808-aa0ebd598745 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:19:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:19:48.223532790Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:19:55 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:19:55.216494 2199 scope.go:115] "RemoveContainer" containerID="c04fcb3de8db6fe843ab7c9284973fba9316ec7a530d9b4e63c81d2eef41a66d" Feb 23 19:19:55 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:19:55.217056 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:19:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 
19:19:56.292474 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:19:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:19:56.292723 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:19:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:19:56.292946 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:19:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:19:56.292971 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" 
podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:20:07 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:20:07.217114 2199 scope.go:115] "RemoveContainer" containerID="c04fcb3de8db6fe843ab7c9284973fba9316ec7a530d9b4e63c81d2eef41a66d" Feb 23 19:20:07 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:20:07.217511 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:20:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:20:09.247457274Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(d3515ee96388f979e17a8a0b2ba610bcc236d5c39737e84aa1cc806d98185d10): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=694d706d-d022-4b0a-993f-f94a252febc5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:20:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:20:09.247506438Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox d3515ee96388f979e17a8a0b2ba610bcc236d5c39737e84aa1cc806d98185d10" id=694d706d-d022-4b0a-993f-f94a252febc5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:20:09 ip-10-0-136-68 systemd[1]: run-utsns-bb2caf3d\x2dade6\x2d47bb\x2d85f7\x2d224f7b905bca.mount: Deactivated 
successfully. Feb 23 19:20:09 ip-10-0-136-68 systemd[1]: run-ipcns-bb2caf3d\x2dade6\x2d47bb\x2d85f7\x2d224f7b905bca.mount: Deactivated successfully. Feb 23 19:20:09 ip-10-0-136-68 systemd[1]: run-netns-bb2caf3d\x2dade6\x2d47bb\x2d85f7\x2d224f7b905bca.mount: Deactivated successfully. Feb 23 19:20:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:20:09.278331259Z" level=info msg="runSandbox: deleting pod ID d3515ee96388f979e17a8a0b2ba610bcc236d5c39737e84aa1cc806d98185d10 from idIndex" id=694d706d-d022-4b0a-993f-f94a252febc5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:20:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:20:09.278363971Z" level=info msg="runSandbox: removing pod sandbox d3515ee96388f979e17a8a0b2ba610bcc236d5c39737e84aa1cc806d98185d10" id=694d706d-d022-4b0a-993f-f94a252febc5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:20:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:20:09.278395773Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox d3515ee96388f979e17a8a0b2ba610bcc236d5c39737e84aa1cc806d98185d10" id=694d706d-d022-4b0a-993f-f94a252febc5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:20:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:20:09.278422449Z" level=info msg="runSandbox: unmounting shmPath for sandbox d3515ee96388f979e17a8a0b2ba610bcc236d5c39737e84aa1cc806d98185d10" id=694d706d-d022-4b0a-993f-f94a252febc5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:20:09 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-d3515ee96388f979e17a8a0b2ba610bcc236d5c39737e84aa1cc806d98185d10-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:20:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:20:09.291303603Z" level=info msg="runSandbox: removing pod sandbox from storage: d3515ee96388f979e17a8a0b2ba610bcc236d5c39737e84aa1cc806d98185d10" id=694d706d-d022-4b0a-993f-f94a252febc5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:20:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:20:09.292815416Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=694d706d-d022-4b0a-993f-f94a252febc5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:20:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:20:09.292848116Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=694d706d-d022-4b0a-993f-f94a252febc5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:20:09 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:20:09.293060 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(d3515ee96388f979e17a8a0b2ba610bcc236d5c39737e84aa1cc806d98185d10): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:20:09 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:20:09.293116 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(d3515ee96388f979e17a8a0b2ba610bcc236d5c39737e84aa1cc806d98185d10): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:20:09 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:20:09.293143 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(d3515ee96388f979e17a8a0b2ba610bcc236d5c39737e84aa1cc806d98185d10): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:20:09 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:20:09.293200 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(d3515ee96388f979e17a8a0b2ba610bcc236d5c39737e84aa1cc806d98185d10): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 19:20:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:20:14.236744228Z" level=info msg="NetworkStart: stopping network for sandbox 2c6c02eaea1d8c0ed9906e75b33dc24cea625cf41a9b1bbbdd54831a6b47b7fa" id=cbab5d09-89c6-4139-892b-5fe62e24195c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:20:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:20:14.236865080Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:2c6c02eaea1d8c0ed9906e75b33dc24cea625cf41a9b1bbbdd54831a6b47b7fa UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/77358733-7bfb-419a-b1a8-7d4bd000a087 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:20:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:20:14.236906482Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:20:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:20:14.236917950Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:20:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:20:14.236930314Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:20:15 ip-10-0-136-68 NetworkManager[1177]: [1677180014.9999] dhcp4 (br-ex): state changed new lease, address=10.0.136.68 Feb 23 19:20:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:20:20.223930110Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3" id=d668d3ba-3f8d-423f-8e6a-10fbe9b8319a name=/runtime.v1.ImageService/ImageStatus Feb 23 19:20:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 
19:20:20.224138937Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:51cb7555d0a960fc0daae74a5b5c47c19fd1ae7bd31e2616ebc88f696f511e4d,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3],Size_:351828271,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=d668d3ba-3f8d-423f-8e6a-10fbe9b8319a name=/runtime.v1.ImageService/ImageStatus Feb 23 19:20:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:20:21.217048 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:20:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:20:21.217645233Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=e70412df-2ff0-42a5-a30e-df13dffcc2a2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:20:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:20:21.217712745Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:20:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:20:21.222991273Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:dc06f61a862cee3e6d337a5e4c7b14f47454e484c8a98be39e6a9465a9f4d33c UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/29888820-5b14-4a74-b28a-a1632842b81f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:20:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:20:21.223027983Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:20:22 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:20:22.216807 2199 scope.go:115] "RemoveContainer" 
containerID="c04fcb3de8db6fe843ab7c9284973fba9316ec7a530d9b4e63c81d2eef41a66d" Feb 23 19:20:22 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:20:22.217211 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:20:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:20:25.237162301Z" level=info msg="NetworkStart: stopping network for sandbox 8409109f784e73f0e892e51702cfce03504f72d2c9fcd8015b0bab4d1b520b09" id=7e85053f-dea2-408e-ab92-74170054ade8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:20:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:20:25.237310444Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:8409109f784e73f0e892e51702cfce03504f72d2c9fcd8015b0bab4d1b520b09 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/da64e2fb-039e-477d-a957-ff31c7cc16f8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:20:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:20:25.237345302Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:20:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:20:25.237352880Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:20:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:20:25.237359753Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:20:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:20:26.292096 2199 remote_runtime.go:479] "ExecSync cmd from runtime service 
failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:20:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:20:26.292391 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:20:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:20:26.292632 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:20:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:20:26.292675 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:20:30 
ip-10-0-136-68 crio[2158]: time="2023-02-23 19:20:30.234661538Z" level=info msg="NetworkStart: stopping network for sandbox 8d8007f636058b5fe0bd0354b0c4abc1103003195105503838f4f25c1749d26f" id=25489c73-3328-4f4b-8e18-9bfcf063d031 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:20:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:20:30.234768111Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:8d8007f636058b5fe0bd0354b0c4abc1103003195105503838f4f25c1749d26f UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/95df9a17-015b-4293-868b-d2c1497cb44d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:20:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:20:30.234796480Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:20:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:20:30.234804262Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:20:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:20:30.234811057Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:20:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:20:33.235081472Z" level=info msg="NetworkStart: stopping network for sandbox a9a0348f2ecbedf2215392f2891d47942fd782961a79cc3f8b63e2996663ebd3" id=920dcdb1-1210-4122-85a6-d90720ac4765 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:20:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:20:33.235197296Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:a9a0348f2ecbedf2215392f2891d47942fd782961a79cc3f8b63e2996663ebd3 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/773f6368-51c8-4f1c-b808-aa0ebd598745 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] 
Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:20:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:20:33.235229683Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:20:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:20:33.235237867Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:20:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:20:33.235274610Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:20:37 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:20:37.216944 2199 scope.go:115] "RemoveContainer" containerID="c04fcb3de8db6fe843ab7c9284973fba9316ec7a530d9b4e63c81d2eef41a66d" Feb 23 19:20:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:20:37.217805661Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=bee2e679-e1cc-4864-8467-558557a19875 name=/runtime.v1.ImageService/ImageStatus Feb 23 19:20:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:20:37.217988604Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=bee2e679-e1cc-4864-8467-558557a19875 name=/runtime.v1.ImageService/ImageStatus Feb 23 19:20:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:20:37.218612550Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" 
id=439ff377-b8b9-4ede-a09b-aa0575f24911 name=/runtime.v1.ImageService/ImageStatus Feb 23 19:20:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:20:37.218773076Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=439ff377-b8b9-4ede-a09b-aa0575f24911 name=/runtime.v1.ImageService/ImageStatus Feb 23 19:20:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:20:37.219407982Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=bf00a8b9-6aa0-4915-ad26-947bdb39a344 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 19:20:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:20:37.219528536Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:20:37 ip-10-0-136-68 systemd[1]: Started crio-conmon-16ea003db96e10fee523e37fb9ea5690cb5e551ed3cb9948ab7a60d89b4afe6c.scope. Feb 23 19:20:37 ip-10-0-136-68 systemd[1]: Started libcontainer container 16ea003db96e10fee523e37fb9ea5690cb5e551ed3cb9948ab7a60d89b4afe6c. Feb 23 19:20:37 ip-10-0-136-68 conmon[15297]: conmon 16ea003db96e10fee523 : Failed to write to cgroup.event_control Operation not supported Feb 23 19:20:37 ip-10-0-136-68 systemd[1]: crio-conmon-16ea003db96e10fee523e37fb9ea5690cb5e551ed3cb9948ab7a60d89b4afe6c.scope: Deactivated successfully. 
Feb 23 19:20:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:20:37.369037944Z" level=info msg="Created container 16ea003db96e10fee523e37fb9ea5690cb5e551ed3cb9948ab7a60d89b4afe6c: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=bf00a8b9-6aa0-4915-ad26-947bdb39a344 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 19:20:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:20:37.369481730Z" level=info msg="Starting container: 16ea003db96e10fee523e37fb9ea5690cb5e551ed3cb9948ab7a60d89b4afe6c" id=adba83f6-bed2-49de-b657-e5ec404c2ec5 name=/runtime.v1.RuntimeService/StartContainer Feb 23 19:20:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:20:37.376349766Z" level=info msg="Started container" PID=15309 containerID=16ea003db96e10fee523e37fb9ea5690cb5e551ed3cb9948ab7a60d89b4afe6c description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver id=adba83f6-bed2-49de-b657-e5ec404c2ec5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4ac357af36e7dd4792f93249425e93fd84aed45840e9fbb3d0ae6c0af964798 Feb 23 19:20:37 ip-10-0-136-68 systemd[1]: crio-16ea003db96e10fee523e37fb9ea5690cb5e551ed3cb9948ab7a60d89b4afe6c.scope: Deactivated successfully. 
Feb 23 19:20:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:20:41.603101792Z" level=warning msg="Failed to find container exit file for c04fcb3de8db6fe843ab7c9284973fba9316ec7a530d9b4e63c81d2eef41a66d: timed out waiting for the condition" id=a130bbff-321e-4418-aa87-ecc8f917e93b name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:20:41 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:20:41.604240 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerStarted Data:16ea003db96e10fee523e37fb9ea5690cb5e551ed3cb9948ab7a60d89b4afe6c} Feb 23 19:20:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:20:54.872608 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:20:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:20:54.872665 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:20:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:20:56.216908 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:20:56 ip-10-0-136-68 
kubenswrapper[2199]: E0223 19:20:56.217206 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:20:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:20:56.217437 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:20:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:20:56.217468 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:20:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:20:56.291718 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" 
containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:20:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:20:56.291942 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:20:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:20:56.292117 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:20:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:20:56.292166 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:20:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:20:59.246981457Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox 
k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(2c6c02eaea1d8c0ed9906e75b33dc24cea625cf41a9b1bbbdd54831a6b47b7fa): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=cbab5d09-89c6-4139-892b-5fe62e24195c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:20:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:20:59.247028181Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 2c6c02eaea1d8c0ed9906e75b33dc24cea625cf41a9b1bbbdd54831a6b47b7fa" id=cbab5d09-89c6-4139-892b-5fe62e24195c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:20:59 ip-10-0-136-68 systemd[1]: run-utsns-77358733\x2d7bfb\x2d419a\x2db1a8\x2d7d4bd000a087.mount: Deactivated successfully. Feb 23 19:20:59 ip-10-0-136-68 systemd[1]: run-ipcns-77358733\x2d7bfb\x2d419a\x2db1a8\x2d7d4bd000a087.mount: Deactivated successfully. Feb 23 19:20:59 ip-10-0-136-68 systemd[1]: run-netns-77358733\x2d7bfb\x2d419a\x2db1a8\x2d7d4bd000a087.mount: Deactivated successfully. 
Feb 23 19:20:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:20:59.285328138Z" level=info msg="runSandbox: deleting pod ID 2c6c02eaea1d8c0ed9906e75b33dc24cea625cf41a9b1bbbdd54831a6b47b7fa from idIndex" id=cbab5d09-89c6-4139-892b-5fe62e24195c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:20:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:20:59.285362511Z" level=info msg="runSandbox: removing pod sandbox 2c6c02eaea1d8c0ed9906e75b33dc24cea625cf41a9b1bbbdd54831a6b47b7fa" id=cbab5d09-89c6-4139-892b-5fe62e24195c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:20:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:20:59.285398855Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 2c6c02eaea1d8c0ed9906e75b33dc24cea625cf41a9b1bbbdd54831a6b47b7fa" id=cbab5d09-89c6-4139-892b-5fe62e24195c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:20:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:20:59.285426551Z" level=info msg="runSandbox: unmounting shmPath for sandbox 2c6c02eaea1d8c0ed9906e75b33dc24cea625cf41a9b1bbbdd54831a6b47b7fa" id=cbab5d09-89c6-4139-892b-5fe62e24195c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:20:59 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-2c6c02eaea1d8c0ed9906e75b33dc24cea625cf41a9b1bbbdd54831a6b47b7fa-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:20:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:20:59.292297148Z" level=info msg="runSandbox: removing pod sandbox from storage: 2c6c02eaea1d8c0ed9906e75b33dc24cea625cf41a9b1bbbdd54831a6b47b7fa" id=cbab5d09-89c6-4139-892b-5fe62e24195c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:20:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:20:59.293845632Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=cbab5d09-89c6-4139-892b-5fe62e24195c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:20:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:20:59.293872504Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=cbab5d09-89c6-4139-892b-5fe62e24195c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:20:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:20:59.294084 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(2c6c02eaea1d8c0ed9906e75b33dc24cea625cf41a9b1bbbdd54831a6b47b7fa): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:20:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:20:59.294156 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(2c6c02eaea1d8c0ed9906e75b33dc24cea625cf41a9b1bbbdd54831a6b47b7fa): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:20:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:20:59.294196 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(2c6c02eaea1d8c0ed9906e75b33dc24cea625cf41a9b1bbbdd54831a6b47b7fa): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:20:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:20:59.294327 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(2c6c02eaea1d8c0ed9906e75b33dc24cea625cf41a9b1bbbdd54831a6b47b7fa): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 19:21:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:21:04.872640 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:21:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:21:04.872704 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:21:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:06.234653436Z" level=info msg="NetworkStart: stopping network for sandbox dc06f61a862cee3e6d337a5e4c7b14f47454e484c8a98be39e6a9465a9f4d33c" id=e70412df-2ff0-42a5-a30e-df13dffcc2a2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:21:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:06.234774937Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:dc06f61a862cee3e6d337a5e4c7b14f47454e484c8a98be39e6a9465a9f4d33c UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/29888820-5b14-4a74-b28a-a1632842b81f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:21:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:06.234804166Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:21:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:06.234811874Z" level=warning msg="falling back to loading from existing 
plugins on disk" Feb 23 19:21:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:06.234819123Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:21:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:10.246649676Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(8409109f784e73f0e892e51702cfce03504f72d2c9fcd8015b0bab4d1b520b09): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=7e85053f-dea2-408e-ab92-74170054ade8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:21:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:10.246726810Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 8409109f784e73f0e892e51702cfce03504f72d2c9fcd8015b0bab4d1b520b09" id=7e85053f-dea2-408e-ab92-74170054ade8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:21:10 ip-10-0-136-68 systemd[1]: run-utsns-da64e2fb\x2d039e\x2d477d\x2da957\x2dff31c7cc16f8.mount: Deactivated successfully. Feb 23 19:21:10 ip-10-0-136-68 systemd[1]: run-ipcns-da64e2fb\x2d039e\x2d477d\x2da957\x2dff31c7cc16f8.mount: Deactivated successfully. Feb 23 19:21:10 ip-10-0-136-68 systemd[1]: run-netns-da64e2fb\x2d039e\x2d477d\x2da957\x2dff31c7cc16f8.mount: Deactivated successfully. 
Feb 23 19:21:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:10.269339516Z" level=info msg="runSandbox: deleting pod ID 8409109f784e73f0e892e51702cfce03504f72d2c9fcd8015b0bab4d1b520b09 from idIndex" id=7e85053f-dea2-408e-ab92-74170054ade8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:21:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:10.269380393Z" level=info msg="runSandbox: removing pod sandbox 8409109f784e73f0e892e51702cfce03504f72d2c9fcd8015b0bab4d1b520b09" id=7e85053f-dea2-408e-ab92-74170054ade8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:21:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:10.269424196Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 8409109f784e73f0e892e51702cfce03504f72d2c9fcd8015b0bab4d1b520b09" id=7e85053f-dea2-408e-ab92-74170054ade8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:21:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:10.269440956Z" level=info msg="runSandbox: unmounting shmPath for sandbox 8409109f784e73f0e892e51702cfce03504f72d2c9fcd8015b0bab4d1b520b09" id=7e85053f-dea2-408e-ab92-74170054ade8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:21:10 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-8409109f784e73f0e892e51702cfce03504f72d2c9fcd8015b0bab4d1b520b09-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:21:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:10.275315430Z" level=info msg="runSandbox: removing pod sandbox from storage: 8409109f784e73f0e892e51702cfce03504f72d2c9fcd8015b0bab4d1b520b09" id=7e85053f-dea2-408e-ab92-74170054ade8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:21:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:10.276893705Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=7e85053f-dea2-408e-ab92-74170054ade8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:21:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:10.276923958Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=7e85053f-dea2-408e-ab92-74170054ade8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:21:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:21:10.277146 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(8409109f784e73f0e892e51702cfce03504f72d2c9fcd8015b0bab4d1b520b09): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:21:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:21:10.277204 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(8409109f784e73f0e892e51702cfce03504f72d2c9fcd8015b0bab4d1b520b09): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:21:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:21:10.277228 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(8409109f784e73f0e892e51702cfce03504f72d2c9fcd8015b0bab4d1b520b09): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:21:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:21:10.277315 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(8409109f784e73f0e892e51702cfce03504f72d2c9fcd8015b0bab4d1b520b09): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 19:21:11 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:21:11.216602 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:21:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:11.216990212Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=4c209dac-78c6-4ec3-96cf-f727e5b8ba02 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:21:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:11.217066331Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:21:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:11.223041866Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:0a512af85440d9d4fab75db72f38c19fc41221d7d5d0e86d3ac2de4c4569e132 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/42db341a-7f4b-4bac-8195-78830fa823eb Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:21:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:11.223077450Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:21:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:21:14.872550 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:21:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:21:14.872608 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:21:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 
19:21:15.245035488Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(8d8007f636058b5fe0bd0354b0c4abc1103003195105503838f4f25c1749d26f): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=25489c73-3328-4f4b-8e18-9bfcf063d031 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:21:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:15.245079613Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 8d8007f636058b5fe0bd0354b0c4abc1103003195105503838f4f25c1749d26f" id=25489c73-3328-4f4b-8e18-9bfcf063d031 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:21:15 ip-10-0-136-68 systemd[1]: run-utsns-95df9a17\x2d015b\x2d4293\x2d868b\x2dd2c1497cb44d.mount: Deactivated successfully. Feb 23 19:21:15 ip-10-0-136-68 systemd[1]: run-ipcns-95df9a17\x2d015b\x2d4293\x2d868b\x2dd2c1497cb44d.mount: Deactivated successfully. Feb 23 19:21:15 ip-10-0-136-68 systemd[1]: run-netns-95df9a17\x2d015b\x2d4293\x2d868b\x2dd2c1497cb44d.mount: Deactivated successfully. 
Feb 23 19:21:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:15.270322881Z" level=info msg="runSandbox: deleting pod ID 8d8007f636058b5fe0bd0354b0c4abc1103003195105503838f4f25c1749d26f from idIndex" id=25489c73-3328-4f4b-8e18-9bfcf063d031 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:21:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:15.270350601Z" level=info msg="runSandbox: removing pod sandbox 8d8007f636058b5fe0bd0354b0c4abc1103003195105503838f4f25c1749d26f" id=25489c73-3328-4f4b-8e18-9bfcf063d031 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:21:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:15.270378161Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 8d8007f636058b5fe0bd0354b0c4abc1103003195105503838f4f25c1749d26f" id=25489c73-3328-4f4b-8e18-9bfcf063d031 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:21:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:15.270393483Z" level=info msg="runSandbox: unmounting shmPath for sandbox 8d8007f636058b5fe0bd0354b0c4abc1103003195105503838f4f25c1749d26f" id=25489c73-3328-4f4b-8e18-9bfcf063d031 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:21:15 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-8d8007f636058b5fe0bd0354b0c4abc1103003195105503838f4f25c1749d26f-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:21:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:15.276317766Z" level=info msg="runSandbox: removing pod sandbox from storage: 8d8007f636058b5fe0bd0354b0c4abc1103003195105503838f4f25c1749d26f" id=25489c73-3328-4f4b-8e18-9bfcf063d031 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:21:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:15.277899356Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=25489c73-3328-4f4b-8e18-9bfcf063d031 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:21:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:15.277927443Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=25489c73-3328-4f4b-8e18-9bfcf063d031 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:21:15 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:21:15.278121 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(8d8007f636058b5fe0bd0354b0c4abc1103003195105503838f4f25c1749d26f): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:21:15 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:21:15.278175 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(8d8007f636058b5fe0bd0354b0c4abc1103003195105503838f4f25c1749d26f): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:21:15 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:21:15.278198 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(8d8007f636058b5fe0bd0354b0c4abc1103003195105503838f4f25c1749d26f): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:21:15 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:21:15.278273 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(8d8007f636058b5fe0bd0354b0c4abc1103003195105503838f4f25c1749d26f): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 19:21:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:18.245190155Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(a9a0348f2ecbedf2215392f2891d47942fd782961a79cc3f8b63e2996663ebd3): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=920dcdb1-1210-4122-85a6-d90720ac4765 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:21:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:18.245238010Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a9a0348f2ecbedf2215392f2891d47942fd782961a79cc3f8b63e2996663ebd3" id=920dcdb1-1210-4122-85a6-d90720ac4765 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:21:18 ip-10-0-136-68 systemd[1]: run-utsns-773f6368\x2d51c8\x2d4f1c\x2db808\x2daa0ebd598745.mount: Deactivated successfully. Feb 23 19:21:18 ip-10-0-136-68 systemd[1]: run-ipcns-773f6368\x2d51c8\x2d4f1c\x2db808\x2daa0ebd598745.mount: Deactivated successfully. Feb 23 19:21:18 ip-10-0-136-68 systemd[1]: run-netns-773f6368\x2d51c8\x2d4f1c\x2db808\x2daa0ebd598745.mount: Deactivated successfully. 
Feb 23 19:21:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:18.279319228Z" level=info msg="runSandbox: deleting pod ID a9a0348f2ecbedf2215392f2891d47942fd782961a79cc3f8b63e2996663ebd3 from idIndex" id=920dcdb1-1210-4122-85a6-d90720ac4765 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:21:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:18.279356274Z" level=info msg="runSandbox: removing pod sandbox a9a0348f2ecbedf2215392f2891d47942fd782961a79cc3f8b63e2996663ebd3" id=920dcdb1-1210-4122-85a6-d90720ac4765 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:21:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:18.279386079Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a9a0348f2ecbedf2215392f2891d47942fd782961a79cc3f8b63e2996663ebd3" id=920dcdb1-1210-4122-85a6-d90720ac4765 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:21:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:18.279411268Z" level=info msg="runSandbox: unmounting shmPath for sandbox a9a0348f2ecbedf2215392f2891d47942fd782961a79cc3f8b63e2996663ebd3" id=920dcdb1-1210-4122-85a6-d90720ac4765 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:21:18 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-a9a0348f2ecbedf2215392f2891d47942fd782961a79cc3f8b63e2996663ebd3-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:21:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:18.286309391Z" level=info msg="runSandbox: removing pod sandbox from storage: a9a0348f2ecbedf2215392f2891d47942fd782961a79cc3f8b63e2996663ebd3" id=920dcdb1-1210-4122-85a6-d90720ac4765 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:21:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:18.287887410Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=920dcdb1-1210-4122-85a6-d90720ac4765 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:21:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:18.287915950Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=920dcdb1-1210-4122-85a6-d90720ac4765 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:21:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:21:18.288089 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(a9a0348f2ecbedf2215392f2891d47942fd782961a79cc3f8b63e2996663ebd3): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:21:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:21:18.288151 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(a9a0348f2ecbedf2215392f2891d47942fd782961a79cc3f8b63e2996663ebd3): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:21:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:21:18.288181 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(a9a0348f2ecbedf2215392f2891d47942fd782961a79cc3f8b63e2996663ebd3): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:21:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:21:18.288240 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(a9a0348f2ecbedf2215392f2891d47942fd782961a79cc3f8b63e2996663ebd3): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 19:21:22 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:21:22.216502 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:21:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:22.216912056Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=2f128d03-d5ec-4fcf-bff4-c42b7e6e185f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:21:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:22.216977487Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:21:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:22.222805432Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:53b49882bd1ac0788b1d7f10d392cae69a7708aa0f55322ec50b195f7cd07650 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/defe1a22-5d05-44a0-ade2-0be162bc4d4c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:21:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:22.222841762Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:21:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:21:24.872560 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:21:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:21:24.872621 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:21:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 
19:21:26.292206 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:21:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:21:26.292537 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:21:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:21:26.292826 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:21:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:21:26.292865 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" 
podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:21:31 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:21:31.216949 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 19:21:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:31.217299694Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=e3aea6f7-316a-4e13-9365-9129afea402c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:21:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:31.217364916Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:21:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:31.222823558Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:140ed249b246c75436d6a9f16f6f3e2edd1eed8d5017cf431916d4c19858dfe7 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/79ab03de-e9aa-4935-ae74-c52956c615a2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:21:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:31.222858576Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:21:33 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:21:33.217457 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:21:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:33.218037939Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=471b214e-33f7-4a1c-8758-00f7df4166b7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:21:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:33.218097149Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:21:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:33.223677353Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:e3b86f4af0e358c7d52d4af2f508e88c8c68c30a261ed4b99053285980ca88ff UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/0fa42998-c774-411a-9d97-efc85e6332a4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:21:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:33.223701910Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:21:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:21:34.872369 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:21:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:21:34.872419 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: 
connection refused" Feb 23 19:21:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:21:34.872449 2199 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 19:21:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:21:34.872950 2199 kuberuntime_manager.go:659] "Message for Container of pod" containerName="csi-driver" containerStatusID={Type:cri-o ID:16ea003db96e10fee523e37fb9ea5690cb5e551ed3cb9948ab7a60d89b4afe6c} pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" containerMessage="Container csi-driver failed liveness probe, will be restarted" Feb 23 19:21:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:21:34.873114 2199 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" containerID="cri-o://16ea003db96e10fee523e37fb9ea5690cb5e551ed3cb9948ab7a60d89b4afe6c" gracePeriod=30 Feb 23 19:21:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:34.873384676Z" level=info msg="Stopping container: 16ea003db96e10fee523e37fb9ea5690cb5e551ed3cb9948ab7a60d89b4afe6c (timeout: 30s)" id=40517db8-f95b-4c7e-8126-015e2fdac5dc name=/runtime.v1.RuntimeService/StopContainer Feb 23 19:21:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:38.635972602Z" level=warning msg="Failed to find container exit file for 16ea003db96e10fee523e37fb9ea5690cb5e551ed3cb9948ab7a60d89b4afe6c: timed out waiting for the condition" id=40517db8-f95b-4c7e-8126-015e2fdac5dc name=/runtime.v1.RuntimeService/StopContainer Feb 23 19:21:38 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-d861a2f243e94a2e82a339a8c60443fb35ad5a55808837a3bbbaaab8a595aac6-merged.mount: Deactivated successfully. 
Feb 23 19:21:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:42.412181654Z" level=warning msg="Failed to find container exit file for 16ea003db96e10fee523e37fb9ea5690cb5e551ed3cb9948ab7a60d89b4afe6c: timed out waiting for the condition" id=40517db8-f95b-4c7e-8126-015e2fdac5dc name=/runtime.v1.RuntimeService/StopContainer Feb 23 19:21:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:42.415050969Z" level=info msg="Stopped container 16ea003db96e10fee523e37fb9ea5690cb5e551ed3cb9948ab7a60d89b4afe6c: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=40517db8-f95b-4c7e-8126-015e2fdac5dc name=/runtime.v1.RuntimeService/StopContainer Feb 23 19:21:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:42.415702516Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=71e44dd2-3df9-4ebd-b1b3-95f84fd9361e name=/runtime.v1.ImageService/ImageStatus Feb 23 19:21:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:42.415880082Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=71e44dd2-3df9-4ebd-b1b3-95f84fd9361e name=/runtime.v1.ImageService/ImageStatus Feb 23 19:21:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:42.416471039Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=3d0f25ee-58eb-491e-b439-229ceed574d1 name=/runtime.v1.ImageService/ImageStatus Feb 23 19:21:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 
19:21:42.416620829Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=3d0f25ee-58eb-491e-b439-229ceed574d1 name=/runtime.v1.ImageService/ImageStatus Feb 23 19:21:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:42.417214655Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=c10552b4-e435-4c7d-9c6a-782874e9e2c8 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 19:21:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:42.417365249Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:21:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:42.452167722Z" level=warning msg="Failed to find container exit file for 16ea003db96e10fee523e37fb9ea5690cb5e551ed3cb9948ab7a60d89b4afe6c: timed out waiting for the condition" id=77673d7a-34d3-4664-8f80-0b98c732bcbe name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:21:42 ip-10-0-136-68 systemd[1]: Started crio-conmon-bc27041f5e41e1ab35ed5a792e05191cdd3af24058593adb65fe5d39351c9050.scope. Feb 23 19:21:42 ip-10-0-136-68 systemd[1]: Started libcontainer container bc27041f5e41e1ab35ed5a792e05191cdd3af24058593adb65fe5d39351c9050. Feb 23 19:21:42 ip-10-0-136-68 conmon[15467]: conmon bc27041f5e41e1ab35ed : Failed to write to cgroup.event_control Operation not supported Feb 23 19:21:42 ip-10-0-136-68 systemd[1]: crio-conmon-bc27041f5e41e1ab35ed5a792e05191cdd3af24058593adb65fe5d39351c9050.scope: Deactivated successfully. 
Feb 23 19:21:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:42.537482191Z" level=info msg="Created container bc27041f5e41e1ab35ed5a792e05191cdd3af24058593adb65fe5d39351c9050: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=c10552b4-e435-4c7d-9c6a-782874e9e2c8 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 19:21:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:42.537898756Z" level=info msg="Starting container: bc27041f5e41e1ab35ed5a792e05191cdd3af24058593adb65fe5d39351c9050" id=4d651d90-cf6b-4131-b7cf-4c8ee6236c69 name=/runtime.v1.RuntimeService/StartContainer Feb 23 19:21:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:42.545024038Z" level=info msg="Started container" PID=15478 containerID=bc27041f5e41e1ab35ed5a792e05191cdd3af24058593adb65fe5d39351c9050 description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver id=4d651d90-cf6b-4131-b7cf-4c8ee6236c69 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4ac357af36e7dd4792f93249425e93fd84aed45840e9fbb3d0ae6c0af964798 Feb 23 19:21:42 ip-10-0-136-68 systemd[1]: crio-bc27041f5e41e1ab35ed5a792e05191cdd3af24058593adb65fe5d39351c9050.scope: Deactivated successfully. 
Feb 23 19:21:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:46.203057991Z" level=warning msg="Failed to find container exit file for c04fcb3de8db6fe843ab7c9284973fba9316ec7a530d9b4e63c81d2eef41a66d: timed out waiting for the condition" id=0a0e0885-ed60-44ce-91d4-e6fecdff5b8e name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:21:46 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:21:46.204085 2199 generic.go:332] "Generic (PLEG): container finished" podID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerID="16ea003db96e10fee523e37fb9ea5690cb5e551ed3cb9948ab7a60d89b4afe6c" exitCode=-1 Feb 23 19:21:46 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:21:46.204127 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerDied Data:16ea003db96e10fee523e37fb9ea5690cb5e551ed3cb9948ab7a60d89b4afe6c} Feb 23 19:21:46 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:21:46.204158 2199 scope.go:115] "RemoveContainer" containerID="c04fcb3de8db6fe843ab7c9284973fba9316ec7a530d9b4e63c81d2eef41a66d" Feb 23 19:21:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:49.954117222Z" level=warning msg="Failed to find container exit file for c04fcb3de8db6fe843ab7c9284973fba9316ec7a530d9b4e63c81d2eef41a66d: timed out waiting for the condition" id=cccf6681-ca6d-4db8-bc8c-0b43cc37f94a name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:21:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:50.968225389Z" level=warning msg="Failed to find container exit file for 16ea003db96e10fee523e37fb9ea5690cb5e551ed3cb9948ab7a60d89b4afe6c: timed out waiting for the condition" id=597dfbe6-0b81-4ef5-9f4c-f64ea8ca9b78 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:21:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:51.244998974Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox 
k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(dc06f61a862cee3e6d337a5e4c7b14f47454e484c8a98be39e6a9465a9f4d33c): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e70412df-2ff0-42a5-a30e-df13dffcc2a2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:21:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:51.245058515Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox dc06f61a862cee3e6d337a5e4c7b14f47454e484c8a98be39e6a9465a9f4d33c" id=e70412df-2ff0-42a5-a30e-df13dffcc2a2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:21:51 ip-10-0-136-68 systemd[1]: run-utsns-29888820\x2d5b14\x2d4a74\x2db28a\x2da1632842b81f.mount: Deactivated successfully. Feb 23 19:21:51 ip-10-0-136-68 systemd[1]: run-ipcns-29888820\x2d5b14\x2d4a74\x2db28a\x2da1632842b81f.mount: Deactivated successfully. Feb 23 19:21:51 ip-10-0-136-68 systemd[1]: run-netns-29888820\x2d5b14\x2d4a74\x2db28a\x2da1632842b81f.mount: Deactivated successfully. 
Feb 23 19:21:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:51.268402442Z" level=info msg="runSandbox: deleting pod ID dc06f61a862cee3e6d337a5e4c7b14f47454e484c8a98be39e6a9465a9f4d33c from idIndex" id=e70412df-2ff0-42a5-a30e-df13dffcc2a2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:21:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:51.268440195Z" level=info msg="runSandbox: removing pod sandbox dc06f61a862cee3e6d337a5e4c7b14f47454e484c8a98be39e6a9465a9f4d33c" id=e70412df-2ff0-42a5-a30e-df13dffcc2a2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:21:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:51.268469517Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox dc06f61a862cee3e6d337a5e4c7b14f47454e484c8a98be39e6a9465a9f4d33c" id=e70412df-2ff0-42a5-a30e-df13dffcc2a2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:21:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:51.268485401Z" level=info msg="runSandbox: unmounting shmPath for sandbox dc06f61a862cee3e6d337a5e4c7b14f47454e484c8a98be39e6a9465a9f4d33c" id=e70412df-2ff0-42a5-a30e-df13dffcc2a2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:21:51 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-dc06f61a862cee3e6d337a5e4c7b14f47454e484c8a98be39e6a9465a9f4d33c-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:21:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:51.282317503Z" level=info msg="runSandbox: removing pod sandbox from storage: dc06f61a862cee3e6d337a5e4c7b14f47454e484c8a98be39e6a9465a9f4d33c" id=e70412df-2ff0-42a5-a30e-df13dffcc2a2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:21:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:51.284384565Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=e70412df-2ff0-42a5-a30e-df13dffcc2a2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:21:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:51.284416753Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=e70412df-2ff0-42a5-a30e-df13dffcc2a2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:21:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:21:51.284623 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(dc06f61a862cee3e6d337a5e4c7b14f47454e484c8a98be39e6a9465a9f4d33c): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:21:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:21:51.284691 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(dc06f61a862cee3e6d337a5e4c7b14f47454e484c8a98be39e6a9465a9f4d33c): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:21:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:21:51.284734 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(dc06f61a862cee3e6d337a5e4c7b14f47454e484c8a98be39e6a9465a9f4d33c): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:21:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:21:51.284821 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(dc06f61a862cee3e6d337a5e4c7b14f47454e484c8a98be39e6a9465a9f4d33c): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 19:21:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:53.717358691Z" level=warning msg="Failed to find container exit file for c04fcb3de8db6fe843ab7c9284973fba9316ec7a530d9b4e63c81d2eef41a66d: timed out waiting for the condition" id=1ca8e101-bc5c-4969-aceb-5b932418dbd2 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:21:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:53.717899645Z" level=info msg="Removing container: c04fcb3de8db6fe843ab7c9284973fba9316ec7a530d9b4e63c81d2eef41a66d" id=a7c8b15a-730f-4561-a308-242224a23af3 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 19:21:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:54.729189495Z" level=warning msg="Failed to find container exit file for c04fcb3de8db6fe843ab7c9284973fba9316ec7a530d9b4e63c81d2eef41a66d: timed out waiting for the condition" id=52a2bc91-30bc-47aa-8230-d8a7aed6f9c4 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:21:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:21:54.730222 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerStarted Data:bc27041f5e41e1ab35ed5a792e05191cdd3af24058593adb65fe5d39351c9050} Feb 23 19:21:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:21:54.872570 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:21:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:21:54.872625 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" 
podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:21:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:56.234640329Z" level=info msg="NetworkStart: stopping network for sandbox 0a512af85440d9d4fab75db72f38c19fc41221d7d5d0e86d3ac2de4c4569e132" id=4c209dac-78c6-4ec3-96cf-f727e5b8ba02 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:21:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:56.234760755Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:0a512af85440d9d4fab75db72f38c19fc41221d7d5d0e86d3ac2de4c4569e132 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/42db341a-7f4b-4bac-8195-78830fa823eb Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:21:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:56.234787891Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:21:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:56.234794999Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:21:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:56.234801800Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:21:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:21:56.292209 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f 
/etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:21:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:21:56.292462 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:21:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:21:56.292697 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:21:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:21:56.292726 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:21:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:21:57.468039687Z" level=warning msg="Failed to find container exit file for c04fcb3de8db6fe843ab7c9284973fba9316ec7a530d9b4e63c81d2eef41a66d: timed out waiting for the condition" id=a7c8b15a-730f-4561-a308-242224a23af3 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 19:21:57 ip-10-0-136-68 crio[2158]: 
time="2023-02-23 19:21:57.492829139Z" level=info msg="Removed container c04fcb3de8db6fe843ab7c9284973fba9316ec7a530d9b4e63c81d2eef41a66d: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=a7c8b15a-730f-4561-a308-242224a23af3 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 19:22:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:01.484941400Z" level=warning msg="Failed to find container exit file for 16ea003db96e10fee523e37fb9ea5690cb5e551ed3cb9948ab7a60d89b4afe6c: timed out waiting for the condition" id=773c1589-a29c-4daf-8130-7b81043ebe22 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:22:02 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:22:02.216910 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:22:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:02.217437596Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=8b8505d2-10a8-4dbe-932e-fd9f5939eaae name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:22:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:02.217506528Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:22:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:02.222899990Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:caa92ce1439a45540036b67732ecfd7354dc15f38e27aded1e5a974359482e7b UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/19a706d0-4c44-45c3-b14c-38e656dfe2eb Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:22:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:02.222927710Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:22:04 
ip-10-0-136-68 kubenswrapper[2199]: I0223 19:22:04.872616 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:22:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:22:04.872669 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:22:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:07.236150025Z" level=info msg="NetworkStart: stopping network for sandbox 53b49882bd1ac0788b1d7f10d392cae69a7708aa0f55322ec50b195f7cd07650" id=2f128d03-d5ec-4fcf-bff4-c42b7e6e185f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:22:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:07.236305638Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:53b49882bd1ac0788b1d7f10d392cae69a7708aa0f55322ec50b195f7cd07650 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/defe1a22-5d05-44a0-ade2-0be162bc4d4c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:22:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:07.236344675Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:22:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:07.236358562Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:22:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:07.236369871Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI 
network \"multus-cni-network\" (type=multus)" Feb 23 19:22:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:22:14.872601 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:22:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:22:14.872665 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:22:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:16.234493717Z" level=info msg="NetworkStart: stopping network for sandbox 140ed249b246c75436d6a9f16f6f3e2edd1eed8d5017cf431916d4c19858dfe7" id=e3aea6f7-316a-4e13-9365-9129afea402c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:22:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:16.234791196Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:140ed249b246c75436d6a9f16f6f3e2edd1eed8d5017cf431916d4c19858dfe7 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/79ab03de-e9aa-4935-ae74-c52956c615a2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:22:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:16.234830217Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:22:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:16.234842992Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:22:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:16.234853266Z" level=info msg="Deleting pod 
openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:22:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:18.235990738Z" level=info msg="NetworkStart: stopping network for sandbox e3b86f4af0e358c7d52d4af2f508e88c8c68c30a261ed4b99053285980ca88ff" id=471b214e-33f7-4a1c-8758-00f7df4166b7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:22:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:18.236104452Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:e3b86f4af0e358c7d52d4af2f508e88c8c68c30a261ed4b99053285980ca88ff UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/0fa42998-c774-411a-9d97-efc85e6332a4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:22:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:18.236131359Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:22:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:18.236139337Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:22:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:18.236146032Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:22:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:22:24.872844 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:22:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:22:24.872907 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" 
podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:22:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:22:26.217204 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:22:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:22:26.217611 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:22:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:22:26.217862 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:22:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:22:26.217906 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID 
of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:22:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:22:26.292519 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:22:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:22:26.292717 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:22:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:22:26.292950 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:22:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:22:26.292981 2199 
prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:22:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:22:34.872280 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:22:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:22:34.872341 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:22:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:22:34.872368 2199 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 19:22:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:22:34.872896 2199 kuberuntime_manager.go:659] "Message for Container of pod" containerName="csi-driver" containerStatusID={Type:cri-o ID:bc27041f5e41e1ab35ed5a792e05191cdd3af24058593adb65fe5d39351c9050} pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" containerMessage="Container csi-driver failed liveness probe, will be restarted" Feb 23 19:22:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:22:34.873055 2199 kuberuntime_container.go:709] "Killing container with a grace period" 
pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" containerID="cri-o://bc27041f5e41e1ab35ed5a792e05191cdd3af24058593adb65fe5d39351c9050" gracePeriod=30 Feb 23 19:22:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:34.873312973Z" level=info msg="Stopping container: bc27041f5e41e1ab35ed5a792e05191cdd3af24058593adb65fe5d39351c9050 (timeout: 30s)" id=b4f964b0-d09e-4926-bdf9-f32b71c17db2 name=/runtime.v1.RuntimeService/StopContainer Feb 23 19:22:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:38.634089596Z" level=warning msg="Failed to find container exit file for bc27041f5e41e1ab35ed5a792e05191cdd3af24058593adb65fe5d39351c9050: timed out waiting for the condition" id=b4f964b0-d09e-4926-bdf9-f32b71c17db2 name=/runtime.v1.RuntimeService/StopContainer Feb 23 19:22:38 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-acd58e4096a0b6fe9952866f5057191f91807aa76c165b4463001ca8ab75acea-merged.mount: Deactivated successfully. 
Feb 23 19:22:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:41.244190147Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(0a512af85440d9d4fab75db72f38c19fc41221d7d5d0e86d3ac2de4c4569e132): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=4c209dac-78c6-4ec3-96cf-f727e5b8ba02 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:22:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:41.244274814Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 0a512af85440d9d4fab75db72f38c19fc41221d7d5d0e86d3ac2de4c4569e132" id=4c209dac-78c6-4ec3-96cf-f727e5b8ba02 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:22:41 ip-10-0-136-68 systemd[1]: run-utsns-42db341a\x2d7f4b\x2d4bac\x2d8195\x2d78830fa823eb.mount: Deactivated successfully. Feb 23 19:22:41 ip-10-0-136-68 systemd[1]: run-ipcns-42db341a\x2d7f4b\x2d4bac\x2d8195\x2d78830fa823eb.mount: Deactivated successfully. Feb 23 19:22:41 ip-10-0-136-68 systemd[1]: run-netns-42db341a\x2d7f4b\x2d4bac\x2d8195\x2d78830fa823eb.mount: Deactivated successfully. 
Feb 23 19:22:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:41.266331598Z" level=info msg="runSandbox: deleting pod ID 0a512af85440d9d4fab75db72f38c19fc41221d7d5d0e86d3ac2de4c4569e132 from idIndex" id=4c209dac-78c6-4ec3-96cf-f727e5b8ba02 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:22:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:41.266370478Z" level=info msg="runSandbox: removing pod sandbox 0a512af85440d9d4fab75db72f38c19fc41221d7d5d0e86d3ac2de4c4569e132" id=4c209dac-78c6-4ec3-96cf-f727e5b8ba02 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:22:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:41.266403090Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 0a512af85440d9d4fab75db72f38c19fc41221d7d5d0e86d3ac2de4c4569e132" id=4c209dac-78c6-4ec3-96cf-f727e5b8ba02 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:22:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:41.266416566Z" level=info msg="runSandbox: unmounting shmPath for sandbox 0a512af85440d9d4fab75db72f38c19fc41221d7d5d0e86d3ac2de4c4569e132" id=4c209dac-78c6-4ec3-96cf-f727e5b8ba02 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:22:41 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-0a512af85440d9d4fab75db72f38c19fc41221d7d5d0e86d3ac2de4c4569e132-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:22:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:41.272332753Z" level=info msg="runSandbox: removing pod sandbox from storage: 0a512af85440d9d4fab75db72f38c19fc41221d7d5d0e86d3ac2de4c4569e132" id=4c209dac-78c6-4ec3-96cf-f727e5b8ba02 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:22:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:41.273991806Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=4c209dac-78c6-4ec3-96cf-f727e5b8ba02 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:22:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:41.274021036Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=4c209dac-78c6-4ec3-96cf-f727e5b8ba02 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:22:41 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:22:41.274222 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(0a512af85440d9d4fab75db72f38c19fc41221d7d5d0e86d3ac2de4c4569e132): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:22:41 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:22:41.274335 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(0a512af85440d9d4fab75db72f38c19fc41221d7d5d0e86d3ac2de4c4569e132): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:22:41 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:22:41.274369 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(0a512af85440d9d4fab75db72f38c19fc41221d7d5d0e86d3ac2de4c4569e132): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:22:41 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:22:41.274441 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(0a512af85440d9d4fab75db72f38c19fc41221d7d5d0e86d3ac2de4c4569e132): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 19:22:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:42.409951797Z" level=warning msg="Failed to find container exit file for bc27041f5e41e1ab35ed5a792e05191cdd3af24058593adb65fe5d39351c9050: timed out waiting for the condition" id=b4f964b0-d09e-4926-bdf9-f32b71c17db2 name=/runtime.v1.RuntimeService/StopContainer Feb 23 19:22:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:42.411658434Z" level=info msg="Stopped container bc27041f5e41e1ab35ed5a792e05191cdd3af24058593adb65fe5d39351c9050: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=b4f964b0-d09e-4926-bdf9-f32b71c17db2 name=/runtime.v1.RuntimeService/StopContainer Feb 23 19:22:42 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:22:42.412144 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:22:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:43.308998320Z" level=warning msg="Failed to find container exit file for bc27041f5e41e1ab35ed5a792e05191cdd3af24058593adb65fe5d39351c9050: timed out waiting for the condition" id=0d2ed80d-dd08-416e-ab98-f10ccfe64f3d name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:22:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:47.060444617Z" level=warning msg="Failed to find container exit file for 16ea003db96e10fee523e37fb9ea5690cb5e551ed3cb9948ab7a60d89b4afe6c: timed out waiting for the condition" id=2766f636-c640-4194-8b99-c4f6f5aedb6e name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:22:47 ip-10-0-136-68 
kubenswrapper[2199]: I0223 19:22:47.061332 2199 generic.go:332] "Generic (PLEG): container finished" podID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerID="bc27041f5e41e1ab35ed5a792e05191cdd3af24058593adb65fe5d39351c9050" exitCode=-1 Feb 23 19:22:47 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:22:47.061391 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerDied Data:bc27041f5e41e1ab35ed5a792e05191cdd3af24058593adb65fe5d39351c9050} Feb 23 19:22:47 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:22:47.061515 2199 scope.go:115] "RemoveContainer" containerID="16ea003db96e10fee523e37fb9ea5690cb5e551ed3cb9948ab7a60d89b4afe6c" Feb 23 19:22:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:47.235166455Z" level=info msg="NetworkStart: stopping network for sandbox caa92ce1439a45540036b67732ecfd7354dc15f38e27aded1e5a974359482e7b" id=8b8505d2-10a8-4dbe-932e-fd9f5939eaae name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:22:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:47.235318245Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:caa92ce1439a45540036b67732ecfd7354dc15f38e27aded1e5a974359482e7b UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/19a706d0-4c44-45c3-b14c-38e656dfe2eb Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:22:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:47.235349844Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:22:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:47.235358080Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:22:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:47.235364942Z" level=info msg="Deleting pod 
openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:22:48 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:22:48.062973 2199 scope.go:115] "RemoveContainer" containerID="bc27041f5e41e1ab35ed5a792e05191cdd3af24058593adb65fe5d39351c9050" Feb 23 19:22:48 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:22:48.063377 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:22:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:50.809031416Z" level=warning msg="Failed to find container exit file for 16ea003db96e10fee523e37fb9ea5690cb5e551ed3cb9948ab7a60d89b4afe6c: timed out waiting for the condition" id=7a9a2c6b-6f82-42c5-bfd8-99e64e2f5f21 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:22:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:52.246450112Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(53b49882bd1ac0788b1d7f10d392cae69a7708aa0f55322ec50b195f7cd07650): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=2f128d03-d5ec-4fcf-bff4-c42b7e6e185f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:22:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:52.246495672Z" level=info 
msg="runSandbox: cleaning up namespaces after failing to run sandbox 53b49882bd1ac0788b1d7f10d392cae69a7708aa0f55322ec50b195f7cd07650" id=2f128d03-d5ec-4fcf-bff4-c42b7e6e185f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:22:52 ip-10-0-136-68 systemd[1]: run-utsns-defe1a22\x2d5d05\x2d44a0\x2dade2\x2d0be162bc4d4c.mount: Deactivated successfully. Feb 23 19:22:52 ip-10-0-136-68 systemd[1]: run-ipcns-defe1a22\x2d5d05\x2d44a0\x2dade2\x2d0be162bc4d4c.mount: Deactivated successfully. Feb 23 19:22:52 ip-10-0-136-68 systemd[1]: run-netns-defe1a22\x2d5d05\x2d44a0\x2dade2\x2d0be162bc4d4c.mount: Deactivated successfully. Feb 23 19:22:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:52.276333287Z" level=info msg="runSandbox: deleting pod ID 53b49882bd1ac0788b1d7f10d392cae69a7708aa0f55322ec50b195f7cd07650 from idIndex" id=2f128d03-d5ec-4fcf-bff4-c42b7e6e185f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:22:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:52.276371297Z" level=info msg="runSandbox: removing pod sandbox 53b49882bd1ac0788b1d7f10d392cae69a7708aa0f55322ec50b195f7cd07650" id=2f128d03-d5ec-4fcf-bff4-c42b7e6e185f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:22:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:52.276413057Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 53b49882bd1ac0788b1d7f10d392cae69a7708aa0f55322ec50b195f7cd07650" id=2f128d03-d5ec-4fcf-bff4-c42b7e6e185f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:22:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:52.276427833Z" level=info msg="runSandbox: unmounting shmPath for sandbox 53b49882bd1ac0788b1d7f10d392cae69a7708aa0f55322ec50b195f7cd07650" id=2f128d03-d5ec-4fcf-bff4-c42b7e6e185f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:22:52 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-53b49882bd1ac0788b1d7f10d392cae69a7708aa0f55322ec50b195f7cd07650-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:22:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:52.281298718Z" level=info msg="runSandbox: removing pod sandbox from storage: 53b49882bd1ac0788b1d7f10d392cae69a7708aa0f55322ec50b195f7cd07650" id=2f128d03-d5ec-4fcf-bff4-c42b7e6e185f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:22:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:52.282762291Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=2f128d03-d5ec-4fcf-bff4-c42b7e6e185f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:22:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:52.282791929Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=2f128d03-d5ec-4fcf-bff4-c42b7e6e185f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:22:52 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:22:52.282985 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(53b49882bd1ac0788b1d7f10d392cae69a7708aa0f55322ec50b195f7cd07650): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:22:52 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:22:52.283037 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(53b49882bd1ac0788b1d7f10d392cae69a7708aa0f55322ec50b195f7cd07650): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:22:52 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:22:52.283064 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(53b49882bd1ac0788b1d7f10d392cae69a7708aa0f55322ec50b195f7cd07650): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:22:52 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:22:52.283122 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(53b49882bd1ac0788b1d7f10d392cae69a7708aa0f55322ec50b195f7cd07650): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 19:22:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:22:54.217137 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:22:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:54.217586636Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=f762da1e-04ca-4ff0-b40b-83fcd6d951f1 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:22:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:54.217640757Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:22:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:54.223466618Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:0b555befe5aa7de81917acdfc0f1b18697a1105ee48af309594d4f31ce935fe8 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/3a35d83b-b2fe-4120-a549-f4086ec99897 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:22:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:54.223503113Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:22:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:54.559040898Z" level=warning msg="Failed to find container exit file for 16ea003db96e10fee523e37fb9ea5690cb5e551ed3cb9948ab7a60d89b4afe6c: timed out waiting for the condition" id=1b55323b-46f7-4615-8fff-b9f42c16bdc4 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:22:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:54.559637676Z" level=info msg="Removing container: 16ea003db96e10fee523e37fb9ea5690cb5e551ed3cb9948ab7a60d89b4afe6c" id=7fc48fbc-cf5f-4070-8e71-2443c0551c43 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 19:22:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:22:56.291967 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: 
checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:22:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:22:56.292269 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:22:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:22:56.292522 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:22:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:22:56.292552 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:22:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:58.320076847Z" level=warning msg="Failed to 
find container exit file for 16ea003db96e10fee523e37fb9ea5690cb5e551ed3cb9948ab7a60d89b4afe6c: timed out waiting for the condition" id=7fc48fbc-cf5f-4070-8e71-2443c0551c43 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 19:22:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:22:58.332836235Z" level=info msg="Removed container 16ea003db96e10fee523e37fb9ea5690cb5e551ed3cb9948ab7a60d89b4afe6c: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=7fc48fbc-cf5f-4070-8e71-2443c0551c43 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 19:23:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:01.245162720Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(140ed249b246c75436d6a9f16f6f3e2edd1eed8d5017cf431916d4c19858dfe7): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e3aea6f7-316a-4e13-9365-9129afea402c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:23:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:01.245219202Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 140ed249b246c75436d6a9f16f6f3e2edd1eed8d5017cf431916d4c19858dfe7" id=e3aea6f7-316a-4e13-9365-9129afea402c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:23:01 ip-10-0-136-68 systemd[1]: run-utsns-79ab03de\x2de9aa\x2d4935\x2dae74\x2dc52956c615a2.mount: Deactivated successfully. Feb 23 19:23:01 ip-10-0-136-68 systemd[1]: run-ipcns-79ab03de\x2de9aa\x2d4935\x2dae74\x2dc52956c615a2.mount: Deactivated successfully. 
Feb 23 19:23:01 ip-10-0-136-68 systemd[1]: run-netns-79ab03de\x2de9aa\x2d4935\x2dae74\x2dc52956c615a2.mount: Deactivated successfully. Feb 23 19:23:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:01.268360971Z" level=info msg="runSandbox: deleting pod ID 140ed249b246c75436d6a9f16f6f3e2edd1eed8d5017cf431916d4c19858dfe7 from idIndex" id=e3aea6f7-316a-4e13-9365-9129afea402c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:23:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:01.268413136Z" level=info msg="runSandbox: removing pod sandbox 140ed249b246c75436d6a9f16f6f3e2edd1eed8d5017cf431916d4c19858dfe7" id=e3aea6f7-316a-4e13-9365-9129afea402c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:23:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:01.268469583Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 140ed249b246c75436d6a9f16f6f3e2edd1eed8d5017cf431916d4c19858dfe7" id=e3aea6f7-316a-4e13-9365-9129afea402c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:23:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:01.268490280Z" level=info msg="runSandbox: unmounting shmPath for sandbox 140ed249b246c75436d6a9f16f6f3e2edd1eed8d5017cf431916d4c19858dfe7" id=e3aea6f7-316a-4e13-9365-9129afea402c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:23:01 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-140ed249b246c75436d6a9f16f6f3e2edd1eed8d5017cf431916d4c19858dfe7-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:23:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:01.274307714Z" level=info msg="runSandbox: removing pod sandbox from storage: 140ed249b246c75436d6a9f16f6f3e2edd1eed8d5017cf431916d4c19858dfe7" id=e3aea6f7-316a-4e13-9365-9129afea402c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:23:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:01.275840027Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=e3aea6f7-316a-4e13-9365-9129afea402c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:23:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:01.275867489Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=e3aea6f7-316a-4e13-9365-9129afea402c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:23:01 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:23:01.276084 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(140ed249b246c75436d6a9f16f6f3e2edd1eed8d5017cf431916d4c19858dfe7): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:23:01 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:23:01.276158 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(140ed249b246c75436d6a9f16f6f3e2edd1eed8d5017cf431916d4c19858dfe7): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:23:01 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:23:01.276199 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(140ed249b246c75436d6a9f16f6f3e2edd1eed8d5017cf431916d4c19858dfe7): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:23:01 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:23:01.276341 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(140ed249b246c75436d6a9f16f6f3e2edd1eed8d5017cf431916d4c19858dfe7): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 19:23:02 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:23:02.217157 2199 scope.go:115] "RemoveContainer" containerID="bc27041f5e41e1ab35ed5a792e05191cdd3af24058593adb65fe5d39351c9050" Feb 23 19:23:02 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:23:02.217639 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:23:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:02.830947409Z" level=warning msg="Failed to find container exit file for bc27041f5e41e1ab35ed5a792e05191cdd3af24058593adb65fe5d39351c9050: timed out waiting for the condition" id=c0d3bea2-2b2d-4744-8ac8-5903794ef54c name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:23:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:03.246190105Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(e3b86f4af0e358c7d52d4af2f508e88c8c68c30a261ed4b99053285980ca88ff): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=471b214e-33f7-4a1c-8758-00f7df4166b7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 
19:23:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:03.246239810Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e3b86f4af0e358c7d52d4af2f508e88c8c68c30a261ed4b99053285980ca88ff" id=471b214e-33f7-4a1c-8758-00f7df4166b7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:23:03 ip-10-0-136-68 systemd[1]: run-utsns-0fa42998\x2dc774\x2d411a\x2d9d97\x2defc85e6332a4.mount: Deactivated successfully. Feb 23 19:23:03 ip-10-0-136-68 systemd[1]: run-ipcns-0fa42998\x2dc774\x2d411a\x2d9d97\x2defc85e6332a4.mount: Deactivated successfully. Feb 23 19:23:03 ip-10-0-136-68 systemd[1]: run-netns-0fa42998\x2dc774\x2d411a\x2d9d97\x2defc85e6332a4.mount: Deactivated successfully. Feb 23 19:23:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:03.272338779Z" level=info msg="runSandbox: deleting pod ID e3b86f4af0e358c7d52d4af2f508e88c8c68c30a261ed4b99053285980ca88ff from idIndex" id=471b214e-33f7-4a1c-8758-00f7df4166b7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:23:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:03.272378601Z" level=info msg="runSandbox: removing pod sandbox e3b86f4af0e358c7d52d4af2f508e88c8c68c30a261ed4b99053285980ca88ff" id=471b214e-33f7-4a1c-8758-00f7df4166b7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:23:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:03.272421991Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e3b86f4af0e358c7d52d4af2f508e88c8c68c30a261ed4b99053285980ca88ff" id=471b214e-33f7-4a1c-8758-00f7df4166b7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:23:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:03.272437940Z" level=info msg="runSandbox: unmounting shmPath for sandbox e3b86f4af0e358c7d52d4af2f508e88c8c68c30a261ed4b99053285980ca88ff" id=471b214e-33f7-4a1c-8758-00f7df4166b7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:23:03 ip-10-0-136-68 systemd[1]: 
run-containers-storage-overlay\x2dcontainers-e3b86f4af0e358c7d52d4af2f508e88c8c68c30a261ed4b99053285980ca88ff-userdata-shm.mount: Deactivated successfully. Feb 23 19:23:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:03.278297899Z" level=info msg="runSandbox: removing pod sandbox from storage: e3b86f4af0e358c7d52d4af2f508e88c8c68c30a261ed4b99053285980ca88ff" id=471b214e-33f7-4a1c-8758-00f7df4166b7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:23:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:03.279800983Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=471b214e-33f7-4a1c-8758-00f7df4166b7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:23:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:03.279836128Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=471b214e-33f7-4a1c-8758-00f7df4166b7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:23:03 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:23:03.280048 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(e3b86f4af0e358c7d52d4af2f508e88c8c68c30a261ed4b99053285980ca88ff): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:23:03 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:23:03.280106 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(e3b86f4af0e358c7d52d4af2f508e88c8c68c30a261ed4b99053285980ca88ff): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:23:03 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:23:03.280138 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(e3b86f4af0e358c7d52d4af2f508e88c8c68c30a261ed4b99053285980ca88ff): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:23:03 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:23:03.280194 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(e3b86f4af0e358c7d52d4af2f508e88c8c68c30a261ed4b99053285980ca88ff): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 19:23:06 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:23:06.217480 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:23:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:06.217914756Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=ee5166f9-dd69-4328-ac91-69376d2ccdb8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:23:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:06.217987395Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:23:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:06.223344304Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:f80d67071902742d39801f2564ab6f3b056865d97863d12757548b863e3956da UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/53f9bb57-1d7d-476c-afd6-7dd1e99ad064 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:23:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:06.223371343Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:23:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:23:14.216626 2199 scope.go:115] "RemoveContainer" containerID="bc27041f5e41e1ab35ed5a792e05191cdd3af24058593adb65fe5d39351c9050" Feb 23 19:23:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:23:14.216720 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 19:23:14 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:23:14.217177 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:23:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:14.217153055Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=f3305eb3-2c3a-4cbd-b4d8-8af71d10e0b0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:23:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:14.217218648Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:23:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:14.223080024Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:dcfb458be17e27d238cd4556f505f49b14466885098da1167abd56c3c0d49900 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/f0e01649-3203-4568-9196-f8adbea5a325 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:23:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:14.223114384Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:23:16 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:23:16.216754 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:23:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:16.217205259Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=a621f038-c1a3-4ab4-baec-fce3c32686da name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:23:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:16.217666847Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:23:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:16.223312936Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:e67140d481b28667a95fe804867fb36dbe5a7c2c4559f3f100db1b9030f38bae UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/7aedabe9-1795-4f04-b12a-7911053757a4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:23:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:16.223348044Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:23:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:23:26.292640 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:23:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:23:26.292882 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not 
created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:23:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:23:26.293156 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:23:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:23:26.293195 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:23:28 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:23:28.217015 2199 scope.go:115] "RemoveContainer" containerID="bc27041f5e41e1ab35ed5a792e05191cdd3af24058593adb65fe5d39351c9050" Feb 23 19:23:28 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:23:28.217648 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" 
podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:23:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:32.245533395Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(caa92ce1439a45540036b67732ecfd7354dc15f38e27aded1e5a974359482e7b): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=8b8505d2-10a8-4dbe-932e-fd9f5939eaae name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:23:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:32.245588112Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox caa92ce1439a45540036b67732ecfd7354dc15f38e27aded1e5a974359482e7b" id=8b8505d2-10a8-4dbe-932e-fd9f5939eaae name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:23:32 ip-10-0-136-68 systemd[1]: run-utsns-19a706d0\x2d4c44\x2d45c3\x2db14c\x2d38e656dfe2eb.mount: Deactivated successfully. Feb 23 19:23:32 ip-10-0-136-68 systemd[1]: run-ipcns-19a706d0\x2d4c44\x2d45c3\x2db14c\x2d38e656dfe2eb.mount: Deactivated successfully. Feb 23 19:23:32 ip-10-0-136-68 systemd[1]: run-netns-19a706d0\x2d4c44\x2d45c3\x2db14c\x2d38e656dfe2eb.mount: Deactivated successfully. 
Feb 23 19:23:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:32.279336235Z" level=info msg="runSandbox: deleting pod ID caa92ce1439a45540036b67732ecfd7354dc15f38e27aded1e5a974359482e7b from idIndex" id=8b8505d2-10a8-4dbe-932e-fd9f5939eaae name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:23:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:32.279376738Z" level=info msg="runSandbox: removing pod sandbox caa92ce1439a45540036b67732ecfd7354dc15f38e27aded1e5a974359482e7b" id=8b8505d2-10a8-4dbe-932e-fd9f5939eaae name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:23:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:32.279415828Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox caa92ce1439a45540036b67732ecfd7354dc15f38e27aded1e5a974359482e7b" id=8b8505d2-10a8-4dbe-932e-fd9f5939eaae name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:23:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:32.279431405Z" level=info msg="runSandbox: unmounting shmPath for sandbox caa92ce1439a45540036b67732ecfd7354dc15f38e27aded1e5a974359482e7b" id=8b8505d2-10a8-4dbe-932e-fd9f5939eaae name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:23:32 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-caa92ce1439a45540036b67732ecfd7354dc15f38e27aded1e5a974359482e7b-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:23:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:32.285310089Z" level=info msg="runSandbox: removing pod sandbox from storage: caa92ce1439a45540036b67732ecfd7354dc15f38e27aded1e5a974359482e7b" id=8b8505d2-10a8-4dbe-932e-fd9f5939eaae name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:23:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:32.286844516Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=8b8505d2-10a8-4dbe-932e-fd9f5939eaae name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:23:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:32.286876635Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=8b8505d2-10a8-4dbe-932e-fd9f5939eaae name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:23:32 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:23:32.287096 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(caa92ce1439a45540036b67732ecfd7354dc15f38e27aded1e5a974359482e7b): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:23:32 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:23:32.287163 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(caa92ce1439a45540036b67732ecfd7354dc15f38e27aded1e5a974359482e7b): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:23:32 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:23:32.287202 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(caa92ce1439a45540036b67732ecfd7354dc15f38e27aded1e5a974359482e7b): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:23:32 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:23:32.287325 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(caa92ce1439a45540036b67732ecfd7354dc15f38e27aded1e5a974359482e7b): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 19:23:35 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:23:35.217686 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:23:35 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:23:35.217972 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:23:35 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:23:35.218232 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:23:35 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:23:35.218299 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:23:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:39.235790741Z" level=info msg="NetworkStart: stopping network for sandbox 0b555befe5aa7de81917acdfc0f1b18697a1105ee48af309594d4f31ce935fe8" id=f762da1e-04ca-4ff0-b40b-83fcd6d951f1 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:23:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:39.235910892Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:0b555befe5aa7de81917acdfc0f1b18697a1105ee48af309594d4f31ce935fe8 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/3a35d83b-b2fe-4120-a549-f4086ec99897 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:23:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:39.235942173Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:23:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:39.235954061Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:23:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:39.235962362Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:23:42 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:23:42.216486 2199 scope.go:115] "RemoveContainer" containerID="bc27041f5e41e1ab35ed5a792e05191cdd3af24058593adb65fe5d39351c9050" Feb 23 19:23:42 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:23:42.217073 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:23:47 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:23:47.216696 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:23:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:47.217098572Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=f96273ed-66b7-46ec-973e-298ead2dc106 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:23:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:47.217163998Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:23:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:47.222319722Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:e1058e48894e76e111f94e3504aafd90c05db86195d857ddfcc67db6b63aac29 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/2fd5aa8d-967c-41d2-a45d-67c9b0548715 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:23:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:47.222346550Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:23:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:51.236621353Z" level=info msg="NetworkStart: stopping network for sandbox f80d67071902742d39801f2564ab6f3b056865d97863d12757548b863e3956da" id=ee5166f9-dd69-4328-ac91-69376d2ccdb8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:23:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 
19:23:51.236743538Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:f80d67071902742d39801f2564ab6f3b056865d97863d12757548b863e3956da UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/53f9bb57-1d7d-476c-afd6-7dd1e99ad064 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:23:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:51.236782696Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:23:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:51.236792471Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:23:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:51.236799450Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:23:55 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:23:55.216717 2199 scope.go:115] "RemoveContainer" containerID="bc27041f5e41e1ab35ed5a792e05191cdd3af24058593adb65fe5d39351c9050" Feb 23 19:23:55 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:23:55.217296 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:23:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:23:56.292014 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: 
container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:23:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:23:56.292235 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:23:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:23:56.292461 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:23:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:23:56.292490 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:23:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:59.235374707Z" level=info msg="NetworkStart: stopping network for sandbox dcfb458be17e27d238cd4556f505f49b14466885098da1167abd56c3c0d49900" id=f3305eb3-2c3a-4cbd-b4d8-8af71d10e0b0 
name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:23:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:59.235499530Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:dcfb458be17e27d238cd4556f505f49b14466885098da1167abd56c3c0d49900 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/f0e01649-3203-4568-9196-f8adbea5a325 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:23:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:59.235530105Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:23:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:59.235538552Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:23:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:23:59.235545185Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:24:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:01.234516372Z" level=info msg="NetworkStart: stopping network for sandbox e67140d481b28667a95fe804867fb36dbe5a7c2c4559f3f100db1b9030f38bae" id=a621f038-c1a3-4ab4-baec-fce3c32686da name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:24:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:01.234645328Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:e67140d481b28667a95fe804867fb36dbe5a7c2c4559f3f100db1b9030f38bae UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/7aedabe9-1795-4f04-b12a-7911053757a4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:24:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:01.234674182Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 
23 19:24:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:01.234684200Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:24:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:01.234695265Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:24:08 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:24:08.217143 2199 scope.go:115] "RemoveContainer" containerID="bc27041f5e41e1ab35ed5a792e05191cdd3af24058593adb65fe5d39351c9050" Feb 23 19:24:08 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:24:08.217777 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:24:22 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:24:22.217221 2199 scope.go:115] "RemoveContainer" containerID="bc27041f5e41e1ab35ed5a792e05191cdd3af24058593adb65fe5d39351c9050" Feb 23 19:24:22 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:24:22.217790 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:24:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:24.245736844Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox 
k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(0b555befe5aa7de81917acdfc0f1b18697a1105ee48af309594d4f31ce935fe8): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f762da1e-04ca-4ff0-b40b-83fcd6d951f1 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:24:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:24.245781033Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 0b555befe5aa7de81917acdfc0f1b18697a1105ee48af309594d4f31ce935fe8" id=f762da1e-04ca-4ff0-b40b-83fcd6d951f1 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:24:24 ip-10-0-136-68 systemd[1]: run-utsns-3a35d83b\x2db2fe\x2d4120\x2da549\x2df4086ec99897.mount: Deactivated successfully. Feb 23 19:24:24 ip-10-0-136-68 systemd[1]: run-ipcns-3a35d83b\x2db2fe\x2d4120\x2da549\x2df4086ec99897.mount: Deactivated successfully. Feb 23 19:24:24 ip-10-0-136-68 systemd[1]: run-netns-3a35d83b\x2db2fe\x2d4120\x2da549\x2df4086ec99897.mount: Deactivated successfully. 
Feb 23 19:24:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:24.269322847Z" level=info msg="runSandbox: deleting pod ID 0b555befe5aa7de81917acdfc0f1b18697a1105ee48af309594d4f31ce935fe8 from idIndex" id=f762da1e-04ca-4ff0-b40b-83fcd6d951f1 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:24:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:24.269353169Z" level=info msg="runSandbox: removing pod sandbox 0b555befe5aa7de81917acdfc0f1b18697a1105ee48af309594d4f31ce935fe8" id=f762da1e-04ca-4ff0-b40b-83fcd6d951f1 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:24:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:24.269376173Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 0b555befe5aa7de81917acdfc0f1b18697a1105ee48af309594d4f31ce935fe8" id=f762da1e-04ca-4ff0-b40b-83fcd6d951f1 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:24:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:24.269390405Z" level=info msg="runSandbox: unmounting shmPath for sandbox 0b555befe5aa7de81917acdfc0f1b18697a1105ee48af309594d4f31ce935fe8" id=f762da1e-04ca-4ff0-b40b-83fcd6d951f1 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:24:24 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-0b555befe5aa7de81917acdfc0f1b18697a1105ee48af309594d4f31ce935fe8-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:24:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:24.274322872Z" level=info msg="runSandbox: removing pod sandbox from storage: 0b555befe5aa7de81917acdfc0f1b18697a1105ee48af309594d4f31ce935fe8" id=f762da1e-04ca-4ff0-b40b-83fcd6d951f1 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:24:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:24.275925041Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=f762da1e-04ca-4ff0-b40b-83fcd6d951f1 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:24:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:24.275952975Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=f762da1e-04ca-4ff0-b40b-83fcd6d951f1 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:24:24 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:24:24.276129 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(0b555befe5aa7de81917acdfc0f1b18697a1105ee48af309594d4f31ce935fe8): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:24:24 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:24:24.276177 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(0b555befe5aa7de81917acdfc0f1b18697a1105ee48af309594d4f31ce935fe8): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:24:24 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:24:24.276203 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(0b555befe5aa7de81917acdfc0f1b18697a1105ee48af309594d4f31ce935fe8): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:24:24 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:24:24.276276 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(0b555befe5aa7de81917acdfc0f1b18697a1105ee48af309594d4f31ce935fe8): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 19:24:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:24:26.292137 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:24:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:24:26.292434 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:24:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:24:26.292714 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:24:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:24:26.292746 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:24:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:32.233841922Z" level=info msg="NetworkStart: stopping network for sandbox e1058e48894e76e111f94e3504aafd90c05db86195d857ddfcc67db6b63aac29" id=f96273ed-66b7-46ec-973e-298ead2dc106 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:24:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:32.233958958Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:e1058e48894e76e111f94e3504aafd90c05db86195d857ddfcc67db6b63aac29 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/2fd5aa8d-967c-41d2-a45d-67c9b0548715 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:24:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:32.233987959Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:24:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:32.233997149Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:24:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:32.234009047Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:24:36 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:24:36.216848 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:24:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:36.217289944Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=21e44641-5656-4822-b896-a9102e86931c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:24:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:36.217353452Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:24:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:36.222900200Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:f78f01492ef626af988bf67a6065c8fa870f524a5c5aed39c32d272ffd89eb3f UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/e41bf261-189f-45af-8c88-681f29975e49 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:24:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:36.222923723Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:24:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:36.246828958Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(f80d67071902742d39801f2564ab6f3b056865d97863d12757548b863e3956da): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=ee5166f9-dd69-4328-ac91-69376d2ccdb8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:24:36 ip-10-0-136-68 
crio[2158]: time="2023-02-23 19:24:36.246869339Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f80d67071902742d39801f2564ab6f3b056865d97863d12757548b863e3956da" id=ee5166f9-dd69-4328-ac91-69376d2ccdb8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:24:36 ip-10-0-136-68 systemd[1]: run-utsns-53f9bb57\x2d1d7d\x2d476c\x2dafd6\x2d7dd1e99ad064.mount: Deactivated successfully. Feb 23 19:24:36 ip-10-0-136-68 systemd[1]: run-ipcns-53f9bb57\x2d1d7d\x2d476c\x2dafd6\x2d7dd1e99ad064.mount: Deactivated successfully. Feb 23 19:24:36 ip-10-0-136-68 systemd[1]: run-netns-53f9bb57\x2d1d7d\x2d476c\x2dafd6\x2d7dd1e99ad064.mount: Deactivated successfully. Feb 23 19:24:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:36.274325552Z" level=info msg="runSandbox: deleting pod ID f80d67071902742d39801f2564ab6f3b056865d97863d12757548b863e3956da from idIndex" id=ee5166f9-dd69-4328-ac91-69376d2ccdb8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:24:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:36.274357048Z" level=info msg="runSandbox: removing pod sandbox f80d67071902742d39801f2564ab6f3b056865d97863d12757548b863e3956da" id=ee5166f9-dd69-4328-ac91-69376d2ccdb8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:24:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:36.274389424Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f80d67071902742d39801f2564ab6f3b056865d97863d12757548b863e3956da" id=ee5166f9-dd69-4328-ac91-69376d2ccdb8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:24:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:36.274408318Z" level=info msg="runSandbox: unmounting shmPath for sandbox f80d67071902742d39801f2564ab6f3b056865d97863d12757548b863e3956da" id=ee5166f9-dd69-4328-ac91-69376d2ccdb8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:24:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:36.278302179Z" level=info msg="runSandbox: removing pod sandbox from 
storage: f80d67071902742d39801f2564ab6f3b056865d97863d12757548b863e3956da" id=ee5166f9-dd69-4328-ac91-69376d2ccdb8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:24:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:36.279751050Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=ee5166f9-dd69-4328-ac91-69376d2ccdb8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:24:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:36.279777079Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=ee5166f9-dd69-4328-ac91-69376d2ccdb8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:24:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:24:36.279937 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(f80d67071902742d39801f2564ab6f3b056865d97863d12757548b863e3956da): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:24:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:24:36.279982 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(f80d67071902742d39801f2564ab6f3b056865d97863d12757548b863e3956da): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:24:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:24:36.280007 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(f80d67071902742d39801f2564ab6f3b056865d97863d12757548b863e3956da): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:24:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:24:36.280059 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(f80d67071902742d39801f2564ab6f3b056865d97863d12757548b863e3956da): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 19:24:37 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:24:37.217043 2199 scope.go:115] "RemoveContainer" containerID="bc27041f5e41e1ab35ed5a792e05191cdd3af24058593adb65fe5d39351c9050" Feb 23 19:24:37 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:24:37.217615 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:24:37 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-f80d67071902742d39801f2564ab6f3b056865d97863d12757548b863e3956da-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:24:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:44.245582598Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(dcfb458be17e27d238cd4556f505f49b14466885098da1167abd56c3c0d49900): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f3305eb3-2c3a-4cbd-b4d8-8af71d10e0b0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:24:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:44.245630729Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox dcfb458be17e27d238cd4556f505f49b14466885098da1167abd56c3c0d49900" id=f3305eb3-2c3a-4cbd-b4d8-8af71d10e0b0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:24:44 ip-10-0-136-68 systemd[1]: run-utsns-f0e01649\x2d3203\x2d4568\x2d9196\x2df8adbea5a325.mount: Deactivated successfully. Feb 23 19:24:44 ip-10-0-136-68 systemd[1]: run-ipcns-f0e01649\x2d3203\x2d4568\x2d9196\x2df8adbea5a325.mount: Deactivated successfully. Feb 23 19:24:44 ip-10-0-136-68 systemd[1]: run-netns-f0e01649\x2d3203\x2d4568\x2d9196\x2df8adbea5a325.mount: Deactivated successfully. 
Feb 23 19:24:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:44.269328241Z" level=info msg="runSandbox: deleting pod ID dcfb458be17e27d238cd4556f505f49b14466885098da1167abd56c3c0d49900 from idIndex" id=f3305eb3-2c3a-4cbd-b4d8-8af71d10e0b0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:24:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:44.269370610Z" level=info msg="runSandbox: removing pod sandbox dcfb458be17e27d238cd4556f505f49b14466885098da1167abd56c3c0d49900" id=f3305eb3-2c3a-4cbd-b4d8-8af71d10e0b0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:24:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:44.269398855Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox dcfb458be17e27d238cd4556f505f49b14466885098da1167abd56c3c0d49900" id=f3305eb3-2c3a-4cbd-b4d8-8af71d10e0b0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:24:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:44.269414483Z" level=info msg="runSandbox: unmounting shmPath for sandbox dcfb458be17e27d238cd4556f505f49b14466885098da1167abd56c3c0d49900" id=f3305eb3-2c3a-4cbd-b4d8-8af71d10e0b0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:24:44 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-dcfb458be17e27d238cd4556f505f49b14466885098da1167abd56c3c0d49900-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:24:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:44.274326833Z" level=info msg="runSandbox: removing pod sandbox from storage: dcfb458be17e27d238cd4556f505f49b14466885098da1167abd56c3c0d49900" id=f3305eb3-2c3a-4cbd-b4d8-8af71d10e0b0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:24:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:44.275858628Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=f3305eb3-2c3a-4cbd-b4d8-8af71d10e0b0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:24:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:44.275891187Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=f3305eb3-2c3a-4cbd-b4d8-8af71d10e0b0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:24:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:24:44.276118 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(dcfb458be17e27d238cd4556f505f49b14466885098da1167abd56c3c0d49900): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:24:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:24:44.276193 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(dcfb458be17e27d238cd4556f505f49b14466885098da1167abd56c3c0d49900): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:24:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:24:44.276231 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(dcfb458be17e27d238cd4556f505f49b14466885098da1167abd56c3c0d49900): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:24:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:24:44.276339 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(dcfb458be17e27d238cd4556f505f49b14466885098da1167abd56c3c0d49900): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 19:24:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:46.243631561Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(e67140d481b28667a95fe804867fb36dbe5a7c2c4559f3f100db1b9030f38bae): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a621f038-c1a3-4ab4-baec-fce3c32686da name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:24:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:46.243685756Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e67140d481b28667a95fe804867fb36dbe5a7c2c4559f3f100db1b9030f38bae" id=a621f038-c1a3-4ab4-baec-fce3c32686da name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:24:46 ip-10-0-136-68 systemd[1]: run-utsns-7aedabe9\x2d1795\x2d4f04\x2db12a\x2d7911053757a4.mount: Deactivated successfully. Feb 23 19:24:46 ip-10-0-136-68 systemd[1]: run-ipcns-7aedabe9\x2d1795\x2d4f04\x2db12a\x2d7911053757a4.mount: Deactivated successfully. Feb 23 19:24:46 ip-10-0-136-68 systemd[1]: run-netns-7aedabe9\x2d1795\x2d4f04\x2db12a\x2d7911053757a4.mount: Deactivated successfully. 
Feb 23 19:24:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:46.270343463Z" level=info msg="runSandbox: deleting pod ID e67140d481b28667a95fe804867fb36dbe5a7c2c4559f3f100db1b9030f38bae from idIndex" id=a621f038-c1a3-4ab4-baec-fce3c32686da name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:24:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:46.270387630Z" level=info msg="runSandbox: removing pod sandbox e67140d481b28667a95fe804867fb36dbe5a7c2c4559f3f100db1b9030f38bae" id=a621f038-c1a3-4ab4-baec-fce3c32686da name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:24:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:46.270430990Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e67140d481b28667a95fe804867fb36dbe5a7c2c4559f3f100db1b9030f38bae" id=a621f038-c1a3-4ab4-baec-fce3c32686da name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:24:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:46.270444493Z" level=info msg="runSandbox: unmounting shmPath for sandbox e67140d481b28667a95fe804867fb36dbe5a7c2c4559f3f100db1b9030f38bae" id=a621f038-c1a3-4ab4-baec-fce3c32686da name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:24:46 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-e67140d481b28667a95fe804867fb36dbe5a7c2c4559f3f100db1b9030f38bae-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:24:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:46.275306630Z" level=info msg="runSandbox: removing pod sandbox from storage: e67140d481b28667a95fe804867fb36dbe5a7c2c4559f3f100db1b9030f38bae" id=a621f038-c1a3-4ab4-baec-fce3c32686da name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:24:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:46.276811347Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=a621f038-c1a3-4ab4-baec-fce3c32686da name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:24:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:46.276843814Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=a621f038-c1a3-4ab4-baec-fce3c32686da name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:24:46 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:24:46.277038 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(e67140d481b28667a95fe804867fb36dbe5a7c2c4559f3f100db1b9030f38bae): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:24:46 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:24:46.277093 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(e67140d481b28667a95fe804867fb36dbe5a7c2c4559f3f100db1b9030f38bae): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:24:46 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:24:46.277127 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(e67140d481b28667a95fe804867fb36dbe5a7c2c4559f3f100db1b9030f38bae): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:24:46 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:24:46.277186 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(e67140d481b28667a95fe804867fb36dbe5a7c2c4559f3f100db1b9030f38bae): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 19:24:48 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:24:48.217232 2199 scope.go:115] "RemoveContainer" containerID="bc27041f5e41e1ab35ed5a792e05191cdd3af24058593adb65fe5d39351c9050" Feb 23 19:24:48 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:24:48.217280 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:24:48 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:24:48.218321 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:24:48 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:24:48.218499 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:24:48 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:24:48.218721 2199 
remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:24:48 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:24:48.218754 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 19:24:49 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:24:49.216880 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz"
Feb 23 19:24:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:49.217260628Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=879ae1e9-3c72-4630-a16e-b68df56cdbcf name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:24:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:49.217325590Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 19:24:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:49.222645731Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:ab7d8f8787228b74be4b6aee28f8f5c49ec2c976fc1be745086f53d8efe67cec UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/8ca40633-302c-4122-9c63-8a6d0b85c631 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:24:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:49.222678745Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:24:56 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:24:56.216882 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-657v4"
Feb 23 19:24:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:56.217339615Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=562884d2-bb4b-428c-bc05-d9ac2d300f40 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:24:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:56.217408290Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 19:24:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:56.226448432Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:813d20a79a79029dc69ab6705874b70dde79cb73911aeae4b015182d20abd1c7 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/d5aceebe-5ed9-426e-b1b7-f781405f575c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:24:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:24:56.226477146Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:24:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:24:56.292423 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:24:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:24:56.292680 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:24:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:24:56.292959 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:24:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:24:56.293003 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 19:25:00 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:25:00.217667 2199 scope.go:115] "RemoveContainer" containerID="bc27041f5e41e1ab35ed5a792e05191cdd3af24058593adb65fe5d39351c9050"
Feb 23 19:25:00 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:25:00.218228 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 19:25:01 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:25:01.216556 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 19:25:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:25:01.216919798Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=1301922b-c577-4868-8ac2-1b11c122b196 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:25:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:25:01.216989480Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 19:25:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:25:01.222496331Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:316ab40a9099dcb6fb1099a7891cd7f8500be1fefc1e56c11cad1221edfa4b4a UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/14c011fe-d7ae-41fe-bade-37b0a08e5188 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:25:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:25:01.222524529Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:25:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:25:14.216560 2199 scope.go:115] "RemoveContainer" containerID="bc27041f5e41e1ab35ed5a792e05191cdd3af24058593adb65fe5d39351c9050"
Feb 23 19:25:14 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:25:14.216932 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 19:25:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:25:17.243487969Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(e1058e48894e76e111f94e3504aafd90c05db86195d857ddfcc67db6b63aac29): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f96273ed-66b7-46ec-973e-298ead2dc106 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:25:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:25:17.243548515Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e1058e48894e76e111f94e3504aafd90c05db86195d857ddfcc67db6b63aac29" id=f96273ed-66b7-46ec-973e-298ead2dc106 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:25:17 ip-10-0-136-68 systemd[1]: run-utsns-2fd5aa8d\x2d967c\x2d41d2\x2da45d\x2d67c9b0548715.mount: Deactivated successfully.
Feb 23 19:25:17 ip-10-0-136-68 systemd[1]: run-ipcns-2fd5aa8d\x2d967c\x2d41d2\x2da45d\x2d67c9b0548715.mount: Deactivated successfully.
Feb 23 19:25:17 ip-10-0-136-68 systemd[1]: run-netns-2fd5aa8d\x2d967c\x2d41d2\x2da45d\x2d67c9b0548715.mount: Deactivated successfully.
Feb 23 19:25:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:25:17.257326239Z" level=info msg="runSandbox: deleting pod ID e1058e48894e76e111f94e3504aafd90c05db86195d857ddfcc67db6b63aac29 from idIndex" id=f96273ed-66b7-46ec-973e-298ead2dc106 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:25:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:25:17.257366348Z" level=info msg="runSandbox: removing pod sandbox e1058e48894e76e111f94e3504aafd90c05db86195d857ddfcc67db6b63aac29" id=f96273ed-66b7-46ec-973e-298ead2dc106 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:25:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:25:17.257398051Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e1058e48894e76e111f94e3504aafd90c05db86195d857ddfcc67db6b63aac29" id=f96273ed-66b7-46ec-973e-298ead2dc106 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:25:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:25:17.257410750Z" level=info msg="runSandbox: unmounting shmPath for sandbox e1058e48894e76e111f94e3504aafd90c05db86195d857ddfcc67db6b63aac29" id=f96273ed-66b7-46ec-973e-298ead2dc106 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:25:17 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-e1058e48894e76e111f94e3504aafd90c05db86195d857ddfcc67db6b63aac29-userdata-shm.mount: Deactivated successfully.
Feb 23 19:25:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:25:17.264336148Z" level=info msg="runSandbox: removing pod sandbox from storage: e1058e48894e76e111f94e3504aafd90c05db86195d857ddfcc67db6b63aac29" id=f96273ed-66b7-46ec-973e-298ead2dc106 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:25:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:25:17.265952989Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=f96273ed-66b7-46ec-973e-298ead2dc106 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:25:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:25:17.265989630Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=f96273ed-66b7-46ec-973e-298ead2dc106 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:25:17 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:25:17.266225 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(e1058e48894e76e111f94e3504aafd90c05db86195d857ddfcc67db6b63aac29): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 19:25:17 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:25:17.266322 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(e1058e48894e76e111f94e3504aafd90c05db86195d857ddfcc67db6b63aac29): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 19:25:17 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:25:17.266348 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(e1058e48894e76e111f94e3504aafd90c05db86195d857ddfcc67db6b63aac29): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 19:25:17 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:25:17.266424 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(e1058e48894e76e111f94e3504aafd90c05db86195d857ddfcc67db6b63aac29): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1
Feb 23 19:25:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:25:20.227320596Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3" id=26dc133a-8708-4583-92d8-48ba13cb2294 name=/runtime.v1.ImageService/ImageStatus
Feb 23 19:25:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:25:20.227539716Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:51cb7555d0a960fc0daae74a5b5c47c19fd1ae7bd31e2616ebc88f696f511e4d,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3],Size_:351828271,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=26dc133a-8708-4583-92d8-48ba13cb2294 name=/runtime.v1.ImageService/ImageStatus
Feb 23 19:25:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:25:21.234054003Z" level=info msg="NetworkStart: stopping network for sandbox f78f01492ef626af988bf67a6065c8fa870f524a5c5aed39c32d272ffd89eb3f" id=21e44641-5656-4822-b896-a9102e86931c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:25:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:25:21.234175063Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:f78f01492ef626af988bf67a6065c8fa870f524a5c5aed39c32d272ffd89eb3f UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/e41bf261-189f-45af-8c88-681f29975e49 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:25:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:25:21.234204235Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 19:25:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:25:21.234215601Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 19:25:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:25:21.234225085Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:25:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:25:26.292459 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:25:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:25:26.292714 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:25:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:25:26.292961 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:25:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:25:26.292984 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 19:25:27 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:25:27.216707 2199 scope.go:115] "RemoveContainer" containerID="bc27041f5e41e1ab35ed5a792e05191cdd3af24058593adb65fe5d39351c9050"
Feb 23 19:25:27 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:25:27.217120 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 19:25:29 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:25:29.216914 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 19:25:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:25:29.217358147Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=86c2a011-5c5e-42c2-8093-42e13411459a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:25:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:25:29.217423829Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 19:25:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:25:29.222528516Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:e7edf59f738b6c28e058c40f99bccff5819baf44ebc7242dac8a0f5a88c9d381 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/4150e7a5-b126-4c11-a258-5ff51164dfa9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:25:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:25:29.222565226Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:25:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:25:34.234338264Z" level=info msg="NetworkStart: stopping network for sandbox ab7d8f8787228b74be4b6aee28f8f5c49ec2c976fc1be745086f53d8efe67cec" id=879ae1e9-3c72-4630-a16e-b68df56cdbcf name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:25:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:25:34.234463639Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:ab7d8f8787228b74be4b6aee28f8f5c49ec2c976fc1be745086f53d8efe67cec UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/8ca40633-302c-4122-9c63-8a6d0b85c631 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:25:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:25:34.234512100Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 19:25:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:25:34.234522793Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 19:25:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:25:34.234533110Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:25:40 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:25:40.217005 2199 scope.go:115] "RemoveContainer" containerID="bc27041f5e41e1ab35ed5a792e05191cdd3af24058593adb65fe5d39351c9050"
Feb 23 19:25:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:25:40.217451 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 19:25:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:25:41.239102827Z" level=info msg="NetworkStart: stopping network for sandbox 813d20a79a79029dc69ab6705874b70dde79cb73911aeae4b015182d20abd1c7" id=562884d2-bb4b-428c-bc05-d9ac2d300f40 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:25:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:25:41.239263106Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:813d20a79a79029dc69ab6705874b70dde79cb73911aeae4b015182d20abd1c7 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/d5aceebe-5ed9-426e-b1b7-f781405f575c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:25:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:25:41.239304537Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 19:25:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:25:41.239316960Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 19:25:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:25:41.239329220Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:25:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:25:46.233855849Z" level=info msg="NetworkStart: stopping network for sandbox 316ab40a9099dcb6fb1099a7891cd7f8500be1fefc1e56c11cad1221edfa4b4a" id=1301922b-c577-4868-8ac2-1b11c122b196 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:25:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:25:46.233960299Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:316ab40a9099dcb6fb1099a7891cd7f8500be1fefc1e56c11cad1221edfa4b4a UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/14c011fe-d7ae-41fe-bade-37b0a08e5188 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:25:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:25:46.233989191Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 19:25:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:25:46.233997151Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 19:25:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:25:46.234004182Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:25:53 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:25:53.216542 2199 scope.go:115] "RemoveContainer" containerID="bc27041f5e41e1ab35ed5a792e05191cdd3af24058593adb65fe5d39351c9050"
Feb 23 19:25:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:25:53.216934 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 19:25:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:25:56.292336 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:25:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:25:56.292615 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:25:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:25:56.292829 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:25:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:25:56.292863 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 19:26:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:06.244215722Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(f78f01492ef626af988bf67a6065c8fa870f524a5c5aed39c32d272ffd89eb3f): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=21e44641-5656-4822-b896-a9102e86931c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:26:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:06.244299243Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f78f01492ef626af988bf67a6065c8fa870f524a5c5aed39c32d272ffd89eb3f" id=21e44641-5656-4822-b896-a9102e86931c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:26:06 ip-10-0-136-68 systemd[1]: run-utsns-e41bf261\x2d189f\x2d45af\x2d8c88\x2d681f29975e49.mount: Deactivated successfully.
Feb 23 19:26:06 ip-10-0-136-68 systemd[1]: run-ipcns-e41bf261\x2d189f\x2d45af\x2d8c88\x2d681f29975e49.mount: Deactivated successfully.
Feb 23 19:26:06 ip-10-0-136-68 systemd[1]: run-netns-e41bf261\x2d189f\x2d45af\x2d8c88\x2d681f29975e49.mount: Deactivated successfully.
Feb 23 19:26:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:06.271339170Z" level=info msg="runSandbox: deleting pod ID f78f01492ef626af988bf67a6065c8fa870f524a5c5aed39c32d272ffd89eb3f from idIndex" id=21e44641-5656-4822-b896-a9102e86931c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:26:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:06.271384701Z" level=info msg="runSandbox: removing pod sandbox f78f01492ef626af988bf67a6065c8fa870f524a5c5aed39c32d272ffd89eb3f" id=21e44641-5656-4822-b896-a9102e86931c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:26:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:06.271427322Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f78f01492ef626af988bf67a6065c8fa870f524a5c5aed39c32d272ffd89eb3f" id=21e44641-5656-4822-b896-a9102e86931c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:26:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:06.271446826Z" level=info msg="runSandbox: unmounting shmPath for sandbox f78f01492ef626af988bf67a6065c8fa870f524a5c5aed39c32d272ffd89eb3f" id=21e44641-5656-4822-b896-a9102e86931c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:26:06 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-f78f01492ef626af988bf67a6065c8fa870f524a5c5aed39c32d272ffd89eb3f-userdata-shm.mount: Deactivated successfully.
Feb 23 19:26:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:06.285335274Z" level=info msg="runSandbox: removing pod sandbox from storage: f78f01492ef626af988bf67a6065c8fa870f524a5c5aed39c32d272ffd89eb3f" id=21e44641-5656-4822-b896-a9102e86931c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:26:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:06.287057041Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=21e44641-5656-4822-b896-a9102e86931c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:26:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:06.287086562Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=21e44641-5656-4822-b896-a9102e86931c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:26:06 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:26:06.287317 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(f78f01492ef626af988bf67a6065c8fa870f524a5c5aed39c32d272ffd89eb3f): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 19:26:06 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:26:06.287381 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(f78f01492ef626af988bf67a6065c8fa870f524a5c5aed39c32d272ffd89eb3f): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk"
Feb 23 19:26:06 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:26:06.287408 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(f78f01492ef626af988bf67a6065c8fa870f524a5c5aed39c32d272ffd89eb3f): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk"
Feb 23 19:26:06 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:26:06.287464 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(f78f01492ef626af988bf67a6065c8fa870f524a5c5aed39c32d272ffd89eb3f): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 19:26:08 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:26:08.217509 2199 scope.go:115] "RemoveContainer" containerID="bc27041f5e41e1ab35ed5a792e05191cdd3af24058593adb65fe5d39351c9050" Feb 23 19:26:08 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:26:08.218104 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:26:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:14.235652154Z" level=info msg="NetworkStart: stopping network for sandbox e7edf59f738b6c28e058c40f99bccff5819baf44ebc7242dac8a0f5a88c9d381" id=86c2a011-5c5e-42c2-8093-42e13411459a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:26:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:14.235797637Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:e7edf59f738b6c28e058c40f99bccff5819baf44ebc7242dac8a0f5a88c9d381 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/4150e7a5-b126-4c11-a258-5ff51164dfa9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:26:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:14.235833862Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:26:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:14.235844195Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:26:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 
19:26:14.235855008Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:26:17 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:26:17.217399 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:26:17 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:26:17.217690 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:26:17 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:26:17.217928 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:26:17 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:26:17.217956 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:26:18 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:26:18.216571 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:26:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:18.216999897Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=5656eb16-4db8-4c7b-acf7-1d9e9ba9b74f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:26:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:18.217061865Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:26:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:18.222521375Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:9bf5519e8be044b0df7076eb74105cdc15763b09718138251aba4c15d30c222e UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/17287231-182a-4a70-8754-2fc2969e0c5c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:26:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:18.222545238Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:26:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:19.243407052Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(ab7d8f8787228b74be4b6aee28f8f5c49ec2c976fc1be745086f53d8efe67cec): error removing pod 
openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=879ae1e9-3c72-4630-a16e-b68df56cdbcf name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:26:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:19.243465694Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox ab7d8f8787228b74be4b6aee28f8f5c49ec2c976fc1be745086f53d8efe67cec" id=879ae1e9-3c72-4630-a16e-b68df56cdbcf name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:26:19 ip-10-0-136-68 systemd[1]: run-utsns-8ca40633\x2d302c\x2d4122\x2d9c63\x2d8a6d0b85c631.mount: Deactivated successfully. Feb 23 19:26:19 ip-10-0-136-68 systemd[1]: run-ipcns-8ca40633\x2d302c\x2d4122\x2d9c63\x2d8a6d0b85c631.mount: Deactivated successfully. Feb 23 19:26:19 ip-10-0-136-68 systemd[1]: run-netns-8ca40633\x2d302c\x2d4122\x2d9c63\x2d8a6d0b85c631.mount: Deactivated successfully. 
Feb 23 19:26:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:19.259333149Z" level=info msg="runSandbox: deleting pod ID ab7d8f8787228b74be4b6aee28f8f5c49ec2c976fc1be745086f53d8efe67cec from idIndex" id=879ae1e9-3c72-4630-a16e-b68df56cdbcf name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:26:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:19.259369137Z" level=info msg="runSandbox: removing pod sandbox ab7d8f8787228b74be4b6aee28f8f5c49ec2c976fc1be745086f53d8efe67cec" id=879ae1e9-3c72-4630-a16e-b68df56cdbcf name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:26:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:19.259398153Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox ab7d8f8787228b74be4b6aee28f8f5c49ec2c976fc1be745086f53d8efe67cec" id=879ae1e9-3c72-4630-a16e-b68df56cdbcf name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:26:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:19.259412381Z" level=info msg="runSandbox: unmounting shmPath for sandbox ab7d8f8787228b74be4b6aee28f8f5c49ec2c976fc1be745086f53d8efe67cec" id=879ae1e9-3c72-4630-a16e-b68df56cdbcf name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:26:19 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-ab7d8f8787228b74be4b6aee28f8f5c49ec2c976fc1be745086f53d8efe67cec-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:26:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:19.265313604Z" level=info msg="runSandbox: removing pod sandbox from storage: ab7d8f8787228b74be4b6aee28f8f5c49ec2c976fc1be745086f53d8efe67cec" id=879ae1e9-3c72-4630-a16e-b68df56cdbcf name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:26:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:19.266823860Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=879ae1e9-3c72-4630-a16e-b68df56cdbcf name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:26:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:19.266855175Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=879ae1e9-3c72-4630-a16e-b68df56cdbcf name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:26:19 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:26:19.267059 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(ab7d8f8787228b74be4b6aee28f8f5c49ec2c976fc1be745086f53d8efe67cec): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:26:19 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:26:19.267116 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(ab7d8f8787228b74be4b6aee28f8f5c49ec2c976fc1be745086f53d8efe67cec): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:26:19 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:26:19.267141 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(ab7d8f8787228b74be4b6aee28f8f5c49ec2c976fc1be745086f53d8efe67cec): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:26:19 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:26:19.267205 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(ab7d8f8787228b74be4b6aee28f8f5c49ec2c976fc1be745086f53d8efe67cec): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 19:26:22 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:26:22.217190 2199 scope.go:115] "RemoveContainer" containerID="bc27041f5e41e1ab35ed5a792e05191cdd3af24058593adb65fe5d39351c9050" Feb 23 19:26:22 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:26:22.217841 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:26:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:26.248944933Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(813d20a79a79029dc69ab6705874b70dde79cb73911aeae4b015182d20abd1c7): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=562884d2-bb4b-428c-bc05-d9ac2d300f40 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:26:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:26.248988064Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 813d20a79a79029dc69ab6705874b70dde79cb73911aeae4b015182d20abd1c7" id=562884d2-bb4b-428c-bc05-d9ac2d300f40 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:26:26 ip-10-0-136-68 systemd[1]: run-utsns-d5aceebe\x2d5ed9\x2d426e\x2db1b7\x2df781405f575c.mount: Deactivated 
successfully. Feb 23 19:26:26 ip-10-0-136-68 systemd[1]: run-ipcns-d5aceebe\x2d5ed9\x2d426e\x2db1b7\x2df781405f575c.mount: Deactivated successfully. Feb 23 19:26:26 ip-10-0-136-68 systemd[1]: run-netns-d5aceebe\x2d5ed9\x2d426e\x2db1b7\x2df781405f575c.mount: Deactivated successfully. Feb 23 19:26:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:26.265328908Z" level=info msg="runSandbox: deleting pod ID 813d20a79a79029dc69ab6705874b70dde79cb73911aeae4b015182d20abd1c7 from idIndex" id=562884d2-bb4b-428c-bc05-d9ac2d300f40 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:26:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:26.265357014Z" level=info msg="runSandbox: removing pod sandbox 813d20a79a79029dc69ab6705874b70dde79cb73911aeae4b015182d20abd1c7" id=562884d2-bb4b-428c-bc05-d9ac2d300f40 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:26:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:26.265379710Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 813d20a79a79029dc69ab6705874b70dde79cb73911aeae4b015182d20abd1c7" id=562884d2-bb4b-428c-bc05-d9ac2d300f40 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:26:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:26.265400082Z" level=info msg="runSandbox: unmounting shmPath for sandbox 813d20a79a79029dc69ab6705874b70dde79cb73911aeae4b015182d20abd1c7" id=562884d2-bb4b-428c-bc05-d9ac2d300f40 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:26:26 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-813d20a79a79029dc69ab6705874b70dde79cb73911aeae4b015182d20abd1c7-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:26:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:26.275291736Z" level=info msg="runSandbox: removing pod sandbox from storage: 813d20a79a79029dc69ab6705874b70dde79cb73911aeae4b015182d20abd1c7" id=562884d2-bb4b-428c-bc05-d9ac2d300f40 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:26:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:26.276694820Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=562884d2-bb4b-428c-bc05-d9ac2d300f40 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:26:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:26.276726359Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=562884d2-bb4b-428c-bc05-d9ac2d300f40 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:26:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:26:26.276916 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(813d20a79a79029dc69ab6705874b70dde79cb73911aeae4b015182d20abd1c7): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:26:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:26:26.276966 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(813d20a79a79029dc69ab6705874b70dde79cb73911aeae4b015182d20abd1c7): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:26:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:26:26.276988 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(813d20a79a79029dc69ab6705874b70dde79cb73911aeae4b015182d20abd1c7): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:26:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:26:26.277036 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(813d20a79a79029dc69ab6705874b70dde79cb73911aeae4b015182d20abd1c7): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 19:26:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:26:26.292508 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:26:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:26:26.292762 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:26:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:26:26.292993 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:26:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:26:26.293022 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:26:31 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:26:31.216331 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:26:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:31.216631872Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=fdf36eea-6382-42f1-a858-c9a2decd435e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:26:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:31.216691390Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:26:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:31.222042276Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:9e61df7cdeee3b7c37e779a615992276a537ac1726a6160cb64e419d2c726267 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/03be6f12-eac0-42c1-8880-96b78672fe27 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:26:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:31.222067296Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:26:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:31.242973921Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(316ab40a9099dcb6fb1099a7891cd7f8500be1fefc1e56c11cad1221edfa4b4a): error removing pod 
openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=1301922b-c577-4868-8ac2-1b11c122b196 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:26:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:31.243009890Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 316ab40a9099dcb6fb1099a7891cd7f8500be1fefc1e56c11cad1221edfa4b4a" id=1301922b-c577-4868-8ac2-1b11c122b196 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:26:31 ip-10-0-136-68 systemd[1]: run-utsns-14c011fe\x2dd7ae\x2d41fe\x2dbade\x2d37b0a08e5188.mount: Deactivated successfully. Feb 23 19:26:31 ip-10-0-136-68 systemd[1]: run-ipcns-14c011fe\x2dd7ae\x2d41fe\x2dbade\x2d37b0a08e5188.mount: Deactivated successfully. Feb 23 19:26:31 ip-10-0-136-68 systemd[1]: run-netns-14c011fe\x2dd7ae\x2d41fe\x2dbade\x2d37b0a08e5188.mount: Deactivated successfully. 
Feb 23 19:26:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:31.267322446Z" level=info msg="runSandbox: deleting pod ID 316ab40a9099dcb6fb1099a7891cd7f8500be1fefc1e56c11cad1221edfa4b4a from idIndex" id=1301922b-c577-4868-8ac2-1b11c122b196 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:26:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:31.267355555Z" level=info msg="runSandbox: removing pod sandbox 316ab40a9099dcb6fb1099a7891cd7f8500be1fefc1e56c11cad1221edfa4b4a" id=1301922b-c577-4868-8ac2-1b11c122b196 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:26:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:31.267378348Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 316ab40a9099dcb6fb1099a7891cd7f8500be1fefc1e56c11cad1221edfa4b4a" id=1301922b-c577-4868-8ac2-1b11c122b196 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:26:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:31.267391657Z" level=info msg="runSandbox: unmounting shmPath for sandbox 316ab40a9099dcb6fb1099a7891cd7f8500be1fefc1e56c11cad1221edfa4b4a" id=1301922b-c577-4868-8ac2-1b11c122b196 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:26:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:31.288293612Z" level=info msg="runSandbox: removing pod sandbox from storage: 316ab40a9099dcb6fb1099a7891cd7f8500be1fefc1e56c11cad1221edfa4b4a" id=1301922b-c577-4868-8ac2-1b11c122b196 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:26:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:31.289758338Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=1301922b-c577-4868-8ac2-1b11c122b196 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:26:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:31.289788235Z" level=info msg="runSandbox: releasing pod sandbox name: 
k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=1301922b-c577-4868-8ac2-1b11c122b196 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:26:31 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:26:31.290223 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(316ab40a9099dcb6fb1099a7891cd7f8500be1fefc1e56c11cad1221edfa4b4a): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Feb 23 19:26:31 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:26:31.290307 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(316ab40a9099dcb6fb1099a7891cd7f8500be1fefc1e56c11cad1221edfa4b4a): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:26:31 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:26:31.290332 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(316ab40a9099dcb6fb1099a7891cd7f8500be1fefc1e56c11cad1221edfa4b4a): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:26:31 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:26:31.290402 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(316ab40a9099dcb6fb1099a7891cd7f8500be1fefc1e56c11cad1221edfa4b4a): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" 
name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 19:26:32 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-316ab40a9099dcb6fb1099a7891cd7f8500be1fefc1e56c11cad1221edfa4b4a-userdata-shm.mount: Deactivated successfully. Feb 23 19:26:37 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:26:37.216410 2199 scope.go:115] "RemoveContainer" containerID="bc27041f5e41e1ab35ed5a792e05191cdd3af24058593adb65fe5d39351c9050" Feb 23 19:26:37 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:26:37.216798 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:26:41 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:26:41.216839 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 19:26:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:41.217154340Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=1bd592b4-cb1e-43fa-afbd-efdfa914f94d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:26:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:41.217203694Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:26:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:41.222684888Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:e00cc3b28dacf8fd54644834b4be8a15bb8948b3a353462a43bc9467eceddce0 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/8d173777-00e0-47ff-8c4b-f98cecc1d30e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:26:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:41.222708938Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:26:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:26:44.217098 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:26:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:44.217516507Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=560faafe-8e47-4c28-a26b-8d30255887ed name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:26:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:44.217597270Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:26:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:44.223348825Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:6fd732e80bca6b602f272b608ca4cd6b1acc278c3cf18c213fed81b192f04602 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/569a6ede-4ce8-470f-8b89-a2faaf250e0e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:26:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:44.223371861Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:26:49 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:26:49.217365 2199 scope.go:115] "RemoveContainer" containerID="bc27041f5e41e1ab35ed5a792e05191cdd3af24058593adb65fe5d39351c9050" Feb 23 19:26:49 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:26:49.217763 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:26:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 
19:26:56.292698 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:26:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:26:56.292966 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:26:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:26:56.293149 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:26:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:26:56.293186 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" 
podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:26:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:59.245926441Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(e7edf59f738b6c28e058c40f99bccff5819baf44ebc7242dac8a0f5a88c9d381): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=86c2a011-5c5e-42c2-8093-42e13411459a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:26:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:59.245976869Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e7edf59f738b6c28e058c40f99bccff5819baf44ebc7242dac8a0f5a88c9d381" id=86c2a011-5c5e-42c2-8093-42e13411459a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:26:59 ip-10-0-136-68 systemd[1]: run-utsns-4150e7a5\x2db126\x2d4c11\x2da258\x2d5ff51164dfa9.mount: Deactivated successfully. Feb 23 19:26:59 ip-10-0-136-68 systemd[1]: run-ipcns-4150e7a5\x2db126\x2d4c11\x2da258\x2d5ff51164dfa9.mount: Deactivated successfully. Feb 23 19:26:59 ip-10-0-136-68 systemd[1]: run-netns-4150e7a5\x2db126\x2d4c11\x2da258\x2d5ff51164dfa9.mount: Deactivated successfully. 
Feb 23 19:26:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:59.262341059Z" level=info msg="runSandbox: deleting pod ID e7edf59f738b6c28e058c40f99bccff5819baf44ebc7242dac8a0f5a88c9d381 from idIndex" id=86c2a011-5c5e-42c2-8093-42e13411459a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:26:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:59.262375575Z" level=info msg="runSandbox: removing pod sandbox e7edf59f738b6c28e058c40f99bccff5819baf44ebc7242dac8a0f5a88c9d381" id=86c2a011-5c5e-42c2-8093-42e13411459a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:26:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:59.262401707Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e7edf59f738b6c28e058c40f99bccff5819baf44ebc7242dac8a0f5a88c9d381" id=86c2a011-5c5e-42c2-8093-42e13411459a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:26:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:59.262414052Z" level=info msg="runSandbox: unmounting shmPath for sandbox e7edf59f738b6c28e058c40f99bccff5819baf44ebc7242dac8a0f5a88c9d381" id=86c2a011-5c5e-42c2-8093-42e13411459a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:26:59 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-e7edf59f738b6c28e058c40f99bccff5819baf44ebc7242dac8a0f5a88c9d381-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:26:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:59.278303771Z" level=info msg="runSandbox: removing pod sandbox from storage: e7edf59f738b6c28e058c40f99bccff5819baf44ebc7242dac8a0f5a88c9d381" id=86c2a011-5c5e-42c2-8093-42e13411459a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:26:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:59.279818584Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=86c2a011-5c5e-42c2-8093-42e13411459a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:26:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:26:59.279848588Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=86c2a011-5c5e-42c2-8093-42e13411459a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:26:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:26:59.280061 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(e7edf59f738b6c28e058c40f99bccff5819baf44ebc7242dac8a0f5a88c9d381): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:26:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:26:59.280119 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(e7edf59f738b6c28e058c40f99bccff5819baf44ebc7242dac8a0f5a88c9d381): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:26:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:26:59.280145 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(e7edf59f738b6c28e058c40f99bccff5819baf44ebc7242dac8a0f5a88c9d381): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:26:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:26:59.280208 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(e7edf59f738b6c28e058c40f99bccff5819baf44ebc7242dac8a0f5a88c9d381): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 19:27:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:27:03.234683192Z" level=info msg="NetworkStart: stopping network for sandbox 9bf5519e8be044b0df7076eb74105cdc15763b09718138251aba4c15d30c222e" id=5656eb16-4db8-4c7b-acf7-1d9e9ba9b74f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:27:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:27:03.234804968Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:9bf5519e8be044b0df7076eb74105cdc15763b09718138251aba4c15d30c222e UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/17287231-182a-4a70-8754-2fc2969e0c5c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:27:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:27:03.234842323Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:27:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:27:03.234853474Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:27:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:27:03.234862653Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:27:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:27:04.216724 2199 scope.go:115] "RemoveContainer" containerID="bc27041f5e41e1ab35ed5a792e05191cdd3af24058593adb65fe5d39351c9050" Feb 23 19:27:04 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:27:04.217318 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver 
pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:27:12 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:27:12.217206 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:27:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:27:12.217693160Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=d67190a3-6151-4d87-a778-502d7d075927 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:27:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:27:12.217770951Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:27:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:27:12.223464102Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:92b85440c01ec85cd18ea66bb9c1920e63f2169cb0213f7a1c9281362216915b UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/c0f30fbb-aaae-4ef1-8c58-a7edefa7e7cb Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:27:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:27:12.223500069Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:27:15 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:27:15.217129 2199 scope.go:115] "RemoveContainer" containerID="bc27041f5e41e1ab35ed5a792e05191cdd3af24058593adb65fe5d39351c9050" Feb 23 19:27:15 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:27:15.217735 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:27:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:27:16.235213184Z" level=info msg="NetworkStart: stopping network for sandbox 9e61df7cdeee3b7c37e779a615992276a537ac1726a6160cb64e419d2c726267" id=fdf36eea-6382-42f1-a858-c9a2decd435e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:27:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:27:16.235365633Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:9e61df7cdeee3b7c37e779a615992276a537ac1726a6160cb64e419d2c726267 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/03be6f12-eac0-42c1-8880-96b78672fe27 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:27:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:27:16.235406851Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:27:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:27:16.235417487Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:27:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:27:16.235427190Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:27:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:27:26.233987466Z" level=info msg="NetworkStart: stopping network for sandbox e00cc3b28dacf8fd54644834b4be8a15bb8948b3a353462a43bc9467eceddce0" id=1bd592b4-cb1e-43fa-afbd-efdfa914f94d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:27:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:27:26.234099103Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns 
ID:e00cc3b28dacf8fd54644834b4be8a15bb8948b3a353462a43bc9467eceddce0 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/8d173777-00e0-47ff-8c4b-f98cecc1d30e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:27:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:27:26.234127635Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:27:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:27:26.234135365Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:27:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:27:26.234159255Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:27:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:27:26.292356 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:27:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:27:26.292571 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:27:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:27:26.292842 2199 remote_runtime.go:479] 
"ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:27:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:27:26.292872 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:27:27 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:27:27.217027 2199 scope.go:115] "RemoveContainer" containerID="bc27041f5e41e1ab35ed5a792e05191cdd3af24058593adb65fe5d39351c9050" Feb 23 19:27:27 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:27:27.217507 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:27:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:27:29.234954398Z" level=info msg="NetworkStart: stopping network for sandbox 6fd732e80bca6b602f272b608ca4cd6b1acc278c3cf18c213fed81b192f04602" id=560faafe-8e47-4c28-a26b-8d30255887ed name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:27:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:27:29.235077475Z" level=info 
msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:6fd732e80bca6b602f272b608ca4cd6b1acc278c3cf18c213fed81b192f04602 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/569a6ede-4ce8-470f-8b89-a2faaf250e0e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:27:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:27:29.235105313Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:27:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:27:29.235112949Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:27:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:27:29.235119517Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:27:41 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:27:41.216623 2199 scope.go:115] "RemoveContainer" containerID="bc27041f5e41e1ab35ed5a792e05191cdd3af24058593adb65fe5d39351c9050" Feb 23 19:27:41 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:27:41.216834 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:27:41 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:27:41.217106 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is 
running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:27:41 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:27:41.217349 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:27:41 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:27:41.217383 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:27:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:27:41.217496664Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=1c633022-2542-40ab-8ad0-11b297c576d9 name=/runtime.v1.ImageService/ImageStatus Feb 23 19:27:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:27:41.217711591Z" level=info msg="Image status: 
&ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=1c633022-2542-40ab-8ad0-11b297c576d9 name=/runtime.v1.ImageService/ImageStatus Feb 23 19:27:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:27:41.218305779Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=39fa68c5-96b2-4099-90c0-31864a1c40b1 name=/runtime.v1.ImageService/ImageStatus Feb 23 19:27:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:27:41.218447760Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=39fa68c5-96b2-4099-90c0-31864a1c40b1 name=/runtime.v1.ImageService/ImageStatus Feb 23 19:27:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:27:41.219039078Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=24c367b9-d0a4-4fcf-a535-656e0571d05b name=/runtime.v1.RuntimeService/CreateContainer Feb 23 19:27:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:27:41.219142081Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:27:41 ip-10-0-136-68 systemd[1]: Started 
crio-conmon-a1f9b41ad75b0c66c06022c4e84849eb717e7842b78a89bd824512db3a8f4330.scope. Feb 23 19:27:41 ip-10-0-136-68 systemd[1]: Started libcontainer container a1f9b41ad75b0c66c06022c4e84849eb717e7842b78a89bd824512db3a8f4330. Feb 23 19:27:41 ip-10-0-136-68 conmon[16138]: conmon a1f9b41ad75b0c66c060 : Failed to write to cgroup.event_control Operation not supported Feb 23 19:27:41 ip-10-0-136-68 systemd[1]: crio-conmon-a1f9b41ad75b0c66c06022c4e84849eb717e7842b78a89bd824512db3a8f4330.scope: Deactivated successfully. Feb 23 19:27:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:27:41.364366624Z" level=info msg="Created container a1f9b41ad75b0c66c06022c4e84849eb717e7842b78a89bd824512db3a8f4330: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=24c367b9-d0a4-4fcf-a535-656e0571d05b name=/runtime.v1.RuntimeService/CreateContainer Feb 23 19:27:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:27:41.364899404Z" level=info msg="Starting container: a1f9b41ad75b0c66c06022c4e84849eb717e7842b78a89bd824512db3a8f4330" id=685e5009-6068-4597-9a51-838d1ed3f0dc name=/runtime.v1.RuntimeService/StartContainer Feb 23 19:27:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:27:41.372106320Z" level=info msg="Started container" PID=16150 containerID=a1f9b41ad75b0c66c06022c4e84849eb717e7842b78a89bd824512db3a8f4330 description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver id=685e5009-6068-4597-9a51-838d1ed3f0dc name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4ac357af36e7dd4792f93249425e93fd84aed45840e9fbb3d0ae6c0af964798 Feb 23 19:27:41 ip-10-0-136-68 systemd[1]: crio-a1f9b41ad75b0c66c06022c4e84849eb717e7842b78a89bd824512db3a8f4330.scope: Deactivated successfully. 
Feb 23 19:27:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:27:46.023465477Z" level=warning msg="Failed to find container exit file for bc27041f5e41e1ab35ed5a792e05191cdd3af24058593adb65fe5d39351c9050: timed out waiting for the condition" id=715f368f-b4ba-4793-81e6-027ae386ad2d name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:27:46 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:27:46.024485 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerStarted Data:a1f9b41ad75b0c66c06022c4e84849eb717e7842b78a89bd824512db3a8f4330} Feb 23 19:27:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:27:48.243986100Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(9bf5519e8be044b0df7076eb74105cdc15763b09718138251aba4c15d30c222e): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=5656eb16-4db8-4c7b-acf7-1d9e9ba9b74f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:27:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:27:48.244034624Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9bf5519e8be044b0df7076eb74105cdc15763b09718138251aba4c15d30c222e" id=5656eb16-4db8-4c7b-acf7-1d9e9ba9b74f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:27:48 ip-10-0-136-68 systemd[1]: run-utsns-17287231\x2d182a\x2d4a70\x2d8754\x2d2fc2969e0c5c.mount: Deactivated successfully. 
Feb 23 19:27:48 ip-10-0-136-68 systemd[1]: run-ipcns-17287231\x2d182a\x2d4a70\x2d8754\x2d2fc2969e0c5c.mount: Deactivated successfully. Feb 23 19:27:48 ip-10-0-136-68 systemd[1]: run-netns-17287231\x2d182a\x2d4a70\x2d8754\x2d2fc2969e0c5c.mount: Deactivated successfully. Feb 23 19:27:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:27:48.267320898Z" level=info msg="runSandbox: deleting pod ID 9bf5519e8be044b0df7076eb74105cdc15763b09718138251aba4c15d30c222e from idIndex" id=5656eb16-4db8-4c7b-acf7-1d9e9ba9b74f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:27:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:27:48.267354531Z" level=info msg="runSandbox: removing pod sandbox 9bf5519e8be044b0df7076eb74105cdc15763b09718138251aba4c15d30c222e" id=5656eb16-4db8-4c7b-acf7-1d9e9ba9b74f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:27:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:27:48.267381551Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9bf5519e8be044b0df7076eb74105cdc15763b09718138251aba4c15d30c222e" id=5656eb16-4db8-4c7b-acf7-1d9e9ba9b74f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:27:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:27:48.267393664Z" level=info msg="runSandbox: unmounting shmPath for sandbox 9bf5519e8be044b0df7076eb74105cdc15763b09718138251aba4c15d30c222e" id=5656eb16-4db8-4c7b-acf7-1d9e9ba9b74f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:27:48 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-9bf5519e8be044b0df7076eb74105cdc15763b09718138251aba4c15d30c222e-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:27:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:27:48.272312497Z" level=info msg="runSandbox: removing pod sandbox from storage: 9bf5519e8be044b0df7076eb74105cdc15763b09718138251aba4c15d30c222e" id=5656eb16-4db8-4c7b-acf7-1d9e9ba9b74f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:27:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:27:48.273953951Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=5656eb16-4db8-4c7b-acf7-1d9e9ba9b74f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:27:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:27:48.273987024Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=5656eb16-4db8-4c7b-acf7-1d9e9ba9b74f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:27:48 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:27:48.274180 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(9bf5519e8be044b0df7076eb74105cdc15763b09718138251aba4c15d30c222e): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:27:48 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:27:48.274228 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(9bf5519e8be044b0df7076eb74105cdc15763b09718138251aba4c15d30c222e): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:27:48 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:27:48.274281 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(9bf5519e8be044b0df7076eb74105cdc15763b09718138251aba4c15d30c222e): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:27:48 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:27:48.274353 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(9bf5519e8be044b0df7076eb74105cdc15763b09718138251aba4c15d30c222e): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 19:27:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:27:54.872264 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:27:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:27:54.872333 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:27:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:27:56.291751 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:27:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:27:56.291995 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:27:56 
ip-10-0-136-68 kubenswrapper[2199]: E0223 19:27:56.292233 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:27:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:27:56.292302 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:27:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:27:57.235619901Z" level=info msg="NetworkStart: stopping network for sandbox 92b85440c01ec85cd18ea66bb9c1920e63f2169cb0213f7a1c9281362216915b" id=d67190a3-6151-4d87-a778-502d7d075927 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:27:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:27:57.235744276Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:92b85440c01ec85cd18ea66bb9c1920e63f2169cb0213f7a1c9281362216915b UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/c0f30fbb-aaae-4ef1-8c58-a7edefa7e7cb Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:27:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:27:57.235773187Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:27:57 
ip-10-0-136-68 crio[2158]: time="2023-02-23 19:27:57.235783268Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:27:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:27:57.235793147Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:28:01 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:28:01.217228 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:28:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:01.217607118Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=d6d5801f-2b64-4c8c-8067-ebddadbd58e4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:28:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:01.217660808Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:28:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:01.222999552Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:89b8d13a5199555c5a8195f712b44e3fd765d92d607920e8afe4d902aeeec630 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/92d72f5f-11c8-481f-ac99-b75b0815ebe7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:28:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:01.223027712Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:28:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:01.245328095Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(9e61df7cdeee3b7c37e779a615992276a537ac1726a6160cb64e419d2c726267): error 
removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=fdf36eea-6382-42f1-a858-c9a2decd435e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:28:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:01.245361244Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9e61df7cdeee3b7c37e779a615992276a537ac1726a6160cb64e419d2c726267" id=fdf36eea-6382-42f1-a858-c9a2decd435e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:28:01 ip-10-0-136-68 systemd[1]: run-utsns-03be6f12\x2deac0\x2d42c1\x2d8880\x2d96b78672fe27.mount: Deactivated successfully. Feb 23 19:28:01 ip-10-0-136-68 systemd[1]: run-ipcns-03be6f12\x2deac0\x2d42c1\x2d8880\x2d96b78672fe27.mount: Deactivated successfully. Feb 23 19:28:01 ip-10-0-136-68 systemd[1]: run-netns-03be6f12\x2deac0\x2d42c1\x2d8880\x2d96b78672fe27.mount: Deactivated successfully. 
Feb 23 19:28:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:01.270322232Z" level=info msg="runSandbox: deleting pod ID 9e61df7cdeee3b7c37e779a615992276a537ac1726a6160cb64e419d2c726267 from idIndex" id=fdf36eea-6382-42f1-a858-c9a2decd435e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:28:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:01.270367633Z" level=info msg="runSandbox: removing pod sandbox 9e61df7cdeee3b7c37e779a615992276a537ac1726a6160cb64e419d2c726267" id=fdf36eea-6382-42f1-a858-c9a2decd435e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:28:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:01.270392303Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9e61df7cdeee3b7c37e779a615992276a537ac1726a6160cb64e419d2c726267" id=fdf36eea-6382-42f1-a858-c9a2decd435e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:28:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:01.270404566Z" level=info msg="runSandbox: unmounting shmPath for sandbox 9e61df7cdeee3b7c37e779a615992276a537ac1726a6160cb64e419d2c726267" id=fdf36eea-6382-42f1-a858-c9a2decd435e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:28:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:01.288306080Z" level=info msg="runSandbox: removing pod sandbox from storage: 9e61df7cdeee3b7c37e779a615992276a537ac1726a6160cb64e419d2c726267" id=fdf36eea-6382-42f1-a858-c9a2decd435e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:28:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:01.289845594Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=fdf36eea-6382-42f1-a858-c9a2decd435e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:28:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:01.289872410Z" level=info msg="runSandbox: releasing pod sandbox name: 
k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=fdf36eea-6382-42f1-a858-c9a2decd435e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:28:01 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:28:01.290086 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(9e61df7cdeee3b7c37e779a615992276a537ac1726a6160cb64e419d2c726267): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Feb 23 19:28:01 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:28:01.290140 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(9e61df7cdeee3b7c37e779a615992276a537ac1726a6160cb64e419d2c726267): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:28:01 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:28:01.290165 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(9e61df7cdeee3b7c37e779a615992276a537ac1726a6160cb64e419d2c726267): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:28:01 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:28:01.290219 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(9e61df7cdeee3b7c37e779a615992276a537ac1726a6160cb64e419d2c726267): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 19:28:02 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-9e61df7cdeee3b7c37e779a615992276a537ac1726a6160cb64e419d2c726267-userdata-shm.mount: Deactivated successfully. Feb 23 19:28:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:28:04.873012 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:28:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:28:04.873066 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:28:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:11.244094214Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(e00cc3b28dacf8fd54644834b4be8a15bb8948b3a353462a43bc9467eceddce0): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=1bd592b4-cb1e-43fa-afbd-efdfa914f94d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:28:11 ip-10-0-136-68 crio[2158]: 
time="2023-02-23 19:28:11.244149948Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e00cc3b28dacf8fd54644834b4be8a15bb8948b3a353462a43bc9467eceddce0" id=1bd592b4-cb1e-43fa-afbd-efdfa914f94d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:28:11 ip-10-0-136-68 systemd[1]: run-utsns-8d173777\x2d00e0\x2d47ff\x2d8c4b\x2df98cecc1d30e.mount: Deactivated successfully. Feb 23 19:28:11 ip-10-0-136-68 systemd[1]: run-ipcns-8d173777\x2d00e0\x2d47ff\x2d8c4b\x2df98cecc1d30e.mount: Deactivated successfully. Feb 23 19:28:11 ip-10-0-136-68 systemd[1]: run-netns-8d173777\x2d00e0\x2d47ff\x2d8c4b\x2df98cecc1d30e.mount: Deactivated successfully. Feb 23 19:28:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:11.263322017Z" level=info msg="runSandbox: deleting pod ID e00cc3b28dacf8fd54644834b4be8a15bb8948b3a353462a43bc9467eceddce0 from idIndex" id=1bd592b4-cb1e-43fa-afbd-efdfa914f94d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:28:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:11.263357789Z" level=info msg="runSandbox: removing pod sandbox e00cc3b28dacf8fd54644834b4be8a15bb8948b3a353462a43bc9467eceddce0" id=1bd592b4-cb1e-43fa-afbd-efdfa914f94d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:28:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:11.263388377Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e00cc3b28dacf8fd54644834b4be8a15bb8948b3a353462a43bc9467eceddce0" id=1bd592b4-cb1e-43fa-afbd-efdfa914f94d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:28:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:11.263400922Z" level=info msg="runSandbox: unmounting shmPath for sandbox e00cc3b28dacf8fd54644834b4be8a15bb8948b3a353462a43bc9467eceddce0" id=1bd592b4-cb1e-43fa-afbd-efdfa914f94d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:28:11 ip-10-0-136-68 systemd[1]: 
run-containers-storage-overlay\x2dcontainers-e00cc3b28dacf8fd54644834b4be8a15bb8948b3a353462a43bc9467eceddce0-userdata-shm.mount: Deactivated successfully. Feb 23 19:28:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:11.269323577Z" level=info msg="runSandbox: removing pod sandbox from storage: e00cc3b28dacf8fd54644834b4be8a15bb8948b3a353462a43bc9467eceddce0" id=1bd592b4-cb1e-43fa-afbd-efdfa914f94d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:28:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:11.270898065Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=1bd592b4-cb1e-43fa-afbd-efdfa914f94d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:28:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:11.270928519Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=1bd592b4-cb1e-43fa-afbd-efdfa914f94d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:28:11 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:28:11.271119 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(e00cc3b28dacf8fd54644834b4be8a15bb8948b3a353462a43bc9467eceddce0): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:28:11 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:28:11.271329 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(e00cc3b28dacf8fd54644834b4be8a15bb8948b3a353462a43bc9467eceddce0): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:28:11 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:28:11.271367 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(e00cc3b28dacf8fd54644834b4be8a15bb8948b3a353462a43bc9467eceddce0): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:28:11 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:28:11.271446 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(e00cc3b28dacf8fd54644834b4be8a15bb8948b3a353462a43bc9467eceddce0): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 19:28:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:14.245190334Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(6fd732e80bca6b602f272b608ca4cd6b1acc278c3cf18c213fed81b192f04602): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=560faafe-8e47-4c28-a26b-8d30255887ed name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:28:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:14.245270826Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 6fd732e80bca6b602f272b608ca4cd6b1acc278c3cf18c213fed81b192f04602" id=560faafe-8e47-4c28-a26b-8d30255887ed name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:28:14 ip-10-0-136-68 systemd[1]: run-utsns-569a6ede\x2d4ce8\x2d470f\x2d8b89\x2da2faaf250e0e.mount: Deactivated successfully. Feb 23 19:28:14 ip-10-0-136-68 systemd[1]: run-ipcns-569a6ede\x2d4ce8\x2d470f\x2d8b89\x2da2faaf250e0e.mount: Deactivated successfully. Feb 23 19:28:14 ip-10-0-136-68 systemd[1]: run-netns-569a6ede\x2d4ce8\x2d470f\x2d8b89\x2da2faaf250e0e.mount: Deactivated successfully. 
Feb 23 19:28:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:14.261350584Z" level=info msg="runSandbox: deleting pod ID 6fd732e80bca6b602f272b608ca4cd6b1acc278c3cf18c213fed81b192f04602 from idIndex" id=560faafe-8e47-4c28-a26b-8d30255887ed name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:28:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:14.261390429Z" level=info msg="runSandbox: removing pod sandbox 6fd732e80bca6b602f272b608ca4cd6b1acc278c3cf18c213fed81b192f04602" id=560faafe-8e47-4c28-a26b-8d30255887ed name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:28:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:14.261420547Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 6fd732e80bca6b602f272b608ca4cd6b1acc278c3cf18c213fed81b192f04602" id=560faafe-8e47-4c28-a26b-8d30255887ed name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:28:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:14.261435736Z" level=info msg="runSandbox: unmounting shmPath for sandbox 6fd732e80bca6b602f272b608ca4cd6b1acc278c3cf18c213fed81b192f04602" id=560faafe-8e47-4c28-a26b-8d30255887ed name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:28:14 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-6fd732e80bca6b602f272b608ca4cd6b1acc278c3cf18c213fed81b192f04602-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:28:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:14.268307135Z" level=info msg="runSandbox: removing pod sandbox from storage: 6fd732e80bca6b602f272b608ca4cd6b1acc278c3cf18c213fed81b192f04602" id=560faafe-8e47-4c28-a26b-8d30255887ed name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:28:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:14.269827020Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=560faafe-8e47-4c28-a26b-8d30255887ed name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:28:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:14.269858252Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=560faafe-8e47-4c28-a26b-8d30255887ed name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:28:14 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:28:14.270074 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(6fd732e80bca6b602f272b608ca4cd6b1acc278c3cf18c213fed81b192f04602): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:28:14 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:28:14.270145 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(6fd732e80bca6b602f272b608ca4cd6b1acc278c3cf18c213fed81b192f04602): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:28:14 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:28:14.270204 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(6fd732e80bca6b602f272b608ca4cd6b1acc278c3cf18c213fed81b192f04602): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:28:14 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:28:14.270338 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(6fd732e80bca6b602f272b608ca4cd6b1acc278c3cf18c213fed81b192f04602): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 19:28:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:28:14.872524 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:28:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:28:14.872583 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:28:16 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:28:16.216498 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:28:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:16.217306080Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=62ac22e2-aab9-43e7-9f4c-c44eec41f5e8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:28:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:16.217375362Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:28:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:16.223194546Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:3d90a96c3776d6d7653c1078bdb08a3d44ea915af83d26818608509e6fac8d6e UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/cb23f9b4-80a3-4924-94ec-c2e4a130616e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:28:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:16.223228287Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:28:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:28:24.217017 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 19:28:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:24.217467602Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=e7b171f0-e460-4b0e-b116-379cb6b84baf name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:28:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:24.217534410Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:28:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:24.223480389Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:2c7e0ce2b6984820bd50c91cc15700635834ea847b4be10afd05045dd2fc59f2 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/cbd25aae-e54d-403d-838c-23225952370f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:28:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:24.223517368Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:28:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:28:24.872613 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:28:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:28:24.872674 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:28:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:28:26.292022 2199 remote_runtime.go:479] "ExecSync cmd from 
runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:28:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:28:26.292343 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:28:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:28:26.292567 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:28:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:28:26.292610 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 
19:28:28 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:28:28.390123 2199 kubelet.go:2219] "SyncLoop ADD" source="api" pods="[openshift-debug-n5lxf/ip-10-0-136-68us-west-2computeinternal-debug]" Feb 23 19:28:28 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:28:28.390164 2199 topology_manager.go:210] "Topology Admit Handler" Feb 23 19:28:28 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:28:28.390215 2199 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="faee5572-050c-4fe2-b0a1-1aa9ae48ce75" containerName="container-00" Feb 23 19:28:28 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:28:28.390224 2199 state_mem.go:107] "Deleted CPUSet assignment" podUID="faee5572-050c-4fe2-b0a1-1aa9ae48ce75" containerName="container-00" Feb 23 19:28:28 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:28:28.390272 2199 memory_manager.go:346] "RemoveStaleState removing state" podUID="faee5572-050c-4fe2-b0a1-1aa9ae48ce75" containerName="container-00" Feb 23 19:28:28 ip-10-0-136-68 systemd[1]: Created slice libcontainer container kubepods-besteffort-pod18fdfb87_d654_42fc_9d35_d7db9b65ab35.slice. 
Feb 23 19:28:28 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:28:28.554393 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/18fdfb87-d654-42fc-9d35-d7db9b65ab35-host\") pod \"ip-10-0-136-68us-west-2computeinternal-debug\" (UID: \"18fdfb87-d654-42fc-9d35-d7db9b65ab35\") " pod="openshift-debug-n5lxf/ip-10-0-136-68us-west-2computeinternal-debug" Feb 23 19:28:28 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:28:28.554461 2199 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbkl5\" (UniqueName: \"kubernetes.io/projected/18fdfb87-d654-42fc-9d35-d7db9b65ab35-kube-api-access-qbkl5\") pod \"ip-10-0-136-68us-west-2computeinternal-debug\" (UID: \"18fdfb87-d654-42fc-9d35-d7db9b65ab35\") " pod="openshift-debug-n5lxf/ip-10-0-136-68us-west-2computeinternal-debug" Feb 23 19:28:28 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:28:28.655411 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/18fdfb87-d654-42fc-9d35-d7db9b65ab35-host\") pod \"ip-10-0-136-68us-west-2computeinternal-debug\" (UID: \"18fdfb87-d654-42fc-9d35-d7db9b65ab35\") " pod="openshift-debug-n5lxf/ip-10-0-136-68us-west-2computeinternal-debug" Feb 23 19:28:28 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:28:28.655470 2199 reconciler_common.go:228] "operationExecutor.MountVolume started for volume \"kube-api-access-qbkl5\" (UniqueName: \"kubernetes.io/projected/18fdfb87-d654-42fc-9d35-d7db9b65ab35-kube-api-access-qbkl5\") pod \"ip-10-0-136-68us-west-2computeinternal-debug\" (UID: \"18fdfb87-d654-42fc-9d35-d7db9b65ab35\") " pod="openshift-debug-n5lxf/ip-10-0-136-68us-west-2computeinternal-debug" Feb 23 19:28:28 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:28:28.655553 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/18fdfb87-d654-42fc-9d35-d7db9b65ab35-host\") pod \"ip-10-0-136-68us-west-2computeinternal-debug\" (UID: \"18fdfb87-d654-42fc-9d35-d7db9b65ab35\") " pod="openshift-debug-n5lxf/ip-10-0-136-68us-west-2computeinternal-debug" Feb 23 19:28:28 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:28:28.672980 2199 operation_generator.go:730] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbkl5\" (UniqueName: \"kubernetes.io/projected/18fdfb87-d654-42fc-9d35-d7db9b65ab35-kube-api-access-qbkl5\") pod \"ip-10-0-136-68us-west-2computeinternal-debug\" (UID: \"18fdfb87-d654-42fc-9d35-d7db9b65ab35\") " pod="openshift-debug-n5lxf/ip-10-0-136-68us-west-2computeinternal-debug" Feb 23 19:28:28 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:28:28.706978 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-debug-n5lxf/ip-10-0-136-68us-west-2computeinternal-debug" Feb 23 19:28:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:28.707444724Z" level=info msg="Running pod sandbox: openshift-debug-n5lxf/ip-10-0-136-68us-west-2computeinternal-debug/POD" id=4f80fd9a-3293-4292-b7c7-0b5a941febed name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:28:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:28.707510360Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:28:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:28.710904708Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ping_group_range 0 2147483647}: \"net.ipv4.ping_group_range\" not allowed with host net enabled" id=4f80fd9a-3293-4292-b7c7-0b5a941febed name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:28:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:28.713663867Z" level=info msg="Ran pod sandbox ff50a6bc0c32abb113a766dcb6da215811401f50d35b297d15939572fb30668b with infra container: openshift-debug-n5lxf/ip-10-0-136-68us-west-2computeinternal-debug/POD" 
id=4f80fd9a-3293-4292-b7c7-0b5a941febed name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:28:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:28.714458118Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:a4b875c6248aece334139248c2da87359b6ec3961d701b6643ace31db1b51d12" id=67a801a2-1e4e-459a-94a4-eef02a5b2af4 name=/runtime.v1.ImageService/ImageStatus Feb 23 19:28:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:28.714624710Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:25563a58e011c8f5e5ce0ad0855a11a739335cfafef29c46935ce1be3de8dd03,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:a4b875c6248aece334139248c2da87359b6ec3961d701b6643ace31db1b51d12],Size_:792105820,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=67a801a2-1e4e-459a-94a4-eef02a5b2af4 name=/runtime.v1.ImageService/ImageStatus Feb 23 19:28:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:28.715148107Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:a4b875c6248aece334139248c2da87359b6ec3961d701b6643ace31db1b51d12" id=702799d0-f677-4edf-aa46-d62f6d62e3e0 name=/runtime.v1.ImageService/ImageStatus Feb 23 19:28:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:28.715329624Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:25563a58e011c8f5e5ce0ad0855a11a739335cfafef29c46935ce1be3de8dd03,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:a4b875c6248aece334139248c2da87359b6ec3961d701b6643ace31db1b51d12],Size_:792105820,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=702799d0-f677-4edf-aa46-d62f6d62e3e0 name=/runtime.v1.ImageService/ImageStatus Feb 23 19:28:28 ip-10-0-136-68 crio[2158]: 
time="2023-02-23 19:28:28.715830310Z" level=info msg="Creating container: openshift-debug-n5lxf/ip-10-0-136-68us-west-2computeinternal-debug/container-00" id=2a14259f-69a2-4b95-878e-7cea4091a19e name=/runtime.v1.RuntimeService/CreateContainer Feb 23 19:28:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:28.715929338Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:28:28 ip-10-0-136-68 systemd[1]: Started crio-conmon-6d20420ea35b9cc04246181776dc176d2cfe3fc5ed4714808450a68134f3bc44.scope. Feb 23 19:28:28 ip-10-0-136-68 systemd[1]: Started libcontainer container 6d20420ea35b9cc04246181776dc176d2cfe3fc5ed4714808450a68134f3bc44. Feb 23 19:28:28 ip-10-0-136-68 conmon[16267]: conmon 6d20420ea35b9cc04246 : Failed to write to cgroup.event_control Operation not supported Feb 23 19:28:28 ip-10-0-136-68 systemd[1]: crio-conmon-6d20420ea35b9cc04246181776dc176d2cfe3fc5ed4714808450a68134f3bc44.scope: Deactivated successfully. Feb 23 19:28:28 ip-10-0-136-68 systemd[1]: crio-6d20420ea35b9cc04246181776dc176d2cfe3fc5ed4714808450a68134f3bc44.scope: Deactivated successfully. Feb 23 19:28:29 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:28:29.094283 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-debug-n5lxf/ip-10-0-136-68us-west-2computeinternal-debug" event=&{ID:18fdfb87-d654-42fc-9d35-d7db9b65ab35 Type:ContainerStarted Data:ff50a6bc0c32abb113a766dcb6da215811401f50d35b297d15939572fb30668b} Feb 23 19:28:29 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:28:29.216610 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:28:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:29.217001356Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=0df6e076-7990-4d09-99a3-3b043f3d1166 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:28:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:29.217109273Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:28:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:29.225511489Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:12fca5f1c26b85885eefe53cb51e19d9e230b3339432b548f77a384e4ab04bb3 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/ba37bef1-7822-4f72-9b12-724846fff582 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:28:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:29.225536778Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:28:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:32.542946116Z" level=warning msg="Failed to find container exit file for 6d20420ea35b9cc04246181776dc176d2cfe3fc5ed4714808450a68134f3bc44: timed out waiting for the condition" id=2a14259f-69a2-4b95-878e-7cea4091a19e name=/runtime.v1.RuntimeService/CreateContainer Feb 23 19:28:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:32.563158890Z" level=info msg="Created container 6d20420ea35b9cc04246181776dc176d2cfe3fc5ed4714808450a68134f3bc44: openshift-debug-n5lxf/ip-10-0-136-68us-west-2computeinternal-debug/container-00" id=2a14259f-69a2-4b95-878e-7cea4091a19e name=/runtime.v1.RuntimeService/CreateContainer Feb 23 19:28:32 ip-10-0-136-68 crio[2158]: 
time="2023-02-23 19:28:32.563610191Z" level=info msg="Starting container: 6d20420ea35b9cc04246181776dc176d2cfe3fc5ed4714808450a68134f3bc44" id=c1fb69fd-406d-44a9-8dcf-c3cd3d7d76a4 name=/runtime.v1.RuntimeService/StartContainer Feb 23 19:28:32 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:28:32.563809 2199 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = container 6d20420ea35b9cc04246181776dc176d2cfe3fc5ed4714808450a68134f3bc44 is not in created state: stopped" containerID="6d20420ea35b9cc04246181776dc176d2cfe3fc5ed4714808450a68134f3bc44" Feb 23 19:28:32 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:28:32.563916 2199 kuberuntime_manager.go:872] container &Container{Name:container-00,Image:registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:a4b875c6248aece334139248c2da87359b6ec3961d701b6643ace31db1b51d12,Command:[/bin/sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:TMOUT,Value:900,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host,ReadOnly:false,MountPath:/host,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-qbkl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:true,StdinOnce:true,TTY:true,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod 
ip-10-0-136-68us-west-2computeinternal-debug_openshift-debug-n5lxf(18fdfb87-d654-42fc-9d35-d7db9b65ab35): RunContainerError: container 6d20420ea35b9cc04246181776dc176d2cfe3fc5ed4714808450a68134f3bc44 is not in created state: stopped Feb 23 19:28:32 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:28:32.563949 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with RunContainerError: \"container 6d20420ea35b9cc04246181776dc176d2cfe3fc5ed4714808450a68134f3bc44 is not in created state: stopped\"" pod="openshift-debug-n5lxf/ip-10-0-136-68us-west-2computeinternal-debug" podUID=18fdfb87-d654-42fc-9d35-d7db9b65ab35 Feb 23 19:28:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:28:34.873008 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:28:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:28:34.873065 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:28:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:28:34.873094 2199 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 19:28:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:28:34.873620 2199 kuberuntime_manager.go:659] "Message for Container of pod" containerName="csi-driver" containerStatusID={Type:cri-o ID:a1f9b41ad75b0c66c06022c4e84849eb717e7842b78a89bd824512db3a8f4330} pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" containerMessage="Container 
csi-driver failed liveness probe, will be restarted" Feb 23 19:28:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:28:34.873781 2199 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" containerID="cri-o://a1f9b41ad75b0c66c06022c4e84849eb717e7842b78a89bd824512db3a8f4330" gracePeriod=30 Feb 23 19:28:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:34.874018550Z" level=info msg="Stopping container: a1f9b41ad75b0c66c06022c4e84849eb717e7842b78a89bd824512db3a8f4330 (timeout: 30s)" id=41d4371b-4965-4be1-9e7c-a544eea9bb37 name=/runtime.v1.RuntimeService/StopContainer Feb 23 19:28:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:36.908932624Z" level=warning msg="Failed to find container exit file for 6d20420ea35b9cc04246181776dc176d2cfe3fc5ed4714808450a68134f3bc44: timed out waiting for the condition" id=5d27bcbc-0c14-44f8-9816-4c2236bf112e name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:28:36 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:28:36.915520 2199 generic.go:332] "Generic (PLEG): container finished" podID=18fdfb87-d654-42fc-9d35-d7db9b65ab35 containerID="6d20420ea35b9cc04246181776dc176d2cfe3fc5ed4714808450a68134f3bc44" exitCode=-1 Feb 23 19:28:36 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:28:36.915564 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-debug-n5lxf/ip-10-0-136-68us-west-2computeinternal-debug" event=&{ID:18fdfb87-d654-42fc-9d35-d7db9b65ab35 Type:ContainerDied Data:6d20420ea35b9cc04246181776dc176d2cfe3fc5ed4714808450a68134f3bc44} Feb 23 19:28:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:37.917865686Z" level=info msg="Stopping pod sandbox: ff50a6bc0c32abb113a766dcb6da215811401f50d35b297d15939572fb30668b" id=e7bbb097-04b1-4d10-bd4d-a3670a2b64e4 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 19:28:37 ip-10-0-136-68 systemd[1]: 
var-lib-containers-storage-overlay-207f70ddb8bbd778efa06a7b91efedf36ef86e2c321d38f52d322691dcbf0532-merged.mount: Deactivated successfully. Feb 23 19:28:37 ip-10-0-136-68 systemd[1]: run-utsns-194e53cc\x2d67b7\x2d458f\x2da53e\x2d448b8d126cdc.mount: Deactivated successfully. Feb 23 19:28:37 ip-10-0-136-68 systemd[1]: run-ipcns-194e53cc\x2d67b7\x2d458f\x2da53e\x2d448b8d126cdc.mount: Deactivated successfully. Feb 23 19:28:37 ip-10-0-136-68 systemd[1]: run-netns-194e53cc\x2d67b7\x2d458f\x2da53e\x2d448b8d126cdc.mount: Deactivated successfully. Feb 23 19:28:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:37.953311963Z" level=info msg="Stopped pod sandbox: ff50a6bc0c32abb113a766dcb6da215811401f50d35b297d15939572fb30668b" id=e7bbb097-04b1-4d10-bd4d-a3670a2b64e4 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 19:28:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:38.635971981Z" level=warning msg="Failed to find container exit file for a1f9b41ad75b0c66c06022c4e84849eb717e7842b78a89bd824512db3a8f4330: timed out waiting for the condition" id=41d4371b-4965-4be1-9e7c-a544eea9bb37 name=/runtime.v1.RuntimeService/StopContainer Feb 23 19:28:38 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-b220240b1779eca9c3b2d4297a203df59a83a2472f79c2629f4322a5041093e7-merged.mount: Deactivated successfully. 
Feb 23 19:28:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:40.758966440Z" level=warning msg="Failed to find container exit file for 6d20420ea35b9cc04246181776dc176d2cfe3fc5ed4714808450a68134f3bc44: timed out waiting for the condition" id=d7d2ff15-db4a-4746-9a27-9d0ad3988624 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 19:28:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:41.706706348Z" level=warning msg="Failed to find container exit file for 6d20420ea35b9cc04246181776dc176d2cfe3fc5ed4714808450a68134f3bc44: timed out waiting for the condition" id=10e29271-e99e-484d-9276-00c38b558719 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 19:28:41 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:28:41.824558 2199 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/18fdfb87-d654-42fc-9d35-d7db9b65ab35-host\") pod \"18fdfb87-d654-42fc-9d35-d7db9b65ab35\" (UID: \"18fdfb87-d654-42fc-9d35-d7db9b65ab35\") "
Feb 23 19:28:41 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:28:41.824626 2199 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbkl5\" (UniqueName: \"kubernetes.io/projected/18fdfb87-d654-42fc-9d35-d7db9b65ab35-kube-api-access-qbkl5\") pod \"18fdfb87-d654-42fc-9d35-d7db9b65ab35\" (UID: \"18fdfb87-d654-42fc-9d35-d7db9b65ab35\") "
Feb 23 19:28:41 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:28:41.824647 2199 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18fdfb87-d654-42fc-9d35-d7db9b65ab35-host" (OuterVolumeSpecName: "host") pod "18fdfb87-d654-42fc-9d35-d7db9b65ab35" (UID: "18fdfb87-d654-42fc-9d35-d7db9b65ab35"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 19:28:41 ip-10-0-136-68 systemd[1]: var-lib-kubelet-pods-18fdfb87\x2dd654\x2d42fc\x2d9d35\x2dd7db9b65ab35-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqbkl5.mount: Deactivated successfully.
Feb 23 19:28:41 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:28:41.836617 2199 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18fdfb87-d654-42fc-9d35-d7db9b65ab35-kube-api-access-qbkl5" (OuterVolumeSpecName: "kube-api-access-qbkl5") pod "18fdfb87-d654-42fc-9d35-d7db9b65ab35" (UID: "18fdfb87-d654-42fc-9d35-d7db9b65ab35"). InnerVolumeSpecName "kube-api-access-qbkl5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 19:28:41 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:28:41.925102 2199 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-qbkl5\" (UniqueName: \"kubernetes.io/projected/18fdfb87-d654-42fc-9d35-d7db9b65ab35-kube-api-access-qbkl5\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 19:28:41 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:28:41.925135 2199 reconciler_common.go:295] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/18fdfb87-d654-42fc-9d35-d7db9b65ab35-host\") on node \"ip-10-0-136-68.us-west-2.compute.internal\" DevicePath \"\""
Feb 23 19:28:42 ip-10-0-136-68 systemd[1]: Removed slice libcontainer container kubepods-besteffort-pod18fdfb87_d654_42fc_9d35_d7db9b65ab35.slice.
Feb 23 19:28:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:42.245169950Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(92b85440c01ec85cd18ea66bb9c1920e63f2169cb0213f7a1c9281362216915b): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d67190a3-6151-4d87-a778-502d7d075927 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:28:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:42.245227585Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 92b85440c01ec85cd18ea66bb9c1920e63f2169cb0213f7a1c9281362216915b" id=d67190a3-6151-4d87-a778-502d7d075927 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:28:42 ip-10-0-136-68 systemd[1]: run-utsns-c0f30fbb\x2daaae\x2d4ef1\x2d8c58\x2da7edefa7e7cb.mount: Deactivated successfully.
Feb 23 19:28:42 ip-10-0-136-68 systemd[1]: run-ipcns-c0f30fbb\x2daaae\x2d4ef1\x2d8c58\x2da7edefa7e7cb.mount: Deactivated successfully.
Feb 23 19:28:42 ip-10-0-136-68 systemd[1]: run-netns-c0f30fbb\x2daaae\x2d4ef1\x2d8c58\x2da7edefa7e7cb.mount: Deactivated successfully.
Feb 23 19:28:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:42.259326561Z" level=info msg="runSandbox: deleting pod ID 92b85440c01ec85cd18ea66bb9c1920e63f2169cb0213f7a1c9281362216915b from idIndex" id=d67190a3-6151-4d87-a778-502d7d075927 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:28:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:42.259358685Z" level=info msg="runSandbox: removing pod sandbox 92b85440c01ec85cd18ea66bb9c1920e63f2169cb0213f7a1c9281362216915b" id=d67190a3-6151-4d87-a778-502d7d075927 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:28:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:42.259385561Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 92b85440c01ec85cd18ea66bb9c1920e63f2169cb0213f7a1c9281362216915b" id=d67190a3-6151-4d87-a778-502d7d075927 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:28:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:42.259398983Z" level=info msg="runSandbox: unmounting shmPath for sandbox 92b85440c01ec85cd18ea66bb9c1920e63f2169cb0213f7a1c9281362216915b" id=d67190a3-6151-4d87-a778-502d7d075927 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:28:42 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-92b85440c01ec85cd18ea66bb9c1920e63f2169cb0213f7a1c9281362216915b-userdata-shm.mount: Deactivated successfully.
Feb 23 19:28:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:42.266308095Z" level=info msg="runSandbox: removing pod sandbox from storage: 92b85440c01ec85cd18ea66bb9c1920e63f2169cb0213f7a1c9281362216915b" id=d67190a3-6151-4d87-a778-502d7d075927 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:28:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:42.267772554Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=d67190a3-6151-4d87-a778-502d7d075927 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:28:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:42.267800542Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=d67190a3-6151-4d87-a778-502d7d075927 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:28:42 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:28:42.267979 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(92b85440c01ec85cd18ea66bb9c1920e63f2169cb0213f7a1c9281362216915b): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 19:28:42 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:28:42.268030 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(92b85440c01ec85cd18ea66bb9c1920e63f2169cb0213f7a1c9281362216915b): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 19:28:42 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:28:42.268053 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(92b85440c01ec85cd18ea66bb9c1920e63f2169cb0213f7a1c9281362216915b): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 19:28:42 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:28:42.268106 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(92b85440c01ec85cd18ea66bb9c1920e63f2169cb0213f7a1c9281362216915b): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1
Feb 23 19:28:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:42.432910189Z" level=warning msg="Failed to find container exit file for a1f9b41ad75b0c66c06022c4e84849eb717e7842b78a89bd824512db3a8f4330: timed out waiting for the condition" id=41d4371b-4965-4be1-9e7c-a544eea9bb37 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 19:28:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:42.435011201Z" level=info msg="Stopped container a1f9b41ad75b0c66c06022c4e84849eb717e7842b78a89bd824512db3a8f4330: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=41d4371b-4965-4be1-9e7c-a544eea9bb37 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 19:28:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:42.435722131Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=8980c7da-b746-4d45-92e2-a03abc184c77 name=/runtime.v1.ImageService/ImageStatus
Feb 23 19:28:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:42.435894386Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=8980c7da-b746-4d45-92e2-a03abc184c77 name=/runtime.v1.ImageService/ImageStatus
Feb 23 19:28:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:42.436463892Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=212ef9f9-65a1-41de-a70d-cc31473be322 name=/runtime.v1.ImageService/ImageStatus
Feb 23 19:28:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:42.436621082Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=212ef9f9-65a1-41de-a70d-cc31473be322 name=/runtime.v1.ImageService/ImageStatus
Feb 23 19:28:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:42.437285598Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=d7c4b9b4-b7bd-46ec-85d0-d29b1664b3f3 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 19:28:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:42.437392560Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 19:28:42 ip-10-0-136-68 systemd[1]: Started crio-conmon-98f90f5c8a2351bc19612c9c53076f7aff4dd02c3287d23927b0f52af8cae9ba.scope.
Feb 23 19:28:42 ip-10-0-136-68 systemd[1]: Started libcontainer container 98f90f5c8a2351bc19612c9c53076f7aff4dd02c3287d23927b0f52af8cae9ba.
Feb 23 19:28:42 ip-10-0-136-68 conmon[16408]: conmon 98f90f5c8a2351bc1961 : Failed to write to cgroup.event_control Operation not supported
Feb 23 19:28:42 ip-10-0-136-68 systemd[1]: crio-conmon-98f90f5c8a2351bc19612c9c53076f7aff4dd02c3287d23927b0f52af8cae9ba.scope: Deactivated successfully.
Feb 23 19:28:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:42.554986800Z" level=info msg="Created container 98f90f5c8a2351bc19612c9c53076f7aff4dd02c3287d23927b0f52af8cae9ba: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=d7c4b9b4-b7bd-46ec-85d0-d29b1664b3f3 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 19:28:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:42.555384013Z" level=info msg="Starting container: 98f90f5c8a2351bc19612c9c53076f7aff4dd02c3287d23927b0f52af8cae9ba" id=5a6c7459-0b37-4868-b36c-3bdaacf2c1f1 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 19:28:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:42.563005377Z" level=info msg="Started container" PID=16420 containerID=98f90f5c8a2351bc19612c9c53076f7aff4dd02c3287d23927b0f52af8cae9ba description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver id=5a6c7459-0b37-4868-b36c-3bdaacf2c1f1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4ac357af36e7dd4792f93249425e93fd84aed45840e9fbb3d0ae6c0af964798
Feb 23 19:28:42 ip-10-0-136-68 systemd[1]: crio-98f90f5c8a2351bc19612c9c53076f7aff4dd02c3287d23927b0f52af8cae9ba.scope: Deactivated successfully.
Feb 23 19:28:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:42.657499253Z" level=warning msg="Failed to find container exit file for 6d20420ea35b9cc04246181776dc176d2cfe3fc5ed4714808450a68134f3bc44: timed out waiting for the condition" id=c7cfa1f3-d1d7-47ce-9701-a32212afcb04 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 19:28:42 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:28:42.663749 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-debug-n5lxf/ip-10-0-136-68us-west-2computeinternal-debug" event=&{ID:18fdfb87-d654-42fc-9d35-d7db9b65ab35 Type:ContainerDied Data:ff50a6bc0c32abb113a766dcb6da215811401f50d35b297d15939572fb30668b}
Feb 23 19:28:42 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:28:42.663780 2199 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff50a6bc0c32abb113a766dcb6da215811401f50d35b297d15939572fb30668b"
Feb 23 19:28:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:44.536879936Z" level=warning msg="Failed to find container exit file for 6d20420ea35b9cc04246181776dc176d2cfe3fc5ed4714808450a68134f3bc44: timed out waiting for the condition" id=23ac597a-1034-4cfd-88f6-236f226d7f24 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 19:28:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:28:44.640312 2199 kubelet.go:2235] "SyncLoop DELETE" source="api" pods="[openshift-debug-n5lxf/ip-10-0-136-68us-west-2computeinternal-debug]"
Feb 23 19:28:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:28:44.644461 2199 kubelet.go:2229] "SyncLoop REMOVE" source="api" pods="[openshift-debug-n5lxf/ip-10-0-136-68us-west-2computeinternal-debug]"
Feb 23 19:28:46 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:28:46.218975 2199 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=18fdfb87-d654-42fc-9d35-d7db9b65ab35 path="/var/lib/kubelet/pods/18fdfb87-d654-42fc-9d35-d7db9b65ab35/volumes"
Feb 23 19:28:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:46.236530081Z" level=info msg="NetworkStart: stopping network for sandbox 89b8d13a5199555c5a8195f712b44e3fd765d92d607920e8afe4d902aeeec630" id=d6d5801f-2b64-4c8c-8067-ebddadbd58e4 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:28:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:46.236649810Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:89b8d13a5199555c5a8195f712b44e3fd765d92d607920e8afe4d902aeeec630 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/92d72f5f-11c8-481f-ac99-b75b0815ebe7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:28:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:46.236678366Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 19:28:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:46.236687628Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 19:28:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:46.236698030Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:28:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:46.413162847Z" level=warning msg="Failed to find container exit file for a1f9b41ad75b0c66c06022c4e84849eb717e7842b78a89bd824512db3a8f4330: timed out waiting for the condition" id=07378eb1-8f61-448a-aff3-1a34173454cd name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 19:28:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:50.174977205Z" level=warning msg="Failed to find container exit file for bc27041f5e41e1ab35ed5a792e05191cdd3af24058593adb65fe5d39351c9050: timed out waiting for the condition" id=99dd3c92-9e94-413a-aa28-75366ceb7f50 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 19:28:50 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:28:50.175943 2199 generic.go:332] "Generic (PLEG): container finished" podID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerID="a1f9b41ad75b0c66c06022c4e84849eb717e7842b78a89bd824512db3a8f4330" exitCode=-1
Feb 23 19:28:50 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:28:50.175983 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerDied Data:a1f9b41ad75b0c66c06022c4e84849eb717e7842b78a89bd824512db3a8f4330}
Feb 23 19:28:50 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:28:50.176011 2199 scope.go:115] "RemoveContainer" containerID="bc27041f5e41e1ab35ed5a792e05191cdd3af24058593adb65fe5d39351c9050"
Feb 23 19:28:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:53.936055237Z" level=warning msg="Failed to find container exit file for bc27041f5e41e1ab35ed5a792e05191cdd3af24058593adb65fe5d39351c9050: timed out waiting for the condition" id=5713f91b-8bee-4b51-8897-7ac4bdd503cb name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 19:28:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:28:54.873045 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 19:28:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:28:54.873108 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 19:28:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:54.941000498Z" level=warning msg="Failed to find container exit file for a1f9b41ad75b0c66c06022c4e84849eb717e7842b78a89bd824512db3a8f4330: timed out waiting for the condition" id=a6370f48-253f-48f4-a860-a9ae1942a00b name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 19:28:55 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:28:55.217407 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:28:55 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:28:55.217783 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:28:55 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:28:55.218055 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:28:55 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:28:55.218090 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 19:28:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:28:56.292239 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:28:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:28:56.292538 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:28:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:28:56.292804 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:28:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:28:56.292834 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 19:28:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:57.686068381Z" level=warning msg="Failed to find container exit file for bc27041f5e41e1ab35ed5a792e05191cdd3af24058593adb65fe5d39351c9050: timed out waiting for the condition" id=6d53046b-91a7-440b-9caf-449a8f1152b0 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 19:28:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:57.686688161Z" level=info msg="Removing container: bc27041f5e41e1ab35ed5a792e05191cdd3af24058593adb65fe5d39351c9050" id=083600a1-3085-48bc-b8ed-896b5218756a name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 19:28:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:58.691346905Z" level=warning msg="Failed to find container exit file for bc27041f5e41e1ab35ed5a792e05191cdd3af24058593adb65fe5d39351c9050: timed out waiting for the condition" id=98798379-0efa-45e6-8f1d-7f8fc7a7d69d name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 19:28:58 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:28:58.692326 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerStarted Data:98f90f5c8a2351bc19612c9c53076f7aff4dd02c3287d23927b0f52af8cae9ba}
Feb 23 19:28:58 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:28:58.692804 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 19:28:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:58.693074176Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=4376696c-d1de-4c47-8452-dd52b3b7e643 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:28:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:58.693135487Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 19:28:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:58.699504974Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:35bbd581c17d52c49640c008ba16bf762dbc651212e9ead26fb1364b061ebf98 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/7f1f25d2-e85e-49db-b4fc-b65e2ea58a5d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:28:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:28:58.699542634Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:29:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:01.234742686Z" level=info msg="NetworkStart: stopping network for sandbox 3d90a96c3776d6d7653c1078bdb08a3d44ea915af83d26818608509e6fac8d6e" id=62ac22e2-aab9-43e7-9f4c-c44eec41f5e8 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:29:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:01.234865058Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:3d90a96c3776d6d7653c1078bdb08a3d44ea915af83d26818608509e6fac8d6e UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/cb23f9b4-80a3-4924-94ec-c2e4a130616e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:29:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:01.234892613Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 19:29:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:01.234900604Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 19:29:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:01.234909437Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:29:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:01.449097090Z" level=warning msg="Failed to find container exit file for bc27041f5e41e1ab35ed5a792e05191cdd3af24058593adb65fe5d39351c9050: timed out waiting for the condition" id=083600a1-3085-48bc-b8ed-896b5218756a name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 19:29:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:01.462674314Z" level=info msg="Removed container bc27041f5e41e1ab35ed5a792e05191cdd3af24058593adb65fe5d39351c9050: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=083600a1-3085-48bc-b8ed-896b5218756a name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 19:29:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:29:04.872947 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 19:29:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:29:04.873011 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 19:29:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:05.446081552Z" level=warning msg="Failed to find container exit file for a1f9b41ad75b0c66c06022c4e84849eb717e7842b78a89bd824512db3a8f4330: timed out waiting for the condition" id=9dcb3eee-4d12-40be-9c14-090af9afd3e5 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 19:29:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:09.235199189Z" level=info msg="NetworkStart: stopping network for sandbox 2c7e0ce2b6984820bd50c91cc15700635834ea847b4be10afd05045dd2fc59f2" id=e7b171f0-e460-4b0e-b116-379cb6b84baf name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:29:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:09.235354282Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:2c7e0ce2b6984820bd50c91cc15700635834ea847b4be10afd05045dd2fc59f2 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/cbd25aae-e54d-403d-838c-23225952370f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:29:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:09.235383949Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 19:29:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:09.235394602Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 19:29:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:09.235403030Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:29:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:14.237449543Z" level=info msg="NetworkStart: stopping network for sandbox 12fca5f1c26b85885eefe53cb51e19d9e230b3339432b548f77a384e4ab04bb3" id=0df6e076-7990-4d09-99a3-3b043f3d1166 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:29:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:14.237576570Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:12fca5f1c26b85885eefe53cb51e19d9e230b3339432b548f77a384e4ab04bb3 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/ba37bef1-7822-4f72-9b12-724846fff582 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:29:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:14.237613333Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 19:29:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:14.237622704Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 19:29:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:14.237631853Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:29:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:29:14.871998 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 19:29:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:29:14.872056 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 19:29:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:29:24.872695 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 19:29:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:29:24.872756 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 19:29:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:29:26.292309 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:29:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:29:26.292644 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:29:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:29:26.292881 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:29:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:29:26.292927 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 19:29:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:31.246092607Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(89b8d13a5199555c5a8195f712b44e3fd765d92d607920e8afe4d902aeeec630): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d6d5801f-2b64-4c8c-8067-ebddadbd58e4 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:29:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:31.246137600Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 89b8d13a5199555c5a8195f712b44e3fd765d92d607920e8afe4d902aeeec630" id=d6d5801f-2b64-4c8c-8067-ebddadbd58e4 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:29:31 ip-10-0-136-68 systemd[1]: run-utsns-92d72f5f\x2d11c8\x2d481f\x2dac99\x2db75b0815ebe7.mount: Deactivated successfully.
Feb 23 19:29:31 ip-10-0-136-68 systemd[1]: run-ipcns-92d72f5f\x2d11c8\x2d481f\x2dac99\x2db75b0815ebe7.mount: Deactivated successfully. Feb 23 19:29:31 ip-10-0-136-68 systemd[1]: run-netns-92d72f5f\x2d11c8\x2d481f\x2dac99\x2db75b0815ebe7.mount: Deactivated successfully. Feb 23 19:29:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:31.272345278Z" level=info msg="runSandbox: deleting pod ID 89b8d13a5199555c5a8195f712b44e3fd765d92d607920e8afe4d902aeeec630 from idIndex" id=d6d5801f-2b64-4c8c-8067-ebddadbd58e4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:29:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:31.272389054Z" level=info msg="runSandbox: removing pod sandbox 89b8d13a5199555c5a8195f712b44e3fd765d92d607920e8afe4d902aeeec630" id=d6d5801f-2b64-4c8c-8067-ebddadbd58e4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:29:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:31.272435578Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 89b8d13a5199555c5a8195f712b44e3fd765d92d607920e8afe4d902aeeec630" id=d6d5801f-2b64-4c8c-8067-ebddadbd58e4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:29:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:31.272449701Z" level=info msg="runSandbox: unmounting shmPath for sandbox 89b8d13a5199555c5a8195f712b44e3fd765d92d607920e8afe4d902aeeec630" id=d6d5801f-2b64-4c8c-8067-ebddadbd58e4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:29:31 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-89b8d13a5199555c5a8195f712b44e3fd765d92d607920e8afe4d902aeeec630-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:29:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:31.279306012Z" level=info msg="runSandbox: removing pod sandbox from storage: 89b8d13a5199555c5a8195f712b44e3fd765d92d607920e8afe4d902aeeec630" id=d6d5801f-2b64-4c8c-8067-ebddadbd58e4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:29:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:31.280945164Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=d6d5801f-2b64-4c8c-8067-ebddadbd58e4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:29:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:31.280979211Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=d6d5801f-2b64-4c8c-8067-ebddadbd58e4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:29:31 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:29:31.281197 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(89b8d13a5199555c5a8195f712b44e3fd765d92d607920e8afe4d902aeeec630): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:29:31 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:29:31.281295 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(89b8d13a5199555c5a8195f712b44e3fd765d92d607920e8afe4d902aeeec630): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:29:31 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:29:31.281324 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(89b8d13a5199555c5a8195f712b44e3fd765d92d607920e8afe4d902aeeec630): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:29:31 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:29:31.281398 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(89b8d13a5199555c5a8195f712b44e3fd765d92d607920e8afe4d902aeeec630): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 19:29:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:29:34.872994 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:29:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:29:34.873059 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:29:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:29:34.873088 2199 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 19:29:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:29:34.873654 2199 kuberuntime_manager.go:659] "Message for Container of pod" containerName="csi-driver" containerStatusID={Type:cri-o ID:98f90f5c8a2351bc19612c9c53076f7aff4dd02c3287d23927b0f52af8cae9ba} pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" containerMessage="Container csi-driver failed liveness probe, will be restarted" Feb 23 19:29:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:29:34.873814 2199 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" containerID="cri-o://98f90f5c8a2351bc19612c9c53076f7aff4dd02c3287d23927b0f52af8cae9ba" gracePeriod=30 Feb 23 19:29:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 
19:29:34.874063120Z" level=info msg="Stopping container: 98f90f5c8a2351bc19612c9c53076f7aff4dd02c3287d23927b0f52af8cae9ba (timeout: 30s)" id=58587a57-0749-430a-80db-cea8d03e8e71 name=/runtime.v1.RuntimeService/StopContainer Feb 23 19:29:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:38.636105117Z" level=warning msg="Failed to find container exit file for 98f90f5c8a2351bc19612c9c53076f7aff4dd02c3287d23927b0f52af8cae9ba: timed out waiting for the condition" id=58587a57-0749-430a-80db-cea8d03e8e71 name=/runtime.v1.RuntimeService/StopContainer Feb 23 19:29:38 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-a3271c8876fc3d2b7b02b6da1763ffddbee7d8edadefab8a23bed0391bee1da3-merged.mount: Deactivated successfully. Feb 23 19:29:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:42.416898173Z" level=warning msg="Failed to find container exit file for 98f90f5c8a2351bc19612c9c53076f7aff4dd02c3287d23927b0f52af8cae9ba: timed out waiting for the condition" id=58587a57-0749-430a-80db-cea8d03e8e71 name=/runtime.v1.RuntimeService/StopContainer Feb 23 19:29:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:42.418846558Z" level=info msg="Stopped container 98f90f5c8a2351bc19612c9c53076f7aff4dd02c3287d23927b0f52af8cae9ba: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=58587a57-0749-430a-80db-cea8d03e8e71 name=/runtime.v1.RuntimeService/StopContainer Feb 23 19:29:42 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:29:42.419395 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:29:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:43.253977182Z" level=warning msg="Failed to find 
container exit file for 98f90f5c8a2351bc19612c9c53076f7aff4dd02c3287d23927b0f52af8cae9ba: timed out waiting for the condition" id=5dc5c564-b5c1-4eb2-b391-e054169f53ff name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:29:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:43.710712816Z" level=info msg="NetworkStart: stopping network for sandbox 35bbd581c17d52c49640c008ba16bf762dbc651212e9ead26fb1364b061ebf98" id=4376696c-d1de-4c47-8452-dd52b3b7e643 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:29:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:43.710836703Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:35bbd581c17d52c49640c008ba16bf762dbc651212e9ead26fb1364b061ebf98 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/7f1f25d2-e85e-49db-b4fc-b65e2ea58a5d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:29:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:43.710865158Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:29:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:43.710874918Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:29:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:43.710885308Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:29:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:29:44.217282 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:29:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:44.217691324Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=acfcbcb7-bef8-4c3b-9e0f-075e2c7a8e56 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:29:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:44.217744492Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:29:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:44.223389095Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:e1563f6494c79a4c43647e4f7af88e9e65a548715e90b0e0dda65bdbd6462361 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/ac9239a8-ea07-4faf-8b7a-4a76dec15e12 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:29:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:44.223414988Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:29:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:46.244799646Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(3d90a96c3776d6d7653c1078bdb08a3d44ea915af83d26818608509e6fac8d6e): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=62ac22e2-aab9-43e7-9f4c-c44eec41f5e8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:29:46 ip-10-0-136-68 
crio[2158]: time="2023-02-23 19:29:46.244848720Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 3d90a96c3776d6d7653c1078bdb08a3d44ea915af83d26818608509e6fac8d6e" id=62ac22e2-aab9-43e7-9f4c-c44eec41f5e8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:29:46 ip-10-0-136-68 systemd[1]: run-utsns-cb23f9b4\x2d80a3\x2d4924\x2d94ec\x2dc2e4a130616e.mount: Deactivated successfully. Feb 23 19:29:46 ip-10-0-136-68 systemd[1]: run-ipcns-cb23f9b4\x2d80a3\x2d4924\x2d94ec\x2dc2e4a130616e.mount: Deactivated successfully. Feb 23 19:29:46 ip-10-0-136-68 systemd[1]: run-netns-cb23f9b4\x2d80a3\x2d4924\x2d94ec\x2dc2e4a130616e.mount: Deactivated successfully. Feb 23 19:29:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:46.266316975Z" level=info msg="runSandbox: deleting pod ID 3d90a96c3776d6d7653c1078bdb08a3d44ea915af83d26818608509e6fac8d6e from idIndex" id=62ac22e2-aab9-43e7-9f4c-c44eec41f5e8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:29:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:46.266351873Z" level=info msg="runSandbox: removing pod sandbox 3d90a96c3776d6d7653c1078bdb08a3d44ea915af83d26818608509e6fac8d6e" id=62ac22e2-aab9-43e7-9f4c-c44eec41f5e8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:29:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:46.266381217Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 3d90a96c3776d6d7653c1078bdb08a3d44ea915af83d26818608509e6fac8d6e" id=62ac22e2-aab9-43e7-9f4c-c44eec41f5e8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:29:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:46.266393803Z" level=info msg="runSandbox: unmounting shmPath for sandbox 3d90a96c3776d6d7653c1078bdb08a3d44ea915af83d26818608509e6fac8d6e" id=62ac22e2-aab9-43e7-9f4c-c44eec41f5e8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:29:46 ip-10-0-136-68 systemd[1]: 
run-containers-storage-overlay\x2dcontainers-3d90a96c3776d6d7653c1078bdb08a3d44ea915af83d26818608509e6fac8d6e-userdata-shm.mount: Deactivated successfully. Feb 23 19:29:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:46.285323518Z" level=info msg="runSandbox: removing pod sandbox from storage: 3d90a96c3776d6d7653c1078bdb08a3d44ea915af83d26818608509e6fac8d6e" id=62ac22e2-aab9-43e7-9f4c-c44eec41f5e8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:29:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:46.287692483Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=62ac22e2-aab9-43e7-9f4c-c44eec41f5e8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:29:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:46.287741510Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=62ac22e2-aab9-43e7-9f4c-c44eec41f5e8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:29:46 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:29:46.287933 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(3d90a96c3776d6d7653c1078bdb08a3d44ea915af83d26818608509e6fac8d6e): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:29:46 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:29:46.288000 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(3d90a96c3776d6d7653c1078bdb08a3d44ea915af83d26818608509e6fac8d6e): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:29:46 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:29:46.288042 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(3d90a96c3776d6d7653c1078bdb08a3d44ea915af83d26818608509e6fac8d6e): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:29:46 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:29:46.288113 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(3d90a96c3776d6d7653c1078bdb08a3d44ea915af83d26818608509e6fac8d6e): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 19:29:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:47.002973814Z" level=warning msg="Failed to find container exit file for a1f9b41ad75b0c66c06022c4e84849eb717e7842b78a89bd824512db3a8f4330: timed out waiting for the condition" id=cae5a5b6-ce42-47f4-8ab8-2b1ef1655e76 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:29:47 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:29:47.003921 2199 generic.go:332] "Generic (PLEG): container finished" podID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerID="98f90f5c8a2351bc19612c9c53076f7aff4dd02c3287d23927b0f52af8cae9ba" exitCode=-1 Feb 23 19:29:47 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:29:47.003964 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerDied Data:98f90f5c8a2351bc19612c9c53076f7aff4dd02c3287d23927b0f52af8cae9ba} Feb 23 19:29:47 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:29:47.003999 2199 scope.go:115] "RemoveContainer" containerID="a1f9b41ad75b0c66c06022c4e84849eb717e7842b78a89bd824512db3a8f4330" Feb 23 19:29:48 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:29:48.006049 2199 scope.go:115] "RemoveContainer" containerID="98f90f5c8a2351bc19612c9c53076f7aff4dd02c3287d23927b0f52af8cae9ba" Feb 23 19:29:48 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:29:48.006471 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:29:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 
19:29:50.751984642Z" level=warning msg="Failed to find container exit file for a1f9b41ad75b0c66c06022c4e84849eb717e7842b78a89bd824512db3a8f4330: timed out waiting for the condition" id=60a77ea6-3ea8-46bc-80ec-f4fa8a31d4a5 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:29:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:54.245058592Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(2c7e0ce2b6984820bd50c91cc15700635834ea847b4be10afd05045dd2fc59f2): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e7b171f0-e460-4b0e-b116-379cb6b84baf name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:29:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:54.245107266Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 2c7e0ce2b6984820bd50c91cc15700635834ea847b4be10afd05045dd2fc59f2" id=e7b171f0-e460-4b0e-b116-379cb6b84baf name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:29:54 ip-10-0-136-68 systemd[1]: run-utsns-cbd25aae\x2de54d\x2d403d\x2d838c\x2d23225952370f.mount: Deactivated successfully. Feb 23 19:29:54 ip-10-0-136-68 systemd[1]: run-ipcns-cbd25aae\x2de54d\x2d403d\x2d838c\x2d23225952370f.mount: Deactivated successfully. Feb 23 19:29:54 ip-10-0-136-68 systemd[1]: run-netns-cbd25aae\x2de54d\x2d403d\x2d838c\x2d23225952370f.mount: Deactivated successfully. 
Feb 23 19:29:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:54.277323462Z" level=info msg="runSandbox: deleting pod ID 2c7e0ce2b6984820bd50c91cc15700635834ea847b4be10afd05045dd2fc59f2 from idIndex" id=e7b171f0-e460-4b0e-b116-379cb6b84baf name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:29:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:54.277351311Z" level=info msg="runSandbox: removing pod sandbox 2c7e0ce2b6984820bd50c91cc15700635834ea847b4be10afd05045dd2fc59f2" id=e7b171f0-e460-4b0e-b116-379cb6b84baf name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:29:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:54.277377430Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 2c7e0ce2b6984820bd50c91cc15700635834ea847b4be10afd05045dd2fc59f2" id=e7b171f0-e460-4b0e-b116-379cb6b84baf name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:29:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:54.277397346Z" level=info msg="runSandbox: unmounting shmPath for sandbox 2c7e0ce2b6984820bd50c91cc15700635834ea847b4be10afd05045dd2fc59f2" id=e7b171f0-e460-4b0e-b116-379cb6b84baf name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:29:54 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-2c7e0ce2b6984820bd50c91cc15700635834ea847b4be10afd05045dd2fc59f2-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:29:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:54.284312421Z" level=info msg="runSandbox: removing pod sandbox from storage: 2c7e0ce2b6984820bd50c91cc15700635834ea847b4be10afd05045dd2fc59f2" id=e7b171f0-e460-4b0e-b116-379cb6b84baf name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:29:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:54.285796485Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=e7b171f0-e460-4b0e-b116-379cb6b84baf name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:29:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:54.285823110Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=e7b171f0-e460-4b0e-b116-379cb6b84baf name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:29:54 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:29:54.286020 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(2c7e0ce2b6984820bd50c91cc15700635834ea847b4be10afd05045dd2fc59f2): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:29:54 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:29:54.286070 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(2c7e0ce2b6984820bd50c91cc15700635834ea847b4be10afd05045dd2fc59f2): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:29:54 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:29:54.286100 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(2c7e0ce2b6984820bd50c91cc15700635834ea847b4be10afd05045dd2fc59f2): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:29:54 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:29:54.286152 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(2c7e0ce2b6984820bd50c91cc15700635834ea847b4be10afd05045dd2fc59f2): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 19:29:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:54.500984597Z" level=warning msg="Failed to find container exit file for a1f9b41ad75b0c66c06022c4e84849eb717e7842b78a89bd824512db3a8f4330: timed out waiting for the condition" id=a3981d06-0e8c-4b18-8ef3-1e15432e4d88 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:29:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:54.501526181Z" level=info msg="Removing container: a1f9b41ad75b0c66c06022c4e84849eb717e7842b78a89bd824512db3a8f4330" id=99a790f1-31df-409d-a248-85161be5e7d5 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 19:29:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:29:56.292650 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:29:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:29:56.292912 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:29:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:29:56.293148 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = 
container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:29:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:29:56.293176 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:29:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:58.262010911Z" level=warning msg="Failed to find container exit file for a1f9b41ad75b0c66c06022c4e84849eb717e7842b78a89bd824512db3a8f4330: timed out waiting for the condition" id=99a790f1-31df-409d-a248-85161be5e7d5 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 19:29:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:58.286210888Z" level=info msg="Removed container a1f9b41ad75b0c66c06022c4e84849eb717e7842b78a89bd824512db3a8f4330: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=99a790f1-31df-409d-a248-85161be5e7d5 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 19:29:59 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:29:59.216700 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:29:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:59.217103440Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=3ae341ad-2138-4de9-997b-33654aa39f6c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:29:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:59.217153616Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:29:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:59.222572871Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:f1a914ac037eb97ce37cac860ccb7ea4dbb11c61c0244774df1a03006dcea241 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/a19de8a7-3f97-439d-ae80-5861252763e7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:29:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:59.222609326Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:29:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:59.247889761Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(12fca5f1c26b85885eefe53cb51e19d9e230b3339432b548f77a384e4ab04bb3): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=0df6e076-7990-4d09-99a3-3b043f3d1166 
name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:29:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:59.247947519Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 12fca5f1c26b85885eefe53cb51e19d9e230b3339432b548f77a384e4ab04bb3" id=0df6e076-7990-4d09-99a3-3b043f3d1166 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:29:59 ip-10-0-136-68 systemd[1]: run-utsns-ba37bef1\x2d7822\x2d4f72\x2d9b12\x2d724846fff582.mount: Deactivated successfully. Feb 23 19:29:59 ip-10-0-136-68 systemd[1]: run-ipcns-ba37bef1\x2d7822\x2d4f72\x2d9b12\x2d724846fff582.mount: Deactivated successfully. Feb 23 19:29:59 ip-10-0-136-68 systemd[1]: run-netns-ba37bef1\x2d7822\x2d4f72\x2d9b12\x2d724846fff582.mount: Deactivated successfully. Feb 23 19:29:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:59.276343460Z" level=info msg="runSandbox: deleting pod ID 12fca5f1c26b85885eefe53cb51e19d9e230b3339432b548f77a384e4ab04bb3 from idIndex" id=0df6e076-7990-4d09-99a3-3b043f3d1166 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:29:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:59.276380946Z" level=info msg="runSandbox: removing pod sandbox 12fca5f1c26b85885eefe53cb51e19d9e230b3339432b548f77a384e4ab04bb3" id=0df6e076-7990-4d09-99a3-3b043f3d1166 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:29:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:59.276427757Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 12fca5f1c26b85885eefe53cb51e19d9e230b3339432b548f77a384e4ab04bb3" id=0df6e076-7990-4d09-99a3-3b043f3d1166 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:29:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:59.276442394Z" level=info msg="runSandbox: unmounting shmPath for sandbox 12fca5f1c26b85885eefe53cb51e19d9e230b3339432b548f77a384e4ab04bb3" id=0df6e076-7990-4d09-99a3-3b043f3d1166 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:29:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 
19:29:59.282313854Z" level=info msg="runSandbox: removing pod sandbox from storage: 12fca5f1c26b85885eefe53cb51e19d9e230b3339432b548f77a384e4ab04bb3" id=0df6e076-7990-4d09-99a3-3b043f3d1166 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:29:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:59.283835138Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=0df6e076-7990-4d09-99a3-3b043f3d1166 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:29:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:29:59.283864396Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=0df6e076-7990-4d09-99a3-3b043f3d1166 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:29:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:29:59.284063 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(12fca5f1c26b85885eefe53cb51e19d9e230b3339432b548f77a384e4ab04bb3): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:29:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:29:59.284117 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(12fca5f1c26b85885eefe53cb51e19d9e230b3339432b548f77a384e4ab04bb3): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:29:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:29:59.284147 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(12fca5f1c26b85885eefe53cb51e19d9e230b3339432b548f77a384e4ab04bb3): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:29:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:29:59.284201 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(12fca5f1c26b85885eefe53cb51e19d9e230b3339432b548f77a384e4ab04bb3): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 19:30:00 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-12fca5f1c26b85885eefe53cb51e19d9e230b3339432b548f77a384e4ab04bb3-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:30:02 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:30:02.217461 2199 scope.go:115] "RemoveContainer" containerID="98f90f5c8a2351bc19612c9c53076f7aff4dd02c3287d23927b0f52af8cae9ba" Feb 23 19:30:02 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:30:02.217888 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:30:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:30:02.774156671Z" level=warning msg="Failed to find container exit file for 98f90f5c8a2351bc19612c9c53076f7aff4dd02c3287d23927b0f52af8cae9ba: timed out waiting for the condition" id=1989b669-f14a-4697-957b-dcb6a451b243 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:30:06 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:30:06.217176 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 19:30:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:30:06.217695315Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=803001bc-8187-4ec7-9e23-9e37349f75c7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:30:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:30:06.217757690Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:30:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:30:06.224170892Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:6feeafb97b30e227c26f096601b78fdb465bd958516a6cca26806a8d8c031381 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/7d4d0874-2bf4-4337-8f92-52c4a0997804 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:30:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:30:06.224206866Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:30:10 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:30:10.217427 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:30:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:30:10.217885935Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=2d44cd2b-021c-482d-9e29-0d35988126d8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:30:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:30:10.217971537Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:30:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:30:10.223608572Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:3ed0d241329366f7b1311a035c83cf9f6f8b9768fba914a00fd27225f04a5f78 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/9895c4f8-fea2-411d-bf7e-9ad8d2c0392d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:30:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:30:10.223633229Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:30:12 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:30:12.217537 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:30:12 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:30:12.217898 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not 
created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:30:12 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:30:12.218228 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:30:12 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:30:12.218293 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:30:17 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:30:17.216543 2199 scope.go:115] "RemoveContainer" containerID="98f90f5c8a2351bc19612c9c53076f7aff4dd02c3287d23927b0f52af8cae9ba" Feb 23 19:30:17 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:30:17.217074 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" 
podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:30:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:30:20.230474833Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3" id=7d8756e7-af2e-40d5-a4e7-ad0a19a13390 name=/runtime.v1.ImageService/ImageStatus Feb 23 19:30:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:30:20.230668663Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:51cb7555d0a960fc0daae74a5b5c47c19fd1ae7bd31e2616ebc88f696f511e4d,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3],Size_:351828271,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=7d8756e7-af2e-40d5-a4e7-ad0a19a13390 name=/runtime.v1.ImageService/ImageStatus Feb 23 19:30:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:30:26.292594 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:30:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:30:26.292899 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test 
-f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:30:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:30:26.293152 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:30:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:30:26.293183 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:30:28 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:30:28.217594 2199 scope.go:115] "RemoveContainer" containerID="98f90f5c8a2351bc19612c9c53076f7aff4dd02c3287d23927b0f52af8cae9ba" Feb 23 19:30:28 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:30:28.218216 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:30:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:30:28.721053826Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox 
k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(35bbd581c17d52c49640c008ba16bf762dbc651212e9ead26fb1364b061ebf98): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=4376696c-d1de-4c47-8452-dd52b3b7e643 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:30:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:30:28.721101698Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 35bbd581c17d52c49640c008ba16bf762dbc651212e9ead26fb1364b061ebf98" id=4376696c-d1de-4c47-8452-dd52b3b7e643 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:30:28 ip-10-0-136-68 systemd[1]: run-utsns-7f1f25d2\x2de85e\x2d49db\x2db4fc\x2db65e2ea58a5d.mount: Deactivated successfully. Feb 23 19:30:28 ip-10-0-136-68 systemd[1]: run-ipcns-7f1f25d2\x2de85e\x2d49db\x2db4fc\x2db65e2ea58a5d.mount: Deactivated successfully. Feb 23 19:30:28 ip-10-0-136-68 systemd[1]: run-netns-7f1f25d2\x2de85e\x2d49db\x2db4fc\x2db65e2ea58a5d.mount: Deactivated successfully. 
Feb 23 19:30:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:30:28.736329875Z" level=info msg="runSandbox: deleting pod ID 35bbd581c17d52c49640c008ba16bf762dbc651212e9ead26fb1364b061ebf98 from idIndex" id=4376696c-d1de-4c47-8452-dd52b3b7e643 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:30:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:30:28.736368954Z" level=info msg="runSandbox: removing pod sandbox 35bbd581c17d52c49640c008ba16bf762dbc651212e9ead26fb1364b061ebf98" id=4376696c-d1de-4c47-8452-dd52b3b7e643 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:30:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:30:28.736398435Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 35bbd581c17d52c49640c008ba16bf762dbc651212e9ead26fb1364b061ebf98" id=4376696c-d1de-4c47-8452-dd52b3b7e643 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:30:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:30:28.736411385Z" level=info msg="runSandbox: unmounting shmPath for sandbox 35bbd581c17d52c49640c008ba16bf762dbc651212e9ead26fb1364b061ebf98" id=4376696c-d1de-4c47-8452-dd52b3b7e643 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:30:28 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-35bbd581c17d52c49640c008ba16bf762dbc651212e9ead26fb1364b061ebf98-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:30:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:30:28.742302778Z" level=info msg="runSandbox: removing pod sandbox from storage: 35bbd581c17d52c49640c008ba16bf762dbc651212e9ead26fb1364b061ebf98" id=4376696c-d1de-4c47-8452-dd52b3b7e643 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:30:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:30:28.743870998Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=4376696c-d1de-4c47-8452-dd52b3b7e643 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:30:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:30:28.743906832Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=4376696c-d1de-4c47-8452-dd52b3b7e643 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:30:28 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:30:28.744138 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(35bbd581c17d52c49640c008ba16bf762dbc651212e9ead26fb1364b061ebf98): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:30:28 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:30:28.744203 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(35bbd581c17d52c49640c008ba16bf762dbc651212e9ead26fb1364b061ebf98): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:30:28 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:30:28.744259 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(35bbd581c17d52c49640c008ba16bf762dbc651212e9ead26fb1364b061ebf98): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 19:30:28 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:30:28.744346 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(35bbd581c17d52c49640c008ba16bf762dbc651212e9ead26fb1364b061ebf98): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1
Feb 23 19:30:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:30:29.236691443Z" level=info msg="NetworkStart: stopping network for sandbox e1563f6494c79a4c43647e4f7af88e9e65a548715e90b0e0dda65bdbd6462361" id=acfcbcb7-bef8-4c3b-9e0f-075e2c7a8e56 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:30:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:30:29.236821545Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:e1563f6494c79a4c43647e4f7af88e9e65a548715e90b0e0dda65bdbd6462361 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/ac9239a8-ea07-4faf-8b7a-4a76dec15e12 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:30:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:30:29.236848326Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 19:30:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:30:29.236856164Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 19:30:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:30:29.236863302Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:30:40 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:30:40.217224 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 19:30:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:30:40.217668726Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=61c40e14-ea10-45f9-96a4-c7ed14c4196d name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:30:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:30:40.217732563Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 19:30:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:30:40.226137448Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:8af19f146359f9b282d6f67cc1e5da4e1e35e857e685430ae433f81f7a47e742 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/654a735a-f9a3-464b-a9c6-f141d03673f9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:30:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:30:40.226172424Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:30:42 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:30:42.217423 2199 scope.go:115] "RemoveContainer" containerID="98f90f5c8a2351bc19612c9c53076f7aff4dd02c3287d23927b0f52af8cae9ba"
Feb 23 19:30:42 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:30:42.218001 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 19:30:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:30:44.234322267Z" level=info msg="NetworkStart: stopping network for sandbox f1a914ac037eb97ce37cac860ccb7ea4dbb11c61c0244774df1a03006dcea241" id=3ae341ad-2138-4de9-997b-33654aa39f6c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:30:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:30:44.234498271Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:f1a914ac037eb97ce37cac860ccb7ea4dbb11c61c0244774df1a03006dcea241 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/a19de8a7-3f97-439d-ae80-5861252763e7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:30:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:30:44.234526521Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 19:30:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:30:44.234537131Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 19:30:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:30:44.234546265Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:30:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:30:51.235479470Z" level=info msg="NetworkStart: stopping network for sandbox 6feeafb97b30e227c26f096601b78fdb465bd958516a6cca26806a8d8c031381" id=803001bc-8187-4ec7-9e23-9e37349f75c7 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:30:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:30:51.235596816Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:6feeafb97b30e227c26f096601b78fdb465bd958516a6cca26806a8d8c031381 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/7d4d0874-2bf4-4337-8f92-52c4a0997804 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:30:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:30:51.235623969Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 19:30:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:30:51.235630991Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 19:30:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:30:51.235637876Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:30:53 ip-10-0-136-68 sshd[16859]: main: sshd: ssh-rsa algorithm is disabled
Feb 23 19:30:54 ip-10-0-136-68 sshd[16859]: Accepted publickey for core from 10.0.182.221 port 59462 ssh2: RSA SHA256:Ez+JFROVIkSQ/eAziisgy16VY49IFSr8A84gQk7WcPc
Feb 23 19:30:54 ip-10-0-136-68 systemd[1]: Created slice User Slice of UID 1000.
Feb 23 19:30:54 ip-10-0-136-68 systemd[1]: Starting User Runtime Directory /run/user/1000...
Feb 23 19:30:54 ip-10-0-136-68 systemd-logind[985]: New session 3 of user core.
Feb 23 19:30:54 ip-10-0-136-68 systemd[1]: Finished User Runtime Directory /run/user/1000.
Feb 23 19:30:54 ip-10-0-136-68 systemd[1]: Starting User Manager for UID 1000...
Feb 23 19:30:54 ip-10-0-136-68 systemd[16865]: pam_unix(systemd-user:session): session opened for user core(uid=1000) by (uid=0)
Feb 23 19:30:54 ip-10-0-136-68 systemd[16871]: /usr/lib/systemd/user-generators/podman-user-generator failed with exit status 1.
Feb 23 19:30:54 ip-10-0-136-68 systemd[16865]: Queued start job for default target Main User Target.
Feb 23 19:30:54 ip-10-0-136-68 systemd[16865]: Created slice User Application Slice.
Feb 23 19:30:54 ip-10-0-136-68 systemd[16865]: Started Daily Cleanup of User's Temporary Directories.
Feb 23 19:30:54 ip-10-0-136-68 systemd[16865]: Reached target Paths.
Feb 23 19:30:54 ip-10-0-136-68 systemd[16865]: Reached target Timers.
Feb 23 19:30:54 ip-10-0-136-68 systemd[16865]: Starting D-Bus User Message Bus Socket...
Feb 23 19:30:54 ip-10-0-136-68 systemd[16865]: Starting Create User's Volatile Files and Directories...
Feb 23 19:30:54 ip-10-0-136-68 systemd[16865]: Listening on D-Bus User Message Bus Socket.
Feb 23 19:30:54 ip-10-0-136-68 systemd[16865]: Reached target Sockets.
Feb 23 19:30:54 ip-10-0-136-68 systemd[16865]: Finished Create User's Volatile Files and Directories.
Feb 23 19:30:54 ip-10-0-136-68 systemd[16865]: Reached target Basic System.
Feb 23 19:30:54 ip-10-0-136-68 systemd[16865]: Reached target Main User Target.
Feb 23 19:30:54 ip-10-0-136-68 systemd[16865]: Startup finished in 104ms.
Feb 23 19:30:54 ip-10-0-136-68 systemd[1]: Started User Manager for UID 1000.
Feb 23 19:30:54 ip-10-0-136-68 systemd[1]: Started Session 3 of User core.
Feb 23 19:30:54 ip-10-0-136-68 sshd[16859]: pam_unix(sshd:session): session opened for user core(uid=1000) by (uid=0)
Feb 23 19:30:54 ip-10-0-136-68 sudo[16884]: core : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/bash
Feb 23 19:30:54 ip-10-0-136-68 sudo[16884]: pam_unix(sudo-i:session): session opened for user root(uid=0) by core(uid=1000)
Feb 23 19:30:54 ip-10-0-136-68 systemd[1]: Starting Hostname Service...
Feb 23 19:30:54 ip-10-0-136-68 systemd[1]: Started Hostname Service.
Feb 23 19:30:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:30:55.236204817Z" level=info msg="NetworkStart: stopping network for sandbox 3ed0d241329366f7b1311a035c83cf9f6f8b9768fba914a00fd27225f04a5f78" id=2d44cd2b-021c-482d-9e29-0d35988126d8 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:30:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:30:55.236748662Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:3ed0d241329366f7b1311a035c83cf9f6f8b9768fba914a00fd27225f04a5f78 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/9895c4f8-fea2-411d-bf7e-9ad8d2c0392d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:30:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:30:55.236783648Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 19:30:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:30:55.236794752Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 19:30:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:30:55.236802002Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:30:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:30:56.292699 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:30:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:30:56.293266 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:30:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:30:56.293523 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:30:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:30:56.293552 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 19:30:57 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:30:57.217091 2199 scope.go:115] "RemoveContainer" containerID="98f90f5c8a2351bc19612c9c53076f7aff4dd02c3287d23927b0f52af8cae9ba"
Feb 23 19:30:57 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:30:57.222017 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 19:31:05 ip-10-0-136-68 sudo[16884]: pam_unix(sudo-i:session): session closed for user root
Feb 23 19:31:05 ip-10-0-136-68 sshd[16883]: Received disconnect from 10.0.182.221 port 59462:11: disconnected by user
Feb 23 19:31:05 ip-10-0-136-68 sshd[16883]: Disconnected from user core 10.0.182.221 port 59462
Feb 23 19:31:05 ip-10-0-136-68 sshd[16859]: pam_unix(sshd:session): session closed for user core
Feb 23 19:31:05 ip-10-0-136-68 systemd-logind[985]: Session 3 logged out. Waiting for processes to exit.
Feb 23 19:31:05 ip-10-0-136-68 systemd[1]: session-3.scope: Deactivated successfully.
Feb 23 19:31:05 ip-10-0-136-68 systemd-logind[985]: Removed session 3.
Feb 23 19:31:12 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:31:12.217426 2199 scope.go:115] "RemoveContainer" containerID="98f90f5c8a2351bc19612c9c53076f7aff4dd02c3287d23927b0f52af8cae9ba"
Feb 23 19:31:12 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:31:12.218055 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 19:31:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:14.246477664Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(e1563f6494c79a4c43647e4f7af88e9e65a548715e90b0e0dda65bdbd6462361): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=acfcbcb7-bef8-4c3b-9e0f-075e2c7a8e56 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:31:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:14.246689530Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e1563f6494c79a4c43647e4f7af88e9e65a548715e90b0e0dda65bdbd6462361" id=acfcbcb7-bef8-4c3b-9e0f-075e2c7a8e56 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:31:14 ip-10-0-136-68 systemd[1]: run-utsns-ac9239a8\x2dea07\x2d4faf\x2d8b7a\x2d4a76dec15e12.mount: Deactivated successfully.
Feb 23 19:31:14 ip-10-0-136-68 systemd[1]: run-ipcns-ac9239a8\x2dea07\x2d4faf\x2d8b7a\x2d4a76dec15e12.mount: Deactivated successfully.
Feb 23 19:31:14 ip-10-0-136-68 systemd[1]: run-netns-ac9239a8\x2dea07\x2d4faf\x2d8b7a\x2d4a76dec15e12.mount: Deactivated successfully.
Feb 23 19:31:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:14.264370613Z" level=info msg="runSandbox: deleting pod ID e1563f6494c79a4c43647e4f7af88e9e65a548715e90b0e0dda65bdbd6462361 from idIndex" id=acfcbcb7-bef8-4c3b-9e0f-075e2c7a8e56 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:31:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:14.264437859Z" level=info msg="runSandbox: removing pod sandbox e1563f6494c79a4c43647e4f7af88e9e65a548715e90b0e0dda65bdbd6462361" id=acfcbcb7-bef8-4c3b-9e0f-075e2c7a8e56 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:31:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:14.264469934Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e1563f6494c79a4c43647e4f7af88e9e65a548715e90b0e0dda65bdbd6462361" id=acfcbcb7-bef8-4c3b-9e0f-075e2c7a8e56 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:31:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:14.264487778Z" level=info msg="runSandbox: unmounting shmPath for sandbox e1563f6494c79a4c43647e4f7af88e9e65a548715e90b0e0dda65bdbd6462361" id=acfcbcb7-bef8-4c3b-9e0f-075e2c7a8e56 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:31:14 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-e1563f6494c79a4c43647e4f7af88e9e65a548715e90b0e0dda65bdbd6462361-userdata-shm.mount: Deactivated successfully.
Feb 23 19:31:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:14.270334793Z" level=info msg="runSandbox: removing pod sandbox from storage: e1563f6494c79a4c43647e4f7af88e9e65a548715e90b0e0dda65bdbd6462361" id=acfcbcb7-bef8-4c3b-9e0f-075e2c7a8e56 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:31:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:14.272329432Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=acfcbcb7-bef8-4c3b-9e0f-075e2c7a8e56 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:31:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:14.272359763Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=acfcbcb7-bef8-4c3b-9e0f-075e2c7a8e56 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:31:14 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:31:14.272583 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(e1563f6494c79a4c43647e4f7af88e9e65a548715e90b0e0dda65bdbd6462361): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 19:31:14 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:31:14.272641 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(e1563f6494c79a4c43647e4f7af88e9e65a548715e90b0e0dda65bdbd6462361): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk"
Feb 23 19:31:14 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:31:14.272669 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(e1563f6494c79a4c43647e4f7af88e9e65a548715e90b0e0dda65bdbd6462361): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk"
Feb 23 19:31:14 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:31:14.272732 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(e1563f6494c79a4c43647e4f7af88e9e65a548715e90b0e0dda65bdbd6462361): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687
Feb 23 19:31:15 ip-10-0-136-68 systemd[1]: Stopping User Manager for UID 1000...
Feb 23 19:31:15 ip-10-0-136-68 systemd[16865]: Activating special unit Exit the Session...
Feb 23 19:31:15 ip-10-0-136-68 systemd[16865]: Stopped target Main User Target.
Feb 23 19:31:15 ip-10-0-136-68 systemd[16865]: Stopped target Basic System.
Feb 23 19:31:15 ip-10-0-136-68 systemd[16865]: Stopped target Paths.
Feb 23 19:31:15 ip-10-0-136-68 systemd[16865]: Stopped target Sockets.
Feb 23 19:31:15 ip-10-0-136-68 systemd[16865]: Stopped target Timers.
Feb 23 19:31:15 ip-10-0-136-68 systemd[16865]: Stopped Daily Cleanup of User's Temporary Directories.
Feb 23 19:31:15 ip-10-0-136-68 systemd[16865]: Closed D-Bus User Message Bus Socket.
Feb 23 19:31:15 ip-10-0-136-68 systemd[16865]: Stopped Create User's Volatile Files and Directories.
Feb 23 19:31:15 ip-10-0-136-68 systemd[16865]: Removed slice User Application Slice.
Feb 23 19:31:15 ip-10-0-136-68 systemd[16865]: Reached target Shutdown.
Feb 23 19:31:15 ip-10-0-136-68 systemd[16865]: Finished Exit the Session.
Feb 23 19:31:15 ip-10-0-136-68 systemd[16865]: Reached target Exit the Session.
Feb 23 19:31:15 ip-10-0-136-68 systemd[1]: user@1000.service: Deactivated successfully.
Feb 23 19:31:15 ip-10-0-136-68 systemd[1]: Stopped User Manager for UID 1000.
Feb 23 19:31:15 ip-10-0-136-68 systemd[1]: Stopping User Runtime Directory /run/user/1000...
Feb 23 19:31:15 ip-10-0-136-68 systemd[1]: run-user-1000.mount: Deactivated successfully.
Feb 23 19:31:15 ip-10-0-136-68 systemd[1]: user-runtime-dir@1000.service: Deactivated successfully.
Feb 23 19:31:15 ip-10-0-136-68 systemd[1]: Stopped User Runtime Directory /run/user/1000.
Feb 23 19:31:15 ip-10-0-136-68 systemd[1]: Removed slice User Slice of UID 1000.
Feb 23 19:31:17 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:31:17.217231 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:31:17 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:31:17.217746 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:31:17 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:31:17.218027 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:31:17 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:31:17.218062 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 19:31:23 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:31:23.216608 2199 scope.go:115] "RemoveContainer" containerID="98f90f5c8a2351bc19612c9c53076f7aff4dd02c3287d23927b0f52af8cae9ba"
Feb 23 19:31:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:31:23.217013 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 19:31:24 ip-10-0-136-68 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 23 19:31:25 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:31:25.216942 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk"
Feb 23 19:31:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:25.217380514Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=f18bea3f-50d3-4440-a4a4-62adcfb2d337 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:31:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:25.217634813Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 19:31:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:25.224619745Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:9df14047236af9c1292b984a3673cadd2d163967017ee9c0328bfb3ed64d7448 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/bdaec758-1429-4eb8-b0bd-7c7ac0ce1990 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:31:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:25.224650347Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:31:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:25.237810608Z" level=info msg="NetworkStart: stopping network for sandbox 8af19f146359f9b282d6f67cc1e5da4e1e35e857e685430ae433f81f7a47e742" id=61c40e14-ea10-45f9-96a4-c7ed14c4196d name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:31:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:25.237892852Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:8af19f146359f9b282d6f67cc1e5da4e1e35e857e685430ae433f81f7a47e742 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/654a735a-f9a3-464b-a9c6-f141d03673f9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:31:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:25.237917469Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 19:31:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:25.237924786Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 19:31:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:25.237931176Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:31:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:31:26.292606 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:31:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:31:26.292865 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:31:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:31:26.293092 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:31:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:31:26.293131 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 19:31:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:29.243898241Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(f1a914ac037eb97ce37cac860ccb7ea4dbb11c61c0244774df1a03006dcea241): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=3ae341ad-2138-4de9-997b-33654aa39f6c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:31:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:29.243947206Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f1a914ac037eb97ce37cac860ccb7ea4dbb11c61c0244774df1a03006dcea241" id=3ae341ad-2138-4de9-997b-33654aa39f6c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:31:29 ip-10-0-136-68 systemd[1]: run-utsns-a19de8a7\x2d3f97\x2d439d\x2dae80\x2d5861252763e7.mount: Deactivated successfully.
Feb 23 19:31:29 ip-10-0-136-68 systemd[1]: run-ipcns-a19de8a7\x2d3f97\x2d439d\x2dae80\x2d5861252763e7.mount: Deactivated successfully.
Feb 23 19:31:29 ip-10-0-136-68 systemd[1]: run-netns-a19de8a7\x2d3f97\x2d439d\x2dae80\x2d5861252763e7.mount: Deactivated successfully.
Feb 23 19:31:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:29.277331252Z" level=info msg="runSandbox: deleting pod ID f1a914ac037eb97ce37cac860ccb7ea4dbb11c61c0244774df1a03006dcea241 from idIndex" id=3ae341ad-2138-4de9-997b-33654aa39f6c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:31:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:29.277366437Z" level=info msg="runSandbox: removing pod sandbox f1a914ac037eb97ce37cac860ccb7ea4dbb11c61c0244774df1a03006dcea241" id=3ae341ad-2138-4de9-997b-33654aa39f6c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:31:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:29.277413011Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f1a914ac037eb97ce37cac860ccb7ea4dbb11c61c0244774df1a03006dcea241" id=3ae341ad-2138-4de9-997b-33654aa39f6c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:31:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:29.277430568Z" level=info msg="runSandbox: unmounting shmPath for sandbox f1a914ac037eb97ce37cac860ccb7ea4dbb11c61c0244774df1a03006dcea241" id=3ae341ad-2138-4de9-997b-33654aa39f6c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:31:29 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-f1a914ac037eb97ce37cac860ccb7ea4dbb11c61c0244774df1a03006dcea241-userdata-shm.mount: Deactivated successfully.
Feb 23 19:31:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:29.282309028Z" level=info msg="runSandbox: removing pod sandbox from storage: f1a914ac037eb97ce37cac860ccb7ea4dbb11c61c0244774df1a03006dcea241" id=3ae341ad-2138-4de9-997b-33654aa39f6c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:31:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:29.284123184Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=3ae341ad-2138-4de9-997b-33654aa39f6c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:31:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:29.284157625Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=3ae341ad-2138-4de9-997b-33654aa39f6c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:31:29 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:31:29.284667 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(f1a914ac037eb97ce37cac860ccb7ea4dbb11c61c0244774df1a03006dcea241): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:31:29 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:31:29.284718 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(f1a914ac037eb97ce37cac860ccb7ea4dbb11c61c0244774df1a03006dcea241): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:31:29 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:31:29.284748 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(f1a914ac037eb97ce37cac860ccb7ea4dbb11c61c0244774df1a03006dcea241): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:31:29 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:31:29.284806 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(f1a914ac037eb97ce37cac860ccb7ea4dbb11c61c0244774df1a03006dcea241): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 19:31:36 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:31:36.216887 2199 scope.go:115] "RemoveContainer" containerID="98f90f5c8a2351bc19612c9c53076f7aff4dd02c3287d23927b0f52af8cae9ba" Feb 23 19:31:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:31:36.217342 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:31:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:36.245973830Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(6feeafb97b30e227c26f096601b78fdb465bd958516a6cca26806a8d8c031381): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=803001bc-8187-4ec7-9e23-9e37349f75c7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:31:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:36.246021158Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 6feeafb97b30e227c26f096601b78fdb465bd958516a6cca26806a8d8c031381" id=803001bc-8187-4ec7-9e23-9e37349f75c7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:31:36 ip-10-0-136-68 systemd[1]: run-utsns-7d4d0874\x2d2bf4\x2d4337\x2d8f92\x2d52c4a0997804.mount: Deactivated 
successfully. Feb 23 19:31:36 ip-10-0-136-68 systemd[1]: run-ipcns-7d4d0874\x2d2bf4\x2d4337\x2d8f92\x2d52c4a0997804.mount: Deactivated successfully. Feb 23 19:31:36 ip-10-0-136-68 systemd[1]: run-netns-7d4d0874\x2d2bf4\x2d4337\x2d8f92\x2d52c4a0997804.mount: Deactivated successfully. Feb 23 19:31:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:36.280344471Z" level=info msg="runSandbox: deleting pod ID 6feeafb97b30e227c26f096601b78fdb465bd958516a6cca26806a8d8c031381 from idIndex" id=803001bc-8187-4ec7-9e23-9e37349f75c7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:31:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:36.280389895Z" level=info msg="runSandbox: removing pod sandbox 6feeafb97b30e227c26f096601b78fdb465bd958516a6cca26806a8d8c031381" id=803001bc-8187-4ec7-9e23-9e37349f75c7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:31:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:36.280438284Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 6feeafb97b30e227c26f096601b78fdb465bd958516a6cca26806a8d8c031381" id=803001bc-8187-4ec7-9e23-9e37349f75c7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:31:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:36.280452846Z" level=info msg="runSandbox: unmounting shmPath for sandbox 6feeafb97b30e227c26f096601b78fdb465bd958516a6cca26806a8d8c031381" id=803001bc-8187-4ec7-9e23-9e37349f75c7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:31:36 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-6feeafb97b30e227c26f096601b78fdb465bd958516a6cca26806a8d8c031381-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:31:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:36.285309352Z" level=info msg="runSandbox: removing pod sandbox from storage: 6feeafb97b30e227c26f096601b78fdb465bd958516a6cca26806a8d8c031381" id=803001bc-8187-4ec7-9e23-9e37349f75c7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:31:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:36.286963850Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=803001bc-8187-4ec7-9e23-9e37349f75c7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:31:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:36.286993958Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=803001bc-8187-4ec7-9e23-9e37349f75c7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:31:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:31:36.287205 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(6feeafb97b30e227c26f096601b78fdb465bd958516a6cca26806a8d8c031381): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:31:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:31:36.287292 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(6feeafb97b30e227c26f096601b78fdb465bd958516a6cca26806a8d8c031381): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:31:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:31:36.287319 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(6feeafb97b30e227c26f096601b78fdb465bd958516a6cca26806a8d8c031381): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:31:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:31:36.287377 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(6feeafb97b30e227c26f096601b78fdb465bd958516a6cca26806a8d8c031381): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 19:31:39 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:31:39.004924 2199 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-daemon-2fx68_ff7777c7-a1dc-413e-8da1-c4ba07527037/machine-config-daemon/1.log" Feb 23 19:31:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:40.247504376Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(3ed0d241329366f7b1311a035c83cf9f6f8b9768fba914a00fd27225f04a5f78): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=2d44cd2b-021c-482d-9e29-0d35988126d8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:31:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:40.247552399Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 3ed0d241329366f7b1311a035c83cf9f6f8b9768fba914a00fd27225f04a5f78" id=2d44cd2b-021c-482d-9e29-0d35988126d8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:31:40 ip-10-0-136-68 systemd[1]: run-utsns-9895c4f8\x2dfea2\x2d411d\x2dbf7e\x2d9ad8d2c0392d.mount: Deactivated successfully. Feb 23 19:31:40 ip-10-0-136-68 systemd[1]: run-ipcns-9895c4f8\x2dfea2\x2d411d\x2dbf7e\x2d9ad8d2c0392d.mount: Deactivated successfully. Feb 23 19:31:40 ip-10-0-136-68 systemd[1]: run-netns-9895c4f8\x2dfea2\x2d411d\x2dbf7e\x2d9ad8d2c0392d.mount: Deactivated successfully. 
Feb 23 19:31:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:40.273343094Z" level=info msg="runSandbox: deleting pod ID 3ed0d241329366f7b1311a035c83cf9f6f8b9768fba914a00fd27225f04a5f78 from idIndex" id=2d44cd2b-021c-482d-9e29-0d35988126d8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:31:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:40.273389283Z" level=info msg="runSandbox: removing pod sandbox 3ed0d241329366f7b1311a035c83cf9f6f8b9768fba914a00fd27225f04a5f78" id=2d44cd2b-021c-482d-9e29-0d35988126d8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:31:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:40.273433640Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 3ed0d241329366f7b1311a035c83cf9f6f8b9768fba914a00fd27225f04a5f78" id=2d44cd2b-021c-482d-9e29-0d35988126d8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:31:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:40.273450223Z" level=info msg="runSandbox: unmounting shmPath for sandbox 3ed0d241329366f7b1311a035c83cf9f6f8b9768fba914a00fd27225f04a5f78" id=2d44cd2b-021c-482d-9e29-0d35988126d8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:31:40 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-3ed0d241329366f7b1311a035c83cf9f6f8b9768fba914a00fd27225f04a5f78-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:31:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:40.281317097Z" level=info msg="runSandbox: removing pod sandbox from storage: 3ed0d241329366f7b1311a035c83cf9f6f8b9768fba914a00fd27225f04a5f78" id=2d44cd2b-021c-482d-9e29-0d35988126d8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:31:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:40.282876304Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=2d44cd2b-021c-482d-9e29-0d35988126d8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:31:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:40.282907101Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=2d44cd2b-021c-482d-9e29-0d35988126d8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:31:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:31:40.283132 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(3ed0d241329366f7b1311a035c83cf9f6f8b9768fba914a00fd27225f04a5f78): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:31:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:31:40.283204 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(3ed0d241329366f7b1311a035c83cf9f6f8b9768fba914a00fd27225f04a5f78): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:31:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:31:40.283266 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(3ed0d241329366f7b1311a035c83cf9f6f8b9768fba914a00fd27225f04a5f78): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:31:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:31:40.283367 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(3ed0d241329366f7b1311a035c83cf9f6f8b9768fba914a00fd27225f04a5f78): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 19:31:43 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:31:43.217179 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:31:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:43.217605890Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=8f93136b-5fe3-4ab5-a2f6-c673c09fa82e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:31:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:43.217660848Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:31:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:43.223113811Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:e39942c5736cb5945e35ddf75431ac1a5839dd31b81687b3e407dade36e4ed9a UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/89f84ee2-00bc-4e7a-9469-322c8c7309df Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:31:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:43.223139926Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:31:48 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:31:48.216841 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 19:31:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:48.217301363Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=d0ae5142-83fd-41ff-bf13-1c05fb2a6990 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:31:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:48.217366449Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:31:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:48.223405165Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:2c33a3dfa07fa4800c7620c129552735540b54e510ad5e083d191006aaa346ac UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/e5933ec1-8899-4660-92b1-320b0a53b031 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:31:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:48.223441622Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:31:49 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:31:49.216768 2199 scope.go:115] "RemoveContainer" containerID="98f90f5c8a2351bc19612c9c53076f7aff4dd02c3287d23927b0f52af8cae9ba" Feb 23 19:31:49 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:31:49.217274 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:31:53 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:31:53.217111 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:31:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:53.217547601Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=c057459a-9510-46d1-9383-c584719a2392 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:31:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:53.217608871Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:31:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:53.223336051Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:9351881861b7d62f848eb20b6f58f2924c6d22c88a95373459a66aa32475fd3b UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/38ecfb5d-a390-4218-bc22-8447ab2b88e4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:31:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:31:53.223373739Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:31:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:31:56.292352 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:31:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:31:56.292652 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not 
created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:31:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:31:56.292906 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:31:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:31:56.292949 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:32:02 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:32:02.217268 2199 scope.go:115] "RemoveContainer" containerID="98f90f5c8a2351bc19612c9c53076f7aff4dd02c3287d23927b0f52af8cae9ba" Feb 23 19:32:02 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:32:02.217832 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" 
podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:32:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:32:10.236430723Z" level=info msg="NetworkStart: stopping network for sandbox 9df14047236af9c1292b984a3673cadd2d163967017ee9c0328bfb3ed64d7448" id=f18bea3f-50d3-4440-a4a4-62adcfb2d337 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:32:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:32:10.236545721Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:9df14047236af9c1292b984a3673cadd2d163967017ee9c0328bfb3ed64d7448 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/bdaec758-1429-4eb8-b0bd-7c7ac0ce1990 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:32:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:32:10.236583029Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:32:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:32:10.236596865Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:32:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:32:10.236606463Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:32:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:32:10.247071337Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(8af19f146359f9b282d6f67cc1e5da4e1e35e857e685430ae433f81f7a47e742): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate 
error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=61c40e14-ea10-45f9-96a4-c7ed14c4196d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:32:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:32:10.247115162Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 8af19f146359f9b282d6f67cc1e5da4e1e35e857e685430ae433f81f7a47e742" id=61c40e14-ea10-45f9-96a4-c7ed14c4196d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:32:10 ip-10-0-136-68 systemd[1]: run-utsns-654a735a\x2df9a3\x2d464b\x2da9c6\x2df141d03673f9.mount: Deactivated successfully. Feb 23 19:32:10 ip-10-0-136-68 systemd[1]: run-ipcns-654a735a\x2df9a3\x2d464b\x2da9c6\x2df141d03673f9.mount: Deactivated successfully. Feb 23 19:32:10 ip-10-0-136-68 systemd[1]: run-netns-654a735a\x2df9a3\x2d464b\x2da9c6\x2df141d03673f9.mount: Deactivated successfully. Feb 23 19:32:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:32:10.276326076Z" level=info msg="runSandbox: deleting pod ID 8af19f146359f9b282d6f67cc1e5da4e1e35e857e685430ae433f81f7a47e742 from idIndex" id=61c40e14-ea10-45f9-96a4-c7ed14c4196d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:32:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:32:10.276358778Z" level=info msg="runSandbox: removing pod sandbox 8af19f146359f9b282d6f67cc1e5da4e1e35e857e685430ae433f81f7a47e742" id=61c40e14-ea10-45f9-96a4-c7ed14c4196d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:32:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:32:10.276386838Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 8af19f146359f9b282d6f67cc1e5da4e1e35e857e685430ae433f81f7a47e742" id=61c40e14-ea10-45f9-96a4-c7ed14c4196d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:32:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:32:10.276400913Z" level=info msg="runSandbox: unmounting shmPath for sandbox 8af19f146359f9b282d6f67cc1e5da4e1e35e857e685430ae433f81f7a47e742" 
id=61c40e14-ea10-45f9-96a4-c7ed14c4196d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:32:10 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-8af19f146359f9b282d6f67cc1e5da4e1e35e857e685430ae433f81f7a47e742-userdata-shm.mount: Deactivated successfully. Feb 23 19:32:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:32:10.281295939Z" level=info msg="runSandbox: removing pod sandbox from storage: 8af19f146359f9b282d6f67cc1e5da4e1e35e857e685430ae433f81f7a47e742" id=61c40e14-ea10-45f9-96a4-c7ed14c4196d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:32:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:32:10.282916328Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=61c40e14-ea10-45f9-96a4-c7ed14c4196d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:32:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:32:10.282944101Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=61c40e14-ea10-45f9-96a4-c7ed14c4196d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:32:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:32:10.283145 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(8af19f146359f9b282d6f67cc1e5da4e1e35e857e685430ae433f81f7a47e742): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Feb 23 19:32:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:32:10.283200 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(8af19f146359f9b282d6f67cc1e5da4e1e35e857e685430ae433f81f7a47e742): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:32:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:32:10.283224 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(8af19f146359f9b282d6f67cc1e5da4e1e35e857e685430ae433f81f7a47e742): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:32:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:32:10.283306 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(8af19f146359f9b282d6f67cc1e5da4e1e35e857e685430ae433f81f7a47e742): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 19:32:13 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:32:13.216407 2199 scope.go:115] "RemoveContainer" containerID="98f90f5c8a2351bc19612c9c53076f7aff4dd02c3287d23927b0f52af8cae9ba" Feb 23 19:32:13 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:32:13.216970 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:32:22 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:32:22.217546 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:32:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:32:22.217993238Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=6b39710e-a08e-4649-b5d3-32fd46f4da27 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:32:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:32:22.218066183Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:32:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:32:22.223863600Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:5c3426b25e1db77daf4c0b8f0b66dd36fb2883676771fcc78f72258882513522 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/963c8eea-7d70-4556-b563-efa97d594183 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:32:22 ip-10-0-136-68 
crio[2158]: time="2023-02-23 19:32:22.223890344Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:32:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:32:26.217163 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:32:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:32:26.217680 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:32:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:32:26.217915 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:32:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:32:26.217956 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:32:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:32:26.291766 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:32:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:32:26.291972 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:32:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:32:26.292166 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:32:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:32:26.292193 2199 prober.go:106] 
"Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:32:27 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:32:27.217215 2199 scope.go:115] "RemoveContainer" containerID="98f90f5c8a2351bc19612c9c53076f7aff4dd02c3287d23927b0f52af8cae9ba" Feb 23 19:32:27 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:32:27.217770 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:32:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:32:28.234741902Z" level=info msg="NetworkStart: stopping network for sandbox e39942c5736cb5945e35ddf75431ac1a5839dd31b81687b3e407dade36e4ed9a" id=8f93136b-5fe3-4ab5-a2f6-c673c09fa82e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:32:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:32:28.234858722Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:e39942c5736cb5945e35ddf75431ac1a5839dd31b81687b3e407dade36e4ed9a UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/89f84ee2-00bc-4e7a-9469-322c8c7309df Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:32:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:32:28.234894814Z" level=error msg="error loading cached network config: network \"multus-cni-network\" 
not found in CNI cache" Feb 23 19:32:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:32:28.234906196Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:32:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:32:28.234916752Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:32:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:32:33.237009558Z" level=info msg="NetworkStart: stopping network for sandbox 2c33a3dfa07fa4800c7620c129552735540b54e510ad5e083d191006aaa346ac" id=d0ae5142-83fd-41ff-bf13-1c05fb2a6990 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:32:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:32:33.237141659Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:2c33a3dfa07fa4800c7620c129552735540b54e510ad5e083d191006aaa346ac UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/e5933ec1-8899-4660-92b1-320b0a53b031 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:32:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:32:33.237168900Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:32:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:32:33.237176447Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:32:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:32:33.237185000Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:32:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:32:38.235492923Z" level=info msg="NetworkStart: stopping network for sandbox 9351881861b7d62f848eb20b6f58f2924c6d22c88a95373459a66aa32475fd3b" id=c057459a-9510-46d1-9383-c584719a2392 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:32:38 ip-10-0-136-68 
crio[2158]: time="2023-02-23 19:32:38.235612488Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:9351881861b7d62f848eb20b6f58f2924c6d22c88a95373459a66aa32475fd3b UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/38ecfb5d-a390-4218-bc22-8447ab2b88e4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:32:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:32:38.235651237Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:32:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:32:38.235662699Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:32:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:32:38.235673115Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:32:40 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:32:40.217583 2199 scope.go:115] "RemoveContainer" containerID="98f90f5c8a2351bc19612c9c53076f7aff4dd02c3287d23927b0f52af8cae9ba" Feb 23 19:32:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:32:40.217956 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:32:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:32:54.216585 2199 scope.go:115] "RemoveContainer" containerID="98f90f5c8a2351bc19612c9c53076f7aff4dd02c3287d23927b0f52af8cae9ba" Feb 23 19:32:54 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:32:54.217126 2199 pod_workers.go:965] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:32:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:32:55.245724582Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(9df14047236af9c1292b984a3673cadd2d163967017ee9c0328bfb3ed64d7448): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f18bea3f-50d3-4440-a4a4-62adcfb2d337 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:32:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:32:55.245778956Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9df14047236af9c1292b984a3673cadd2d163967017ee9c0328bfb3ed64d7448" id=f18bea3f-50d3-4440-a4a4-62adcfb2d337 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:32:55 ip-10-0-136-68 systemd[1]: run-utsns-bdaec758\x2d1429\x2d4eb8\x2db0bd\x2d7c7ac0ce1990.mount: Deactivated successfully. Feb 23 19:32:55 ip-10-0-136-68 systemd[1]: run-ipcns-bdaec758\x2d1429\x2d4eb8\x2db0bd\x2d7c7ac0ce1990.mount: Deactivated successfully. Feb 23 19:32:55 ip-10-0-136-68 systemd[1]: run-netns-bdaec758\x2d1429\x2d4eb8\x2db0bd\x2d7c7ac0ce1990.mount: Deactivated successfully. 
Feb 23 19:32:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:32:55.264324472Z" level=info msg="runSandbox: deleting pod ID 9df14047236af9c1292b984a3673cadd2d163967017ee9c0328bfb3ed64d7448 from idIndex" id=f18bea3f-50d3-4440-a4a4-62adcfb2d337 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:32:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:32:55.264367557Z" level=info msg="runSandbox: removing pod sandbox 9df14047236af9c1292b984a3673cadd2d163967017ee9c0328bfb3ed64d7448" id=f18bea3f-50d3-4440-a4a4-62adcfb2d337 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:32:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:32:55.264412283Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9df14047236af9c1292b984a3673cadd2d163967017ee9c0328bfb3ed64d7448" id=f18bea3f-50d3-4440-a4a4-62adcfb2d337 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:32:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:32:55.264432250Z" level=info msg="runSandbox: unmounting shmPath for sandbox 9df14047236af9c1292b984a3673cadd2d163967017ee9c0328bfb3ed64d7448" id=f18bea3f-50d3-4440-a4a4-62adcfb2d337 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:32:55 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-9df14047236af9c1292b984a3673cadd2d163967017ee9c0328bfb3ed64d7448-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:32:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:32:55.269317606Z" level=info msg="runSandbox: removing pod sandbox from storage: 9df14047236af9c1292b984a3673cadd2d163967017ee9c0328bfb3ed64d7448" id=f18bea3f-50d3-4440-a4a4-62adcfb2d337 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:32:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:32:55.270852643Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=f18bea3f-50d3-4440-a4a4-62adcfb2d337 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:32:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:32:55.270882689Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=f18bea3f-50d3-4440-a4a4-62adcfb2d337 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:32:55 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:32:55.271092 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(9df14047236af9c1292b984a3673cadd2d163967017ee9c0328bfb3ed64d7448): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:32:55 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:32:55.271160 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(9df14047236af9c1292b984a3673cadd2d163967017ee9c0328bfb3ed64d7448): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:32:55 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:32:55.271200 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(9df14047236af9c1292b984a3673cadd2d163967017ee9c0328bfb3ed64d7448): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:32:55 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:32:55.271306 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(9df14047236af9c1292b984a3673cadd2d163967017ee9c0328bfb3ed64d7448): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 19:32:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:32:56.291928 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:32:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:32:56.292287 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:32:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:32:56.292521 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:32:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:32:56.292553 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:33:06 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:33:06.217009 2199 scope.go:115] "RemoveContainer" containerID="98f90f5c8a2351bc19612c9c53076f7aff4dd02c3287d23927b0f52af8cae9ba" Feb 23 19:33:06 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:33:06.217593 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:33:07 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:33:07.217367 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:33:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:07.217770678Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=508f1ac4-d805-4c63-b335-64f3349a4533 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:33:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:07.217824691Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:33:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:07.223128955Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:6ed95c869e5f9d8da7b89a6c3fc7526490d9c6dd76db2813c01903f613422764 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/42519276-57d2-4a91-a445-4a80255368c2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:33:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:07.223156916Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:33:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:07.235950707Z" level=info msg="NetworkStart: stopping network for sandbox 5c3426b25e1db77daf4c0b8f0b66dd36fb2883676771fcc78f72258882513522" id=6b39710e-a08e-4649-b5d3-32fd46f4da27 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:33:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:07.236046584Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:5c3426b25e1db77daf4c0b8f0b66dd36fb2883676771fcc78f72258882513522 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/963c8eea-7d70-4556-b563-efa97d594183 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:33:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 
19:33:07.236082579Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:33:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:07.236094187Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:33:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:07.236103880Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:33:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:13.244032881Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(e39942c5736cb5945e35ddf75431ac1a5839dd31b81687b3e407dade36e4ed9a): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=8f93136b-5fe3-4ab5-a2f6-c673c09fa82e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:33:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:13.244083585Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e39942c5736cb5945e35ddf75431ac1a5839dd31b81687b3e407dade36e4ed9a" id=8f93136b-5fe3-4ab5-a2f6-c673c09fa82e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:33:13 ip-10-0-136-68 systemd[1]: run-utsns-89f84ee2\x2d00bc\x2d4e7a\x2d9469\x2d322c8c7309df.mount: Deactivated successfully. Feb 23 19:33:13 ip-10-0-136-68 systemd[1]: run-ipcns-89f84ee2\x2d00bc\x2d4e7a\x2d9469\x2d322c8c7309df.mount: Deactivated successfully. 
Feb 23 19:33:13 ip-10-0-136-68 systemd[1]: run-netns-89f84ee2\x2d00bc\x2d4e7a\x2d9469\x2d322c8c7309df.mount: Deactivated successfully. Feb 23 19:33:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:13.261325826Z" level=info msg="runSandbox: deleting pod ID e39942c5736cb5945e35ddf75431ac1a5839dd31b81687b3e407dade36e4ed9a from idIndex" id=8f93136b-5fe3-4ab5-a2f6-c673c09fa82e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:33:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:13.261369342Z" level=info msg="runSandbox: removing pod sandbox e39942c5736cb5945e35ddf75431ac1a5839dd31b81687b3e407dade36e4ed9a" id=8f93136b-5fe3-4ab5-a2f6-c673c09fa82e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:33:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:13.261411522Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e39942c5736cb5945e35ddf75431ac1a5839dd31b81687b3e407dade36e4ed9a" id=8f93136b-5fe3-4ab5-a2f6-c673c09fa82e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:33:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:13.261437088Z" level=info msg="runSandbox: unmounting shmPath for sandbox e39942c5736cb5945e35ddf75431ac1a5839dd31b81687b3e407dade36e4ed9a" id=8f93136b-5fe3-4ab5-a2f6-c673c09fa82e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:33:13 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-e39942c5736cb5945e35ddf75431ac1a5839dd31b81687b3e407dade36e4ed9a-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:33:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:13.271337816Z" level=info msg="runSandbox: removing pod sandbox from storage: e39942c5736cb5945e35ddf75431ac1a5839dd31b81687b3e407dade36e4ed9a" id=8f93136b-5fe3-4ab5-a2f6-c673c09fa82e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:33:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:13.272877929Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=8f93136b-5fe3-4ab5-a2f6-c673c09fa82e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:33:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:13.272907354Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=8f93136b-5fe3-4ab5-a2f6-c673c09fa82e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:33:13 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:33:13.273090 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(e39942c5736cb5945e35ddf75431ac1a5839dd31b81687b3e407dade36e4ed9a): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:33:13 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:33:13.273160 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(e39942c5736cb5945e35ddf75431ac1a5839dd31b81687b3e407dade36e4ed9a): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:33:13 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:33:13.273199 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(e39942c5736cb5945e35ddf75431ac1a5839dd31b81687b3e407dade36e4ed9a): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:33:13 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:33:13.273333 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(e39942c5736cb5945e35ddf75431ac1a5839dd31b81687b3e407dade36e4ed9a): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 19:33:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:18.247237037Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(2c33a3dfa07fa4800c7620c129552735540b54e510ad5e083d191006aaa346ac): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d0ae5142-83fd-41ff-bf13-1c05fb2a6990 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:33:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:18.247308796Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 2c33a3dfa07fa4800c7620c129552735540b54e510ad5e083d191006aaa346ac" id=d0ae5142-83fd-41ff-bf13-1c05fb2a6990 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:33:18 ip-10-0-136-68 systemd[1]: run-utsns-e5933ec1\x2d8899\x2d4660\x2d92b1\x2d320b0a53b031.mount: Deactivated successfully. Feb 23 19:33:18 ip-10-0-136-68 systemd[1]: run-ipcns-e5933ec1\x2d8899\x2d4660\x2d92b1\x2d320b0a53b031.mount: Deactivated successfully. Feb 23 19:33:18 ip-10-0-136-68 systemd[1]: run-netns-e5933ec1\x2d8899\x2d4660\x2d92b1\x2d320b0a53b031.mount: Deactivated successfully. 
Feb 23 19:33:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:18.268332933Z" level=info msg="runSandbox: deleting pod ID 2c33a3dfa07fa4800c7620c129552735540b54e510ad5e083d191006aaa346ac from idIndex" id=d0ae5142-83fd-41ff-bf13-1c05fb2a6990 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:33:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:18.268372414Z" level=info msg="runSandbox: removing pod sandbox 2c33a3dfa07fa4800c7620c129552735540b54e510ad5e083d191006aaa346ac" id=d0ae5142-83fd-41ff-bf13-1c05fb2a6990 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:33:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:18.268409714Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 2c33a3dfa07fa4800c7620c129552735540b54e510ad5e083d191006aaa346ac" id=d0ae5142-83fd-41ff-bf13-1c05fb2a6990 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:33:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:18.268432114Z" level=info msg="runSandbox: unmounting shmPath for sandbox 2c33a3dfa07fa4800c7620c129552735540b54e510ad5e083d191006aaa346ac" id=d0ae5142-83fd-41ff-bf13-1c05fb2a6990 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:33:18 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-2c33a3dfa07fa4800c7620c129552735540b54e510ad5e083d191006aaa346ac-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:33:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:18.282304296Z" level=info msg="runSandbox: removing pod sandbox from storage: 2c33a3dfa07fa4800c7620c129552735540b54e510ad5e083d191006aaa346ac" id=d0ae5142-83fd-41ff-bf13-1c05fb2a6990 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:33:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:18.283896506Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=d0ae5142-83fd-41ff-bf13-1c05fb2a6990 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:33:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:18.283927551Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=d0ae5142-83fd-41ff-bf13-1c05fb2a6990 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:33:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:33:18.284151 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(2c33a3dfa07fa4800c7620c129552735540b54e510ad5e083d191006aaa346ac): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:33:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:33:18.284213 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(2c33a3dfa07fa4800c7620c129552735540b54e510ad5e083d191006aaa346ac): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:33:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:33:18.284267 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(2c33a3dfa07fa4800c7620c129552735540b54e510ad5e083d191006aaa346ac): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:33:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:33:18.284343 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(2c33a3dfa07fa4800c7620c129552735540b54e510ad5e083d191006aaa346ac): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 19:33:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:33:19.217116 2199 scope.go:115] "RemoveContainer" containerID="98f90f5c8a2351bc19612c9c53076f7aff4dd02c3287d23927b0f52af8cae9ba" Feb 23 19:33:19 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:33:19.217599 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:33:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:23.245621391Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(9351881861b7d62f848eb20b6f58f2924c6d22c88a95373459a66aa32475fd3b): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c057459a-9510-46d1-9383-c584719a2392 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:33:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:23.245673060Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9351881861b7d62f848eb20b6f58f2924c6d22c88a95373459a66aa32475fd3b" id=c057459a-9510-46d1-9383-c584719a2392 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:33:23 
ip-10-0-136-68 systemd[1]: run-utsns-38ecfb5d\x2da390\x2d4218\x2dbc22\x2d8447ab2b88e4.mount: Deactivated successfully. Feb 23 19:33:23 ip-10-0-136-68 systemd[1]: run-ipcns-38ecfb5d\x2da390\x2d4218\x2dbc22\x2d8447ab2b88e4.mount: Deactivated successfully. Feb 23 19:33:23 ip-10-0-136-68 systemd[1]: run-netns-38ecfb5d\x2da390\x2d4218\x2dbc22\x2d8447ab2b88e4.mount: Deactivated successfully. Feb 23 19:33:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:23.277338350Z" level=info msg="runSandbox: deleting pod ID 9351881861b7d62f848eb20b6f58f2924c6d22c88a95373459a66aa32475fd3b from idIndex" id=c057459a-9510-46d1-9383-c584719a2392 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:33:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:23.277383061Z" level=info msg="runSandbox: removing pod sandbox 9351881861b7d62f848eb20b6f58f2924c6d22c88a95373459a66aa32475fd3b" id=c057459a-9510-46d1-9383-c584719a2392 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:33:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:23.277434649Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9351881861b7d62f848eb20b6f58f2924c6d22c88a95373459a66aa32475fd3b" id=c057459a-9510-46d1-9383-c584719a2392 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:33:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:23.277458739Z" level=info msg="runSandbox: unmounting shmPath for sandbox 9351881861b7d62f848eb20b6f58f2924c6d22c88a95373459a66aa32475fd3b" id=c057459a-9510-46d1-9383-c584719a2392 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:33:23 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-9351881861b7d62f848eb20b6f58f2924c6d22c88a95373459a66aa32475fd3b-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:33:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:23.282311964Z" level=info msg="runSandbox: removing pod sandbox from storage: 9351881861b7d62f848eb20b6f58f2924c6d22c88a95373459a66aa32475fd3b" id=c057459a-9510-46d1-9383-c584719a2392 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:33:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:23.283871429Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=c057459a-9510-46d1-9383-c584719a2392 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:33:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:23.283901566Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=c057459a-9510-46d1-9383-c584719a2392 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:33:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:33:23.284115 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(9351881861b7d62f848eb20b6f58f2924c6d22c88a95373459a66aa32475fd3b): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:33:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:33:23.284170 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(9351881861b7d62f848eb20b6f58f2924c6d22c88a95373459a66aa32475fd3b): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:33:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:33:23.284196 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(9351881861b7d62f848eb20b6f58f2924c6d22c88a95373459a66aa32475fd3b): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:33:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:33:23.284285 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(9351881861b7d62f848eb20b6f58f2924c6d22c88a95373459a66aa32475fd3b): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 19:33:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:33:26.292295 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:33:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:33:26.292618 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:33:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:33:26.292847 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:33:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:33:26.292888 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:33:27 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:33:27.216736 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:33:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:27.217081798Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=99496fa2-9fa3-498b-92bf-1590ad4f0f6a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:33:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:27.217147569Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:33:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:27.222712660Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:2e75af2ccaaa777015a02214497afa9046a5460d88ed80244d53f2f90e8ad204 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/dff4c76b-6a43-456f-865d-6123a6830a83 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:33:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:27.222750935Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:33:29 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:33:29.217121 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 19:33:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:29.217468750Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=fcc3dd80-5320-4d06-a8a3-b60a091dbe1a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:33:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:29.217520643Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:33:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:29.222779724Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:a7201f0b0f8c646105283a9f4678d2d1fbc39731caf52d788af2ed986e356a03 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/9bb3fd93-75bd-4351-8e97-59d68fa84c15 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:33:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:29.222807701Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:33:33 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:33:33.216633 2199 scope.go:115] "RemoveContainer" containerID="98f90f5c8a2351bc19612c9c53076f7aff4dd02c3287d23927b0f52af8cae9ba" Feb 23 19:33:33 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:33:33.219000 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:33:35 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:33:35.217292 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:33:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:35.217719997Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=5e67f6a0-7cbc-49ee-a74d-58330a57f1bb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:33:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:35.217785067Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:33:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:35.223595056Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:fffc63d418e69a578cd5e4194d9aa21be1c999de0ffc5c82a2a3ec2cde4dfb74 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/aadd9c49-09aa-4511-ab15-7162220cd0af Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:33:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:35.223632081Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:33:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:33:36.217208 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:33:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:33:36.217787 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not 
created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:33:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:33:36.218022 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:33:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:33:36.218057 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 19:33:45 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:33:45.216903 2199 scope.go:115] "RemoveContainer" containerID="98f90f5c8a2351bc19612c9c53076f7aff4dd02c3287d23927b0f52af8cae9ba"
Feb 23 19:33:45 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:33:45.218373 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 19:33:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:52.234780082Z" level=info msg="NetworkStart: stopping network for sandbox 6ed95c869e5f9d8da7b89a6c3fc7526490d9c6dd76db2813c01903f613422764" id=508f1ac4-d805-4c63-b335-64f3349a4533 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:33:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:52.234887650Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:6ed95c869e5f9d8da7b89a6c3fc7526490d9c6dd76db2813c01903f613422764 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/42519276-57d2-4a91-a445-4a80255368c2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:33:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:52.234918910Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 19:33:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:52.234926499Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 19:33:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:52.234933839Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:33:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:52.244894556Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(5c3426b25e1db77daf4c0b8f0b66dd36fb2883676771fcc78f72258882513522): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=6b39710e-a08e-4649-b5d3-32fd46f4da27 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:33:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:52.244933460Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 5c3426b25e1db77daf4c0b8f0b66dd36fb2883676771fcc78f72258882513522" id=6b39710e-a08e-4649-b5d3-32fd46f4da27 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:33:52 ip-10-0-136-68 systemd[1]: run-utsns-963c8eea\x2d7d70\x2d4556\x2db563\x2defa97d594183.mount: Deactivated successfully.
Feb 23 19:33:52 ip-10-0-136-68 systemd[1]: run-ipcns-963c8eea\x2d7d70\x2d4556\x2db563\x2defa97d594183.mount: Deactivated successfully.
Feb 23 19:33:52 ip-10-0-136-68 systemd[1]: run-netns-963c8eea\x2d7d70\x2d4556\x2db563\x2defa97d594183.mount: Deactivated successfully.
Feb 23 19:33:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:52.260331993Z" level=info msg="runSandbox: deleting pod ID 5c3426b25e1db77daf4c0b8f0b66dd36fb2883676771fcc78f72258882513522 from idIndex" id=6b39710e-a08e-4649-b5d3-32fd46f4da27 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:33:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:52.260373129Z" level=info msg="runSandbox: removing pod sandbox 5c3426b25e1db77daf4c0b8f0b66dd36fb2883676771fcc78f72258882513522" id=6b39710e-a08e-4649-b5d3-32fd46f4da27 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:33:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:52.260416964Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 5c3426b25e1db77daf4c0b8f0b66dd36fb2883676771fcc78f72258882513522" id=6b39710e-a08e-4649-b5d3-32fd46f4da27 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:33:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:52.260436357Z" level=info msg="runSandbox: unmounting shmPath for sandbox 5c3426b25e1db77daf4c0b8f0b66dd36fb2883676771fcc78f72258882513522" id=6b39710e-a08e-4649-b5d3-32fd46f4da27 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:33:52 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-5c3426b25e1db77daf4c0b8f0b66dd36fb2883676771fcc78f72258882513522-userdata-shm.mount: Deactivated successfully.
Feb 23 19:33:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:52.265300738Z" level=info msg="runSandbox: removing pod sandbox from storage: 5c3426b25e1db77daf4c0b8f0b66dd36fb2883676771fcc78f72258882513522" id=6b39710e-a08e-4649-b5d3-32fd46f4da27 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:33:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:52.266793328Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=6b39710e-a08e-4649-b5d3-32fd46f4da27 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:33:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:33:52.266824566Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=6b39710e-a08e-4649-b5d3-32fd46f4da27 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:33:52 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:33:52.267026 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(5c3426b25e1db77daf4c0b8f0b66dd36fb2883676771fcc78f72258882513522): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 19:33:52 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:33:52.267079 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(5c3426b25e1db77daf4c0b8f0b66dd36fb2883676771fcc78f72258882513522): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 19:33:52 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:33:52.267116 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(5c3426b25e1db77daf4c0b8f0b66dd36fb2883676771fcc78f72258882513522): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 19:33:52 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:33:52.267169 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(5c3426b25e1db77daf4c0b8f0b66dd36fb2883676771fcc78f72258882513522): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1
Feb 23 19:33:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:33:56.292350 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:33:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:33:56.292584 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:33:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:33:56.292797 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:33:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:33:56.292821 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 19:33:58 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:33:58.217340 2199 scope.go:115] "RemoveContainer" containerID="98f90f5c8a2351bc19612c9c53076f7aff4dd02c3287d23927b0f52af8cae9ba"
Feb 23 19:33:58 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:33:58.217885 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 19:33:59 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:33:59.361705 2199 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-daemon-2fx68_ff7777c7-a1dc-413e-8da1-c4ba07527037/machine-config-daemon/1.log"
Feb 23 19:34:05 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:34:05.216625 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 19:34:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:05.217037844Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=705bf15e-4f16-4b3f-8cac-9b1d846ae050 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:34:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:05.217095819Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 19:34:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:05.222486844Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:25deb1ead31155bd4bf39dfbd684fd48841da093bc6a8770453aac658efa7212 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/72239026-7e9a-446d-a8c9-fe5026d19f58 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:34:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:05.222511239Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:34:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:12.233953018Z" level=info msg="NetworkStart: stopping network for sandbox 2e75af2ccaaa777015a02214497afa9046a5460d88ed80244d53f2f90e8ad204" id=99496fa2-9fa3-498b-92bf-1590ad4f0f6a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:34:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:12.234072469Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:2e75af2ccaaa777015a02214497afa9046a5460d88ed80244d53f2f90e8ad204 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/dff4c76b-6a43-456f-865d-6123a6830a83 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:34:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:12.234108929Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 19:34:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:12.234119819Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 19:34:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:12.234132103Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:34:13 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:34:13.216967 2199 scope.go:115] "RemoveContainer" containerID="98f90f5c8a2351bc19612c9c53076f7aff4dd02c3287d23927b0f52af8cae9ba"
Feb 23 19:34:13 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:34:13.217389 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 19:34:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:14.234907156Z" level=info msg="NetworkStart: stopping network for sandbox a7201f0b0f8c646105283a9f4678d2d1fbc39731caf52d788af2ed986e356a03" id=fcc3dd80-5320-4d06-a8a3-b60a091dbe1a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:34:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:14.235021538Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:a7201f0b0f8c646105283a9f4678d2d1fbc39731caf52d788af2ed986e356a03 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/9bb3fd93-75bd-4351-8e97-59d68fa84c15 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:34:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:14.235060378Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 19:34:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:14.235072107Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 19:34:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:14.235084251Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:34:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:20.237720301Z" level=info msg="NetworkStart: stopping network for sandbox fffc63d418e69a578cd5e4194d9aa21be1c999de0ffc5c82a2a3ec2cde4dfb74" id=5e67f6a0-7cbc-49ee-a74d-58330a57f1bb name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:34:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:20.237831126Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:fffc63d418e69a578cd5e4194d9aa21be1c999de0ffc5c82a2a3ec2cde4dfb74 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/aadd9c49-09aa-4511-ab15-7162220cd0af Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:34:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:20.237858411Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 19:34:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:20.237865420Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 19:34:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:20.237873776Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:34:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:34:26.292457 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:34:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:34:26.292723 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:34:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:34:26.292929 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:34:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:34:26.292954 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 19:34:28 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:34:28.216743 2199 scope.go:115] "RemoveContainer" containerID="98f90f5c8a2351bc19612c9c53076f7aff4dd02c3287d23927b0f52af8cae9ba"
Feb 23 19:34:28 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:34:28.217387 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 19:34:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:37.244090139Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(6ed95c869e5f9d8da7b89a6c3fc7526490d9c6dd76db2813c01903f613422764): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=508f1ac4-d805-4c63-b335-64f3349a4533 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:34:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:37.244148807Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 6ed95c869e5f9d8da7b89a6c3fc7526490d9c6dd76db2813c01903f613422764" id=508f1ac4-d805-4c63-b335-64f3349a4533 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:34:37 ip-10-0-136-68 systemd[1]: run-utsns-42519276\x2d57d2\x2d4a91\x2da445\x2d4a80255368c2.mount: Deactivated successfully.
Feb 23 19:34:37 ip-10-0-136-68 systemd[1]: run-ipcns-42519276\x2d57d2\x2d4a91\x2da445\x2d4a80255368c2.mount: Deactivated successfully.
Feb 23 19:34:37 ip-10-0-136-68 systemd[1]: run-netns-42519276\x2d57d2\x2d4a91\x2da445\x2d4a80255368c2.mount: Deactivated successfully.
Feb 23 19:34:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:37.271337408Z" level=info msg="runSandbox: deleting pod ID 6ed95c869e5f9d8da7b89a6c3fc7526490d9c6dd76db2813c01903f613422764 from idIndex" id=508f1ac4-d805-4c63-b335-64f3349a4533 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:34:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:37.271379122Z" level=info msg="runSandbox: removing pod sandbox 6ed95c869e5f9d8da7b89a6c3fc7526490d9c6dd76db2813c01903f613422764" id=508f1ac4-d805-4c63-b335-64f3349a4533 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:34:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:37.271415120Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 6ed95c869e5f9d8da7b89a6c3fc7526490d9c6dd76db2813c01903f613422764" id=508f1ac4-d805-4c63-b335-64f3349a4533 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:34:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:37.271434246Z" level=info msg="runSandbox: unmounting shmPath for sandbox 6ed95c869e5f9d8da7b89a6c3fc7526490d9c6dd76db2813c01903f613422764" id=508f1ac4-d805-4c63-b335-64f3349a4533 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:34:37 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-6ed95c869e5f9d8da7b89a6c3fc7526490d9c6dd76db2813c01903f613422764-userdata-shm.mount: Deactivated successfully.
Feb 23 19:34:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:37.278312303Z" level=info msg="runSandbox: removing pod sandbox from storage: 6ed95c869e5f9d8da7b89a6c3fc7526490d9c6dd76db2813c01903f613422764" id=508f1ac4-d805-4c63-b335-64f3349a4533 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:34:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:37.279834125Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=508f1ac4-d805-4c63-b335-64f3349a4533 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:34:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:37.279864488Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=508f1ac4-d805-4c63-b335-64f3349a4533 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:34:37 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:34:37.280428 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(6ed95c869e5f9d8da7b89a6c3fc7526490d9c6dd76db2813c01903f613422764): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 19:34:37 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:34:37.280533 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(6ed95c869e5f9d8da7b89a6c3fc7526490d9c6dd76db2813c01903f613422764): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk"
Feb 23 19:34:37 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:34:37.280575 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(6ed95c869e5f9d8da7b89a6c3fc7526490d9c6dd76db2813c01903f613422764): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk"
Feb 23 19:34:37 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:34:37.280650 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(6ed95c869e5f9d8da7b89a6c3fc7526490d9c6dd76db2813c01903f613422764): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687
Feb 23 19:34:43 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:34:43.217187 2199 scope.go:115] "RemoveContainer" containerID="98f90f5c8a2351bc19612c9c53076f7aff4dd02c3287d23927b0f52af8cae9ba"
Feb 23 19:34:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:43.217869887Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=0a38247a-d336-461e-9da0-c658ff90dff6 name=/runtime.v1.ImageService/ImageStatus
Feb 23 19:34:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:43.218061336Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=0a38247a-d336-461e-9da0-c658ff90dff6 name=/runtime.v1.ImageService/ImageStatus
Feb 23 19:34:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:43.218722720Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=a77d6b66-cd1c-4b86-9c29-ea7961176151 name=/runtime.v1.ImageService/ImageStatus
Feb 23 19:34:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:43.218888511Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=a77d6b66-cd1c-4b86-9c29-ea7961176151 name=/runtime.v1.ImageService/ImageStatus
Feb 23 19:34:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:43.219566429Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=ff516c32-8f6f-4283-818c-f3b010c91b8e name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 19:34:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:43.219668264Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 19:34:43 ip-10-0-136-68 systemd[1]: Started crio-conmon-58a83a2ef3cfae2389bb16d39319c61c62ca2eb5970ef07fd9324b34c5d5a5ec.scope.
Feb 23 19:34:43 ip-10-0-136-68 systemd[1]: Started libcontainer container 58a83a2ef3cfae2389bb16d39319c61c62ca2eb5970ef07fd9324b34c5d5a5ec.
Feb 23 19:34:43 ip-10-0-136-68 conmon[17206]: conmon 58a83a2ef3cfae2389bb : Failed to write to cgroup.event_control Operation not supported
Feb 23 19:34:43 ip-10-0-136-68 systemd[1]: crio-conmon-58a83a2ef3cfae2389bb16d39319c61c62ca2eb5970ef07fd9324b34c5d5a5ec.scope: Deactivated successfully.
Feb 23 19:34:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:43.363542623Z" level=info msg="Created container 58a83a2ef3cfae2389bb16d39319c61c62ca2eb5970ef07fd9324b34c5d5a5ec: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=ff516c32-8f6f-4283-818c-f3b010c91b8e name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 19:34:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:43.364021417Z" level=info msg="Starting container: 58a83a2ef3cfae2389bb16d39319c61c62ca2eb5970ef07fd9324b34c5d5a5ec" id=d8e62b3e-789c-423e-87e2-b218aea6b102 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 19:34:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:43.383165316Z" level=info msg="Started container" PID=17218 containerID=58a83a2ef3cfae2389bb16d39319c61c62ca2eb5970ef07fd9324b34c5d5a5ec description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver id=d8e62b3e-789c-423e-87e2-b218aea6b102 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4ac357af36e7dd4792f93249425e93fd84aed45840e9fbb3d0ae6c0af964798
Feb 23 19:34:43 ip-10-0-136-68 systemd[1]: crio-58a83a2ef3cfae2389bb16d39319c61c62ca2eb5970ef07fd9324b34c5d5a5ec.scope: Deactivated successfully.
Feb 23 19:34:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:48.029982997Z" level=warning msg="Failed to find container exit file for 98f90f5c8a2351bc19612c9c53076f7aff4dd02c3287d23927b0f52af8cae9ba: timed out waiting for the condition" id=f975fc37-b142-4162-bbfe-daf1cd9784f9 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 19:34:48 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:34:48.030918 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerStarted Data:58a83a2ef3cfae2389bb16d39319c61c62ca2eb5970ef07fd9324b34c5d5a5ec}
Feb 23 19:34:48 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:34:48.217115 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk"
Feb 23 19:34:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:48.217548231Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=48fa2d5a-0722-4d3f-9b71-add0db7a50cf name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:34:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:48.217627305Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 19:34:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:48.223431417Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:30417a5c59e8549cb31b3cece242edaf3a34ec04c427489e8886eda886b3ab9b UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/8a979ece-0a77-4fa2-a44f-e7ef5574f1ad Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:34:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:48.223467739Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:34:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:50.234412617Z" level=info msg="NetworkStart: stopping network for sandbox 25deb1ead31155bd4bf39dfbd684fd48841da093bc6a8770453aac658efa7212" id=705bf15e-4f16-4b3f-8cac-9b1d846ae050 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:34:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:50.234523654Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:25deb1ead31155bd4bf39dfbd684fd48841da093bc6a8770453aac658efa7212 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/72239026-7e9a-446d-a8c9-fe5026d19f58 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:34:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:50.234554850Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 19:34:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:50.234566196Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 19:34:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:50.234574995Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:34:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:34:54.872520 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 19:34:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:34:54.872581 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 19:34:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:34:56.292037 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:34:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:34:56.292321 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:34:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:34:56.292502 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:34:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:34:56.292533 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 19:34:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:57.244628904Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(2e75af2ccaaa777015a02214497afa9046a5460d88ed80244d53f2f90e8ad204): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error
waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=99496fa2-9fa3-498b-92bf-1590ad4f0f6a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:34:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:57.244688962Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 2e75af2ccaaa777015a02214497afa9046a5460d88ed80244d53f2f90e8ad204" id=99496fa2-9fa3-498b-92bf-1590ad4f0f6a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:34:57 ip-10-0-136-68 systemd[1]: run-utsns-dff4c76b\x2d6a43\x2d456f\x2d865d\x2d6123a6830a83.mount: Deactivated successfully. Feb 23 19:34:57 ip-10-0-136-68 systemd[1]: run-ipcns-dff4c76b\x2d6a43\x2d456f\x2d865d\x2d6123a6830a83.mount: Deactivated successfully. Feb 23 19:34:57 ip-10-0-136-68 systemd[1]: run-netns-dff4c76b\x2d6a43\x2d456f\x2d865d\x2d6123a6830a83.mount: Deactivated successfully. Feb 23 19:34:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:57.262328584Z" level=info msg="runSandbox: deleting pod ID 2e75af2ccaaa777015a02214497afa9046a5460d88ed80244d53f2f90e8ad204 from idIndex" id=99496fa2-9fa3-498b-92bf-1590ad4f0f6a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:34:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:57.262374035Z" level=info msg="runSandbox: removing pod sandbox 2e75af2ccaaa777015a02214497afa9046a5460d88ed80244d53f2f90e8ad204" id=99496fa2-9fa3-498b-92bf-1590ad4f0f6a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:34:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:57.262404962Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 2e75af2ccaaa777015a02214497afa9046a5460d88ed80244d53f2f90e8ad204" id=99496fa2-9fa3-498b-92bf-1590ad4f0f6a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:34:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:57.262420388Z" level=info msg="runSandbox: unmounting shmPath for sandbox 2e75af2ccaaa777015a02214497afa9046a5460d88ed80244d53f2f90e8ad204" 
id=99496fa2-9fa3-498b-92bf-1590ad4f0f6a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:34:57 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-2e75af2ccaaa777015a02214497afa9046a5460d88ed80244d53f2f90e8ad204-userdata-shm.mount: Deactivated successfully. Feb 23 19:34:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:57.268311829Z" level=info msg="runSandbox: removing pod sandbox from storage: 2e75af2ccaaa777015a02214497afa9046a5460d88ed80244d53f2f90e8ad204" id=99496fa2-9fa3-498b-92bf-1590ad4f0f6a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:34:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:57.269903189Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=99496fa2-9fa3-498b-92bf-1590ad4f0f6a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:34:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:57.269931942Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=99496fa2-9fa3-498b-92bf-1590ad4f0f6a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:34:57 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:34:57.270132 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(2e75af2ccaaa777015a02214497afa9046a5460d88ed80244d53f2f90e8ad204): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:34:57 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:34:57.270200 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(2e75af2ccaaa777015a02214497afa9046a5460d88ed80244d53f2f90e8ad204): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:34:57 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:34:57.270235 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(2e75af2ccaaa777015a02214497afa9046a5460d88ed80244d53f2f90e8ad204): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:34:57 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:34:57.270386 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(2e75af2ccaaa777015a02214497afa9046a5460d88ed80244d53f2f90e8ad204): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 19:34:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:59.245438913Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(a7201f0b0f8c646105283a9f4678d2d1fbc39731caf52d788af2ed986e356a03): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=fcc3dd80-5320-4d06-a8a3-b60a091dbe1a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:34:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:59.245491133Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a7201f0b0f8c646105283a9f4678d2d1fbc39731caf52d788af2ed986e356a03" id=fcc3dd80-5320-4d06-a8a3-b60a091dbe1a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:34:59 ip-10-0-136-68 systemd[1]: run-utsns-9bb3fd93\x2d75bd\x2d4351\x2d8e97\x2d59d68fa84c15.mount: Deactivated successfully. Feb 23 19:34:59 ip-10-0-136-68 systemd[1]: run-ipcns-9bb3fd93\x2d75bd\x2d4351\x2d8e97\x2d59d68fa84c15.mount: Deactivated successfully. Feb 23 19:34:59 ip-10-0-136-68 systemd[1]: run-netns-9bb3fd93\x2d75bd\x2d4351\x2d8e97\x2d59d68fa84c15.mount: Deactivated successfully. 
Feb 23 19:34:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:59.272351387Z" level=info msg="runSandbox: deleting pod ID a7201f0b0f8c646105283a9f4678d2d1fbc39731caf52d788af2ed986e356a03 from idIndex" id=fcc3dd80-5320-4d06-a8a3-b60a091dbe1a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:34:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:59.272409957Z" level=info msg="runSandbox: removing pod sandbox a7201f0b0f8c646105283a9f4678d2d1fbc39731caf52d788af2ed986e356a03" id=fcc3dd80-5320-4d06-a8a3-b60a091dbe1a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:34:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:59.272448250Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a7201f0b0f8c646105283a9f4678d2d1fbc39731caf52d788af2ed986e356a03" id=fcc3dd80-5320-4d06-a8a3-b60a091dbe1a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:34:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:59.272466145Z" level=info msg="runSandbox: unmounting shmPath for sandbox a7201f0b0f8c646105283a9f4678d2d1fbc39731caf52d788af2ed986e356a03" id=fcc3dd80-5320-4d06-a8a3-b60a091dbe1a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:34:59 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-a7201f0b0f8c646105283a9f4678d2d1fbc39731caf52d788af2ed986e356a03-userdata-shm.mount: Deactivated successfully.
Feb 23 19:34:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:59.287311191Z" level=info msg="runSandbox: removing pod sandbox from storage: a7201f0b0f8c646105283a9f4678d2d1fbc39731caf52d788af2ed986e356a03" id=fcc3dd80-5320-4d06-a8a3-b60a091dbe1a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:34:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:59.288882599Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=fcc3dd80-5320-4d06-a8a3-b60a091dbe1a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:34:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:34:59.288910034Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=fcc3dd80-5320-4d06-a8a3-b60a091dbe1a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:34:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:34:59.289126 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(a7201f0b0f8c646105283a9f4678d2d1fbc39731caf52d788af2ed986e356a03): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 19:34:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:34:59.289175 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(a7201f0b0f8c646105283a9f4678d2d1fbc39731caf52d788af2ed986e356a03): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4"
Feb 23 19:34:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:34:59.289198 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(a7201f0b0f8c646105283a9f4678d2d1fbc39731caf52d788af2ed986e356a03): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4"
Feb 23 19:34:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:34:59.289271 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(a7201f0b0f8c646105283a9f4678d2d1fbc39731caf52d788af2ed986e356a03): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f
Feb 23 19:35:02 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:35:02.217125 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:35:02 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:35:02.217491 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:35:02 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:35:02.217769 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:35:02 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:35:02.217811 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 19:35:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:35:04.872030 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 19:35:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:35:04.872081 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 19:35:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:05.247155388Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(fffc63d418e69a578cd5e4194d9aa21be1c999de0ffc5c82a2a3ec2cde4dfb74): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=5e67f6a0-7cbc-49ee-a74d-58330a57f1bb name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:35:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:05.247392735Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox fffc63d418e69a578cd5e4194d9aa21be1c999de0ffc5c82a2a3ec2cde4dfb74" id=5e67f6a0-7cbc-49ee-a74d-58330a57f1bb name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:35:05 ip-10-0-136-68 systemd[1]: run-utsns-aadd9c49\x2d09aa\x2d4511\x2dab15\x2d7162220cd0af.mount: Deactivated successfully.
Feb 23 19:35:05 ip-10-0-136-68 systemd[1]: run-ipcns-aadd9c49\x2d09aa\x2d4511\x2dab15\x2d7162220cd0af.mount: Deactivated successfully.
Feb 23 19:35:05 ip-10-0-136-68 systemd[1]: run-netns-aadd9c49\x2d09aa\x2d4511\x2dab15\x2d7162220cd0af.mount: Deactivated successfully.
Feb 23 19:35:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:05.282332551Z" level=info msg="runSandbox: deleting pod ID fffc63d418e69a578cd5e4194d9aa21be1c999de0ffc5c82a2a3ec2cde4dfb74 from idIndex" id=5e67f6a0-7cbc-49ee-a74d-58330a57f1bb name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:35:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:05.282366152Z" level=info msg="runSandbox: removing pod sandbox fffc63d418e69a578cd5e4194d9aa21be1c999de0ffc5c82a2a3ec2cde4dfb74" id=5e67f6a0-7cbc-49ee-a74d-58330a57f1bb name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:35:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:05.282397033Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox fffc63d418e69a578cd5e4194d9aa21be1c999de0ffc5c82a2a3ec2cde4dfb74" id=5e67f6a0-7cbc-49ee-a74d-58330a57f1bb name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:35:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:05.282423376Z" level=info msg="runSandbox: unmounting shmPath for sandbox fffc63d418e69a578cd5e4194d9aa21be1c999de0ffc5c82a2a3ec2cde4dfb74" id=5e67f6a0-7cbc-49ee-a74d-58330a57f1bb name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:35:05 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-fffc63d418e69a578cd5e4194d9aa21be1c999de0ffc5c82a2a3ec2cde4dfb74-userdata-shm.mount: Deactivated successfully.
Feb 23 19:35:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:05.288303269Z" level=info msg="runSandbox: removing pod sandbox from storage: fffc63d418e69a578cd5e4194d9aa21be1c999de0ffc5c82a2a3ec2cde4dfb74" id=5e67f6a0-7cbc-49ee-a74d-58330a57f1bb name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:35:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:05.289901713Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=5e67f6a0-7cbc-49ee-a74d-58330a57f1bb name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:35:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:05.289938085Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=5e67f6a0-7cbc-49ee-a74d-58330a57f1bb name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:35:05 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:35:05.290160 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(fffc63d418e69a578cd5e4194d9aa21be1c999de0ffc5c82a2a3ec2cde4dfb74): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 19:35:05 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:35:05.290211 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(fffc63d418e69a578cd5e4194d9aa21be1c999de0ffc5c82a2a3ec2cde4dfb74): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 19:35:05 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:35:05.290265 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(fffc63d418e69a578cd5e4194d9aa21be1c999de0ffc5c82a2a3ec2cde4dfb74): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 19:35:05 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:35:05.290343 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(fffc63d418e69a578cd5e4194d9aa21be1c999de0ffc5c82a2a3ec2cde4dfb74): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77
Feb 23 19:35:08 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:35:08.217397 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz"
Feb 23 19:35:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:08.217811967Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=e99f4e55-1aec-4637-a662-ca7ef72fa1ba name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:35:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:08.217877580Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 19:35:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:08.226239458Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:4a2cff579fb0efe5446a94dc4d33892a38f70ccc89f0fe0bdef6e9c68d080332 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/3e1beed1-745a-4db0-aaa5-cafbbee38e99 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:35:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:08.226289871Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:35:12 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:35:12.217204 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-657v4"
Feb 23 19:35:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:12.217606942Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=bea10902-203d-4d7d-beaf-8cdb3c458582 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:35:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:12.217666469Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 19:35:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:12.223749814Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:728eb0b0254c773334ed1d252e98c6e6ad8a4ddf215786a216f172242ebfdef3 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/e735c3fa-0a51-46e5-a966-36fa7b59ea52 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:35:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:12.223785369Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:35:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:35:14.872272 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 19:35:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:35:14.872334 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 19:35:18 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:35:18.217330 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 19:35:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:18.217840709Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=044ed0dc-a8d5-42cf-a5a7-662290bc1c1d name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:35:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:18.217902239Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 19:35:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:18.224007847Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:cfbd51922ba70de4f13f2e12216a19519da4654aa051a94232bc425c575b4375 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/7a8d8d0e-d8ba-4105-bebf-2482d35c0969 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:35:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:18.224042840Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:35:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:20.233786293Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3" id=5c8e6682-1b2c-4219-85c4-c54b476feff5 name=/runtime.v1.ImageService/ImageStatus
Feb 23 19:35:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:20.234002947Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:51cb7555d0a960fc0daae74a5b5c47c19fd1ae7bd31e2616ebc88f696f511e4d,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3],Size_:351828271,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=5c8e6682-1b2c-4219-85c4-c54b476feff5 name=/runtime.v1.ImageService/ImageStatus
Feb 23 19:35:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:35:24.872110 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 19:35:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:35:24.872318 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 19:35:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:35:26.292468 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:35:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:35:26.292814 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running:
checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:35:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:35:26.293092 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:35:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:35:26.293122 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:35:27 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:35:27.755984 2199 scope.go:115] "RemoveContainer" containerID="6d20420ea35b9cc04246181776dc176d2cfe3fc5ed4714808450a68134f3bc44" Feb 23 19:35:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:31.518197914Z" level=warning msg="Failed to find container exit file for 6d20420ea35b9cc04246181776dc176d2cfe3fc5ed4714808450a68134f3bc44: timed out waiting for the condition" id=f0b3dfea-f9e6-45be-a246-6002e9660888 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:35:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:33.235096526Z" level=info msg="NetworkStart: stopping 
network for sandbox 30417a5c59e8549cb31b3cece242edaf3a34ec04c427489e8886eda886b3ab9b" id=48fa2d5a-0722-4d3f-9b71-add0db7a50cf name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:35:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:33.235218819Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:30417a5c59e8549cb31b3cece242edaf3a34ec04c427489e8886eda886b3ab9b UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/8a979ece-0a77-4fa2-a44f-e7ef5574f1ad Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:35:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:33.235272122Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:35:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:33.235280485Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:35:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:33.235286978Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:35:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:35:34.872407 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:35:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:35:34.872472 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:35:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 
19:35:34.872500 2199 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 19:35:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:35:34.872969 2199 kuberuntime_manager.go:659] "Message for Container of pod" containerName="csi-driver" containerStatusID={Type:cri-o ID:58a83a2ef3cfae2389bb16d39319c61c62ca2eb5970ef07fd9324b34c5d5a5ec} pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" containerMessage="Container csi-driver failed liveness probe, will be restarted" Feb 23 19:35:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:35:34.873136 2199 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" containerID="cri-o://58a83a2ef3cfae2389bb16d39319c61c62ca2eb5970ef07fd9324b34c5d5a5ec" gracePeriod=30 Feb 23 19:35:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:34.873419639Z" level=info msg="Stopping container: 58a83a2ef3cfae2389bb16d39319c61c62ca2eb5970ef07fd9324b34c5d5a5ec (timeout: 30s)" id=db4916c5-6bd2-4e58-b4b9-d88a63e89fd4 name=/runtime.v1.RuntimeService/StopContainer Feb 23 19:35:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:35.243459943Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(25deb1ead31155bd4bf39dfbd684fd48841da093bc6a8770453aac658efa7212): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" 
id=705bf15e-4f16-4b3f-8cac-9b1d846ae050 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:35:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:35.243509955Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 25deb1ead31155bd4bf39dfbd684fd48841da093bc6a8770453aac658efa7212" id=705bf15e-4f16-4b3f-8cac-9b1d846ae050 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:35:35 ip-10-0-136-68 systemd[1]: run-utsns-72239026\x2d7e9a\x2d446d\x2da8c9\x2dfe5026d19f58.mount: Deactivated successfully. Feb 23 19:35:35 ip-10-0-136-68 systemd[1]: run-ipcns-72239026\x2d7e9a\x2d446d\x2da8c9\x2dfe5026d19f58.mount: Deactivated successfully. Feb 23 19:35:35 ip-10-0-136-68 systemd[1]: run-netns-72239026\x2d7e9a\x2d446d\x2da8c9\x2dfe5026d19f58.mount: Deactivated successfully. Feb 23 19:35:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:35.268358141Z" level=info msg="runSandbox: deleting pod ID 25deb1ead31155bd4bf39dfbd684fd48841da093bc6a8770453aac658efa7212 from idIndex" id=705bf15e-4f16-4b3f-8cac-9b1d846ae050 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:35:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:35.268407636Z" level=info msg="runSandbox: removing pod sandbox 25deb1ead31155bd4bf39dfbd684fd48841da093bc6a8770453aac658efa7212" id=705bf15e-4f16-4b3f-8cac-9b1d846ae050 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:35:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:35.268455203Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 25deb1ead31155bd4bf39dfbd684fd48841da093bc6a8770453aac658efa7212" id=705bf15e-4f16-4b3f-8cac-9b1d846ae050 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:35:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:35.268482038Z" level=info msg="runSandbox: unmounting shmPath for sandbox 25deb1ead31155bd4bf39dfbd684fd48841da093bc6a8770453aac658efa7212" id=705bf15e-4f16-4b3f-8cac-9b1d846ae050 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:35:35 
ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-25deb1ead31155bd4bf39dfbd684fd48841da093bc6a8770453aac658efa7212-userdata-shm.mount: Deactivated successfully. Feb 23 19:35:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:35.273323461Z" level=info msg="runSandbox: removing pod sandbox from storage: 25deb1ead31155bd4bf39dfbd684fd48841da093bc6a8770453aac658efa7212" id=705bf15e-4f16-4b3f-8cac-9b1d846ae050 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:35:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:35.275075315Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=705bf15e-4f16-4b3f-8cac-9b1d846ae050 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:35:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:35.275111055Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=705bf15e-4f16-4b3f-8cac-9b1d846ae050 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:35:35 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:35:35.275389 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(25deb1ead31155bd4bf39dfbd684fd48841da093bc6a8770453aac658efa7212): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:35:35 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:35:35.275444 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(25deb1ead31155bd4bf39dfbd684fd48841da093bc6a8770453aac658efa7212): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:35:35 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:35:35.275470 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(25deb1ead31155bd4bf39dfbd684fd48841da093bc6a8770453aac658efa7212): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:35:35 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:35:35.275529 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(25deb1ead31155bd4bf39dfbd684fd48841da093bc6a8770453aac658efa7212): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 19:35:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:35.285762950Z" level=warning msg="Failed to find container exit file for 6d20420ea35b9cc04246181776dc176d2cfe3fc5ed4714808450a68134f3bc44: timed out waiting for the condition" id=03809314-fc4f-4dfb-b010-c13e70eb74c2 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:35:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:35.292308564Z" level=info msg="Removing container: 6d20420ea35b9cc04246181776dc176d2cfe3fc5ed4714808450a68134f3bc44" id=4014eebf-ddcf-4a48-a06f-3bc516ec62ab name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 19:35:35 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-fce053f7abbf47dce83fad8f6158e1566254e75b8cae827d6ec1a2d8a91b1159-merged.mount: Deactivated successfully. Feb 23 19:35:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:35.332113121Z" level=info msg="Removed container 6d20420ea35b9cc04246181776dc176d2cfe3fc5ed4714808450a68134f3bc44: openshift-debug-n5lxf/ip-10-0-136-68us-west-2computeinternal-debug/container-00" id=4014eebf-ddcf-4a48-a06f-3bc516ec62ab name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 19:35:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:35.333130161Z" level=info msg="Stopping pod sandbox: ff50a6bc0c32abb113a766dcb6da215811401f50d35b297d15939572fb30668b" id=949c69df-859c-45b8-bade-9a55e39539f8 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 19:35:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:35.333168605Z" level=info msg="Stopped pod sandbox (already stopped): ff50a6bc0c32abb113a766dcb6da215811401f50d35b297d15939572fb30668b" id=949c69df-859c-45b8-bade-9a55e39539f8 name=/runtime.v1.RuntimeService/StopPodSandbox Feb 23 19:35:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:35.333385504Z" level=info msg="Removing pod sandbox: 
ff50a6bc0c32abb113a766dcb6da215811401f50d35b297d15939572fb30668b" id=bde6e5cf-eaa2-4c8a-970c-a82ceb4786a7 name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 19:35:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:35.335129818Z" level=info msg="Removed pod sandbox: ff50a6bc0c32abb113a766dcb6da215811401f50d35b297d15939572fb30668b" id=bde6e5cf-eaa2-4c8a-970c-a82ceb4786a7 name=/runtime.v1.RuntimeService/RemovePodSandbox Feb 23 19:35:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:38.634024533Z" level=warning msg="Failed to find container exit file for 58a83a2ef3cfae2389bb16d39319c61c62ca2eb5970ef07fd9324b34c5d5a5ec: timed out waiting for the condition" id=db4916c5-6bd2-4e58-b4b9-d88a63e89fd4 name=/runtime.v1.RuntimeService/StopContainer Feb 23 19:35:38 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-26646fed4beb9f809744f5af6c37e7f9788332d659ad2e2996f7e47d73ebfdb9-merged.mount: Deactivated successfully. Feb 23 19:35:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:42.400965806Z" level=warning msg="Failed to find container exit file for 58a83a2ef3cfae2389bb16d39319c61c62ca2eb5970ef07fd9324b34c5d5a5ec: timed out waiting for the condition" id=db4916c5-6bd2-4e58-b4b9-d88a63e89fd4 name=/runtime.v1.RuntimeService/StopContainer Feb 23 19:35:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:42.404576782Z" level=info msg="Stopped container 58a83a2ef3cfae2389bb16d39319c61c62ca2eb5970ef07fd9324b34c5d5a5ec: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=db4916c5-6bd2-4e58-b4b9-d88a63e89fd4 name=/runtime.v1.RuntimeService/StopContainer Feb 23 19:35:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:42.405287015Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=6fce715e-6648-496f-8b6c-4f78cbcdae3b name=/runtime.v1.ImageService/ImageStatus Feb 23 19:35:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 
19:35:42.405452134Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=6fce715e-6648-496f-8b6c-4f78cbcdae3b name=/runtime.v1.ImageService/ImageStatus Feb 23 19:35:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:42.406032816Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=1276816d-7cc0-485e-858e-20960f9da1e3 name=/runtime.v1.ImageService/ImageStatus Feb 23 19:35:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:42.406172973Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=1276816d-7cc0-485e-858e-20960f9da1e3 name=/runtime.v1.ImageService/ImageStatus Feb 23 19:35:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:42.406843510Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=5371dd8f-2f88-4925-a043-97d885bd3013 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 19:35:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:42.406956465Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:35:42 ip-10-0-136-68 systemd[1]: Started 
crio-conmon-990f4a14943bf48e2bb5cc6a85e76c0a8463838917b5f6b6f240f17e26ba9113.scope. Feb 23 19:35:42 ip-10-0-136-68 systemd[1]: Started libcontainer container 990f4a14943bf48e2bb5cc6a85e76c0a8463838917b5f6b6f240f17e26ba9113. Feb 23 19:35:42 ip-10-0-136-68 conmon[17423]: conmon 990f4a14943bf48e2bb5 : Failed to write to cgroup.event_control Operation not supported Feb 23 19:35:42 ip-10-0-136-68 systemd[1]: crio-conmon-990f4a14943bf48e2bb5cc6a85e76c0a8463838917b5f6b6f240f17e26ba9113.scope: Deactivated successfully. Feb 23 19:35:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:42.529459403Z" level=info msg="Created container 990f4a14943bf48e2bb5cc6a85e76c0a8463838917b5f6b6f240f17e26ba9113: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=5371dd8f-2f88-4925-a043-97d885bd3013 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 19:35:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:42.529902684Z" level=info msg="Starting container: 990f4a14943bf48e2bb5cc6a85e76c0a8463838917b5f6b6f240f17e26ba9113" id=24072ff9-da7f-4b30-80d6-86711e3fd1c0 name=/runtime.v1.RuntimeService/StartContainer Feb 23 19:35:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:42.536589058Z" level=info msg="Started container" PID=17435 containerID=990f4a14943bf48e2bb5cc6a85e76c0a8463838917b5f6b6f240f17e26ba9113 description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver id=24072ff9-da7f-4b30-80d6-86711e3fd1c0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4ac357af36e7dd4792f93249425e93fd84aed45840e9fbb3d0ae6c0af964798 Feb 23 19:35:42 ip-10-0-136-68 systemd[1]: crio-990f4a14943bf48e2bb5cc6a85e76c0a8463838917b5f6b6f240f17e26ba9113.scope: Deactivated successfully. 
Feb 23 19:35:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:42.847067768Z" level=warning msg="Failed to find container exit file for 58a83a2ef3cfae2389bb16d39319c61c62ca2eb5970ef07fd9324b34c5d5a5ec: timed out waiting for the condition" id=6e9f7ab1-7517-4f8f-bcb7-49b66fc55df7 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:35:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:46.597209745Z" level=warning msg="Failed to find container exit file for 98f90f5c8a2351bc19612c9c53076f7aff4dd02c3287d23927b0f52af8cae9ba: timed out waiting for the condition" id=65da98c4-3eb1-427e-b0c1-fe3cb795db2c name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:35:46 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:35:46.598358 2199 generic.go:332] "Generic (PLEG): container finished" podID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerID="58a83a2ef3cfae2389bb16d39319c61c62ca2eb5970ef07fd9324b34c5d5a5ec" exitCode=-1 Feb 23 19:35:46 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:35:46.598400 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerDied Data:58a83a2ef3cfae2389bb16d39319c61c62ca2eb5970ef07fd9324b34c5d5a5ec} Feb 23 19:35:46 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:35:46.598435 2199 scope.go:115] "RemoveContainer" containerID="98f90f5c8a2351bc19612c9c53076f7aff4dd02c3287d23927b0f52af8cae9ba" Feb 23 19:35:48 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:35:48.217160 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:35:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:48.217705810Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=b765ab4a-9380-44e0-ba1a-484befe7534c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:35:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:48.217781002Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:35:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:48.223532612Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:ff1837792ab3c69440f035f1e0df437eaed7e6030f0c94359323ffea9520a573 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/ab4ddcfc-8329-4b93-bf1e-409c8af55bd1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:35:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:48.223569116Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:35:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:50.359039168Z" level=warning msg="Failed to find container exit file for 98f90f5c8a2351bc19612c9c53076f7aff4dd02c3287d23927b0f52af8cae9ba: timed out waiting for the condition" id=871c6e0e-07f8-4b1d-81f0-ec1363976a9c name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:35:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:51.360938970Z" level=warning msg="Failed to find container exit file for 58a83a2ef3cfae2389bb16d39319c61c62ca2eb5970ef07fd9324b34c5d5a5ec: timed out waiting for the condition" id=e7b42225-7e3f-425f-9e97-e2c8235d8c52 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:35:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:53.238157863Z" level=info msg="NetworkStart: stopping 
network for sandbox 4a2cff579fb0efe5446a94dc4d33892a38f70ccc89f0fe0bdef6e9c68d080332" id=e99f4e55-1aec-4637-a662-ca7ef72fa1ba name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:35:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:53.238299993Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:4a2cff579fb0efe5446a94dc4d33892a38f70ccc89f0fe0bdef6e9c68d080332 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/3e1beed1-745a-4db0-aaa5-cafbbee38e99 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:35:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:53.238330591Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:35:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:53.238338215Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:35:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:53.238344539Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:35:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:54.121360498Z" level=warning msg="Failed to find container exit file for 98f90f5c8a2351bc19612c9c53076f7aff4dd02c3287d23927b0f52af8cae9ba: timed out waiting for the condition" id=349ad94b-1519-4a3a-ae33-c4acc5f45d9b name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:35:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:54.121740315Z" level=info msg="Removing container: 98f90f5c8a2351bc19612c9c53076f7aff4dd02c3287d23927b0f52af8cae9ba" id=b0011ec8-9d89-41d8-886d-92b2595bcbcc name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 19:35:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:55.109312588Z" level=warning msg="Failed to find container exit file for 98f90f5c8a2351bc19612c9c53076f7aff4dd02c3287d23927b0f52af8cae9ba: timed 
out waiting for the condition" id=abc40c59-2567-456d-8c18-c2b9e6fa86f3 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:35:55 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:35:55.110220 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerStarted Data:990f4a14943bf48e2bb5cc6a85e76c0a8463838917b5f6b6f240f17e26ba9113} Feb 23 19:35:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:35:56.292677 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:35:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:35:56.292978 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:35:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:35:56.293213 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" 
containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:35:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:35:56.293284 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:35:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:57.234892977Z" level=info msg="NetworkStart: stopping network for sandbox 728eb0b0254c773334ed1d252e98c6e6ad8a4ddf215786a216f172242ebfdef3" id=bea10902-203d-4d7d-beaf-8cdb3c458582 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:35:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:57.235015893Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:728eb0b0254c773334ed1d252e98c6e6ad8a4ddf215786a216f172242ebfdef3 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/e735c3fa-0a51-46e5-a966-36fa7b59ea52 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:35:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:57.235046201Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:35:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:57.235056809Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:35:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:57.235066231Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:35:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 
19:35:57.872140042Z" level=warning msg="Failed to find container exit file for 98f90f5c8a2351bc19612c9c53076f7aff4dd02c3287d23927b0f52af8cae9ba: timed out waiting for the condition" id=b0011ec8-9d89-41d8-886d-92b2595bcbcc name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 19:35:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:35:57.884930962Z" level=info msg="Removed container 98f90f5c8a2351bc19612c9c53076f7aff4dd02c3287d23927b0f52af8cae9ba: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=b0011ec8-9d89-41d8-886d-92b2595bcbcc name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 19:36:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:01.876978863Z" level=warning msg="Failed to find container exit file for 58a83a2ef3cfae2389bb16d39319c61c62ca2eb5970ef07fd9324b34c5d5a5ec: timed out waiting for the condition" id=bb697eab-0918-4ec4-95e6-630f9dc16ba9 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:36:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:03.235721983Z" level=info msg="NetworkStart: stopping network for sandbox cfbd51922ba70de4f13f2e12216a19519da4654aa051a94232bc425c575b4375" id=044ed0dc-a8d5-42cf-a5a7-662290bc1c1d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:36:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:03.235872285Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:cfbd51922ba70de4f13f2e12216a19519da4654aa051a94232bc425c575b4375 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/7a8d8d0e-d8ba-4105-bebf-2482d35c0969 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:36:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:03.235911160Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:36:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:03.235923063Z" level=warning 
msg="falling back to loading from existing plugins on disk" Feb 23 19:36:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:03.235933441Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:36:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:36:04.872329 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:36:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:36:04.872386 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:36:05 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:36:05.217533 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:36:05 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:36:05.217900 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" 
containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:36:05 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:36:05.218160 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:36:05 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:36:05.218200 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:36:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:36:14.872629 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:36:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:36:14.872694 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:36:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 
19:36:18.245961085Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(30417a5c59e8549cb31b3cece242edaf3a34ec04c427489e8886eda886b3ab9b): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=48fa2d5a-0722-4d3f-9b71-add0db7a50cf name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:36:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:18.246007223Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 30417a5c59e8549cb31b3cece242edaf3a34ec04c427489e8886eda886b3ab9b" id=48fa2d5a-0722-4d3f-9b71-add0db7a50cf name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:36:18 ip-10-0-136-68 systemd[1]: run-utsns-8a979ece\x2d0a77\x2d4fa2\x2da44f\x2de7ef5574f1ad.mount: Deactivated successfully. Feb 23 19:36:18 ip-10-0-136-68 systemd[1]: run-ipcns-8a979ece\x2d0a77\x2d4fa2\x2da44f\x2de7ef5574f1ad.mount: Deactivated successfully. Feb 23 19:36:18 ip-10-0-136-68 systemd[1]: run-netns-8a979ece\x2d0a77\x2d4fa2\x2da44f\x2de7ef5574f1ad.mount: Deactivated successfully. 
Feb 23 19:36:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:18.277340608Z" level=info msg="runSandbox: deleting pod ID 30417a5c59e8549cb31b3cece242edaf3a34ec04c427489e8886eda886b3ab9b from idIndex" id=48fa2d5a-0722-4d3f-9b71-add0db7a50cf name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:36:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:18.277378626Z" level=info msg="runSandbox: removing pod sandbox 30417a5c59e8549cb31b3cece242edaf3a34ec04c427489e8886eda886b3ab9b" id=48fa2d5a-0722-4d3f-9b71-add0db7a50cf name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:36:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:18.277407714Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 30417a5c59e8549cb31b3cece242edaf3a34ec04c427489e8886eda886b3ab9b" id=48fa2d5a-0722-4d3f-9b71-add0db7a50cf name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:36:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:18.277435770Z" level=info msg="runSandbox: unmounting shmPath for sandbox 30417a5c59e8549cb31b3cece242edaf3a34ec04c427489e8886eda886b3ab9b" id=48fa2d5a-0722-4d3f-9b71-add0db7a50cf name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:36:18 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-30417a5c59e8549cb31b3cece242edaf3a34ec04c427489e8886eda886b3ab9b-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:36:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:18.283313184Z" level=info msg="runSandbox: removing pod sandbox from storage: 30417a5c59e8549cb31b3cece242edaf3a34ec04c427489e8886eda886b3ab9b" id=48fa2d5a-0722-4d3f-9b71-add0db7a50cf name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:36:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:18.284913654Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=48fa2d5a-0722-4d3f-9b71-add0db7a50cf name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:36:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:18.284942247Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=48fa2d5a-0722-4d3f-9b71-add0db7a50cf name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:36:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:36:18.285129 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(30417a5c59e8549cb31b3cece242edaf3a34ec04c427489e8886eda886b3ab9b): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:36:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:36:18.285178 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(30417a5c59e8549cb31b3cece242edaf3a34ec04c427489e8886eda886b3ab9b): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:36:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:36:18.285211 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(30417a5c59e8549cb31b3cece242edaf3a34ec04c427489e8886eda886b3ab9b): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:36:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:36:18.285304 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(30417a5c59e8549cb31b3cece242edaf3a34ec04c427489e8886eda886b3ab9b): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 19:36:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:36:24.872639 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:36:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:36:24.872705 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:36:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:36:26.292356 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:36:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:36:26.292632 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:36:26 
ip-10-0-136-68 kubenswrapper[2199]: E0223 19:36:26.292837 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:36:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:36:26.292871 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:36:30 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:36:30.217423 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:36:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:30.217892778Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=1240b0c8-97d5-4de6-9ea6-74bd394b774a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:36:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:30.217964454Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:36:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:30.224263620Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:dc4a0f8ccecd377bc2760c91c4a803b360f1d0b9cb38c23bdcb2d582064b99dc UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/4b192fa0-0642-44f8-825b-4fc546af5487 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:36:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:30.224300879Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:36:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:33.237139572Z" level=info msg="NetworkStart: stopping network for sandbox ff1837792ab3c69440f035f1e0df437eaed7e6030f0c94359323ffea9520a573" id=b765ab4a-9380-44e0-ba1a-484befe7534c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:36:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:33.237467515Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:ff1837792ab3c69440f035f1e0df437eaed7e6030f0c94359323ffea9520a573 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/ab4ddcfc-8329-4b93-bf1e-409c8af55bd1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:36:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 
19:36:33.237499477Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:36:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:33.237507086Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:36:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:33.237514403Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:36:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:36:34.872510 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:36:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:36:34.872572 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:36:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:38.247865997Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(4a2cff579fb0efe5446a94dc4d33892a38f70ccc89f0fe0bdef6e9c68d080332): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" 
id=e99f4e55-1aec-4637-a662-ca7ef72fa1ba name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:36:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:38.247909068Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 4a2cff579fb0efe5446a94dc4d33892a38f70ccc89f0fe0bdef6e9c68d080332" id=e99f4e55-1aec-4637-a662-ca7ef72fa1ba name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:36:38 ip-10-0-136-68 systemd[1]: run-utsns-3e1beed1\x2d745a\x2d4db0\x2daaa5\x2dcafbbee38e99.mount: Deactivated successfully. Feb 23 19:36:38 ip-10-0-136-68 systemd[1]: run-ipcns-3e1beed1\x2d745a\x2d4db0\x2daaa5\x2dcafbbee38e99.mount: Deactivated successfully. Feb 23 19:36:38 ip-10-0-136-68 systemd[1]: run-netns-3e1beed1\x2d745a\x2d4db0\x2daaa5\x2dcafbbee38e99.mount: Deactivated successfully. Feb 23 19:36:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:38.284339622Z" level=info msg="runSandbox: deleting pod ID 4a2cff579fb0efe5446a94dc4d33892a38f70ccc89f0fe0bdef6e9c68d080332 from idIndex" id=e99f4e55-1aec-4637-a662-ca7ef72fa1ba name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:36:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:38.284375103Z" level=info msg="runSandbox: removing pod sandbox 4a2cff579fb0efe5446a94dc4d33892a38f70ccc89f0fe0bdef6e9c68d080332" id=e99f4e55-1aec-4637-a662-ca7ef72fa1ba name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:36:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:38.284408396Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 4a2cff579fb0efe5446a94dc4d33892a38f70ccc89f0fe0bdef6e9c68d080332" id=e99f4e55-1aec-4637-a662-ca7ef72fa1ba name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:36:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:38.284421407Z" level=info msg="runSandbox: unmounting shmPath for sandbox 4a2cff579fb0efe5446a94dc4d33892a38f70ccc89f0fe0bdef6e9c68d080332" id=e99f4e55-1aec-4637-a662-ca7ef72fa1ba name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:36:38 
ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-4a2cff579fb0efe5446a94dc4d33892a38f70ccc89f0fe0bdef6e9c68d080332-userdata-shm.mount: Deactivated successfully. Feb 23 19:36:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:38.289312981Z" level=info msg="runSandbox: removing pod sandbox from storage: 4a2cff579fb0efe5446a94dc4d33892a38f70ccc89f0fe0bdef6e9c68d080332" id=e99f4e55-1aec-4637-a662-ca7ef72fa1ba name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:36:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:38.290866215Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=e99f4e55-1aec-4637-a662-ca7ef72fa1ba name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:36:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:38.290895815Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=e99f4e55-1aec-4637-a662-ca7ef72fa1ba name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:36:38 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:36:38.291079 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(4a2cff579fb0efe5446a94dc4d33892a38f70ccc89f0fe0bdef6e9c68d080332): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:36:38 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:36:38.291127 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(4a2cff579fb0efe5446a94dc4d33892a38f70ccc89f0fe0bdef6e9c68d080332): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:36:38 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:36:38.291150 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(4a2cff579fb0efe5446a94dc4d33892a38f70ccc89f0fe0bdef6e9c68d080332): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:36:38 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:36:38.291201 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(4a2cff579fb0efe5446a94dc4d33892a38f70ccc89f0fe0bdef6e9c68d080332): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 19:36:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:42.247064083Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(728eb0b0254c773334ed1d252e98c6e6ad8a4ddf215786a216f172242ebfdef3): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=bea10902-203d-4d7d-beaf-8cdb3c458582 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:36:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:42.247111276Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 728eb0b0254c773334ed1d252e98c6e6ad8a4ddf215786a216f172242ebfdef3" id=bea10902-203d-4d7d-beaf-8cdb3c458582 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:36:42 ip-10-0-136-68 systemd[1]: run-utsns-e735c3fa\x2d0a51\x2d46e5\x2da966\x2d36fa7b59ea52.mount: Deactivated successfully. Feb 23 19:36:42 ip-10-0-136-68 systemd[1]: run-ipcns-e735c3fa\x2d0a51\x2d46e5\x2da966\x2d36fa7b59ea52.mount: Deactivated successfully. Feb 23 19:36:42 ip-10-0-136-68 systemd[1]: run-netns-e735c3fa\x2d0a51\x2d46e5\x2da966\x2d36fa7b59ea52.mount: Deactivated successfully. 
Feb 23 19:36:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:42.263317940Z" level=info msg="runSandbox: deleting pod ID 728eb0b0254c773334ed1d252e98c6e6ad8a4ddf215786a216f172242ebfdef3 from idIndex" id=bea10902-203d-4d7d-beaf-8cdb3c458582 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:36:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:42.263356429Z" level=info msg="runSandbox: removing pod sandbox 728eb0b0254c773334ed1d252e98c6e6ad8a4ddf215786a216f172242ebfdef3" id=bea10902-203d-4d7d-beaf-8cdb3c458582 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:36:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:42.263384116Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 728eb0b0254c773334ed1d252e98c6e6ad8a4ddf215786a216f172242ebfdef3" id=bea10902-203d-4d7d-beaf-8cdb3c458582 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:36:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:42.263400124Z" level=info msg="runSandbox: unmounting shmPath for sandbox 728eb0b0254c773334ed1d252e98c6e6ad8a4ddf215786a216f172242ebfdef3" id=bea10902-203d-4d7d-beaf-8cdb3c458582 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:36:42 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-728eb0b0254c773334ed1d252e98c6e6ad8a4ddf215786a216f172242ebfdef3-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:36:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:42.271302696Z" level=info msg="runSandbox: removing pod sandbox from storage: 728eb0b0254c773334ed1d252e98c6e6ad8a4ddf215786a216f172242ebfdef3" id=bea10902-203d-4d7d-beaf-8cdb3c458582 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:36:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:42.272800104Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=bea10902-203d-4d7d-beaf-8cdb3c458582 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:36:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:42.272834578Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=bea10902-203d-4d7d-beaf-8cdb3c458582 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:36:42 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:36:42.273058 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(728eb0b0254c773334ed1d252e98c6e6ad8a4ddf215786a216f172242ebfdef3): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:36:42 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:36:42.273118 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(728eb0b0254c773334ed1d252e98c6e6ad8a4ddf215786a216f172242ebfdef3): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:36:42 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:36:42.273156 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(728eb0b0254c773334ed1d252e98c6e6ad8a4ddf215786a216f172242ebfdef3): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:36:42 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:36:42.273213 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(728eb0b0254c773334ed1d252e98c6e6ad8a4ddf215786a216f172242ebfdef3): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 19:36:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:36:44.872665 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:36:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:36:44.872731 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:36:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:36:44.872758 2199 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 19:36:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:36:44.873351 2199 kuberuntime_manager.go:659] "Message for Container of pod" containerName="csi-driver" containerStatusID={Type:cri-o ID:990f4a14943bf48e2bb5cc6a85e76c0a8463838917b5f6b6f240f17e26ba9113} pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" containerMessage="Container csi-driver failed liveness probe, will be restarted" Feb 23 19:36:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:36:44.873517 2199 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" containerID="cri-o://990f4a14943bf48e2bb5cc6a85e76c0a8463838917b5f6b6f240f17e26ba9113" gracePeriod=30 Feb 23 19:36:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:44.873772635Z" 
level=info msg="Stopping container: 990f4a14943bf48e2bb5cc6a85e76c0a8463838917b5f6b6f240f17e26ba9113 (timeout: 30s)" id=df2a6c79-f7e6-44e4-9637-a19340a77fb0 name=/runtime.v1.RuntimeService/StopContainer Feb 23 19:36:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:48.246033682Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(cfbd51922ba70de4f13f2e12216a19519da4654aa051a94232bc425c575b4375): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=044ed0dc-a8d5-42cf-a5a7-662290bc1c1d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:36:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:48.246082793Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox cfbd51922ba70de4f13f2e12216a19519da4654aa051a94232bc425c575b4375" id=044ed0dc-a8d5-42cf-a5a7-662290bc1c1d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:36:48 ip-10-0-136-68 systemd[1]: run-utsns-7a8d8d0e\x2dd8ba\x2d4105\x2dbebf\x2d2482d35c0969.mount: Deactivated successfully. Feb 23 19:36:48 ip-10-0-136-68 systemd[1]: run-ipcns-7a8d8d0e\x2dd8ba\x2d4105\x2dbebf\x2d2482d35c0969.mount: Deactivated successfully. Feb 23 19:36:48 ip-10-0-136-68 systemd[1]: run-netns-7a8d8d0e\x2dd8ba\x2d4105\x2dbebf\x2d2482d35c0969.mount: Deactivated successfully. 
Feb 23 19:36:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:48.276324539Z" level=info msg="runSandbox: deleting pod ID cfbd51922ba70de4f13f2e12216a19519da4654aa051a94232bc425c575b4375 from idIndex" id=044ed0dc-a8d5-42cf-a5a7-662290bc1c1d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:36:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:48.276365597Z" level=info msg="runSandbox: removing pod sandbox cfbd51922ba70de4f13f2e12216a19519da4654aa051a94232bc425c575b4375" id=044ed0dc-a8d5-42cf-a5a7-662290bc1c1d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:36:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:48.276409283Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox cfbd51922ba70de4f13f2e12216a19519da4654aa051a94232bc425c575b4375" id=044ed0dc-a8d5-42cf-a5a7-662290bc1c1d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:36:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:48.276428797Z" level=info msg="runSandbox: unmounting shmPath for sandbox cfbd51922ba70de4f13f2e12216a19519da4654aa051a94232bc425c575b4375" id=044ed0dc-a8d5-42cf-a5a7-662290bc1c1d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:36:48 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-cfbd51922ba70de4f13f2e12216a19519da4654aa051a94232bc425c575b4375-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:36:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:48.282300651Z" level=info msg="runSandbox: removing pod sandbox from storage: cfbd51922ba70de4f13f2e12216a19519da4654aa051a94232bc425c575b4375" id=044ed0dc-a8d5-42cf-a5a7-662290bc1c1d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:36:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:48.283844614Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=044ed0dc-a8d5-42cf-a5a7-662290bc1c1d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:36:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:48.283879631Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=044ed0dc-a8d5-42cf-a5a7-662290bc1c1d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:36:48 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:36:48.284059 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(cfbd51922ba70de4f13f2e12216a19519da4654aa051a94232bc425c575b4375): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:36:48 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:36:48.284113 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(cfbd51922ba70de4f13f2e12216a19519da4654aa051a94232bc425c575b4375): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:36:48 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:36:48.284136 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(cfbd51922ba70de4f13f2e12216a19519da4654aa051a94232bc425c575b4375): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:36:48 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:36:48.284197 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(cfbd51922ba70de4f13f2e12216a19519da4654aa051a94232bc425c575b4375): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 19:36:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:48.635143127Z" level=warning msg="Failed to find container exit file for 990f4a14943bf48e2bb5cc6a85e76c0a8463838917b5f6b6f240f17e26ba9113: timed out waiting for the condition" id=df2a6c79-f7e6-44e4-9637-a19340a77fb0 name=/runtime.v1.RuntimeService/StopContainer Feb 23 19:36:48 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-c19434be0682150e09f7d42becc651bc2377b148f439b0453e53d721dbf02aa8-merged.mount: Deactivated successfully. 
Feb 23 19:36:49 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:36:49.216629 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:36:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:49.217003931Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=1ab24ac6-2628-405d-8278-e2ba39a41d95 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:36:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:49.217117762Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:36:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:49.222428183Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:618e8c61e66b117f4a4c344770ac8e4c5f1822e022c8c0ca53324c68f63e0065 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/77e97191-c9fe-426b-b4cd-53bcebf88629 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:36:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:49.222451929Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:36:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:52.404016703Z" level=warning msg="Failed to find container exit file for 990f4a14943bf48e2bb5cc6a85e76c0a8463838917b5f6b6f240f17e26ba9113: timed out waiting for the condition" id=df2a6c79-f7e6-44e4-9637-a19340a77fb0 name=/runtime.v1.RuntimeService/StopContainer Feb 23 19:36:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:52.405750765Z" level=info msg="Stopped container 990f4a14943bf48e2bb5cc6a85e76c0a8463838917b5f6b6f240f17e26ba9113: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=df2a6c79-f7e6-44e4-9637-a19340a77fb0 name=/runtime.v1.RuntimeService/StopContainer Feb 23 19:36:52 ip-10-0-136-68 
kubenswrapper[2199]: E0223 19:36:52.406358 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:36:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:52.708985436Z" level=warning msg="Failed to find container exit file for 990f4a14943bf48e2bb5cc6a85e76c0a8463838917b5f6b6f240f17e26ba9113: timed out waiting for the condition" id=72b300f3-99bc-4929-a80c-7608c240d663 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:36:53 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:36:53.217357 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 19:36:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:53.217749065Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=56209913-3a61-4c7d-9bef-0a6b2f3bde7e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:36:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:53.217805814Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:36:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:53.222861910Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:a8a8f43ff940ad65c220f84e17de59a030831dbd8cccafd3f72c22c547ca8e39 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/241497f0-96a8-42b8-8cc0-b6430e108201 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:36:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:53.222886263Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network 
\"multus-cni-network\" (type=multus)" Feb 23 19:36:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:36:56.292503 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:36:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:36:56.292784 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:36:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:36:56.292996 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:36:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:36:56.293025 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not 
found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:36:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:36:56.460010749Z" level=warning msg="Failed to find container exit file for 58a83a2ef3cfae2389bb16d39319c61c62ca2eb5970ef07fd9324b34c5d5a5ec: timed out waiting for the condition" id=b0288ef7-a432-4458-8158-ba5cb190effb name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:36:56 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:36:56.460829 2199 generic.go:332] "Generic (PLEG): container finished" podID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerID="990f4a14943bf48e2bb5cc6a85e76c0a8463838917b5f6b6f240f17e26ba9113" exitCode=-1 Feb 23 19:36:56 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:36:56.460865 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerDied Data:990f4a14943bf48e2bb5cc6a85e76c0a8463838917b5f6b6f240f17e26ba9113} Feb 23 19:36:56 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:36:56.460888 2199 scope.go:115] "RemoveContainer" containerID="58a83a2ef3cfae2389bb16d39319c61c62ca2eb5970ef07fd9324b34c5d5a5ec" Feb 23 19:36:57 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:36:57.463326 2199 scope.go:115] "RemoveContainer" containerID="990f4a14943bf48e2bb5cc6a85e76c0a8463838917b5f6b6f240f17e26ba9113" Feb 23 19:36:57 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:36:57.463713 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:37:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 
19:37:00.221271105Z" level=warning msg="Failed to find container exit file for 58a83a2ef3cfae2389bb16d39319c61c62ca2eb5970ef07fd9324b34c5d5a5ec: timed out waiting for the condition" id=4fcf866c-f0e5-44b9-a2a1-7c87f007f718 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:37:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:37:03.969903920Z" level=warning msg="Failed to find container exit file for 58a83a2ef3cfae2389bb16d39319c61c62ca2eb5970ef07fd9324b34c5d5a5ec: timed out waiting for the condition" id=9df26e5a-1dec-4aac-a80a-24600866e0d9 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:37:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:37:03.970462226Z" level=info msg="Removing container: 58a83a2ef3cfae2389bb16d39319c61c62ca2eb5970ef07fd9324b34c5d5a5ec" id=c4e6c7f3-0542-4d80-9af7-c71b4a6d8527 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 19:37:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:37:04.217031 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:37:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:37:04.217408698Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=c79beee0-65fa-43ec-8fea-f956b2aa6de2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:37:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:37:04.217469046Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:37:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:37:04.223120222Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:dfc9d7931fbaa5b94c2d5abe37cccab3d90fd856a04710b73feb8c5ba3568a3e UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/d271976e-cfef-4be3-8fdc-320bc3cf623c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] 
Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:37:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:37:04.223146901Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:37:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:37:07.730204259Z" level=warning msg="Failed to find container exit file for 58a83a2ef3cfae2389bb16d39319c61c62ca2eb5970ef07fd9324b34c5d5a5ec: timed out waiting for the condition" id=c4e6c7f3-0542-4d80-9af7-c71b4a6d8527 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 19:37:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:37:07.744065505Z" level=info msg="Removed container 58a83a2ef3cfae2389bb16d39319c61c62ca2eb5970ef07fd9324b34c5d5a5ec: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=c4e6c7f3-0542-4d80-9af7-c71b4a6d8527 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 19:37:11 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:37:11.216759 2199 scope.go:115] "RemoveContainer" containerID="990f4a14943bf48e2bb5cc6a85e76c0a8463838917b5f6b6f240f17e26ba9113" Feb 23 19:37:11 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:37:11.217148 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:37:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:37:12.240942767Z" level=warning msg="Failed to find container exit file for 990f4a14943bf48e2bb5cc6a85e76c0a8463838917b5f6b6f240f17e26ba9113: timed out waiting for the condition" id=17c21d7a-4aae-4446-b797-1d1f6526f8f2 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:37:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 
19:37:15.236486483Z" level=info msg="NetworkStart: stopping network for sandbox dc4a0f8ccecd377bc2760c91c4a803b360f1d0b9cb38c23bdcb2d582064b99dc" id=1240b0c8-97d5-4de6-9ea6-74bd394b774a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:37:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:37:15.236607681Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:dc4a0f8ccecd377bc2760c91c4a803b360f1d0b9cb38c23bdcb2d582064b99dc UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/4b192fa0-0642-44f8-825b-4fc546af5487 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:37:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:37:15.236639121Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:37:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:37:15.236649512Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:37:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:37:15.236656063Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:37:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:37:18.247406586Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(ff1837792ab3c69440f035f1e0df437eaed7e6030f0c94359323ffea9520a573): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" 
id=b765ab4a-9380-44e0-ba1a-484befe7534c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:37:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:37:18.247455784Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox ff1837792ab3c69440f035f1e0df437eaed7e6030f0c94359323ffea9520a573" id=b765ab4a-9380-44e0-ba1a-484befe7534c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:37:18 ip-10-0-136-68 systemd[1]: run-utsns-ab4ddcfc\x2d8329\x2d4b93\x2dbf1e\x2d409c8af55bd1.mount: Deactivated successfully. Feb 23 19:37:18 ip-10-0-136-68 systemd[1]: run-ipcns-ab4ddcfc\x2d8329\x2d4b93\x2dbf1e\x2d409c8af55bd1.mount: Deactivated successfully. Feb 23 19:37:18 ip-10-0-136-68 systemd[1]: run-netns-ab4ddcfc\x2d8329\x2d4b93\x2dbf1e\x2d409c8af55bd1.mount: Deactivated successfully. Feb 23 19:37:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:37:18.268323852Z" level=info msg="runSandbox: deleting pod ID ff1837792ab3c69440f035f1e0df437eaed7e6030f0c94359323ffea9520a573 from idIndex" id=b765ab4a-9380-44e0-ba1a-484befe7534c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:37:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:37:18.268363569Z" level=info msg="runSandbox: removing pod sandbox ff1837792ab3c69440f035f1e0df437eaed7e6030f0c94359323ffea9520a573" id=b765ab4a-9380-44e0-ba1a-484befe7534c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:37:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:37:18.268396301Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox ff1837792ab3c69440f035f1e0df437eaed7e6030f0c94359323ffea9520a573" id=b765ab4a-9380-44e0-ba1a-484befe7534c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:37:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:37:18.268414455Z" level=info msg="runSandbox: unmounting shmPath for sandbox ff1837792ab3c69440f035f1e0df437eaed7e6030f0c94359323ffea9520a573" id=b765ab4a-9380-44e0-ba1a-484befe7534c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:37:18 
ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-ff1837792ab3c69440f035f1e0df437eaed7e6030f0c94359323ffea9520a573-userdata-shm.mount: Deactivated successfully. Feb 23 19:37:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:37:18.274317920Z" level=info msg="runSandbox: removing pod sandbox from storage: ff1837792ab3c69440f035f1e0df437eaed7e6030f0c94359323ffea9520a573" id=b765ab4a-9380-44e0-ba1a-484befe7534c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:37:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:37:18.275813643Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=b765ab4a-9380-44e0-ba1a-484befe7534c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:37:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:37:18.275841926Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=b765ab4a-9380-44e0-ba1a-484befe7534c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:37:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:37:18.276002 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(ff1837792ab3c69440f035f1e0df437eaed7e6030f0c94359323ffea9520a573): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:37:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:37:18.276051 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(ff1837792ab3c69440f035f1e0df437eaed7e6030f0c94359323ffea9520a573): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:37:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:37:18.276074 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(ff1837792ab3c69440f035f1e0df437eaed7e6030f0c94359323ffea9520a573): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:37:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:37:18.276139 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(ff1837792ab3c69440f035f1e0df437eaed7e6030f0c94359323ffea9520a573): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 19:37:25 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:37:25.217412 2199 scope.go:115] "RemoveContainer" containerID="990f4a14943bf48e2bb5cc6a85e76c0a8463838917b5f6b6f240f17e26ba9113" Feb 23 19:37:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:37:25.218008 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:37:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:37:26.292477 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:37:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:37:26.292725 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:37:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:37:26.292944 2199 remote_runtime.go:479] 
"ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:37:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:37:26.292983 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:37:29 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:37:29.217607 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:37:29 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:37:29.217894 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f 
/etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:37:29 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:37:29.218131 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:37:29 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:37:29.218157 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:37:33 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:37:33.217023 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:37:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:37:33.217483496Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=a06dfabe-8712-42da-8fab-9cf17e288c03 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:37:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:37:33.217542825Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:37:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:37:33.222672157Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:abb5c0fd78aa80d446b6fb3822ec267dcf3904099fb3321a89d888b9980be02e UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/19e7f057-6f69-4f0c-8e1f-9837b912ebfb Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:37:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:37:33.222699822Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:37:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:37:34.233519654Z" level=info msg="NetworkStart: stopping network for sandbox 618e8c61e66b117f4a4c344770ac8e4c5f1822e022c8c0ca53324c68f63e0065" id=1ab24ac6-2628-405d-8278-e2ba39a41d95 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:37:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:37:34.233645236Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:618e8c61e66b117f4a4c344770ac8e4c5f1822e022c8c0ca53324c68f63e0065 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/77e97191-c9fe-426b-b4cd-53bcebf88629 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:37:34 
ip-10-0-136-68 crio[2158]: time="2023-02-23 19:37:34.233682434Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:37:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:37:34.233694058Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:37:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:37:34.233704347Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:37:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:37:38.233598616Z" level=info msg="NetworkStart: stopping network for sandbox a8a8f43ff940ad65c220f84e17de59a030831dbd8cccafd3f72c22c547ca8e39" id=56209913-3a61-4c7d-9bef-0a6b2f3bde7e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:37:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:37:38.233709337Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:a8a8f43ff940ad65c220f84e17de59a030831dbd8cccafd3f72c22c547ca8e39 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/241497f0-96a8-42b8-8cc0-b6430e108201 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:37:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:37:38.233737447Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:37:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:37:38.233744345Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:37:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:37:38.233750647Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:37:40 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:37:40.216743 2199 scope.go:115] "RemoveContainer" 
containerID="990f4a14943bf48e2bb5cc6a85e76c0a8463838917b5f6b6f240f17e26ba9113" Feb 23 19:37:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:37:40.217345 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:37:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:37:49.234683842Z" level=info msg="NetworkStart: stopping network for sandbox dfc9d7931fbaa5b94c2d5abe37cccab3d90fd856a04710b73feb8c5ba3568a3e" id=c79beee0-65fa-43ec-8fea-f956b2aa6de2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:37:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:37:49.234810837Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:dfc9d7931fbaa5b94c2d5abe37cccab3d90fd856a04710b73feb8c5ba3568a3e UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/d271976e-cfef-4be3-8fdc-320bc3cf623c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:37:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:37:49.234838439Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:37:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:37:49.234846150Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:37:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:37:49.234853051Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:37:52 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:37:52.217431 2199 scope.go:115] 
"RemoveContainer" containerID="990f4a14943bf48e2bb5cc6a85e76c0a8463838917b5f6b6f240f17e26ba9113" Feb 23 19:37:52 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:37:52.217853 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:37:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:37:56.292543 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:37:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:37:56.292856 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:37:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:37:56.293095 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or 
directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:37:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:37:56.293128 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:38:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:00.245721896Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(dc4a0f8ccecd377bc2760c91c4a803b360f1d0b9cb38c23bdcb2d582064b99dc): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=1240b0c8-97d5-4de6-9ea6-74bd394b774a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:38:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:00.245771045Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox dc4a0f8ccecd377bc2760c91c4a803b360f1d0b9cb38c23bdcb2d582064b99dc" id=1240b0c8-97d5-4de6-9ea6-74bd394b774a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:38:00 ip-10-0-136-68 systemd[1]: run-utsns-4b192fa0\x2d0642\x2d44f8\x2d825b\x2d4fc546af5487.mount: Deactivated successfully. 
Feb 23 19:38:00 ip-10-0-136-68 systemd[1]: run-ipcns-4b192fa0\x2d0642\x2d44f8\x2d825b\x2d4fc546af5487.mount: Deactivated successfully. Feb 23 19:38:00 ip-10-0-136-68 systemd[1]: run-netns-4b192fa0\x2d0642\x2d44f8\x2d825b\x2d4fc546af5487.mount: Deactivated successfully. Feb 23 19:38:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:00.273338099Z" level=info msg="runSandbox: deleting pod ID dc4a0f8ccecd377bc2760c91c4a803b360f1d0b9cb38c23bdcb2d582064b99dc from idIndex" id=1240b0c8-97d5-4de6-9ea6-74bd394b774a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:38:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:00.273377763Z" level=info msg="runSandbox: removing pod sandbox dc4a0f8ccecd377bc2760c91c4a803b360f1d0b9cb38c23bdcb2d582064b99dc" id=1240b0c8-97d5-4de6-9ea6-74bd394b774a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:38:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:00.273421653Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox dc4a0f8ccecd377bc2760c91c4a803b360f1d0b9cb38c23bdcb2d582064b99dc" id=1240b0c8-97d5-4de6-9ea6-74bd394b774a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:38:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:00.273437819Z" level=info msg="runSandbox: unmounting shmPath for sandbox dc4a0f8ccecd377bc2760c91c4a803b360f1d0b9cb38c23bdcb2d582064b99dc" id=1240b0c8-97d5-4de6-9ea6-74bd394b774a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:38:00 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-dc4a0f8ccecd377bc2760c91c4a803b360f1d0b9cb38c23bdcb2d582064b99dc-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:38:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:00.278307251Z" level=info msg="runSandbox: removing pod sandbox from storage: dc4a0f8ccecd377bc2760c91c4a803b360f1d0b9cb38c23bdcb2d582064b99dc" id=1240b0c8-97d5-4de6-9ea6-74bd394b774a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:38:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:00.279867415Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=1240b0c8-97d5-4de6-9ea6-74bd394b774a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:38:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:00.279904779Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=1240b0c8-97d5-4de6-9ea6-74bd394b774a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:38:00 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:38:00.280124 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(dc4a0f8ccecd377bc2760c91c4a803b360f1d0b9cb38c23bdcb2d582064b99dc): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:38:00 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:38:00.280191 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(dc4a0f8ccecd377bc2760c91c4a803b360f1d0b9cb38c23bdcb2d582064b99dc): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:38:00 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:38:00.280233 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(dc4a0f8ccecd377bc2760c91c4a803b360f1d0b9cb38c23bdcb2d582064b99dc): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:38:00 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:38:00.280339 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(dc4a0f8ccecd377bc2760c91c4a803b360f1d0b9cb38c23bdcb2d582064b99dc): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 19:38:06 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:38:06.217519 2199 scope.go:115] "RemoveContainer" containerID="990f4a14943bf48e2bb5cc6a85e76c0a8463838917b5f6b6f240f17e26ba9113" Feb 23 19:38:06 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:38:06.218152 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:38:15 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:38:15.217341 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:38:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:15.217771823Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=30727119-e3af-49c7-921b-d7d7747761fa name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:38:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:15.217838527Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:38:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:15.227501900Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:a0d2d1c20cf0e4df39e8f2c2bc6832f0c01ae0a832367b938e81dce2b2dd55e1 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/88b78cb9-c02d-4ac1-9fb9-a2f8acbb90b6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:38:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:15.227536500Z" 
level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:38:18 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:38:18.216844 2199 scope.go:115] "RemoveContainer" containerID="990f4a14943bf48e2bb5cc6a85e76c0a8463838917b5f6b6f240f17e26ba9113" Feb 23 19:38:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:38:18.217463 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:38:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:18.234436207Z" level=info msg="NetworkStart: stopping network for sandbox abb5c0fd78aa80d446b6fb3822ec267dcf3904099fb3321a89d888b9980be02e" id=a06dfabe-8712-42da-8fab-9cf17e288c03 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:38:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:18.234564755Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:abb5c0fd78aa80d446b6fb3822ec267dcf3904099fb3321a89d888b9980be02e UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/19e7f057-6f69-4f0c-8e1f-9837b912ebfb Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:38:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:18.234604644Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:38:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:18.234615933Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:38:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:18.234626488Z" level=info 
msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:38:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:19.242782262Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(618e8c61e66b117f4a4c344770ac8e4c5f1822e022c8c0ca53324c68f63e0065): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=1ab24ac6-2628-405d-8278-e2ba39a41d95 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:38:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:19.242839477Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 618e8c61e66b117f4a4c344770ac8e4c5f1822e022c8c0ca53324c68f63e0065" id=1ab24ac6-2628-405d-8278-e2ba39a41d95 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:38:19 ip-10-0-136-68 systemd[1]: run-utsns-77e97191\x2dc9fe\x2d426b\x2db4cd\x2d53bcebf88629.mount: Deactivated successfully. Feb 23 19:38:19 ip-10-0-136-68 systemd[1]: run-ipcns-77e97191\x2dc9fe\x2d426b\x2db4cd\x2d53bcebf88629.mount: Deactivated successfully. Feb 23 19:38:19 ip-10-0-136-68 systemd[1]: run-netns-77e97191\x2dc9fe\x2d426b\x2db4cd\x2d53bcebf88629.mount: Deactivated successfully. 
Feb 23 19:38:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:19.259338642Z" level=info msg="runSandbox: deleting pod ID 618e8c61e66b117f4a4c344770ac8e4c5f1822e022c8c0ca53324c68f63e0065 from idIndex" id=1ab24ac6-2628-405d-8278-e2ba39a41d95 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:38:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:19.259383269Z" level=info msg="runSandbox: removing pod sandbox 618e8c61e66b117f4a4c344770ac8e4c5f1822e022c8c0ca53324c68f63e0065" id=1ab24ac6-2628-405d-8278-e2ba39a41d95 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:38:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:19.259429705Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 618e8c61e66b117f4a4c344770ac8e4c5f1822e022c8c0ca53324c68f63e0065" id=1ab24ac6-2628-405d-8278-e2ba39a41d95 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:38:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:19.259444096Z" level=info msg="runSandbox: unmounting shmPath for sandbox 618e8c61e66b117f4a4c344770ac8e4c5f1822e022c8c0ca53324c68f63e0065" id=1ab24ac6-2628-405d-8278-e2ba39a41d95 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:38:19 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-618e8c61e66b117f4a4c344770ac8e4c5f1822e022c8c0ca53324c68f63e0065-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:38:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:19.265320924Z" level=info msg="runSandbox: removing pod sandbox from storage: 618e8c61e66b117f4a4c344770ac8e4c5f1822e022c8c0ca53324c68f63e0065" id=1ab24ac6-2628-405d-8278-e2ba39a41d95 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:38:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:19.266901128Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=1ab24ac6-2628-405d-8278-e2ba39a41d95 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:38:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:19.266930849Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=1ab24ac6-2628-405d-8278-e2ba39a41d95 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:38:19 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:38:19.267189 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(618e8c61e66b117f4a4c344770ac8e4c5f1822e022c8c0ca53324c68f63e0065): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:38:19 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:38:19.267281 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(618e8c61e66b117f4a4c344770ac8e4c5f1822e022c8c0ca53324c68f63e0065): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:38:19 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:38:19.267322 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(618e8c61e66b117f4a4c344770ac8e4c5f1822e022c8c0ca53324c68f63e0065): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:38:19 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:38:19.267402 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(618e8c61e66b117f4a4c344770ac8e4c5f1822e022c8c0ca53324c68f63e0065): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 19:38:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:23.243286425Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(a8a8f43ff940ad65c220f84e17de59a030831dbd8cccafd3f72c22c547ca8e39): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=56209913-3a61-4c7d-9bef-0a6b2f3bde7e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:38:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:23.243333799Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a8a8f43ff940ad65c220f84e17de59a030831dbd8cccafd3f72c22c547ca8e39" id=56209913-3a61-4c7d-9bef-0a6b2f3bde7e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:38:23 ip-10-0-136-68 systemd[1]: run-utsns-241497f0\x2d96a8\x2d42b8\x2d8cc0\x2db6430e108201.mount: Deactivated successfully. Feb 23 19:38:23 ip-10-0-136-68 systemd[1]: run-ipcns-241497f0\x2d96a8\x2d42b8\x2d8cc0\x2db6430e108201.mount: Deactivated successfully. Feb 23 19:38:23 ip-10-0-136-68 systemd[1]: run-netns-241497f0\x2d96a8\x2d42b8\x2d8cc0\x2db6430e108201.mount: Deactivated successfully. 
Feb 23 19:38:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:23.265330780Z" level=info msg="runSandbox: deleting pod ID a8a8f43ff940ad65c220f84e17de59a030831dbd8cccafd3f72c22c547ca8e39 from idIndex" id=56209913-3a61-4c7d-9bef-0a6b2f3bde7e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:38:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:23.265373818Z" level=info msg="runSandbox: removing pod sandbox a8a8f43ff940ad65c220f84e17de59a030831dbd8cccafd3f72c22c547ca8e39" id=56209913-3a61-4c7d-9bef-0a6b2f3bde7e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:38:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:23.265407014Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a8a8f43ff940ad65c220f84e17de59a030831dbd8cccafd3f72c22c547ca8e39" id=56209913-3a61-4c7d-9bef-0a6b2f3bde7e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:38:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:23.265419865Z" level=info msg="runSandbox: unmounting shmPath for sandbox a8a8f43ff940ad65c220f84e17de59a030831dbd8cccafd3f72c22c547ca8e39" id=56209913-3a61-4c7d-9bef-0a6b2f3bde7e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:38:23 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-a8a8f43ff940ad65c220f84e17de59a030831dbd8cccafd3f72c22c547ca8e39-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:38:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:23.271315783Z" level=info msg="runSandbox: removing pod sandbox from storage: a8a8f43ff940ad65c220f84e17de59a030831dbd8cccafd3f72c22c547ca8e39" id=56209913-3a61-4c7d-9bef-0a6b2f3bde7e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:38:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:23.272908544Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=56209913-3a61-4c7d-9bef-0a6b2f3bde7e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:38:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:23.272942584Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=56209913-3a61-4c7d-9bef-0a6b2f3bde7e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:38:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:38:23.273176 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(a8a8f43ff940ad65c220f84e17de59a030831dbd8cccafd3f72c22c547ca8e39): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:38:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:38:23.273315 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(a8a8f43ff940ad65c220f84e17de59a030831dbd8cccafd3f72c22c547ca8e39): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:38:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:38:23.273355 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(a8a8f43ff940ad65c220f84e17de59a030831dbd8cccafd3f72c22c547ca8e39): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:38:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:38:23.273441 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(a8a8f43ff940ad65c220f84e17de59a030831dbd8cccafd3f72c22c547ca8e39): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 19:38:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:38:26.292636 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:38:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:38:26.292882 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:38:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:38:26.293169 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:38:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:38:26.293212 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:38:32 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:38:32.216884 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:38:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:32.217387261Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=95515a07-ff48-44ef-aa74-a28eb6af2ecc name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:38:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:32.217445031Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:38:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:32.223429164Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:b9c399b45d8b20dc65c335ae32987edea07ddb2a898b48ccf36892ebef2748d0 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/8274401d-44ba-4801-b1af-d67e091ca4ad Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:38:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:32.223456039Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:38:33 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:38:33.217138 2199 scope.go:115] "RemoveContainer" containerID="990f4a14943bf48e2bb5cc6a85e76c0a8463838917b5f6b6f240f17e26ba9113" Feb 23 19:38:33 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:38:33.217411 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not 
created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:38:33 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:38:33.217701 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:38:33 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:38:33.217774 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:38:33 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:38:33.218042 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:38:33 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:38:33.218068 2199 prober.go:106] "Probe errored" err="rpc error: 
code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:38:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:34.243740167Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(dfc9d7931fbaa5b94c2d5abe37cccab3d90fd856a04710b73feb8c5ba3568a3e): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c79beee0-65fa-43ec-8fea-f956b2aa6de2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:38:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:34.243792708Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox dfc9d7931fbaa5b94c2d5abe37cccab3d90fd856a04710b73feb8c5ba3568a3e" id=c79beee0-65fa-43ec-8fea-f956b2aa6de2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:38:34 ip-10-0-136-68 systemd[1]: run-utsns-d271976e\x2dcfef\x2d4be3\x2d8fdc\x2d320bc3cf623c.mount: Deactivated successfully. Feb 23 19:38:34 ip-10-0-136-68 systemd[1]: run-ipcns-d271976e\x2dcfef\x2d4be3\x2d8fdc\x2d320bc3cf623c.mount: Deactivated successfully. Feb 23 19:38:34 ip-10-0-136-68 systemd[1]: run-netns-d271976e\x2dcfef\x2d4be3\x2d8fdc\x2d320bc3cf623c.mount: Deactivated successfully. 
Feb 23 19:38:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:34.263325833Z" level=info msg="runSandbox: deleting pod ID dfc9d7931fbaa5b94c2d5abe37cccab3d90fd856a04710b73feb8c5ba3568a3e from idIndex" id=c79beee0-65fa-43ec-8fea-f956b2aa6de2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:38:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:34.263361319Z" level=info msg="runSandbox: removing pod sandbox dfc9d7931fbaa5b94c2d5abe37cccab3d90fd856a04710b73feb8c5ba3568a3e" id=c79beee0-65fa-43ec-8fea-f956b2aa6de2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:38:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:34.263388534Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox dfc9d7931fbaa5b94c2d5abe37cccab3d90fd856a04710b73feb8c5ba3568a3e" id=c79beee0-65fa-43ec-8fea-f956b2aa6de2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:38:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:34.263401682Z" level=info msg="runSandbox: unmounting shmPath for sandbox dfc9d7931fbaa5b94c2d5abe37cccab3d90fd856a04710b73feb8c5ba3568a3e" id=c79beee0-65fa-43ec-8fea-f956b2aa6de2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:38:34 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-dfc9d7931fbaa5b94c2d5abe37cccab3d90fd856a04710b73feb8c5ba3568a3e-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:38:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:34.269328230Z" level=info msg="runSandbox: removing pod sandbox from storage: dfc9d7931fbaa5b94c2d5abe37cccab3d90fd856a04710b73feb8c5ba3568a3e" id=c79beee0-65fa-43ec-8fea-f956b2aa6de2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:38:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:34.270908404Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=c79beee0-65fa-43ec-8fea-f956b2aa6de2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:38:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:34.270939634Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=c79beee0-65fa-43ec-8fea-f956b2aa6de2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:38:34 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:38:34.271153 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(dfc9d7931fbaa5b94c2d5abe37cccab3d90fd856a04710b73feb8c5ba3568a3e): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:38:34 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:38:34.271207 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(dfc9d7931fbaa5b94c2d5abe37cccab3d90fd856a04710b73feb8c5ba3568a3e): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:38:34 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:38:34.271236 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(dfc9d7931fbaa5b94c2d5abe37cccab3d90fd856a04710b73feb8c5ba3568a3e): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:38:34 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:38:34.271337 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(dfc9d7931fbaa5b94c2d5abe37cccab3d90fd856a04710b73feb8c5ba3568a3e): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 19:38:38 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:38:38.217169 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 19:38:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:38.217641852Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=9a806cc9-688a-4064-9380-235695944c22 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:38:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:38.217706582Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:38:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:38.223547391Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:8b448733f38a7be8656680ae4cd003631ab38111af5aaf2a918fbab6af47bb1d UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/b6ad2a1e-0b4c-403f-9532-656e4393ff2d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:38:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:38.223582695Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:38:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:38:44.217416 2199 scope.go:115] "RemoveContainer" containerID="990f4a14943bf48e2bb5cc6a85e76c0a8463838917b5f6b6f240f17e26ba9113" Feb 23 19:38:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:38:44.217797 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:38:45 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:38:45.217073 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:38:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:45.217470639Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=83e293d5-7413-409b-8712-1dea5fb0d133 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:38:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:45.217525394Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:38:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:45.222808717Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:92616ff85a899bd99241372e3e651841dc90b04389a9039e12d6957b11645c4a UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/363ad177-e60d-4a38-a8b0-a35319e71d60 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:38:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:38:45.222831202Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:38:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:38:56.292677 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:38:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:38:56.292986 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not 
created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:38:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:38:56.293182 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:38:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:38:56.293212 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:38:59 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:38:59.216996 2199 scope.go:115] "RemoveContainer" containerID="990f4a14943bf48e2bb5cc6a85e76c0a8463838917b5f6b6f240f17e26ba9113" Feb 23 19:38:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:38:59.217402 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" 
podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:39:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:39:00.240629932Z" level=info msg="NetworkStart: stopping network for sandbox a0d2d1c20cf0e4df39e8f2c2bc6832f0c01ae0a832367b938e81dce2b2dd55e1" id=30727119-e3af-49c7-921b-d7d7747761fa name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:39:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:39:00.240744039Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:a0d2d1c20cf0e4df39e8f2c2bc6832f0c01ae0a832367b938e81dce2b2dd55e1 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/88b78cb9-c02d-4ac1-9fb9-a2f8acbb90b6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:39:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:39:00.240773218Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:39:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:39:00.240780611Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:39:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:39:00.240788876Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:39:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:39:03.244686949Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(abb5c0fd78aa80d446b6fb3822ec267dcf3904099fb3321a89d888b9980be02e): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate 
error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a06dfabe-8712-42da-8fab-9cf17e288c03 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:39:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:39:03.244737130Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox abb5c0fd78aa80d446b6fb3822ec267dcf3904099fb3321a89d888b9980be02e" id=a06dfabe-8712-42da-8fab-9cf17e288c03 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:39:03 ip-10-0-136-68 systemd[1]: run-utsns-19e7f057\x2d6f69\x2d4f0c\x2d8e1f\x2d9837b912ebfb.mount: Deactivated successfully. Feb 23 19:39:03 ip-10-0-136-68 systemd[1]: run-ipcns-19e7f057\x2d6f69\x2d4f0c\x2d8e1f\x2d9837b912ebfb.mount: Deactivated successfully. Feb 23 19:39:03 ip-10-0-136-68 systemd[1]: run-netns-19e7f057\x2d6f69\x2d4f0c\x2d8e1f\x2d9837b912ebfb.mount: Deactivated successfully. Feb 23 19:39:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:39:03.266320159Z" level=info msg="runSandbox: deleting pod ID abb5c0fd78aa80d446b6fb3822ec267dcf3904099fb3321a89d888b9980be02e from idIndex" id=a06dfabe-8712-42da-8fab-9cf17e288c03 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:39:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:39:03.266362518Z" level=info msg="runSandbox: removing pod sandbox abb5c0fd78aa80d446b6fb3822ec267dcf3904099fb3321a89d888b9980be02e" id=a06dfabe-8712-42da-8fab-9cf17e288c03 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:39:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:39:03.266392474Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox abb5c0fd78aa80d446b6fb3822ec267dcf3904099fb3321a89d888b9980be02e" id=a06dfabe-8712-42da-8fab-9cf17e288c03 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:39:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:39:03.266404420Z" level=info msg="runSandbox: unmounting shmPath for sandbox abb5c0fd78aa80d446b6fb3822ec267dcf3904099fb3321a89d888b9980be02e" 
id=a06dfabe-8712-42da-8fab-9cf17e288c03 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:39:03 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-abb5c0fd78aa80d446b6fb3822ec267dcf3904099fb3321a89d888b9980be02e-userdata-shm.mount: Deactivated successfully. Feb 23 19:39:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:39:03.271336398Z" level=info msg="runSandbox: removing pod sandbox from storage: abb5c0fd78aa80d446b6fb3822ec267dcf3904099fb3321a89d888b9980be02e" id=a06dfabe-8712-42da-8fab-9cf17e288c03 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:39:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:39:03.272893030Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=a06dfabe-8712-42da-8fab-9cf17e288c03 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:39:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:39:03.272928064Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=a06dfabe-8712-42da-8fab-9cf17e288c03 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:39:03 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:39:03.273160 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(abb5c0fd78aa80d446b6fb3822ec267dcf3904099fb3321a89d888b9980be02e): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Feb 23 19:39:03 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:39:03.273231 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(abb5c0fd78aa80d446b6fb3822ec267dcf3904099fb3321a89d888b9980be02e): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:39:03 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:39:03.273290 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(abb5c0fd78aa80d446b6fb3822ec267dcf3904099fb3321a89d888b9980be02e): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:39:03 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:39:03.273376 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(abb5c0fd78aa80d446b6fb3822ec267dcf3904099fb3321a89d888b9980be02e): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 19:39:11 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:39:11.216986 2199 scope.go:115] "RemoveContainer" containerID="990f4a14943bf48e2bb5cc6a85e76c0a8463838917b5f6b6f240f17e26ba9113" Feb 23 19:39:11 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:39:11.217545 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:39:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:39:17.235695895Z" level=info msg="NetworkStart: stopping network for sandbox b9c399b45d8b20dc65c335ae32987edea07ddb2a898b48ccf36892ebef2748d0" id=95515a07-ff48-44ef-aa74-a28eb6af2ecc name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:39:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:39:17.235842880Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:b9c399b45d8b20dc65c335ae32987edea07ddb2a898b48ccf36892ebef2748d0 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/8274401d-44ba-4801-b1af-d67e091ca4ad Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:39:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:39:17.235886566Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:39:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:39:17.235898549Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:39:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 
19:39:17.235908791Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:39:18 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:39:18.217533 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:39:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:39:18.217968933Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=028f4ece-ad38-4fa9-97f2-34455bf58d62 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:39:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:39:18.218037832Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:39:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:39:18.223426557Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:9b10eb05dd49c9fbb352f169bd9072b29fdbdef83586143d934baecf3a02a482 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/010cd808-07c5-4c9b-9cbb-e59651fb17fc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:39:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:39:18.223453159Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:39:22 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:39:22.216424 2199 scope.go:115] "RemoveContainer" containerID="990f4a14943bf48e2bb5cc6a85e76c0a8463838917b5f6b6f240f17e26ba9113" Feb 23 19:39:22 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:39:22.216808 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver 
pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:39:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:39:23.236608561Z" level=info msg="NetworkStart: stopping network for sandbox 8b448733f38a7be8656680ae4cd003631ab38111af5aaf2a918fbab6af47bb1d" id=9a806cc9-688a-4064-9380-235695944c22 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:39:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:39:23.236742538Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:8b448733f38a7be8656680ae4cd003631ab38111af5aaf2a918fbab6af47bb1d UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/b6ad2a1e-0b4c-403f-9532-656e4393ff2d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:39:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:39:23.236783137Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:39:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:39:23.236795104Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:39:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:39:23.236805493Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:39:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:39:26.291962 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f 
/etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:39:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:39:26.292227 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:39:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:39:26.292472 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:39:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:39:26.292508 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:39:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:39:30.234361077Z" level=info msg="NetworkStart: stopping network for sandbox 92616ff85a899bd99241372e3e651841dc90b04389a9039e12d6957b11645c4a" id=83e293d5-7413-409b-8712-1dea5fb0d133 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:39:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:39:30.234483814Z" 
level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:92616ff85a899bd99241372e3e651841dc90b04389a9039e12d6957b11645c4a UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/363ad177-e60d-4a38-a8b0-a35319e71d60 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:39:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:39:30.234514643Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:39:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:39:30.234526109Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:39:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:39:30.234534787Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:39:37 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:39:37.217095 2199 scope.go:115] "RemoveContainer" containerID="990f4a14943bf48e2bb5cc6a85e76c0a8463838917b5f6b6f240f17e26ba9113" Feb 23 19:39:37 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:39:37.217697 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:39:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:39:45.250475249Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(a0d2d1c20cf0e4df39e8f2c2bc6832f0c01ae0a832367b938e81dce2b2dd55e1): error removing pod 
openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=30727119-e3af-49c7-921b-d7d7747761fa name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:39:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:39:45.250525952Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a0d2d1c20cf0e4df39e8f2c2bc6832f0c01ae0a832367b938e81dce2b2dd55e1" id=30727119-e3af-49c7-921b-d7d7747761fa name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:39:45 ip-10-0-136-68 systemd[1]: run-utsns-88b78cb9\x2dc02d\x2d4ac1\x2d9fb9\x2da2f8acbb90b6.mount: Deactivated successfully. Feb 23 19:39:45 ip-10-0-136-68 systemd[1]: run-ipcns-88b78cb9\x2dc02d\x2d4ac1\x2d9fb9\x2da2f8acbb90b6.mount: Deactivated successfully. Feb 23 19:39:45 ip-10-0-136-68 systemd[1]: run-netns-88b78cb9\x2dc02d\x2d4ac1\x2d9fb9\x2da2f8acbb90b6.mount: Deactivated successfully. 
Feb 23 19:39:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:39:45.273333441Z" level=info msg="runSandbox: deleting pod ID a0d2d1c20cf0e4df39e8f2c2bc6832f0c01ae0a832367b938e81dce2b2dd55e1 from idIndex" id=30727119-e3af-49c7-921b-d7d7747761fa name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:39:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:39:45.273377746Z" level=info msg="runSandbox: removing pod sandbox a0d2d1c20cf0e4df39e8f2c2bc6832f0c01ae0a832367b938e81dce2b2dd55e1" id=30727119-e3af-49c7-921b-d7d7747761fa name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:39:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:39:45.273416820Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a0d2d1c20cf0e4df39e8f2c2bc6832f0c01ae0a832367b938e81dce2b2dd55e1" id=30727119-e3af-49c7-921b-d7d7747761fa name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:39:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:39:45.273435826Z" level=info msg="runSandbox: unmounting shmPath for sandbox a0d2d1c20cf0e4df39e8f2c2bc6832f0c01ae0a832367b938e81dce2b2dd55e1" id=30727119-e3af-49c7-921b-d7d7747761fa name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:39:45 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-a0d2d1c20cf0e4df39e8f2c2bc6832f0c01ae0a832367b938e81dce2b2dd55e1-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:39:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:39:45.285326730Z" level=info msg="runSandbox: removing pod sandbox from storage: a0d2d1c20cf0e4df39e8f2c2bc6832f0c01ae0a832367b938e81dce2b2dd55e1" id=30727119-e3af-49c7-921b-d7d7747761fa name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:39:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:39:45.286882082Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=30727119-e3af-49c7-921b-d7d7747761fa name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:39:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:39:45.286911160Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=30727119-e3af-49c7-921b-d7d7747761fa name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:39:45 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:39:45.287127 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(a0d2d1c20cf0e4df39e8f2c2bc6832f0c01ae0a832367b938e81dce2b2dd55e1): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:39:45 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:39:45.287192 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(a0d2d1c20cf0e4df39e8f2c2bc6832f0c01ae0a832367b938e81dce2b2dd55e1): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:39:45 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:39:45.287232 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(a0d2d1c20cf0e4df39e8f2c2bc6832f0c01ae0a832367b938e81dce2b2dd55e1): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:39:45 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:39:45.287330 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(a0d2d1c20cf0e4df39e8f2c2bc6832f0c01ae0a832367b938e81dce2b2dd55e1): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 19:39:50 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:39:50.217332 2199 scope.go:115] "RemoveContainer" containerID="990f4a14943bf48e2bb5cc6a85e76c0a8463838917b5f6b6f240f17e26ba9113" Feb 23 19:39:50 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:39:50.218031 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:39:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:39:56.292182 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:39:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:39:56.292514 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:39:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:39:56.292746 2199 remote_runtime.go:479] "ExecSync 
cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:39:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:39:56.292778 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:40:00 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:40:00.216924 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:40:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:00.217397376Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=0be220b4-6775-480c-8e3f-a5cebe382c20 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:40:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:00.217463643Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:40:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:00.222858043Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:b0168929d92a9b0a6de73a66b8397bebce8d979175083d8c3f8b6888fe95078f UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/953f7ee6-3a29-4b9a-87b6-25934e17995b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:40:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:00.222886921Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:40:02 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:40:02.217448 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:40:02 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:40:02.217768 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:40:02 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:40:02.217996 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:40:02 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:40:02.218058 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:40:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:02.245949320Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(b9c399b45d8b20dc65c335ae32987edea07ddb2a898b48ccf36892ebef2748d0): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on 
del): timed out waiting for the condition" id=95515a07-ff48-44ef-aa74-a28eb6af2ecc name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:40:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:02.245999819Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox b9c399b45d8b20dc65c335ae32987edea07ddb2a898b48ccf36892ebef2748d0" id=95515a07-ff48-44ef-aa74-a28eb6af2ecc name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:40:02 ip-10-0-136-68 systemd[1]: run-utsns-8274401d\x2d44ba\x2d4801\x2db1af\x2dd67e091ca4ad.mount: Deactivated successfully. Feb 23 19:40:02 ip-10-0-136-68 systemd[1]: run-ipcns-8274401d\x2d44ba\x2d4801\x2db1af\x2dd67e091ca4ad.mount: Deactivated successfully. Feb 23 19:40:02 ip-10-0-136-68 systemd[1]: run-netns-8274401d\x2d44ba\x2d4801\x2db1af\x2dd67e091ca4ad.mount: Deactivated successfully. Feb 23 19:40:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:02.269326303Z" level=info msg="runSandbox: deleting pod ID b9c399b45d8b20dc65c335ae32987edea07ddb2a898b48ccf36892ebef2748d0 from idIndex" id=95515a07-ff48-44ef-aa74-a28eb6af2ecc name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:40:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:02.269362615Z" level=info msg="runSandbox: removing pod sandbox b9c399b45d8b20dc65c335ae32987edea07ddb2a898b48ccf36892ebef2748d0" id=95515a07-ff48-44ef-aa74-a28eb6af2ecc name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:40:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:02.269393828Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox b9c399b45d8b20dc65c335ae32987edea07ddb2a898b48ccf36892ebef2748d0" id=95515a07-ff48-44ef-aa74-a28eb6af2ecc name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:40:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:02.269408401Z" level=info msg="runSandbox: unmounting shmPath for sandbox b9c399b45d8b20dc65c335ae32987edea07ddb2a898b48ccf36892ebef2748d0" id=95515a07-ff48-44ef-aa74-a28eb6af2ecc 
name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:40:02 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-b9c399b45d8b20dc65c335ae32987edea07ddb2a898b48ccf36892ebef2748d0-userdata-shm.mount: Deactivated successfully. Feb 23 19:40:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:02.283320560Z" level=info msg="runSandbox: removing pod sandbox from storage: b9c399b45d8b20dc65c335ae32987edea07ddb2a898b48ccf36892ebef2748d0" id=95515a07-ff48-44ef-aa74-a28eb6af2ecc name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:40:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:02.284887640Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=95515a07-ff48-44ef-aa74-a28eb6af2ecc name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:40:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:02.284920046Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=95515a07-ff48-44ef-aa74-a28eb6af2ecc name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:40:02 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:40:02.285138 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(b9c399b45d8b20dc65c335ae32987edea07ddb2a898b48ccf36892ebef2748d0): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:40:02 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:40:02.285191 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(b9c399b45d8b20dc65c335ae32987edea07ddb2a898b48ccf36892ebef2748d0): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:40:02 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:40:02.285212 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(b9c399b45d8b20dc65c335ae32987edea07ddb2a898b48ccf36892ebef2748d0): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:40:02 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:40:02.285347 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(b9c399b45d8b20dc65c335ae32987edea07ddb2a898b48ccf36892ebef2748d0): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 19:40:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:03.235726200Z" level=info msg="NetworkStart: stopping network for sandbox 9b10eb05dd49c9fbb352f169bd9072b29fdbdef83586143d934baecf3a02a482" id=028f4ece-ad38-4fa9-97f2-34455bf58d62 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:40:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:03.235851110Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:9b10eb05dd49c9fbb352f169bd9072b29fdbdef83586143d934baecf3a02a482 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/010cd808-07c5-4c9b-9cbb-e59651fb17fc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:40:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:03.235881685Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:40:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:03.235892835Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:40:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:03.235899567Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:40:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:40:04.216713 2199 scope.go:115] "RemoveContainer" containerID="990f4a14943bf48e2bb5cc6a85e76c0a8463838917b5f6b6f240f17e26ba9113" Feb 23 19:40:04 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:40:04.217108 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver 
pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:40:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:08.246013177Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(8b448733f38a7be8656680ae4cd003631ab38111af5aaf2a918fbab6af47bb1d): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=9a806cc9-688a-4064-9380-235695944c22 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:40:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:08.246052019Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 8b448733f38a7be8656680ae4cd003631ab38111af5aaf2a918fbab6af47bb1d" id=9a806cc9-688a-4064-9380-235695944c22 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:40:08 ip-10-0-136-68 systemd[1]: run-utsns-b6ad2a1e\x2d0b4c\x2d403f\x2d9532\x2d656e4393ff2d.mount: Deactivated successfully. Feb 23 19:40:08 ip-10-0-136-68 systemd[1]: run-ipcns-b6ad2a1e\x2d0b4c\x2d403f\x2d9532\x2d656e4393ff2d.mount: Deactivated successfully. Feb 23 19:40:08 ip-10-0-136-68 systemd[1]: run-netns-b6ad2a1e\x2d0b4c\x2d403f\x2d9532\x2d656e4393ff2d.mount: Deactivated successfully. 
Feb 23 19:40:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:08.273334612Z" level=info msg="runSandbox: deleting pod ID 8b448733f38a7be8656680ae4cd003631ab38111af5aaf2a918fbab6af47bb1d from idIndex" id=9a806cc9-688a-4064-9380-235695944c22 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:40:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:08.273366167Z" level=info msg="runSandbox: removing pod sandbox 8b448733f38a7be8656680ae4cd003631ab38111af5aaf2a918fbab6af47bb1d" id=9a806cc9-688a-4064-9380-235695944c22 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:40:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:08.273393632Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 8b448733f38a7be8656680ae4cd003631ab38111af5aaf2a918fbab6af47bb1d" id=9a806cc9-688a-4064-9380-235695944c22 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:40:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:08.273412626Z" level=info msg="runSandbox: unmounting shmPath for sandbox 8b448733f38a7be8656680ae4cd003631ab38111af5aaf2a918fbab6af47bb1d" id=9a806cc9-688a-4064-9380-235695944c22 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:40:08 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-8b448733f38a7be8656680ae4cd003631ab38111af5aaf2a918fbab6af47bb1d-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:40:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:08.279328848Z" level=info msg="runSandbox: removing pod sandbox from storage: 8b448733f38a7be8656680ae4cd003631ab38111af5aaf2a918fbab6af47bb1d" id=9a806cc9-688a-4064-9380-235695944c22 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:40:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:08.280868695Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=9a806cc9-688a-4064-9380-235695944c22 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:40:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:08.280904577Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=9a806cc9-688a-4064-9380-235695944c22 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:40:08 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:40:08.281109 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(8b448733f38a7be8656680ae4cd003631ab38111af5aaf2a918fbab6af47bb1d): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:40:08 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:40:08.281158 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(8b448733f38a7be8656680ae4cd003631ab38111af5aaf2a918fbab6af47bb1d): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:40:08 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:40:08.281185 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(8b448733f38a7be8656680ae4cd003631ab38111af5aaf2a918fbab6af47bb1d): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:40:08 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:40:08.281238 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(8b448733f38a7be8656680ae4cd003631ab38111af5aaf2a918fbab6af47bb1d): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 19:40:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:15.243346427Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(92616ff85a899bd99241372e3e651841dc90b04389a9039e12d6957b11645c4a): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=83e293d5-7413-409b-8712-1dea5fb0d133 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:40:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:15.243400058Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 92616ff85a899bd99241372e3e651841dc90b04389a9039e12d6957b11645c4a" id=83e293d5-7413-409b-8712-1dea5fb0d133 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:40:15 ip-10-0-136-68 systemd[1]: run-utsns-363ad177\x2de60d\x2d4a38\x2da8b0\x2da35319e71d60.mount: Deactivated successfully. Feb 23 19:40:15 ip-10-0-136-68 systemd[1]: run-ipcns-363ad177\x2de60d\x2d4a38\x2da8b0\x2da35319e71d60.mount: Deactivated successfully. Feb 23 19:40:15 ip-10-0-136-68 systemd[1]: run-netns-363ad177\x2de60d\x2d4a38\x2da8b0\x2da35319e71d60.mount: Deactivated successfully. 
Feb 23 19:40:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:15.259319151Z" level=info msg="runSandbox: deleting pod ID 92616ff85a899bd99241372e3e651841dc90b04389a9039e12d6957b11645c4a from idIndex" id=83e293d5-7413-409b-8712-1dea5fb0d133 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:40:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:15.259370891Z" level=info msg="runSandbox: removing pod sandbox 92616ff85a899bd99241372e3e651841dc90b04389a9039e12d6957b11645c4a" id=83e293d5-7413-409b-8712-1dea5fb0d133 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:40:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:15.259402480Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 92616ff85a899bd99241372e3e651841dc90b04389a9039e12d6957b11645c4a" id=83e293d5-7413-409b-8712-1dea5fb0d133 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:40:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:15.259415079Z" level=info msg="runSandbox: unmounting shmPath for sandbox 92616ff85a899bd99241372e3e651841dc90b04389a9039e12d6957b11645c4a" id=83e293d5-7413-409b-8712-1dea5fb0d133 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:40:15 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-92616ff85a899bd99241372e3e651841dc90b04389a9039e12d6957b11645c4a-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:40:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:15.265320857Z" level=info msg="runSandbox: removing pod sandbox from storage: 92616ff85a899bd99241372e3e651841dc90b04389a9039e12d6957b11645c4a" id=83e293d5-7413-409b-8712-1dea5fb0d133 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:40:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:15.266921633Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=83e293d5-7413-409b-8712-1dea5fb0d133 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:40:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:15.266957876Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=83e293d5-7413-409b-8712-1dea5fb0d133 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:40:15 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:40:15.267176 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(92616ff85a899bd99241372e3e651841dc90b04389a9039e12d6957b11645c4a): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:40:15 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:40:15.267233 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(92616ff85a899bd99241372e3e651841dc90b04389a9039e12d6957b11645c4a): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:40:15 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:40:15.267301 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(92616ff85a899bd99241372e3e651841dc90b04389a9039e12d6957b11645c4a): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:40:15 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:40:15.267630 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(92616ff85a899bd99241372e3e651841dc90b04389a9039e12d6957b11645c4a): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 19:40:16 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:40:16.216983 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:40:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:16.217441531Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=4d00f8e6-69e0-4624-b06e-1654e32024ab name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:40:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:16.217507789Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:40:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:16.223235518Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:e9eae08bc2ba8157179c8f262e04020fcb0b590d71ab993c7624cdcb7b2877ed UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/e6585e4d-e481-4298-ad29-344a59e868af Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:40:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:16.223300887Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:40:17 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:40:17.217220 2199 scope.go:115] "RemoveContainer" containerID="990f4a14943bf48e2bb5cc6a85e76c0a8463838917b5f6b6f240f17e26ba9113" Feb 23 19:40:17 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:40:17.217815 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:40:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:20.236893547Z" level=info msg="Checking image status: 
registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3" id=4cd1d21b-935b-42a0-b8a4-8db7dec2e50a name=/runtime.v1.ImageService/ImageStatus Feb 23 19:40:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:20.237078296Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:51cb7555d0a960fc0daae74a5b5c47c19fd1ae7bd31e2616ebc88f696f511e4d,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3],Size_:351828271,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=4cd1d21b-935b-42a0-b8a4-8db7dec2e50a name=/runtime.v1.ImageService/ImageStatus Feb 23 19:40:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:40:21.216630 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 19:40:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:21.217202224Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=e6d54d74-9935-452f-8274-5fc4c2b5b1fa name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:40:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:21.217293672Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:40:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:21.223144565Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:40f393caf882dd31d881ca579e2f3b2b8d4f7a7365cabf8bf8d0a93b68327695 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/fab817ba-d664-4c6b-be87-8b2a3149bfee Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:40:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:21.223171131Z" level=info msg="Adding pod 
openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:40:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:40:26.292476 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:40:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:40:26.292703 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:40:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:40:26.292902 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:40:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:40:26.292947 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no 
such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:40:28 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:40:28.216834 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:40:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:28.217295422Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=06905cf9-093c-4c39-968c-663c950e8cb8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:40:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:28.217368649Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:40:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:28.223161397Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:c6df0ea7fddc1ded3838a758580d07bd34532b259502952ba82256faa0ee67cf UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/2ab22ab9-28a9-4ab2-8e5e-f7fd19498286 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:40:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:28.223186795Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:40:30 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:40:30.216707 2199 scope.go:115] "RemoveContainer" containerID="990f4a14943bf48e2bb5cc6a85e76c0a8463838917b5f6b6f240f17e26ba9113" Feb 23 19:40:30 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:40:30.217289 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:40:41 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:40:41.217279 2199 scope.go:115] "RemoveContainer" containerID="990f4a14943bf48e2bb5cc6a85e76c0a8463838917b5f6b6f240f17e26ba9113" Feb 23 19:40:41 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:40:41.217846 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:40:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:45.234652893Z" level=info msg="NetworkStart: stopping network for sandbox b0168929d92a9b0a6de73a66b8397bebce8d979175083d8c3f8b6888fe95078f" id=0be220b4-6775-480c-8e3f-a5cebe382c20 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:40:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:45.234776578Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:b0168929d92a9b0a6de73a66b8397bebce8d979175083d8c3f8b6888fe95078f UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/953f7ee6-3a29-4b9a-87b6-25934e17995b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:40:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:45.234806877Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:40:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:45.234814761Z" level=warning msg="falling back to loading from existing 
plugins on disk" Feb 23 19:40:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:45.234847773Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:40:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:48.244765904Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(9b10eb05dd49c9fbb352f169bd9072b29fdbdef83586143d934baecf3a02a482): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=028f4ece-ad38-4fa9-97f2-34455bf58d62 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:40:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:48.244818234Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9b10eb05dd49c9fbb352f169bd9072b29fdbdef83586143d934baecf3a02a482" id=028f4ece-ad38-4fa9-97f2-34455bf58d62 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:40:48 ip-10-0-136-68 systemd[1]: run-utsns-010cd808\x2d07c5\x2d4c9b\x2d9cbb\x2de59651fb17fc.mount: Deactivated successfully. Feb 23 19:40:48 ip-10-0-136-68 systemd[1]: run-ipcns-010cd808\x2d07c5\x2d4c9b\x2d9cbb\x2de59651fb17fc.mount: Deactivated successfully. Feb 23 19:40:48 ip-10-0-136-68 systemd[1]: run-netns-010cd808\x2d07c5\x2d4c9b\x2d9cbb\x2de59651fb17fc.mount: Deactivated successfully. 
Feb 23 19:40:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:48.279323884Z" level=info msg="runSandbox: deleting pod ID 9b10eb05dd49c9fbb352f169bd9072b29fdbdef83586143d934baecf3a02a482 from idIndex" id=028f4ece-ad38-4fa9-97f2-34455bf58d62 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:40:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:48.279361756Z" level=info msg="runSandbox: removing pod sandbox 9b10eb05dd49c9fbb352f169bd9072b29fdbdef83586143d934baecf3a02a482" id=028f4ece-ad38-4fa9-97f2-34455bf58d62 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:40:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:48.279395297Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9b10eb05dd49c9fbb352f169bd9072b29fdbdef83586143d934baecf3a02a482" id=028f4ece-ad38-4fa9-97f2-34455bf58d62 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:40:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:48.279423618Z" level=info msg="runSandbox: unmounting shmPath for sandbox 9b10eb05dd49c9fbb352f169bd9072b29fdbdef83586143d934baecf3a02a482" id=028f4ece-ad38-4fa9-97f2-34455bf58d62 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:40:48 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-9b10eb05dd49c9fbb352f169bd9072b29fdbdef83586143d934baecf3a02a482-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:40:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:48.286303416Z" level=info msg="runSandbox: removing pod sandbox from storage: 9b10eb05dd49c9fbb352f169bd9072b29fdbdef83586143d934baecf3a02a482" id=028f4ece-ad38-4fa9-97f2-34455bf58d62 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:40:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:48.287886809Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=028f4ece-ad38-4fa9-97f2-34455bf58d62 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:40:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:40:48.287915425Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=028f4ece-ad38-4fa9-97f2-34455bf58d62 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:40:48 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:40:48.288087 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(9b10eb05dd49c9fbb352f169bd9072b29fdbdef83586143d934baecf3a02a482): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:40:48 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:40:48.288132 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(9b10eb05dd49c9fbb352f169bd9072b29fdbdef83586143d934baecf3a02a482): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:40:48 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:40:48.288157 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(9b10eb05dd49c9fbb352f169bd9072b29fdbdef83586143d934baecf3a02a482): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:40:48 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:40:48.288214 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(9b10eb05dd49c9fbb352f169bd9072b29fdbdef83586143d934baecf3a02a482): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 19:40:56 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:40:56.216789 2199 scope.go:115] "RemoveContainer" containerID="990f4a14943bf48e2bb5cc6a85e76c0a8463838917b5f6b6f240f17e26ba9113" Feb 23 19:40:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:40:56.217412 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:40:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:40:56.291901 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:40:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:40:56.292177 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:40:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:40:56.292433 2199 remote_runtime.go:479] 
"ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:40:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:40:56.292462 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:41:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:01.234510795Z" level=info msg="NetworkStart: stopping network for sandbox e9eae08bc2ba8157179c8f262e04020fcb0b590d71ab993c7624cdcb7b2877ed" id=4d00f8e6-69e0-4624-b06e-1654e32024ab name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:41:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:01.234640513Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:e9eae08bc2ba8157179c8f262e04020fcb0b590d71ab993c7624cdcb7b2877ed UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/e6585e4d-e481-4298-ad29-344a59e868af Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:41:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:01.234669816Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:41:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:01.234679109Z" level=warning msg="falling back to 
loading from existing plugins on disk" Feb 23 19:41:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:01.234689318Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:41:02 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:41:02.217438 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:41:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:02.217918178Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=d64ca98a-3c4c-49d3-8bb9-995bd10915af name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:41:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:02.217995634Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:41:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:02.223379730Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:8a78083ac77aa2d9d7499704973ac682c33333c9aa8b911cc016635534515a81 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/160eac9e-a1a7-43fe-88d4-5b4bc700f25a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:41:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:02.223407057Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:41:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:06.235110115Z" level=info msg="NetworkStart: stopping network for sandbox 40f393caf882dd31d881ca579e2f3b2b8d4f7a7365cabf8bf8d0a93b68327695" id=e6d54d74-9935-452f-8274-5fc4c2b5b1fa name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:41:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:06.235239484Z" level=info msg="Got pod network 
&{Name:dns-default-657v4 Namespace:openshift-dns ID:40f393caf882dd31d881ca579e2f3b2b8d4f7a7365cabf8bf8d0a93b68327695 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/fab817ba-d664-4c6b-be87-8b2a3149bfee Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:41:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:06.235301648Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:41:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:06.235314015Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:41:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:06.235324590Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:41:10 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:41:10.217114 2199 scope.go:115] "RemoveContainer" containerID="990f4a14943bf48e2bb5cc6a85e76c0a8463838917b5f6b6f240f17e26ba9113" Feb 23 19:41:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:41:10.217726 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:41:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:13.236647474Z" level=info msg="NetworkStart: stopping network for sandbox c6df0ea7fddc1ded3838a758580d07bd34532b259502952ba82256faa0ee67cf" id=06905cf9-093c-4c39-968c-663c950e8cb8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:41:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:13.236768590Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j 
Namespace:openshift-cluster-csi-drivers ID:c6df0ea7fddc1ded3838a758580d07bd34532b259502952ba82256faa0ee67cf UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/2ab22ab9-28a9-4ab2-8e5e-f7fd19498286 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:41:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:13.236797580Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:41:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:13.236809765Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:41:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:13.236822190Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:41:16 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:41:16.217896 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:41:16 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:41:16.218430 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:41:16 ip-10-0-136-68 
kubenswrapper[2199]: E0223 19:41:16.218717 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:41:16 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:41:16.218754 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:41:22 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:41:22.217308 2199 scope.go:115] "RemoveContainer" containerID="990f4a14943bf48e2bb5cc6a85e76c0a8463838917b5f6b6f240f17e26ba9113" Feb 23 19:41:22 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:41:22.217880 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:41:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:41:26.291706 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running 
failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:41:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:41:26.291977 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:41:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:41:26.292203 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:41:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:41:26.292239 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:41:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:30.243929060Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox 
k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(b0168929d92a9b0a6de73a66b8397bebce8d979175083d8c3f8b6888fe95078f): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=0be220b4-6775-480c-8e3f-a5cebe382c20 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:41:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:30.243977590Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox b0168929d92a9b0a6de73a66b8397bebce8d979175083d8c3f8b6888fe95078f" id=0be220b4-6775-480c-8e3f-a5cebe382c20 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:41:30 ip-10-0-136-68 systemd[1]: run-utsns-953f7ee6\x2d3a29\x2d4b9a\x2d87b6\x2d25934e17995b.mount: Deactivated successfully. Feb 23 19:41:30 ip-10-0-136-68 systemd[1]: run-ipcns-953f7ee6\x2d3a29\x2d4b9a\x2d87b6\x2d25934e17995b.mount: Deactivated successfully. Feb 23 19:41:30 ip-10-0-136-68 systemd[1]: run-netns-953f7ee6\x2d3a29\x2d4b9a\x2d87b6\x2d25934e17995b.mount: Deactivated successfully. 
Feb 23 19:41:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:30.266337719Z" level=info msg="runSandbox: deleting pod ID b0168929d92a9b0a6de73a66b8397bebce8d979175083d8c3f8b6888fe95078f from idIndex" id=0be220b4-6775-480c-8e3f-a5cebe382c20 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:41:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:30.266384532Z" level=info msg="runSandbox: removing pod sandbox b0168929d92a9b0a6de73a66b8397bebce8d979175083d8c3f8b6888fe95078f" id=0be220b4-6775-480c-8e3f-a5cebe382c20 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:41:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:30.266424583Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox b0168929d92a9b0a6de73a66b8397bebce8d979175083d8c3f8b6888fe95078f" id=0be220b4-6775-480c-8e3f-a5cebe382c20 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:41:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:30.266445953Z" level=info msg="runSandbox: unmounting shmPath for sandbox b0168929d92a9b0a6de73a66b8397bebce8d979175083d8c3f8b6888fe95078f" id=0be220b4-6775-480c-8e3f-a5cebe382c20 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:41:30 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-b0168929d92a9b0a6de73a66b8397bebce8d979175083d8c3f8b6888fe95078f-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:41:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:30.279323112Z" level=info msg="runSandbox: removing pod sandbox from storage: b0168929d92a9b0a6de73a66b8397bebce8d979175083d8c3f8b6888fe95078f" id=0be220b4-6775-480c-8e3f-a5cebe382c20 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:41:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:30.280974015Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=0be220b4-6775-480c-8e3f-a5cebe382c20 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:41:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:30.281004776Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=0be220b4-6775-480c-8e3f-a5cebe382c20 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:41:30 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:41:30.281229 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(b0168929d92a9b0a6de73a66b8397bebce8d979175083d8c3f8b6888fe95078f): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:41:30 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:41:30.281349 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(b0168929d92a9b0a6de73a66b8397bebce8d979175083d8c3f8b6888fe95078f): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:41:30 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:41:30.281377 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(b0168929d92a9b0a6de73a66b8397bebce8d979175083d8c3f8b6888fe95078f): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:41:30 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:41:30.281442 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(b0168929d92a9b0a6de73a66b8397bebce8d979175083d8c3f8b6888fe95078f): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 19:41:35 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:41:35.217357 2199 scope.go:115] "RemoveContainer" containerID="990f4a14943bf48e2bb5cc6a85e76c0a8463838917b5f6b6f240f17e26ba9113" Feb 23 19:41:35 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:41:35.217729 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:41:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:41:44.217234 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:41:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:44.217709853Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=8173fa01-97bc-451a-8f5b-899453b1328b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:41:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:44.217775917Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:41:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:44.223299904Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:27ff4ae72336389fcd0c093bada12cb1476542197e53fea196deed835d66c43d UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/922e2685-c2bf-4d38-aaef-1c5f31a61bc2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:41:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:44.223335427Z" 
level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:41:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:46.243716590Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(e9eae08bc2ba8157179c8f262e04020fcb0b590d71ab993c7624cdcb7b2877ed): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=4d00f8e6-69e0-4624-b06e-1654e32024ab name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:41:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:46.243759987Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e9eae08bc2ba8157179c8f262e04020fcb0b590d71ab993c7624cdcb7b2877ed" id=4d00f8e6-69e0-4624-b06e-1654e32024ab name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:41:46 ip-10-0-136-68 systemd[1]: run-utsns-e6585e4d\x2de481\x2d4298\x2dad29\x2d344a59e868af.mount: Deactivated successfully. Feb 23 19:41:46 ip-10-0-136-68 systemd[1]: run-ipcns-e6585e4d\x2de481\x2d4298\x2dad29\x2d344a59e868af.mount: Deactivated successfully. Feb 23 19:41:46 ip-10-0-136-68 systemd[1]: run-netns-e6585e4d\x2de481\x2d4298\x2dad29\x2d344a59e868af.mount: Deactivated successfully. 
Feb 23 19:41:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:46.267320428Z" level=info msg="runSandbox: deleting pod ID e9eae08bc2ba8157179c8f262e04020fcb0b590d71ab993c7624cdcb7b2877ed from idIndex" id=4d00f8e6-69e0-4624-b06e-1654e32024ab name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:41:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:46.267356750Z" level=info msg="runSandbox: removing pod sandbox e9eae08bc2ba8157179c8f262e04020fcb0b590d71ab993c7624cdcb7b2877ed" id=4d00f8e6-69e0-4624-b06e-1654e32024ab name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:41:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:46.267383012Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e9eae08bc2ba8157179c8f262e04020fcb0b590d71ab993c7624cdcb7b2877ed" id=4d00f8e6-69e0-4624-b06e-1654e32024ab name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:41:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:46.267394993Z" level=info msg="runSandbox: unmounting shmPath for sandbox e9eae08bc2ba8157179c8f262e04020fcb0b590d71ab993c7624cdcb7b2877ed" id=4d00f8e6-69e0-4624-b06e-1654e32024ab name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:41:46 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-e9eae08bc2ba8157179c8f262e04020fcb0b590d71ab993c7624cdcb7b2877ed-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:41:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:46.272311730Z" level=info msg="runSandbox: removing pod sandbox from storage: e9eae08bc2ba8157179c8f262e04020fcb0b590d71ab993c7624cdcb7b2877ed" id=4d00f8e6-69e0-4624-b06e-1654e32024ab name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:41:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:46.273885352Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=4d00f8e6-69e0-4624-b06e-1654e32024ab name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:41:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:46.273913281Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=4d00f8e6-69e0-4624-b06e-1654e32024ab name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:41:46 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:41:46.274086 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(e9eae08bc2ba8157179c8f262e04020fcb0b590d71ab993c7624cdcb7b2877ed): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:41:46 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:41:46.274150 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(e9eae08bc2ba8157179c8f262e04020fcb0b590d71ab993c7624cdcb7b2877ed): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:41:46 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:41:46.274189 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(e9eae08bc2ba8157179c8f262e04020fcb0b590d71ab993c7624cdcb7b2877ed): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:41:46 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:41:46.274291 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(e9eae08bc2ba8157179c8f262e04020fcb0b590d71ab993c7624cdcb7b2877ed): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 19:41:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:47.236629995Z" level=info msg="NetworkStart: stopping network for sandbox 8a78083ac77aa2d9d7499704973ac682c33333c9aa8b911cc016635534515a81" id=d64ca98a-3c4c-49d3-8bb9-995bd10915af name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:41:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:47.236753342Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:8a78083ac77aa2d9d7499704973ac682c33333c9aa8b911cc016635534515a81 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/160eac9e-a1a7-43fe-88d4-5b4bc700f25a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:41:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:47.236781701Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:41:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:47.236789823Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:41:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:47.236797880Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:41:48 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:41:48.217283 2199 scope.go:115] "RemoveContainer" containerID="990f4a14943bf48e2bb5cc6a85e76c0a8463838917b5f6b6f240f17e26ba9113" Feb 23 19:41:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:48.218076059Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=abb14bb1-0aac-401c-9727-75da90871adc 
name=/runtime.v1.ImageService/ImageStatus Feb 23 19:41:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:48.218508730Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=abb14bb1-0aac-401c-9727-75da90871adc name=/runtime.v1.ImageService/ImageStatus Feb 23 19:41:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:48.219084981Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=d8bf37f3-12b3-406e-be74-213b54314775 name=/runtime.v1.ImageService/ImageStatus Feb 23 19:41:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:48.219325205Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=d8bf37f3-12b3-406e-be74-213b54314775 name=/runtime.v1.ImageService/ImageStatus Feb 23 19:41:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:48.220068042Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=65d87dfe-9071-46e8-8969-817b40f74bc9 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 19:41:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:48.220157857Z" level=warning msg="Allowed annotations are specified for workload 
[io.containers.trace-syscall]" Feb 23 19:41:48 ip-10-0-136-68 systemd[1]: Started crio-conmon-b48df4a3b373b068ffc839f53126b2c8fcae703e08f0a076d517ad7cb7933786.scope. Feb 23 19:41:48 ip-10-0-136-68 systemd[1]: Started libcontainer container b48df4a3b373b068ffc839f53126b2c8fcae703e08f0a076d517ad7cb7933786. Feb 23 19:41:48 ip-10-0-136-68 conmon[18131]: conmon b48df4a3b373b068ffc8 : Failed to write to cgroup.event_control Operation not supported Feb 23 19:41:48 ip-10-0-136-68 systemd[1]: crio-conmon-b48df4a3b373b068ffc839f53126b2c8fcae703e08f0a076d517ad7cb7933786.scope: Deactivated successfully. Feb 23 19:41:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:48.351915277Z" level=info msg="Created container b48df4a3b373b068ffc839f53126b2c8fcae703e08f0a076d517ad7cb7933786: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=65d87dfe-9071-46e8-8969-817b40f74bc9 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 19:41:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:48.352306802Z" level=info msg="Starting container: b48df4a3b373b068ffc839f53126b2c8fcae703e08f0a076d517ad7cb7933786" id=69ef6ee8-eef8-4838-a0d8-ec416be38cfc name=/runtime.v1.RuntimeService/StartContainer Feb 23 19:41:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:48.370485855Z" level=info msg="Started container" PID=18143 containerID=b48df4a3b373b068ffc839f53126b2c8fcae703e08f0a076d517ad7cb7933786 description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver id=69ef6ee8-eef8-4838-a0d8-ec416be38cfc name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4ac357af36e7dd4792f93249425e93fd84aed45840e9fbb3d0ae6c0af964798 Feb 23 19:41:48 ip-10-0-136-68 systemd[1]: crio-b48df4a3b373b068ffc839f53126b2c8fcae703e08f0a076d517ad7cb7933786.scope: Deactivated successfully. 
Feb 23 19:41:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:51.245785469Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(40f393caf882dd31d881ca579e2f3b2b8d4f7a7365cabf8bf8d0a93b68327695): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e6d54d74-9935-452f-8274-5fc4c2b5b1fa name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:41:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:51.245831284Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 40f393caf882dd31d881ca579e2f3b2b8d4f7a7365cabf8bf8d0a93b68327695" id=e6d54d74-9935-452f-8274-5fc4c2b5b1fa name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:41:51 ip-10-0-136-68 systemd[1]: run-utsns-fab817ba\x2dd664\x2d4c6b\x2dbe87\x2d8b2a3149bfee.mount: Deactivated successfully. Feb 23 19:41:51 ip-10-0-136-68 systemd[1]: run-ipcns-fab817ba\x2dd664\x2d4c6b\x2dbe87\x2d8b2a3149bfee.mount: Deactivated successfully. Feb 23 19:41:51 ip-10-0-136-68 systemd[1]: run-netns-fab817ba\x2dd664\x2d4c6b\x2dbe87\x2d8b2a3149bfee.mount: Deactivated successfully. 
Feb 23 19:41:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:51.273324689Z" level=info msg="runSandbox: deleting pod ID 40f393caf882dd31d881ca579e2f3b2b8d4f7a7365cabf8bf8d0a93b68327695 from idIndex" id=e6d54d74-9935-452f-8274-5fc4c2b5b1fa name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:41:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:51.273357328Z" level=info msg="runSandbox: removing pod sandbox 40f393caf882dd31d881ca579e2f3b2b8d4f7a7365cabf8bf8d0a93b68327695" id=e6d54d74-9935-452f-8274-5fc4c2b5b1fa name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:41:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:51.273389646Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 40f393caf882dd31d881ca579e2f3b2b8d4f7a7365cabf8bf8d0a93b68327695" id=e6d54d74-9935-452f-8274-5fc4c2b5b1fa name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:41:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:51.273413227Z" level=info msg="runSandbox: unmounting shmPath for sandbox 40f393caf882dd31d881ca579e2f3b2b8d4f7a7365cabf8bf8d0a93b68327695" id=e6d54d74-9935-452f-8274-5fc4c2b5b1fa name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:41:51 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-40f393caf882dd31d881ca579e2f3b2b8d4f7a7365cabf8bf8d0a93b68327695-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:41:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:51.279314716Z" level=info msg="runSandbox: removing pod sandbox from storage: 40f393caf882dd31d881ca579e2f3b2b8d4f7a7365cabf8bf8d0a93b68327695" id=e6d54d74-9935-452f-8274-5fc4c2b5b1fa name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:41:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:51.280859348Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=e6d54d74-9935-452f-8274-5fc4c2b5b1fa name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:41:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:51.280888003Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=e6d54d74-9935-452f-8274-5fc4c2b5b1fa name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:41:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:41:51.281081 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(40f393caf882dd31d881ca579e2f3b2b8d4f7a7365cabf8bf8d0a93b68327695): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:41:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:41:51.281134 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(40f393caf882dd31d881ca579e2f3b2b8d4f7a7365cabf8bf8d0a93b68327695): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:41:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:41:51.281167 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(40f393caf882dd31d881ca579e2f3b2b8d4f7a7365cabf8bf8d0a93b68327695): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:41:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:41:51.281221 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(40f393caf882dd31d881ca579e2f3b2b8d4f7a7365cabf8bf8d0a93b68327695): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 19:41:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:52.428930532Z" level=warning msg="Failed to find container exit file for 990f4a14943bf48e2bb5cc6a85e76c0a8463838917b5f6b6f240f17e26ba9113: timed out waiting for the condition" id=935e71db-edb6-4991-8fce-1995e1797007 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:41:52 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:41:52.429900 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerStarted Data:b48df4a3b373b068ffc839f53126b2c8fcae703e08f0a076d517ad7cb7933786} Feb 23 19:41:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:41:56.292599 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:41:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:41:56.292888 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:41:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:41:56.293116 2199 remote_runtime.go:479] "ExecSync cmd from 
runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:41:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:41:56.293148 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:41:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:58.247121717Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(c6df0ea7fddc1ded3838a758580d07bd34532b259502952ba82256faa0ee67cf): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=06905cf9-093c-4c39-968c-663c950e8cb8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:41:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:58.247176019Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 
c6df0ea7fddc1ded3838a758580d07bd34532b259502952ba82256faa0ee67cf" id=06905cf9-093c-4c39-968c-663c950e8cb8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:41:58 ip-10-0-136-68 systemd[1]: run-utsns-2ab22ab9\x2d28a9\x2d4ab2\x2d8e5e\x2df7fd19498286.mount: Deactivated successfully. Feb 23 19:41:58 ip-10-0-136-68 systemd[1]: run-ipcns-2ab22ab9\x2d28a9\x2d4ab2\x2d8e5e\x2df7fd19498286.mount: Deactivated successfully. Feb 23 19:41:58 ip-10-0-136-68 systemd[1]: run-netns-2ab22ab9\x2d28a9\x2d4ab2\x2d8e5e\x2df7fd19498286.mount: Deactivated successfully. Feb 23 19:41:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:58.266327392Z" level=info msg="runSandbox: deleting pod ID c6df0ea7fddc1ded3838a758580d07bd34532b259502952ba82256faa0ee67cf from idIndex" id=06905cf9-093c-4c39-968c-663c950e8cb8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:41:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:58.266362664Z" level=info msg="runSandbox: removing pod sandbox c6df0ea7fddc1ded3838a758580d07bd34532b259502952ba82256faa0ee67cf" id=06905cf9-093c-4c39-968c-663c950e8cb8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:41:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:58.266393123Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox c6df0ea7fddc1ded3838a758580d07bd34532b259502952ba82256faa0ee67cf" id=06905cf9-093c-4c39-968c-663c950e8cb8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:41:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:58.266417758Z" level=info msg="runSandbox: unmounting shmPath for sandbox c6df0ea7fddc1ded3838a758580d07bd34532b259502952ba82256faa0ee67cf" id=06905cf9-093c-4c39-968c-663c950e8cb8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:41:58 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-c6df0ea7fddc1ded3838a758580d07bd34532b259502952ba82256faa0ee67cf-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:41:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:58.272305104Z" level=info msg="runSandbox: removing pod sandbox from storage: c6df0ea7fddc1ded3838a758580d07bd34532b259502952ba82256faa0ee67cf" id=06905cf9-093c-4c39-968c-663c950e8cb8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:41:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:58.273827477Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=06905cf9-093c-4c39-968c-663c950e8cb8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:41:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:41:58.273856263Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=06905cf9-093c-4c39-968c-663c950e8cb8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:41:58 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:41:58.274030 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(c6df0ea7fddc1ded3838a758580d07bd34532b259502952ba82256faa0ee67cf): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:41:58 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:41:58.274076 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(c6df0ea7fddc1ded3838a758580d07bd34532b259502952ba82256faa0ee67cf): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:41:58 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:41:58.274099 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(c6df0ea7fddc1ded3838a758580d07bd34532b259502952ba82256faa0ee67cf): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:41:58 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:41:58.274160 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(c6df0ea7fddc1ded3838a758580d07bd34532b259502952ba82256faa0ee67cf): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 19:42:00 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:42:00.217328 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:42:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:00.217734268Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=875ca500-dd5b-4c6e-8e84-9caae66fa680 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:42:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:00.217788586Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:42:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:00.223709002Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:9d553cde8e45deaaed5784e8d709745ffd371b047053d5d7e89e0262f48f9d94 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/1bad56ef-5ae0-4708-ac93-d4c29e1bd886 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:42:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:00.223882827Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:42:02 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:42:02.217361 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 19:42:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:02.217938263Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=bb48773b-22c9-47d6-b338-34a542379025 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:42:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:02.218009863Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:42:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:02.224348807Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:aba50f44cc22a7919be834f00986741e935288146f231d3e67f6535853376987 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/7201f37c-d2b0-456c-93b9-93c9072cdffa Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:42:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:02.224383515Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:42:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:42:04.872419 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:42:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:42:04.872470 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:42:12 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:42:12.216559 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:42:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:12.217012920Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=4e11d743-f7ea-49ea-9630-487e2a238dbb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:42:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:12.217075547Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:42:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:12.223087574Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:61b0ee6bfed7802f2457fff1c2ba293aa9991035408206e5295f5387d9660a2f UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/eba30d5e-bdbf-4c72-9ab0-b1a3ae4ae1c0 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:42:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:12.223122561Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:42:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:42:14.872373 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:42:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:42:14.872571 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: 
connection refused"
Feb 23 19:42:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:42:24.872791 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 19:42:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:42:24.872853 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 19:42:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:42:26.292352 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:42:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:42:26.292698 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:42:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:42:26.292931 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:42:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:42:26.292966 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 19:42:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:29.236668283Z" level=info msg="NetworkStart: stopping network for sandbox 27ff4ae72336389fcd0c093bada12cb1476542197e53fea196deed835d66c43d" id=8173fa01-97bc-451a-8f5b-899453b1328b name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:42:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:29.236786379Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:27ff4ae72336389fcd0c093bada12cb1476542197e53fea196deed835d66c43d UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/922e2685-c2bf-4d38-aaef-1c5f31a61bc2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:42:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:29.236814135Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 19:42:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:29.236821894Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 19:42:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:29.236829283Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:42:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:32.246926455Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(8a78083ac77aa2d9d7499704973ac682c33333c9aa8b911cc016635534515a81): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d64ca98a-3c4c-49d3-8bb9-995bd10915af name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:42:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:32.246974164Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 8a78083ac77aa2d9d7499704973ac682c33333c9aa8b911cc016635534515a81" id=d64ca98a-3c4c-49d3-8bb9-995bd10915af name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:42:32 ip-10-0-136-68 systemd[1]: run-utsns-160eac9e\x2da1a7\x2d43fe\x2d88d4\x2d5b4bc700f25a.mount: Deactivated successfully.
Feb 23 19:42:32 ip-10-0-136-68 systemd[1]: run-ipcns-160eac9e\x2da1a7\x2d43fe\x2d88d4\x2d5b4bc700f25a.mount: Deactivated successfully.
Feb 23 19:42:32 ip-10-0-136-68 systemd[1]: run-netns-160eac9e\x2da1a7\x2d43fe\x2d88d4\x2d5b4bc700f25a.mount: Deactivated successfully.
Feb 23 19:42:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:32.275335538Z" level=info msg="runSandbox: deleting pod ID 8a78083ac77aa2d9d7499704973ac682c33333c9aa8b911cc016635534515a81 from idIndex" id=d64ca98a-3c4c-49d3-8bb9-995bd10915af name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:42:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:32.275370757Z" level=info msg="runSandbox: removing pod sandbox 8a78083ac77aa2d9d7499704973ac682c33333c9aa8b911cc016635534515a81" id=d64ca98a-3c4c-49d3-8bb9-995bd10915af name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:42:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:32.275412063Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 8a78083ac77aa2d9d7499704973ac682c33333c9aa8b911cc016635534515a81" id=d64ca98a-3c4c-49d3-8bb9-995bd10915af name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:42:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:32.275426406Z" level=info msg="runSandbox: unmounting shmPath for sandbox 8a78083ac77aa2d9d7499704973ac682c33333c9aa8b911cc016635534515a81" id=d64ca98a-3c4c-49d3-8bb9-995bd10915af name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:42:32 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-8a78083ac77aa2d9d7499704973ac682c33333c9aa8b911cc016635534515a81-userdata-shm.mount: Deactivated successfully.
Feb 23 19:42:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:32.282302023Z" level=info msg="runSandbox: removing pod sandbox from storage: 8a78083ac77aa2d9d7499704973ac682c33333c9aa8b911cc016635534515a81" id=d64ca98a-3c4c-49d3-8bb9-995bd10915af name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:42:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:32.283809878Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=d64ca98a-3c4c-49d3-8bb9-995bd10915af name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:42:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:32.283841246Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=d64ca98a-3c4c-49d3-8bb9-995bd10915af name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:42:32 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:42:32.284041 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(8a78083ac77aa2d9d7499704973ac682c33333c9aa8b911cc016635534515a81): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 19:42:32 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:42:32.284090 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(8a78083ac77aa2d9d7499704973ac682c33333c9aa8b911cc016635534515a81): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 19:42:32 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:42:32.284114 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(8a78083ac77aa2d9d7499704973ac682c33333c9aa8b911cc016635534515a81): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 19:42:32 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:42:32.284173 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(8a78083ac77aa2d9d7499704973ac682c33333c9aa8b911cc016635534515a81): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1
Feb 23 19:42:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:42:34.872769 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 19:42:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:42:34.872830 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 19:42:35 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:42:35.217155 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:42:35 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:42:35.217530 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:42:35 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:42:35.217825 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:42:35 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:42:35.217862 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 19:42:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:42:44.872988 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 19:42:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:42:44.873058 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 19:42:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:42:44.873086 2199 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7"
Feb 23 19:42:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:42:44.873637 2199 kuberuntime_manager.go:659] "Message for Container of pod" containerName="csi-driver" containerStatusID={Type:cri-o ID:b48df4a3b373b068ffc839f53126b2c8fcae703e08f0a076d517ad7cb7933786} pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" containerMessage="Container csi-driver failed liveness probe, will be restarted"
Feb 23 19:42:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:42:44.873794 2199 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" containerID="cri-o://b48df4a3b373b068ffc839f53126b2c8fcae703e08f0a076d517ad7cb7933786" gracePeriod=30
Feb 23 19:42:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:44.874042968Z" level=info msg="Stopping container: b48df4a3b373b068ffc839f53126b2c8fcae703e08f0a076d517ad7cb7933786 (timeout: 30s)" id=62168f7e-532f-4b90-906f-46e8d490e359 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 19:42:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:45.237040055Z" level=info msg="NetworkStart: stopping network for sandbox 9d553cde8e45deaaed5784e8d709745ffd371b047053d5d7e89e0262f48f9d94" id=875ca500-dd5b-4c6e-8e84-9caae66fa680 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:42:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:45.237178857Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:9d553cde8e45deaaed5784e8d709745ffd371b047053d5d7e89e0262f48f9d94 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/1bad56ef-5ae0-4708-ac93-d4c29e1bd886 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:42:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:45.237218630Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 19:42:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:45.237228565Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 19:42:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:45.237238168Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:42:47 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:42:47.216331 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 19:42:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:47.216744799Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=815a113d-9c62-4a5f-8eb3-79fdb6fa94e2 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:42:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:47.216815478Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 19:42:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:47.222474211Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:cc58cba3d6a7f1a874d52f6ebe88370c3ca436b05c29b00d5d7f5ab84ecd2820 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/46eaea30-821b-43dd-b226-c3730bb01820 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:42:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:47.222511529Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:42:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:47.235896436Z" level=info msg="NetworkStart: stopping network for sandbox aba50f44cc22a7919be834f00986741e935288146f231d3e67f6535853376987" id=bb48773b-22c9-47d6-b338-34a542379025 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:42:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:47.235993042Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:aba50f44cc22a7919be834f00986741e935288146f231d3e67f6535853376987 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/7201f37c-d2b0-456c-93b9-93c9072cdffa Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:42:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:47.236035757Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 19:42:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:47.236048792Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 19:42:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:47.236060277Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:42:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:48.636056745Z" level=warning msg="Failed to find container exit file for b48df4a3b373b068ffc839f53126b2c8fcae703e08f0a076d517ad7cb7933786: timed out waiting for the condition" id=62168f7e-532f-4b90-906f-46e8d490e359 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 19:42:48 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-827c60d98e3304509fb28dcbcf4e951e2dd8331d8f8f1713843b6f26a2b2ad61-merged.mount: Deactivated successfully.
Feb 23 19:42:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:52.418060974Z" level=warning msg="Failed to find container exit file for b48df4a3b373b068ffc839f53126b2c8fcae703e08f0a076d517ad7cb7933786: timed out waiting for the condition" id=62168f7e-532f-4b90-906f-46e8d490e359 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 19:42:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:52.420487083Z" level=info msg="Stopped container b48df4a3b373b068ffc839f53126b2c8fcae703e08f0a076d517ad7cb7933786: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=62168f7e-532f-4b90-906f-46e8d490e359 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 19:42:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:52.421193884Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=0b6f05ed-f00a-4a14-ba12-18bc33708227 name=/runtime.v1.ImageService/ImageStatus
Feb 23 19:42:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:52.421398297Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=0b6f05ed-f00a-4a14-ba12-18bc33708227 name=/runtime.v1.ImageService/ImageStatus
Feb 23 19:42:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:52.422006307Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=ff605bf6-826d-4f3d-be02-1d427783b8d0 name=/runtime.v1.ImageService/ImageStatus
Feb 23 19:42:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:52.422211102Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=ff605bf6-826d-4f3d-be02-1d427783b8d0 name=/runtime.v1.ImageService/ImageStatus
Feb 23 19:42:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:52.422877691Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=9fe0f931-b0f9-4b44-9b07-fa5d7d887370 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 19:42:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:52.422970451Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 19:42:52 ip-10-0-136-68 systemd[1]: Started crio-conmon-b1c11a9f34ac1de41d596751975e78d419ff2b840558c83020663936212e437d.scope.
Feb 23 19:42:52 ip-10-0-136-68 systemd[1]: Started libcontainer container b1c11a9f34ac1de41d596751975e78d419ff2b840558c83020663936212e437d.
Feb 23 19:42:52 ip-10-0-136-68 conmon[18304]: conmon b1c11a9f34ac1de41d59 : Failed to write to cgroup.event_control Operation not supported
Feb 23 19:42:52 ip-10-0-136-68 systemd[1]: crio-conmon-b1c11a9f34ac1de41d596751975e78d419ff2b840558c83020663936212e437d.scope: Deactivated successfully.
Feb 23 19:42:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:52.557111560Z" level=info msg="Created container b1c11a9f34ac1de41d596751975e78d419ff2b840558c83020663936212e437d: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=9fe0f931-b0f9-4b44-9b07-fa5d7d887370 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 19:42:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:52.557589825Z" level=info msg="Starting container: b1c11a9f34ac1de41d596751975e78d419ff2b840558c83020663936212e437d" id=cc15a08a-8463-4aa0-9ce2-29a5268fa872 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 19:42:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:52.564343456Z" level=info msg="Started container" PID=18316 containerID=b1c11a9f34ac1de41d596751975e78d419ff2b840558c83020663936212e437d description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver id=cc15a08a-8463-4aa0-9ce2-29a5268fa872 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4ac357af36e7dd4792f93249425e93fd84aed45840e9fbb3d0ae6c0af964798
Feb 23 19:42:52 ip-10-0-136-68 systemd[1]: crio-b1c11a9f34ac1de41d596751975e78d419ff2b840558c83020663936212e437d.scope: Deactivated successfully.
Feb 23 19:42:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:53.260837350Z" level=warning msg="Failed to find container exit file for b48df4a3b373b068ffc839f53126b2c8fcae703e08f0a076d517ad7cb7933786: timed out waiting for the condition" id=ed6048e0-0b8c-4a37-9001-09bf75632f51 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 19:42:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:42:56.292277 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:42:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:42:56.292592 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:42:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:42:56.292803 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:42:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:42:56.292831 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 19:42:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:57.011198085Z" level=warning msg="Failed to find container exit file for 990f4a14943bf48e2bb5cc6a85e76c0a8463838917b5f6b6f240f17e26ba9113: timed out waiting for the condition" id=62716e47-c3f0-45d0-a51c-cdc986d9669a name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 19:42:57 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:42:57.012157 2199 generic.go:332] "Generic (PLEG): container finished" podID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerID="b48df4a3b373b068ffc839f53126b2c8fcae703e08f0a076d517ad7cb7933786" exitCode=-1
Feb 23 19:42:57 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:42:57.012195 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerDied Data:b48df4a3b373b068ffc839f53126b2c8fcae703e08f0a076d517ad7cb7933786}
Feb 23 19:42:57 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:42:57.012234 2199 scope.go:115] "RemoveContainer" containerID="990f4a14943bf48e2bb5cc6a85e76c0a8463838917b5f6b6f240f17e26ba9113"
Feb 23 19:42:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:57.235358544Z" level=info msg="NetworkStart: stopping network for sandbox 61b0ee6bfed7802f2457fff1c2ba293aa9991035408206e5295f5387d9660a2f" id=4e11d743-f7ea-49ea-9630-487e2a238dbb name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:42:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:57.235485200Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:61b0ee6bfed7802f2457fff1c2ba293aa9991035408206e5295f5387d9660a2f UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/eba30d5e-bdbf-4c72-9ab0-b1a3ae4ae1c0 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:42:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:57.235513297Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 19:42:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:57.235520897Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 19:42:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:42:57.235528950Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:43:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:00.761406075Z" level=warning msg="Failed to find container exit file for 990f4a14943bf48e2bb5cc6a85e76c0a8463838917b5f6b6f240f17e26ba9113: timed out waiting for the condition" id=755b8d1e-387c-4920-9d4c-ab4fcd4a87ee name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 19:43:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:01.776125135Z" level=warning msg="Failed to find container exit file for b48df4a3b373b068ffc839f53126b2c8fcae703e08f0a076d517ad7cb7933786: timed out waiting for the condition" id=048323ce-5c99-4f46-aa7c-466de07f297f name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 19:43:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:04.510971169Z" level=warning msg="Failed to find container exit file for 990f4a14943bf48e2bb5cc6a85e76c0a8463838917b5f6b6f240f17e26ba9113: timed out waiting for the condition" id=4147cdaa-8afe-4ec8-8999-b247f928dc24 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 19:43:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:04.511474728Z" level=info msg="Removing container: 990f4a14943bf48e2bb5cc6a85e76c0a8463838917b5f6b6f240f17e26ba9113" id=30e235c5-7b3c-48b2-bcb9-9c3efc08ed49 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 19:43:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:05.514266310Z" level=warning msg="Failed to find container exit file for 990f4a14943bf48e2bb5cc6a85e76c0a8463838917b5f6b6f240f17e26ba9113: timed out waiting for the condition" id=3d5e211a-485e-41db-8f00-7fdd79ebbcf6 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 19:43:05 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:43:05.515313 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerStarted Data:b1c11a9f34ac1de41d596751975e78d419ff2b840558c83020663936212e437d}
Feb 23 19:43:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:08.273011701Z" level=warning msg="Failed to find container exit file for 990f4a14943bf48e2bb5cc6a85e76c0a8463838917b5f6b6f240f17e26ba9113: timed out waiting for the condition" id=30e235c5-7b3c-48b2-bcb9-9c3efc08ed49 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 19:43:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:08.297210678Z" level=info msg="Removed container 990f4a14943bf48e2bb5cc6a85e76c0a8463838917b5f6b6f240f17e26ba9113: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=30e235c5-7b3c-48b2-bcb9-9c3efc08ed49 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 19:43:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:12.271084438Z" level=warning msg="Failed to find container exit file for b48df4a3b373b068ffc839f53126b2c8fcae703e08f0a076d517ad7cb7933786: timed out waiting for the condition" id=c5bdd2f5-9d0f-4a01-8410-977a54676219 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 19:43:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:14.247225230Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(27ff4ae72336389fcd0c093bada12cb1476542197e53fea196deed835d66c43d): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=8173fa01-97bc-451a-8f5b-899453b1328b name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:43:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:14.247296141Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 27ff4ae72336389fcd0c093bada12cb1476542197e53fea196deed835d66c43d" id=8173fa01-97bc-451a-8f5b-899453b1328b name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:43:14 ip-10-0-136-68 systemd[1]: run-utsns-922e2685\x2dc2bf\x2d4d38\x2daaef\x2d1c5f31a61bc2.mount: Deactivated successfully.
Feb 23 19:43:14 ip-10-0-136-68 systemd[1]: run-ipcns-922e2685\x2dc2bf\x2d4d38\x2daaef\x2d1c5f31a61bc2.mount: Deactivated successfully.
Feb 23 19:43:14 ip-10-0-136-68 systemd[1]: run-netns-922e2685\x2dc2bf\x2d4d38\x2daaef\x2d1c5f31a61bc2.mount: Deactivated successfully.
Feb 23 19:43:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:14.277335244Z" level=info msg="runSandbox: deleting pod ID 27ff4ae72336389fcd0c093bada12cb1476542197e53fea196deed835d66c43d from idIndex" id=8173fa01-97bc-451a-8f5b-899453b1328b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:43:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:14.277374560Z" level=info msg="runSandbox: removing pod sandbox 27ff4ae72336389fcd0c093bada12cb1476542197e53fea196deed835d66c43d" id=8173fa01-97bc-451a-8f5b-899453b1328b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:43:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:14.277416897Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 27ff4ae72336389fcd0c093bada12cb1476542197e53fea196deed835d66c43d" id=8173fa01-97bc-451a-8f5b-899453b1328b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:43:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:14.277438361Z" level=info msg="runSandbox: unmounting shmPath for sandbox 27ff4ae72336389fcd0c093bada12cb1476542197e53fea196deed835d66c43d" id=8173fa01-97bc-451a-8f5b-899453b1328b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:43:14 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-27ff4ae72336389fcd0c093bada12cb1476542197e53fea196deed835d66c43d-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:43:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:14.282313041Z" level=info msg="runSandbox: removing pod sandbox from storage: 27ff4ae72336389fcd0c093bada12cb1476542197e53fea196deed835d66c43d" id=8173fa01-97bc-451a-8f5b-899453b1328b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:43:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:14.284225454Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=8173fa01-97bc-451a-8f5b-899453b1328b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:43:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:14.284333746Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=8173fa01-97bc-451a-8f5b-899453b1328b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:43:14 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:43:14.284497 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(27ff4ae72336389fcd0c093bada12cb1476542197e53fea196deed835d66c43d): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:43:14 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:43:14.284544 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(27ff4ae72336389fcd0c093bada12cb1476542197e53fea196deed835d66c43d): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:43:14 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:43:14.284571 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(27ff4ae72336389fcd0c093bada12cb1476542197e53fea196deed835d66c43d): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:43:14 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:43:14.284637 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(27ff4ae72336389fcd0c093bada12cb1476542197e53fea196deed835d66c43d): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 19:43:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:43:14.872556 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:43:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:43:14.872617 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:43:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:43:24.873023 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:43:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:43:24.873086 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:43:26 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:43:26.216558 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:43:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:26.216997458Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=8d438e29-3c69-43c9-910c-98bc3622b9c2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:43:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:26.217077600Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:43:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:26.222823666Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:059d8e148328568e6636b75dc54bedf63b45d06420a9d4a6de163d82083d1ef1 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/4c6e758b-ad32-4ad8-91e3-2af6c59558e5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:43:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:26.222848489Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:43:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:43:26.291715 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:43:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:43:26.291962 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:43:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:43:26.292222 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:43:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:43:26.292282 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:43:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:30.247323016Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(9d553cde8e45deaaed5784e8d709745ffd371b047053d5d7e89e0262f48f9d94): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on 
del): timed out waiting for the condition" id=875ca500-dd5b-4c6e-8e84-9caae66fa680 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:43:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:30.247367008Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9d553cde8e45deaaed5784e8d709745ffd371b047053d5d7e89e0262f48f9d94" id=875ca500-dd5b-4c6e-8e84-9caae66fa680 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:43:30 ip-10-0-136-68 systemd[1]: run-utsns-1bad56ef\x2d5ae0\x2d4708\x2dac93\x2dd4c29e1bd886.mount: Deactivated successfully. Feb 23 19:43:30 ip-10-0-136-68 systemd[1]: run-ipcns-1bad56ef\x2d5ae0\x2d4708\x2dac93\x2dd4c29e1bd886.mount: Deactivated successfully. Feb 23 19:43:30 ip-10-0-136-68 systemd[1]: run-netns-1bad56ef\x2d5ae0\x2d4708\x2dac93\x2dd4c29e1bd886.mount: Deactivated successfully. Feb 23 19:43:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:30.282337339Z" level=info msg="runSandbox: deleting pod ID 9d553cde8e45deaaed5784e8d709745ffd371b047053d5d7e89e0262f48f9d94 from idIndex" id=875ca500-dd5b-4c6e-8e84-9caae66fa680 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:43:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:30.282377979Z" level=info msg="runSandbox: removing pod sandbox 9d553cde8e45deaaed5784e8d709745ffd371b047053d5d7e89e0262f48f9d94" id=875ca500-dd5b-4c6e-8e84-9caae66fa680 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:43:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:30.282429780Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9d553cde8e45deaaed5784e8d709745ffd371b047053d5d7e89e0262f48f9d94" id=875ca500-dd5b-4c6e-8e84-9caae66fa680 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:43:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:30.282452611Z" level=info msg="runSandbox: unmounting shmPath for sandbox 9d553cde8e45deaaed5784e8d709745ffd371b047053d5d7e89e0262f48f9d94" id=875ca500-dd5b-4c6e-8e84-9caae66fa680 
name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:43:30 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-9d553cde8e45deaaed5784e8d709745ffd371b047053d5d7e89e0262f48f9d94-userdata-shm.mount: Deactivated successfully. Feb 23 19:43:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:30.287305319Z" level=info msg="runSandbox: removing pod sandbox from storage: 9d553cde8e45deaaed5784e8d709745ffd371b047053d5d7e89e0262f48f9d94" id=875ca500-dd5b-4c6e-8e84-9caae66fa680 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:43:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:30.289701816Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=875ca500-dd5b-4c6e-8e84-9caae66fa680 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:43:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:30.289748494Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=875ca500-dd5b-4c6e-8e84-9caae66fa680 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:43:30 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:43:30.291452 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(9d553cde8e45deaaed5784e8d709745ffd371b047053d5d7e89e0262f48f9d94): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:43:30 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:43:30.291521 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(9d553cde8e45deaaed5784e8d709745ffd371b047053d5d7e89e0262f48f9d94): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:43:30 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:43:30.291555 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(9d553cde8e45deaaed5784e8d709745ffd371b047053d5d7e89e0262f48f9d94): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:43:30 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:43:30.291636 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(9d553cde8e45deaaed5784e8d709745ffd371b047053d5d7e89e0262f48f9d94): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 19:43:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:32.234194582Z" level=info msg="NetworkStart: stopping network for sandbox cc58cba3d6a7f1a874d52f6ebe88370c3ca436b05c29b00d5d7f5ab84ecd2820" id=815a113d-9c62-4a5f-8eb3-79fdb6fa94e2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:43:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:32.234329818Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:cc58cba3d6a7f1a874d52f6ebe88370c3ca436b05c29b00d5d7f5ab84ecd2820 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/46eaea30-821b-43dd-b226-c3730bb01820 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:43:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:32.234360881Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:43:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:32.234369186Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:43:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:32.234376312Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:43:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:32.244336502Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(aba50f44cc22a7919be834f00986741e935288146f231d3e67f6535853376987): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: 
[openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=bb48773b-22c9-47d6-b338-34a542379025 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:43:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:32.244377133Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox aba50f44cc22a7919be834f00986741e935288146f231d3e67f6535853376987" id=bb48773b-22c9-47d6-b338-34a542379025 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:43:32 ip-10-0-136-68 systemd[1]: run-utsns-7201f37c\x2dd2b0\x2d456c\x2d93b9\x2d93c9072cdffa.mount: Deactivated successfully. Feb 23 19:43:32 ip-10-0-136-68 systemd[1]: run-ipcns-7201f37c\x2dd2b0\x2d456c\x2d93b9\x2d93c9072cdffa.mount: Deactivated successfully. Feb 23 19:43:32 ip-10-0-136-68 systemd[1]: run-netns-7201f37c\x2dd2b0\x2d456c\x2d93b9\x2d93c9072cdffa.mount: Deactivated successfully. Feb 23 19:43:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:32.261318299Z" level=info msg="runSandbox: deleting pod ID aba50f44cc22a7919be834f00986741e935288146f231d3e67f6535853376987 from idIndex" id=bb48773b-22c9-47d6-b338-34a542379025 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:43:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:32.261356693Z" level=info msg="runSandbox: removing pod sandbox aba50f44cc22a7919be834f00986741e935288146f231d3e67f6535853376987" id=bb48773b-22c9-47d6-b338-34a542379025 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:43:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:32.261396478Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox aba50f44cc22a7919be834f00986741e935288146f231d3e67f6535853376987" id=bb48773b-22c9-47d6-b338-34a542379025 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:43:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:32.261413874Z" level=info msg="runSandbox: unmounting shmPath for sandbox 
aba50f44cc22a7919be834f00986741e935288146f231d3e67f6535853376987" id=bb48773b-22c9-47d6-b338-34a542379025 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:43:32 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-aba50f44cc22a7919be834f00986741e935288146f231d3e67f6535853376987-userdata-shm.mount: Deactivated successfully. Feb 23 19:43:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:32.281321867Z" level=info msg="runSandbox: removing pod sandbox from storage: aba50f44cc22a7919be834f00986741e935288146f231d3e67f6535853376987" id=bb48773b-22c9-47d6-b338-34a542379025 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:43:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:32.282904057Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=bb48773b-22c9-47d6-b338-34a542379025 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:43:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:32.282933357Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=bb48773b-22c9-47d6-b338-34a542379025 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:43:32 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:43:32.283119 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(aba50f44cc22a7919be834f00986741e935288146f231d3e67f6535853376987): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:43:32 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:43:32.283185 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(aba50f44cc22a7919be834f00986741e935288146f231d3e67f6535853376987): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:43:32 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:43:32.283222 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(aba50f44cc22a7919be834f00986741e935288146f231d3e67f6535853376987): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:43:32 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:43:32.283320 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(aba50f44cc22a7919be834f00986741e935288146f231d3e67f6535853376987): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 19:43:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:43:34.872633 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:43:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:43:34.872692 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:43:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:42.245640828Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(61b0ee6bfed7802f2457fff1c2ba293aa9991035408206e5295f5387d9660a2f): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=4e11d743-f7ea-49ea-9630-487e2a238dbb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:43:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:42.245692149Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 61b0ee6bfed7802f2457fff1c2ba293aa9991035408206e5295f5387d9660a2f" 
id=4e11d743-f7ea-49ea-9630-487e2a238dbb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:43:42 ip-10-0-136-68 systemd[1]: run-utsns-eba30d5e\x2dbdbf\x2d4c72\x2d9ab0\x2db1a3ae4ae1c0.mount: Deactivated successfully. Feb 23 19:43:42 ip-10-0-136-68 systemd[1]: run-ipcns-eba30d5e\x2dbdbf\x2d4c72\x2d9ab0\x2db1a3ae4ae1c0.mount: Deactivated successfully. Feb 23 19:43:42 ip-10-0-136-68 systemd[1]: run-netns-eba30d5e\x2dbdbf\x2d4c72\x2d9ab0\x2db1a3ae4ae1c0.mount: Deactivated successfully. Feb 23 19:43:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:42.276335936Z" level=info msg="runSandbox: deleting pod ID 61b0ee6bfed7802f2457fff1c2ba293aa9991035408206e5295f5387d9660a2f from idIndex" id=4e11d743-f7ea-49ea-9630-487e2a238dbb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:43:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:42.276382809Z" level=info msg="runSandbox: removing pod sandbox 61b0ee6bfed7802f2457fff1c2ba293aa9991035408206e5295f5387d9660a2f" id=4e11d743-f7ea-49ea-9630-487e2a238dbb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:43:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:42.276439110Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 61b0ee6bfed7802f2457fff1c2ba293aa9991035408206e5295f5387d9660a2f" id=4e11d743-f7ea-49ea-9630-487e2a238dbb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:43:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:42.276461833Z" level=info msg="runSandbox: unmounting shmPath for sandbox 61b0ee6bfed7802f2457fff1c2ba293aa9991035408206e5295f5387d9660a2f" id=4e11d743-f7ea-49ea-9630-487e2a238dbb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:43:42 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-61b0ee6bfed7802f2457fff1c2ba293aa9991035408206e5295f5387d9660a2f-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:43:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:42.282302101Z" level=info msg="runSandbox: removing pod sandbox from storage: 61b0ee6bfed7802f2457fff1c2ba293aa9991035408206e5295f5387d9660a2f" id=4e11d743-f7ea-49ea-9630-487e2a238dbb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:43:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:42.283917482Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=4e11d743-f7ea-49ea-9630-487e2a238dbb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:43:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:42.283951654Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=4e11d743-f7ea-49ea-9630-487e2a238dbb name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:43:42 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:43:42.284160 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(61b0ee6bfed7802f2457fff1c2ba293aa9991035408206e5295f5387d9660a2f): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:43:42 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:43:42.284215 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(61b0ee6bfed7802f2457fff1c2ba293aa9991035408206e5295f5387d9660a2f): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:43:42 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:43:42.284270 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(61b0ee6bfed7802f2457fff1c2ba293aa9991035408206e5295f5387d9660a2f): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:43:42 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:43:42.284341 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(61b0ee6bfed7802f2457fff1c2ba293aa9991035408206e5295f5387d9660a2f): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 19:43:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:43:44.872511 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:43:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:43:44.872569 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:43:45 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:43:45.216688 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:43:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:45.217065945Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=0982bac5-7871-44fc-b98d-2fe573f6b79f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:43:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:45.217125261Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:43:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:45.222792902Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:29e54f0312d507de3deeba25efeba94f910b87759154bd7d8366459a3b673b49 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/e42478bb-b7ab-407c-9866-c65d298a40b4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:43:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:45.222827679Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:43:46 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:43:46.216794 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 19:43:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:46.217236304Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=faffdaae-2b85-46a9-aafb-d9d8faa12cfe name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:43:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:46.217338549Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:43:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:46.222915285Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:dbb7128364be3936a43735cdc86bfb384405b3dd2b41a62afb0be2ee1d9d2af3 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/7b730609-ce6b-4907-8a53-a3cf634c9ba6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:43:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:46.222942769Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:43:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:43:51.216978 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:43:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:43:51.217347 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: 
no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:43:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:43:51.217597 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:43:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:43:51.217654 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:43:53 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:43:53.217023 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:43:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:53.217425658Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=e1421eba-042d-41b1-b3cc-d91f98c0e8e6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:43:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:53.217491057Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:43:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:53.223182220Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:e84f53b33e2ac8d105341dfde768e3aa5aa288505ce425871b8213a9c76a71ed UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/6a3463eb-a47b-424a-8c9d-73aa2112035a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:43:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:53.223217982Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:43:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:43:54.872053 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:43:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:43:54.872124 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: 
connection refused" Feb 23 19:43:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:43:54.872158 2199 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 19:43:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:43:54.872765 2199 kuberuntime_manager.go:659] "Message for Container of pod" containerName="csi-driver" containerStatusID={Type:cri-o ID:b1c11a9f34ac1de41d596751975e78d419ff2b840558c83020663936212e437d} pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" containerMessage="Container csi-driver failed liveness probe, will be restarted" Feb 23 19:43:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:43:54.872953 2199 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" containerID="cri-o://b1c11a9f34ac1de41d596751975e78d419ff2b840558c83020663936212e437d" gracePeriod=30 Feb 23 19:43:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:54.873183088Z" level=info msg="Stopping container: b1c11a9f34ac1de41d596751975e78d419ff2b840558c83020663936212e437d (timeout: 30s)" id=511e70f7-b61c-49d2-829a-fedfe4e58fa8 name=/runtime.v1.RuntimeService/StopContainer Feb 23 19:43:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:43:56.292236 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:43:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:43:56.292548 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" 
err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:43:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:43:56.292841 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:43:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:43:56.292873 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:43:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:43:58.635206256Z" level=warning msg="Failed to find container exit file for b1c11a9f34ac1de41d596751975e78d419ff2b840558c83020663936212e437d: timed out waiting for the condition" id=511e70f7-b61c-49d2-829a-fedfe4e58fa8 name=/runtime.v1.RuntimeService/StopContainer Feb 23 19:43:58 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-f66c9ff96da86203a2afbcfccf65173b0a38a441a76ff08cfd2c2debb9d8e6e7-merged.mount: Deactivated successfully. 
Feb 23 19:44:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:44:02.417930978Z" level=warning msg="Failed to find container exit file for b1c11a9f34ac1de41d596751975e78d419ff2b840558c83020663936212e437d: timed out waiting for the condition" id=511e70f7-b61c-49d2-829a-fedfe4e58fa8 name=/runtime.v1.RuntimeService/StopContainer Feb 23 19:44:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:44:02.419458438Z" level=info msg="Stopped container b1c11a9f34ac1de41d596751975e78d419ff2b840558c83020663936212e437d: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=511e70f7-b61c-49d2-829a-fedfe4e58fa8 name=/runtime.v1.RuntimeService/StopContainer Feb 23 19:44:02 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:44:02.419942 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:44:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:44:03.080410551Z" level=warning msg="Failed to find container exit file for b1c11a9f34ac1de41d596751975e78d419ff2b840558c83020663936212e437d: timed out waiting for the condition" id=da4da6bf-cfdc-47b6-b485-6ae86052261a name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:44:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:44:06.841933729Z" level=warning msg="Failed to find container exit file for b48df4a3b373b068ffc839f53126b2c8fcae703e08f0a076d517ad7cb7933786: timed out waiting for the condition" id=0b2201c0-97e8-4097-b4d4-9d6638de895f name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:44:06 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:44:06.842925 2199 generic.go:332] "Generic (PLEG): container finished" podID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 
containerID="b1c11a9f34ac1de41d596751975e78d419ff2b840558c83020663936212e437d" exitCode=-1 Feb 23 19:44:06 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:44:06.842970 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerDied Data:b1c11a9f34ac1de41d596751975e78d419ff2b840558c83020663936212e437d} Feb 23 19:44:06 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:44:06.843000 2199 scope.go:115] "RemoveContainer" containerID="b48df4a3b373b068ffc839f53126b2c8fcae703e08f0a076d517ad7cb7933786" Feb 23 19:44:07 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:44:07.845167 2199 scope.go:115] "RemoveContainer" containerID="b1c11a9f34ac1de41d596751975e78d419ff2b840558c83020663936212e437d" Feb 23 19:44:07 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:44:07.845603 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:44:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:44:10.604118668Z" level=warning msg="Failed to find container exit file for b48df4a3b373b068ffc839f53126b2c8fcae703e08f0a076d517ad7cb7933786: timed out waiting for the condition" id=61ad866a-5f5d-4bab-abf9-6ec36986324e name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:44:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:44:11.233764875Z" level=info msg="NetworkStart: stopping network for sandbox 059d8e148328568e6636b75dc54bedf63b45d06420a9d4a6de163d82083d1ef1" id=8d438e29-3c69-43c9-910c-98bc3622b9c2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:44:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:44:11.233889122Z" level=info 
msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:059d8e148328568e6636b75dc54bedf63b45d06420a9d4a6de163d82083d1ef1 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/4c6e758b-ad32-4ad8-91e3-2af6c59558e5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:44:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:44:11.233919606Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:44:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:44:11.233930468Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:44:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:44:11.233940067Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:44:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:44:14.364912532Z" level=warning msg="Failed to find container exit file for b48df4a3b373b068ffc839f53126b2c8fcae703e08f0a076d517ad7cb7933786: timed out waiting for the condition" id=5427f1f3-8100-47f3-a2a3-a88c21bffd13 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:44:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:44:14.365362610Z" level=info msg="Removing container: b48df4a3b373b068ffc839f53126b2c8fcae703e08f0a076d517ad7cb7933786" id=4d1d685a-d929-4560-a12c-74dc629da646 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 19:44:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:44:17.243188125Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(cc58cba3d6a7f1a874d52f6ebe88370c3ca436b05c29b00d5d7f5ab84ecd2820): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin 
type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=815a113d-9c62-4a5f-8eb3-79fdb6fa94e2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:44:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:44:17.243236378Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox cc58cba3d6a7f1a874d52f6ebe88370c3ca436b05c29b00d5d7f5ab84ecd2820" id=815a113d-9c62-4a5f-8eb3-79fdb6fa94e2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:44:17 ip-10-0-136-68 systemd[1]: run-utsns-46eaea30\x2d821b\x2d43dd\x2db226\x2dc3730bb01820.mount: Deactivated successfully. Feb 23 19:44:17 ip-10-0-136-68 systemd[1]: run-ipcns-46eaea30\x2d821b\x2d43dd\x2db226\x2dc3730bb01820.mount: Deactivated successfully. Feb 23 19:44:17 ip-10-0-136-68 systemd[1]: run-netns-46eaea30\x2d821b\x2d43dd\x2db226\x2dc3730bb01820.mount: Deactivated successfully. 
Feb 23 19:44:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:44:17.262327686Z" level=info msg="runSandbox: deleting pod ID cc58cba3d6a7f1a874d52f6ebe88370c3ca436b05c29b00d5d7f5ab84ecd2820 from idIndex" id=815a113d-9c62-4a5f-8eb3-79fdb6fa94e2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:44:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:44:17.262364726Z" level=info msg="runSandbox: removing pod sandbox cc58cba3d6a7f1a874d52f6ebe88370c3ca436b05c29b00d5d7f5ab84ecd2820" id=815a113d-9c62-4a5f-8eb3-79fdb6fa94e2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:44:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:44:17.262415115Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox cc58cba3d6a7f1a874d52f6ebe88370c3ca436b05c29b00d5d7f5ab84ecd2820" id=815a113d-9c62-4a5f-8eb3-79fdb6fa94e2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:44:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:44:17.262437839Z" level=info msg="runSandbox: unmounting shmPath for sandbox cc58cba3d6a7f1a874d52f6ebe88370c3ca436b05c29b00d5d7f5ab84ecd2820" id=815a113d-9c62-4a5f-8eb3-79fdb6fa94e2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:44:17 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-cc58cba3d6a7f1a874d52f6ebe88370c3ca436b05c29b00d5d7f5ab84ecd2820-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:44:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:44:17.267315907Z" level=info msg="runSandbox: removing pod sandbox from storage: cc58cba3d6a7f1a874d52f6ebe88370c3ca436b05c29b00d5d7f5ab84ecd2820" id=815a113d-9c62-4a5f-8eb3-79fdb6fa94e2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:44:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:44:17.268748211Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=815a113d-9c62-4a5f-8eb3-79fdb6fa94e2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:44:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:44:17.268775297Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=815a113d-9c62-4a5f-8eb3-79fdb6fa94e2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:44:17 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:44:17.268979 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(cc58cba3d6a7f1a874d52f6ebe88370c3ca436b05c29b00d5d7f5ab84ecd2820): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:44:17 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:44:17.269173 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(cc58cba3d6a7f1a874d52f6ebe88370c3ca436b05c29b00d5d7f5ab84ecd2820): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:44:17 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:44:17.269213 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(cc58cba3d6a7f1a874d52f6ebe88370c3ca436b05c29b00d5d7f5ab84ecd2820): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:44:17 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:44:17.269351 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(cc58cba3d6a7f1a874d52f6ebe88370c3ca436b05c29b00d5d7f5ab84ecd2820): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 19:44:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:44:18.114064109Z" level=warning msg="Failed to find container exit file for b48df4a3b373b068ffc839f53126b2c8fcae703e08f0a076d517ad7cb7933786: timed out waiting for the condition" id=4d1d685a-d929-4560-a12c-74dc629da646 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 19:44:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:44:18.126715069Z" level=info msg="Removed container b48df4a3b373b068ffc839f53126b2c8fcae703e08f0a076d517ad7cb7933786: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=4d1d685a-d929-4560-a12c-74dc629da646 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 19:44:18 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:44:18.216810 2199 scope.go:115] "RemoveContainer" containerID="b1c11a9f34ac1de41d596751975e78d419ff2b840558c83020663936212e437d" Feb 23 19:44:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:44:18.217451 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:44:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:44:22.623986142Z" level=warning msg="Failed to find container exit file for b1c11a9f34ac1de41d596751975e78d419ff2b840558c83020663936212e437d: timed out waiting for the condition" id=977e3700-d3aa-4429-9036-8407dd614b7d name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:44:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:44:26.292513 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code 
= NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:44:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:44:26.292836 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:44:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:44:26.293094 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:44:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:44:26.293125 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:44:30 ip-10-0-136-68 kubenswrapper[2199]: 
I0223 19:44:30.217566 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:44:30 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:44:30.217750 2199 scope.go:115] "RemoveContainer" containerID="b1c11a9f34ac1de41d596751975e78d419ff2b840558c83020663936212e437d" Feb 23 19:44:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:44:30.217993435Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=84738cc7-93cf-4a7d-9399-84113724c9bf name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:44:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:44:30.218063500Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:44:30 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:44:30.218414 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:44:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:44:30.224193481Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:92243121cd7978f4a1b32b8d6576a6e48fa76e1ff059efe32c280cbd5f910f78 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/f39a890c-a523-4646-bd55-dc0de4ba923d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:44:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:44:30.224228373Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:44:30 ip-10-0-136-68 
crio[2158]: time="2023-02-23 19:44:30.235182223Z" level=info msg="NetworkStart: stopping network for sandbox 29e54f0312d507de3deeba25efeba94f910b87759154bd7d8366459a3b673b49" id=0982bac5-7871-44fc-b98d-2fe573f6b79f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:44:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:44:30.235296826Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:29e54f0312d507de3deeba25efeba94f910b87759154bd7d8366459a3b673b49 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/e42478bb-b7ab-407c-9866-c65d298a40b4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:44:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:44:30.235321878Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:44:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:44:30.235329593Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:44:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:44:30.235336419Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:44:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:44:31.234301317Z" level=info msg="NetworkStart: stopping network for sandbox dbb7128364be3936a43735cdc86bfb384405b3dd2b41a62afb0be2ee1d9d2af3" id=faffdaae-2b85-46a9-aafb-d9d8faa12cfe name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:44:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:44:31.234416687Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:dbb7128364be3936a43735cdc86bfb384405b3dd2b41a62afb0be2ee1d9d2af3 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/7b730609-ce6b-4907-8a53-a3cf634c9ba6 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] 
Aliases:map[]}" Feb 23 19:44:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:44:31.234452823Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:44:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:44:31.234460832Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:44:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:44:31.234467724Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:44:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:44:38.235057812Z" level=info msg="NetworkStart: stopping network for sandbox e84f53b33e2ac8d105341dfde768e3aa5aa288505ce425871b8213a9c76a71ed" id=e1421eba-042d-41b1-b3cc-d91f98c0e8e6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:44:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:44:38.235176721Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:e84f53b33e2ac8d105341dfde768e3aa5aa288505ce425871b8213a9c76a71ed UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/6a3463eb-a47b-424a-8c9d-73aa2112035a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:44:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:44:38.235218842Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:44:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:44:38.235231310Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:44:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:44:38.235240837Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:44:43 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:44:43.216957 2199 
scope.go:115] "RemoveContainer" containerID="b1c11a9f34ac1de41d596751975e78d419ff2b840558c83020663936212e437d" Feb 23 19:44:43 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:44:43.217396 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:44:56 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:44:56.216742 2199 scope.go:115] "RemoveContainer" containerID="b1c11a9f34ac1de41d596751975e78d419ff2b840558c83020663936212e437d" Feb 23 19:44:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:44:56.217322 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:44:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:44:56.244099840Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(059d8e148328568e6636b75dc54bedf63b45d06420a9d4a6de163d82083d1ef1): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=8d438e29-3c69-43c9-910c-98bc3622b9c2 
name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:44:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:44:56.244139822Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 059d8e148328568e6636b75dc54bedf63b45d06420a9d4a6de163d82083d1ef1" id=8d438e29-3c69-43c9-910c-98bc3622b9c2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:44:56 ip-10-0-136-68 systemd[1]: run-utsns-4c6e758b\x2dad32\x2d4ad8\x2d91e3\x2d2af6c59558e5.mount: Deactivated successfully. Feb 23 19:44:56 ip-10-0-136-68 systemd[1]: run-ipcns-4c6e758b\x2dad32\x2d4ad8\x2d91e3\x2d2af6c59558e5.mount: Deactivated successfully. Feb 23 19:44:56 ip-10-0-136-68 systemd[1]: run-netns-4c6e758b\x2dad32\x2d4ad8\x2d91e3\x2d2af6c59558e5.mount: Deactivated successfully. Feb 23 19:44:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:44:56.271331440Z" level=info msg="runSandbox: deleting pod ID 059d8e148328568e6636b75dc54bedf63b45d06420a9d4a6de163d82083d1ef1 from idIndex" id=8d438e29-3c69-43c9-910c-98bc3622b9c2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:44:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:44:56.271368277Z" level=info msg="runSandbox: removing pod sandbox 059d8e148328568e6636b75dc54bedf63b45d06420a9d4a6de163d82083d1ef1" id=8d438e29-3c69-43c9-910c-98bc3622b9c2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:44:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:44:56.271411816Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 059d8e148328568e6636b75dc54bedf63b45d06420a9d4a6de163d82083d1ef1" id=8d438e29-3c69-43c9-910c-98bc3622b9c2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:44:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:44:56.271436101Z" level=info msg="runSandbox: unmounting shmPath for sandbox 059d8e148328568e6636b75dc54bedf63b45d06420a9d4a6de163d82083d1ef1" id=8d438e29-3c69-43c9-910c-98bc3622b9c2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:44:56 ip-10-0-136-68 systemd[1]: 
run-containers-storage-overlay\x2dcontainers-059d8e148328568e6636b75dc54bedf63b45d06420a9d4a6de163d82083d1ef1-userdata-shm.mount: Deactivated successfully. Feb 23 19:44:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:44:56.276294550Z" level=info msg="runSandbox: removing pod sandbox from storage: 059d8e148328568e6636b75dc54bedf63b45d06420a9d4a6de163d82083d1ef1" id=8d438e29-3c69-43c9-910c-98bc3622b9c2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:44:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:44:56.277767478Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=8d438e29-3c69-43c9-910c-98bc3622b9c2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:44:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:44:56.277800965Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=8d438e29-3c69-43c9-910c-98bc3622b9c2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:44:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:44:56.277961 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(059d8e148328568e6636b75dc54bedf63b45d06420a9d4a6de163d82083d1ef1): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:44:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:44:56.278006 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(059d8e148328568e6636b75dc54bedf63b45d06420a9d4a6de163d82083d1ef1): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:44:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:44:56.278029 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(059d8e148328568e6636b75dc54bedf63b45d06420a9d4a6de163d82083d1ef1): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:44:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:44:56.278084 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(059d8e148328568e6636b75dc54bedf63b45d06420a9d4a6de163d82083d1ef1): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 19:44:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:44:56.292413 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:44:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:44:56.292610 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:44:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:44:56.292830 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:44:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:44:56.292859 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:45:08 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:45:08.216866 2199 scope.go:115] "RemoveContainer" containerID="b1c11a9f34ac1de41d596751975e78d419ff2b840558c83020663936212e437d" Feb 23 19:45:08 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:45:08.217452 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:45:09 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:45:09.216372 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:45:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:45:09.216797370Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=1692aff2-f5b2-4625-b73d-d1ea8858964c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:45:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:45:09.216871860Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:45:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:45:09.222084906Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:39efd875d68c2f55a63ea082d7c149d544cd993e8bd7e2fe3ccfd313bda97922 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/5d4ce020-2f79-4dcf-9efc-2b27f6c04ff9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:45:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:45:09.222113262Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:45:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:45:15.236504812Z" level=info msg="NetworkStart: stopping network for sandbox 92243121cd7978f4a1b32b8d6576a6e48fa76e1ff059efe32c280cbd5f910f78" id=84738cc7-93cf-4a7d-9399-84113724c9bf name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:45:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:45:15.236624015Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:92243121cd7978f4a1b32b8d6576a6e48fa76e1ff059efe32c280cbd5f910f78 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/f39a890c-a523-4646-bd55-dc0de4ba923d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:45:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 
19:45:15.236652849Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:45:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:45:15.236664338Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:45:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:45:15.236674369Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:45:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:45:15.244456953Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(29e54f0312d507de3deeba25efeba94f910b87759154bd7d8366459a3b673b49): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=0982bac5-7871-44fc-b98d-2fe573f6b79f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:45:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:45:15.244493987Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 29e54f0312d507de3deeba25efeba94f910b87759154bd7d8366459a3b673b49" id=0982bac5-7871-44fc-b98d-2fe573f6b79f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:45:15 ip-10-0-136-68 systemd[1]: run-utsns-e42478bb\x2db7ab\x2d407c\x2d9866\x2dc65d298a40b4.mount: Deactivated successfully. Feb 23 19:45:15 ip-10-0-136-68 systemd[1]: run-ipcns-e42478bb\x2db7ab\x2d407c\x2d9866\x2dc65d298a40b4.mount: Deactivated successfully. 
Feb 23 19:45:15 ip-10-0-136-68 systemd[1]: run-netns-e42478bb\x2db7ab\x2d407c\x2d9866\x2dc65d298a40b4.mount: Deactivated successfully. Feb 23 19:45:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:45:15.264316043Z" level=info msg="runSandbox: deleting pod ID 29e54f0312d507de3deeba25efeba94f910b87759154bd7d8366459a3b673b49 from idIndex" id=0982bac5-7871-44fc-b98d-2fe573f6b79f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:45:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:45:15.264357228Z" level=info msg="runSandbox: removing pod sandbox 29e54f0312d507de3deeba25efeba94f910b87759154bd7d8366459a3b673b49" id=0982bac5-7871-44fc-b98d-2fe573f6b79f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:45:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:45:15.264386644Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 29e54f0312d507de3deeba25efeba94f910b87759154bd7d8366459a3b673b49" id=0982bac5-7871-44fc-b98d-2fe573f6b79f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:45:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:45:15.264406928Z" level=info msg="runSandbox: unmounting shmPath for sandbox 29e54f0312d507de3deeba25efeba94f910b87759154bd7d8366459a3b673b49" id=0982bac5-7871-44fc-b98d-2fe573f6b79f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:45:15 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-29e54f0312d507de3deeba25efeba94f910b87759154bd7d8366459a3b673b49-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:45:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:45:15.270307572Z" level=info msg="runSandbox: removing pod sandbox from storage: 29e54f0312d507de3deeba25efeba94f910b87759154bd7d8366459a3b673b49" id=0982bac5-7871-44fc-b98d-2fe573f6b79f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:45:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:45:15.271848801Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=0982bac5-7871-44fc-b98d-2fe573f6b79f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:45:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:45:15.271881019Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=0982bac5-7871-44fc-b98d-2fe573f6b79f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:45:15 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:45:15.272093 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(29e54f0312d507de3deeba25efeba94f910b87759154bd7d8366459a3b673b49): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:45:15 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:45:15.272157 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(29e54f0312d507de3deeba25efeba94f910b87759154bd7d8366459a3b673b49): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:45:15 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:45:15.272182 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(29e54f0312d507de3deeba25efeba94f910b87759154bd7d8366459a3b673b49): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:45:15 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:45:15.272239 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(29e54f0312d507de3deeba25efeba94f910b87759154bd7d8366459a3b673b49): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 19:45:16 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:45:16.217561 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:45:16 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:45:16.217860 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:45:16 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:45:16.218196 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:45:16 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:45:16.218277 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:45:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:45:16.244337660Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(dbb7128364be3936a43735cdc86bfb384405b3dd2b41a62afb0be2ee1d9d2af3): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=faffdaae-2b85-46a9-aafb-d9d8faa12cfe name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:45:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:45:16.244378616Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox dbb7128364be3936a43735cdc86bfb384405b3dd2b41a62afb0be2ee1d9d2af3" id=faffdaae-2b85-46a9-aafb-d9d8faa12cfe name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:45:16 ip-10-0-136-68 systemd[1]: run-utsns-7b730609\x2dce6b\x2d4907\x2d8a53\x2da3cf634c9ba6.mount: Deactivated successfully. Feb 23 19:45:16 ip-10-0-136-68 systemd[1]: run-ipcns-7b730609\x2dce6b\x2d4907\x2d8a53\x2da3cf634c9ba6.mount: Deactivated successfully. Feb 23 19:45:16 ip-10-0-136-68 systemd[1]: run-netns-7b730609\x2dce6b\x2d4907\x2d8a53\x2da3cf634c9ba6.mount: Deactivated successfully. 
Feb 23 19:45:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:45:16.274345773Z" level=info msg="runSandbox: deleting pod ID dbb7128364be3936a43735cdc86bfb384405b3dd2b41a62afb0be2ee1d9d2af3 from idIndex" id=faffdaae-2b85-46a9-aafb-d9d8faa12cfe name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:45:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:45:16.274381919Z" level=info msg="runSandbox: removing pod sandbox dbb7128364be3936a43735cdc86bfb384405b3dd2b41a62afb0be2ee1d9d2af3" id=faffdaae-2b85-46a9-aafb-d9d8faa12cfe name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:45:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:45:16.274415498Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox dbb7128364be3936a43735cdc86bfb384405b3dd2b41a62afb0be2ee1d9d2af3" id=faffdaae-2b85-46a9-aafb-d9d8faa12cfe name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:45:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:45:16.274441563Z" level=info msg="runSandbox: unmounting shmPath for sandbox dbb7128364be3936a43735cdc86bfb384405b3dd2b41a62afb0be2ee1d9d2af3" id=faffdaae-2b85-46a9-aafb-d9d8faa12cfe name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:45:16 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-dbb7128364be3936a43735cdc86bfb384405b3dd2b41a62afb0be2ee1d9d2af3-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:45:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:45:16.280294213Z" level=info msg="runSandbox: removing pod sandbox from storage: dbb7128364be3936a43735cdc86bfb384405b3dd2b41a62afb0be2ee1d9d2af3" id=faffdaae-2b85-46a9-aafb-d9d8faa12cfe name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:45:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:45:16.281709279Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=faffdaae-2b85-46a9-aafb-d9d8faa12cfe name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:45:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:45:16.281736191Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=faffdaae-2b85-46a9-aafb-d9d8faa12cfe name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:45:16 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:45:16.281897 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(dbb7128364be3936a43735cdc86bfb384405b3dd2b41a62afb0be2ee1d9d2af3): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 19:45:16 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:45:16.281946 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(dbb7128364be3936a43735cdc86bfb384405b3dd2b41a62afb0be2ee1d9d2af3): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4"
Feb 23 19:45:16 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:45:16.281970 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(dbb7128364be3936a43735cdc86bfb384405b3dd2b41a62afb0be2ee1d9d2af3): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4"
Feb 23 19:45:16 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:45:16.282026 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(dbb7128364be3936a43735cdc86bfb384405b3dd2b41a62afb0be2ee1d9d2af3): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f
Feb 23 19:45:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:45:19.217158 2199 scope.go:115] "RemoveContainer" containerID="b1c11a9f34ac1de41d596751975e78d419ff2b840558c83020663936212e437d"
Feb 23 19:45:19 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:45:19.217733 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 19:45:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:45:20.239734528Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3" id=1e2fe50e-a5be-4196-8ee6-a63474f584ca name=/runtime.v1.ImageService/ImageStatus
Feb 23 19:45:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:45:20.239902692Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:51cb7555d0a960fc0daae74a5b5c47c19fd1ae7bd31e2616ebc88f696f511e4d,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3],Size_:351828271,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=1e2fe50e-a5be-4196-8ee6-a63474f584ca name=/runtime.v1.ImageService/ImageStatus
Feb 23 19:45:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:45:23.245507398Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(e84f53b33e2ac8d105341dfde768e3aa5aa288505ce425871b8213a9c76a71ed): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e1421eba-042d-41b1-b3cc-d91f98c0e8e6 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:45:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:45:23.245567082Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e84f53b33e2ac8d105341dfde768e3aa5aa288505ce425871b8213a9c76a71ed" id=e1421eba-042d-41b1-b3cc-d91f98c0e8e6 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:45:23 ip-10-0-136-68 systemd[1]: run-utsns-6a3463eb\x2da47b\x2d424a\x2d8c9d\x2d73aa2112035a.mount: Deactivated successfully.
Feb 23 19:45:23 ip-10-0-136-68 systemd[1]: run-ipcns-6a3463eb\x2da47b\x2d424a\x2d8c9d\x2d73aa2112035a.mount: Deactivated successfully.
Feb 23 19:45:23 ip-10-0-136-68 systemd[1]: run-netns-6a3463eb\x2da47b\x2d424a\x2d8c9d\x2d73aa2112035a.mount: Deactivated successfully.
Feb 23 19:45:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:45:23.271315291Z" level=info msg="runSandbox: deleting pod ID e84f53b33e2ac8d105341dfde768e3aa5aa288505ce425871b8213a9c76a71ed from idIndex" id=e1421eba-042d-41b1-b3cc-d91f98c0e8e6 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:45:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:45:23.271350816Z" level=info msg="runSandbox: removing pod sandbox e84f53b33e2ac8d105341dfde768e3aa5aa288505ce425871b8213a9c76a71ed" id=e1421eba-042d-41b1-b3cc-d91f98c0e8e6 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:45:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:45:23.271383257Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e84f53b33e2ac8d105341dfde768e3aa5aa288505ce425871b8213a9c76a71ed" id=e1421eba-042d-41b1-b3cc-d91f98c0e8e6 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:45:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:45:23.271408701Z" level=info msg="runSandbox: unmounting shmPath for sandbox e84f53b33e2ac8d105341dfde768e3aa5aa288505ce425871b8213a9c76a71ed" id=e1421eba-042d-41b1-b3cc-d91f98c0e8e6 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:45:23 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-e84f53b33e2ac8d105341dfde768e3aa5aa288505ce425871b8213a9c76a71ed-userdata-shm.mount: Deactivated successfully.
Feb 23 19:45:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:45:23.278297689Z" level=info msg="runSandbox: removing pod sandbox from storage: e84f53b33e2ac8d105341dfde768e3aa5aa288505ce425871b8213a9c76a71ed" id=e1421eba-042d-41b1-b3cc-d91f98c0e8e6 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:45:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:45:23.279757931Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=e1421eba-042d-41b1-b3cc-d91f98c0e8e6 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:45:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:45:23.279796133Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=e1421eba-042d-41b1-b3cc-d91f98c0e8e6 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:45:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:45:23.279992 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(e84f53b33e2ac8d105341dfde768e3aa5aa288505ce425871b8213a9c76a71ed): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 19:45:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:45:23.280075 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(e84f53b33e2ac8d105341dfde768e3aa5aa288505ce425871b8213a9c76a71ed): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 19:45:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:45:23.280116 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(e84f53b33e2ac8d105341dfde768e3aa5aa288505ce425871b8213a9c76a71ed): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 19:45:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:45:23.280208 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(e84f53b33e2ac8d105341dfde768e3aa5aa288505ce425871b8213a9c76a71ed): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77
Feb 23 19:45:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:45:26.291795 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:45:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:45:26.292067 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:45:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:45:26.292322 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:45:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:45:26.292352 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 19:45:29 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:45:29.216695 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-657v4"
Feb 23 19:45:29 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:45:29.216695 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz"
Feb 23 19:45:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:45:29.217130294Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=52333e9e-f9cb-4369-8525-01274bc66504 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:45:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:45:29.217196906Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 19:45:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:45:29.217148679Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=b25228bd-1c06-42c1-a24e-2383d6394604 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:45:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:45:29.217309735Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 19:45:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:45:29.224098810Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:59742884a499911dd73eabe44d2a046701866d8c16e0ca844f99d3cd66f67436 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/8de797ff-9eee-47d2-9d30-703bd589173a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:45:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:45:29.224123901Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:45:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:45:29.224642062Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:63b50d584d69dd6e6e93fcf2f22651dc7cc970aada0ee27be7431f0af25c70f9 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/5ac498d4-3f92-4af9-809d-18ee09bd1a91 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:45:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:45:29.224664296Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:45:30 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:45:30.217440 2199 scope.go:115] "RemoveContainer" containerID="b1c11a9f34ac1de41d596751975e78d419ff2b840558c83020663936212e437d"
Feb 23 19:45:30 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:45:30.217848 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 19:45:37 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:45:37.217025 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 19:45:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:45:37.217425102Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=94485373-3149-4b00-9e35-dceecef6dab5 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:45:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:45:37.217491881Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 19:45:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:45:37.222978046Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:64f43c4bce6c76a39c85c6d88290507a69075ae9f8945a1c3bd06c0785993e5c UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/6f199cbe-7faa-42be-aeae-43e84e3fc343 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:45:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:45:37.223015263Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:45:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:45:44.217453 2199 scope.go:115] "RemoveContainer" containerID="b1c11a9f34ac1de41d596751975e78d419ff2b840558c83020663936212e437d"
Feb 23 19:45:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:45:44.217918 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 19:45:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:45:54.234451833Z" level=info msg="NetworkStart: stopping network for sandbox 39efd875d68c2f55a63ea082d7c149d544cd993e8bd7e2fe3ccfd313bda97922" id=1692aff2-f5b2-4625-b73d-d1ea8858964c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:45:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:45:54.234577945Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:39efd875d68c2f55a63ea082d7c149d544cd993e8bd7e2fe3ccfd313bda97922 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/5d4ce020-2f79-4dcf-9efc-2b27f6c04ff9 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:45:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:45:54.234604928Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 19:45:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:45:54.234612458Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 19:45:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:45:54.234618735Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:45:56 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:45:56.217161 2199 scope.go:115] "RemoveContainer" containerID="b1c11a9f34ac1de41d596751975e78d419ff2b840558c83020663936212e437d"
Feb 23 19:45:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:45:56.217730 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 19:45:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:45:56.292069 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:45:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:45:56.292305 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:45:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:45:56.292484 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:45:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:45:56.292506 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 19:46:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:00.245920080Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(92243121cd7978f4a1b32b8d6576a6e48fa76e1ff059efe32c280cbd5f910f78): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=84738cc7-93cf-4a7d-9399-84113724c9bf name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:46:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:00.245968168Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 92243121cd7978f4a1b32b8d6576a6e48fa76e1ff059efe32c280cbd5f910f78" id=84738cc7-93cf-4a7d-9399-84113724c9bf name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:46:00 ip-10-0-136-68 systemd[1]: run-utsns-f39a890c\x2da523\x2d4646\x2dbd55\x2ddc0de4ba923d.mount: Deactivated successfully.
Feb 23 19:46:00 ip-10-0-136-68 systemd[1]: run-ipcns-f39a890c\x2da523\x2d4646\x2dbd55\x2ddc0de4ba923d.mount: Deactivated successfully.
Feb 23 19:46:00 ip-10-0-136-68 systemd[1]: run-netns-f39a890c\x2da523\x2d4646\x2dbd55\x2ddc0de4ba923d.mount: Deactivated successfully.
Feb 23 19:46:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:00.273327114Z" level=info msg="runSandbox: deleting pod ID 92243121cd7978f4a1b32b8d6576a6e48fa76e1ff059efe32c280cbd5f910f78 from idIndex" id=84738cc7-93cf-4a7d-9399-84113724c9bf name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:46:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:00.273366205Z" level=info msg="runSandbox: removing pod sandbox 92243121cd7978f4a1b32b8d6576a6e48fa76e1ff059efe32c280cbd5f910f78" id=84738cc7-93cf-4a7d-9399-84113724c9bf name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:46:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:00.273398355Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 92243121cd7978f4a1b32b8d6576a6e48fa76e1ff059efe32c280cbd5f910f78" id=84738cc7-93cf-4a7d-9399-84113724c9bf name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:46:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:00.273422919Z" level=info msg="runSandbox: unmounting shmPath for sandbox 92243121cd7978f4a1b32b8d6576a6e48fa76e1ff059efe32c280cbd5f910f78" id=84738cc7-93cf-4a7d-9399-84113724c9bf name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:46:00 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-92243121cd7978f4a1b32b8d6576a6e48fa76e1ff059efe32c280cbd5f910f78-userdata-shm.mount: Deactivated successfully.
Feb 23 19:46:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:00.280304606Z" level=info msg="runSandbox: removing pod sandbox from storage: 92243121cd7978f4a1b32b8d6576a6e48fa76e1ff059efe32c280cbd5f910f78" id=84738cc7-93cf-4a7d-9399-84113724c9bf name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:46:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:00.281818395Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=84738cc7-93cf-4a7d-9399-84113724c9bf name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:46:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:00.281847047Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=84738cc7-93cf-4a7d-9399-84113724c9bf name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:46:00 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:46:00.282020 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(92243121cd7978f4a1b32b8d6576a6e48fa76e1ff059efe32c280cbd5f910f78): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 19:46:00 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:46:00.282073 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(92243121cd7978f4a1b32b8d6576a6e48fa76e1ff059efe32c280cbd5f910f78): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 19:46:00 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:46:00.282100 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(92243121cd7978f4a1b32b8d6576a6e48fa76e1ff059efe32c280cbd5f910f78): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 19:46:00 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:46:00.282154 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(92243121cd7978f4a1b32b8d6576a6e48fa76e1ff059efe32c280cbd5f910f78): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1
Feb 23 19:46:08 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:46:08.216728 2199 scope.go:115] "RemoveContainer" containerID="b1c11a9f34ac1de41d596751975e78d419ff2b840558c83020663936212e437d"
Feb 23 19:46:08 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:46:08.217108 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 19:46:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:14.236687730Z" level=info msg="NetworkStart: stopping network for sandbox 63b50d584d69dd6e6e93fcf2f22651dc7cc970aada0ee27be7431f0af25c70f9" id=52333e9e-f9cb-4369-8525-01274bc66504 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:46:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:14.236791935Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:63b50d584d69dd6e6e93fcf2f22651dc7cc970aada0ee27be7431f0af25c70f9 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/5ac498d4-3f92-4af9-809d-18ee09bd1a91 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:46:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:14.236825001Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 19:46:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:14.236833441Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 19:46:14 ip-10-0-136-68 crio[2158]: time="2023-02-23
19:46:14.236839917Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:46:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:14.238758938Z" level=info msg="NetworkStart: stopping network for sandbox 59742884a499911dd73eabe44d2a046701866d8c16e0ca844f99d3cd66f67436" id=b25228bd-1c06-42c1-a24e-2383d6394604 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:46:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:14.238861011Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:59742884a499911dd73eabe44d2a046701866d8c16e0ca844f99d3cd66f67436 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/8de797ff-9eee-47d2-9d30-703bd589173a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:46:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:14.238899170Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:46:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:14.238910672Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:46:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:14.238920887Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:46:15 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:46:15.216613 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:46:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:15.217034001Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=5033acdc-4c43-4050-8a03-3694ab833100 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:46:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:15.217097190Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:46:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:15.222218968Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:13d32b7b195c2718ffae3397ac5356c69786981e78b987f5f2704f231f86d55e UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/693a513f-dffa-4c6f-8db1-2880787d9cd2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:46:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:15.222277006Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:46:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:46:20.217103 2199 scope.go:115] "RemoveContainer" containerID="b1c11a9f34ac1de41d596751975e78d419ff2b840558c83020663936212e437d" Feb 23 19:46:20 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:46:20.217586 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:46:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:22.234388323Z" level=info msg="NetworkStart: 
stopping network for sandbox 64f43c4bce6c76a39c85c6d88290507a69075ae9f8945a1c3bd06c0785993e5c" id=94485373-3149-4b00-9e35-dceecef6dab5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:46:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:22.234499989Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:64f43c4bce6c76a39c85c6d88290507a69075ae9f8945a1c3bd06c0785993e5c UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/6f199cbe-7faa-42be-aeae-43e84e3fc343 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:46:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:22.234531432Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:46:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:22.234542050Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:46:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:22.234549797Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:46:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:46:26.292540 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:46:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:46:26.292798 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: 
checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:46:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:46:26.293024 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:46:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:46:26.293052 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:46:29 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:46:29.217528 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:46:29 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:46:29.217882 2199 remote_runtime.go:479] 
"ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:46:29 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:46:29.218103 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:46:29 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:46:29.218142 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:46:32 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:46:32.216475 2199 scope.go:115] "RemoveContainer" containerID="b1c11a9f34ac1de41d596751975e78d419ff2b840558c83020663936212e437d" Feb 23 19:46:32 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:46:32.217050 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver 
pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:46:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:39.243604353Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(39efd875d68c2f55a63ea082d7c149d544cd993e8bd7e2fe3ccfd313bda97922): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=1692aff2-f5b2-4625-b73d-d1ea8858964c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:46:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:39.243653886Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 39efd875d68c2f55a63ea082d7c149d544cd993e8bd7e2fe3ccfd313bda97922" id=1692aff2-f5b2-4625-b73d-d1ea8858964c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:46:39 ip-10-0-136-68 systemd[1]: run-utsns-5d4ce020\x2d2f79\x2d4dcf\x2d9efc\x2d2b27f6c04ff9.mount: Deactivated successfully. Feb 23 19:46:39 ip-10-0-136-68 systemd[1]: run-ipcns-5d4ce020\x2d2f79\x2d4dcf\x2d9efc\x2d2b27f6c04ff9.mount: Deactivated successfully. Feb 23 19:46:39 ip-10-0-136-68 systemd[1]: run-netns-5d4ce020\x2d2f79\x2d4dcf\x2d9efc\x2d2b27f6c04ff9.mount: Deactivated successfully. 
Feb 23 19:46:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:39.285337616Z" level=info msg="runSandbox: deleting pod ID 39efd875d68c2f55a63ea082d7c149d544cd993e8bd7e2fe3ccfd313bda97922 from idIndex" id=1692aff2-f5b2-4625-b73d-d1ea8858964c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:46:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:39.285370341Z" level=info msg="runSandbox: removing pod sandbox 39efd875d68c2f55a63ea082d7c149d544cd993e8bd7e2fe3ccfd313bda97922" id=1692aff2-f5b2-4625-b73d-d1ea8858964c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:46:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:39.285421696Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 39efd875d68c2f55a63ea082d7c149d544cd993e8bd7e2fe3ccfd313bda97922" id=1692aff2-f5b2-4625-b73d-d1ea8858964c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:46:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:39.285436222Z" level=info msg="runSandbox: unmounting shmPath for sandbox 39efd875d68c2f55a63ea082d7c149d544cd993e8bd7e2fe3ccfd313bda97922" id=1692aff2-f5b2-4625-b73d-d1ea8858964c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:46:39 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-39efd875d68c2f55a63ea082d7c149d544cd993e8bd7e2fe3ccfd313bda97922-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:46:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:39.290308515Z" level=info msg="runSandbox: removing pod sandbox from storage: 39efd875d68c2f55a63ea082d7c149d544cd993e8bd7e2fe3ccfd313bda97922" id=1692aff2-f5b2-4625-b73d-d1ea8858964c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:46:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:39.291818207Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=1692aff2-f5b2-4625-b73d-d1ea8858964c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:46:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:39.291844633Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=1692aff2-f5b2-4625-b73d-d1ea8858964c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:46:39 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:46:39.292057 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(39efd875d68c2f55a63ea082d7c149d544cd993e8bd7e2fe3ccfd313bda97922): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:46:39 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:46:39.292129 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(39efd875d68c2f55a63ea082d7c149d544cd993e8bd7e2fe3ccfd313bda97922): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:46:39 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:46:39.292166 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(39efd875d68c2f55a63ea082d7c149d544cd993e8bd7e2fe3ccfd313bda97922): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:46:39 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:46:39.292308 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(39efd875d68c2f55a63ea082d7c149d544cd993e8bd7e2fe3ccfd313bda97922): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 19:46:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:46:44.216769 2199 scope.go:115] "RemoveContainer" containerID="b1c11a9f34ac1de41d596751975e78d419ff2b840558c83020663936212e437d" Feb 23 19:46:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:46:44.217386 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:46:51 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:46:51.216925 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:46:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:51.217264737Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=824f0c5b-cd18-4546-8659-05b03a295bdc name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:46:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:51.217319493Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:46:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:51.222369155Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:84e03d831ceaf6f346e21ac343272e3f8e159187396a59c73adc2defe020f12e UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/1aa3a877-69c4-43a2-8f18-ff091a94707d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:46:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:51.222395075Z" 
level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:46:55 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:46:55.217236 2199 scope.go:115] "RemoveContainer" containerID="b1c11a9f34ac1de41d596751975e78d419ff2b840558c83020663936212e437d" Feb 23 19:46:55 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:46:55.217666 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:46:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:46:56.292716 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:46:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:46:56.293037 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:46:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:46:56.293326 2199 remote_runtime.go:479] "ExecSync cmd from runtime service 
failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:46:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:46:56.293365 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:46:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:59.246476021Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(63b50d584d69dd6e6e93fcf2f22651dc7cc970aada0ee27be7431f0af25c70f9): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=52333e9e-f9cb-4369-8525-01274bc66504 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:46:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:59.246532414Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 63b50d584d69dd6e6e93fcf2f22651dc7cc970aada0ee27be7431f0af25c70f9" id=52333e9e-f9cb-4369-8525-01274bc66504 
name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:46:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:59.248949211Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(59742884a499911dd73eabe44d2a046701866d8c16e0ca844f99d3cd66f67436): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b25228bd-1c06-42c1-a24e-2383d6394604 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:46:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:59.249001042Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 59742884a499911dd73eabe44d2a046701866d8c16e0ca844f99d3cd66f67436" id=b25228bd-1c06-42c1-a24e-2383d6394604 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:46:59 ip-10-0-136-68 systemd[1]: run-utsns-5ac498d4\x2d3f92\x2d4af9\x2d809d\x2d18ee09bd1a91.mount: Deactivated successfully. Feb 23 19:46:59 ip-10-0-136-68 systemd[1]: run-utsns-8de797ff\x2d9eee\x2d47d2\x2d9d30\x2d703bd589173a.mount: Deactivated successfully. Feb 23 19:46:59 ip-10-0-136-68 systemd[1]: run-ipcns-8de797ff\x2d9eee\x2d47d2\x2d9d30\x2d703bd589173a.mount: Deactivated successfully. Feb 23 19:46:59 ip-10-0-136-68 systemd[1]: run-ipcns-5ac498d4\x2d3f92\x2d4af9\x2d809d\x2d18ee09bd1a91.mount: Deactivated successfully. Feb 23 19:46:59 ip-10-0-136-68 systemd[1]: run-netns-8de797ff\x2d9eee\x2d47d2\x2d9d30\x2d703bd589173a.mount: Deactivated successfully. 
Feb 23 19:46:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:59.266318883Z" level=info msg="runSandbox: deleting pod ID 63b50d584d69dd6e6e93fcf2f22651dc7cc970aada0ee27be7431f0af25c70f9 from idIndex" id=52333e9e-f9cb-4369-8525-01274bc66504 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:46:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:59.266358757Z" level=info msg="runSandbox: removing pod sandbox 63b50d584d69dd6e6e93fcf2f22651dc7cc970aada0ee27be7431f0af25c70f9" id=52333e9e-f9cb-4369-8525-01274bc66504 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:46:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:59.266388400Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 63b50d584d69dd6e6e93fcf2f22651dc7cc970aada0ee27be7431f0af25c70f9" id=52333e9e-f9cb-4369-8525-01274bc66504 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:46:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:59.266401206Z" level=info msg="runSandbox: unmounting shmPath for sandbox 63b50d584d69dd6e6e93fcf2f22651dc7cc970aada0ee27be7431f0af25c70f9" id=52333e9e-f9cb-4369-8525-01274bc66504 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:46:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:59.266324933Z" level=info msg="runSandbox: deleting pod ID 59742884a499911dd73eabe44d2a046701866d8c16e0ca844f99d3cd66f67436 from idIndex" id=b25228bd-1c06-42c1-a24e-2383d6394604 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:46:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:59.266453231Z" level=info msg="runSandbox: removing pod sandbox 59742884a499911dd73eabe44d2a046701866d8c16e0ca844f99d3cd66f67436" id=b25228bd-1c06-42c1-a24e-2383d6394604 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:46:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:59.266471315Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 59742884a499911dd73eabe44d2a046701866d8c16e0ca844f99d3cd66f67436" id=b25228bd-1c06-42c1-a24e-2383d6394604 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:46:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:59.266481798Z" level=info msg="runSandbox: unmounting shmPath for sandbox 59742884a499911dd73eabe44d2a046701866d8c16e0ca844f99d3cd66f67436" id=b25228bd-1c06-42c1-a24e-2383d6394604 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:46:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:59.271324557Z" level=info msg="runSandbox: removing pod sandbox from storage: 63b50d584d69dd6e6e93fcf2f22651dc7cc970aada0ee27be7431f0af25c70f9" id=52333e9e-f9cb-4369-8525-01274bc66504 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:46:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:59.271327655Z" level=info msg="runSandbox: removing pod sandbox from storage: 59742884a499911dd73eabe44d2a046701866d8c16e0ca844f99d3cd66f67436" id=b25228bd-1c06-42c1-a24e-2383d6394604 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:46:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:59.272851205Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=52333e9e-f9cb-4369-8525-01274bc66504 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:46:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:59.272915839Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=52333e9e-f9cb-4369-8525-01274bc66504 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:46:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:46:59.273217 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(63b50d584d69dd6e6e93fcf2f22651dc7cc970aada0ee27be7431f0af25c70f9): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 19:46:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:46:59.273319 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(63b50d584d69dd6e6e93fcf2f22651dc7cc970aada0ee27be7431f0af25c70f9): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz"
Feb 23 19:46:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:46:59.273363 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(63b50d584d69dd6e6e93fcf2f22651dc7cc970aada0ee27be7431f0af25c70f9): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz"
Feb 23 19:46:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:46:59.273428 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(63b50d584d69dd6e6e93fcf2f22651dc7cc970aada0ee27be7431f0af25c70f9): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7
Feb 23 19:46:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:59.274418492Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=b25228bd-1c06-42c1-a24e-2383d6394604 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:46:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:46:59.274451398Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=b25228bd-1c06-42c1-a24e-2383d6394604 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:46:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:46:59.274604 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(59742884a499911dd73eabe44d2a046701866d8c16e0ca844f99d3cd66f67436): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 19:46:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:46:59.274643 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(59742884a499911dd73eabe44d2a046701866d8c16e0ca844f99d3cd66f67436): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4"
Feb 23 19:46:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:46:59.274664 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(59742884a499911dd73eabe44d2a046701866d8c16e0ca844f99d3cd66f67436): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4"
Feb 23 19:46:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:46:59.274711 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(59742884a499911dd73eabe44d2a046701866d8c16e0ca844f99d3cd66f67436): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f
Feb 23 19:47:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:47:00.236456103Z" level=info msg="NetworkStart: stopping network for sandbox 13d32b7b195c2718ffae3397ac5356c69786981e78b987f5f2704f231f86d55e" id=5033acdc-4c43-4050-8a03-3694ab833100 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:47:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:47:00.236577101Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:13d32b7b195c2718ffae3397ac5356c69786981e78b987f5f2704f231f86d55e UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/693a513f-dffa-4c6f-8db1-2880787d9cd2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:47:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:47:00.236605757Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 19:47:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:47:00.236612880Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 19:47:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:47:00.236620021Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:47:00 ip-10-0-136-68 systemd[1]: run-netns-5ac498d4\x2d3f92\x2d4af9\x2d809d\x2d18ee09bd1a91.mount: Deactivated successfully.
Feb 23 19:47:00 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-63b50d584d69dd6e6e93fcf2f22651dc7cc970aada0ee27be7431f0af25c70f9-userdata-shm.mount: Deactivated successfully.
Feb 23 19:47:00 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-59742884a499911dd73eabe44d2a046701866d8c16e0ca844f99d3cd66f67436-userdata-shm.mount: Deactivated successfully.
Feb 23 19:47:07 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:47:07.217362 2199 scope.go:115] "RemoveContainer" containerID="b1c11a9f34ac1de41d596751975e78d419ff2b840558c83020663936212e437d"
Feb 23 19:47:07 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:47:07.217937 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 19:47:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:47:07.243507026Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(64f43c4bce6c76a39c85c6d88290507a69075ae9f8945a1c3bd06c0785993e5c): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=94485373-3149-4b00-9e35-dceecef6dab5 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:47:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:47:07.243558989Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 64f43c4bce6c76a39c85c6d88290507a69075ae9f8945a1c3bd06c0785993e5c" id=94485373-3149-4b00-9e35-dceecef6dab5 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:47:07 ip-10-0-136-68 systemd[1]: run-utsns-6f199cbe\x2d7faa\x2d42be\x2daeae\x2d43e84e3fc343.mount: Deactivated successfully.
Feb 23 19:47:07 ip-10-0-136-68 systemd[1]: run-ipcns-6f199cbe\x2d7faa\x2d42be\x2daeae\x2d43e84e3fc343.mount: Deactivated successfully.
Feb 23 19:47:07 ip-10-0-136-68 systemd[1]: run-netns-6f199cbe\x2d7faa\x2d42be\x2daeae\x2d43e84e3fc343.mount: Deactivated successfully.
Feb 23 19:47:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:47:07.267321079Z" level=info msg="runSandbox: deleting pod ID 64f43c4bce6c76a39c85c6d88290507a69075ae9f8945a1c3bd06c0785993e5c from idIndex" id=94485373-3149-4b00-9e35-dceecef6dab5 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:47:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:47:07.267360836Z" level=info msg="runSandbox: removing pod sandbox 64f43c4bce6c76a39c85c6d88290507a69075ae9f8945a1c3bd06c0785993e5c" id=94485373-3149-4b00-9e35-dceecef6dab5 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:47:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:47:07.267404385Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 64f43c4bce6c76a39c85c6d88290507a69075ae9f8945a1c3bd06c0785993e5c" id=94485373-3149-4b00-9e35-dceecef6dab5 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:47:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:47:07.267423611Z" level=info msg="runSandbox: unmounting shmPath for sandbox 64f43c4bce6c76a39c85c6d88290507a69075ae9f8945a1c3bd06c0785993e5c" id=94485373-3149-4b00-9e35-dceecef6dab5 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:47:07 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-64f43c4bce6c76a39c85c6d88290507a69075ae9f8945a1c3bd06c0785993e5c-userdata-shm.mount: Deactivated successfully.
Feb 23 19:47:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:47:07.273305692Z" level=info msg="runSandbox: removing pod sandbox from storage: 64f43c4bce6c76a39c85c6d88290507a69075ae9f8945a1c3bd06c0785993e5c" id=94485373-3149-4b00-9e35-dceecef6dab5 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:47:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:47:07.274926830Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=94485373-3149-4b00-9e35-dceecef6dab5 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:47:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:47:07.274958327Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=94485373-3149-4b00-9e35-dceecef6dab5 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:47:07 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:47:07.275188 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(64f43c4bce6c76a39c85c6d88290507a69075ae9f8945a1c3bd06c0785993e5c): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 19:47:07 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:47:07.275302 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(64f43c4bce6c76a39c85c6d88290507a69075ae9f8945a1c3bd06c0785993e5c): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 19:47:07 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:47:07.275330 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(64f43c4bce6c76a39c85c6d88290507a69075ae9f8945a1c3bd06c0785993e5c): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 19:47:07 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:47:07.275394 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(64f43c4bce6c76a39c85c6d88290507a69075ae9f8945a1c3bd06c0785993e5c): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77
Feb 23 19:47:11 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:47:11.217141 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz"
Feb 23 19:47:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:47:11.217611064Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=9339c4f4-1032-4484-89e2-ae6d3e85fb16 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:47:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:47:11.217669541Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 19:47:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:47:11.223512882Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:a6662b37fdd298cdfa2660c24c8dfb0700581c753e6387eaea72897158319a91 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/fb74547e-80b8-4570-9460-344bc7d34eed Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:47:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:47:11.223546762Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:47:12 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:47:12.217072 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-657v4"
Feb 23 19:47:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:47:12.217515318Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=6e99eb6d-969c-4445-93a5-2a9491c4462c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:47:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:47:12.217590954Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 19:47:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:47:12.222965270Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:e870ce73bf1534db2a5ac6c551e51457f30fb44ea9633faabf5e5f57275fc382 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/f0a8658f-de36-4efa-8ee2-2562ff463350 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:47:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:47:12.222989001Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:47:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:47:19.217213 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 19:47:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:47:19.217629381Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=8ea61dc9-3531-4b1c-809b-562c8a1fae53 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:47:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:47:19.217685781Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 19:47:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:47:19.223046698Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:f0c0fd4e583c8d122679b5f8430d171c6c5334c2d9b37d1f1bb2cecfa2728c32 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/7a60d963-3ae9-45d9-8ae7-beaf4fd05d49 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:47:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:47:19.223073111Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:47:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:47:20.217767 2199 scope.go:115] "RemoveContainer" containerID="b1c11a9f34ac1de41d596751975e78d419ff2b840558c83020663936212e437d"
Feb 23 19:47:20 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:47:20.218338 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 19:47:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:47:26.292382 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:47:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:47:26.292618 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:47:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:47:26.292877 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:47:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:47:26.292921 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 19:47:31 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:47:31.217381 2199 scope.go:115] "RemoveContainer" containerID="b1c11a9f34ac1de41d596751975e78d419ff2b840558c83020663936212e437d"
Feb 23 19:47:31 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:47:31.217958 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 19:47:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:47:36.233792502Z" level=info msg="NetworkStart: stopping network for sandbox 84e03d831ceaf6f346e21ac343272e3f8e159187396a59c73adc2defe020f12e" id=824f0c5b-cd18-4546-8659-05b03a295bdc name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:47:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:47:36.233906283Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:84e03d831ceaf6f346e21ac343272e3f8e159187396a59c73adc2defe020f12e UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/1aa3a877-69c4-43a2-8f18-ff091a94707d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:47:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:47:36.233941690Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 19:47:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:47:36.233949394Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 19:47:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:47:36.233957997Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:47:45 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:47:45.216728 2199 scope.go:115] "RemoveContainer" containerID="b1c11a9f34ac1de41d596751975e78d419ff2b840558c83020663936212e437d"
Feb 23 19:47:45 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:47:45.217086 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 19:47:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:47:45.246754319Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(13d32b7b195c2718ffae3397ac5356c69786981e78b987f5f2704f231f86d55e): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=5033acdc-4c43-4050-8a03-3694ab833100 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:47:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:47:45.246818674Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 13d32b7b195c2718ffae3397ac5356c69786981e78b987f5f2704f231f86d55e" id=5033acdc-4c43-4050-8a03-3694ab833100 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:47:45 ip-10-0-136-68 systemd[1]: run-utsns-693a513f\x2ddffa\x2d4c6f\x2d8db1\x2d2880787d9cd2.mount: Deactivated successfully.
Feb 23 19:47:45 ip-10-0-136-68 systemd[1]: run-ipcns-693a513f\x2ddffa\x2d4c6f\x2d8db1\x2d2880787d9cd2.mount: Deactivated successfully.
Feb 23 19:47:45 ip-10-0-136-68 systemd[1]: run-netns-693a513f\x2ddffa\x2d4c6f\x2d8db1\x2d2880787d9cd2.mount: Deactivated successfully.
Feb 23 19:47:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:47:45.264327046Z" level=info msg="runSandbox: deleting pod ID 13d32b7b195c2718ffae3397ac5356c69786981e78b987f5f2704f231f86d55e from idIndex" id=5033acdc-4c43-4050-8a03-3694ab833100 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:47:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:47:45.264367023Z" level=info msg="runSandbox: removing pod sandbox 13d32b7b195c2718ffae3397ac5356c69786981e78b987f5f2704f231f86d55e" id=5033acdc-4c43-4050-8a03-3694ab833100 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:47:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:47:45.264396215Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 13d32b7b195c2718ffae3397ac5356c69786981e78b987f5f2704f231f86d55e" id=5033acdc-4c43-4050-8a03-3694ab833100 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:47:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:47:45.264410347Z" level=info msg="runSandbox: unmounting shmPath for sandbox 13d32b7b195c2718ffae3397ac5356c69786981e78b987f5f2704f231f86d55e" id=5033acdc-4c43-4050-8a03-3694ab833100 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:47:45 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-13d32b7b195c2718ffae3397ac5356c69786981e78b987f5f2704f231f86d55e-userdata-shm.mount: Deactivated successfully.
Feb 23 19:47:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:47:45.271319466Z" level=info msg="runSandbox: removing pod sandbox from storage: 13d32b7b195c2718ffae3397ac5356c69786981e78b987f5f2704f231f86d55e" id=5033acdc-4c43-4050-8a03-3694ab833100 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:47:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:47:45.273090061Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=5033acdc-4c43-4050-8a03-3694ab833100 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:47:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:47:45.273121816Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=5033acdc-4c43-4050-8a03-3694ab833100 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:47:45 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:47:45.273410 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(13d32b7b195c2718ffae3397ac5356c69786981e78b987f5f2704f231f86d55e): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 19:47:45 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:47:45.273465 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(13d32b7b195c2718ffae3397ac5356c69786981e78b987f5f2704f231f86d55e): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 19:47:45 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:47:45.273494 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(13d32b7b195c2718ffae3397ac5356c69786981e78b987f5f2704f231f86d55e): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 19:47:45 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:47:45.273566 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(13d32b7b195c2718ffae3397ac5356c69786981e78b987f5f2704f231f86d55e): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 19:47:54 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:47:54.217325 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:47:54 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:47:54.217647 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:47:54 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:47:54.217850 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:47:54 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:47:54.217898 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:47:56 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:47:56.216866 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:47:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:47:56.217287025Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=046e9469-55a7-404c-a8ca-07b423c5d01f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:47:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:47:56.217354934Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:47:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:47:56.226394437Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:ee73ba71d62ce382537bc750d72530dc7f8b17f37ed4794c99088c90732ca6d1 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/efa96f1f-09ed-4291-ac05-dfabdb8b3570 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:47:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:47:56.226431379Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:47:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:47:56.234819911Z" level=info msg="NetworkStart: stopping network for sandbox a6662b37fdd298cdfa2660c24c8dfb0700581c753e6387eaea72897158319a91" id=9339c4f4-1032-4484-89e2-ae6d3e85fb16 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:47:56 ip-10-0-136-68 crio[2158]: 
time="2023-02-23 19:47:56.234914627Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:a6662b37fdd298cdfa2660c24c8dfb0700581c753e6387eaea72897158319a91 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/fb74547e-80b8-4570-9460-344bc7d34eed Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:47:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:47:56.234954265Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:47:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:47:56.234965338Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:47:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:47:56.234976002Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:47:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:47:56.291859 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:47:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:47:56.292143 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f 
/etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:47:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:47:56.292447 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:47:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:47:56.292485 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:47:57 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:47:57.216664 2199 scope.go:115] "RemoveContainer" containerID="b1c11a9f34ac1de41d596751975e78d419ff2b840558c83020663936212e437d" Feb 23 19:47:57 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:47:57.217063 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:47:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:47:57.235048218Z" level=info msg="NetworkStart: stopping network for sandbox e870ce73bf1534db2a5ac6c551e51457f30fb44ea9633faabf5e5f57275fc382" id=6e99eb6d-969c-4445-93a5-2a9491c4462c 
name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:47:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:47:57.235170351Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:e870ce73bf1534db2a5ac6c551e51457f30fb44ea9633faabf5e5f57275fc382 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/f0a8658f-de36-4efa-8ee2-2562ff463350 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:47:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:47:57.235202467Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:47:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:47:57.235212589Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:47:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:47:57.235220129Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:48:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:04.234795709Z" level=info msg="NetworkStart: stopping network for sandbox f0c0fd4e583c8d122679b5f8430d171c6c5334c2d9b37d1f1bb2cecfa2728c32" id=8ea61dc9-3531-4b1c-809b-562c8a1fae53 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:48:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:04.234922877Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:f0c0fd4e583c8d122679b5f8430d171c6c5334c2d9b37d1f1bb2cecfa2728c32 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/7a60d963-3ae9-45d9-8ae7-beaf4fd05d49 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:48:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:04.234965334Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 
23 19:48:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:04.234977425Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:48:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:04.234987752Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:48:11 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:48:11.217107 2199 scope.go:115] "RemoveContainer" containerID="b1c11a9f34ac1de41d596751975e78d419ff2b840558c83020663936212e437d" Feb 23 19:48:11 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:48:11.217530 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:48:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:21.243173267Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(84e03d831ceaf6f346e21ac343272e3f8e159187396a59c73adc2defe020f12e): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=824f0c5b-cd18-4546-8659-05b03a295bdc name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:48:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:21.243228542Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 
84e03d831ceaf6f346e21ac343272e3f8e159187396a59c73adc2defe020f12e" id=824f0c5b-cd18-4546-8659-05b03a295bdc name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:48:21 ip-10-0-136-68 systemd[1]: run-utsns-1aa3a877\x2d69c4\x2d43a2\x2d8f18\x2dff091a94707d.mount: Deactivated successfully. Feb 23 19:48:21 ip-10-0-136-68 systemd[1]: run-ipcns-1aa3a877\x2d69c4\x2d43a2\x2d8f18\x2dff091a94707d.mount: Deactivated successfully. Feb 23 19:48:21 ip-10-0-136-68 systemd[1]: run-netns-1aa3a877\x2d69c4\x2d43a2\x2d8f18\x2dff091a94707d.mount: Deactivated successfully. Feb 23 19:48:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:21.270337681Z" level=info msg="runSandbox: deleting pod ID 84e03d831ceaf6f346e21ac343272e3f8e159187396a59c73adc2defe020f12e from idIndex" id=824f0c5b-cd18-4546-8659-05b03a295bdc name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:48:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:21.270373739Z" level=info msg="runSandbox: removing pod sandbox 84e03d831ceaf6f346e21ac343272e3f8e159187396a59c73adc2defe020f12e" id=824f0c5b-cd18-4546-8659-05b03a295bdc name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:48:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:21.270417264Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 84e03d831ceaf6f346e21ac343272e3f8e159187396a59c73adc2defe020f12e" id=824f0c5b-cd18-4546-8659-05b03a295bdc name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:48:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:21.270433011Z" level=info msg="runSandbox: unmounting shmPath for sandbox 84e03d831ceaf6f346e21ac343272e3f8e159187396a59c73adc2defe020f12e" id=824f0c5b-cd18-4546-8659-05b03a295bdc name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:48:21 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-84e03d831ceaf6f346e21ac343272e3f8e159187396a59c73adc2defe020f12e-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:48:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:21.275300164Z" level=info msg="runSandbox: removing pod sandbox from storage: 84e03d831ceaf6f346e21ac343272e3f8e159187396a59c73adc2defe020f12e" id=824f0c5b-cd18-4546-8659-05b03a295bdc name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:48:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:21.276787732Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=824f0c5b-cd18-4546-8659-05b03a295bdc name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:48:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:21.276821969Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=824f0c5b-cd18-4546-8659-05b03a295bdc name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:48:21 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:48:21.277025 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(84e03d831ceaf6f346e21ac343272e3f8e159187396a59c73adc2defe020f12e): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:48:21 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:48:21.277083 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(84e03d831ceaf6f346e21ac343272e3f8e159187396a59c73adc2defe020f12e): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:48:21 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:48:21.277109 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(84e03d831ceaf6f346e21ac343272e3f8e159187396a59c73adc2defe020f12e): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:48:21 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:48:21.277162 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(84e03d831ceaf6f346e21ac343272e3f8e159187396a59c73adc2defe020f12e): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 19:48:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:48:24.217050 2199 scope.go:115] "RemoveContainer" containerID="b1c11a9f34ac1de41d596751975e78d419ff2b840558c83020663936212e437d" Feb 23 19:48:24 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:48:24.217681 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:48:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:48:26.291993 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:48:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:48:26.292309 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:48:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:48:26.292587 2199 remote_runtime.go:479] "ExecSync 
cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:48:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:48:26.292613 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:48:32 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:48:32.217081 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:48:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:32.217522490Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=6efbbb45-531b-4590-ba1a-eae78403f8d6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:48:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:32.217593185Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:48:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:32.223372805Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:02127529b8416d6efde98a28a76731668d396a7d38472725361aec47cb3b072d UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/5bedf673-a87f-4970-abea-74775e1b15ce Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:48:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:32.223407450Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:48:35 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:48:35.216807 2199 scope.go:115] "RemoveContainer" containerID="b1c11a9f34ac1de41d596751975e78d419ff2b840558c83020663936212e437d" Feb 23 19:48:35 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:48:35.217213 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:48:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:41.240187338Z" level=info msg="NetworkStart: stopping network for sandbox 
ee73ba71d62ce382537bc750d72530dc7f8b17f37ed4794c99088c90732ca6d1" id=046e9469-55a7-404c-a8ca-07b423c5d01f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:48:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:41.240333191Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:ee73ba71d62ce382537bc750d72530dc7f8b17f37ed4794c99088c90732ca6d1 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/efa96f1f-09ed-4291-ac05-dfabdb8b3570 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:48:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:41.240367916Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:48:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:41.240375841Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:48:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:41.240383094Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:48:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:41.244362174Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(a6662b37fdd298cdfa2660c24c8dfb0700581c753e6387eaea72897158319a91): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=9339c4f4-1032-4484-89e2-ae6d3e85fb16 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:48:41 
ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:41.244404453Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a6662b37fdd298cdfa2660c24c8dfb0700581c753e6387eaea72897158319a91" id=9339c4f4-1032-4484-89e2-ae6d3e85fb16 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:48:41 ip-10-0-136-68 systemd[1]: run-utsns-fb74547e\x2d80b8\x2d4570\x2d9460\x2d344bc7d34eed.mount: Deactivated successfully. Feb 23 19:48:41 ip-10-0-136-68 systemd[1]: run-ipcns-fb74547e\x2d80b8\x2d4570\x2d9460\x2d344bc7d34eed.mount: Deactivated successfully. Feb 23 19:48:41 ip-10-0-136-68 systemd[1]: run-netns-fb74547e\x2d80b8\x2d4570\x2d9460\x2d344bc7d34eed.mount: Deactivated successfully. Feb 23 19:48:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:41.271329500Z" level=info msg="runSandbox: deleting pod ID a6662b37fdd298cdfa2660c24c8dfb0700581c753e6387eaea72897158319a91 from idIndex" id=9339c4f4-1032-4484-89e2-ae6d3e85fb16 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:48:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:41.271372487Z" level=info msg="runSandbox: removing pod sandbox a6662b37fdd298cdfa2660c24c8dfb0700581c753e6387eaea72897158319a91" id=9339c4f4-1032-4484-89e2-ae6d3e85fb16 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:48:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:41.271419704Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a6662b37fdd298cdfa2660c24c8dfb0700581c753e6387eaea72897158319a91" id=9339c4f4-1032-4484-89e2-ae6d3e85fb16 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:48:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:41.271440149Z" level=info msg="runSandbox: unmounting shmPath for sandbox a6662b37fdd298cdfa2660c24c8dfb0700581c753e6387eaea72897158319a91" id=9339c4f4-1032-4484-89e2-ae6d3e85fb16 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:48:41 ip-10-0-136-68 systemd[1]: 
run-containers-storage-overlay\x2dcontainers-a6662b37fdd298cdfa2660c24c8dfb0700581c753e6387eaea72897158319a91-userdata-shm.mount: Deactivated successfully. Feb 23 19:48:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:41.277304197Z" level=info msg="runSandbox: removing pod sandbox from storage: a6662b37fdd298cdfa2660c24c8dfb0700581c753e6387eaea72897158319a91" id=9339c4f4-1032-4484-89e2-ae6d3e85fb16 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:48:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:41.278784232Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=9339c4f4-1032-4484-89e2-ae6d3e85fb16 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:48:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:41.278814529Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=9339c4f4-1032-4484-89e2-ae6d3e85fb16 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:48:41 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:48:41.279025 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(a6662b37fdd298cdfa2660c24c8dfb0700581c753e6387eaea72897158319a91): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:48:41 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:48:41.279079 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(a6662b37fdd298cdfa2660c24c8dfb0700581c753e6387eaea72897158319a91): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:48:41 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:48:41.279105 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(a6662b37fdd298cdfa2660c24c8dfb0700581c753e6387eaea72897158319a91): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:48:41 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:48:41.279164 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(a6662b37fdd298cdfa2660c24c8dfb0700581c753e6387eaea72897158319a91): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 19:48:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:42.244540314Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(e870ce73bf1534db2a5ac6c551e51457f30fb44ea9633faabf5e5f57275fc382): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=6e99eb6d-969c-4445-93a5-2a9491c4462c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:48:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:42.244592982Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e870ce73bf1534db2a5ac6c551e51457f30fb44ea9633faabf5e5f57275fc382" id=6e99eb6d-969c-4445-93a5-2a9491c4462c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:48:42 ip-10-0-136-68 systemd[1]: run-utsns-f0a8658f\x2dde36\x2d4efa\x2d8ee2\x2d2562ff463350.mount: Deactivated successfully. Feb 23 19:48:42 ip-10-0-136-68 systemd[1]: run-ipcns-f0a8658f\x2dde36\x2d4efa\x2d8ee2\x2d2562ff463350.mount: Deactivated successfully. Feb 23 19:48:42 ip-10-0-136-68 systemd[1]: run-netns-f0a8658f\x2dde36\x2d4efa\x2d8ee2\x2d2562ff463350.mount: Deactivated successfully. 
Feb 23 19:48:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:42.277324185Z" level=info msg="runSandbox: deleting pod ID e870ce73bf1534db2a5ac6c551e51457f30fb44ea9633faabf5e5f57275fc382 from idIndex" id=6e99eb6d-969c-4445-93a5-2a9491c4462c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:48:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:42.277356923Z" level=info msg="runSandbox: removing pod sandbox e870ce73bf1534db2a5ac6c551e51457f30fb44ea9633faabf5e5f57275fc382" id=6e99eb6d-969c-4445-93a5-2a9491c4462c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:48:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:42.277385615Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e870ce73bf1534db2a5ac6c551e51457f30fb44ea9633faabf5e5f57275fc382" id=6e99eb6d-969c-4445-93a5-2a9491c4462c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:48:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:42.277404053Z" level=info msg="runSandbox: unmounting shmPath for sandbox e870ce73bf1534db2a5ac6c551e51457f30fb44ea9633faabf5e5f57275fc382" id=6e99eb6d-969c-4445-93a5-2a9491c4462c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:48:42 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-e870ce73bf1534db2a5ac6c551e51457f30fb44ea9633faabf5e5f57275fc382-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:48:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:42.289298286Z" level=info msg="runSandbox: removing pod sandbox from storage: e870ce73bf1534db2a5ac6c551e51457f30fb44ea9633faabf5e5f57275fc382" id=6e99eb6d-969c-4445-93a5-2a9491c4462c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:48:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:42.290808513Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=6e99eb6d-969c-4445-93a5-2a9491c4462c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:48:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:42.290836269Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=6e99eb6d-969c-4445-93a5-2a9491c4462c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:48:42 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:48:42.291014 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(e870ce73bf1534db2a5ac6c551e51457f30fb44ea9633faabf5e5f57275fc382): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:48:42 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:48:42.291066 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(e870ce73bf1534db2a5ac6c551e51457f30fb44ea9633faabf5e5f57275fc382): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:48:42 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:48:42.291095 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(e870ce73bf1534db2a5ac6c551e51457f30fb44ea9633faabf5e5f57275fc382): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:48:42 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:48:42.291147 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(e870ce73bf1534db2a5ac6c551e51457f30fb44ea9633faabf5e5f57275fc382): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 19:48:46 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:48:46.217524 2199 scope.go:115] "RemoveContainer" containerID="b1c11a9f34ac1de41d596751975e78d419ff2b840558c83020663936212e437d" Feb 23 19:48:46 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:48:46.218142 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:48:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:49.244605383Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(f0c0fd4e583c8d122679b5f8430d171c6c5334c2d9b37d1f1bb2cecfa2728c32): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=8ea61dc9-3531-4b1c-809b-562c8a1fae53 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:48:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:49.244655871Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f0c0fd4e583c8d122679b5f8430d171c6c5334c2d9b37d1f1bb2cecfa2728c32" id=8ea61dc9-3531-4b1c-809b-562c8a1fae53 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:48:49 
ip-10-0-136-68 systemd[1]: run-utsns-7a60d963\x2d3ae9\x2d45d9\x2d8ae7\x2dbeaf4fd05d49.mount: Deactivated successfully. Feb 23 19:48:49 ip-10-0-136-68 systemd[1]: run-ipcns-7a60d963\x2d3ae9\x2d45d9\x2d8ae7\x2dbeaf4fd05d49.mount: Deactivated successfully. Feb 23 19:48:49 ip-10-0-136-68 systemd[1]: run-netns-7a60d963\x2d3ae9\x2d45d9\x2d8ae7\x2dbeaf4fd05d49.mount: Deactivated successfully. Feb 23 19:48:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:49.279336268Z" level=info msg="runSandbox: deleting pod ID f0c0fd4e583c8d122679b5f8430d171c6c5334c2d9b37d1f1bb2cecfa2728c32 from idIndex" id=8ea61dc9-3531-4b1c-809b-562c8a1fae53 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:48:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:49.279379904Z" level=info msg="runSandbox: removing pod sandbox f0c0fd4e583c8d122679b5f8430d171c6c5334c2d9b37d1f1bb2cecfa2728c32" id=8ea61dc9-3531-4b1c-809b-562c8a1fae53 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:48:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:49.279432876Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f0c0fd4e583c8d122679b5f8430d171c6c5334c2d9b37d1f1bb2cecfa2728c32" id=8ea61dc9-3531-4b1c-809b-562c8a1fae53 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:48:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:49.279456470Z" level=info msg="runSandbox: unmounting shmPath for sandbox f0c0fd4e583c8d122679b5f8430d171c6c5334c2d9b37d1f1bb2cecfa2728c32" id=8ea61dc9-3531-4b1c-809b-562c8a1fae53 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:48:49 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-f0c0fd4e583c8d122679b5f8430d171c6c5334c2d9b37d1f1bb2cecfa2728c32-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:48:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:49.285311311Z" level=info msg="runSandbox: removing pod sandbox from storage: f0c0fd4e583c8d122679b5f8430d171c6c5334c2d9b37d1f1bb2cecfa2728c32" id=8ea61dc9-3531-4b1c-809b-562c8a1fae53 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:48:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:49.286927406Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=8ea61dc9-3531-4b1c-809b-562c8a1fae53 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:48:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:49.286958506Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=8ea61dc9-3531-4b1c-809b-562c8a1fae53 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:48:49 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:48:49.287184 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(f0c0fd4e583c8d122679b5f8430d171c6c5334c2d9b37d1f1bb2cecfa2728c32): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:48:49 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:48:49.287274 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(f0c0fd4e583c8d122679b5f8430d171c6c5334c2d9b37d1f1bb2cecfa2728c32): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:48:49 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:48:49.287318 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(f0c0fd4e583c8d122679b5f8430d171c6c5334c2d9b37d1f1bb2cecfa2728c32): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:48:49 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:48:49.287402 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(f0c0fd4e583c8d122679b5f8430d171c6c5334c2d9b37d1f1bb2cecfa2728c32): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 19:48:55 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:48:55.216504 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:48:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:55.216893799Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=9ec4b29a-7969-44db-8bf0-1f7641f3f481 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:48:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:55.216957203Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:48:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:55.222402264Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:46c1f06ca0843e4d6bcbff8582468478e4468cad7a6e1d453711f20fa472349f UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/4a82dea4-fbd1-422c-a56e-f6e6cbab9981 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:48:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:55.222429629Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:48:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:48:56.292296 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:48:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:48:56.292571 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:48:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:48:56.292806 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:48:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:48:56.292841 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:48:58 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:48:58.216964 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 19:48:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:58.217401181Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=afac797c-4853-4d7c-83a0-3455764070c9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:48:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:58.217469117Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:48:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:58.223143055Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:e20a6c440da2ca598c8b616ae33fe2956d1d795a211b2f6241a01c737e9c00e6 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/a29a2680-4ef0-48e4-9b2d-61f68f0b806b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:48:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:58.223171248Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:48:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:48:58.575289810Z" level=warning msg="Failed to find container exit file for b1c11a9f34ac1de41d596751975e78d419ff2b840558c83020663936212e437d: timed out waiting for the condition" id=ce2b9bcd-6356-41ae-89bd-687ec5e4097b name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:48:58 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:48:58.575639 2199 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-cluster-csi-drivers_aws-ebs-csi-driver-node-ncxb7_0976617f-18ed-4a73-a7d8-ac54cf69ab93/csi-driver/37.log" Feb 23 19:49:01 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:49:01.217441 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:49:01 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:49:01.217485 2199 scope.go:115] "RemoveContainer" containerID="b1c11a9f34ac1de41d596751975e78d419ff2b840558c83020663936212e437d" Feb 23 19:49:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:49:01.217902141Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=884a7308-c721-461d-90bd-b7290cbff50c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:49:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:49:01.217970431Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:49:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:49:01.218164352Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=a06f09c0-a3c9-46ad-8c09-6e85f1fa3ecc name=/runtime.v1.ImageService/ImageStatus Feb 23 19:49:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:49:01.218357901Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=a06f09c0-a3c9-46ad-8c09-6e85f1fa3ecc name=/runtime.v1.ImageService/ImageStatus Feb 23 19:49:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:49:01.218971921Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=aaaf40e3-8e91-4e71-80ac-8ce8a1795480 
name=/runtime.v1.ImageService/ImageStatus Feb 23 19:49:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:49:01.219104329Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=aaaf40e3-8e91-4e71-80ac-8ce8a1795480 name=/runtime.v1.ImageService/ImageStatus Feb 23 19:49:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:49:01.219744027Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=4d4ffb80-cfe8-4794-a349-fa1272c2c22c name=/runtime.v1.RuntimeService/CreateContainer Feb 23 19:49:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:49:01.219851521Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:49:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:49:01.225234055Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:b5353873ed10df4b9dcc08fb17c7b268c430cedc5fb4713e85dbdea24734873d UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/6dfa5843-2ea9-4d82-bcb6-29cf48b7e1be Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:49:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:49:01.225359447Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:49:01 ip-10-0-136-68 systemd[1]: Started crio-conmon-bf5bdd059c782170979ce3197bdd59058e08e01da282b843dd61308635e1241e.scope. 
Feb 23 19:49:01 ip-10-0-136-68 systemd[1]: Started libcontainer container bf5bdd059c782170979ce3197bdd59058e08e01da282b843dd61308635e1241e. Feb 23 19:49:01 ip-10-0-136-68 conmon[19017]: conmon bf5bdd059c782170979c : Failed to write to cgroup.event_control Operation not supported Feb 23 19:49:01 ip-10-0-136-68 systemd[1]: crio-conmon-bf5bdd059c782170979ce3197bdd59058e08e01da282b843dd61308635e1241e.scope: Deactivated successfully. Feb 23 19:49:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:49:01.367725729Z" level=info msg="Created container bf5bdd059c782170979ce3197bdd59058e08e01da282b843dd61308635e1241e: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=4d4ffb80-cfe8-4794-a349-fa1272c2c22c name=/runtime.v1.RuntimeService/CreateContainer Feb 23 19:49:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:49:01.368321127Z" level=info msg="Starting container: bf5bdd059c782170979ce3197bdd59058e08e01da282b843dd61308635e1241e" id=dab9d777-a57f-45c7-8101-b534f04ec09a name=/runtime.v1.RuntimeService/StartContainer Feb 23 19:49:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:49:01.375887085Z" level=info msg="Started container" PID=19029 containerID=bf5bdd059c782170979ce3197bdd59058e08e01da282b843dd61308635e1241e description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver id=dab9d777-a57f-45c7-8101-b534f04ec09a name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4ac357af36e7dd4792f93249425e93fd84aed45840e9fbb3d0ae6c0af964798 Feb 23 19:49:01 ip-10-0-136-68 systemd[1]: crio-bf5bdd059c782170979ce3197bdd59058e08e01da282b843dd61308635e1241e.scope: Deactivated successfully. 
Feb 23 19:49:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:49:05.792933862Z" level=warning msg="Failed to find container exit file for b1c11a9f34ac1de41d596751975e78d419ff2b840558c83020663936212e437d: timed out waiting for the condition" id=ef54bfc9-fb4e-4e12-bb19-33e17a5014a1 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:49:05 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:49:05.793886 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerStarted Data:bf5bdd059c782170979ce3197bdd59058e08e01da282b843dd61308635e1241e} Feb 23 19:49:12 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:49:12.217163 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:49:12 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:49:12.217741 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:49:12 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:49:12.218044 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:49:12 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:49:12.218107 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:49:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:49:14.873099 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:49:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:49:14.873168 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:49:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:49:17.234884609Z" level=info msg="NetworkStart: stopping network for sandbox 02127529b8416d6efde98a28a76731668d396a7d38472725361aec47cb3b072d" id=6efbbb45-531b-4590-ba1a-eae78403f8d6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:49:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:49:17.235073980Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk 
Namespace:openshift-ingress-canary ID:02127529b8416d6efde98a28a76731668d396a7d38472725361aec47cb3b072d UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/5bedf673-a87f-4970-abea-74775e1b15ce Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:49:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:49:17.235103101Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:49:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:49:17.235110752Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:49:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:49:17.235118285Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:49:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:49:24.872235 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:49:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:49:24.872309 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:49:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:49:26.252736522Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(ee73ba71d62ce382537bc750d72530dc7f8b17f37ed4794c99088c90732ca6d1): error removing pod 
openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=046e9469-55a7-404c-a8ca-07b423c5d01f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:49:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:49:26.252783167Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox ee73ba71d62ce382537bc750d72530dc7f8b17f37ed4794c99088c90732ca6d1" id=046e9469-55a7-404c-a8ca-07b423c5d01f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:49:26 ip-10-0-136-68 systemd[1]: run-utsns-efa96f1f\x2d09ed\x2d4291\x2dac05\x2ddfabdb8b3570.mount: Deactivated successfully. Feb 23 19:49:26 ip-10-0-136-68 systemd[1]: run-ipcns-efa96f1f\x2d09ed\x2d4291\x2dac05\x2ddfabdb8b3570.mount: Deactivated successfully. Feb 23 19:49:26 ip-10-0-136-68 systemd[1]: run-netns-efa96f1f\x2d09ed\x2d4291\x2dac05\x2ddfabdb8b3570.mount: Deactivated successfully. 
Feb 23 19:49:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:49:26.283329928Z" level=info msg="runSandbox: deleting pod ID ee73ba71d62ce382537bc750d72530dc7f8b17f37ed4794c99088c90732ca6d1 from idIndex" id=046e9469-55a7-404c-a8ca-07b423c5d01f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:49:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:49:26.283367342Z" level=info msg="runSandbox: removing pod sandbox ee73ba71d62ce382537bc750d72530dc7f8b17f37ed4794c99088c90732ca6d1" id=046e9469-55a7-404c-a8ca-07b423c5d01f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:49:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:49:26.283411824Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox ee73ba71d62ce382537bc750d72530dc7f8b17f37ed4794c99088c90732ca6d1" id=046e9469-55a7-404c-a8ca-07b423c5d01f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:49:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:49:26.283431988Z" level=info msg="runSandbox: unmounting shmPath for sandbox ee73ba71d62ce382537bc750d72530dc7f8b17f37ed4794c99088c90732ca6d1" id=046e9469-55a7-404c-a8ca-07b423c5d01f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:49:26 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-ee73ba71d62ce382537bc750d72530dc7f8b17f37ed4794c99088c90732ca6d1-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:49:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:49:26.288322271Z" level=info msg="runSandbox: removing pod sandbox from storage: ee73ba71d62ce382537bc750d72530dc7f8b17f37ed4794c99088c90732ca6d1" id=046e9469-55a7-404c-a8ca-07b423c5d01f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:49:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:49:26.290627817Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=046e9469-55a7-404c-a8ca-07b423c5d01f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:49:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:49:26.290683948Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=046e9469-55a7-404c-a8ca-07b423c5d01f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:49:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:49:26.290874 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(ee73ba71d62ce382537bc750d72530dc7f8b17f37ed4794c99088c90732ca6d1): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:49:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:49:26.290948 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(ee73ba71d62ce382537bc750d72530dc7f8b17f37ed4794c99088c90732ca6d1): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:49:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:49:26.290985 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(ee73ba71d62ce382537bc750d72530dc7f8b17f37ed4794c99088c90732ca6d1): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:49:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:49:26.291069 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(ee73ba71d62ce382537bc750d72530dc7f8b17f37ed4794c99088c90732ca6d1): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 19:49:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:49:26.291769 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:49:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:49:26.292040 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:49:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:49:26.292397 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:49:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:49:26.292445 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:49:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:49:34.872471 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:49:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:49:34.872525 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:49:40 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:49:40.217070 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:49:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:49:40.217538883Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=32f36340-c026-4a58-b8c3-410f6537adce name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:49:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:49:40.217622457Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:49:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:49:40.223320044Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:bcb8fe5a248661a80f34e70f9990192703aaa9be7ef8e437a789604cadcfc507 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/2be46e3c-4c56-4b9f-8042-6dd4343fe4df Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:49:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:49:40.223351107Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:49:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:49:40.235829355Z" level=info msg="NetworkStart: stopping network for sandbox 46c1f06ca0843e4d6bcbff8582468478e4468cad7a6e1d453711f20fa472349f" id=9ec4b29a-7969-44db-8bf0-1f7641f3f481 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:49:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:49:40.235935457Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:46c1f06ca0843e4d6bcbff8582468478e4468cad7a6e1d453711f20fa472349f UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/4a82dea4-fbd1-422c-a56e-f6e6cbab9981 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:49:40 
ip-10-0-136-68 crio[2158]: time="2023-02-23 19:49:40.235973954Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:49:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:49:40.235987174Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:49:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:49:40.235997467Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:49:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:49:43.234559570Z" level=info msg="NetworkStart: stopping network for sandbox e20a6c440da2ca598c8b616ae33fe2956d1d795a211b2f6241a01c737e9c00e6" id=afac797c-4853-4d7c-83a0-3455764070c9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:49:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:49:43.234679022Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:e20a6c440da2ca598c8b616ae33fe2956d1d795a211b2f6241a01c737e9c00e6 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/a29a2680-4ef0-48e4-9b2d-61f68f0b806b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:49:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:49:43.234706903Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:49:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:49:43.234716362Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:49:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:49:43.234725831Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:49:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:49:44.872735 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver 
namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:49:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:49:44.872797 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:49:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:49:46.239978395Z" level=info msg="NetworkStart: stopping network for sandbox b5353873ed10df4b9dcc08fb17c7b268c430cedc5fb4713e85dbdea24734873d" id=884a7308-c721-461d-90bd-b7290cbff50c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:49:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:49:46.240118437Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:b5353873ed10df4b9dcc08fb17c7b268c430cedc5fb4713e85dbdea24734873d UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/6dfa5843-2ea9-4d82-bcb6-29cf48b7e1be Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:49:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:49:46.240160156Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:49:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:49:46.240170491Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:49:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:49:46.240180672Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:49:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 
19:49:54.872773 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:49:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:49:54.872837 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:49:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:49:54.872864 2199 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 19:49:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:49:54.873413 2199 kuberuntime_manager.go:659] "Message for Container of pod" containerName="csi-driver" containerStatusID={Type:cri-o ID:bf5bdd059c782170979ce3197bdd59058e08e01da282b843dd61308635e1241e} pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" containerMessage="Container csi-driver failed liveness probe, will be restarted" Feb 23 19:49:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:49:54.873600 2199 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" containerID="cri-o://bf5bdd059c782170979ce3197bdd59058e08e01da282b843dd61308635e1241e" gracePeriod=30 Feb 23 19:49:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:49:54.873834725Z" level=info msg="Stopping container: bf5bdd059c782170979ce3197bdd59058e08e01da282b843dd61308635e1241e (timeout: 30s)" id=a80d7a16-5b8d-45e2-9200-1507bac74c1b name=/runtime.v1.RuntimeService/StopContainer Feb 
23 19:49:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:49:56.292490 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:49:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:49:56.292763 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:49:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:49:56.292974 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:49:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:49:56.292997 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" 
pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:49:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:49:58.634961308Z" level=warning msg="Failed to find container exit file for bf5bdd059c782170979ce3197bdd59058e08e01da282b843dd61308635e1241e: timed out waiting for the condition" id=a80d7a16-5b8d-45e2-9200-1507bac74c1b name=/runtime.v1.RuntimeService/StopContainer Feb 23 19:49:58 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-226531ca73392cf8dd993f490081e2e507af39defd7fd8ffda7374b545bfc6d5-merged.mount: Deactivated successfully. Feb 23 19:50:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:02.244826038Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(02127529b8416d6efde98a28a76731668d396a7d38472725361aec47cb3b072d): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=6efbbb45-531b-4590-ba1a-eae78403f8d6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:50:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:02.244875888Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 02127529b8416d6efde98a28a76731668d396a7d38472725361aec47cb3b072d" id=6efbbb45-531b-4590-ba1a-eae78403f8d6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:50:02 ip-10-0-136-68 systemd[1]: run-utsns-5bedf673\x2da87f\x2d4970\x2dabea\x2d74775e1b15ce.mount: Deactivated successfully. 
Feb 23 19:50:02 ip-10-0-136-68 systemd[1]: run-ipcns-5bedf673\x2da87f\x2d4970\x2dabea\x2d74775e1b15ce.mount: Deactivated successfully. Feb 23 19:50:02 ip-10-0-136-68 systemd[1]: run-netns-5bedf673\x2da87f\x2d4970\x2dabea\x2d74775e1b15ce.mount: Deactivated successfully. Feb 23 19:50:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:02.278331869Z" level=info msg="runSandbox: deleting pod ID 02127529b8416d6efde98a28a76731668d396a7d38472725361aec47cb3b072d from idIndex" id=6efbbb45-531b-4590-ba1a-eae78403f8d6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:50:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:02.278365770Z" level=info msg="runSandbox: removing pod sandbox 02127529b8416d6efde98a28a76731668d396a7d38472725361aec47cb3b072d" id=6efbbb45-531b-4590-ba1a-eae78403f8d6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:50:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:02.278389780Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 02127529b8416d6efde98a28a76731668d396a7d38472725361aec47cb3b072d" id=6efbbb45-531b-4590-ba1a-eae78403f8d6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:50:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:02.278419491Z" level=info msg="runSandbox: unmounting shmPath for sandbox 02127529b8416d6efde98a28a76731668d396a7d38472725361aec47cb3b072d" id=6efbbb45-531b-4590-ba1a-eae78403f8d6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:50:02 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-02127529b8416d6efde98a28a76731668d396a7d38472725361aec47cb3b072d-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:50:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:02.284302218Z" level=info msg="runSandbox: removing pod sandbox from storage: 02127529b8416d6efde98a28a76731668d396a7d38472725361aec47cb3b072d" id=6efbbb45-531b-4590-ba1a-eae78403f8d6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:50:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:02.285875068Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=6efbbb45-531b-4590-ba1a-eae78403f8d6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:50:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:02.285902197Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=6efbbb45-531b-4590-ba1a-eae78403f8d6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:50:02 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:50:02.286085 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(02127529b8416d6efde98a28a76731668d396a7d38472725361aec47cb3b072d): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition"
Feb 23 19:50:02 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:50:02.286152 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(02127529b8416d6efde98a28a76731668d396a7d38472725361aec47cb3b072d): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk"
Feb 23 19:50:02 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:50:02.286191 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(02127529b8416d6efde98a28a76731668d396a7d38472725361aec47cb3b072d): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk"
Feb 23 19:50:02 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:50:02.286336 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(02127529b8416d6efde98a28a76731668d396a7d38472725361aec47cb3b072d): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687
Feb 23 19:50:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:02.412952652Z" level=warning msg="Failed to find container exit file for bf5bdd059c782170979ce3197bdd59058e08e01da282b843dd61308635e1241e: timed out waiting for the condition" id=a80d7a16-5b8d-45e2-9200-1507bac74c1b name=/runtime.v1.RuntimeService/StopContainer
Feb 23 19:50:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:02.415268285Z" level=info msg="Stopped container bf5bdd059c782170979ce3197bdd59058e08e01da282b843dd61308635e1241e: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=a80d7a16-5b8d-45e2-9200-1507bac74c1b name=/runtime.v1.RuntimeService/StopContainer
Feb 23 19:50:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:02.415976983Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=53b0a098-ff67-405f-a5c0-cc756c25cf10 name=/runtime.v1.ImageService/ImageStatus
Feb 23 19:50:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:02.416139716Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=53b0a098-ff67-405f-a5c0-cc756c25cf10 name=/runtime.v1.ImageService/ImageStatus
Feb 23 19:50:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:02.416715612Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=9b8ac897-f68d-4ad3-af64-defe0f618544 name=/runtime.v1.ImageService/ImageStatus
Feb 23 19:50:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:02.416874058Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=9b8ac897-f68d-4ad3-af64-defe0f618544 name=/runtime.v1.ImageService/ImageStatus
Feb 23 19:50:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:02.417530459Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=73288adb-f51a-401c-aaab-2e9b9a39533c name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 19:50:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:02.417652037Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 19:50:02 ip-10-0-136-68 systemd[1]: Started crio-conmon-2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc.scope.
Feb 23 19:50:02 ip-10-0-136-68 systemd[1]: Started libcontainer container 2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc.
Feb 23 19:50:02 ip-10-0-136-68 conmon[19171]: conmon 2fbba05fe01cc81ad257 : Failed to write to cgroup.event_control Operation not supported
Feb 23 19:50:02 ip-10-0-136-68 systemd[1]: crio-conmon-2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc.scope: Deactivated successfully.
Feb 23 19:50:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:02.548473726Z" level=info msg="Created container 2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=73288adb-f51a-401c-aaab-2e9b9a39533c name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 19:50:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:02.548913665Z" level=info msg="Starting container: 2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc" id=89d759a4-63c7-4360-90f4-001fb05f7b23 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 19:50:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:02.555501044Z" level=info msg="Started container" PID=19183 containerID=2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver id=89d759a4-63c7-4360-90f4-001fb05f7b23 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4ac357af36e7dd4792f93249425e93fd84aed45840e9fbb3d0ae6c0af964798
Feb 23 19:50:02 ip-10-0-136-68 systemd[1]: crio-2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc.scope: Deactivated successfully.
Feb 23 19:50:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:02.702030010Z" level=warning msg="Failed to find container exit file for bf5bdd059c782170979ce3197bdd59058e08e01da282b843dd61308635e1241e: timed out waiting for the condition" id=bc010fb8-74ad-4ffe-be08-1ae37c685651 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 19:50:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:06.452141016Z" level=warning msg="Failed to find container exit file for b1c11a9f34ac1de41d596751975e78d419ff2b840558c83020663936212e437d: timed out waiting for the condition" id=3d3efc4f-fdf3-4a0e-8378-3172696f289c name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 19:50:06 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:50:06.453001 2199 generic.go:332] "Generic (PLEG): container finished" podID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerID="bf5bdd059c782170979ce3197bdd59058e08e01da282b843dd61308635e1241e" exitCode=-1
Feb 23 19:50:06 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:50:06.453041 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerDied Data:bf5bdd059c782170979ce3197bdd59058e08e01da282b843dd61308635e1241e}
Feb 23 19:50:06 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:50:06.453132 2199 scope.go:115] "RemoveContainer" containerID="b1c11a9f34ac1de41d596751975e78d419ff2b840558c83020663936212e437d"
Feb 23 19:50:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:10.202119343Z" level=warning msg="Failed to find container exit file for b1c11a9f34ac1de41d596751975e78d419ff2b840558c83020663936212e437d: timed out waiting for the condition" id=dddabaf4-7bc5-4b47-9051-f6f64f40305e name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 19:50:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:11.205384786Z" level=warning msg="Failed to find container exit file for bf5bdd059c782170979ce3197bdd59058e08e01da282b843dd61308635e1241e: timed out waiting for the condition" id=77bc00ad-cbe2-40f5-be9a-92cd9fd8b56b name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 19:50:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:13.964151147Z" level=warning msg="Failed to find container exit file for b1c11a9f34ac1de41d596751975e78d419ff2b840558c83020663936212e437d: timed out waiting for the condition" id=cc3ea40f-a772-4773-b9f4-2e67dc2de065 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 19:50:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:13.964729876Z" level=info msg="Removing container: b1c11a9f34ac1de41d596751975e78d419ff2b840558c83020663936212e437d" id=86bd16d3-2590-4831-ab9e-722e41634d2a name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 19:50:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:14.943172452Z" level=warning msg="Failed to find container exit file for b1c11a9f34ac1de41d596751975e78d419ff2b840558c83020663936212e437d: timed out waiting for the condition" id=95d6c035-bc39-42fe-9556-0d2e75c6eba1 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 19:50:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:50:14.944112 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerStarted Data:2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc}
Feb 23 19:50:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:50:14.944597 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk"
Feb 23 19:50:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:14.944882293Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=ea4d0f9f-e066-42cb-9ef8-989b248ae069 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:50:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:14.944938098Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 19:50:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:14.953629262Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:2cb40b1b0601f49e4e0d35349c7b31fda4a2c39ef335e9832420a3a3cee058be UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/1663d569-0b9a-4d7d-a03c-f3a0b8e3b479 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:50:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:14.953657359Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:50:15 ip-10-0-136-68 NetworkManager[1177]: [1677181815.0013] dhcp4 (br-ex): state changed new lease, address=10.0.136.68
Feb 23 19:50:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:17.725067074Z" level=warning msg="Failed to find container exit file for b1c11a9f34ac1de41d596751975e78d419ff2b840558c83020663936212e437d: timed out waiting for the condition" id=86bd16d3-2590-4831-ab9e-722e41634d2a name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 19:50:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:17.750224531Z" level=info msg="Removed container b1c11a9f34ac1de41d596751975e78d419ff2b840558c83020663936212e437d: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=86bd16d3-2590-4831-ab9e-722e41634d2a name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 19:50:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:20.245970189Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3" id=99a4e870-76a0-4820-9d3b-2f332d02fcaf name=/runtime.v1.ImageService/ImageStatus
Feb 23 19:50:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:20.246231449Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:51cb7555d0a960fc0daae74a5b5c47c19fd1ae7bd31e2616ebc88f696f511e4d,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3],Size_:351828271,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=99a4e870-76a0-4820-9d3b-2f332d02fcaf name=/runtime.v1.ImageService/ImageStatus
Feb 23 19:50:21 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:50:21.217628 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:50:21 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:50:21.217977 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:50:21 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:50:21.218215 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:50:21 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:50:21.218270 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 19:50:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:21.701226909Z" level=warning msg="Failed to find container exit file for bf5bdd059c782170979ce3197bdd59058e08e01da282b843dd61308635e1241e: timed out waiting for the condition" id=da5be4d9-5a04-4187-b3af-84e50595fa12 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 19:50:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:50:24.872656 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 19:50:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:50:24.872707 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 19:50:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:25.235430389Z" level=info msg="NetworkStart: stopping network for sandbox bcb8fe5a248661a80f34e70f9990192703aaa9be7ef8e437a789604cadcfc507" id=32f36340-c026-4a58-b8c3-410f6537adce name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:50:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:25.235592509Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:bcb8fe5a248661a80f34e70f9990192703aaa9be7ef8e437a789604cadcfc507 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/2be46e3c-4c56-4b9f-8042-6dd4343fe4df Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:50:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:25.235622573Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 19:50:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:25.235630400Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 19:50:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:25.235637109Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:50:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:25.244712506Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(46c1f06ca0843e4d6bcbff8582468478e4468cad7a6e1d453711f20fa472349f): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=9ec4b29a-7969-44db-8bf0-1f7641f3f481 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:50:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:25.244791720Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 46c1f06ca0843e4d6bcbff8582468478e4468cad7a6e1d453711f20fa472349f" id=9ec4b29a-7969-44db-8bf0-1f7641f3f481 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:50:25 ip-10-0-136-68 systemd[1]: run-utsns-4a82dea4\x2dfbd1\x2d422c\x2da56e\x2df6e6cbab9981.mount: Deactivated successfully.
Feb 23 19:50:25 ip-10-0-136-68 systemd[1]: run-ipcns-4a82dea4\x2dfbd1\x2d422c\x2da56e\x2df6e6cbab9981.mount: Deactivated successfully.
Feb 23 19:50:25 ip-10-0-136-68 systemd[1]: run-netns-4a82dea4\x2dfbd1\x2d422c\x2da56e\x2df6e6cbab9981.mount: Deactivated successfully.
Feb 23 19:50:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:25.262330672Z" level=info msg="runSandbox: deleting pod ID 46c1f06ca0843e4d6bcbff8582468478e4468cad7a6e1d453711f20fa472349f from idIndex" id=9ec4b29a-7969-44db-8bf0-1f7641f3f481 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:50:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:25.262367796Z" level=info msg="runSandbox: removing pod sandbox 46c1f06ca0843e4d6bcbff8582468478e4468cad7a6e1d453711f20fa472349f" id=9ec4b29a-7969-44db-8bf0-1f7641f3f481 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:50:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:25.262405338Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 46c1f06ca0843e4d6bcbff8582468478e4468cad7a6e1d453711f20fa472349f" id=9ec4b29a-7969-44db-8bf0-1f7641f3f481 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:50:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:25.262422939Z" level=info msg="runSandbox: unmounting shmPath for sandbox 46c1f06ca0843e4d6bcbff8582468478e4468cad7a6e1d453711f20fa472349f" id=9ec4b29a-7969-44db-8bf0-1f7641f3f481 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:50:25 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-46c1f06ca0843e4d6bcbff8582468478e4468cad7a6e1d453711f20fa472349f-userdata-shm.mount: Deactivated successfully.
Feb 23 19:50:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:25.268332177Z" level=info msg="runSandbox: removing pod sandbox from storage: 46c1f06ca0843e4d6bcbff8582468478e4468cad7a6e1d453711f20fa472349f" id=9ec4b29a-7969-44db-8bf0-1f7641f3f481 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:50:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:25.269870328Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=9ec4b29a-7969-44db-8bf0-1f7641f3f481 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:50:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:25.269898542Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=9ec4b29a-7969-44db-8bf0-1f7641f3f481 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:50:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:50:25.270091 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(46c1f06ca0843e4d6bcbff8582468478e4468cad7a6e1d453711f20fa472349f): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 19:50:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:50:25.270145 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(46c1f06ca0843e4d6bcbff8582468478e4468cad7a6e1d453711f20fa472349f): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz"
Feb 23 19:50:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:50:25.270168 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(46c1f06ca0843e4d6bcbff8582468478e4468cad7a6e1d453711f20fa472349f): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz"
Feb 23 19:50:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:50:25.270219 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(46c1f06ca0843e4d6bcbff8582468478e4468cad7a6e1d453711f20fa472349f): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7
Feb 23 19:50:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:50:26.292533 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:50:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:50:26.292817 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:50:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:50:26.293065 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:50:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:50:26.293101 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 19:50:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:28.243808426Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(e20a6c440da2ca598c8b616ae33fe2956d1d795a211b2f6241a01c737e9c00e6): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=afac797c-4853-4d7c-83a0-3455764070c9 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:50:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:28.243861443Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e20a6c440da2ca598c8b616ae33fe2956d1d795a211b2f6241a01c737e9c00e6" id=afac797c-4853-4d7c-83a0-3455764070c9 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:50:28 ip-10-0-136-68 systemd[1]: run-utsns-a29a2680\x2d4ef0\x2d48e4\x2d9b2d\x2d61f68f0b806b.mount: Deactivated successfully.
Feb 23 19:50:28 ip-10-0-136-68 systemd[1]: run-ipcns-a29a2680\x2d4ef0\x2d48e4\x2d9b2d\x2d61f68f0b806b.mount: Deactivated successfully.
Feb 23 19:50:28 ip-10-0-136-68 systemd[1]: run-netns-a29a2680\x2d4ef0\x2d48e4\x2d9b2d\x2d61f68f0b806b.mount: Deactivated successfully.
Feb 23 19:50:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:28.263310886Z" level=info msg="runSandbox: deleting pod ID e20a6c440da2ca598c8b616ae33fe2956d1d795a211b2f6241a01c737e9c00e6 from idIndex" id=afac797c-4853-4d7c-83a0-3455764070c9 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:50:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:28.263343532Z" level=info msg="runSandbox: removing pod sandbox e20a6c440da2ca598c8b616ae33fe2956d1d795a211b2f6241a01c737e9c00e6" id=afac797c-4853-4d7c-83a0-3455764070c9 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:50:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:28.263367951Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e20a6c440da2ca598c8b616ae33fe2956d1d795a211b2f6241a01c737e9c00e6" id=afac797c-4853-4d7c-83a0-3455764070c9 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:50:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:28.263379848Z" level=info msg="runSandbox: unmounting shmPath for sandbox e20a6c440da2ca598c8b616ae33fe2956d1d795a211b2f6241a01c737e9c00e6" id=afac797c-4853-4d7c-83a0-3455764070c9 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:50:28 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-e20a6c440da2ca598c8b616ae33fe2956d1d795a211b2f6241a01c737e9c00e6-userdata-shm.mount: Deactivated successfully.
Feb 23 19:50:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:28.269299372Z" level=info msg="runSandbox: removing pod sandbox from storage: e20a6c440da2ca598c8b616ae33fe2956d1d795a211b2f6241a01c737e9c00e6" id=afac797c-4853-4d7c-83a0-3455764070c9 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:50:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:28.270925523Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=afac797c-4853-4d7c-83a0-3455764070c9 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:50:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:28.270958579Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=afac797c-4853-4d7c-83a0-3455764070c9 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:50:28 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:50:28.271132 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(e20a6c440da2ca598c8b616ae33fe2956d1d795a211b2f6241a01c737e9c00e6): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 19:50:28 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:50:28.271179 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(e20a6c440da2ca598c8b616ae33fe2956d1d795a211b2f6241a01c737e9c00e6): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4"
Feb 23 19:50:28 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:50:28.271205 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(e20a6c440da2ca598c8b616ae33fe2956d1d795a211b2f6241a01c737e9c00e6): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4"
Feb 23 19:50:28 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:50:28.271277 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(e20a6c440da2ca598c8b616ae33fe2956d1d795a211b2f6241a01c737e9c00e6): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f
Feb 23 19:50:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:31.249331532Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(b5353873ed10df4b9dcc08fb17c7b268c430cedc5fb4713e85dbdea24734873d): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=884a7308-c721-461d-90bd-b7290cbff50c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:50:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:31.249379380Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox b5353873ed10df4b9dcc08fb17c7b268c430cedc5fb4713e85dbdea24734873d" id=884a7308-c721-461d-90bd-b7290cbff50c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:50:31 ip-10-0-136-68 systemd[1]: run-utsns-6dfa5843\x2d2ea9\x2d4d82\x2dbcb6\x2d29cf48b7e1be.mount: Deactivated successfully.
Feb 23 19:50:31 ip-10-0-136-68 systemd[1]: run-ipcns-6dfa5843\x2d2ea9\x2d4d82\x2dbcb6\x2d29cf48b7e1be.mount: Deactivated successfully.
Feb 23 19:50:31 ip-10-0-136-68 systemd[1]: run-netns-6dfa5843\x2d2ea9\x2d4d82\x2dbcb6\x2d29cf48b7e1be.mount: Deactivated successfully.
Feb 23 19:50:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:31.292319908Z" level=info msg="runSandbox: deleting pod ID b5353873ed10df4b9dcc08fb17c7b268c430cedc5fb4713e85dbdea24734873d from idIndex" id=884a7308-c721-461d-90bd-b7290cbff50c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:50:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:31.292359973Z" level=info msg="runSandbox: removing pod sandbox b5353873ed10df4b9dcc08fb17c7b268c430cedc5fb4713e85dbdea24734873d" id=884a7308-c721-461d-90bd-b7290cbff50c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:50:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:31.292406592Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox b5353873ed10df4b9dcc08fb17c7b268c430cedc5fb4713e85dbdea24734873d" id=884a7308-c721-461d-90bd-b7290cbff50c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:50:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:31.292430012Z" level=info msg="runSandbox: unmounting shmPath for sandbox b5353873ed10df4b9dcc08fb17c7b268c430cedc5fb4713e85dbdea24734873d" id=884a7308-c721-461d-90bd-b7290cbff50c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:50:31 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-b5353873ed10df4b9dcc08fb17c7b268c430cedc5fb4713e85dbdea24734873d-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:50:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:31.296310554Z" level=info msg="runSandbox: removing pod sandbox from storage: b5353873ed10df4b9dcc08fb17c7b268c430cedc5fb4713e85dbdea24734873d" id=884a7308-c721-461d-90bd-b7290cbff50c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:50:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:31.297853726Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=884a7308-c721-461d-90bd-b7290cbff50c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:50:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:31.297882785Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=884a7308-c721-461d-90bd-b7290cbff50c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:50:31 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:50:31.298064 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(b5353873ed10df4b9dcc08fb17c7b268c430cedc5fb4713e85dbdea24734873d): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:50:31 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:50:31.298113 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(b5353873ed10df4b9dcc08fb17c7b268c430cedc5fb4713e85dbdea24734873d): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:50:31 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:50:31.298135 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(b5353873ed10df4b9dcc08fb17c7b268c430cedc5fb4713e85dbdea24734873d): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:50:31 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:50:31.298195 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(b5353873ed10df4b9dcc08fb17c7b268c430cedc5fb4713e85dbdea24734873d): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 19:50:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:50:34.872649 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:50:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:50:34.872834 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:50:39 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:50:39.216770 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:50:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:39.217143720Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=03d8f3ab-f1ae-4eeb-a126-a26de7a03eda name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:50:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:39.217202234Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:50:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:39.223077198Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:e17e0138388cec828832b3a06084ed561c6e97de2c548ec175a133d7d62cfd28 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/a2440055-0a88-4b25-88dd-281fe555ddf3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:50:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:39.223103334Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:50:43 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:50:43.216513 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 19:50:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:43.216936130Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=2dfd145c-0dc2-460d-a938-324d717e8ac6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:50:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:43.217004257Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:50:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:43.222439160Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:6dc39cf6bf67b44c9f38980a8fb6dea7bdd559322f666ae697e24437676fea5e UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/90878efc-0d19-4a1b-8369-38aea19e11ce Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:50:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:43.222463999Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:50:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:50:44.872238 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:50:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:50:44.872330 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:50:46 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:50:46.217022 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:50:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:46.217430470Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=6443aa0f-b65b-47c1-ab23-05125b959a14 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:50:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:46.217486722Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:50:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:46.223002799Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:a2560dad0f46f6b0d8fbbd9b1fd340ee5fc43df6058440d8ee9c0605adf4c2f2 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/7222c786-1cc3-4389-912f-b2901e34163f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:50:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:46.223030142Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:50:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:50:54.872449 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:50:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:50:54.872509 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: 
connection refused" Feb 23 19:50:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:50:56.291966 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:50:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:50:56.292324 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:50:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:50:56.292580 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:50:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:50:56.292607 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" 
probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:50:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:59.967169504Z" level=info msg="NetworkStart: stopping network for sandbox 2cb40b1b0601f49e4e0d35349c7b31fda4a2c39ef335e9832420a3a3cee058be" id=ea4d0f9f-e066-42cb-9ef8-989b248ae069 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:50:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:59.967312257Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:2cb40b1b0601f49e4e0d35349c7b31fda4a2c39ef335e9832420a3a3cee058be UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/1663d569-0b9a-4d7d-a03c-f3a0b8e3b479 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:50:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:59.967340455Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:50:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:59.967347574Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:50:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:50:59.967353517Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:51:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:51:04.872557 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:51:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:51:04.872607 2199 prober.go:109] "Probe failed" probeType="Liveness" 
pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:51:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:51:04.872630 2199 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 19:51:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:51:04.873134 2199 kuberuntime_manager.go:659] "Message for Container of pod" containerName="csi-driver" containerStatusID={Type:cri-o ID:2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc} pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" containerMessage="Container csi-driver failed liveness probe, will be restarted" Feb 23 19:51:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:51:04.873329 2199 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" containerID="cri-o://2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc" gracePeriod=30 Feb 23 19:51:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:51:04.873515872Z" level=info msg="Stopping container: 2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc (timeout: 30s)" id=1d9ca1c8-be55-41c8-81c9-2a3e2415b40a name=/runtime.v1.RuntimeService/StopContainer Feb 23 19:51:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:51:08.632988229Z" level=warning msg="Failed to find container exit file for 2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc: timed out waiting for the condition" id=1d9ca1c8-be55-41c8-81c9-2a3e2415b40a name=/runtime.v1.RuntimeService/StopContainer Feb 23 19:51:08 ip-10-0-136-68 systemd[1]: 
var-lib-containers-storage-overlay-8c8b3ba5e7b97bc46ffa12baa76de9ea12920f8d4cafa6369c00b3676a793bf5-merged.mount: Deactivated successfully. Feb 23 19:51:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:51:10.245111448Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(bcb8fe5a248661a80f34e70f9990192703aaa9be7ef8e437a789604cadcfc507): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=32f36340-c026-4a58-b8c3-410f6537adce name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:51:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:51:10.245160426Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox bcb8fe5a248661a80f34e70f9990192703aaa9be7ef8e437a789604cadcfc507" id=32f36340-c026-4a58-b8c3-410f6537adce name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:51:10 ip-10-0-136-68 systemd[1]: run-utsns-2be46e3c\x2d4c56\x2d4b9f\x2d8042\x2d6dd4343fe4df.mount: Deactivated successfully. Feb 23 19:51:10 ip-10-0-136-68 systemd[1]: run-ipcns-2be46e3c\x2d4c56\x2d4b9f\x2d8042\x2d6dd4343fe4df.mount: Deactivated successfully. Feb 23 19:51:10 ip-10-0-136-68 systemd[1]: run-netns-2be46e3c\x2d4c56\x2d4b9f\x2d8042\x2d6dd4343fe4df.mount: Deactivated successfully. 
Feb 23 19:51:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:51:10.275327065Z" level=info msg="runSandbox: deleting pod ID bcb8fe5a248661a80f34e70f9990192703aaa9be7ef8e437a789604cadcfc507 from idIndex" id=32f36340-c026-4a58-b8c3-410f6537adce name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:51:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:51:10.275362470Z" level=info msg="runSandbox: removing pod sandbox bcb8fe5a248661a80f34e70f9990192703aaa9be7ef8e437a789604cadcfc507" id=32f36340-c026-4a58-b8c3-410f6537adce name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:51:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:51:10.275400655Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox bcb8fe5a248661a80f34e70f9990192703aaa9be7ef8e437a789604cadcfc507" id=32f36340-c026-4a58-b8c3-410f6537adce name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:51:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:51:10.275428750Z" level=info msg="runSandbox: unmounting shmPath for sandbox bcb8fe5a248661a80f34e70f9990192703aaa9be7ef8e437a789604cadcfc507" id=32f36340-c026-4a58-b8c3-410f6537adce name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:51:10 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-bcb8fe5a248661a80f34e70f9990192703aaa9be7ef8e437a789604cadcfc507-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:51:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:51:10.281301915Z" level=info msg="runSandbox: removing pod sandbox from storage: bcb8fe5a248661a80f34e70f9990192703aaa9be7ef8e437a789604cadcfc507" id=32f36340-c026-4a58-b8c3-410f6537adce name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:51:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:51:10.282794334Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=32f36340-c026-4a58-b8c3-410f6537adce name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:51:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:51:10.282823756Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=32f36340-c026-4a58-b8c3-410f6537adce name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:51:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:51:10.283011 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(bcb8fe5a248661a80f34e70f9990192703aaa9be7ef8e437a789604cadcfc507): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:51:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:51:10.283061 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(bcb8fe5a248661a80f34e70f9990192703aaa9be7ef8e437a789604cadcfc507): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:51:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:51:10.283084 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(bcb8fe5a248661a80f34e70f9990192703aaa9be7ef8e437a789604cadcfc507): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:51:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:51:10.283146 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(bcb8fe5a248661a80f34e70f9990192703aaa9be7ef8e437a789604cadcfc507): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 19:51:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:51:12.408136014Z" level=warning msg="Failed to find container exit file for 2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc: timed out waiting for the condition" id=1d9ca1c8-be55-41c8-81c9-2a3e2415b40a name=/runtime.v1.RuntimeService/StopContainer Feb 23 19:51:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:51:12.410306109Z" level=info msg="Stopped container 2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=1d9ca1c8-be55-41c8-81c9-2a3e2415b40a name=/runtime.v1.RuntimeService/StopContainer Feb 23 19:51:12 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:51:12.410744 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:51:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:51:12.524321265Z" level=warning msg="Failed to find container exit file for 2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc: timed out waiting for the condition" id=dc795e41-6a63-43b4-9c18-e48338f5f31c name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:51:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:51:16.274112085Z" level=warning msg="Failed to find container exit file for bf5bdd059c782170979ce3197bdd59058e08e01da282b843dd61308635e1241e: timed out waiting for the condition" id=e92d847b-f9c5-4010-ba75-36e54e833105 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:51:16 ip-10-0-136-68 
kubenswrapper[2199]: I0223 19:51:16.275055 2199 generic.go:332] "Generic (PLEG): container finished" podID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerID="2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc" exitCode=-1 Feb 23 19:51:16 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:51:16.275093 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerDied Data:2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc} Feb 23 19:51:16 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:51:16.275127 2199 scope.go:115] "RemoveContainer" containerID="bf5bdd059c782170979ce3197bdd59058e08e01da282b843dd61308635e1241e" Feb 23 19:51:17 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:51:17.280444 2199 scope.go:115] "RemoveContainer" containerID="2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc" Feb 23 19:51:17 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:51:17.280858 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:51:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:51:20.025076718Z" level=warning msg="Failed to find container exit file for bf5bdd059c782170979ce3197bdd59058e08e01da282b843dd61308635e1241e: timed out waiting for the condition" id=038af1d9-0c8d-4ae2-8001-0c50ebccc047 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:51:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:51:21.216591 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:51:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:51:21.216975768Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=6eae12ab-5ff4-4571-a2cf-f0631b091021 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:51:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:51:21.217030777Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:51:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:51:21.222046753Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:d367658490802a7bdcf6b58cb495fb2f13e27cf78f0a632fda610cde8dca9cfb UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/f54380ca-2eee-4d50-bf76-568409621fed Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:51:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:51:21.222070262Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:51:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:51:23.785971186Z" level=warning msg="Failed to find container exit file for bf5bdd059c782170979ce3197bdd59058e08e01da282b843dd61308635e1241e: timed out waiting for the condition" id=086aff82-25d6-4be1-ac28-aaa8fd33d692 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:51:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:51:23.786511583Z" level=info msg="Removing container: bf5bdd059c782170979ce3197bdd59058e08e01da282b843dd61308635e1241e" id=fa9d0441-6ca8-4f61-8d7e-b6176bf8b11e name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 19:51:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:51:24.234972341Z" level=info msg="NetworkStart: stopping network for sandbox 
e17e0138388cec828832b3a06084ed561c6e97de2c548ec175a133d7d62cfd28" id=03d8f3ab-f1ae-4eeb-a126-a26de7a03eda name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:51:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:51:24.235079185Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:e17e0138388cec828832b3a06084ed561c6e97de2c548ec175a133d7d62cfd28 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/a2440055-0a88-4b25-88dd-281fe555ddf3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:51:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:51:24.235108242Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:51:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:51:24.235114959Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:51:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:51:24.235122725Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:51:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:51:26.292309 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:51:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:51:26.292613 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:51:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:51:26.293030 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:51:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:51:26.293066 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:51:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:51:27.544894998Z" level=warning msg="Failed to find container exit file for bf5bdd059c782170979ce3197bdd59058e08e01da282b843dd61308635e1241e: timed out waiting for the condition" id=fa9d0441-6ca8-4f61-8d7e-b6176bf8b11e name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 19:51:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:51:27.557503926Z" level=info msg="Removed container bf5bdd059c782170979ce3197bdd59058e08e01da282b843dd61308635e1241e: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=fa9d0441-6ca8-4f61-8d7e-b6176bf8b11e 
name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 19:51:28 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:51:28.216768 2199 scope.go:115] "RemoveContainer" containerID="2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc" Feb 23 19:51:28 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:51:28.217370 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:51:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:51:28.233982362Z" level=info msg="NetworkStart: stopping network for sandbox 6dc39cf6bf67b44c9f38980a8fb6dea7bdd559322f666ae697e24437676fea5e" id=2dfd145c-0dc2-460d-a938-324d717e8ac6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:51:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:51:28.234090082Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:6dc39cf6bf67b44c9f38980a8fb6dea7bdd559322f666ae697e24437676fea5e UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/90878efc-0d19-4a1b-8369-38aea19e11ce Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:51:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:51:28.234125570Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:51:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:51:28.234133922Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:51:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:51:28.234140512Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 
19:51:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:51:31.234952224Z" level=info msg="NetworkStart: stopping network for sandbox a2560dad0f46f6b0d8fbbd9b1fd340ee5fc43df6058440d8ee9c0605adf4c2f2" id=6443aa0f-b65b-47c1-ab23-05125b959a14 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:51:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:51:31.235077135Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:a2560dad0f46f6b0d8fbbd9b1fd340ee5fc43df6058440d8ee9c0605adf4c2f2 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/7222c786-1cc3-4389-912f-b2901e34163f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:51:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:51:31.235105809Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:51:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:51:31.235113235Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:51:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:51:31.235123729Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:51:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:51:32.057071485Z" level=warning msg="Failed to find container exit file for 2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc: timed out waiting for the condition" id=446c4d61-a3fe-4a0c-8256-ef7d42a2c497 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:51:39 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:51:39.216663 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: 
open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:51:39 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:51:39.216934 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:51:39 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:51:39.217190 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:51:39 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:51:39.217228 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:51:42 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:51:42.216678 2199 scope.go:115] "RemoveContainer" containerID="2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc" Feb 23 19:51:42 
ip-10-0-136-68 kubenswrapper[2199]: E0223 19:51:42.217283 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:51:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:51:44.977471045Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(2cb40b1b0601f49e4e0d35349c7b31fda4a2c39ef335e9832420a3a3cee058be): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=ea4d0f9f-e066-42cb-9ef8-989b248ae069 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:51:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:51:44.977523951Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 2cb40b1b0601f49e4e0d35349c7b31fda4a2c39ef335e9832420a3a3cee058be" id=ea4d0f9f-e066-42cb-9ef8-989b248ae069 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:51:44 ip-10-0-136-68 systemd[1]: run-utsns-1663d569\x2d0b9a\x2d4d7d\x2da03c\x2df3a0b8e3b479.mount: Deactivated successfully. Feb 23 19:51:44 ip-10-0-136-68 systemd[1]: run-ipcns-1663d569\x2d0b9a\x2d4d7d\x2da03c\x2df3a0b8e3b479.mount: Deactivated successfully. Feb 23 19:51:44 ip-10-0-136-68 systemd[1]: run-netns-1663d569\x2d0b9a\x2d4d7d\x2da03c\x2df3a0b8e3b479.mount: Deactivated successfully. 
Feb 23 19:51:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:51:45.010339453Z" level=info msg="runSandbox: deleting pod ID 2cb40b1b0601f49e4e0d35349c7b31fda4a2c39ef335e9832420a3a3cee058be from idIndex" id=ea4d0f9f-e066-42cb-9ef8-989b248ae069 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:51:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:51:45.010386349Z" level=info msg="runSandbox: removing pod sandbox 2cb40b1b0601f49e4e0d35349c7b31fda4a2c39ef335e9832420a3a3cee058be" id=ea4d0f9f-e066-42cb-9ef8-989b248ae069 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:51:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:51:45.010433587Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 2cb40b1b0601f49e4e0d35349c7b31fda4a2c39ef335e9832420a3a3cee058be" id=ea4d0f9f-e066-42cb-9ef8-989b248ae069 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:51:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:51:45.010452194Z" level=info msg="runSandbox: unmounting shmPath for sandbox 2cb40b1b0601f49e4e0d35349c7b31fda4a2c39ef335e9832420a3a3cee058be" id=ea4d0f9f-e066-42cb-9ef8-989b248ae069 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:51:45 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-2cb40b1b0601f49e4e0d35349c7b31fda4a2c39ef335e9832420a3a3cee058be-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:51:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:51:45.016321273Z" level=info msg="runSandbox: removing pod sandbox from storage: 2cb40b1b0601f49e4e0d35349c7b31fda4a2c39ef335e9832420a3a3cee058be" id=ea4d0f9f-e066-42cb-9ef8-989b248ae069 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:51:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:51:45.017931826Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=ea4d0f9f-e066-42cb-9ef8-989b248ae069 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:51:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:51:45.017968347Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=ea4d0f9f-e066-42cb-9ef8-989b248ae069 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:51:45 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:51:45.018218 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(2cb40b1b0601f49e4e0d35349c7b31fda4a2c39ef335e9832420a3a3cee058be): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:51:45 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:51:45.018312 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(2cb40b1b0601f49e4e0d35349c7b31fda4a2c39ef335e9832420a3a3cee058be): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:51:45 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:51:45.018350 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(2cb40b1b0601f49e4e0d35349c7b31fda4a2c39ef335e9832420a3a3cee058be): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:51:45 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:51:45.018437 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(2cb40b1b0601f49e4e0d35349c7b31fda4a2c39ef335e9832420a3a3cee058be): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 19:51:53 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:51:53.217002 2199 scope.go:115] "RemoveContainer" containerID="2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc" Feb 23 19:51:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:51:53.217634 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:51:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:51:56.292202 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:51:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:51:56.292474 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:51:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:51:56.292687 2199 remote_runtime.go:479] "ExecSync 
cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:51:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:51:56.292715 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:52:00 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:52:00.216884 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:52:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:00.217340565Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=032daf8c-224a-48a3-b28e-3690e4056551 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:52:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:00.217411203Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:52:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:00.222515929Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:10f75f13ec75d5222ecf6f487c6ea8d3c07ad8bdc4f58ab03ccf0a1c0f5a2a6f UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/6db70d3d-f57f-4d97-b469-8d7941a586d4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:52:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:00.222541350Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:52:06 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:52:06.216830 2199 scope.go:115] "RemoveContainer" containerID="2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc" Feb 23 19:52:06 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:52:06.217427 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:52:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:06.235227109Z" level=info msg="NetworkStart: stopping network for sandbox 
d367658490802a7bdcf6b58cb495fb2f13e27cf78f0a632fda610cde8dca9cfb" id=6eae12ab-5ff4-4571-a2cf-f0631b091021 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:52:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:06.235368322Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:d367658490802a7bdcf6b58cb495fb2f13e27cf78f0a632fda610cde8dca9cfb UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/f54380ca-2eee-4d50-bf76-568409621fed Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:52:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:06.235398231Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:52:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:06.235408260Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:52:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:06.235415192Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:52:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:09.245094006Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(e17e0138388cec828832b3a06084ed561c6e97de2c548ec175a133d7d62cfd28): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=03d8f3ab-f1ae-4eeb-a126-a26de7a03eda name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:52:09 
ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:09.245139689Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e17e0138388cec828832b3a06084ed561c6e97de2c548ec175a133d7d62cfd28" id=03d8f3ab-f1ae-4eeb-a126-a26de7a03eda name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:52:09 ip-10-0-136-68 systemd[1]: run-utsns-a2440055\x2d0a88\x2d4b25\x2d88dd\x2d281fe555ddf3.mount: Deactivated successfully. Feb 23 19:52:09 ip-10-0-136-68 systemd[1]: run-ipcns-a2440055\x2d0a88\x2d4b25\x2d88dd\x2d281fe555ddf3.mount: Deactivated successfully. Feb 23 19:52:09 ip-10-0-136-68 systemd[1]: run-netns-a2440055\x2d0a88\x2d4b25\x2d88dd\x2d281fe555ddf3.mount: Deactivated successfully. Feb 23 19:52:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:09.276335205Z" level=info msg="runSandbox: deleting pod ID e17e0138388cec828832b3a06084ed561c6e97de2c548ec175a133d7d62cfd28 from idIndex" id=03d8f3ab-f1ae-4eeb-a126-a26de7a03eda name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:52:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:09.276374592Z" level=info msg="runSandbox: removing pod sandbox e17e0138388cec828832b3a06084ed561c6e97de2c548ec175a133d7d62cfd28" id=03d8f3ab-f1ae-4eeb-a126-a26de7a03eda name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:52:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:09.276419315Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e17e0138388cec828832b3a06084ed561c6e97de2c548ec175a133d7d62cfd28" id=03d8f3ab-f1ae-4eeb-a126-a26de7a03eda name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:52:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:09.276432528Z" level=info msg="runSandbox: unmounting shmPath for sandbox e17e0138388cec828832b3a06084ed561c6e97de2c548ec175a133d7d62cfd28" id=03d8f3ab-f1ae-4eeb-a126-a26de7a03eda name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:52:09 ip-10-0-136-68 systemd[1]: 
run-containers-storage-overlay\x2dcontainers-e17e0138388cec828832b3a06084ed561c6e97de2c548ec175a133d7d62cfd28-userdata-shm.mount: Deactivated successfully. Feb 23 19:52:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:09.282310043Z" level=info msg="runSandbox: removing pod sandbox from storage: e17e0138388cec828832b3a06084ed561c6e97de2c548ec175a133d7d62cfd28" id=03d8f3ab-f1ae-4eeb-a126-a26de7a03eda name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:52:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:09.283800520Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=03d8f3ab-f1ae-4eeb-a126-a26de7a03eda name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:52:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:09.283830441Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=03d8f3ab-f1ae-4eeb-a126-a26de7a03eda name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:52:09 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:52:09.284032 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(e17e0138388cec828832b3a06084ed561c6e97de2c548ec175a133d7d62cfd28): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:52:09 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:52:09.284086 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(e17e0138388cec828832b3a06084ed561c6e97de2c548ec175a133d7d62cfd28): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:52:09 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:52:09.284114 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(e17e0138388cec828832b3a06084ed561c6e97de2c548ec175a133d7d62cfd28): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:52:09 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:52:09.284168 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(e17e0138388cec828832b3a06084ed561c6e97de2c548ec175a133d7d62cfd28): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 19:52:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:13.243361670Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(6dc39cf6bf67b44c9f38980a8fb6dea7bdd559322f666ae697e24437676fea5e): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=2dfd145c-0dc2-460d-a938-324d717e8ac6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:52:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:13.243407657Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 6dc39cf6bf67b44c9f38980a8fb6dea7bdd559322f666ae697e24437676fea5e" id=2dfd145c-0dc2-460d-a938-324d717e8ac6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:52:13 ip-10-0-136-68 systemd[1]: run-utsns-90878efc\x2d0d19\x2d4a1b\x2d8369\x2d38aea19e11ce.mount: Deactivated successfully. Feb 23 19:52:13 ip-10-0-136-68 systemd[1]: run-ipcns-90878efc\x2d0d19\x2d4a1b\x2d8369\x2d38aea19e11ce.mount: Deactivated successfully. Feb 23 19:52:13 ip-10-0-136-68 systemd[1]: run-netns-90878efc\x2d0d19\x2d4a1b\x2d8369\x2d38aea19e11ce.mount: Deactivated successfully. 
Feb 23 19:52:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:13.274330631Z" level=info msg="runSandbox: deleting pod ID 6dc39cf6bf67b44c9f38980a8fb6dea7bdd559322f666ae697e24437676fea5e from idIndex" id=2dfd145c-0dc2-460d-a938-324d717e8ac6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:52:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:13.274366967Z" level=info msg="runSandbox: removing pod sandbox 6dc39cf6bf67b44c9f38980a8fb6dea7bdd559322f666ae697e24437676fea5e" id=2dfd145c-0dc2-460d-a938-324d717e8ac6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:52:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:13.274394799Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 6dc39cf6bf67b44c9f38980a8fb6dea7bdd559322f666ae697e24437676fea5e" id=2dfd145c-0dc2-460d-a938-324d717e8ac6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:52:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:13.274419696Z" level=info msg="runSandbox: unmounting shmPath for sandbox 6dc39cf6bf67b44c9f38980a8fb6dea7bdd559322f666ae697e24437676fea5e" id=2dfd145c-0dc2-460d-a938-324d717e8ac6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:52:13 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-6dc39cf6bf67b44c9f38980a8fb6dea7bdd559322f666ae697e24437676fea5e-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:52:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:13.280309700Z" level=info msg="runSandbox: removing pod sandbox from storage: 6dc39cf6bf67b44c9f38980a8fb6dea7bdd559322f666ae697e24437676fea5e" id=2dfd145c-0dc2-460d-a938-324d717e8ac6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:52:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:13.281741247Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=2dfd145c-0dc2-460d-a938-324d717e8ac6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:52:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:13.281767904Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=2dfd145c-0dc2-460d-a938-324d717e8ac6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:52:13 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:52:13.281943 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(6dc39cf6bf67b44c9f38980a8fb6dea7bdd559322f666ae697e24437676fea5e): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:52:13 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:52:13.282009 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(6dc39cf6bf67b44c9f38980a8fb6dea7bdd559322f666ae697e24437676fea5e): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:52:13 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:52:13.282051 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(6dc39cf6bf67b44c9f38980a8fb6dea7bdd559322f666ae697e24437676fea5e): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:52:13 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:52:13.282147 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(6dc39cf6bf67b44c9f38980a8fb6dea7bdd559322f666ae697e24437676fea5e): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 19:52:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:16.245459117Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(a2560dad0f46f6b0d8fbbd9b1fd340ee5fc43df6058440d8ee9c0605adf4c2f2): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=6443aa0f-b65b-47c1-ab23-05125b959a14 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:52:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:16.245513090Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a2560dad0f46f6b0d8fbbd9b1fd340ee5fc43df6058440d8ee9c0605adf4c2f2" id=6443aa0f-b65b-47c1-ab23-05125b959a14 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:52:16 ip-10-0-136-68 systemd[1]: run-utsns-7222c786\x2d1cc3\x2d4389\x2d912f\x2db2901e34163f.mount: Deactivated successfully. Feb 23 19:52:16 ip-10-0-136-68 systemd[1]: run-ipcns-7222c786\x2d1cc3\x2d4389\x2d912f\x2db2901e34163f.mount: Deactivated successfully. Feb 23 19:52:16 ip-10-0-136-68 systemd[1]: run-netns-7222c786\x2d1cc3\x2d4389\x2d912f\x2db2901e34163f.mount: Deactivated successfully. 
Feb 23 19:52:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:16.261326729Z" level=info msg="runSandbox: deleting pod ID a2560dad0f46f6b0d8fbbd9b1fd340ee5fc43df6058440d8ee9c0605adf4c2f2 from idIndex" id=6443aa0f-b65b-47c1-ab23-05125b959a14 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:52:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:16.261362738Z" level=info msg="runSandbox: removing pod sandbox a2560dad0f46f6b0d8fbbd9b1fd340ee5fc43df6058440d8ee9c0605adf4c2f2" id=6443aa0f-b65b-47c1-ab23-05125b959a14 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:52:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:16.261389039Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a2560dad0f46f6b0d8fbbd9b1fd340ee5fc43df6058440d8ee9c0605adf4c2f2" id=6443aa0f-b65b-47c1-ab23-05125b959a14 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:52:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:16.261402027Z" level=info msg="runSandbox: unmounting shmPath for sandbox a2560dad0f46f6b0d8fbbd9b1fd340ee5fc43df6058440d8ee9c0605adf4c2f2" id=6443aa0f-b65b-47c1-ab23-05125b959a14 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:52:16 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-a2560dad0f46f6b0d8fbbd9b1fd340ee5fc43df6058440d8ee9c0605adf4c2f2-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:52:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:16.267301673Z" level=info msg="runSandbox: removing pod sandbox from storage: a2560dad0f46f6b0d8fbbd9b1fd340ee5fc43df6058440d8ee9c0605adf4c2f2" id=6443aa0f-b65b-47c1-ab23-05125b959a14 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:52:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:16.268796908Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=6443aa0f-b65b-47c1-ab23-05125b959a14 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:52:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:16.268830334Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=6443aa0f-b65b-47c1-ab23-05125b959a14 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:52:16 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:52:16.269024 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(a2560dad0f46f6b0d8fbbd9b1fd340ee5fc43df6058440d8ee9c0605adf4c2f2): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:52:16 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:52:16.269079 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(a2560dad0f46f6b0d8fbbd9b1fd340ee5fc43df6058440d8ee9c0605adf4c2f2): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:52:16 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:52:16.269105 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(a2560dad0f46f6b0d8fbbd9b1fd340ee5fc43df6058440d8ee9c0605adf4c2f2): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:52:16 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:52:16.269172 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(a2560dad0f46f6b0d8fbbd9b1fd340ee5fc43df6058440d8ee9c0605adf4c2f2): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 19:52:18 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:52:18.216408 2199 scope.go:115] "RemoveContainer" containerID="2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc" Feb 23 19:52:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:52:18.216808 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:52:22 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:52:22.216764 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:52:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:22.217102575Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=a3d85ce8-653a-4498-8bcf-7e49dfe7a6b1 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:52:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:22.217162308Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:52:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:22.223092516Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:4eb123ddbd85bd783980c0d7ea018f3eff62594b8cfa792fe0ea73801aed4de6 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/f71cc2fa-c532-4e52-aef1-9709d4527360 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:52:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 
19:52:22.223126638Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:52:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:52:26.292649 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:52:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:52:26.292860 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:52:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:52:26.293097 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:52:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:52:26.293133 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:52:28 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:52:28.216899 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 19:52:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:28.217201911Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=6e348176-39c7-4932-9cc4-ab64c3d3dadd name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:52:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:28.217292414Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:52:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:28.223388580Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:88b244b504f9f116f354e544819693a0893eeb46eda2677e4054636fdeb6656e UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/f231e87a-1c4b-48e8-9d03-eecc7eb32b9d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:52:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:28.223415601Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:52:29 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:52:29.216547 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:52:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:29.216853655Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=a280e6dc-d005-4a8e-a925-dfd88ea53b36 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:52:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:29.216905693Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:52:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:29.222345581Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:1b917c2fa9ae47e4dc599a3ebfc9ae09276bc11c6d19cb79e5d864c5714b34df UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/6863de88-8374-426f-ada3-b2ab1dcdaabb Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:52:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:29.222371796Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:52:33 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:52:33.216946 2199 scope.go:115] "RemoveContainer" containerID="2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc" Feb 23 19:52:33 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:52:33.217541 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:52:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 
19:52:44.216714 2199 scope.go:115] "RemoveContainer" containerID="2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc" Feb 23 19:52:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:52:44.217323 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:52:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:45.235681930Z" level=info msg="NetworkStart: stopping network for sandbox 10f75f13ec75d5222ecf6f487c6ea8d3c07ad8bdc4f58ab03ccf0a1c0f5a2a6f" id=032daf8c-224a-48a3-b28e-3690e4056551 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:52:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:45.235803467Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:10f75f13ec75d5222ecf6f487c6ea8d3c07ad8bdc4f58ab03ccf0a1c0f5a2a6f UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/6db70d3d-f57f-4d97-b469-8d7941a586d4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:52:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:45.235842191Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:52:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:45.235853777Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:52:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:45.235863753Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:52:46 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:52:46.217079 2199 
remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:52:46 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:52:46.217944 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:52:46 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:52:46.218392 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:52:46 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:52:46.218499 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" 
podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:52:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:51.245657094Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(d367658490802a7bdcf6b58cb495fb2f13e27cf78f0a632fda610cde8dca9cfb): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=6eae12ab-5ff4-4571-a2cf-f0631b091021 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:52:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:51.245713141Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox d367658490802a7bdcf6b58cb495fb2f13e27cf78f0a632fda610cde8dca9cfb" id=6eae12ab-5ff4-4571-a2cf-f0631b091021 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:52:51 ip-10-0-136-68 systemd[1]: run-utsns-f54380ca\x2d2eee\x2d4d50\x2dbf76\x2d568409621fed.mount: Deactivated successfully. Feb 23 19:52:51 ip-10-0-136-68 systemd[1]: run-ipcns-f54380ca\x2d2eee\x2d4d50\x2dbf76\x2d568409621fed.mount: Deactivated successfully. Feb 23 19:52:51 ip-10-0-136-68 systemd[1]: run-netns-f54380ca\x2d2eee\x2d4d50\x2dbf76\x2d568409621fed.mount: Deactivated successfully. 
Feb 23 19:52:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:51.263319666Z" level=info msg="runSandbox: deleting pod ID d367658490802a7bdcf6b58cb495fb2f13e27cf78f0a632fda610cde8dca9cfb from idIndex" id=6eae12ab-5ff4-4571-a2cf-f0631b091021 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:52:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:51.263355059Z" level=info msg="runSandbox: removing pod sandbox d367658490802a7bdcf6b58cb495fb2f13e27cf78f0a632fda610cde8dca9cfb" id=6eae12ab-5ff4-4571-a2cf-f0631b091021 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:52:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:51.263392236Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox d367658490802a7bdcf6b58cb495fb2f13e27cf78f0a632fda610cde8dca9cfb" id=6eae12ab-5ff4-4571-a2cf-f0631b091021 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:52:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:51.263411194Z" level=info msg="runSandbox: unmounting shmPath for sandbox d367658490802a7bdcf6b58cb495fb2f13e27cf78f0a632fda610cde8dca9cfb" id=6eae12ab-5ff4-4571-a2cf-f0631b091021 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:52:51 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-d367658490802a7bdcf6b58cb495fb2f13e27cf78f0a632fda610cde8dca9cfb-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:52:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:51.268299287Z" level=info msg="runSandbox: removing pod sandbox from storage: d367658490802a7bdcf6b58cb495fb2f13e27cf78f0a632fda610cde8dca9cfb" id=6eae12ab-5ff4-4571-a2cf-f0631b091021 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:52:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:51.269864322Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=6eae12ab-5ff4-4571-a2cf-f0631b091021 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:52:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:52:51.269893585Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=6eae12ab-5ff4-4571-a2cf-f0631b091021 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:52:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:52:51.270099 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(d367658490802a7bdcf6b58cb495fb2f13e27cf78f0a632fda610cde8dca9cfb): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 19:52:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:52:51.270150 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(d367658490802a7bdcf6b58cb495fb2f13e27cf78f0a632fda610cde8dca9cfb): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 19:52:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:52:51.270179 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(d367658490802a7bdcf6b58cb495fb2f13e27cf78f0a632fda610cde8dca9cfb): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 19:52:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:52:51.270235 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(d367658490802a7bdcf6b58cb495fb2f13e27cf78f0a632fda610cde8dca9cfb): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1
Feb 23 19:52:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:52:56.292521 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:52:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:52:56.292851 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:52:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:52:56.293081 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:52:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:52:56.293110 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 19:52:58 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:52:58.216640 2199 scope.go:115] "RemoveContainer" containerID="2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc"
Feb 23 19:52:58 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:52:58.217157 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 19:53:02 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:53:02.217014 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 19:53:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:02.217322090Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=d07b10d4-8803-4e08-b65b-95210d9ccd72 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:53:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:02.217377890Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 19:53:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:02.225947861Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:249daeed2d12310c733e5bdcb12aa6d4e0624da1bd28b6168006c62422396519 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/cb3bb782-92ad-47e6-b5fa-368aa4236629 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:53:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:02.226094974Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:53:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:07.235034947Z" level=info msg="NetworkStart: stopping network for sandbox 4eb123ddbd85bd783980c0d7ea018f3eff62594b8cfa792fe0ea73801aed4de6" id=a3d85ce8-653a-4498-8bcf-7e49dfe7a6b1 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:53:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:07.235160677Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:4eb123ddbd85bd783980c0d7ea018f3eff62594b8cfa792fe0ea73801aed4de6 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/f71cc2fa-c532-4e52-aef1-9709d4527360 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:53:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:07.235191009Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 19:53:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:07.235198686Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 19:53:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:07.235205528Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:53:12 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:53:12.217046 2199 scope.go:115] "RemoveContainer" containerID="2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc"
Feb 23 19:53:12 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:53:12.217649 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 19:53:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:13.236599767Z" level=info msg="NetworkStart: stopping network for sandbox 88b244b504f9f116f354e544819693a0893eeb46eda2677e4054636fdeb6656e" id=6e348176-39c7-4932-9cc4-ab64c3d3dadd name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:53:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:13.236722999Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:88b244b504f9f116f354e544819693a0893eeb46eda2677e4054636fdeb6656e UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/f231e87a-1c4b-48e8-9d03-eecc7eb32b9d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:53:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:13.236753902Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 19:53:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:13.236761817Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 19:53:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:13.236769097Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:53:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:14.233768042Z" level=info msg="NetworkStart: stopping network for sandbox 1b917c2fa9ae47e4dc599a3ebfc9ae09276bc11c6d19cb79e5d864c5714b34df" id=a280e6dc-d005-4a8e-a925-dfd88ea53b36 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:53:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:14.233879226Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:1b917c2fa9ae47e4dc599a3ebfc9ae09276bc11c6d19cb79e5d864c5714b34df UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/6863de88-8374-426f-ada3-b2ab1dcdaabb Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:53:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:14.233906833Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 19:53:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:14.233914465Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 19:53:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:14.233923283Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:53:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:53:24.216989 2199 scope.go:115] "RemoveContainer" containerID="2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc"
Feb 23 19:53:24 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:53:24.217392 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 19:53:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:53:26.292418 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:53:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:53:26.292658 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:53:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:53:26.292861 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:53:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:53:26.292891 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 19:53:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:30.246167411Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(10f75f13ec75d5222ecf6f487c6ea8d3c07ad8bdc4f58ab03ccf0a1c0f5a2a6f): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=032daf8c-224a-48a3-b28e-3690e4056551 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:53:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:30.246211500Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 10f75f13ec75d5222ecf6f487c6ea8d3c07ad8bdc4f58ab03ccf0a1c0f5a2a6f" id=032daf8c-224a-48a3-b28e-3690e4056551 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:53:30 ip-10-0-136-68 systemd[1]: run-utsns-6db70d3d\x2df57f\x2d4d97\x2db469\x2d8d7941a586d4.mount: Deactivated successfully.
Feb 23 19:53:30 ip-10-0-136-68 systemd[1]: run-ipcns-6db70d3d\x2df57f\x2d4d97\x2db469\x2d8d7941a586d4.mount: Deactivated successfully.
Feb 23 19:53:30 ip-10-0-136-68 systemd[1]: run-netns-6db70d3d\x2df57f\x2d4d97\x2db469\x2d8d7941a586d4.mount: Deactivated successfully.
Feb 23 19:53:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:30.269327598Z" level=info msg="runSandbox: deleting pod ID 10f75f13ec75d5222ecf6f487c6ea8d3c07ad8bdc4f58ab03ccf0a1c0f5a2a6f from idIndex" id=032daf8c-224a-48a3-b28e-3690e4056551 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:53:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:30.269361672Z" level=info msg="runSandbox: removing pod sandbox 10f75f13ec75d5222ecf6f487c6ea8d3c07ad8bdc4f58ab03ccf0a1c0f5a2a6f" id=032daf8c-224a-48a3-b28e-3690e4056551 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:53:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:30.269384504Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 10f75f13ec75d5222ecf6f487c6ea8d3c07ad8bdc4f58ab03ccf0a1c0f5a2a6f" id=032daf8c-224a-48a3-b28e-3690e4056551 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:53:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:30.269396375Z" level=info msg="runSandbox: unmounting shmPath for sandbox 10f75f13ec75d5222ecf6f487c6ea8d3c07ad8bdc4f58ab03ccf0a1c0f5a2a6f" id=032daf8c-224a-48a3-b28e-3690e4056551 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:53:30 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-10f75f13ec75d5222ecf6f487c6ea8d3c07ad8bdc4f58ab03ccf0a1c0f5a2a6f-userdata-shm.mount: Deactivated successfully.
Feb 23 19:53:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:30.276299266Z" level=info msg="runSandbox: removing pod sandbox from storage: 10f75f13ec75d5222ecf6f487c6ea8d3c07ad8bdc4f58ab03ccf0a1c0f5a2a6f" id=032daf8c-224a-48a3-b28e-3690e4056551 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:53:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:30.277901495Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=032daf8c-224a-48a3-b28e-3690e4056551 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:53:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:30.277929049Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=032daf8c-224a-48a3-b28e-3690e4056551 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:53:30 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:53:30.278082 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(10f75f13ec75d5222ecf6f487c6ea8d3c07ad8bdc4f58ab03ccf0a1c0f5a2a6f): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 19:53:30 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:53:30.278126 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(10f75f13ec75d5222ecf6f487c6ea8d3c07ad8bdc4f58ab03ccf0a1c0f5a2a6f): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk"
Feb 23 19:53:30 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:53:30.278148 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(10f75f13ec75d5222ecf6f487c6ea8d3c07ad8bdc4f58ab03ccf0a1c0f5a2a6f): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk"
Feb 23 19:53:30 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:53:30.278200 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(10f75f13ec75d5222ecf6f487c6ea8d3c07ad8bdc4f58ab03ccf0a1c0f5a2a6f): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687
Feb 23 19:53:39 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:53:39.217157 2199 scope.go:115] "RemoveContainer" containerID="2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc"
Feb 23 19:53:39 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:53:39.217590 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 19:53:45 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:53:45.216736 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk"
Feb 23 19:53:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:45.217156217Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=4187fd47-3ded-4cc3-a7fd-e142dd7bfde0 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:53:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:45.217209593Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 19:53:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:45.222622831Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:e0ec42adce658ec8d2edb58eed8941fac57c481141126b2264b51bd8d9af3506 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/112bc7f4-819e-4362-8c74-ecb01e1f1ddd Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:53:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:45.222658948Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:53:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:47.237857358Z" level=info msg="NetworkStart: stopping network for sandbox 249daeed2d12310c733e5bdcb12aa6d4e0624da1bd28b6168006c62422396519" id=d07b10d4-8803-4e08-b65b-95210d9ccd72 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:53:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:47.237975666Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:249daeed2d12310c733e5bdcb12aa6d4e0624da1bd28b6168006c62422396519 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/cb3bb782-92ad-47e6-b5fa-368aa4236629 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:53:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:47.238009404Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 19:53:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:47.238017148Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 19:53:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:47.238024072Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:53:52 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:53:52.217063 2199 scope.go:115] "RemoveContainer" containerID="2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc"
Feb 23 19:53:52 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:53:52.217702 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 19:53:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:52.245366874Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(4eb123ddbd85bd783980c0d7ea018f3eff62594b8cfa792fe0ea73801aed4de6): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a3d85ce8-653a-4498-8bcf-7e49dfe7a6b1 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:53:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:52.245413613Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 4eb123ddbd85bd783980c0d7ea018f3eff62594b8cfa792fe0ea73801aed4de6" id=a3d85ce8-653a-4498-8bcf-7e49dfe7a6b1 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:53:52 ip-10-0-136-68 systemd[1]: run-utsns-f71cc2fa\x2dc532\x2d4e52\x2daef1\x2d9709d4527360.mount: Deactivated successfully.
Feb 23 19:53:52 ip-10-0-136-68 systemd[1]: run-ipcns-f71cc2fa\x2dc532\x2d4e52\x2daef1\x2d9709d4527360.mount: Deactivated successfully.
Feb 23 19:53:52 ip-10-0-136-68 systemd[1]: run-netns-f71cc2fa\x2dc532\x2d4e52\x2daef1\x2d9709d4527360.mount: Deactivated successfully.
Feb 23 19:53:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:52.271333978Z" level=info msg="runSandbox: deleting pod ID 4eb123ddbd85bd783980c0d7ea018f3eff62594b8cfa792fe0ea73801aed4de6 from idIndex" id=a3d85ce8-653a-4498-8bcf-7e49dfe7a6b1 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:53:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:52.271371358Z" level=info msg="runSandbox: removing pod sandbox 4eb123ddbd85bd783980c0d7ea018f3eff62594b8cfa792fe0ea73801aed4de6" id=a3d85ce8-653a-4498-8bcf-7e49dfe7a6b1 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:53:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:52.271399367Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 4eb123ddbd85bd783980c0d7ea018f3eff62594b8cfa792fe0ea73801aed4de6" id=a3d85ce8-653a-4498-8bcf-7e49dfe7a6b1 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:53:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:52.271426956Z" level=info msg="runSandbox: unmounting shmPath for sandbox 4eb123ddbd85bd783980c0d7ea018f3eff62594b8cfa792fe0ea73801aed4de6" id=a3d85ce8-653a-4498-8bcf-7e49dfe7a6b1 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:53:52 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-4eb123ddbd85bd783980c0d7ea018f3eff62594b8cfa792fe0ea73801aed4de6-userdata-shm.mount: Deactivated successfully.
Feb 23 19:53:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:52.277314141Z" level=info msg="runSandbox: removing pod sandbox from storage: 4eb123ddbd85bd783980c0d7ea018f3eff62594b8cfa792fe0ea73801aed4de6" id=a3d85ce8-653a-4498-8bcf-7e49dfe7a6b1 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:53:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:52.279453601Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=a3d85ce8-653a-4498-8bcf-7e49dfe7a6b1 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:53:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:52.279498715Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=a3d85ce8-653a-4498-8bcf-7e49dfe7a6b1 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:53:52 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:53:52.281285 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(4eb123ddbd85bd783980c0d7ea018f3eff62594b8cfa792fe0ea73801aed4de6): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 19:53:52 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:53:52.281435 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(4eb123ddbd85bd783980c0d7ea018f3eff62594b8cfa792fe0ea73801aed4de6): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz"
Feb 23 19:53:52 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:53:52.281470 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(4eb123ddbd85bd783980c0d7ea018f3eff62594b8cfa792fe0ea73801aed4de6): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz"
Feb 23 19:53:52 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:53:52.281548 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(4eb123ddbd85bd783980c0d7ea018f3eff62594b8cfa792fe0ea73801aed4de6): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7
Feb 23 19:53:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:53:56.292028 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:53:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:53:56.292278 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:53:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:53:56.292480 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:53:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:53:56.292505 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:53:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:58.246116783Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(88b244b504f9f116f354e544819693a0893eeb46eda2677e4054636fdeb6656e): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=6e348176-39c7-4932-9cc4-ab64c3d3dadd name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:53:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:58.246161179Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 88b244b504f9f116f354e544819693a0893eeb46eda2677e4054636fdeb6656e" id=6e348176-39c7-4932-9cc4-ab64c3d3dadd name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:53:58 ip-10-0-136-68 systemd[1]: run-utsns-f231e87a\x2d1c4b\x2d48e8\x2d9d03\x2deecc7eb32b9d.mount: Deactivated successfully. Feb 23 19:53:58 ip-10-0-136-68 systemd[1]: run-ipcns-f231e87a\x2d1c4b\x2d48e8\x2d9d03\x2deecc7eb32b9d.mount: Deactivated successfully. Feb 23 19:53:58 ip-10-0-136-68 systemd[1]: run-netns-f231e87a\x2d1c4b\x2d48e8\x2d9d03\x2deecc7eb32b9d.mount: Deactivated successfully. 
Feb 23 19:53:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:58.265326269Z" level=info msg="runSandbox: deleting pod ID 88b244b504f9f116f354e544819693a0893eeb46eda2677e4054636fdeb6656e from idIndex" id=6e348176-39c7-4932-9cc4-ab64c3d3dadd name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:53:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:58.265357563Z" level=info msg="runSandbox: removing pod sandbox 88b244b504f9f116f354e544819693a0893eeb46eda2677e4054636fdeb6656e" id=6e348176-39c7-4932-9cc4-ab64c3d3dadd name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:53:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:58.265383212Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 88b244b504f9f116f354e544819693a0893eeb46eda2677e4054636fdeb6656e" id=6e348176-39c7-4932-9cc4-ab64c3d3dadd name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:53:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:58.265399910Z" level=info msg="runSandbox: unmounting shmPath for sandbox 88b244b504f9f116f354e544819693a0893eeb46eda2677e4054636fdeb6656e" id=6e348176-39c7-4932-9cc4-ab64c3d3dadd name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:53:58 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-88b244b504f9f116f354e544819693a0893eeb46eda2677e4054636fdeb6656e-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:53:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:58.271306452Z" level=info msg="runSandbox: removing pod sandbox from storage: 88b244b504f9f116f354e544819693a0893eeb46eda2677e4054636fdeb6656e" id=6e348176-39c7-4932-9cc4-ab64c3d3dadd name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:53:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:58.272753523Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=6e348176-39c7-4932-9cc4-ab64c3d3dadd name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:53:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:58.272782547Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=6e348176-39c7-4932-9cc4-ab64c3d3dadd name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:53:58 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:53:58.272962 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(88b244b504f9f116f354e544819693a0893eeb46eda2677e4054636fdeb6656e): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:53:58 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:53:58.273010 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(88b244b504f9f116f354e544819693a0893eeb46eda2677e4054636fdeb6656e): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:53:58 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:53:58.273032 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(88b244b504f9f116f354e544819693a0893eeb46eda2677e4054636fdeb6656e): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:53:58 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:53:58.273086 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(88b244b504f9f116f354e544819693a0893eeb46eda2677e4054636fdeb6656e): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 19:53:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:59.243136906Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(1b917c2fa9ae47e4dc599a3ebfc9ae09276bc11c6d19cb79e5d864c5714b34df): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a280e6dc-d005-4a8e-a925-dfd88ea53b36 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:53:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:59.243190535Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 1b917c2fa9ae47e4dc599a3ebfc9ae09276bc11c6d19cb79e5d864c5714b34df" id=a280e6dc-d005-4a8e-a925-dfd88ea53b36 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:53:59 ip-10-0-136-68 systemd[1]: run-utsns-6863de88\x2d8374\x2d426f\x2dada3\x2db2ab1dcdaabb.mount: Deactivated successfully. Feb 23 19:53:59 ip-10-0-136-68 systemd[1]: run-ipcns-6863de88\x2d8374\x2d426f\x2dada3\x2db2ab1dcdaabb.mount: Deactivated successfully. Feb 23 19:53:59 ip-10-0-136-68 systemd[1]: run-netns-6863de88\x2d8374\x2d426f\x2dada3\x2db2ab1dcdaabb.mount: Deactivated successfully. 
Feb 23 19:53:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:59.281326809Z" level=info msg="runSandbox: deleting pod ID 1b917c2fa9ae47e4dc599a3ebfc9ae09276bc11c6d19cb79e5d864c5714b34df from idIndex" id=a280e6dc-d005-4a8e-a925-dfd88ea53b36 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:53:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:59.281360373Z" level=info msg="runSandbox: removing pod sandbox 1b917c2fa9ae47e4dc599a3ebfc9ae09276bc11c6d19cb79e5d864c5714b34df" id=a280e6dc-d005-4a8e-a925-dfd88ea53b36 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:53:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:59.281396796Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 1b917c2fa9ae47e4dc599a3ebfc9ae09276bc11c6d19cb79e5d864c5714b34df" id=a280e6dc-d005-4a8e-a925-dfd88ea53b36 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:53:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:59.281417337Z" level=info msg="runSandbox: unmounting shmPath for sandbox 1b917c2fa9ae47e4dc599a3ebfc9ae09276bc11c6d19cb79e5d864c5714b34df" id=a280e6dc-d005-4a8e-a925-dfd88ea53b36 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:53:59 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-1b917c2fa9ae47e4dc599a3ebfc9ae09276bc11c6d19cb79e5d864c5714b34df-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:53:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:59.290303401Z" level=info msg="runSandbox: removing pod sandbox from storage: 1b917c2fa9ae47e4dc599a3ebfc9ae09276bc11c6d19cb79e5d864c5714b34df" id=a280e6dc-d005-4a8e-a925-dfd88ea53b36 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:53:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:59.291759080Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=a280e6dc-d005-4a8e-a925-dfd88ea53b36 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:53:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:53:59.291793265Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=a280e6dc-d005-4a8e-a925-dfd88ea53b36 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:53:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:53:59.291968 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(1b917c2fa9ae47e4dc599a3ebfc9ae09276bc11c6d19cb79e5d864c5714b34df): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:53:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:53:59.292017 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(1b917c2fa9ae47e4dc599a3ebfc9ae09276bc11c6d19cb79e5d864c5714b34df): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:53:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:53:59.292041 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(1b917c2fa9ae47e4dc599a3ebfc9ae09276bc11c6d19cb79e5d864c5714b34df): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:53:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:53:59.292099 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(1b917c2fa9ae47e4dc599a3ebfc9ae09276bc11c6d19cb79e5d864c5714b34df): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 19:54:03 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:54:03.216514 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:54:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:54:03.216918432Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=99ba1f63-3c15-4c08-88e6-5d46ea1d90ce name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:54:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:54:03.216986421Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:54:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:54:03.222753485Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:0f31463bf5c61743ad0acfabc4f7df99679500426b4972e68b2e1484f290ce1c UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/117995eb-07a8-42c7-9e72-65c86349332a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:54:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:54:03.222792648Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:54:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:54:04.217405 2199 scope.go:115] "RemoveContainer" containerID="2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc" Feb 23 19:54:04 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:54:04.217814 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:54:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:54:10.217467 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc 
error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:54:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:54:10.218227 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:54:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:54:10.218609 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:54:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:54:10.218733 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:54:12 ip-10-0-136-68 
kubenswrapper[2199]: I0223 19:54:12.216862 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 19:54:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:54:12.217299806Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=0fb47acb-f60c-4421-bf19-fdc527f44f0f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:54:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:54:12.217370503Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:54:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:54:12.223103957Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:116b0b24eb15d0d6d7e856c9a45c4d0f63d38623f5bebcc74619480cafccbabd UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/8e6a5cbf-98a1-4dfc-b9c8-ffab02c71d41 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:54:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:54:12.223130400Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:54:13 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:54:13.216367 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:54:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:54:13.216722423Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=cb05f234-e424-4674-8f7d-b147b0fecbc6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:54:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:54:13.216787158Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:54:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:54:13.222036953Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:a47d07f3448b8a6421a6a875e9934e93d6feac109f17466053e3809303d1dc3e UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/809e73fa-e449-4496-9df6-6bc3381f2647 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:54:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:54:13.222063365Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:54:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:54:19.217027 2199 scope.go:115] "RemoveContainer" containerID="2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc" Feb 23 19:54:19 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:54:19.217478 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:54:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 
19:54:26.291922 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:54:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:54:26.292134 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:54:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:54:26.292408 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:54:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:54:26.292443 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" 
podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:54:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:54:30.233803633Z" level=info msg="NetworkStart: stopping network for sandbox e0ec42adce658ec8d2edb58eed8941fac57c481141126b2264b51bd8d9af3506" id=4187fd47-3ded-4cc3-a7fd-e142dd7bfde0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:54:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:54:30.233930179Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:e0ec42adce658ec8d2edb58eed8941fac57c481141126b2264b51bd8d9af3506 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/112bc7f4-819e-4362-8c74-ecb01e1f1ddd Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:54:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:54:30.233958963Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:54:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:54:30.233966851Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:54:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:54:30.233975557Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:54:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:54:32.247210911Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(249daeed2d12310c733e5bdcb12aa6d4e0624da1bd28b6168006c62422396519): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: 
[openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d07b10d4-8803-4e08-b65b-95210d9ccd72 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:54:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:54:32.247291374Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 249daeed2d12310c733e5bdcb12aa6d4e0624da1bd28b6168006c62422396519" id=d07b10d4-8803-4e08-b65b-95210d9ccd72 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:54:32 ip-10-0-136-68 systemd[1]: run-utsns-cb3bb782\x2d92ad\x2d47e6\x2db5fa\x2d368aa4236629.mount: Deactivated successfully. Feb 23 19:54:32 ip-10-0-136-68 systemd[1]: run-ipcns-cb3bb782\x2d92ad\x2d47e6\x2db5fa\x2d368aa4236629.mount: Deactivated successfully. Feb 23 19:54:32 ip-10-0-136-68 systemd[1]: run-netns-cb3bb782\x2d92ad\x2d47e6\x2db5fa\x2d368aa4236629.mount: Deactivated successfully. 
Feb 23 19:54:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:54:32.272333588Z" level=info msg="runSandbox: deleting pod ID 249daeed2d12310c733e5bdcb12aa6d4e0624da1bd28b6168006c62422396519 from idIndex" id=d07b10d4-8803-4e08-b65b-95210d9ccd72 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:54:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:54:32.272568944Z" level=info msg="runSandbox: removing pod sandbox 249daeed2d12310c733e5bdcb12aa6d4e0624da1bd28b6168006c62422396519" id=d07b10d4-8803-4e08-b65b-95210d9ccd72 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:54:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:54:32.272610143Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 249daeed2d12310c733e5bdcb12aa6d4e0624da1bd28b6168006c62422396519" id=d07b10d4-8803-4e08-b65b-95210d9ccd72 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:54:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:54:32.272632690Z" level=info msg="runSandbox: unmounting shmPath for sandbox 249daeed2d12310c733e5bdcb12aa6d4e0624da1bd28b6168006c62422396519" id=d07b10d4-8803-4e08-b65b-95210d9ccd72 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:54:32 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-249daeed2d12310c733e5bdcb12aa6d4e0624da1bd28b6168006c62422396519-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:54:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:54:32.280300937Z" level=info msg="runSandbox: removing pod sandbox from storage: 249daeed2d12310c733e5bdcb12aa6d4e0624da1bd28b6168006c62422396519" id=d07b10d4-8803-4e08-b65b-95210d9ccd72 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:54:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:54:32.281909584Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=d07b10d4-8803-4e08-b65b-95210d9ccd72 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:54:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:54:32.281941501Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=d07b10d4-8803-4e08-b65b-95210d9ccd72 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:54:32 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:54:32.282181 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(249daeed2d12310c733e5bdcb12aa6d4e0624da1bd28b6168006c62422396519): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:54:32 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:54:32.282236 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(249daeed2d12310c733e5bdcb12aa6d4e0624da1bd28b6168006c62422396519): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:54:32 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:54:32.282321 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(249daeed2d12310c733e5bdcb12aa6d4e0624da1bd28b6168006c62422396519): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:54:32 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:54:32.282404 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(249daeed2d12310c733e5bdcb12aa6d4e0624da1bd28b6168006c62422396519): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 19:54:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:54:34.216992 2199 scope.go:115] "RemoveContainer" containerID="2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc" Feb 23 19:54:34 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:54:34.217615 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:54:45 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:54:45.216536 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:54:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:54:45.216954447Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=b16ac286-fcdf-453d-b222-102f7e66b861 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:54:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:54:45.217021728Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:54:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:54:45.222192741Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:5c69ec37f5911e50e82eab74e69054cd92eb6e1ed5606936ba288d649b5f775b UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/98302f4d-e731-48ef-9787-6b9bb3744c11 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:54:45 ip-10-0-136-68 
crio[2158]: time="2023-02-23 19:54:45.222219257Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:54:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:54:48.234989479Z" level=info msg="NetworkStart: stopping network for sandbox 0f31463bf5c61743ad0acfabc4f7df99679500426b4972e68b2e1484f290ce1c" id=99ba1f63-3c15-4c08-88e6-5d46ea1d90ce name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:54:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:54:48.235103430Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:0f31463bf5c61743ad0acfabc4f7df99679500426b4972e68b2e1484f290ce1c UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/117995eb-07a8-42c7-9e72-65c86349332a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:54:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:54:48.235130375Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:54:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:54:48.235137881Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:54:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:54:48.235144119Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:54:49 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:54:49.216634 2199 scope.go:115] "RemoveContainer" containerID="2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc" Feb 23 19:54:49 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:54:49.217029 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver 
pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:54:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:54:56.291853 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:54:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:54:56.292148 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:54:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:54:56.292373 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:54:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:54:56.292403 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: 
checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:54:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:54:57.235268278Z" level=info msg="NetworkStart: stopping network for sandbox 116b0b24eb15d0d6d7e856c9a45c4d0f63d38623f5bebcc74619480cafccbabd" id=0fb47acb-f60c-4421-bf19-fdc527f44f0f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:54:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:54:57.235405919Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:116b0b24eb15d0d6d7e856c9a45c4d0f63d38623f5bebcc74619480cafccbabd UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/8e6a5cbf-98a1-4dfc-b9c8-ffab02c71d41 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:54:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:54:57.235445367Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:54:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:54:57.235458376Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:54:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:54:57.235470566Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:54:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:54:58.233884233Z" level=info msg="NetworkStart: stopping network for sandbox a47d07f3448b8a6421a6a875e9934e93d6feac109f17466053e3809303d1dc3e" id=cb05f234-e424-4674-8f7d-b147b0fecbc6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:54:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:54:58.234000842Z" level=info msg="Got 
pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:a47d07f3448b8a6421a6a875e9934e93d6feac109f17466053e3809303d1dc3e UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/809e73fa-e449-4496-9df6-6bc3381f2647 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:54:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:54:58.234028243Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:54:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:54:58.234035549Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:54:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:54:58.234042399Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:55:01 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:55:01.217380 2199 scope.go:115] "RemoveContainer" containerID="2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc" Feb 23 19:55:01 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:55:01.217791 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:55:12 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:55:12.217341 2199 scope.go:115] "RemoveContainer" containerID="2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc" Feb 23 19:55:12 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:55:12.217930 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:55:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:15.243945929Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(e0ec42adce658ec8d2edb58eed8941fac57c481141126b2264b51bd8d9af3506): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=4187fd47-3ded-4cc3-a7fd-e142dd7bfde0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:55:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:15.243997851Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e0ec42adce658ec8d2edb58eed8941fac57c481141126b2264b51bd8d9af3506" id=4187fd47-3ded-4cc3-a7fd-e142dd7bfde0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:55:15 ip-10-0-136-68 systemd[1]: run-utsns-112bc7f4\x2d819e\x2d4362\x2d8c74\x2decb01e1f1ddd.mount: Deactivated successfully. Feb 23 19:55:15 ip-10-0-136-68 systemd[1]: run-ipcns-112bc7f4\x2d819e\x2d4362\x2d8c74\x2decb01e1f1ddd.mount: Deactivated successfully. Feb 23 19:55:15 ip-10-0-136-68 systemd[1]: run-netns-112bc7f4\x2d819e\x2d4362\x2d8c74\x2decb01e1f1ddd.mount: Deactivated successfully. 
Feb 23 19:55:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:15.264336026Z" level=info msg="runSandbox: deleting pod ID e0ec42adce658ec8d2edb58eed8941fac57c481141126b2264b51bd8d9af3506 from idIndex" id=4187fd47-3ded-4cc3-a7fd-e142dd7bfde0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:55:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:15.264378991Z" level=info msg="runSandbox: removing pod sandbox e0ec42adce658ec8d2edb58eed8941fac57c481141126b2264b51bd8d9af3506" id=4187fd47-3ded-4cc3-a7fd-e142dd7bfde0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:55:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:15.264426538Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e0ec42adce658ec8d2edb58eed8941fac57c481141126b2264b51bd8d9af3506" id=4187fd47-3ded-4cc3-a7fd-e142dd7bfde0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:55:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:15.264440212Z" level=info msg="runSandbox: unmounting shmPath for sandbox e0ec42adce658ec8d2edb58eed8941fac57c481141126b2264b51bd8d9af3506" id=4187fd47-3ded-4cc3-a7fd-e142dd7bfde0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:55:15 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-e0ec42adce658ec8d2edb58eed8941fac57c481141126b2264b51bd8d9af3506-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:55:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:15.275328907Z" level=info msg="runSandbox: removing pod sandbox from storage: e0ec42adce658ec8d2edb58eed8941fac57c481141126b2264b51bd8d9af3506" id=4187fd47-3ded-4cc3-a7fd-e142dd7bfde0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:55:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:15.276887299Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=4187fd47-3ded-4cc3-a7fd-e142dd7bfde0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:55:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:15.276918864Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=4187fd47-3ded-4cc3-a7fd-e142dd7bfde0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:55:15 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:55:15.277152 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(e0ec42adce658ec8d2edb58eed8941fac57c481141126b2264b51bd8d9af3506): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:55:15 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:55:15.277239 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(e0ec42adce658ec8d2edb58eed8941fac57c481141126b2264b51bd8d9af3506): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:55:15 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:55:15.277323 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(e0ec42adce658ec8d2edb58eed8941fac57c481141126b2264b51bd8d9af3506): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:55:15 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:55:15.277402 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(e0ec42adce658ec8d2edb58eed8941fac57c481141126b2264b51bd8d9af3506): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 19:55:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:20.252171086Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3" id=b2c9e1bd-dcb0-4b64-885b-e1cb160dec53 name=/runtime.v1.ImageService/ImageStatus Feb 23 19:55:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:20.252401103Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:51cb7555d0a960fc0daae74a5b5c47c19fd1ae7bd31e2616ebc88f696f511e4d,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3],Size_:351828271,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=b2c9e1bd-dcb0-4b64-885b-e1cb160dec53 name=/runtime.v1.ImageService/ImageStatus Feb 23 19:55:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:55:24.217547 2199 scope.go:115] "RemoveContainer" containerID="2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc" Feb 23 19:55:24 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:55:24.218141 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 19:55:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:55:26.292032 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:55:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:55:26.292398 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:55:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:55:26.292637 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:55:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:55:26.292682 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:55:29 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:55:29.216837 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:55:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:29.217304003Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=0ad043e6-7672-4248-8290-730023e28a2c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:55:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:29.217371651Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:55:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:29.222680944Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:afa6627e30bcf2ebd5336f6b516aca1ccfaa7acfc3b164c3779130ffae0f07b6 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/ab2bdf87-e765-4bcb-b9cb-8e063ae24613 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:55:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:29.222713262Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:55:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:30.234400536Z" level=info msg="NetworkStart: stopping network for sandbox 5c69ec37f5911e50e82eab74e69054cd92eb6e1ed5606936ba288d649b5f775b" id=b16ac286-fcdf-453d-b222-102f7e66b861 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:55:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:30.234516716Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:5c69ec37f5911e50e82eab74e69054cd92eb6e1ed5606936ba288d649b5f775b UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/98302f4d-e731-48ef-9787-6b9bb3744c11 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:55:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 
19:55:30.234553227Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:55:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:30.234564491Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:55:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:30.234575625Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:55:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:33.244216539Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(0f31463bf5c61743ad0acfabc4f7df99679500426b4972e68b2e1484f290ce1c): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=99ba1f63-3c15-4c08-88e6-5d46ea1d90ce name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:55:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:33.244291197Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 0f31463bf5c61743ad0acfabc4f7df99679500426b4972e68b2e1484f290ce1c" id=99ba1f63-3c15-4c08-88e6-5d46ea1d90ce name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:55:33 ip-10-0-136-68 systemd[1]: run-utsns-117995eb\x2d07a8\x2d42c7\x2d9e72\x2d65c86349332a.mount: Deactivated successfully. Feb 23 19:55:33 ip-10-0-136-68 systemd[1]: run-ipcns-117995eb\x2d07a8\x2d42c7\x2d9e72\x2d65c86349332a.mount: Deactivated successfully. 
Feb 23 19:55:33 ip-10-0-136-68 systemd[1]: run-netns-117995eb\x2d07a8\x2d42c7\x2d9e72\x2d65c86349332a.mount: Deactivated successfully. Feb 23 19:55:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:33.278374642Z" level=info msg="runSandbox: deleting pod ID 0f31463bf5c61743ad0acfabc4f7df99679500426b4972e68b2e1484f290ce1c from idIndex" id=99ba1f63-3c15-4c08-88e6-5d46ea1d90ce name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:55:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:33.278445397Z" level=info msg="runSandbox: removing pod sandbox 0f31463bf5c61743ad0acfabc4f7df99679500426b4972e68b2e1484f290ce1c" id=99ba1f63-3c15-4c08-88e6-5d46ea1d90ce name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:55:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:33.278491279Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 0f31463bf5c61743ad0acfabc4f7df99679500426b4972e68b2e1484f290ce1c" id=99ba1f63-3c15-4c08-88e6-5d46ea1d90ce name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:55:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:33.278511996Z" level=info msg="runSandbox: unmounting shmPath for sandbox 0f31463bf5c61743ad0acfabc4f7df99679500426b4972e68b2e1484f290ce1c" id=99ba1f63-3c15-4c08-88e6-5d46ea1d90ce name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:55:33 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-0f31463bf5c61743ad0acfabc4f7df99679500426b4972e68b2e1484f290ce1c-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:55:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:33.284335932Z" level=info msg="runSandbox: removing pod sandbox from storage: 0f31463bf5c61743ad0acfabc4f7df99679500426b4972e68b2e1484f290ce1c" id=99ba1f63-3c15-4c08-88e6-5d46ea1d90ce name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:55:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:33.285979444Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=99ba1f63-3c15-4c08-88e6-5d46ea1d90ce name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:55:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:33.286009176Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=99ba1f63-3c15-4c08-88e6-5d46ea1d90ce name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:55:33 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:55:33.286338 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(0f31463bf5c61743ad0acfabc4f7df99679500426b4972e68b2e1484f290ce1c): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 19:55:33 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:55:33.286411 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(0f31463bf5c61743ad0acfabc4f7df99679500426b4972e68b2e1484f290ce1c): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz"
Feb 23 19:55:33 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:55:33.286448 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(0f31463bf5c61743ad0acfabc4f7df99679500426b4972e68b2e1484f290ce1c): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz"
Feb 23 19:55:33 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:55:33.286529 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(0f31463bf5c61743ad0acfabc4f7df99679500426b4972e68b2e1484f290ce1c): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7
Feb 23 19:55:36 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:55:36.216824 2199 scope.go:115] "RemoveContainer" containerID="2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc"
Feb 23 19:55:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:55:36.217464 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 19:55:39 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:55:39.217659 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:55:39 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:55:39.217945 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:55:39 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:55:39.218169 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:55:39 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:55:39.218194 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 19:55:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:42.244954739Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(116b0b24eb15d0d6d7e856c9a45c4d0f63d38623f5bebcc74619480cafccbabd): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=0fb47acb-f60c-4421-bf19-fdc527f44f0f name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:55:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:42.245009031Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 116b0b24eb15d0d6d7e856c9a45c4d0f63d38623f5bebcc74619480cafccbabd" id=0fb47acb-f60c-4421-bf19-fdc527f44f0f name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:55:42 ip-10-0-136-68 systemd[1]: run-utsns-8e6a5cbf\x2d98a1\x2d4dfc\x2db9c8\x2dffab02c71d41.mount: Deactivated successfully.
Feb 23 19:55:42 ip-10-0-136-68 systemd[1]: run-ipcns-8e6a5cbf\x2d98a1\x2d4dfc\x2db9c8\x2dffab02c71d41.mount: Deactivated successfully.
Feb 23 19:55:42 ip-10-0-136-68 systemd[1]: run-netns-8e6a5cbf\x2d98a1\x2d4dfc\x2db9c8\x2dffab02c71d41.mount: Deactivated successfully.
Feb 23 19:55:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:42.269326239Z" level=info msg="runSandbox: deleting pod ID 116b0b24eb15d0d6d7e856c9a45c4d0f63d38623f5bebcc74619480cafccbabd from idIndex" id=0fb47acb-f60c-4421-bf19-fdc527f44f0f name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:55:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:42.269363411Z" level=info msg="runSandbox: removing pod sandbox 116b0b24eb15d0d6d7e856c9a45c4d0f63d38623f5bebcc74619480cafccbabd" id=0fb47acb-f60c-4421-bf19-fdc527f44f0f name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:55:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:42.269390253Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 116b0b24eb15d0d6d7e856c9a45c4d0f63d38623f5bebcc74619480cafccbabd" id=0fb47acb-f60c-4421-bf19-fdc527f44f0f name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:55:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:42.269408188Z" level=info msg="runSandbox: unmounting shmPath for sandbox 116b0b24eb15d0d6d7e856c9a45c4d0f63d38623f5bebcc74619480cafccbabd" id=0fb47acb-f60c-4421-bf19-fdc527f44f0f name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:55:42 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-116b0b24eb15d0d6d7e856c9a45c4d0f63d38623f5bebcc74619480cafccbabd-userdata-shm.mount: Deactivated successfully.
Feb 23 19:55:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:42.274322188Z" level=info msg="runSandbox: removing pod sandbox from storage: 116b0b24eb15d0d6d7e856c9a45c4d0f63d38623f5bebcc74619480cafccbabd" id=0fb47acb-f60c-4421-bf19-fdc527f44f0f name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:55:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:42.275876803Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=0fb47acb-f60c-4421-bf19-fdc527f44f0f name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:55:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:42.275911885Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=0fb47acb-f60c-4421-bf19-fdc527f44f0f name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:55:42 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:55:42.276120 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(116b0b24eb15d0d6d7e856c9a45c4d0f63d38623f5bebcc74619480cafccbabd): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 19:55:42 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:55:42.276182 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(116b0b24eb15d0d6d7e856c9a45c4d0f63d38623f5bebcc74619480cafccbabd): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4"
Feb 23 19:55:42 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:55:42.276218 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(116b0b24eb15d0d6d7e856c9a45c4d0f63d38623f5bebcc74619480cafccbabd): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4"
Feb 23 19:55:42 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:55:42.276321 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(116b0b24eb15d0d6d7e856c9a45c4d0f63d38623f5bebcc74619480cafccbabd): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f
Feb 23 19:55:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:43.243111390Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(a47d07f3448b8a6421a6a875e9934e93d6feac109f17466053e3809303d1dc3e): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=cb05f234-e424-4674-8f7d-b147b0fecbc6 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:55:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:43.243175326Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a47d07f3448b8a6421a6a875e9934e93d6feac109f17466053e3809303d1dc3e" id=cb05f234-e424-4674-8f7d-b147b0fecbc6 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:55:43 ip-10-0-136-68 systemd[1]: run-utsns-809e73fa\x2de449\x2d4496\x2d9df6\x2d6bc3381f2647.mount: Deactivated successfully.
Feb 23 19:55:43 ip-10-0-136-68 systemd[1]: run-ipcns-809e73fa\x2de449\x2d4496\x2d9df6\x2d6bc3381f2647.mount: Deactivated successfully.
Feb 23 19:55:43 ip-10-0-136-68 systemd[1]: run-netns-809e73fa\x2de449\x2d4496\x2d9df6\x2d6bc3381f2647.mount: Deactivated successfully.
Feb 23 19:55:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:43.262324949Z" level=info msg="runSandbox: deleting pod ID a47d07f3448b8a6421a6a875e9934e93d6feac109f17466053e3809303d1dc3e from idIndex" id=cb05f234-e424-4674-8f7d-b147b0fecbc6 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:55:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:43.262362247Z" level=info msg="runSandbox: removing pod sandbox a47d07f3448b8a6421a6a875e9934e93d6feac109f17466053e3809303d1dc3e" id=cb05f234-e424-4674-8f7d-b147b0fecbc6 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:55:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:43.262392069Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a47d07f3448b8a6421a6a875e9934e93d6feac109f17466053e3809303d1dc3e" id=cb05f234-e424-4674-8f7d-b147b0fecbc6 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:55:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:43.262404187Z" level=info msg="runSandbox: unmounting shmPath for sandbox a47d07f3448b8a6421a6a875e9934e93d6feac109f17466053e3809303d1dc3e" id=cb05f234-e424-4674-8f7d-b147b0fecbc6 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:55:43 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-a47d07f3448b8a6421a6a875e9934e93d6feac109f17466053e3809303d1dc3e-userdata-shm.mount: Deactivated successfully.
Feb 23 19:55:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:43.269324222Z" level=info msg="runSandbox: removing pod sandbox from storage: a47d07f3448b8a6421a6a875e9934e93d6feac109f17466053e3809303d1dc3e" id=cb05f234-e424-4674-8f7d-b147b0fecbc6 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:55:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:43.270975530Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=cb05f234-e424-4674-8f7d-b147b0fecbc6 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:55:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:43.271004909Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=cb05f234-e424-4674-8f7d-b147b0fecbc6 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:55:43 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:55:43.271208 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(a47d07f3448b8a6421a6a875e9934e93d6feac109f17466053e3809303d1dc3e): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 19:55:43 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:55:43.271295 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(a47d07f3448b8a6421a6a875e9934e93d6feac109f17466053e3809303d1dc3e): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 19:55:43 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:55:43.271326 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(a47d07f3448b8a6421a6a875e9934e93d6feac109f17466053e3809303d1dc3e): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 19:55:43 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:55:43.271384 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(a47d07f3448b8a6421a6a875e9934e93d6feac109f17466053e3809303d1dc3e): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77
Feb 23 19:55:46 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:55:46.216642 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz"
Feb 23 19:55:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:46.217108231Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=c067c9c7-d31a-44e5-b035-01e8e671c7e3 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:55:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:46.217173122Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 19:55:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:46.223649916Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:3dac00c15137619fa2c56be9b0a3a1b264995d277b95195f698c53464386df61 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/1783dab3-8b8c-4150-ae8c-c67c534855f2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:55:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:46.223684379Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:55:47 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:55:47.216978 2199 scope.go:115] "RemoveContainer" containerID="2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc"
Feb 23 19:55:47 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:55:47.217398 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 19:55:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:55:54.216973 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-657v4"
Feb 23 19:55:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:54.217460617Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=4b9f1e43-a8a1-4545-b230-3c20e51d1526 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:55:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:54.217530829Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 19:55:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:54.223077346Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:8ae23a8c533bbcee5aa4d0fa18e144ca010f29213883be86e5f68d5be423ac8d UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/2e3f2600-1d4a-4586-a789-fc7c030ae615 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:55:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:54.223104794Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:55:55 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:55:55.216472 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 19:55:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:55.216875639Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=b29ab78b-0e69-4c3d-9144-0a93f924785d name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:55:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:55.216946853Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 19:55:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:55.222317647Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:c699079469d766cef89d8ddc225bfd803da5cdc1d5ddfca81c894ace73c21e66 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/9cf3dc2e-ed7e-4dc6-900f-dc2b5bb4b329 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:55:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:55:55.222343100Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:55:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:55:56.292453 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:55:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:55:56.292733 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:55:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:55:56.292956 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:55:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:55:56.293013 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 19:56:02 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:56:02.217063 2199 scope.go:115] "RemoveContainer" containerID="2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc"
Feb 23 19:56:02 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:56:02.217654 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 19:56:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:56:14.217391 2199 scope.go:115] "RemoveContainer" containerID="2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc"
Feb 23 19:56:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:56:14.218270183Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=e9e9412c-8c02-489e-82e0-99d114d1c0a7 name=/runtime.v1.ImageService/ImageStatus
Feb 23 19:56:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:56:14.218458096Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=e9e9412c-8c02-489e-82e0-99d114d1c0a7 name=/runtime.v1.ImageService/ImageStatus
Feb 23 19:56:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:56:14.219108416Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=8ce7aa51-08ee-4794-99ba-8d778d447bb9 name=/runtime.v1.ImageService/ImageStatus
Feb 23 19:56:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:56:14.219303189Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=8ce7aa51-08ee-4794-99ba-8d778d447bb9 name=/runtime.v1.ImageService/ImageStatus
Feb 23 19:56:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:56:14.220105897Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=b35ce28c-435a-4152-b350-be0b50676f68 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 19:56:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:56:14.220208639Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 19:56:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:56:14.235134445Z" level=info msg="NetworkStart: stopping network for sandbox afa6627e30bcf2ebd5336f6b516aca1ccfaa7acfc3b164c3779130ffae0f07b6" id=0ad043e6-7672-4248-8290-730023e28a2c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:56:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:56:14.235239701Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:afa6627e30bcf2ebd5336f6b516aca1ccfaa7acfc3b164c3779130ffae0f07b6 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/ab2bdf87-e765-4bcb-b9cb-8e063ae24613 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:56:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:56:14.235458150Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 19:56:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:56:14.235470860Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 19:56:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:56:14.235481295Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:56:14 ip-10-0-136-68 systemd[1]: Started crio-conmon-9f5e832f8b09b61288918dd84f7ea7a7f570680ed1b95be2f2f107ec4afcb068.scope.
Feb 23 19:56:14 ip-10-0-136-68 systemd[1]: Started libcontainer container 9f5e832f8b09b61288918dd84f7ea7a7f570680ed1b95be2f2f107ec4afcb068.
Feb 23 19:56:14 ip-10-0-136-68 conmon[19882]: conmon 9f5e832f8b09b6128891 : Failed to write to cgroup.event_control Operation not supported
Feb 23 19:56:14 ip-10-0-136-68 systemd[1]: crio-conmon-9f5e832f8b09b61288918dd84f7ea7a7f570680ed1b95be2f2f107ec4afcb068.scope: Deactivated successfully.
Feb 23 19:56:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:56:14.366347315Z" level=info msg="Created container 9f5e832f8b09b61288918dd84f7ea7a7f570680ed1b95be2f2f107ec4afcb068: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=b35ce28c-435a-4152-b350-be0b50676f68 name=/runtime.v1.RuntimeService/CreateContainer
Feb 23 19:56:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:56:14.366932983Z" level=info msg="Starting container: 9f5e832f8b09b61288918dd84f7ea7a7f570680ed1b95be2f2f107ec4afcb068" id=539fee04-4eb8-4a8e-b1c7-dcf7563baba7 name=/runtime.v1.RuntimeService/StartContainer
Feb 23 19:56:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:56:14.374479339Z" level=info msg="Started container" PID=19893 containerID=9f5e832f8b09b61288918dd84f7ea7a7f570680ed1b95be2f2f107ec4afcb068 description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver id=539fee04-4eb8-4a8e-b1c7-dcf7563baba7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4ac357af36e7dd4792f93249425e93fd84aed45840e9fbb3d0ae6c0af964798
Feb 23 19:56:14 ip-10-0-136-68 systemd[1]: crio-9f5e832f8b09b61288918dd84f7ea7a7f570680ed1b95be2f2f107ec4afcb068.scope:
Deactivated successfully. Feb 23 19:56:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:56:15.244434584Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(5c69ec37f5911e50e82eab74e69054cd92eb6e1ed5606936ba288d649b5f775b): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b16ac286-fcdf-453d-b222-102f7e66b861 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:56:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:56:15.244492271Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 5c69ec37f5911e50e82eab74e69054cd92eb6e1ed5606936ba288d649b5f775b" id=b16ac286-fcdf-453d-b222-102f7e66b861 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:56:15 ip-10-0-136-68 systemd[1]: run-utsns-98302f4d\x2de731\x2d48ef\x2d9787\x2d6b9bb3744c11.mount: Deactivated successfully. Feb 23 19:56:15 ip-10-0-136-68 systemd[1]: run-ipcns-98302f4d\x2de731\x2d48ef\x2d9787\x2d6b9bb3744c11.mount: Deactivated successfully. Feb 23 19:56:15 ip-10-0-136-68 systemd[1]: run-netns-98302f4d\x2de731\x2d48ef\x2d9787\x2d6b9bb3744c11.mount: Deactivated successfully. 
Feb 23 19:56:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:56:15.273347031Z" level=info msg="runSandbox: deleting pod ID 5c69ec37f5911e50e82eab74e69054cd92eb6e1ed5606936ba288d649b5f775b from idIndex" id=b16ac286-fcdf-453d-b222-102f7e66b861 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:56:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:56:15.273392241Z" level=info msg="runSandbox: removing pod sandbox 5c69ec37f5911e50e82eab74e69054cd92eb6e1ed5606936ba288d649b5f775b" id=b16ac286-fcdf-453d-b222-102f7e66b861 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:56:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:56:15.273437770Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 5c69ec37f5911e50e82eab74e69054cd92eb6e1ed5606936ba288d649b5f775b" id=b16ac286-fcdf-453d-b222-102f7e66b861 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:56:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:56:15.273454808Z" level=info msg="runSandbox: unmounting shmPath for sandbox 5c69ec37f5911e50e82eab74e69054cd92eb6e1ed5606936ba288d649b5f775b" id=b16ac286-fcdf-453d-b222-102f7e66b861 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:56:15 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-5c69ec37f5911e50e82eab74e69054cd92eb6e1ed5606936ba288d649b5f775b-userdata-shm.mount: Deactivated successfully.
Feb 23 19:56:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:56:15.280320147Z" level=info msg="runSandbox: removing pod sandbox from storage: 5c69ec37f5911e50e82eab74e69054cd92eb6e1ed5606936ba288d649b5f775b" id=b16ac286-fcdf-453d-b222-102f7e66b861 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:56:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:56:15.281894158Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=b16ac286-fcdf-453d-b222-102f7e66b861 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:56:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:56:15.281924718Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=b16ac286-fcdf-453d-b222-102f7e66b861 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:56:15 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:56:15.282142 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(5c69ec37f5911e50e82eab74e69054cd92eb6e1ed5606936ba288d649b5f775b): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 19:56:15 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:56:15.282355 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(5c69ec37f5911e50e82eab74e69054cd92eb6e1ed5606936ba288d649b5f775b): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 19:56:15 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:56:15.282393 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(5c69ec37f5911e50e82eab74e69054cd92eb6e1ed5606936ba288d649b5f775b): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 19:56:15 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:56:15.282462 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(5c69ec37f5911e50e82eab74e69054cd92eb6e1ed5606936ba288d649b5f775b): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1
Feb 23 19:56:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:56:18.233170885Z" level=warning msg="Failed to find container exit file for 2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc: timed out waiting for the condition" id=5e1f350d-a6b6-424a-9b31-2eca3c096baa name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 19:56:18 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:56:18.234095 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerStarted Data:9f5e832f8b09b61288918dd84f7ea7a7f570680ed1b95be2f2f107ec4afcb068}
Feb 23 19:56:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:56:24.872976 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 19:56:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:56:24.873043 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 19:56:26 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:56:26.216592 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 19:56:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:56:26.217021752Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=22498906-7361-426e-96cd-8a5bac6e9340 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:56:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:56:26.217088958Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 19:56:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:56:26.223391118Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:859da5a577e94fe3f8874e31e38ce669d692e59c3eb236a998e5bdd99feb0622 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/41fdc6bd-395f-483c-bd2c-434753c138c2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:56:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:56:26.223416582Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:56:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:56:26.292193 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:56:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:56:26.292446 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:56:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:56:26.292643 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:56:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:56:26.292674 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 19:56:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:56:31.235869675Z" level=info msg="NetworkStart: stopping network for sandbox 3dac00c15137619fa2c56be9b0a3a1b264995d277b95195f698c53464386df61" id=c067c9c7-d31a-44e5-b035-01e8e671c7e3 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:56:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:56:31.235990855Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:3dac00c15137619fa2c56be9b0a3a1b264995d277b95195f698c53464386df61 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/1783dab3-8b8c-4150-ae8c-c67c534855f2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:56:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:56:31.236019249Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 19:56:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:56:31.236029254Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 19:56:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:56:31.236036885Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:56:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:56:34.872607 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 19:56:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:56:34.872683 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 19:56:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:56:39.234532822Z" level=info msg="NetworkStart: stopping network for sandbox 8ae23a8c533bbcee5aa4d0fa18e144ca010f29213883be86e5f68d5be423ac8d" id=4b9f1e43-a8a1-4545-b230-3c20e51d1526 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:56:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:56:39.234653725Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:8ae23a8c533bbcee5aa4d0fa18e144ca010f29213883be86e5f68d5be423ac8d UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/2e3f2600-1d4a-4586-a789-fc7c030ae615 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:56:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:56:39.234682341Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 19:56:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:56:39.234692809Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 19:56:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:56:39.234702535Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:56:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:56:40.233720677Z" level=info msg="NetworkStart: stopping network for sandbox c699079469d766cef89d8ddc225bfd803da5cdc1d5ddfca81c894ace73c21e66" id=b29ab78b-0e69-4c3d-9144-0a93f924785d name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:56:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:56:40.233835936Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:c699079469d766cef89d8ddc225bfd803da5cdc1d5ddfca81c894ace73c21e66 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/9cf3dc2e-ed7e-4dc6-900f-dc2b5bb4b329 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:56:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:56:40.233865757Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 19:56:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:56:40.233876716Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 19:56:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:56:40.233885770Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:56:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:56:44.872004 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 19:56:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:56:44.872068 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 19:56:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:56:54.872052 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 19:56:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:56:54.872116 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 19:56:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:56:56.292490 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:56:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:56:56.292752 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:56:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:56:56.292969 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:56:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:56:56.293035 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 19:56:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:56:59.246637276Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(afa6627e30bcf2ebd5336f6b516aca1ccfaa7acfc3b164c3779130ffae0f07b6): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=0ad043e6-7672-4248-8290-730023e28a2c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:56:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:56:59.246691682Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox afa6627e30bcf2ebd5336f6b516aca1ccfaa7acfc3b164c3779130ffae0f07b6" id=0ad043e6-7672-4248-8290-730023e28a2c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:56:59 ip-10-0-136-68 systemd[1]: run-utsns-ab2bdf87\x2de765\x2d4bcb\x2db9cb\x2d8e063ae24613.mount: Deactivated successfully.
Feb 23 19:56:59 ip-10-0-136-68 systemd[1]: run-ipcns-ab2bdf87\x2de765\x2d4bcb\x2db9cb\x2d8e063ae24613.mount: Deactivated successfully.
Feb 23 19:56:59 ip-10-0-136-68 systemd[1]: run-netns-ab2bdf87\x2de765\x2d4bcb\x2db9cb\x2d8e063ae24613.mount: Deactivated successfully.
Feb 23 19:56:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:56:59.271334466Z" level=info msg="runSandbox: deleting pod ID afa6627e30bcf2ebd5336f6b516aca1ccfaa7acfc3b164c3779130ffae0f07b6 from idIndex" id=0ad043e6-7672-4248-8290-730023e28a2c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:56:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:56:59.271377191Z" level=info msg="runSandbox: removing pod sandbox afa6627e30bcf2ebd5336f6b516aca1ccfaa7acfc3b164c3779130ffae0f07b6" id=0ad043e6-7672-4248-8290-730023e28a2c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:56:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:56:59.271423881Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox afa6627e30bcf2ebd5336f6b516aca1ccfaa7acfc3b164c3779130ffae0f07b6" id=0ad043e6-7672-4248-8290-730023e28a2c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:56:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:56:59.271441277Z" level=info msg="runSandbox: unmounting shmPath for sandbox afa6627e30bcf2ebd5336f6b516aca1ccfaa7acfc3b164c3779130ffae0f07b6" id=0ad043e6-7672-4248-8290-730023e28a2c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:56:59 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-afa6627e30bcf2ebd5336f6b516aca1ccfaa7acfc3b164c3779130ffae0f07b6-userdata-shm.mount: Deactivated successfully.
Feb 23 19:56:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:56:59.283307264Z" level=info msg="runSandbox: removing pod sandbox from storage: afa6627e30bcf2ebd5336f6b516aca1ccfaa7acfc3b164c3779130ffae0f07b6" id=0ad043e6-7672-4248-8290-730023e28a2c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:56:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:56:59.284931773Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=0ad043e6-7672-4248-8290-730023e28a2c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:56:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:56:59.284961808Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=0ad043e6-7672-4248-8290-730023e28a2c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:56:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:56:59.285172 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(afa6627e30bcf2ebd5336f6b516aca1ccfaa7acfc3b164c3779130ffae0f07b6): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 19:56:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:56:59.285232 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(afa6627e30bcf2ebd5336f6b516aca1ccfaa7acfc3b164c3779130ffae0f07b6): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk"
Feb 23 19:56:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:56:59.285295 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(afa6627e30bcf2ebd5336f6b516aca1ccfaa7acfc3b164c3779130ffae0f07b6): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk"
Feb 23 19:56:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:56:59.285355 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(afa6627e30bcf2ebd5336f6b516aca1ccfaa7acfc3b164c3779130ffae0f07b6): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687
Feb 23 19:57:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:57:04.872095 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 19:57:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:57:04.872152 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 19:57:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:57:04.872180 2199 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7"
Feb 23 19:57:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:57:04.872741 2199 kuberuntime_manager.go:659] "Message for Container of pod" containerName="csi-driver" containerStatusID={Type:cri-o ID:9f5e832f8b09b61288918dd84f7ea7a7f570680ed1b95be2f2f107ec4afcb068} pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" containerMessage="Container csi-driver failed liveness probe, will be restarted"
Feb 23 19:57:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:57:04.872906 2199 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" containerID="cri-o://9f5e832f8b09b61288918dd84f7ea7a7f570680ed1b95be2f2f107ec4afcb068" gracePeriod=30
Feb 23 19:57:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:04.873137123Z" level=info msg="Stopping container: 9f5e832f8b09b61288918dd84f7ea7a7f570680ed1b95be2f2f107ec4afcb068 (timeout: 30s)" id=bc1efd7d-63b5-40ac-b916-c31563ba4eb9 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 19:57:06 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:57:06.216918 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:57:06 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:57:06.217193 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:57:06 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:57:06.218087 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:57:06 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:57:06.218145 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or
running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:57:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:08.635147205Z" level=warning msg="Failed to find container exit file for 9f5e832f8b09b61288918dd84f7ea7a7f570680ed1b95be2f2f107ec4afcb068: timed out waiting for the condition" id=bc1efd7d-63b5-40ac-b916-c31563ba4eb9 name=/runtime.v1.RuntimeService/StopContainer Feb 23 19:57:08 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-b3388fe6ba339a9b780627eb9d2119b456b161db51efec3ffee2f354075dcf65-merged.mount: Deactivated successfully. Feb 23 19:57:10 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:57:10.217007 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:57:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:10.217452108Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=d8e884cd-44c2-4e6f-84c7-cf062b52b3f9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:57:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:10.217505713Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:57:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:10.223158787Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:2100c96ea66610d78cc37ab93c6d16aa36152691233451c7641d074caec1be7c UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/3909417d-62d8-42d2-97b1-a148b2049bbf Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:57:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 
19:57:10.223301137Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:57:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:11.235091483Z" level=info msg="NetworkStart: stopping network for sandbox 859da5a577e94fe3f8874e31e38ce669d692e59c3eb236a998e5bdd99feb0622" id=22498906-7361-426e-96cd-8a5bac6e9340 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:57:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:11.235212763Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:859da5a577e94fe3f8874e31e38ce669d692e59c3eb236a998e5bdd99feb0622 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/41fdc6bd-395f-483c-bd2c-434753c138c2 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:57:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:11.235240666Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:57:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:11.235283030Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:57:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:11.235291538Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:57:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:12.415034699Z" level=warning msg="Failed to find container exit file for 9f5e832f8b09b61288918dd84f7ea7a7f570680ed1b95be2f2f107ec4afcb068: timed out waiting for the condition" id=bc1efd7d-63b5-40ac-b916-c31563ba4eb9 name=/runtime.v1.RuntimeService/StopContainer Feb 23 19:57:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:12.416953219Z" level=info msg="Stopped container 9f5e832f8b09b61288918dd84f7ea7a7f570680ed1b95be2f2f107ec4afcb068: 
openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=bc1efd7d-63b5-40ac-b916-c31563ba4eb9 name=/runtime.v1.RuntimeService/StopContainer Feb 23 19:57:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:12.417681752Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=9af83097-7233-4d65-9f46-a19b603b4518 name=/runtime.v1.ImageService/ImageStatus Feb 23 19:57:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:12.417841397Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=9af83097-7233-4d65-9f46-a19b603b4518 name=/runtime.v1.ImageService/ImageStatus Feb 23 19:57:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:12.418489212Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=a3d8d100-5b3e-458a-9b36-21fb9c401e24 name=/runtime.v1.ImageService/ImageStatus Feb 23 19:57:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:12.418643673Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=a3d8d100-5b3e-458a-9b36-21fb9c401e24 
name=/runtime.v1.ImageService/ImageStatus Feb 23 19:57:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:12.419299059Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=c08f4350-e60e-49c0-b77e-c14e3e9efbac name=/runtime.v1.RuntimeService/CreateContainer Feb 23 19:57:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:12.419410941Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:57:12 ip-10-0-136-68 systemd[1]: Started crio-conmon-8d328bf96ae109fd8fa57e0e5741495c6b02ab8ffa969733369a42b7cd13ab22.scope. Feb 23 19:57:12 ip-10-0-136-68 systemd[1]: Started libcontainer container 8d328bf96ae109fd8fa57e0e5741495c6b02ab8ffa969733369a42b7cd13ab22. Feb 23 19:57:12 ip-10-0-136-68 conmon[20036]: conmon 8d328bf96ae109fd8fa5 : Failed to write to cgroup.event_control Operation not supported Feb 23 19:57:12 ip-10-0-136-68 systemd[1]: crio-conmon-8d328bf96ae109fd8fa57e0e5741495c6b02ab8ffa969733369a42b7cd13ab22.scope: Deactivated successfully. 
Feb 23 19:57:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:12.539144845Z" level=info msg="Created container 8d328bf96ae109fd8fa57e0e5741495c6b02ab8ffa969733369a42b7cd13ab22: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=c08f4350-e60e-49c0-b77e-c14e3e9efbac name=/runtime.v1.RuntimeService/CreateContainer Feb 23 19:57:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:12.539593876Z" level=info msg="Starting container: 8d328bf96ae109fd8fa57e0e5741495c6b02ab8ffa969733369a42b7cd13ab22" id=8f8b048c-869e-47c4-b2f3-3644ed733e0b name=/runtime.v1.RuntimeService/StartContainer Feb 23 19:57:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:12.546572637Z" level=info msg="Started container" PID=20048 containerID=8d328bf96ae109fd8fa57e0e5741495c6b02ab8ffa969733369a42b7cd13ab22 description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver id=8f8b048c-869e-47c4-b2f3-3644ed733e0b name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4ac357af36e7dd4792f93249425e93fd84aed45840e9fbb3d0ae6c0af964798 Feb 23 19:57:12 ip-10-0-136-68 systemd[1]: crio-8d328bf96ae109fd8fa57e0e5741495c6b02ab8ffa969733369a42b7cd13ab22.scope: Deactivated successfully. 
Feb 23 19:57:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:13.064646285Z" level=warning msg="Failed to find container exit file for 9f5e832f8b09b61288918dd84f7ea7a7f570680ed1b95be2f2f107ec4afcb068: timed out waiting for the condition" id=619fb261-cfa4-4946-8f92-0c9baf0f06f7 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:57:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:16.245748469Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(3dac00c15137619fa2c56be9b0a3a1b264995d277b95195f698c53464386df61): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c067c9c7-d31a-44e5-b035-01e8e671c7e3 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:57:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:16.245801439Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 3dac00c15137619fa2c56be9b0a3a1b264995d277b95195f698c53464386df61" id=c067c9c7-d31a-44e5-b035-01e8e671c7e3 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:57:16 ip-10-0-136-68 systemd[1]: run-utsns-1783dab3\x2d8b8c\x2d4150\x2dae8c\x2dc67c534855f2.mount: Deactivated successfully. Feb 23 19:57:16 ip-10-0-136-68 systemd[1]: run-ipcns-1783dab3\x2d8b8c\x2d4150\x2dae8c\x2dc67c534855f2.mount: Deactivated successfully. Feb 23 19:57:16 ip-10-0-136-68 systemd[1]: run-netns-1783dab3\x2d8b8c\x2d4150\x2dae8c\x2dc67c534855f2.mount: Deactivated successfully. 
Feb 23 19:57:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:16.268325852Z" level=info msg="runSandbox: deleting pod ID 3dac00c15137619fa2c56be9b0a3a1b264995d277b95195f698c53464386df61 from idIndex" id=c067c9c7-d31a-44e5-b035-01e8e671c7e3 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:57:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:16.268365856Z" level=info msg="runSandbox: removing pod sandbox 3dac00c15137619fa2c56be9b0a3a1b264995d277b95195f698c53464386df61" id=c067c9c7-d31a-44e5-b035-01e8e671c7e3 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:57:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:16.268391850Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 3dac00c15137619fa2c56be9b0a3a1b264995d277b95195f698c53464386df61" id=c067c9c7-d31a-44e5-b035-01e8e671c7e3 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:57:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:16.268404236Z" level=info msg="runSandbox: unmounting shmPath for sandbox 3dac00c15137619fa2c56be9b0a3a1b264995d277b95195f698c53464386df61" id=c067c9c7-d31a-44e5-b035-01e8e671c7e3 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:57:16 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-3dac00c15137619fa2c56be9b0a3a1b264995d277b95195f698c53464386df61-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:57:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:16.273324239Z" level=info msg="runSandbox: removing pod sandbox from storage: 3dac00c15137619fa2c56be9b0a3a1b264995d277b95195f698c53464386df61" id=c067c9c7-d31a-44e5-b035-01e8e671c7e3 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:57:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:16.274948661Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=c067c9c7-d31a-44e5-b035-01e8e671c7e3 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:57:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:16.274980853Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=c067c9c7-d31a-44e5-b035-01e8e671c7e3 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:57:16 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:57:16.275185 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(3dac00c15137619fa2c56be9b0a3a1b264995d277b95195f698c53464386df61): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:57:16 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:57:16.275434 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(3dac00c15137619fa2c56be9b0a3a1b264995d277b95195f698c53464386df61): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:57:16 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:57:16.275477 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(3dac00c15137619fa2c56be9b0a3a1b264995d277b95195f698c53464386df61): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:57:16 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:57:16.275556 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(3dac00c15137619fa2c56be9b0a3a1b264995d277b95195f698c53464386df61): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 19:57:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:16.309080530Z" level=warning msg="Failed to find container exit file for 8d328bf96ae109fd8fa57e0e5741495c6b02ab8ffa969733369a42b7cd13ab22: timed out waiting for the condition" id=8f8b048c-869e-47c4-b2f3-3644ed733e0b name=/runtime.v1.RuntimeService/StartContainer Feb 23 19:57:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:16.829936601Z" level=warning msg="Failed to find container exit file for 2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc: timed out waiting for the condition" id=b38eee88-7e17-4d2b-b588-bbf620930b99 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:57:16 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:57:16.830884 2199 generic.go:332] "Generic (PLEG): container finished" podID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerID="9f5e832f8b09b61288918dd84f7ea7a7f570680ed1b95be2f2f107ec4afcb068" exitCode=-1 Feb 23 19:57:16 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:57:16.830915 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerDied Data:9f5e832f8b09b61288918dd84f7ea7a7f570680ed1b95be2f2f107ec4afcb068} Feb 23 19:57:16 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:57:16.830952 2199 scope.go:115] "RemoveContainer" containerID="2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc" Feb 23 19:57:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:20.591290022Z" level=warning msg="Failed to find container exit file for 2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc: timed out waiting for the condition" id=e173c14e-1e5c-4720-94cc-48aa038981e6 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:57:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 
19:57:21.595059517Z" level=warning msg="Failed to find container exit file for 8d328bf96ae109fd8fa57e0e5741495c6b02ab8ffa969733369a42b7cd13ab22: timed out waiting for the condition" id=9023114d-686e-4ef7-b7e6-08a756506f34 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:57:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:24.243437630Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(8ae23a8c533bbcee5aa4d0fa18e144ca010f29213883be86e5f68d5be423ac8d): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=4b9f1e43-a8a1-4545-b230-3c20e51d1526 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:57:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:24.243483913Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 8ae23a8c533bbcee5aa4d0fa18e144ca010f29213883be86e5f68d5be423ac8d" id=4b9f1e43-a8a1-4545-b230-3c20e51d1526 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:57:24 ip-10-0-136-68 systemd[1]: run-utsns-2e3f2600\x2d1d4a\x2d4586\x2da789\x2dfc7c030ae615.mount: Deactivated successfully. Feb 23 19:57:24 ip-10-0-136-68 systemd[1]: run-ipcns-2e3f2600\x2d1d4a\x2d4586\x2da789\x2dfc7c030ae615.mount: Deactivated successfully. Feb 23 19:57:24 ip-10-0-136-68 systemd[1]: run-netns-2e3f2600\x2d1d4a\x2d4586\x2da789\x2dfc7c030ae615.mount: Deactivated successfully. 
Feb 23 19:57:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:24.279377204Z" level=info msg="runSandbox: deleting pod ID 8ae23a8c533bbcee5aa4d0fa18e144ca010f29213883be86e5f68d5be423ac8d from idIndex" id=4b9f1e43-a8a1-4545-b230-3c20e51d1526 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:57:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:24.279417935Z" level=info msg="runSandbox: removing pod sandbox 8ae23a8c533bbcee5aa4d0fa18e144ca010f29213883be86e5f68d5be423ac8d" id=4b9f1e43-a8a1-4545-b230-3c20e51d1526 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:57:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:24.279443275Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 8ae23a8c533bbcee5aa4d0fa18e144ca010f29213883be86e5f68d5be423ac8d" id=4b9f1e43-a8a1-4545-b230-3c20e51d1526 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:57:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:24.279455406Z" level=info msg="runSandbox: unmounting shmPath for sandbox 8ae23a8c533bbcee5aa4d0fa18e144ca010f29213883be86e5f68d5be423ac8d" id=4b9f1e43-a8a1-4545-b230-3c20e51d1526 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:57:24 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-8ae23a8c533bbcee5aa4d0fa18e144ca010f29213883be86e5f68d5be423ac8d-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:57:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:24.288317463Z" level=info msg="runSandbox: removing pod sandbox from storage: 8ae23a8c533bbcee5aa4d0fa18e144ca010f29213883be86e5f68d5be423ac8d" id=4b9f1e43-a8a1-4545-b230-3c20e51d1526 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:57:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:24.290031262Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=4b9f1e43-a8a1-4545-b230-3c20e51d1526 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:57:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:24.290063156Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=4b9f1e43-a8a1-4545-b230-3c20e51d1526 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:57:24 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:57:24.290284 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(8ae23a8c533bbcee5aa4d0fa18e144ca010f29213883be86e5f68d5be423ac8d): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:57:24 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:57:24.290355 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(8ae23a8c533bbcee5aa4d0fa18e144ca010f29213883be86e5f68d5be423ac8d): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:57:24 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:57:24.290385 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(8ae23a8c533bbcee5aa4d0fa18e144ca010f29213883be86e5f68d5be423ac8d): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:57:24 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:57:24.290438 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(8ae23a8c533bbcee5aa4d0fa18e144ca010f29213883be86e5f68d5be423ac8d): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 19:57:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:24.352913426Z" level=warning msg="Failed to find container exit file for 2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc: timed out waiting for the condition" id=0f666e62-5584-4830-8f47-6d882a0d8c17 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:57:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:24.353434582Z" level=info msg="Removing container: 2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc" id=cd1585e9-e23f-4a73-9014-272371e3c876 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 19:57:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:25.244178856Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(c699079469d766cef89d8ddc225bfd803da5cdc1d5ddfca81c894ace73c21e66): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b29ab78b-0e69-4c3d-9144-0a93f924785d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:57:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:25.244230250Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox c699079469d766cef89d8ddc225bfd803da5cdc1d5ddfca81c894ace73c21e66" id=b29ab78b-0e69-4c3d-9144-0a93f924785d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:57:25 ip-10-0-136-68 systemd[1]: 
run-utsns-9cf3dc2e\x2ded7e\x2d4dc6\x2d900f\x2ddc2b5bb4b329.mount: Deactivated successfully. Feb 23 19:57:25 ip-10-0-136-68 systemd[1]: run-ipcns-9cf3dc2e\x2ded7e\x2d4dc6\x2d900f\x2ddc2b5bb4b329.mount: Deactivated successfully. Feb 23 19:57:25 ip-10-0-136-68 systemd[1]: run-netns-9cf3dc2e\x2ded7e\x2d4dc6\x2d900f\x2ddc2b5bb4b329.mount: Deactivated successfully. Feb 23 19:57:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:25.264332469Z" level=info msg="runSandbox: deleting pod ID c699079469d766cef89d8ddc225bfd803da5cdc1d5ddfca81c894ace73c21e66 from idIndex" id=b29ab78b-0e69-4c3d-9144-0a93f924785d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:57:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:25.264372514Z" level=info msg="runSandbox: removing pod sandbox c699079469d766cef89d8ddc225bfd803da5cdc1d5ddfca81c894ace73c21e66" id=b29ab78b-0e69-4c3d-9144-0a93f924785d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:57:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:25.264405245Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox c699079469d766cef89d8ddc225bfd803da5cdc1d5ddfca81c894ace73c21e66" id=b29ab78b-0e69-4c3d-9144-0a93f924785d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:57:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:25.264418398Z" level=info msg="runSandbox: unmounting shmPath for sandbox c699079469d766cef89d8ddc225bfd803da5cdc1d5ddfca81c894ace73c21e66" id=b29ab78b-0e69-4c3d-9144-0a93f924785d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:57:25 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-c699079469d766cef89d8ddc225bfd803da5cdc1d5ddfca81c894ace73c21e66-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:57:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:25.270325282Z" level=info msg="runSandbox: removing pod sandbox from storage: c699079469d766cef89d8ddc225bfd803da5cdc1d5ddfca81c894ace73c21e66" id=b29ab78b-0e69-4c3d-9144-0a93f924785d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:57:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:25.271847336Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=b29ab78b-0e69-4c3d-9144-0a93f924785d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:57:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:25.271877014Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=b29ab78b-0e69-4c3d-9144-0a93f924785d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:57:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:57:25.272114 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(c699079469d766cef89d8ddc225bfd803da5cdc1d5ddfca81c894ace73c21e66): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:57:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:57:25.272178 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(c699079469d766cef89d8ddc225bfd803da5cdc1d5ddfca81c894ace73c21e66): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:57:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:57:25.272203 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(c699079469d766cef89d8ddc225bfd803da5cdc1d5ddfca81c894ace73c21e66): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:57:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:57:25.272313 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(c699079469d766cef89d8ddc225bfd803da5cdc1d5ddfca81c894ace73c21e66): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 19:57:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:25.344090149Z" level=warning msg="Failed to find container exit file for 9f5e832f8b09b61288918dd84f7ea7a7f570680ed1b95be2f2f107ec4afcb068: timed out waiting for the condition" id=48e5e068-dda9-4380-9974-30871258b35c name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:57:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:57:26.292124 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:57:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:57:26.292460 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:57:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:57:26.292724 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" 
containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:57:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:57:26.292754 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:57:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:28.102029311Z" level=warning msg="Failed to find container exit file for 2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc: timed out waiting for the condition" id=cd1585e9-e23f-4a73-9014-272371e3c876 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 19:57:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:28.115061086Z" level=info msg="Removed container 2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=cd1585e9-e23f-4a73-9014-272371e3c876 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 19:57:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:29.080877264Z" level=error msg="Failed to update container state for 2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc: `/usr/bin/runc --root /run/runc --systemd-cgroup state 2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc` failed: time=\"2023-02-23T19:57:29Z\" level=error msg=\"container does not exist\"\n : exit status 1" id=93cd18db-a6eb-4c2a-a5a5-3d9a6d507011 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:57:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:29.080930782Z" level=warning msg="Failed to UpdateStatus of container 
2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc: state command returned nil" id=93cd18db-a6eb-4c2a-a5a5-3d9a6d507011 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:57:29 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:57:29.081851 2199 generic.go:332] "Generic (PLEG): container finished" podID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerID="8d328bf96ae109fd8fa57e0e5741495c6b02ab8ffa969733369a42b7cd13ab22" exitCode=-1 Feb 23 19:57:29 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:57:29.081892 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerDied Data:8d328bf96ae109fd8fa57e0e5741495c6b02ab8ffa969733369a42b7cd13ab22} Feb 23 19:57:29 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:57:29.081919 2199 scope.go:115] "RemoveContainer" containerID="9f5e832f8b09b61288918dd84f7ea7a7f570680ed1b95be2f2f107ec4afcb068" Feb 23 19:57:29 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:57:29.082284 2199 scope.go:115] "RemoveContainer" containerID="8d328bf96ae109fd8fa57e0e5741495c6b02ab8ffa969733369a42b7cd13ab22" Feb 23 19:57:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:29.082875081Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=c04559a1-f729-4006-964c-c111f812d387 name=/runtime.v1.ImageService/ImageStatus Feb 23 19:57:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:29.083059430Z" level=info msg="Image status: 
&ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=c04559a1-f729-4006-964c-c111f812d387 name=/runtime.v1.ImageService/ImageStatus Feb 23 19:57:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:29.083674220Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=22d6e8ba-7336-4fb3-9f3c-726dd7cab6dd name=/runtime.v1.ImageService/ImageStatus Feb 23 19:57:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:29.083834123Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=22d6e8ba-7336-4fb3-9f3c-726dd7cab6dd name=/runtime.v1.ImageService/ImageStatus Feb 23 19:57:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:29.084616899Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=0e6177db-51f6-4bb6-a301-345f698e3a5d name=/runtime.v1.RuntimeService/CreateContainer Feb 23 19:57:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:29.084732795Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:57:29 ip-10-0-136-68 systemd[1]: Started 
crio-conmon-c6bda21faaa445a57f899f904f1023e95b489765b2f119c987c014e7fa428723.scope. Feb 23 19:57:29 ip-10-0-136-68 systemd[1]: Started libcontainer container c6bda21faaa445a57f899f904f1023e95b489765b2f119c987c014e7fa428723. Feb 23 19:57:29 ip-10-0-136-68 conmon[20200]: conmon c6bda21faaa445a57f89 : Failed to write to cgroup.event_control Operation not supported Feb 23 19:57:29 ip-10-0-136-68 systemd[1]: crio-conmon-c6bda21faaa445a57f899f904f1023e95b489765b2f119c987c014e7fa428723.scope: Deactivated successfully. Feb 23 19:57:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:29.205169679Z" level=info msg="Created container c6bda21faaa445a57f899f904f1023e95b489765b2f119c987c014e7fa428723: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=0e6177db-51f6-4bb6-a301-345f698e3a5d name=/runtime.v1.RuntimeService/CreateContainer Feb 23 19:57:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:29.205602441Z" level=info msg="Starting container: c6bda21faaa445a57f899f904f1023e95b489765b2f119c987c014e7fa428723" id=104a3569-d508-466f-880b-518adb6d7707 name=/runtime.v1.RuntimeService/StartContainer Feb 23 19:57:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:29.224686077Z" level=info msg="Started container" PID=20212 containerID=c6bda21faaa445a57f899f904f1023e95b489765b2f119c987c014e7fa428723 description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver id=104a3569-d508-466f-880b-518adb6d7707 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4ac357af36e7dd4792f93249425e93fd84aed45840e9fbb3d0ae6c0af964798 Feb 23 19:57:29 ip-10-0-136-68 systemd[1]: crio-c6bda21faaa445a57f899f904f1023e95b489765b2f119c987c014e7fa428723.scope: Deactivated successfully. Feb 23 19:57:31 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:57:31.216590 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:57:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:31.217081670Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=81ef1b30-2188-4254-804e-0ea1199300c4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:57:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:31.217140822Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:57:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:31.222477865Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:2a5e30b6896cbb182a23abc8cbe5133fab2f965eaffc858087b98a0da60d361a UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/c5843629-5f04-48a6-9863-ac947a06e954 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:57:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:31.222500309Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:57:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:32.834083336Z" level=warning msg="Failed to find container exit file for 9f5e832f8b09b61288918dd84f7ea7a7f570680ed1b95be2f2f107ec4afcb068: timed out waiting for the condition" id=4a045428-8dd3-45eb-956e-2a6881a5bf06 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:57:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:33.822518376Z" level=warning msg="Failed to find container exit file for 8d328bf96ae109fd8fa57e0e5741495c6b02ab8ffa969733369a42b7cd13ab22: timed out waiting for the condition" id=e49af0a4-0e7e-4052-b543-9066645fa3de name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:57:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:57:34.872133 2199 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 19:57:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:36.597035928Z" level=warning msg="Failed to find container exit file for 9f5e832f8b09b61288918dd84f7ea7a7f570680ed1b95be2f2f107ec4afcb068: timed out waiting for the condition" id=01f14b5f-9dcc-46b1-88e2-2dba06f1fa86 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:57:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:36.597562193Z" level=info msg="Removing container: 9f5e832f8b09b61288918dd84f7ea7a7f570680ed1b95be2f2f107ec4afcb068" id=0caf58f7-a1c9-4179-bb0e-be698bd697c4 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 19:57:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:37.584221210Z" level=warning msg="Failed to find container exit file for 9f5e832f8b09b61288918dd84f7ea7a7f570680ed1b95be2f2f107ec4afcb068: timed out waiting for the condition" id=fe7100d5-a579-4250-9fed-e233d034871b name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:57:37 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:57:37.585145 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerStarted Data:c6bda21faaa445a57f899f904f1023e95b489765b2f119c987c014e7fa428723} Feb 23 19:57:37 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:57:37.585454 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 19:57:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:37.585757868Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=b5aade1a-5ac3-4848-8bab-0e730a6ba4a6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:57:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:37.585868708Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:57:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:37.591483973Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:6cda1c9de3fc894229fc74fe9d3a23514e6bb965ec98bf1c775173c7cd3b3c35 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/348f9bcc-79a8-45fe-8dc5-78c80095d9bc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:57:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:37.591536933Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:57:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:39.144115426Z" level=warning msg="Failed to find container exit file for 9f5e832f8b09b61288918dd84f7ea7a7f570680ed1b95be2f2f107ec4afcb068: timed out waiting for the condition" id=5f10c325-5b47-47a8-a4b2-173d92e88b72 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:57:39 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:57:39.144483 2199 kuberuntime_gc.go:390] "Failed to remove container log dead symlink" err="remove /var/log/containers/aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers_csi-driver-9f5e832f8b09b61288918dd84f7ea7a7f570680ed1b95be2f2f107ec4afcb068.log: no such file or directory" path="/var/log/containers/aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers_csi-driver-9f5e832f8b09b61288918dd84f7ea7a7f570680ed1b95be2f2f107ec4afcb068.log" Feb 23 19:57:40 ip-10-0-136-68 
kubenswrapper[2199]: I0223 19:57:40.216998 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:57:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:40.217419423Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=7cc358ce-95c1-4a1a-995a-ab7df807086e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:57:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:40.217474364Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:57:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:40.223283902Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:9b8199322dbdd7e56ac26b101c3460481a2b50eee59a1a614751fb2ce624b97c UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/5d2635ac-b5fb-41b1-b00b-c494f3b43d4f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:57:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:40.223311387Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:57:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:40.346908224Z" level=warning msg="Failed to find container exit file for 9f5e832f8b09b61288918dd84f7ea7a7f570680ed1b95be2f2f107ec4afcb068: timed out waiting for the condition" id=0caf58f7-a1c9-4179-bb0e-be698bd697c4 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 19:57:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:40.371584687Z" level=info msg="Removed container 9f5e832f8b09b61288918dd84f7ea7a7f570680ed1b95be2f2f107ec4afcb068: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=0caf58f7-a1c9-4179-bb0e-be698bd697c4 
name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 19:57:40 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:57:40.371817 2199 scope.go:115] "RemoveContainer" containerID="2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc" Feb 23 19:57:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:57:40.372119 2199 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc\": container with ID starting with 2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc not found: ID does not exist" containerID="2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc" Feb 23 19:57:40 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:57:40.372156 2199 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc} err="failed to get container status \"2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc\": rpc error: code = NotFound desc = could not find container \"2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc\": container with ID starting with 2fbba05fe01cc81ad257936e71019a62bd22b8d2439da5da04a3c92086ea5fcc not found: ID does not exist" Feb 23 19:57:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:44.339088697Z" level=warning msg="Failed to find container exit file for 8d328bf96ae109fd8fa57e0e5741495c6b02ab8ffa969733369a42b7cd13ab22: timed out waiting for the condition" id=a49e5573-a87c-4712-a7d8-05440cbc23e3 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:57:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:57:44.872122 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" 
start-of-body= Feb 23 19:57:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:57:44.872188 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:57:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:57:54.872085 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:57:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:57:54.872142 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:57:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:55.236832747Z" level=info msg="NetworkStart: stopping network for sandbox 2100c96ea66610d78cc37ab93c6d16aa36152691233451c7641d074caec1be7c" id=d8e884cd-44c2-4e6f-84c7-cf062b52b3f9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:57:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:55.236946671Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:2100c96ea66610d78cc37ab93c6d16aa36152691233451c7641d074caec1be7c UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/3909417d-62d8-42d2-97b1-a148b2049bbf Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:57:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:55.236978844Z" 
level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:57:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:55.236991698Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:57:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:55.236999686Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:57:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:56.245418610Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(859da5a577e94fe3f8874e31e38ce669d692e59c3eb236a998e5bdd99feb0622): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=22498906-7361-426e-96cd-8a5bac6e9340 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:57:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:56.245471068Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 859da5a577e94fe3f8874e31e38ce669d692e59c3eb236a998e5bdd99feb0622" id=22498906-7361-426e-96cd-8a5bac6e9340 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:57:56 ip-10-0-136-68 systemd[1]: run-utsns-41fdc6bd\x2d395f\x2d483c\x2dbd2c\x2d434753c138c2.mount: Deactivated successfully. Feb 23 19:57:56 ip-10-0-136-68 systemd[1]: run-ipcns-41fdc6bd\x2d395f\x2d483c\x2dbd2c\x2d434753c138c2.mount: Deactivated successfully. 
Feb 23 19:57:56 ip-10-0-136-68 systemd[1]: run-netns-41fdc6bd\x2d395f\x2d483c\x2dbd2c\x2d434753c138c2.mount: Deactivated successfully. Feb 23 19:57:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:56.278340809Z" level=info msg="runSandbox: deleting pod ID 859da5a577e94fe3f8874e31e38ce669d692e59c3eb236a998e5bdd99feb0622 from idIndex" id=22498906-7361-426e-96cd-8a5bac6e9340 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:57:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:56.278379910Z" level=info msg="runSandbox: removing pod sandbox 859da5a577e94fe3f8874e31e38ce669d692e59c3eb236a998e5bdd99feb0622" id=22498906-7361-426e-96cd-8a5bac6e9340 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:57:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:56.278418139Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 859da5a577e94fe3f8874e31e38ce669d692e59c3eb236a998e5bdd99feb0622" id=22498906-7361-426e-96cd-8a5bac6e9340 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:57:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:56.278432911Z" level=info msg="runSandbox: unmounting shmPath for sandbox 859da5a577e94fe3f8874e31e38ce669d692e59c3eb236a998e5bdd99feb0622" id=22498906-7361-426e-96cd-8a5bac6e9340 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:57:56 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-859da5a577e94fe3f8874e31e38ce669d692e59c3eb236a998e5bdd99feb0622-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:57:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:56.284316337Z" level=info msg="runSandbox: removing pod sandbox from storage: 859da5a577e94fe3f8874e31e38ce669d692e59c3eb236a998e5bdd99feb0622" id=22498906-7361-426e-96cd-8a5bac6e9340 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:57:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:56.286271881Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=22498906-7361-426e-96cd-8a5bac6e9340 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:57:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:57:56.286308994Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=22498906-7361-426e-96cd-8a5bac6e9340 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:57:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:57:56.286493 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(859da5a577e94fe3f8874e31e38ce669d692e59c3eb236a998e5bdd99feb0622): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:57:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:57:56.286547 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(859da5a577e94fe3f8874e31e38ce669d692e59c3eb236a998e5bdd99feb0622): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:57:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:57:56.286569 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(859da5a577e94fe3f8874e31e38ce669d692e59c3eb236a998e5bdd99feb0622): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:57:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:57:56.286629 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(859da5a577e94fe3f8874e31e38ce669d692e59c3eb236a998e5bdd99feb0622): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 19:57:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:57:56.292021 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:57:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:57:56.292237 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:57:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:57:56.292437 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:57:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:57:56.292482 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:58:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:58:04.872425 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:58:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:58:04.872485 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:58:11 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:58:11.217480 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 19:58:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:11.217947793Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=dc822ed2-50c4-42f0-a884-46ae7184436d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:58:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:11.218021603Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:58:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:11.223350677Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:84626585c61dcad9da47c002e5f4894725e0ce3f1d986e880915ed01e2c39a39 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/321e87fc-ceaf-4931-a65e-58c331aaed1b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:58:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:11.223388808Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:58:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:58:14.872640 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:58:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:58:14.872704 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:58:16 
ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:16.234896646Z" level=info msg="NetworkStart: stopping network for sandbox 2a5e30b6896cbb182a23abc8cbe5133fab2f965eaffc858087b98a0da60d361a" id=81ef1b30-2188-4254-804e-0ea1199300c4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:58:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:16.235000919Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:2a5e30b6896cbb182a23abc8cbe5133fab2f965eaffc858087b98a0da60d361a UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/c5843629-5f04-48a6-9863-ac947a06e954 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:58:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:16.235030859Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:58:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:16.235038283Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:58:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:16.235044506Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:58:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:22.603769022Z" level=info msg="NetworkStart: stopping network for sandbox 6cda1c9de3fc894229fc74fe9d3a23514e6bb965ec98bf1c775173c7cd3b3c35" id=b5aade1a-5ac3-4848-8bab-0e730a6ba4a6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:58:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:22.603916690Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:6cda1c9de3fc894229fc74fe9d3a23514e6bb965ec98bf1c775173c7cd3b3c35 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/348f9bcc-79a8-45fe-8dc5-78c80095d9bc Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: 
IpRanges:[]}] Aliases:map[]}" Feb 23 19:58:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:22.603957191Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:58:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:22.603968383Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:58:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:22.603978902Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:58:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:58:24.872222 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:58:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:58:24.872310 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:58:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:58:24.872340 2199 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 19:58:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:58:24.872828 2199 kuberuntime_manager.go:659] "Message for Container of pod" containerName="csi-driver" containerStatusID={Type:cri-o ID:c6bda21faaa445a57f899f904f1023e95b489765b2f119c987c014e7fa428723} pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" containerMessage="Container csi-driver failed liveness probe, will be restarted" Feb 23 19:58:24 ip-10-0-136-68 
kubenswrapper[2199]: I0223 19:58:24.872998 2199 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" containerID="cri-o://c6bda21faaa445a57f899f904f1023e95b489765b2f119c987c014e7fa428723" gracePeriod=30 Feb 23 19:58:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:24.873227735Z" level=info msg="Stopping container: c6bda21faaa445a57f899f904f1023e95b489765b2f119c987c014e7fa428723 (timeout: 30s)" id=c7c94936-41d2-4468-95dc-049aeca90ae8 name=/runtime.v1.RuntimeService/StopContainer Feb 23 19:58:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:25.237023591Z" level=info msg="NetworkStart: stopping network for sandbox 9b8199322dbdd7e56ac26b101c3460481a2b50eee59a1a614751fb2ce624b97c" id=7cc358ce-95c1-4a1a-995a-ab7df807086e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:58:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:25.237150204Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:9b8199322dbdd7e56ac26b101c3460481a2b50eee59a1a614751fb2ce624b97c UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/5d2635ac-b5fb-41b1-b00b-c494f3b43d4f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:58:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:25.237178294Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:58:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:25.237186438Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:58:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:25.237196229Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 
19:58:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:58:26.292377 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:58:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:58:26.292699 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:58:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:58:26.292940 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:58:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:58:26.292970 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" 
pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:58:28 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:28.635174162Z" level=warning msg="Failed to find container exit file for c6bda21faaa445a57f899f904f1023e95b489765b2f119c987c014e7fa428723: timed out waiting for the condition" id=c7c94936-41d2-4468-95dc-049aeca90ae8 name=/runtime.v1.RuntimeService/StopContainer Feb 23 19:58:28 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-8bc89789b1d36b3eef407198166406a5286eca39ed7a34fe54bc2289e62d12a3-merged.mount: Deactivated successfully. Feb 23 19:58:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:32.411975541Z" level=warning msg="Failed to find container exit file for c6bda21faaa445a57f899f904f1023e95b489765b2f119c987c014e7fa428723: timed out waiting for the condition" id=c7c94936-41d2-4468-95dc-049aeca90ae8 name=/runtime.v1.RuntimeService/StopContainer Feb 23 19:58:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:32.413881624Z" level=info msg="Stopped container c6bda21faaa445a57f899f904f1023e95b489765b2f119c987c014e7fa428723: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=c7c94936-41d2-4468-95dc-049aeca90ae8 name=/runtime.v1.RuntimeService/StopContainer Feb 23 19:58:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:32.414585056Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=1f5b5582-8a90-43b3-a67e-6f2a174d5bb4 name=/runtime.v1.ImageService/ImageStatus Feb 23 19:58:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:32.414743254Z" level=info msg="Image status: 
&ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=1f5b5582-8a90-43b3-a67e-6f2a174d5bb4 name=/runtime.v1.ImageService/ImageStatus Feb 23 19:58:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:32.415317592Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=f3a506a3-3868-4cd8-b8dd-cb249b947293 name=/runtime.v1.ImageService/ImageStatus Feb 23 19:58:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:32.415476062Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=f3a506a3-3868-4cd8-b8dd-cb249b947293 name=/runtime.v1.ImageService/ImageStatus Feb 23 19:58:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:32.416101830Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=de7d8e93-bfe6-4ed1-934f-0db95a837c44 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 19:58:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:32.416205179Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:58:32 ip-10-0-136-68 systemd[1]: Started 
crio-conmon-fbeddbbb8be668d9a8b25933d2b1100f960a28a7cf2959aeda173dc5a4fe0543.scope. Feb 23 19:58:32 ip-10-0-136-68 systemd[1]: Started libcontainer container fbeddbbb8be668d9a8b25933d2b1100f960a28a7cf2959aeda173dc5a4fe0543. Feb 23 19:58:32 ip-10-0-136-68 conmon[20446]: conmon fbeddbbb8be668d9a8b2 : Failed to write to cgroup.event_control Operation not supported Feb 23 19:58:32 ip-10-0-136-68 systemd[1]: crio-conmon-fbeddbbb8be668d9a8b25933d2b1100f960a28a7cf2959aeda173dc5a4fe0543.scope: Deactivated successfully. Feb 23 19:58:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:32.562291918Z" level=info msg="Created container fbeddbbb8be668d9a8b25933d2b1100f960a28a7cf2959aeda173dc5a4fe0543: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=de7d8e93-bfe6-4ed1-934f-0db95a837c44 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 19:58:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:32.562812149Z" level=info msg="Starting container: fbeddbbb8be668d9a8b25933d2b1100f960a28a7cf2959aeda173dc5a4fe0543" id=54a99821-54f9-4693-bf2e-157ca9593fd9 name=/runtime.v1.RuntimeService/StartContainer Feb 23 19:58:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:32.570477196Z" level=info msg="Started container" PID=20458 containerID=fbeddbbb8be668d9a8b25933d2b1100f960a28a7cf2959aeda173dc5a4fe0543 description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver id=54a99821-54f9-4693-bf2e-157ca9593fd9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4ac357af36e7dd4792f93249425e93fd84aed45840e9fbb3d0ae6c0af964798 Feb 23 19:58:32 ip-10-0-136-68 systemd[1]: crio-fbeddbbb8be668d9a8b25933d2b1100f960a28a7cf2959aeda173dc5a4fe0543.scope: Deactivated successfully. 
Feb 23 19:58:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:33.157081374Z" level=warning msg="Failed to find container exit file for c6bda21faaa445a57f899f904f1023e95b489765b2f119c987c014e7fa428723: timed out waiting for the condition" id=5732cb39-0972-4cf1-ae83-d5d81bed89d5 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:58:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:58:36.216905 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:58:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:58:36.217183 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:58:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:58:36.217446 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:58:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 
19:58:36.217497 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:58:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:36.907031122Z" level=warning msg="Failed to find container exit file for 8d328bf96ae109fd8fa57e0e5741495c6b02ab8ffa969733369a42b7cd13ab22: timed out waiting for the condition" id=0488fecc-3abe-4075-ac5b-8d86ea422f31 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:58:36 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:58:36.907988 2199 generic.go:332] "Generic (PLEG): container finished" podID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerID="c6bda21faaa445a57f899f904f1023e95b489765b2f119c987c014e7fa428723" exitCode=-1 Feb 23 19:58:36 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:58:36.908041 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerDied Data:c6bda21faaa445a57f899f904f1023e95b489765b2f119c987c014e7fa428723} Feb 23 19:58:36 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:58:36.908079 2199 scope.go:115] "RemoveContainer" containerID="8d328bf96ae109fd8fa57e0e5741495c6b02ab8ffa969733369a42b7cd13ab22" Feb 23 19:58:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:40.247397385Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(2100c96ea66610d78cc37ab93c6d16aa36152691233451c7641d074caec1be7c): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network 
\"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d8e884cd-44c2-4e6f-84c7-cf062b52b3f9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:58:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:40.247446878Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 2100c96ea66610d78cc37ab93c6d16aa36152691233451c7641d074caec1be7c" id=d8e884cd-44c2-4e6f-84c7-cf062b52b3f9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:58:40 ip-10-0-136-68 systemd[1]: run-utsns-3909417d\x2d62d8\x2d42d2\x2d97b1\x2da148b2049bbf.mount: Deactivated successfully. Feb 23 19:58:40 ip-10-0-136-68 systemd[1]: run-ipcns-3909417d\x2d62d8\x2d42d2\x2d97b1\x2da148b2049bbf.mount: Deactivated successfully. Feb 23 19:58:40 ip-10-0-136-68 systemd[1]: run-netns-3909417d\x2d62d8\x2d42d2\x2d97b1\x2da148b2049bbf.mount: Deactivated successfully. 
Feb 23 19:58:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:40.262320621Z" level=info msg="runSandbox: deleting pod ID 2100c96ea66610d78cc37ab93c6d16aa36152691233451c7641d074caec1be7c from idIndex" id=d8e884cd-44c2-4e6f-84c7-cf062b52b3f9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:58:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:40.262356976Z" level=info msg="runSandbox: removing pod sandbox 2100c96ea66610d78cc37ab93c6d16aa36152691233451c7641d074caec1be7c" id=d8e884cd-44c2-4e6f-84c7-cf062b52b3f9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:58:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:40.262392655Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 2100c96ea66610d78cc37ab93c6d16aa36152691233451c7641d074caec1be7c" id=d8e884cd-44c2-4e6f-84c7-cf062b52b3f9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:58:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:40.262413626Z" level=info msg="runSandbox: unmounting shmPath for sandbox 2100c96ea66610d78cc37ab93c6d16aa36152691233451c7641d074caec1be7c" id=d8e884cd-44c2-4e6f-84c7-cf062b52b3f9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:58:40 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-2100c96ea66610d78cc37ab93c6d16aa36152691233451c7641d074caec1be7c-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:58:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:40.272302072Z" level=info msg="runSandbox: removing pod sandbox from storage: 2100c96ea66610d78cc37ab93c6d16aa36152691233451c7641d074caec1be7c" id=d8e884cd-44c2-4e6f-84c7-cf062b52b3f9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:58:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:40.273974203Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=d8e884cd-44c2-4e6f-84c7-cf062b52b3f9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:58:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:40.274005648Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=d8e884cd-44c2-4e6f-84c7-cf062b52b3f9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:58:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:58:40.274229 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(2100c96ea66610d78cc37ab93c6d16aa36152691233451c7641d074caec1be7c): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:58:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:58:40.274326 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(2100c96ea66610d78cc37ab93c6d16aa36152691233451c7641d074caec1be7c): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:58:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:58:40.274367 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(2100c96ea66610d78cc37ab93c6d16aa36152691233451c7641d074caec1be7c): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:58:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:58:40.274462 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(2100c96ea66610d78cc37ab93c6d16aa36152691233451c7641d074caec1be7c): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 19:58:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:40.657994255Z" level=warning msg="Failed to find container exit file for 8d328bf96ae109fd8fa57e0e5741495c6b02ab8ffa969733369a42b7cd13ab22: timed out waiting for the condition" id=d84de9c2-2768-4891-9391-c05b77d3ec0c name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:58:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:41.673124061Z" level=warning msg="Failed to find container exit file for c6bda21faaa445a57f899f904f1023e95b489765b2f119c987c014e7fa428723: timed out waiting for the condition" id=89d89263-089e-4741-8c80-4784b8210c62 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:58:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:44.418966932Z" level=warning msg="Failed to find container exit file for 8d328bf96ae109fd8fa57e0e5741495c6b02ab8ffa969733369a42b7cd13ab22: timed out waiting for the condition" id=6f00a53d-c96c-40d5-9a4c-ca37965a14ea name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:58:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:44.419579491Z" level=info msg="Removing container: 8d328bf96ae109fd8fa57e0e5741495c6b02ab8ffa969733369a42b7cd13ab22" id=02db472b-177c-4ded-a88c-9040ec7f5164 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 19:58:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:45.423011256Z" level=warning msg="Failed to find container exit file for 8d328bf96ae109fd8fa57e0e5741495c6b02ab8ffa969733369a42b7cd13ab22: timed out waiting for the condition" id=fb4874fd-dad9-4e56-a0d4-c80cfaa8451b name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:58:45 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:58:45.423971 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 
Type:ContainerStarted Data:fbeddbbb8be668d9a8b25933d2b1100f960a28a7cf2959aeda173dc5a4fe0543} Feb 23 19:58:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:48.178059594Z" level=warning msg="Failed to find container exit file for 8d328bf96ae109fd8fa57e0e5741495c6b02ab8ffa969733369a42b7cd13ab22: timed out waiting for the condition" id=02db472b-177c-4ded-a88c-9040ec7f5164 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 19:58:48 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-1bab9d8a752dba383a4c85e8a5f143ee3c298a12aae9fc8d42284a02ec809b23-merged.mount: Deactivated successfully. Feb 23 19:58:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:48.219783234Z" level=info msg="Removed container 8d328bf96ae109fd8fa57e0e5741495c6b02ab8ffa969733369a42b7cd13ab22: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=02db472b-177c-4ded-a88c-9040ec7f5164 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 19:58:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:52.179152618Z" level=warning msg="Failed to find container exit file for c6bda21faaa445a57f899f904f1023e95b489765b2f119c987c014e7fa428723: timed out waiting for the condition" id=99a92845-cb58-4e74-8be8-34efd094853b name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 19:58:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:58:54.872896 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:58:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:58:54.872951 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 
10.0.136.68:10300: connect: connection refused" Feb 23 19:58:55 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:58:55.216557 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 19:58:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:55.216846138Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=68e0f729-7061-4b22-ab81-d04188f7685a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:58:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:55.216908801Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 19:58:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:55.222078549Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:8655287285e9348fe4a7ec370a128695921e8ed38f2a94eb850db91fa77dc97f UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/dde3fb78-9c1b-4cb1-bf45-529094795cf8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:58:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:55.222105358Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:58:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:56.234879492Z" level=info msg="NetworkStart: stopping network for sandbox 84626585c61dcad9da47c002e5f4894725e0ce3f1d986e880915ed01e2c39a39" id=dc822ed2-50c4-42f0-a884-46ae7184436d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:58:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:56.234996995Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:84626585c61dcad9da47c002e5f4894725e0ce3f1d986e880915ed01e2c39a39 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/321e87fc-ceaf-4931-a65e-58c331aaed1b 
Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 19:58:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:56.235033599Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 19:58:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:56.235044395Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 19:58:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:58:56.235055289Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 19:58:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:58:56.292688 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:58:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:58:56.292905 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:58:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:58:56.293106 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 19:58:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:58:56.293128 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 19:59:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:01.244632505Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(2a5e30b6896cbb182a23abc8cbe5133fab2f965eaffc858087b98a0da60d361a): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=81ef1b30-2188-4254-804e-0ea1199300c4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:59:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:01.244690444Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 2a5e30b6896cbb182a23abc8cbe5133fab2f965eaffc858087b98a0da60d361a" id=81ef1b30-2188-4254-804e-0ea1199300c4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:59:01 ip-10-0-136-68 systemd[1]: 
run-utsns-c5843629\x2d5f04\x2d48a6\x2d9863\x2dac947a06e954.mount: Deactivated successfully. Feb 23 19:59:01 ip-10-0-136-68 systemd[1]: run-ipcns-c5843629\x2d5f04\x2d48a6\x2d9863\x2dac947a06e954.mount: Deactivated successfully. Feb 23 19:59:01 ip-10-0-136-68 systemd[1]: run-netns-c5843629\x2d5f04\x2d48a6\x2d9863\x2dac947a06e954.mount: Deactivated successfully. Feb 23 19:59:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:01.274339238Z" level=info msg="runSandbox: deleting pod ID 2a5e30b6896cbb182a23abc8cbe5133fab2f965eaffc858087b98a0da60d361a from idIndex" id=81ef1b30-2188-4254-804e-0ea1199300c4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:59:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:01.274376690Z" level=info msg="runSandbox: removing pod sandbox 2a5e30b6896cbb182a23abc8cbe5133fab2f965eaffc858087b98a0da60d361a" id=81ef1b30-2188-4254-804e-0ea1199300c4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:59:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:01.274420338Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 2a5e30b6896cbb182a23abc8cbe5133fab2f965eaffc858087b98a0da60d361a" id=81ef1b30-2188-4254-804e-0ea1199300c4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:59:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:01.274435684Z" level=info msg="runSandbox: unmounting shmPath for sandbox 2a5e30b6896cbb182a23abc8cbe5133fab2f965eaffc858087b98a0da60d361a" id=81ef1b30-2188-4254-804e-0ea1199300c4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:59:01 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-2a5e30b6896cbb182a23abc8cbe5133fab2f965eaffc858087b98a0da60d361a-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:59:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:01.278310598Z" level=info msg="runSandbox: removing pod sandbox from storage: 2a5e30b6896cbb182a23abc8cbe5133fab2f965eaffc858087b98a0da60d361a" id=81ef1b30-2188-4254-804e-0ea1199300c4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:59:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:01.279919873Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=81ef1b30-2188-4254-804e-0ea1199300c4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:59:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:01.279952873Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=81ef1b30-2188-4254-804e-0ea1199300c4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:59:01 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:59:01.280162 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(2a5e30b6896cbb182a23abc8cbe5133fab2f965eaffc858087b98a0da60d361a): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:59:01 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:59:01.280221 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(2a5e30b6896cbb182a23abc8cbe5133fab2f965eaffc858087b98a0da60d361a): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:59:01 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:59:01.280283 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(2a5e30b6896cbb182a23abc8cbe5133fab2f965eaffc858087b98a0da60d361a): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 19:59:01 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:59:01.280347 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(2a5e30b6896cbb182a23abc8cbe5133fab2f965eaffc858087b98a0da60d361a): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 19:59:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:59:04.872548 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 19:59:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:59:04.872610 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 19:59:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:07.614115424Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(6cda1c9de3fc894229fc74fe9d3a23514e6bb965ec98bf1c775173c7cd3b3c35): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b5aade1a-5ac3-4848-8bab-0e730a6ba4a6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:59:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:07.614170757Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 6cda1c9de3fc894229fc74fe9d3a23514e6bb965ec98bf1c775173c7cd3b3c35" id=b5aade1a-5ac3-4848-8bab-0e730a6ba4a6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:59:07 ip-10-0-136-68 
systemd[1]: run-utsns-348f9bcc\x2d79a8\x2d45fe\x2d8dc5\x2d78c80095d9bc.mount: Deactivated successfully. Feb 23 19:59:07 ip-10-0-136-68 systemd[1]: run-ipcns-348f9bcc\x2d79a8\x2d45fe\x2d8dc5\x2d78c80095d9bc.mount: Deactivated successfully. Feb 23 19:59:07 ip-10-0-136-68 systemd[1]: run-netns-348f9bcc\x2d79a8\x2d45fe\x2d8dc5\x2d78c80095d9bc.mount: Deactivated successfully. Feb 23 19:59:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:07.646321910Z" level=info msg="runSandbox: deleting pod ID 6cda1c9de3fc894229fc74fe9d3a23514e6bb965ec98bf1c775173c7cd3b3c35 from idIndex" id=b5aade1a-5ac3-4848-8bab-0e730a6ba4a6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:59:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:07.646370934Z" level=info msg="runSandbox: removing pod sandbox 6cda1c9de3fc894229fc74fe9d3a23514e6bb965ec98bf1c775173c7cd3b3c35" id=b5aade1a-5ac3-4848-8bab-0e730a6ba4a6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:59:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:07.646421655Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 6cda1c9de3fc894229fc74fe9d3a23514e6bb965ec98bf1c775173c7cd3b3c35" id=b5aade1a-5ac3-4848-8bab-0e730a6ba4a6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:59:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:07.646445002Z" level=info msg="runSandbox: unmounting shmPath for sandbox 6cda1c9de3fc894229fc74fe9d3a23514e6bb965ec98bf1c775173c7cd3b3c35" id=b5aade1a-5ac3-4848-8bab-0e730a6ba4a6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:59:07 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-6cda1c9de3fc894229fc74fe9d3a23514e6bb965ec98bf1c775173c7cd3b3c35-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:59:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:07.651311724Z" level=info msg="runSandbox: removing pod sandbox from storage: 6cda1c9de3fc894229fc74fe9d3a23514e6bb965ec98bf1c775173c7cd3b3c35" id=b5aade1a-5ac3-4848-8bab-0e730a6ba4a6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:59:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:07.653141083Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=b5aade1a-5ac3-4848-8bab-0e730a6ba4a6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:59:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:07.653171180Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=b5aade1a-5ac3-4848-8bab-0e730a6ba4a6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:59:07 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:59:07.653393 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(6cda1c9de3fc894229fc74fe9d3a23514e6bb965ec98bf1c775173c7cd3b3c35): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:59:07 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:59:07.653568 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(6cda1c9de3fc894229fc74fe9d3a23514e6bb965ec98bf1c775173c7cd3b3c35): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:59:07 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:59:07.653604 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(6cda1c9de3fc894229fc74fe9d3a23514e6bb965ec98bf1c775173c7cd3b3c35): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 19:59:07 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:59:07.653666 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(6cda1c9de3fc894229fc74fe9d3a23514e6bb965ec98bf1c775173c7cd3b3c35): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 19:59:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:10.247398056Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(9b8199322dbdd7e56ac26b101c3460481a2b50eee59a1a614751fb2ce624b97c): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=7cc358ce-95c1-4a1a-995a-ab7df807086e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:59:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:10.247447158Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 9b8199322dbdd7e56ac26b101c3460481a2b50eee59a1a614751fb2ce624b97c" id=7cc358ce-95c1-4a1a-995a-ab7df807086e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:59:10 ip-10-0-136-68 systemd[1]: run-utsns-5d2635ac\x2db5fb\x2d41b1\x2db00b\x2dc494f3b43d4f.mount: Deactivated successfully. Feb 23 19:59:10 ip-10-0-136-68 systemd[1]: run-ipcns-5d2635ac\x2db5fb\x2d41b1\x2db00b\x2dc494f3b43d4f.mount: Deactivated successfully. Feb 23 19:59:10 ip-10-0-136-68 systemd[1]: run-netns-5d2635ac\x2db5fb\x2d41b1\x2db00b\x2dc494f3b43d4f.mount: Deactivated successfully. 
Feb 23 19:59:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:10.275324553Z" level=info msg="runSandbox: deleting pod ID 9b8199322dbdd7e56ac26b101c3460481a2b50eee59a1a614751fb2ce624b97c from idIndex" id=7cc358ce-95c1-4a1a-995a-ab7df807086e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:59:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:10.275361047Z" level=info msg="runSandbox: removing pod sandbox 9b8199322dbdd7e56ac26b101c3460481a2b50eee59a1a614751fb2ce624b97c" id=7cc358ce-95c1-4a1a-995a-ab7df807086e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:59:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:10.275405689Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 9b8199322dbdd7e56ac26b101c3460481a2b50eee59a1a614751fb2ce624b97c" id=7cc358ce-95c1-4a1a-995a-ab7df807086e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:59:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:10.275427137Z" level=info msg="runSandbox: unmounting shmPath for sandbox 9b8199322dbdd7e56ac26b101c3460481a2b50eee59a1a614751fb2ce624b97c" id=7cc358ce-95c1-4a1a-995a-ab7df807086e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:59:10 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-9b8199322dbdd7e56ac26b101c3460481a2b50eee59a1a614751fb2ce624b97c-userdata-shm.mount: Deactivated successfully. 
Feb 23 19:59:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:10.281317209Z" level=info msg="runSandbox: removing pod sandbox from storage: 9b8199322dbdd7e56ac26b101c3460481a2b50eee59a1a614751fb2ce624b97c" id=7cc358ce-95c1-4a1a-995a-ab7df807086e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:59:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:10.282800731Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=7cc358ce-95c1-4a1a-995a-ab7df807086e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:59:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:10.282834311Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=7cc358ce-95c1-4a1a-995a-ab7df807086e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 19:59:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:59:10.283009 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(9b8199322dbdd7e56ac26b101c3460481a2b50eee59a1a614751fb2ce624b97c): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 19:59:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:59:10.283057 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(9b8199322dbdd7e56ac26b101c3460481a2b50eee59a1a614751fb2ce624b97c): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:59:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:59:10.283079 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(9b8199322dbdd7e56ac26b101c3460481a2b50eee59a1a614751fb2ce624b97c): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 19:59:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:59:10.283137 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(9b8199322dbdd7e56ac26b101c3460481a2b50eee59a1a614751fb2ce624b97c): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 19:59:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:59:14.217099 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz"
Feb 23 19:59:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:14.217445382Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=215bfb76-58db-4358-b8ff-5c03b5c0db13 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:59:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:14.217505837Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 19:59:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:14.223268312Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:712ad076157cf1b485af57a68b57f70f7cb95ebe9abd299ce794a6b091fff3f3 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/81670747-15fd-430b-9a84-4dd2cc34b4c3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:59:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:14.223297505Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:59:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:59:14.872662 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 19:59:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:59:14.872723 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 19:59:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:59:20.217548 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-657v4"
Feb 23 19:59:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:20.217995145Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=6a696f00-62c4-4de8-9c95-e4fac5e44820 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:59:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:20.218074254Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 19:59:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:20.224300199Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:0fe9f1da049cecaeb3882ca4b9a1dcf9e0121584ce9aca859c22c782333f3352 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/3d93dfd5-eba2-4456-8f92-a1a23c90d1c0 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:59:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:20.224328196Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:59:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:59:21.217137 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 19:59:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:21.217546355Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=9ba7526c-cf7e-45e5-b9e7-07e70af0b112 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:59:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:21.217629437Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 19:59:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:21.222643981Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:e99af7d12964952381631ccf60720f6dec960c61cc48e608d6d043c8da665b0a UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/161a3492-8159-457d-a475-be7891c690e0 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:59:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:21.222670205Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:59:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:59:24.872612 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 19:59:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:59:24.872678 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 19:59:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:59:26.292123 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:59:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:59:26.292443 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:59:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:59:26.292737 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:59:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:59:26.292777 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 19:59:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:59:34.872320 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 19:59:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:59:34.872369 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 19:59:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:59:34.872399 2199 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7"
Feb 23 19:59:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:59:34.872876 2199 kuberuntime_manager.go:659] "Message for Container of pod" containerName="csi-driver" containerStatusID={Type:cri-o ID:fbeddbbb8be668d9a8b25933d2b1100f960a28a7cf2959aeda173dc5a4fe0543} pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" containerMessage="Container csi-driver failed liveness probe, will be restarted"
Feb 23 19:59:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:59:34.873031 2199 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" containerID="cri-o://fbeddbbb8be668d9a8b25933d2b1100f960a28a7cf2959aeda173dc5a4fe0543" gracePeriod=30
Feb 23 19:59:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:34.873279178Z" level=info msg="Stopping container: fbeddbbb8be668d9a8b25933d2b1100f960a28a7cf2959aeda173dc5a4fe0543 (timeout: 30s)" id=0a9118b2-f639-4480-bcea-a8bd70eeb621 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 19:59:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:38.633967618Z" level=warning msg="Failed to find container exit file for fbeddbbb8be668d9a8b25933d2b1100f960a28a7cf2959aeda173dc5a4fe0543: timed out waiting for the condition" id=0a9118b2-f639-4480-bcea-a8bd70eeb621 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 19:59:38 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-3d07e374df6a26ce8afdad78fba66ce0d97fe401bfb79629408b8cda62437d4f-merged.mount: Deactivated successfully.
Feb 23 19:59:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:40.235910775Z" level=info msg="NetworkStart: stopping network for sandbox 8655287285e9348fe4a7ec370a128695921e8ed38f2a94eb850db91fa77dc97f" id=68e0f729-7061-4b22-ab81-d04188f7685a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:59:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:40.236017882Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:8655287285e9348fe4a7ec370a128695921e8ed38f2a94eb850db91fa77dc97f UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/dde3fb78-9c1b-4cb1-bf45-529094795cf8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:59:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:40.236051366Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 19:59:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:40.236061649Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 19:59:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:40.236067623Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:59:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:41.244278010Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(84626585c61dcad9da47c002e5f4894725e0ce3f1d986e880915ed01e2c39a39): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=dc822ed2-50c4-42f0-a884-46ae7184436d name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:59:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:41.244326267Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 84626585c61dcad9da47c002e5f4894725e0ce3f1d986e880915ed01e2c39a39" id=dc822ed2-50c4-42f0-a884-46ae7184436d name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:59:41 ip-10-0-136-68 systemd[1]: run-utsns-321e87fc\x2dceaf\x2d4931\x2da65e\x2d58c331aaed1b.mount: Deactivated successfully.
Feb 23 19:59:41 ip-10-0-136-68 systemd[1]: run-ipcns-321e87fc\x2dceaf\x2d4931\x2da65e\x2d58c331aaed1b.mount: Deactivated successfully.
Feb 23 19:59:41 ip-10-0-136-68 systemd[1]: run-netns-321e87fc\x2dceaf\x2d4931\x2da65e\x2d58c331aaed1b.mount: Deactivated successfully.
Feb 23 19:59:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:41.271327122Z" level=info msg="runSandbox: deleting pod ID 84626585c61dcad9da47c002e5f4894725e0ce3f1d986e880915ed01e2c39a39 from idIndex" id=dc822ed2-50c4-42f0-a884-46ae7184436d name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:59:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:41.271369695Z" level=info msg="runSandbox: removing pod sandbox 84626585c61dcad9da47c002e5f4894725e0ce3f1d986e880915ed01e2c39a39" id=dc822ed2-50c4-42f0-a884-46ae7184436d name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:59:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:41.271418462Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 84626585c61dcad9da47c002e5f4894725e0ce3f1d986e880915ed01e2c39a39" id=dc822ed2-50c4-42f0-a884-46ae7184436d name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:59:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:41.271438656Z" level=info msg="runSandbox: unmounting shmPath for sandbox 84626585c61dcad9da47c002e5f4894725e0ce3f1d986e880915ed01e2c39a39" id=dc822ed2-50c4-42f0-a884-46ae7184436d name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:59:41 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-84626585c61dcad9da47c002e5f4894725e0ce3f1d986e880915ed01e2c39a39-userdata-shm.mount: Deactivated successfully.
Feb 23 19:59:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:41.278306946Z" level=info msg="runSandbox: removing pod sandbox from storage: 84626585c61dcad9da47c002e5f4894725e0ce3f1d986e880915ed01e2c39a39" id=dc822ed2-50c4-42f0-a884-46ae7184436d name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:59:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:41.279736265Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=dc822ed2-50c4-42f0-a884-46ae7184436d name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:59:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:41.279763786Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=dc822ed2-50c4-42f0-a884-46ae7184436d name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:59:41 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:59:41.280094 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(84626585c61dcad9da47c002e5f4894725e0ce3f1d986e880915ed01e2c39a39): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 19:59:41 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:59:41.280149 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(84626585c61dcad9da47c002e5f4894725e0ce3f1d986e880915ed01e2c39a39): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 19:59:41 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:59:41.280171 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(84626585c61dcad9da47c002e5f4894725e0ce3f1d986e880915ed01e2c39a39): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 19:59:41 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:59:41.280233 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(84626585c61dcad9da47c002e5f4894725e0ce3f1d986e880915ed01e2c39a39): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1
Feb 23 19:59:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:42.412099421Z" level=warning msg="Failed to find container exit file for fbeddbbb8be668d9a8b25933d2b1100f960a28a7cf2959aeda173dc5a4fe0543: timed out waiting for the condition" id=0a9118b2-f639-4480-bcea-a8bd70eeb621 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 19:59:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:42.413852848Z" level=info msg="Stopped container fbeddbbb8be668d9a8b25933d2b1100f960a28a7cf2959aeda173dc5a4fe0543: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=0a9118b2-f639-4480-bcea-a8bd70eeb621 name=/runtime.v1.RuntimeService/StopContainer
Feb 23 19:59:42 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:59:42.414394 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 19:59:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:42.996793830Z" level=warning msg="Failed to find container exit file for fbeddbbb8be668d9a8b25933d2b1100f960a28a7cf2959aeda173dc5a4fe0543: timed out waiting for the condition" id=3e2bf7ce-ec11-4f82-87f8-aff61fa00525 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 19:59:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:46.746901980Z" level=warning msg="Failed to find container exit file for c6bda21faaa445a57f899f904f1023e95b489765b2f119c987c014e7fa428723: timed out waiting for the condition" id=035cf973-936c-429c-aa95-a5efd2c3d972 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 19:59:46 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:59:46.747812 2199 generic.go:332] "Generic (PLEG): container finished" podID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerID="fbeddbbb8be668d9a8b25933d2b1100f960a28a7cf2959aeda173dc5a4fe0543" exitCode=-1
Feb 23 19:59:46 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:59:46.747846 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerDied Data:fbeddbbb8be668d9a8b25933d2b1100f960a28a7cf2959aeda173dc5a4fe0543}
Feb 23 19:59:46 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:59:46.747874 2199 scope.go:115] "RemoveContainer" containerID="c6bda21faaa445a57f899f904f1023e95b489765b2f119c987c014e7fa428723"
Feb 23 19:59:47 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:59:47.749376 2199 scope.go:115] "RemoveContainer" containerID="fbeddbbb8be668d9a8b25933d2b1100f960a28a7cf2959aeda173dc5a4fe0543"
Feb 23 19:59:47 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:59:47.749764 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 19:59:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:50.508235212Z" level=warning msg="Failed to find container exit file for c6bda21faaa445a57f899f904f1023e95b489765b2f119c987c014e7fa428723: timed out waiting for the condition" id=ddd7211e-8129-4a20-981e-b41dc6935f9e name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 19:59:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:59:53.217716 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:59:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:59:53.218005 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:59:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:59:53.218320 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:59:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:59:53.218366 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 19:59:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:54.269058501Z" level=warning msg="Failed to find container exit file for c6bda21faaa445a57f899f904f1023e95b489765b2f119c987c014e7fa428723: timed out waiting for the condition" id=c2781329-0e46-4b13-ad96-5ea53ca1465a name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 19:59:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:54.269541869Z" level=info msg="Removing container: c6bda21faaa445a57f899f904f1023e95b489765b2f119c987c014e7fa428723" id=4e89fcfb-7d40-4ba7-9fe3-ae849e717a55 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 19:59:56 ip-10-0-136-68 kubenswrapper[2199]: I0223 19:59:56.217160 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 19:59:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:56.217644723Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=accc4eb8-0085-4c71-b636-47a759e393a7 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:59:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:56.217715389Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 19:59:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:56.223598776Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:1b1807e1323a2720d1b191ae653d1c24ccc46691e3ac6fc1ceb9b1a46296240a UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/86181247-ba3e-4540-a2e3-7b1491a6a523 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:59:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:56.223633287Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 19:59:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:59:56.292705 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:59:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:59:56.293010 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:59:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:59:56.293272 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 19:59:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 19:59:56.293297 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 19:59:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:58.019307364Z" level=warning msg="Failed to find container exit file for c6bda21faaa445a57f899f904f1023e95b489765b2f119c987c014e7fa428723: timed out waiting for the condition" id=4e89fcfb-7d40-4ba7-9fe3-ae849e717a55 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 19:59:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:58.032312321Z" level=info msg="Removed container c6bda21faaa445a57f899f904f1023e95b489765b2f119c987c014e7fa428723: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=4e89fcfb-7d40-4ba7-9fe3-ae849e717a55 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 19:59:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:59.234649202Z" level=info msg="NetworkStart: stopping network for sandbox 712ad076157cf1b485af57a68b57f70f7cb95ebe9abd299ce794a6b091fff3f3" id=215bfb76-58db-4358-b8ff-5c03b5c0db13 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 19:59:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:59.234789598Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:712ad076157cf1b485af57a68b57f70f7cb95ebe9abd299ce794a6b091fff3f3 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/81670747-15fd-430b-9a84-4dd2cc34b4c3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 19:59:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:59.234829052Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 19:59:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:59.234842274Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 19:59:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 19:59:59.234852100Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 20:00:00 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:00:00.217433 2199 scope.go:115] "RemoveContainer" containerID="fbeddbbb8be668d9a8b25933d2b1100f960a28a7cf2959aeda173dc5a4fe0543"
Feb 23 20:00:00 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:00:00.218036 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 20:00:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:02.527964168Z" level=warning msg="Failed to find container exit file for fbeddbbb8be668d9a8b25933d2b1100f960a28a7cf2959aeda173dc5a4fe0543: timed out waiting for the condition" id=8f0f2751-db31-427e-9f04-e0db4909a662 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 20:00:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:05.235691873Z" level=info msg="NetworkStart: stopping network for sandbox 0fe9f1da049cecaeb3882ca4b9a1dcf9e0121584ce9aca859c22c782333f3352" id=6a696f00-62c4-4de8-9c95-e4fac5e44820 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:00:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:05.235830709Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:0fe9f1da049cecaeb3882ca4b9a1dcf9e0121584ce9aca859c22c782333f3352 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/3d93dfd5-eba2-4456-8f92-a1a23c90d1c0 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 20:00:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:05.235870297Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 20:00:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:05.235882942Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 20:00:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:05.235895412Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 20:00:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:06.234282604Z" level=info msg="NetworkStart: stopping network for sandbox e99af7d12964952381631ccf60720f6dec960c61cc48e608d6d043c8da665b0a" id=9ba7526c-cf7e-45e5-b9e7-07e70af0b112 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:00:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:06.234404268Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:e99af7d12964952381631ccf60720f6dec960c61cc48e608d6d043c8da665b0a UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/161a3492-8159-457d-a475-be7891c690e0 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 20:00:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:06.234437544Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 20:00:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:06.234446259Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 20:00:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:06.234452705Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 20:00:15 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:00:15.216774 2199 scope.go:115] "RemoveContainer" containerID="fbeddbbb8be668d9a8b25933d2b1100f960a28a7cf2959aeda173dc5a4fe0543"
Feb 23 20:00:15 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:00:15.217161 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 20:00:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:20.254775421Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3" id=82b1ecf5-3fc1-448e-8951-fbd3c454ca6f name=/runtime.v1.ImageService/ImageStatus
Feb 23 20:00:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:20.254986927Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:51cb7555d0a960fc0daae74a5b5c47c19fd1ae7bd31e2616ebc88f696f511e4d,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3],Size_:351828271,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=82b1ecf5-3fc1-448e-8951-fbd3c454ca6f name=/runtime.v1.ImageService/ImageStatus
Feb 23 20:00:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:25.245935779Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(8655287285e9348fe4a7ec370a128695921e8ed38f2a94eb850db91fa77dc97f): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=68e0f729-7061-4b22-ab81-d04188f7685a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:00:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:25.245985899Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 8655287285e9348fe4a7ec370a128695921e8ed38f2a94eb850db91fa77dc97f" id=68e0f729-7061-4b22-ab81-d04188f7685a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:00:25 ip-10-0-136-68 systemd[1]: run-utsns-dde3fb78\x2d9c1b\x2d4cb1\x2dbf45\x2d529094795cf8.mount: Deactivated successfully.
Feb 23 20:00:25 ip-10-0-136-68 systemd[1]: run-ipcns-dde3fb78\x2d9c1b\x2d4cb1\x2dbf45\x2d529094795cf8.mount: Deactivated successfully.
Feb 23 20:00:25 ip-10-0-136-68 systemd[1]: run-netns-dde3fb78\x2d9c1b\x2d4cb1\x2dbf45\x2d529094795cf8.mount: Deactivated successfully.
Feb 23 20:00:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:25.264351018Z" level=info msg="runSandbox: deleting pod ID 8655287285e9348fe4a7ec370a128695921e8ed38f2a94eb850db91fa77dc97f from idIndex" id=68e0f729-7061-4b22-ab81-d04188f7685a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:00:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:25.264392732Z" level=info msg="runSandbox: removing pod sandbox 8655287285e9348fe4a7ec370a128695921e8ed38f2a94eb850db91fa77dc97f" id=68e0f729-7061-4b22-ab81-d04188f7685a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:00:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:25.264437547Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 8655287285e9348fe4a7ec370a128695921e8ed38f2a94eb850db91fa77dc97f" id=68e0f729-7061-4b22-ab81-d04188f7685a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:00:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:25.264452711Z" level=info msg="runSandbox: unmounting shmPath for sandbox 8655287285e9348fe4a7ec370a128695921e8ed38f2a94eb850db91fa77dc97f" id=68e0f729-7061-4b22-ab81-d04188f7685a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:00:25 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-8655287285e9348fe4a7ec370a128695921e8ed38f2a94eb850db91fa77dc97f-userdata-shm.mount: Deactivated successfully.
Feb 23 20:00:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:25.269307840Z" level=info msg="runSandbox: removing pod sandbox from storage: 8655287285e9348fe4a7ec370a128695921e8ed38f2a94eb850db91fa77dc97f" id=68e0f729-7061-4b22-ab81-d04188f7685a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:00:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:25.270864387Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=68e0f729-7061-4b22-ab81-d04188f7685a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:00:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:25.270893440Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=68e0f729-7061-4b22-ab81-d04188f7685a name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:00:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:00:25.271131 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(8655287285e9348fe4a7ec370a128695921e8ed38f2a94eb850db91fa77dc97f): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" Feb 23 20:00:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:00:25.271190 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(8655287285e9348fe4a7ec370a128695921e8ed38f2a94eb850db91fa77dc97f): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 20:00:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:00:25.271217 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(8655287285e9348fe4a7ec370a128695921e8ed38f2a94eb850db91fa77dc97f): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 20:00:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:00:25.271318 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(8655287285e9348fe4a7ec370a128695921e8ed38f2a94eb850db91fa77dc97f): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 20:00:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:00:26.292605 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:00:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:00:26.292887 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:00:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:00:26.293098 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:00:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:00:26.293126 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 20:00:27 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:00:27.217146 2199 scope.go:115] "RemoveContainer" containerID="fbeddbbb8be668d9a8b25933d2b1100f960a28a7cf2959aeda173dc5a4fe0543" Feb 23 20:00:27 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:00:27.217644 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 20:00:39 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:00:39.216372 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 20:00:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:39.216741501Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=c7390619-438d-4f53-8968-fcb3a113d46d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:00:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:39.216795926Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 20:00:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:39.221826925Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:112b4cc6e41c0dd51636c3dfa9f38f9b80570dfdeafbc2dd5c5a1d686c130e15 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/51c7be9f-fa19-495d-a0aa-4cb0f17f2ad7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:00:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:39.221854575Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:00:40 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:00:40.217896 2199 scope.go:115] "RemoveContainer" containerID="fbeddbbb8be668d9a8b25933d2b1100f960a28a7cf2959aeda173dc5a4fe0543" Feb 23 20:00:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:00:40.218493 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 20:00:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:41.236217458Z" level=info msg="NetworkStart: stopping network for sandbox 
1b1807e1323a2720d1b191ae653d1c24ccc46691e3ac6fc1ceb9b1a46296240a" id=accc4eb8-0085-4c71-b636-47a759e393a7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:00:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:41.236369728Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:1b1807e1323a2720d1b191ae653d1c24ccc46691e3ac6fc1ceb9b1a46296240a UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/86181247-ba3e-4540-a2e3-7b1491a6a523 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:00:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:41.236401632Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 20:00:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:41.236409938Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 20:00:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:41.236417004Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:00:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:44.243751111Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(712ad076157cf1b485af57a68b57f70f7cb95ebe9abd299ce794a6b091fff3f3): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=215bfb76-58db-4358-b8ff-5c03b5c0db13 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:00:44 
ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:44.243801492Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 712ad076157cf1b485af57a68b57f70f7cb95ebe9abd299ce794a6b091fff3f3" id=215bfb76-58db-4358-b8ff-5c03b5c0db13 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:00:44 ip-10-0-136-68 systemd[1]: run-utsns-81670747\x2d15fd\x2d430b\x2d9a84\x2d4dd2cc34b4c3.mount: Deactivated successfully. Feb 23 20:00:44 ip-10-0-136-68 systemd[1]: run-ipcns-81670747\x2d15fd\x2d430b\x2d9a84\x2d4dd2cc34b4c3.mount: Deactivated successfully. Feb 23 20:00:44 ip-10-0-136-68 systemd[1]: run-netns-81670747\x2d15fd\x2d430b\x2d9a84\x2d4dd2cc34b4c3.mount: Deactivated successfully. Feb 23 20:00:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:44.272333202Z" level=info msg="runSandbox: deleting pod ID 712ad076157cf1b485af57a68b57f70f7cb95ebe9abd299ce794a6b091fff3f3 from idIndex" id=215bfb76-58db-4358-b8ff-5c03b5c0db13 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:00:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:44.272368592Z" level=info msg="runSandbox: removing pod sandbox 712ad076157cf1b485af57a68b57f70f7cb95ebe9abd299ce794a6b091fff3f3" id=215bfb76-58db-4358-b8ff-5c03b5c0db13 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:00:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:44.272397274Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 712ad076157cf1b485af57a68b57f70f7cb95ebe9abd299ce794a6b091fff3f3" id=215bfb76-58db-4358-b8ff-5c03b5c0db13 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:00:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:44.272422627Z" level=info msg="runSandbox: unmounting shmPath for sandbox 712ad076157cf1b485af57a68b57f70f7cb95ebe9abd299ce794a6b091fff3f3" id=215bfb76-58db-4358-b8ff-5c03b5c0db13 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:00:44 ip-10-0-136-68 systemd[1]: 
run-containers-storage-overlay\x2dcontainers-712ad076157cf1b485af57a68b57f70f7cb95ebe9abd299ce794a6b091fff3f3-userdata-shm.mount: Deactivated successfully. Feb 23 20:00:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:44.279301976Z" level=info msg="runSandbox: removing pod sandbox from storage: 712ad076157cf1b485af57a68b57f70f7cb95ebe9abd299ce794a6b091fff3f3" id=215bfb76-58db-4358-b8ff-5c03b5c0db13 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:00:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:44.280840934Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=215bfb76-58db-4358-b8ff-5c03b5c0db13 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:00:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:44.280868884Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=215bfb76-58db-4358-b8ff-5c03b5c0db13 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:00:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:00:44.281050 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(712ad076157cf1b485af57a68b57f70f7cb95ebe9abd299ce794a6b091fff3f3): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 20:00:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:00:44.281099 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(712ad076157cf1b485af57a68b57f70f7cb95ebe9abd299ce794a6b091fff3f3): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 20:00:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:00:44.281124 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(712ad076157cf1b485af57a68b57f70f7cb95ebe9abd299ce794a6b091fff3f3): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 20:00:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:00:44.281178 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(712ad076157cf1b485af57a68b57f70f7cb95ebe9abd299ce794a6b091fff3f3): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 20:00:45 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:00:45.315097 2199 log.go:198] http: TLS handshake error from 10.0.216.117:55702: EOF Feb 23 20:00:45 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:00:45.536454 2199 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-hstcm_0268b68d-53b2-454a-a03b-37bd38d269bc/dns-node-resolver/1.log" Feb 23 20:00:48 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:00:48.084885 2199 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-wsg6f_bd2da6fb-b383-40fe-a3ad-b6436a02985b/node-ca/1.log" Feb 23 20:00:48 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:00:48.978463 2199 log.go:198] http: TLS handshake error from 10.0.216.117:55718: EOF Feb 23 20:00:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:50.246107790Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(0fe9f1da049cecaeb3882ca4b9a1dcf9e0121584ce9aca859c22c782333f3352): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=6a696f00-62c4-4de8-9c95-e4fac5e44820 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:00:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:50.246156794Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 0fe9f1da049cecaeb3882ca4b9a1dcf9e0121584ce9aca859c22c782333f3352" id=6a696f00-62c4-4de8-9c95-e4fac5e44820 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:00:50 
ip-10-0-136-68 systemd[1]: run-utsns-3d93dfd5\x2deba2\x2d4456\x2d8f92\x2da1a23c90d1c0.mount: Deactivated successfully. Feb 23 20:00:50 ip-10-0-136-68 systemd[1]: run-ipcns-3d93dfd5\x2deba2\x2d4456\x2d8f92\x2da1a23c90d1c0.mount: Deactivated successfully. Feb 23 20:00:50 ip-10-0-136-68 systemd[1]: run-netns-3d93dfd5\x2deba2\x2d4456\x2d8f92\x2da1a23c90d1c0.mount: Deactivated successfully. Feb 23 20:00:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:50.278337942Z" level=info msg="runSandbox: deleting pod ID 0fe9f1da049cecaeb3882ca4b9a1dcf9e0121584ce9aca859c22c782333f3352 from idIndex" id=6a696f00-62c4-4de8-9c95-e4fac5e44820 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:00:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:50.278382063Z" level=info msg="runSandbox: removing pod sandbox 0fe9f1da049cecaeb3882ca4b9a1dcf9e0121584ce9aca859c22c782333f3352" id=6a696f00-62c4-4de8-9c95-e4fac5e44820 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:00:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:50.278433744Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 0fe9f1da049cecaeb3882ca4b9a1dcf9e0121584ce9aca859c22c782333f3352" id=6a696f00-62c4-4de8-9c95-e4fac5e44820 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:00:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:50.278455186Z" level=info msg="runSandbox: unmounting shmPath for sandbox 0fe9f1da049cecaeb3882ca4b9a1dcf9e0121584ce9aca859c22c782333f3352" id=6a696f00-62c4-4de8-9c95-e4fac5e44820 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:00:50 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-0fe9f1da049cecaeb3882ca4b9a1dcf9e0121584ce9aca859c22c782333f3352-userdata-shm.mount: Deactivated successfully. 
Feb 23 20:00:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:50.284309848Z" level=info msg="runSandbox: removing pod sandbox from storage: 0fe9f1da049cecaeb3882ca4b9a1dcf9e0121584ce9aca859c22c782333f3352" id=6a696f00-62c4-4de8-9c95-e4fac5e44820 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:00:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:50.285913746Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=6a696f00-62c4-4de8-9c95-e4fac5e44820 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:00:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:50.285943367Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=6a696f00-62c4-4de8-9c95-e4fac5e44820 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:00:50 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:00:50.286157 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(0fe9f1da049cecaeb3882ca4b9a1dcf9e0121584ce9aca859c22c782333f3352): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 20:00:50 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:00:50.286218 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(0fe9f1da049cecaeb3882ca4b9a1dcf9e0121584ce9aca859c22c782333f3352): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 20:00:50 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:00:50.286262 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(0fe9f1da049cecaeb3882ca4b9a1dcf9e0121584ce9aca859c22c782333f3352): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 20:00:50 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:00:50.286329 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(0fe9f1da049cecaeb3882ca4b9a1dcf9e0121584ce9aca859c22c782333f3352): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 20:00:50 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:00:50.610174 2199 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-nt8h7_3e3e7655-5c60-4995-9a23-b32843026a6e/init-textfile/1.log" Feb 23 20:00:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:51.243759834Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(e99af7d12964952381631ccf60720f6dec960c61cc48e608d6d043c8da665b0a): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=9ba7526c-cf7e-45e5-b9e7-07e70af0b112 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:00:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:51.243811344Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e99af7d12964952381631ccf60720f6dec960c61cc48e608d6d043c8da665b0a" id=9ba7526c-cf7e-45e5-b9e7-07e70af0b112 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:00:51 ip-10-0-136-68 systemd[1]: run-utsns-161a3492\x2d8159\x2d457d\x2da475\x2dbe7891c690e0.mount: Deactivated successfully. Feb 23 20:00:51 ip-10-0-136-68 systemd[1]: run-ipcns-161a3492\x2d8159\x2d457d\x2da475\x2dbe7891c690e0.mount: Deactivated successfully. Feb 23 20:00:51 ip-10-0-136-68 systemd[1]: run-netns-161a3492\x2d8159\x2d457d\x2da475\x2dbe7891c690e0.mount: Deactivated successfully. 
Feb 23 20:00:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:51.275351632Z" level=info msg="runSandbox: deleting pod ID e99af7d12964952381631ccf60720f6dec960c61cc48e608d6d043c8da665b0a from idIndex" id=9ba7526c-cf7e-45e5-b9e7-07e70af0b112 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:00:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:51.275392077Z" level=info msg="runSandbox: removing pod sandbox e99af7d12964952381631ccf60720f6dec960c61cc48e608d6d043c8da665b0a" id=9ba7526c-cf7e-45e5-b9e7-07e70af0b112 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:00:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:51.275426385Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e99af7d12964952381631ccf60720f6dec960c61cc48e608d6d043c8da665b0a" id=9ba7526c-cf7e-45e5-b9e7-07e70af0b112 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:00:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:51.275439489Z" level=info msg="runSandbox: unmounting shmPath for sandbox e99af7d12964952381631ccf60720f6dec960c61cc48e608d6d043c8da665b0a" id=9ba7526c-cf7e-45e5-b9e7-07e70af0b112 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:00:51 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-e99af7d12964952381631ccf60720f6dec960c61cc48e608d6d043c8da665b0a-userdata-shm.mount: Deactivated successfully. 
Feb 23 20:00:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:51.283339103Z" level=info msg="runSandbox: removing pod sandbox from storage: e99af7d12964952381631ccf60720f6dec960c61cc48e608d6d043c8da665b0a" id=9ba7526c-cf7e-45e5-b9e7-07e70af0b112 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:00:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:51.284837499Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=9ba7526c-cf7e-45e5-b9e7-07e70af0b112 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:00:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:51.284866968Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=9ba7526c-cf7e-45e5-b9e7-07e70af0b112 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:00:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:00:51.285090 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(e99af7d12964952381631ccf60720f6dec960c61cc48e608d6d043c8da665b0a): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 20:00:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:00:51.285152 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(e99af7d12964952381631ccf60720f6dec960c61cc48e608d6d043c8da665b0a): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 20:00:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:00:51.285174 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(e99af7d12964952381631ccf60720f6dec960c61cc48e608d6d043c8da665b0a): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 20:00:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:00:51.285238 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(e99af7d12964952381631ccf60720f6dec960c61cc48e608d6d043c8da665b0a): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 20:00:53 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:00:53.001576 2199 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_tuned-zzwb5_a5ccef55-3f5c-4ffc-82f9-586324e62a37/tuned/1.log" Feb 23 20:00:53 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:00:53.216862 2199 scope.go:115] "RemoveContainer" containerID="fbeddbbb8be668d9a8b25933d2b1100f960a28a7cf2959aeda173dc5a4fe0543" Feb 23 20:00:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:00:53.217342 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 20:00:55 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:00:55.217027 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 20:00:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:55.217506249Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=a0b55718-d306-4eb2-8ac1-727eb9260a2e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:00:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:55.217580280Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 20:00:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:55.222854649Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:f771ee103dfe7ed276ddd1e47c755e03da99505bc3fdf48da05c8e2a4aba61e3 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/fc82ae67-a4b2-4136-807e-804f46734140 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:00:55 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:00:55.222890201Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:00:55 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:00:55.466632 2199 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4f66c_9eb4a126-482c-4458-b901-e2e7a15dfd93/kube-multus/1.log" Feb 23 20:00:55 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:00:55.749317 2199 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-nqwsg_7f25c5a9-b9c7-4220-a892-362cf6b33878/egress-router-binary-copy/1.log" Feb 23 20:00:55 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:00:55.764935 2199 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"678975d87081778a7c85cd96e6752232a79772c550386cea5a1d095e2cf00e3b\": container with ID starting with 
678975d87081778a7c85cd96e6752232a79772c550386cea5a1d095e2cf00e3b not found: ID does not exist" containerID="678975d87081778a7c85cd96e6752232a79772c550386cea5a1d095e2cf00e3b" Feb 23 20:00:55 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:00:55.764988 2199 log.go:198] http: superfluous response.WriteHeader call from github.com/emicklei/go-restful/v3.(*Response).WriteHeader (response.go:221) Feb 23 20:00:55 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:00:55.779755 2199 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d52f324dfc8a092c21054114fedb54e965dfcdc30bfc24a7458576f7e7fd0c3c\": container with ID starting with d52f324dfc8a092c21054114fedb54e965dfcdc30bfc24a7458576f7e7fd0c3c not found: ID does not exist" containerID="d52f324dfc8a092c21054114fedb54e965dfcdc30bfc24a7458576f7e7fd0c3c" Feb 23 20:00:55 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:00:55.779794 2199 log.go:198] http: superfluous response.WriteHeader call from github.com/emicklei/go-restful/v3.(*Response).WriteHeader (response.go:221) Feb 23 20:00:55 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:00:55.794849 2199 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba9507283288dc55e5e4de848809bcfb93392e83daf8b4dccde024e43dfa5e19\": container with ID starting with ba9507283288dc55e5e4de848809bcfb93392e83daf8b4dccde024e43dfa5e19 not found: ID does not exist" containerID="ba9507283288dc55e5e4de848809bcfb93392e83daf8b4dccde024e43dfa5e19" Feb 23 20:00:55 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:00:55.794885 2199 log.go:198] http: superfluous response.WriteHeader call from github.com/emicklei/go-restful/v3.(*Response).WriteHeader (response.go:221) Feb 23 20:00:55 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:00:55.810617 2199 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"cbd5c705b58abf755af4100a7838f6612b48c3cc2bd31a0c142d0356c420e8c4\": container with ID starting with cbd5c705b58abf755af4100a7838f6612b48c3cc2bd31a0c142d0356c420e8c4 not found: ID does not exist" containerID="cbd5c705b58abf755af4100a7838f6612b48c3cc2bd31a0c142d0356c420e8c4" Feb 23 20:00:55 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:00:55.810653 2199 log.go:198] http: superfluous response.WriteHeader call from github.com/emicklei/go-restful/v3.(*Response).WriteHeader (response.go:221) Feb 23 20:00:55 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:00:55.827229 2199 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5324dadf3de087b7ec951299f8ab41197b9d82a42a278f72a9b6f9ed0455d8ef\": container with ID starting with 5324dadf3de087b7ec951299f8ab41197b9d82a42a278f72a9b6f9ed0455d8ef not found: ID does not exist" containerID="5324dadf3de087b7ec951299f8ab41197b9d82a42a278f72a9b6f9ed0455d8ef" Feb 23 20:00:55 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:00:55.827285 2199 log.go:198] http: superfluous response.WriteHeader call from github.com/emicklei/go-restful/v3.(*Response).WriteHeader (response.go:221) Feb 23 20:00:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:00:56.291718 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:00:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:00:56.292007 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:00:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:00:56.292240 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:00:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:00:56.292300 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 20:00:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:00:59.217012 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:00:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:00:59.217362 2199 remote_runtime.go:479] "ExecSync cmd from runtime 
service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:00:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:00:59.217678 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:00:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:00:59.217704 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 20:01:01 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:01:01.217040 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 20:01:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:01:01.217466328Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=b6d62797-f2e1-460e-83f8-414438505882 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:01:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:01:01.217536381Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 20:01:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:01:01.222989317Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:012374effbb3698ecd4b85279bd83717ee4b69f5529b916cf81b89f24ff4eb28 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/43f65e5b-cd5f-40a3-a459-9c3831632ca7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:01:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:01:01.223026035Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:01:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:01:01.575303446Z" level=warning msg="Failed to find container exit file for fbeddbbb8be668d9a8b25933d2b1100f960a28a7cf2959aeda173dc5a4fe0543: timed out waiting for the condition" id=4945b252-380f-4349-a43e-f5e06df34a5d name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 20:01:01 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:01:01.575689 2199 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-cluster-csi-drivers_aws-ebs-csi-driver-node-ncxb7_0976617f-18ed-4a73-a7d8-ac54cf69ab93/csi-driver/43.log" Feb 23 20:01:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:01:01.602351462Z" level=warning msg="Failed to find container exit file for fbeddbbb8be668d9a8b25933d2b1100f960a28a7cf2959aeda173dc5a4fe0543: timed out waiting for the condition" id=47a217cf-7a4f-4f24-8351-a3dbffded579 
name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 20:01:01 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:01:01.602713 2199 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-cluster-csi-drivers_aws-ebs-csi-driver-node-ncxb7_0976617f-18ed-4a73-a7d8-ac54cf69ab93/csi-driver/43.log" Feb 23 20:01:01 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:01:01.615117 2199 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-cluster-csi-drivers_aws-ebs-csi-driver-node-ncxb7_0976617f-18ed-4a73-a7d8-ac54cf69ab93/csi-node-driver-registrar/1.log" Feb 23 20:01:01 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:01:01.627982 2199 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-cluster-csi-drivers_aws-ebs-csi-driver-node-ncxb7_0976617f-18ed-4a73-a7d8-ac54cf69ab93/csi-liveness-probe/1.log" Feb 23 20:01:03 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:01:03.216772 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 20:01:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:01:03.217202942Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=d1b034ba-d4ce-48e4-8e17-cd362ea712a7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:01:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:01:03.217286022Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 20:01:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:01:03.223464277Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:2687e40b94d6d7c289a3c91d03fc5bd60b8140dc25a697cd4d7786ffe3553c0b UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/418a63e4-de79-4cca-9af4-e094d9cb6d9c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" 
Feb 23 20:01:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:01:03.223504053Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:01:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:01:04.217409 2199 scope.go:115] "RemoveContainer" containerID="fbeddbbb8be668d9a8b25933d2b1100f960a28a7cf2959aeda173dc5a4fe0543" Feb 23 20:01:04 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:01:04.218063 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 20:01:08 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:01:08.530116 2199 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-daemon-2fx68_ff7777c7-a1dc-413e-8da1-c4ba07527037/machine-config-daemon/1.log" Feb 23 20:01:08 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:01:08.546405 2199 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-daemon-2fx68_ff7777c7-a1dc-413e-8da1-c4ba07527037/oauth-proxy/1.log" Feb 23 20:01:13 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:01:13.350653 2199 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gzbrl_7da00340-9715-48ac-b144-4705de276bf5/ovn-controller/3.log" Feb 23 20:01:15 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:01:15.216350 2199 scope.go:115] "RemoveContainer" containerID="fbeddbbb8be668d9a8b25933d2b1100f960a28a7cf2959aeda173dc5a4fe0543" Feb 23 20:01:15 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:01:15.216886 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 20:01:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:01:17.109997361Z" level=warning msg="Failed to find container exit file for b4f57cb23a798e177545e914f41a13b9bb35feb557432989cdce559214c2ecfa: timed out waiting for the condition" id=42ae9132-b2c3-4af8-a389-a23e7747f529 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 20:01:17 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:01:17.127990 2199 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gzbrl_7da00340-9715-48ac-b144-4705de276bf5/ovn-controller/2.log" Feb 23 20:01:17 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:01:17.140305 2199 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gzbrl_7da00340-9715-48ac-b144-4705de276bf5/ovn-acl-logging/1.log" Feb 23 20:01:17 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:01:17.154112 2199 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gzbrl_7da00340-9715-48ac-b144-4705de276bf5/kube-rbac-proxy/1.log" Feb 23 20:01:17 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:01:17.169285 2199 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gzbrl_7da00340-9715-48ac-b144-4705de276bf5/kube-rbac-proxy-ovn-metrics/1.log" Feb 23 20:01:17 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:01:17.189841 2199 logs.go:323] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gzbrl_7da00340-9715-48ac-b144-4705de276bf5/ovnkube-node/1.log" Feb 23 20:01:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:01:24.234001104Z" level=info msg="NetworkStart: 
stopping network for sandbox 112b4cc6e41c0dd51636c3dfa9f38f9b80570dfdeafbc2dd5c5a1d686c130e15" id=c7390619-438d-4f53-8968-fcb3a113d46d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:01:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:01:24.234125220Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:112b4cc6e41c0dd51636c3dfa9f38f9b80570dfdeafbc2dd5c5a1d686c130e15 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/51c7be9f-fa19-495d-a0aa-4cb0f17f2ad7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:01:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:01:24.234186843Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 20:01:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:01:24.234197991Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 20:01:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:01:24.234207488Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:01:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:01:26.247000129Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(1b1807e1323a2720d1b191ae653d1c24ccc46691e3ac6fc1ceb9b1a46296240a): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=accc4eb8-0085-4c71-b636-47a759e393a7 
name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:01:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:01:26.247049961Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 1b1807e1323a2720d1b191ae653d1c24ccc46691e3ac6fc1ceb9b1a46296240a" id=accc4eb8-0085-4c71-b636-47a759e393a7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:01:26 ip-10-0-136-68 systemd[1]: run-utsns-86181247\x2dba3e\x2d4540\x2da2e3\x2d7b1491a6a523.mount: Deactivated successfully. Feb 23 20:01:26 ip-10-0-136-68 systemd[1]: run-ipcns-86181247\x2dba3e\x2d4540\x2da2e3\x2d7b1491a6a523.mount: Deactivated successfully. Feb 23 20:01:26 ip-10-0-136-68 systemd[1]: run-netns-86181247\x2dba3e\x2d4540\x2da2e3\x2d7b1491a6a523.mount: Deactivated successfully. Feb 23 20:01:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:01:26.267344730Z" level=info msg="runSandbox: deleting pod ID 1b1807e1323a2720d1b191ae653d1c24ccc46691e3ac6fc1ceb9b1a46296240a from idIndex" id=accc4eb8-0085-4c71-b636-47a759e393a7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:01:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:01:26.267387085Z" level=info msg="runSandbox: removing pod sandbox 1b1807e1323a2720d1b191ae653d1c24ccc46691e3ac6fc1ceb9b1a46296240a" id=accc4eb8-0085-4c71-b636-47a759e393a7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:01:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:01:26.267433872Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 1b1807e1323a2720d1b191ae653d1c24ccc46691e3ac6fc1ceb9b1a46296240a" id=accc4eb8-0085-4c71-b636-47a759e393a7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:01:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:01:26.267446933Z" level=info msg="runSandbox: unmounting shmPath for sandbox 1b1807e1323a2720d1b191ae653d1c24ccc46691e3ac6fc1ceb9b1a46296240a" id=accc4eb8-0085-4c71-b636-47a759e393a7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:01:26 ip-10-0-136-68 systemd[1]: 
run-containers-storage-overlay\x2dcontainers-1b1807e1323a2720d1b191ae653d1c24ccc46691e3ac6fc1ceb9b1a46296240a-userdata-shm.mount: Deactivated successfully. Feb 23 20:01:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:01:26.272302581Z" level=info msg="runSandbox: removing pod sandbox from storage: 1b1807e1323a2720d1b191ae653d1c24ccc46691e3ac6fc1ceb9b1a46296240a" id=accc4eb8-0085-4c71-b636-47a759e393a7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:01:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:01:26.273894043Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=accc4eb8-0085-4c71-b636-47a759e393a7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:01:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:01:26.273923493Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=accc4eb8-0085-4c71-b636-47a759e393a7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:01:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:01:26.274146 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(1b1807e1323a2720d1b191ae653d1c24ccc46691e3ac6fc1ceb9b1a46296240a): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 20:01:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:01:26.274218 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(1b1807e1323a2720d1b191ae653d1c24ccc46691e3ac6fc1ceb9b1a46296240a): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 20:01:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:01:26.274290 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(1b1807e1323a2720d1b191ae653d1c24ccc46691e3ac6fc1ceb9b1a46296240a): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 20:01:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:01:26.274383 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(1b1807e1323a2720d1b191ae653d1c24ccc46691e3ac6fc1ceb9b1a46296240a): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 20:01:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:01:26.292666 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:01:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:01:26.292927 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:01:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:01:26.293170 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:01:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:01:26.293201 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 20:01:29 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:01:29.217370 2199 scope.go:115] "RemoveContainer" containerID="fbeddbbb8be668d9a8b25933d2b1100f960a28a7cf2959aeda173dc5a4fe0543" Feb 23 20:01:29 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:01:29.217767 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 20:01:37 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:01:37.216630 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 20:01:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:01:37.217056000Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=0590ca8c-169e-4923-ad03-f795d244508b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:01:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:01:37.217126485Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 20:01:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:01:37.222492653Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:b29859b4592d6e3a8ee69a066edff99f766b51c428b6183af0db207d1b74922d UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/5817805e-5fda-4b2e-85ea-2e4b46c8b0f5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:01:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:01:37.222528441Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:01:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:01:40.234227003Z" level=info msg="NetworkStart: stopping network for sandbox f771ee103dfe7ed276ddd1e47c755e03da99505bc3fdf48da05c8e2a4aba61e3" id=a0b55718-d306-4eb2-8ac1-727eb9260a2e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:01:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:01:40.234372572Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:f771ee103dfe7ed276ddd1e47c755e03da99505bc3fdf48da05c8e2a4aba61e3 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/fc82ae67-a4b2-4136-807e-804f46734140 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:01:40 
ip-10-0-136-68 crio[2158]: time="2023-02-23 20:01:40.234415415Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 20:01:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:01:40.234428102Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 20:01:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:01:40.234437701Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:01:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:01:44.217002 2199 scope.go:115] "RemoveContainer" containerID="fbeddbbb8be668d9a8b25933d2b1100f960a28a7cf2959aeda173dc5a4fe0543" Feb 23 20:01:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:01:44.217638 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 20:01:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:01:46.234388885Z" level=info msg="NetworkStart: stopping network for sandbox 012374effbb3698ecd4b85279bd83717ee4b69f5529b916cf81b89f24ff4eb28" id=b6d62797-f2e1-460e-83f8-414438505882 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:01:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:01:46.234517487Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:012374effbb3698ecd4b85279bd83717ee4b69f5529b916cf81b89f24ff4eb28 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/43f65e5b-cd5f-40a3-a459-9c3831632ca7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:01:46 ip-10-0-136-68 crio[2158]: 
time="2023-02-23 20:01:46.234545233Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 20:01:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:01:46.234555917Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 20:01:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:01:46.234562751Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:01:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:01:48.235290744Z" level=info msg="NetworkStart: stopping network for sandbox 2687e40b94d6d7c289a3c91d03fc5bd60b8140dc25a697cd4d7786ffe3553c0b" id=d1b034ba-d4ce-48e4-8e17-cd362ea712a7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:01:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:01:48.235409790Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:2687e40b94d6d7c289a3c91d03fc5bd60b8140dc25a697cd4d7786ffe3553c0b UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/418a63e4-de79-4cca-9af4-e094d9cb6d9c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:01:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:01:48.235449369Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 20:01:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:01:48.235460346Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 20:01:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:01:48.235469619Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:01:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:01:56.292220 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" 
err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:01:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:01:56.292550 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:01:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:01:56.292842 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:01:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:01:56.292869 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 20:01:59 ip-10-0-136-68 
kubenswrapper[2199]: I0223 20:01:59.217126 2199 scope.go:115] "RemoveContainer" containerID="fbeddbbb8be668d9a8b25933d2b1100f960a28a7cf2959aeda173dc5a4fe0543" Feb 23 20:01:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:01:59.217541 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 20:02:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:02:09.243546502Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(112b4cc6e41c0dd51636c3dfa9f38f9b80570dfdeafbc2dd5c5a1d686c130e15): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c7390619-438d-4f53-8968-fcb3a113d46d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:02:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:02:09.243597717Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 112b4cc6e41c0dd51636c3dfa9f38f9b80570dfdeafbc2dd5c5a1d686c130e15" id=c7390619-438d-4f53-8968-fcb3a113d46d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:02:09 ip-10-0-136-68 systemd[1]: run-utsns-51c7be9f\x2dfa19\x2d495d\x2da0aa\x2d4cb0f17f2ad7.mount: Deactivated successfully. 
Feb 23 20:02:09 ip-10-0-136-68 systemd[1]: run-ipcns-51c7be9f\x2dfa19\x2d495d\x2da0aa\x2d4cb0f17f2ad7.mount: Deactivated successfully. Feb 23 20:02:09 ip-10-0-136-68 systemd[1]: run-netns-51c7be9f\x2dfa19\x2d495d\x2da0aa\x2d4cb0f17f2ad7.mount: Deactivated successfully. Feb 23 20:02:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:02:09.273354924Z" level=info msg="runSandbox: deleting pod ID 112b4cc6e41c0dd51636c3dfa9f38f9b80570dfdeafbc2dd5c5a1d686c130e15 from idIndex" id=c7390619-438d-4f53-8968-fcb3a113d46d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:02:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:02:09.273399884Z" level=info msg="runSandbox: removing pod sandbox 112b4cc6e41c0dd51636c3dfa9f38f9b80570dfdeafbc2dd5c5a1d686c130e15" id=c7390619-438d-4f53-8968-fcb3a113d46d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:02:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:02:09.273447149Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 112b4cc6e41c0dd51636c3dfa9f38f9b80570dfdeafbc2dd5c5a1d686c130e15" id=c7390619-438d-4f53-8968-fcb3a113d46d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:02:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:02:09.273463909Z" level=info msg="runSandbox: unmounting shmPath for sandbox 112b4cc6e41c0dd51636c3dfa9f38f9b80570dfdeafbc2dd5c5a1d686c130e15" id=c7390619-438d-4f53-8968-fcb3a113d46d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:02:09 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-112b4cc6e41c0dd51636c3dfa9f38f9b80570dfdeafbc2dd5c5a1d686c130e15-userdata-shm.mount: Deactivated successfully. 
Feb 23 20:02:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:02:09.280320176Z" level=info msg="runSandbox: removing pod sandbox from storage: 112b4cc6e41c0dd51636c3dfa9f38f9b80570dfdeafbc2dd5c5a1d686c130e15" id=c7390619-438d-4f53-8968-fcb3a113d46d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:02:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:02:09.281907896Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=c7390619-438d-4f53-8968-fcb3a113d46d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:02:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:02:09.281937433Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=c7390619-438d-4f53-8968-fcb3a113d46d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:02:09 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:02:09.282157 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(112b4cc6e41c0dd51636c3dfa9f38f9b80570dfdeafbc2dd5c5a1d686c130e15): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 20:02:09 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:02:09.282229 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(112b4cc6e41c0dd51636c3dfa9f38f9b80570dfdeafbc2dd5c5a1d686c130e15): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 20:02:09 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:02:09.282286 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(112b4cc6e41c0dd51636c3dfa9f38f9b80570dfdeafbc2dd5c5a1d686c130e15): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 20:02:09 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:02:09.282371 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(112b4cc6e41c0dd51636c3dfa9f38f9b80570dfdeafbc2dd5c5a1d686c130e15): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 20:02:11 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:02:11.217339 2199 scope.go:115] "RemoveContainer" containerID="fbeddbbb8be668d9a8b25933d2b1100f960a28a7cf2959aeda173dc5a4fe0543" Feb 23 20:02:11 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:02:11.217807 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 20:02:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:02:22.236532433Z" level=info msg="NetworkStart: stopping network for sandbox b29859b4592d6e3a8ee69a066edff99f766b51c428b6183af0db207d1b74922d" id=0590ca8c-169e-4923-ad03-f795d244508b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:02:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:02:22.236637458Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:b29859b4592d6e3a8ee69a066edff99f766b51c428b6183af0db207d1b74922d UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/5817805e-5fda-4b2e-85ea-2e4b46c8b0f5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:02:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:02:22.236667645Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 20:02:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:02:22.236675794Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 20:02:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 
20:02:22.236683552Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:02:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:02:23.216939 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:02:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:02:23.217561 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:02:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:02:23.217826 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:02:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:02:23.217861 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 20:02:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:02:24.217277 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 20:02:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:02:24.217680620Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=1419a13d-6028-48f8-bbc3-349ee09bf167 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:02:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:02:24.217733896Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 20:02:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:02:24.223395944Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:421e7c28efebad90fb1f9a18c8fc80d57f0b9234b1065d0e4e08ba683d6ca527 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/a9056ad6-6983-4ff7-9251-ed77be66eb48 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:02:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:02:24.223419942Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:02:25 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:02:25.216670 2199 scope.go:115] "RemoveContainer" containerID="fbeddbbb8be668d9a8b25933d2b1100f960a28a7cf2959aeda173dc5a4fe0543" Feb 23 20:02:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:02:25.217077 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 20:02:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:02:25.244582248Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(f771ee103dfe7ed276ddd1e47c755e03da99505bc3fdf48da05c8e2a4aba61e3): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a0b55718-d306-4eb2-8ac1-727eb9260a2e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:02:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:02:25.244626887Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f771ee103dfe7ed276ddd1e47c755e03da99505bc3fdf48da05c8e2a4aba61e3" id=a0b55718-d306-4eb2-8ac1-727eb9260a2e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:02:25 ip-10-0-136-68 systemd[1]: run-utsns-fc82ae67\x2da4b2\x2d4136\x2d807e\x2d804f46734140.mount: Deactivated successfully. Feb 23 20:02:25 ip-10-0-136-68 systemd[1]: run-ipcns-fc82ae67\x2da4b2\x2d4136\x2d807e\x2d804f46734140.mount: Deactivated successfully. Feb 23 20:02:25 ip-10-0-136-68 systemd[1]: run-netns-fc82ae67\x2da4b2\x2d4136\x2d807e\x2d804f46734140.mount: Deactivated successfully. 
Feb 23 20:02:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:02:25.268335395Z" level=info msg="runSandbox: deleting pod ID f771ee103dfe7ed276ddd1e47c755e03da99505bc3fdf48da05c8e2a4aba61e3 from idIndex" id=a0b55718-d306-4eb2-8ac1-727eb9260a2e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:02:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:02:25.268378197Z" level=info msg="runSandbox: removing pod sandbox f771ee103dfe7ed276ddd1e47c755e03da99505bc3fdf48da05c8e2a4aba61e3" id=a0b55718-d306-4eb2-8ac1-727eb9260a2e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:02:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:02:25.268407465Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f771ee103dfe7ed276ddd1e47c755e03da99505bc3fdf48da05c8e2a4aba61e3" id=a0b55718-d306-4eb2-8ac1-727eb9260a2e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:02:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:02:25.268424598Z" level=info msg="runSandbox: unmounting shmPath for sandbox f771ee103dfe7ed276ddd1e47c755e03da99505bc3fdf48da05c8e2a4aba61e3" id=a0b55718-d306-4eb2-8ac1-727eb9260a2e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:02:25 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-f771ee103dfe7ed276ddd1e47c755e03da99505bc3fdf48da05c8e2a4aba61e3-userdata-shm.mount: Deactivated successfully. 
Feb 23 20:02:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:02:25.273327164Z" level=info msg="runSandbox: removing pod sandbox from storage: f771ee103dfe7ed276ddd1e47c755e03da99505bc3fdf48da05c8e2a4aba61e3" id=a0b55718-d306-4eb2-8ac1-727eb9260a2e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:02:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:02:25.274883208Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=a0b55718-d306-4eb2-8ac1-727eb9260a2e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:02:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:02:25.274918946Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=a0b55718-d306-4eb2-8ac1-727eb9260a2e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:02:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:02:25.275129 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(f771ee103dfe7ed276ddd1e47c755e03da99505bc3fdf48da05c8e2a4aba61e3): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 20:02:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:02:25.275185 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(f771ee103dfe7ed276ddd1e47c755e03da99505bc3fdf48da05c8e2a4aba61e3): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 20:02:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:02:25.275208 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(f771ee103dfe7ed276ddd1e47c755e03da99505bc3fdf48da05c8e2a4aba61e3): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 20:02:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:02:25.275316 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(f771ee103dfe7ed276ddd1e47c755e03da99505bc3fdf48da05c8e2a4aba61e3): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 20:02:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:02:26.291785 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:02:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:02:26.292008 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:02:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:02:26.292305 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:02:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:02:26.292334 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 20:02:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:02:31.244689158Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(012374effbb3698ecd4b85279bd83717ee4b69f5529b916cf81b89f24ff4eb28): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b6d62797-f2e1-460e-83f8-414438505882 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:02:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:02:31.244738833Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 012374effbb3698ecd4b85279bd83717ee4b69f5529b916cf81b89f24ff4eb28" id=b6d62797-f2e1-460e-83f8-414438505882 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:02:31 ip-10-0-136-68 systemd[1]: run-utsns-43f65e5b\x2dcd5f\x2d40a3\x2da459\x2d9c3831632ca7.mount: Deactivated successfully. Feb 23 20:02:31 ip-10-0-136-68 systemd[1]: run-ipcns-43f65e5b\x2dcd5f\x2d40a3\x2da459\x2d9c3831632ca7.mount: Deactivated successfully. Feb 23 20:02:31 ip-10-0-136-68 systemd[1]: run-netns-43f65e5b\x2dcd5f\x2d40a3\x2da459\x2d9c3831632ca7.mount: Deactivated successfully. 
Feb 23 20:02:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:02:31.271367954Z" level=info msg="runSandbox: deleting pod ID 012374effbb3698ecd4b85279bd83717ee4b69f5529b916cf81b89f24ff4eb28 from idIndex" id=b6d62797-f2e1-460e-83f8-414438505882 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:02:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:02:31.271415782Z" level=info msg="runSandbox: removing pod sandbox 012374effbb3698ecd4b85279bd83717ee4b69f5529b916cf81b89f24ff4eb28" id=b6d62797-f2e1-460e-83f8-414438505882 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:02:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:02:31.271467873Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 012374effbb3698ecd4b85279bd83717ee4b69f5529b916cf81b89f24ff4eb28" id=b6d62797-f2e1-460e-83f8-414438505882 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:02:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:02:31.271483058Z" level=info msg="runSandbox: unmounting shmPath for sandbox 012374effbb3698ecd4b85279bd83717ee4b69f5529b916cf81b89f24ff4eb28" id=b6d62797-f2e1-460e-83f8-414438505882 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:02:31 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-012374effbb3698ecd4b85279bd83717ee4b69f5529b916cf81b89f24ff4eb28-userdata-shm.mount: Deactivated successfully. 
Feb 23 20:02:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:02:31.277317480Z" level=info msg="runSandbox: removing pod sandbox from storage: 012374effbb3698ecd4b85279bd83717ee4b69f5529b916cf81b89f24ff4eb28" id=b6d62797-f2e1-460e-83f8-414438505882 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:02:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:02:31.278939430Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=b6d62797-f2e1-460e-83f8-414438505882 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:02:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:02:31.278971724Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=b6d62797-f2e1-460e-83f8-414438505882 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:02:31 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:02:31.279170 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(012374effbb3698ecd4b85279bd83717ee4b69f5529b916cf81b89f24ff4eb28): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 20:02:31 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:02:31.279233 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(012374effbb3698ecd4b85279bd83717ee4b69f5529b916cf81b89f24ff4eb28): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 20:02:31 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:02:31.279287 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(012374effbb3698ecd4b85279bd83717ee4b69f5529b916cf81b89f24ff4eb28): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 20:02:31 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:02:31.279353 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(012374effbb3698ecd4b85279bd83717ee4b69f5529b916cf81b89f24ff4eb28): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 20:02:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:02:33.245659050Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(2687e40b94d6d7c289a3c91d03fc5bd60b8140dc25a697cd4d7786ffe3553c0b): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d1b034ba-d4ce-48e4-8e17-cd362ea712a7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:02:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:02:33.245714731Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 2687e40b94d6d7c289a3c91d03fc5bd60b8140dc25a697cd4d7786ffe3553c0b" id=d1b034ba-d4ce-48e4-8e17-cd362ea712a7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:02:33 ip-10-0-136-68 systemd[1]: run-utsns-418a63e4\x2dde79\x2d4cca\x2d9af4\x2de094d9cb6d9c.mount: Deactivated successfully. Feb 23 20:02:33 ip-10-0-136-68 systemd[1]: run-ipcns-418a63e4\x2dde79\x2d4cca\x2d9af4\x2de094d9cb6d9c.mount: Deactivated successfully. Feb 23 20:02:33 ip-10-0-136-68 systemd[1]: run-netns-418a63e4\x2dde79\x2d4cca\x2d9af4\x2de094d9cb6d9c.mount: Deactivated successfully. 
Feb 23 20:02:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:02:33.274319453Z" level=info msg="runSandbox: deleting pod ID 2687e40b94d6d7c289a3c91d03fc5bd60b8140dc25a697cd4d7786ffe3553c0b from idIndex" id=d1b034ba-d4ce-48e4-8e17-cd362ea712a7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:02:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:02:33.274353158Z" level=info msg="runSandbox: removing pod sandbox 2687e40b94d6d7c289a3c91d03fc5bd60b8140dc25a697cd4d7786ffe3553c0b" id=d1b034ba-d4ce-48e4-8e17-cd362ea712a7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:02:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:02:33.274388082Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 2687e40b94d6d7c289a3c91d03fc5bd60b8140dc25a697cd4d7786ffe3553c0b" id=d1b034ba-d4ce-48e4-8e17-cd362ea712a7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:02:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:02:33.274409671Z" level=info msg="runSandbox: unmounting shmPath for sandbox 2687e40b94d6d7c289a3c91d03fc5bd60b8140dc25a697cd4d7786ffe3553c0b" id=d1b034ba-d4ce-48e4-8e17-cd362ea712a7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:02:33 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-2687e40b94d6d7c289a3c91d03fc5bd60b8140dc25a697cd4d7786ffe3553c0b-userdata-shm.mount: Deactivated successfully. 
Feb 23 20:02:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:02:33.279303688Z" level=info msg="runSandbox: removing pod sandbox from storage: 2687e40b94d6d7c289a3c91d03fc5bd60b8140dc25a697cd4d7786ffe3553c0b" id=d1b034ba-d4ce-48e4-8e17-cd362ea712a7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:02:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:02:33.280746044Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=d1b034ba-d4ce-48e4-8e17-cd362ea712a7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:02:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:02:33.280773390Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=d1b034ba-d4ce-48e4-8e17-cd362ea712a7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:02:33 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:02:33.280981 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(2687e40b94d6d7c289a3c91d03fc5bd60b8140dc25a697cd4d7786ffe3553c0b): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 20:02:33 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:02:33.281052 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(2687e40b94d6d7c289a3c91d03fc5bd60b8140dc25a697cd4d7786ffe3553c0b): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 20:02:33 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:02:33.281093 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(2687e40b94d6d7c289a3c91d03fc5bd60b8140dc25a697cd4d7786ffe3553c0b): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 20:02:33 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:02:33.281179 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(2687e40b94d6d7c289a3c91d03fc5bd60b8140dc25a697cd4d7786ffe3553c0b): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 20:02:38 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:02:38.216595 2199 scope.go:115] "RemoveContainer" containerID="fbeddbbb8be668d9a8b25933d2b1100f960a28a7cf2959aeda173dc5a4fe0543" Feb 23 20:02:38 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:02:38.217179 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 20:02:40 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:02:40.217629 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 20:02:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:02:40.218057156Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=ca787d9e-9298-46f7-b29f-f758f74b73ec name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:02:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:02:40.218120148Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 20:02:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:02:40.223640242Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:33718856ce4544eded8450a0e2750576f524dcd4a06830ecdc9641a2a0cdfa1c UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/22862bdb-8826-4e18-97a8-45d7d24db91c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:02:40 ip-10-0-136-68 crio[2158]: time="2023-02-23 
20:02:40.223669224Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:02:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:02:44.217410 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 20:02:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:02:44.217869186Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=5880fbad-2d40-40ef-8658-3ccb18a6a460 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:02:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:02:44.217935919Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 20:02:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:02:44.223683094Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:dd28e9ba6553760c149506004234697ea85c0310bdc27f3af89eb0bf69ae0e20 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/22bcc967-ec57-43f4-8e60-210545b47538 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:02:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:02:44.223719430Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:02:46 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:02:46.216629 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 20:02:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:02:46.217057253Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=2a6d9571-86c8-4e7c-9196-2f046f5d6dc9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:02:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:02:46.217121864Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 20:02:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:02:46.222666479Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:56b557277f7320784545c0b1507312052cf1ef862bed99a695e2e5e22e6368a2 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/c9c083f2-11b5-4cd7-9ef1-944502df7b62 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:02:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:02:46.222694787Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:02:50 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:02:50.217661 2199 scope.go:115] "RemoveContainer" containerID="fbeddbbb8be668d9a8b25933d2b1100f960a28a7cf2959aeda173dc5a4fe0543" Feb 23 20:02:50 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:02:50.218269 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 20:02:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:02:56.292359 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or 
running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:02:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:02:56.292634 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:02:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:02:56.292857 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:02:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:02:56.292887 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 20:03:02 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:03:02.216923 2199 scope.go:115] 
"RemoveContainer" containerID="fbeddbbb8be668d9a8b25933d2b1100f960a28a7cf2959aeda173dc5a4fe0543" Feb 23 20:03:02 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:03:02.217527 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 20:03:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:03:07.246726314Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(b29859b4592d6e3a8ee69a066edff99f766b51c428b6183af0db207d1b74922d): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=0590ca8c-169e-4923-ad03-f795d244508b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:03:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:03:07.246781390Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox b29859b4592d6e3a8ee69a066edff99f766b51c428b6183af0db207d1b74922d" id=0590ca8c-169e-4923-ad03-f795d244508b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:03:07 ip-10-0-136-68 systemd[1]: run-utsns-5817805e\x2d5fda\x2d4b2e\x2d85ea\x2d2e4b46c8b0f5.mount: Deactivated successfully. Feb 23 20:03:07 ip-10-0-136-68 systemd[1]: run-ipcns-5817805e\x2d5fda\x2d4b2e\x2d85ea\x2d2e4b46c8b0f5.mount: Deactivated successfully. 
Feb 23 20:03:07 ip-10-0-136-68 systemd[1]: run-netns-5817805e\x2d5fda\x2d4b2e\x2d85ea\x2d2e4b46c8b0f5.mount: Deactivated successfully. Feb 23 20:03:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:03:07.267327821Z" level=info msg="runSandbox: deleting pod ID b29859b4592d6e3a8ee69a066edff99f766b51c428b6183af0db207d1b74922d from idIndex" id=0590ca8c-169e-4923-ad03-f795d244508b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:03:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:03:07.267368907Z" level=info msg="runSandbox: removing pod sandbox b29859b4592d6e3a8ee69a066edff99f766b51c428b6183af0db207d1b74922d" id=0590ca8c-169e-4923-ad03-f795d244508b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:03:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:03:07.267412986Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox b29859b4592d6e3a8ee69a066edff99f766b51c428b6183af0db207d1b74922d" id=0590ca8c-169e-4923-ad03-f795d244508b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:03:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:03:07.267429136Z" level=info msg="runSandbox: unmounting shmPath for sandbox b29859b4592d6e3a8ee69a066edff99f766b51c428b6183af0db207d1b74922d" id=0590ca8c-169e-4923-ad03-f795d244508b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:03:07 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-b29859b4592d6e3a8ee69a066edff99f766b51c428b6183af0db207d1b74922d-userdata-shm.mount: Deactivated successfully. 
Feb 23 20:03:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:03:07.273309483Z" level=info msg="runSandbox: removing pod sandbox from storage: b29859b4592d6e3a8ee69a066edff99f766b51c428b6183af0db207d1b74922d" id=0590ca8c-169e-4923-ad03-f795d244508b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:03:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:03:07.274845300Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=0590ca8c-169e-4923-ad03-f795d244508b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:03:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:03:07.274873502Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=0590ca8c-169e-4923-ad03-f795d244508b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:03:07 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:03:07.275081 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(b29859b4592d6e3a8ee69a066edff99f766b51c428b6183af0db207d1b74922d): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 20:03:07 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:03:07.275143 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(b29859b4592d6e3a8ee69a066edff99f766b51c428b6183af0db207d1b74922d): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 20:03:07 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:03:07.275169 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(b29859b4592d6e3a8ee69a066edff99f766b51c428b6183af0db207d1b74922d): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 20:03:07 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:03:07.275229 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(b29859b4592d6e3a8ee69a066edff99f766b51c428b6183af0db207d1b74922d): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 20:03:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:03:09.234724145Z" level=info msg="NetworkStart: stopping network for sandbox 421e7c28efebad90fb1f9a18c8fc80d57f0b9234b1065d0e4e08ba683d6ca527" id=1419a13d-6028-48f8-bbc3-349ee09bf167 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:03:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:03:09.234840443Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:421e7c28efebad90fb1f9a18c8fc80d57f0b9234b1065d0e4e08ba683d6ca527 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/a9056ad6-6983-4ff7-9251-ed77be66eb48 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:03:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:03:09.234866870Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 20:03:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:03:09.234877990Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 20:03:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:03:09.234886634Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:03:13 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:03:13.217113 2199 scope.go:115] "RemoveContainer" containerID="fbeddbbb8be668d9a8b25933d2b1100f960a28a7cf2959aeda173dc5a4fe0543" Feb 23 20:03:13 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:03:13.217555 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver 
pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 20:03:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:03:21.216962 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 20:03:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:03:21.217369510Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=ca7f3933-5734-4cc8-9dd1-78be80861c7d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:03:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:03:21.217427975Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 20:03:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:03:21.222520245Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:aecc99ebd792a46788ef5789cdbf7f1208e2d30b84c388d0617f37d00d8c70b8 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/36a111d1-507e-4fd9-a76d-0715dc5ff6ce Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:03:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:03:21.222546914Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:03:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:03:25.235166229Z" level=info msg="NetworkStart: stopping network for sandbox 33718856ce4544eded8450a0e2750576f524dcd4a06830ecdc9641a2a0cdfa1c" id=ca787d9e-9298-46f7-b29f-f758f74b73ec name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:03:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:03:25.235311113Z" level=info msg="Got pod network 
&{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:33718856ce4544eded8450a0e2750576f524dcd4a06830ecdc9641a2a0cdfa1c UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/22862bdb-8826-4e18-97a8-45d7d24db91c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:03:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:03:25.235339813Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 20:03:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:03:25.235347456Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 20:03:25 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:03:25.235353936Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:03:26 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:03:26.217090 2199 scope.go:115] "RemoveContainer" containerID="fbeddbbb8be668d9a8b25933d2b1100f960a28a7cf2959aeda173dc5a4fe0543" Feb 23 20:03:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:03:26.217527 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 20:03:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:03:26.291779 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" 
containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:03:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:03:26.292003 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:03:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:03:26.292271 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:03:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:03:26.292323 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 20:03:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:03:29.237031895Z" level=info msg="NetworkStart: stopping network for sandbox dd28e9ba6553760c149506004234697ea85c0310bdc27f3af89eb0bf69ae0e20" id=5880fbad-2d40-40ef-8658-3ccb18a6a460 
name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:03:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:03:29.237162649Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:dd28e9ba6553760c149506004234697ea85c0310bdc27f3af89eb0bf69ae0e20 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/22bcc967-ec57-43f4-8e60-210545b47538 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:03:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:03:29.237202511Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 20:03:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:03:29.237214750Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 20:03:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:03:29.237225620Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:03:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:03:31.236140679Z" level=info msg="NetworkStart: stopping network for sandbox 56b557277f7320784545c0b1507312052cf1ef862bed99a695e2e5e22e6368a2" id=2a6d9571-86c8-4e7c-9196-2f046f5d6dc9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:03:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:03:31.236304197Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:56b557277f7320784545c0b1507312052cf1ef862bed99a695e2e5e22e6368a2 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/c9c083f2-11b5-4cd7-9ef1-944502df7b62 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:03:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:03:31.236336117Z" level=error msg="error loading cached network config: network 
\"multus-cni-network\" not found in CNI cache" Feb 23 20:03:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:03:31.236344556Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 20:03:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:03:31.236351397Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:03:40 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:03:40.230958 2199 scope.go:115] "RemoveContainer" containerID="fbeddbbb8be668d9a8b25933d2b1100f960a28a7cf2959aeda173dc5a4fe0543" Feb 23 20:03:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:03:40.231434 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 20:03:51 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:03:51.216591 2199 scope.go:115] "RemoveContainer" containerID="fbeddbbb8be668d9a8b25933d2b1100f960a28a7cf2959aeda173dc5a4fe0543" Feb 23 20:03:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:03:51.217139 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 20:03:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:03:53.216940 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:03:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:03:53.217240 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:03:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:03:53.217580 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:03:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:03:53.217624 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 20:03:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:03:54.244893364Z" level=error msg="Error stopping network on 
cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(421e7c28efebad90fb1f9a18c8fc80d57f0b9234b1065d0e4e08ba683d6ca527): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=1419a13d-6028-48f8-bbc3-349ee09bf167 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:03:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:03:54.244944342Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 421e7c28efebad90fb1f9a18c8fc80d57f0b9234b1065d0e4e08ba683d6ca527" id=1419a13d-6028-48f8-bbc3-349ee09bf167 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:03:54 ip-10-0-136-68 systemd[1]: run-utsns-a9056ad6\x2d6983\x2d4ff7\x2d9251\x2ded77be66eb48.mount: Deactivated successfully. Feb 23 20:03:54 ip-10-0-136-68 systemd[1]: run-ipcns-a9056ad6\x2d6983\x2d4ff7\x2d9251\x2ded77be66eb48.mount: Deactivated successfully. Feb 23 20:03:54 ip-10-0-136-68 systemd[1]: run-netns-a9056ad6\x2d6983\x2d4ff7\x2d9251\x2ded77be66eb48.mount: Deactivated successfully. 
Feb 23 20:03:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:03:54.281333333Z" level=info msg="runSandbox: deleting pod ID 421e7c28efebad90fb1f9a18c8fc80d57f0b9234b1065d0e4e08ba683d6ca527 from idIndex" id=1419a13d-6028-48f8-bbc3-349ee09bf167 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:03:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:03:54.281368541Z" level=info msg="runSandbox: removing pod sandbox 421e7c28efebad90fb1f9a18c8fc80d57f0b9234b1065d0e4e08ba683d6ca527" id=1419a13d-6028-48f8-bbc3-349ee09bf167 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:03:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:03:54.281400973Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 421e7c28efebad90fb1f9a18c8fc80d57f0b9234b1065d0e4e08ba683d6ca527" id=1419a13d-6028-48f8-bbc3-349ee09bf167 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:03:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:03:54.281423445Z" level=info msg="runSandbox: unmounting shmPath for sandbox 421e7c28efebad90fb1f9a18c8fc80d57f0b9234b1065d0e4e08ba683d6ca527" id=1419a13d-6028-48f8-bbc3-349ee09bf167 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:03:54 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-421e7c28efebad90fb1f9a18c8fc80d57f0b9234b1065d0e4e08ba683d6ca527-userdata-shm.mount: Deactivated successfully. 
Feb 23 20:03:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:03:54.287305478Z" level=info msg="runSandbox: removing pod sandbox from storage: 421e7c28efebad90fb1f9a18c8fc80d57f0b9234b1065d0e4e08ba683d6ca527" id=1419a13d-6028-48f8-bbc3-349ee09bf167 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:03:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:03:54.288813113Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=1419a13d-6028-48f8-bbc3-349ee09bf167 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:03:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:03:54.288842514Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=1419a13d-6028-48f8-bbc3-349ee09bf167 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:03:54 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:03:54.289032 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(421e7c28efebad90fb1f9a18c8fc80d57f0b9234b1065d0e4e08ba683d6ca527): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 20:03:54 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:03:54.289084 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(421e7c28efebad90fb1f9a18c8fc80d57f0b9234b1065d0e4e08ba683d6ca527): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 20:03:54 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:03:54.289112 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(421e7c28efebad90fb1f9a18c8fc80d57f0b9234b1065d0e4e08ba683d6ca527): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 20:03:54 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:03:54.289166 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(421e7c28efebad90fb1f9a18c8fc80d57f0b9234b1065d0e4e08ba683d6ca527): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 20:03:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:03:56.292697 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:03:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:03:56.293002 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:03:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:03:56.293231 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:03:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:03:56.293293 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 20:04:06 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:04:06.216722 2199 scope.go:115] "RemoveContainer" containerID="fbeddbbb8be668d9a8b25933d2b1100f960a28a7cf2959aeda173dc5a4fe0543" Feb 23 20:04:06 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:04:06.217308 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 20:04:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:06.234214204Z" level=info msg="NetworkStart: stopping network for sandbox aecc99ebd792a46788ef5789cdbf7f1208e2d30b84c388d0617f37d00d8c70b8" id=ca7f3933-5734-4cc8-9dd1-78be80861c7d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:04:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:06.234366669Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:aecc99ebd792a46788ef5789cdbf7f1208e2d30b84c388d0617f37d00d8c70b8 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/36a111d1-507e-4fd9-a76d-0715dc5ff6ce Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:04:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:06.234406015Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 20:04:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 
20:04:06.234416733Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 20:04:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:06.234427962Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:04:07 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:04:07.216508 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 20:04:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:07.216942311Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=0e243598-e8b5-4cbd-a23c-96f162911bce name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:04:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:07.217006245Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 20:04:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:07.222222107Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:87ba1a5f1e21cbaa92adc93c8cb3dd7d42a89e98b903977c19d73a36ffd06c16 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/2b50d573-2efc-4a6a-96de-dda1fb80c8ee Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:04:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:07.222282536Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:04:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:10.245301998Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(33718856ce4544eded8450a0e2750576f524dcd4a06830ecdc9641a2a0cdfa1c): error removing pod 
openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=ca787d9e-9298-46f7-b29f-f758f74b73ec name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:04:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:10.245345155Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 33718856ce4544eded8450a0e2750576f524dcd4a06830ecdc9641a2a0cdfa1c" id=ca787d9e-9298-46f7-b29f-f758f74b73ec name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:04:10 ip-10-0-136-68 systemd[1]: run-utsns-22862bdb\x2d8826\x2d4e18\x2d97a8\x2d45d7d24db91c.mount: Deactivated successfully. Feb 23 20:04:10 ip-10-0-136-68 systemd[1]: run-ipcns-22862bdb\x2d8826\x2d4e18\x2d97a8\x2d45d7d24db91c.mount: Deactivated successfully. Feb 23 20:04:10 ip-10-0-136-68 systemd[1]: run-netns-22862bdb\x2d8826\x2d4e18\x2d97a8\x2d45d7d24db91c.mount: Deactivated successfully. 
Feb 23 20:04:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:10.267322407Z" level=info msg="runSandbox: deleting pod ID 33718856ce4544eded8450a0e2750576f524dcd4a06830ecdc9641a2a0cdfa1c from idIndex" id=ca787d9e-9298-46f7-b29f-f758f74b73ec name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:04:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:10.267354012Z" level=info msg="runSandbox: removing pod sandbox 33718856ce4544eded8450a0e2750576f524dcd4a06830ecdc9641a2a0cdfa1c" id=ca787d9e-9298-46f7-b29f-f758f74b73ec name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:04:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:10.267377768Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 33718856ce4544eded8450a0e2750576f524dcd4a06830ecdc9641a2a0cdfa1c" id=ca787d9e-9298-46f7-b29f-f758f74b73ec name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:04:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:10.267390280Z" level=info msg="runSandbox: unmounting shmPath for sandbox 33718856ce4544eded8450a0e2750576f524dcd4a06830ecdc9641a2a0cdfa1c" id=ca787d9e-9298-46f7-b29f-f758f74b73ec name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:04:10 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-33718856ce4544eded8450a0e2750576f524dcd4a06830ecdc9641a2a0cdfa1c-userdata-shm.mount: Deactivated successfully. 
Feb 23 20:04:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:10.273315946Z" level=info msg="runSandbox: removing pod sandbox from storage: 33718856ce4544eded8450a0e2750576f524dcd4a06830ecdc9641a2a0cdfa1c" id=ca787d9e-9298-46f7-b29f-f758f74b73ec name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:04:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:10.274868818Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=ca787d9e-9298-46f7-b29f-f758f74b73ec name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:04:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:10.274902299Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=ca787d9e-9298-46f7-b29f-f758f74b73ec name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:04:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:04:10.275066 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(33718856ce4544eded8450a0e2750576f524dcd4a06830ecdc9641a2a0cdfa1c): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 20:04:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:04:10.275112 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(33718856ce4544eded8450a0e2750576f524dcd4a06830ecdc9641a2a0cdfa1c): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 20:04:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:04:10.275134 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(33718856ce4544eded8450a0e2750576f524dcd4a06830ecdc9641a2a0cdfa1c): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 20:04:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:04:10.275189 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(33718856ce4544eded8450a0e2750576f524dcd4a06830ecdc9641a2a0cdfa1c): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 20:04:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:14.246220101Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(dd28e9ba6553760c149506004234697ea85c0310bdc27f3af89eb0bf69ae0e20): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=5880fbad-2d40-40ef-8658-3ccb18a6a460 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:04:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:14.246287429Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox dd28e9ba6553760c149506004234697ea85c0310bdc27f3af89eb0bf69ae0e20" id=5880fbad-2d40-40ef-8658-3ccb18a6a460 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:04:14 ip-10-0-136-68 systemd[1]: run-utsns-22bcc967\x2dec57\x2d43f4\x2d8e60\x2d210545b47538.mount: Deactivated successfully. Feb 23 20:04:14 ip-10-0-136-68 systemd[1]: run-ipcns-22bcc967\x2dec57\x2d43f4\x2d8e60\x2d210545b47538.mount: Deactivated successfully. Feb 23 20:04:14 ip-10-0-136-68 systemd[1]: run-netns-22bcc967\x2dec57\x2d43f4\x2d8e60\x2d210545b47538.mount: Deactivated successfully. 
Feb 23 20:04:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:14.275336754Z" level=info msg="runSandbox: deleting pod ID dd28e9ba6553760c149506004234697ea85c0310bdc27f3af89eb0bf69ae0e20 from idIndex" id=5880fbad-2d40-40ef-8658-3ccb18a6a460 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:04:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:14.275367606Z" level=info msg="runSandbox: removing pod sandbox dd28e9ba6553760c149506004234697ea85c0310bdc27f3af89eb0bf69ae0e20" id=5880fbad-2d40-40ef-8658-3ccb18a6a460 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:04:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:14.275389400Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox dd28e9ba6553760c149506004234697ea85c0310bdc27f3af89eb0bf69ae0e20" id=5880fbad-2d40-40ef-8658-3ccb18a6a460 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:04:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:14.275405571Z" level=info msg="runSandbox: unmounting shmPath for sandbox dd28e9ba6553760c149506004234697ea85c0310bdc27f3af89eb0bf69ae0e20" id=5880fbad-2d40-40ef-8658-3ccb18a6a460 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:04:14 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-dd28e9ba6553760c149506004234697ea85c0310bdc27f3af89eb0bf69ae0e20-userdata-shm.mount: Deactivated successfully. 
Feb 23 20:04:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:14.291322921Z" level=info msg="runSandbox: removing pod sandbox from storage: dd28e9ba6553760c149506004234697ea85c0310bdc27f3af89eb0bf69ae0e20" id=5880fbad-2d40-40ef-8658-3ccb18a6a460 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:04:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:14.292830171Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=5880fbad-2d40-40ef-8658-3ccb18a6a460 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:04:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:14.292857460Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=5880fbad-2d40-40ef-8658-3ccb18a6a460 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:04:14 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:04:14.293038 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(dd28e9ba6553760c149506004234697ea85c0310bdc27f3af89eb0bf69ae0e20): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 20:04:14 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:04:14.293088 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(dd28e9ba6553760c149506004234697ea85c0310bdc27f3af89eb0bf69ae0e20): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 20:04:14 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:04:14.293112 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(dd28e9ba6553760c149506004234697ea85c0310bdc27f3af89eb0bf69ae0e20): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 20:04:14 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:04:14.293181 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(dd28e9ba6553760c149506004234697ea85c0310bdc27f3af89eb0bf69ae0e20): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 20:04:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:16.245320164Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(56b557277f7320784545c0b1507312052cf1ef862bed99a695e2e5e22e6368a2): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=2a6d9571-86c8-4e7c-9196-2f046f5d6dc9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:04:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:16.245360481Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 56b557277f7320784545c0b1507312052cf1ef862bed99a695e2e5e22e6368a2" id=2a6d9571-86c8-4e7c-9196-2f046f5d6dc9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:04:16 ip-10-0-136-68 systemd[1]: run-utsns-c9c083f2\x2d11b5\x2d4cd7\x2d9ef1\x2d944502df7b62.mount: Deactivated successfully. Feb 23 20:04:16 ip-10-0-136-68 systemd[1]: run-ipcns-c9c083f2\x2d11b5\x2d4cd7\x2d9ef1\x2d944502df7b62.mount: Deactivated successfully. Feb 23 20:04:16 ip-10-0-136-68 systemd[1]: run-netns-c9c083f2\x2d11b5\x2d4cd7\x2d9ef1\x2d944502df7b62.mount: Deactivated successfully. 
Feb 23 20:04:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:16.281326611Z" level=info msg="runSandbox: deleting pod ID 56b557277f7320784545c0b1507312052cf1ef862bed99a695e2e5e22e6368a2 from idIndex" id=2a6d9571-86c8-4e7c-9196-2f046f5d6dc9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:04:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:16.281365617Z" level=info msg="runSandbox: removing pod sandbox 56b557277f7320784545c0b1507312052cf1ef862bed99a695e2e5e22e6368a2" id=2a6d9571-86c8-4e7c-9196-2f046f5d6dc9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:04:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:16.281400757Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 56b557277f7320784545c0b1507312052cf1ef862bed99a695e2e5e22e6368a2" id=2a6d9571-86c8-4e7c-9196-2f046f5d6dc9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:04:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:16.281422506Z" level=info msg="runSandbox: unmounting shmPath for sandbox 56b557277f7320784545c0b1507312052cf1ef862bed99a695e2e5e22e6368a2" id=2a6d9571-86c8-4e7c-9196-2f046f5d6dc9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:04:16 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-56b557277f7320784545c0b1507312052cf1ef862bed99a695e2e5e22e6368a2-userdata-shm.mount: Deactivated successfully. 
Feb 23 20:04:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:16.293289307Z" level=info msg="runSandbox: removing pod sandbox from storage: 56b557277f7320784545c0b1507312052cf1ef862bed99a695e2e5e22e6368a2" id=2a6d9571-86c8-4e7c-9196-2f046f5d6dc9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:04:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:16.294760616Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=2a6d9571-86c8-4e7c-9196-2f046f5d6dc9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:04:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:16.294793832Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=2a6d9571-86c8-4e7c-9196-2f046f5d6dc9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:04:16 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:04:16.294987 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(56b557277f7320784545c0b1507312052cf1ef862bed99a695e2e5e22e6368a2): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 20:04:16 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:04:16.295047 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(56b557277f7320784545c0b1507312052cf1ef862bed99a695e2e5e22e6368a2): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 20:04:16 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:04:16.295104 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(56b557277f7320784545c0b1507312052cf1ef862bed99a695e2e5e22e6368a2): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 20:04:16 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:04:16.295180 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(56b557277f7320784545c0b1507312052cf1ef862bed99a695e2e5e22e6368a2): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 20:04:18 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:04:18.217404 2199 scope.go:115] "RemoveContainer" containerID="fbeddbbb8be668d9a8b25933d2b1100f960a28a7cf2959aeda173dc5a4fe0543" Feb 23 20:04:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:04:18.217945 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 20:04:23 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:04:23.216846 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 20:04:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:23.217181355Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=dd22c5fc-c592-4953-9ffa-f04085a0dc95 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:04:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:23.217263603Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 20:04:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:23.222498575Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:719735becd338bc28e63b111ad8483f738bc648844ae891465c75282e7902440 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/2e6d9969-634c-4505-8064-ff0aeca0af7d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:04:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:23.222673405Z" level=info 
msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:04:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:04:26.292178 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:04:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:04:26.292414 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:04:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:04:26.292653 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:04:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:04:26.292690 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running 
failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 20:04:27 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:04:27.216908 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 20:04:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:27.217228753Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=98b0bd57-35a2-43ce-9ce5-5e2f1ef0417b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:04:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:27.217308987Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 20:04:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:27.222553338Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:b83adb1a6de232ac45d2e774c13530401a9a5f8651ca38def18ea02e4553dffb UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/9214ece2-236a-4797-b67b-53b22ca2715f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:04:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:27.222579121Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:04:29 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:04:29.216989 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 20:04:29 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:04:29.217078 2199 scope.go:115] "RemoveContainer" containerID="fbeddbbb8be668d9a8b25933d2b1100f960a28a7cf2959aeda173dc5a4fe0543" Feb 23 20:04:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:29.217520970Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=eb46031a-4f7d-4839-97ca-2831a237b34b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:04:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:29.217602172Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 20:04:29 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:04:29.217568 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 20:04:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:29.223172775Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:c2d5a108c88f68bca0b6149416856c4b380f16ec910ff339acd6ba5d3ddcd5f9 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/164e81e9-f3c6-44ba-a68c-65e4f881c769 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:04:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:29.223206169Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:04:42 ip-10-0-136-68 kubenswrapper[2199]: I0223 
20:04:42.217498 2199 scope.go:115] "RemoveContainer" containerID="fbeddbbb8be668d9a8b25933d2b1100f960a28a7cf2959aeda173dc5a4fe0543" Feb 23 20:04:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:42.218341236Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=8cd405d5-3d5d-4f9f-a924-69d32fe726e8 name=/runtime.v1.ImageService/ImageStatus Feb 23 20:04:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:42.218537312Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=8cd405d5-3d5d-4f9f-a924-69d32fe726e8 name=/runtime.v1.ImageService/ImageStatus Feb 23 20:04:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:42.219223382Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=2e0efdfc-474e-4727-85e4-9d908fba3f96 name=/runtime.v1.ImageService/ImageStatus Feb 23 20:04:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:42.219405308Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=2e0efdfc-474e-4727-85e4-9d908fba3f96 name=/runtime.v1.ImageService/ImageStatus Feb 23 
20:04:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:42.220105648Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=c66d20ac-fe33-4f7a-8835-cce2d814e8b4 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 20:04:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:42.220201251Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 20:04:42 ip-10-0-136-68 systemd[1]: Started crio-conmon-a3a9d76bc9909e0de9ee32020babb3260ad7ac88201ac9ef96ab2ea6d6f80e5c.scope. Feb 23 20:04:42 ip-10-0-136-68 systemd[1]: Started libcontainer container a3a9d76bc9909e0de9ee32020babb3260ad7ac88201ac9ef96ab2ea6d6f80e5c. Feb 23 20:04:42 ip-10-0-136-68 conmon[21188]: conmon a3a9d76bc9909e0de9ee : Failed to write to cgroup.event_control Operation not supported Feb 23 20:04:42 ip-10-0-136-68 systemd[1]: crio-conmon-a3a9d76bc9909e0de9ee32020babb3260ad7ac88201ac9ef96ab2ea6d6f80e5c.scope: Deactivated successfully. 
Feb 23 20:04:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:42.364080295Z" level=info msg="Created container a3a9d76bc9909e0de9ee32020babb3260ad7ac88201ac9ef96ab2ea6d6f80e5c: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=c66d20ac-fe33-4f7a-8835-cce2d814e8b4 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 20:04:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:42.364576699Z" level=info msg="Starting container: a3a9d76bc9909e0de9ee32020babb3260ad7ac88201ac9ef96ab2ea6d6f80e5c" id=69a5551f-52cb-4693-b569-898ee7602483 name=/runtime.v1.RuntimeService/StartContainer Feb 23 20:04:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:42.371831254Z" level=info msg="Started container" PID=21200 containerID=a3a9d76bc9909e0de9ee32020babb3260ad7ac88201ac9ef96ab2ea6d6f80e5c description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver id=69a5551f-52cb-4693-b569-898ee7602483 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4ac357af36e7dd4792f93249425e93fd84aed45840e9fbb3d0ae6c0af964798 Feb 23 20:04:42 ip-10-0-136-68 systemd[1]: crio-a3a9d76bc9909e0de9ee32020babb3260ad7ac88201ac9ef96ab2ea6d6f80e5c.scope: Deactivated successfully. 
Feb 23 20:04:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:46.795944843Z" level=warning msg="Failed to find container exit file for fbeddbbb8be668d9a8b25933d2b1100f960a28a7cf2959aeda173dc5a4fe0543: timed out waiting for the condition" id=c91c31eb-b310-4343-b85f-fe102a380614 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 20:04:46 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:04:46.796805 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerStarted Data:a3a9d76bc9909e0de9ee32020babb3260ad7ac88201ac9ef96ab2ea6d6f80e5c} Feb 23 20:04:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:51.244364413Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(aecc99ebd792a46788ef5789cdbf7f1208e2d30b84c388d0617f37d00d8c70b8): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=ca7f3933-5734-4cc8-9dd1-78be80861c7d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:04:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:51.244421835Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox aecc99ebd792a46788ef5789cdbf7f1208e2d30b84c388d0617f37d00d8c70b8" id=ca7f3933-5734-4cc8-9dd1-78be80861c7d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:04:51 ip-10-0-136-68 systemd[1]: run-utsns-36a111d1\x2d507e\x2d4fd9\x2da76d\x2d0715dc5ff6ce.mount: Deactivated successfully. 
Feb 23 20:04:51 ip-10-0-136-68 systemd[1]: run-ipcns-36a111d1\x2d507e\x2d4fd9\x2da76d\x2d0715dc5ff6ce.mount: Deactivated successfully. Feb 23 20:04:51 ip-10-0-136-68 systemd[1]: run-netns-36a111d1\x2d507e\x2d4fd9\x2da76d\x2d0715dc5ff6ce.mount: Deactivated successfully. Feb 23 20:04:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:51.286382464Z" level=info msg="runSandbox: deleting pod ID aecc99ebd792a46788ef5789cdbf7f1208e2d30b84c388d0617f37d00d8c70b8 from idIndex" id=ca7f3933-5734-4cc8-9dd1-78be80861c7d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:04:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:51.286422048Z" level=info msg="runSandbox: removing pod sandbox aecc99ebd792a46788ef5789cdbf7f1208e2d30b84c388d0617f37d00d8c70b8" id=ca7f3933-5734-4cc8-9dd1-78be80861c7d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:04:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:51.286450653Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox aecc99ebd792a46788ef5789cdbf7f1208e2d30b84c388d0617f37d00d8c70b8" id=ca7f3933-5734-4cc8-9dd1-78be80861c7d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:04:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:51.286463037Z" level=info msg="runSandbox: unmounting shmPath for sandbox aecc99ebd792a46788ef5789cdbf7f1208e2d30b84c388d0617f37d00d8c70b8" id=ca7f3933-5734-4cc8-9dd1-78be80861c7d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:04:51 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-aecc99ebd792a46788ef5789cdbf7f1208e2d30b84c388d0617f37d00d8c70b8-userdata-shm.mount: Deactivated successfully. 
Feb 23 20:04:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:51.292295396Z" level=info msg="runSandbox: removing pod sandbox from storage: aecc99ebd792a46788ef5789cdbf7f1208e2d30b84c388d0617f37d00d8c70b8" id=ca7f3933-5734-4cc8-9dd1-78be80861c7d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:04:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:51.293842317Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=ca7f3933-5734-4cc8-9dd1-78be80861c7d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:04:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:51.293872592Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=ca7f3933-5734-4cc8-9dd1-78be80861c7d name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:04:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:04:51.294059 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(aecc99ebd792a46788ef5789cdbf7f1208e2d30b84c388d0617f37d00d8c70b8): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 20:04:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:04:51.294110 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(aecc99ebd792a46788ef5789cdbf7f1208e2d30b84c388d0617f37d00d8c70b8): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 20:04:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:04:51.294133 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(aecc99ebd792a46788ef5789cdbf7f1208e2d30b84c388d0617f37d00d8c70b8): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 20:04:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:04:51.294191 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(aecc99ebd792a46788ef5789cdbf7f1208e2d30b84c388d0617f37d00d8c70b8): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 20:04:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:52.233849664Z" level=info msg="NetworkStart: stopping network for sandbox 87ba1a5f1e21cbaa92adc93c8cb3dd7d42a89e98b903977c19d73a36ffd06c16" id=0e243598-e8b5-4cbd-a23c-96f162911bce name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:04:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:52.233973386Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:87ba1a5f1e21cbaa92adc93c8cb3dd7d42a89e98b903977c19d73a36ffd06c16 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/2b50d573-2efc-4a6a-96de-dda1fb80c8ee Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:04:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:52.234011917Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 20:04:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:52.234020927Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 20:04:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:04:52.234027688Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:04:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:04:54.872402 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 20:04:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:04:54.872461 2199 prober.go:109] "Probe failed" probeType="Liveness" 
pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 20:04:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:04:56.291963 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:04:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:04:56.292213 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:04:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:04:56.292485 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:04:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:04:56.292526 2199 prober.go:106] "Probe errored" err="rpc error: code = 
NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 20:05:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:05:04.872883 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 20:05:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:05:04.872935 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 20:05:05 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:05:05.216453 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 20:05:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:05.216831628Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=635a0d9b-a5d1-47ab-8551-6b5aa8b7db2e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:05:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:05.216897879Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 20:05:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:05.222457147Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:f4db71482035ca9105bd0cdf733ad64cf65f65daff413af3aa164466be20ad93 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/4559d7af-c2a0-4ee4-8228-332e7d2da2d7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:05:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:05.222480999Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:05:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:08.234558541Z" level=info msg="NetworkStart: stopping network for sandbox 719735becd338bc28e63b111ad8483f738bc648844ae891465c75282e7902440" id=dd22c5fc-c592-4953-9ffa-f04085a0dc95 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:05:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:08.234679472Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:719735becd338bc28e63b111ad8483f738bc648844ae891465c75282e7902440 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/2e6d9969-634c-4505-8064-ff0aeca0af7d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:05:08 
ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:08.234719159Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 20:05:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:08.234735015Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 20:05:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:08.234748890Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:05:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:12.234137564Z" level=info msg="NetworkStart: stopping network for sandbox b83adb1a6de232ac45d2e774c13530401a9a5f8651ca38def18ea02e4553dffb" id=98b0bd57-35a2-43ce-9ce5-5e2f1ef0417b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:05:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:12.234274836Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:b83adb1a6de232ac45d2e774c13530401a9a5f8651ca38def18ea02e4553dffb UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/9214ece2-236a-4797-b67b-53b22ca2715f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:05:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:12.234323519Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 20:05:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:12.234330986Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 20:05:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:12.234337493Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:05:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:14.237230282Z" level=info msg="NetworkStart: stopping network for sandbox 
c2d5a108c88f68bca0b6149416856c4b380f16ec910ff339acd6ba5d3ddcd5f9" id=eb46031a-4f7d-4839-97ca-2831a237b34b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:05:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:14.237365663Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:c2d5a108c88f68bca0b6149416856c4b380f16ec910ff339acd6ba5d3ddcd5f9 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/164e81e9-f3c6-44ba-a68c-65e4f881c769 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:05:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:14.237393332Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 20:05:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:14.237400632Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 20:05:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:14.237407231Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:05:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:05:14.872647 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 20:05:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:05:14.872698 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 20:05:20 ip-10-0-136-68 
crio[2158]: time="2023-02-23 20:05:20.257752723Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3" id=c4f09705-4079-4c33-a5eb-c322201f8e24 name=/runtime.v1.ImageService/ImageStatus Feb 23 20:05:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:20.257940723Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:51cb7555d0a960fc0daae74a5b5c47c19fd1ae7bd31e2616ebc88f696f511e4d,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3],Size_:351828271,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=c4f09705-4079-4c33-a5eb-c322201f8e24 name=/runtime.v1.ImageService/ImageStatus Feb 23 20:05:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:05:23.217471 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:05:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:05:23.217760 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:05:23 ip-10-0-136-68 
kubenswrapper[2199]: E0223 20:05:23.217964 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:05:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:05:23.218005 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 20:05:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:05:24.872387 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 20:05:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:05:24.872448 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 20:05:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:05:26.292164 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if 
PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:05:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:05:26.292438 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:05:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:05:26.292673 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:05:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:05:26.292699 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 20:05:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:05:34.872303 2199 patch_prober.go:28] interesting 
pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 20:05:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:05:34.872357 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 20:05:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:05:34.872380 2199 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 20:05:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:05:34.872891 2199 kuberuntime_manager.go:659] "Message for Container of pod" containerName="csi-driver" containerStatusID={Type:cri-o ID:a3a9d76bc9909e0de9ee32020babb3260ad7ac88201ac9ef96ab2ea6d6f80e5c} pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" containerMessage="Container csi-driver failed liveness probe, will be restarted" Feb 23 20:05:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:05:34.873062 2199 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" containerID="cri-o://a3a9d76bc9909e0de9ee32020babb3260ad7ac88201ac9ef96ab2ea6d6f80e5c" gracePeriod=30 Feb 23 20:05:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:34.873267428Z" level=info msg="Stopping container: a3a9d76bc9909e0de9ee32020babb3260ad7ac88201ac9ef96ab2ea6d6f80e5c (timeout: 30s)" id=c1da4579-8d42-4f16-8f24-9a37948cf2c1 name=/runtime.v1.RuntimeService/StopContainer Feb 23 20:05:37 ip-10-0-136-68 crio[2158]: 
time="2023-02-23 20:05:37.243296949Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(87ba1a5f1e21cbaa92adc93c8cb3dd7d42a89e98b903977c19d73a36ffd06c16): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=0e243598-e8b5-4cbd-a23c-96f162911bce name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:05:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:37.243348462Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 87ba1a5f1e21cbaa92adc93c8cb3dd7d42a89e98b903977c19d73a36ffd06c16" id=0e243598-e8b5-4cbd-a23c-96f162911bce name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:05:37 ip-10-0-136-68 systemd[1]: run-utsns-2b50d573\x2d2efc\x2d4a6a\x2d96de\x2ddda1fb80c8ee.mount: Deactivated successfully. Feb 23 20:05:37 ip-10-0-136-68 systemd[1]: run-ipcns-2b50d573\x2d2efc\x2d4a6a\x2d96de\x2ddda1fb80c8ee.mount: Deactivated successfully. Feb 23 20:05:37 ip-10-0-136-68 systemd[1]: run-netns-2b50d573\x2d2efc\x2d4a6a\x2d96de\x2ddda1fb80c8ee.mount: Deactivated successfully. 
Feb 23 20:05:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:37.268334083Z" level=info msg="runSandbox: deleting pod ID 87ba1a5f1e21cbaa92adc93c8cb3dd7d42a89e98b903977c19d73a36ffd06c16 from idIndex" id=0e243598-e8b5-4cbd-a23c-96f162911bce name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:05:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:37.268365096Z" level=info msg="runSandbox: removing pod sandbox 87ba1a5f1e21cbaa92adc93c8cb3dd7d42a89e98b903977c19d73a36ffd06c16" id=0e243598-e8b5-4cbd-a23c-96f162911bce name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:05:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:37.268395723Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 87ba1a5f1e21cbaa92adc93c8cb3dd7d42a89e98b903977c19d73a36ffd06c16" id=0e243598-e8b5-4cbd-a23c-96f162911bce name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:05:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:37.268413857Z" level=info msg="runSandbox: unmounting shmPath for sandbox 87ba1a5f1e21cbaa92adc93c8cb3dd7d42a89e98b903977c19d73a36ffd06c16" id=0e243598-e8b5-4cbd-a23c-96f162911bce name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:05:37 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-87ba1a5f1e21cbaa92adc93c8cb3dd7d42a89e98b903977c19d73a36ffd06c16-userdata-shm.mount: Deactivated successfully. 
Feb 23 20:05:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:37.281325783Z" level=info msg="runSandbox: removing pod sandbox from storage: 87ba1a5f1e21cbaa92adc93c8cb3dd7d42a89e98b903977c19d73a36ffd06c16" id=0e243598-e8b5-4cbd-a23c-96f162911bce name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:05:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:37.283021390Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=0e243598-e8b5-4cbd-a23c-96f162911bce name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:05:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:37.283050520Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=0e243598-e8b5-4cbd-a23c-96f162911bce name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:05:37 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:05:37.283285 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(87ba1a5f1e21cbaa92adc93c8cb3dd7d42a89e98b903977c19d73a36ffd06c16): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 20:05:37 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:05:37.283344 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(87ba1a5f1e21cbaa92adc93c8cb3dd7d42a89e98b903977c19d73a36ffd06c16): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 20:05:37 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:05:37.283368 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(87ba1a5f1e21cbaa92adc93c8cb3dd7d42a89e98b903977c19d73a36ffd06c16): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 20:05:37 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:05:37.283427 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(87ba1a5f1e21cbaa92adc93c8cb3dd7d42a89e98b903977c19d73a36ffd06c16): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 20:05:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:38.635003745Z" level=warning msg="Failed to find container exit file for a3a9d76bc9909e0de9ee32020babb3260ad7ac88201ac9ef96ab2ea6d6f80e5c: timed out waiting for the condition" id=c1da4579-8d42-4f16-8f24-9a37948cf2c1 name=/runtime.v1.RuntimeService/StopContainer Feb 23 20:05:38 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-dd73ddb161220388d20bfa3d4a61e467edf541e66dff26e8192256e03475ee96-merged.mount: Deactivated successfully. 
Feb 23 20:05:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:42.415926513Z" level=warning msg="Failed to find container exit file for a3a9d76bc9909e0de9ee32020babb3260ad7ac88201ac9ef96ab2ea6d6f80e5c: timed out waiting for the condition" id=c1da4579-8d42-4f16-8f24-9a37948cf2c1 name=/runtime.v1.RuntimeService/StopContainer Feb 23 20:05:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:42.417668825Z" level=info msg="Stopped container a3a9d76bc9909e0de9ee32020babb3260ad7ac88201ac9ef96ab2ea6d6f80e5c: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=c1da4579-8d42-4f16-8f24-9a37948cf2c1 name=/runtime.v1.RuntimeService/StopContainer Feb 23 20:05:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:42.418336317Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=da4c3275-198c-4ad2-b60b-6b4fb2b99cd5 name=/runtime.v1.ImageService/ImageStatus Feb 23 20:05:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:42.418501742Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=da4c3275-198c-4ad2-b60b-6b4fb2b99cd5 name=/runtime.v1.ImageService/ImageStatus Feb 23 20:05:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:42.419091462Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=e7a4ed12-6562-4bda-b316-9205c2bd64f6 name=/runtime.v1.ImageService/ImageStatus Feb 23 20:05:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 
20:05:42.419240138Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=e7a4ed12-6562-4bda-b316-9205c2bd64f6 name=/runtime.v1.ImageService/ImageStatus Feb 23 20:05:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:42.419876239Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=5774242d-44fb-473a-9c9a-dccb2a0494dd name=/runtime.v1.RuntimeService/CreateContainer Feb 23 20:05:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:42.419981143Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 20:05:42 ip-10-0-136-68 systemd[1]: Started crio-conmon-f593d401d06964c92f54c75f4c95111cb614feacd6ec24eb96f0e3bc8aa5f8c0.scope. Feb 23 20:05:42 ip-10-0-136-68 systemd[1]: Started libcontainer container f593d401d06964c92f54c75f4c95111cb614feacd6ec24eb96f0e3bc8aa5f8c0. Feb 23 20:05:42 ip-10-0-136-68 conmon[21341]: conmon f593d401d06964c92f54 : Failed to write to cgroup.event_control Operation not supported Feb 23 20:05:42 ip-10-0-136-68 systemd[1]: crio-conmon-f593d401d06964c92f54c75f4c95111cb614feacd6ec24eb96f0e3bc8aa5f8c0.scope: Deactivated successfully. 
Feb 23 20:05:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:42.558199822Z" level=info msg="Created container f593d401d06964c92f54c75f4c95111cb614feacd6ec24eb96f0e3bc8aa5f8c0: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=5774242d-44fb-473a-9c9a-dccb2a0494dd name=/runtime.v1.RuntimeService/CreateContainer Feb 23 20:05:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:42.558778862Z" level=info msg="Starting container: f593d401d06964c92f54c75f4c95111cb614feacd6ec24eb96f0e3bc8aa5f8c0" id=4e3b1e43-8685-4086-98f3-26dddf88cdb9 name=/runtime.v1.RuntimeService/StartContainer Feb 23 20:05:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:42.565472676Z" level=info msg="Started container" PID=21353 containerID=f593d401d06964c92f54c75f4c95111cb614feacd6ec24eb96f0e3bc8aa5f8c0 description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver id=4e3b1e43-8685-4086-98f3-26dddf88cdb9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4ac357af36e7dd4792f93249425e93fd84aed45840e9fbb3d0ae6c0af964798 Feb 23 20:05:42 ip-10-0-136-68 systemd[1]: crio-f593d401d06964c92f54c75f4c95111cb614feacd6ec24eb96f0e3bc8aa5f8c0.scope: Deactivated successfully. 
Feb 23 20:05:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:42.627764437Z" level=warning msg="Failed to find container exit file for a3a9d76bc9909e0de9ee32020babb3260ad7ac88201ac9ef96ab2ea6d6f80e5c: timed out waiting for the condition" id=2e28774f-830d-4e42-a3f5-4b313eb21cbf name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 20:05:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:46.375939673Z" level=warning msg="Failed to find container exit file for fbeddbbb8be668d9a8b25933d2b1100f960a28a7cf2959aeda173dc5a4fe0543: timed out waiting for the condition" id=b2477512-bcb7-4896-86ed-06eddde43f32 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 20:05:46 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:05:46.376763 2199 generic.go:332] "Generic (PLEG): container finished" podID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerID="a3a9d76bc9909e0de9ee32020babb3260ad7ac88201ac9ef96ab2ea6d6f80e5c" exitCode=-1 Feb 23 20:05:46 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:05:46.376799 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerDied Data:a3a9d76bc9909e0de9ee32020babb3260ad7ac88201ac9ef96ab2ea6d6f80e5c} Feb 23 20:05:46 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:05:46.376829 2199 scope.go:115] "RemoveContainer" containerID="fbeddbbb8be668d9a8b25933d2b1100f960a28a7cf2959aeda173dc5a4fe0543" Feb 23 20:05:49 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:05:49.216829 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 20:05:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:49.217224887Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=fea167c6-8bd2-482d-8e3b-295dc919499f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:05:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:49.217307658Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 20:05:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:49.222832084Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:6f93104058e4e9205028645e2d598d42106650866be6d0808cbe867b53d82fd8 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/56a70365-d124-4fd8-b099-9a007c4e7ef3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:05:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:49.222867859Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:05:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:50.126007340Z" level=warning msg="Failed to find container exit file for fbeddbbb8be668d9a8b25933d2b1100f960a28a7cf2959aeda173dc5a4fe0543: timed out waiting for the condition" id=725ddd4c-519e-4906-9df2-6417052fb570 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 20:05:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:50.236657551Z" level=info msg="NetworkStart: stopping network for sandbox f4db71482035ca9105bd0cdf733ad64cf65f65daff413af3aa164466be20ad93" id=635a0d9b-a5d1-47ab-8551-6b5aa8b7db2e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:05:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:50.236758168Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics 
ID:f4db71482035ca9105bd0cdf733ad64cf65f65daff413af3aa164466be20ad93 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/4559d7af-c2a0-4ee4-8228-332e7d2da2d7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:05:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:50.236787289Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 20:05:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:50.236794128Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 20:05:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:50.236800751Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:05:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:51.139983395Z" level=warning msg="Failed to find container exit file for a3a9d76bc9909e0de9ee32020babb3260ad7ac88201ac9ef96ab2ea6d6f80e5c: timed out waiting for the condition" id=e287e465-15d7-4d20-bd09-00eb9c603ac9 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 20:05:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:53.243813784Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(719735becd338bc28e63b111ad8483f738bc648844ae891465c75282e7902440): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=dd22c5fc-c592-4953-9ffa-f04085a0dc95 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:05:53 ip-10-0-136-68 
crio[2158]: time="2023-02-23 20:05:53.243867837Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 719735becd338bc28e63b111ad8483f738bc648844ae891465c75282e7902440" id=dd22c5fc-c592-4953-9ffa-f04085a0dc95 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:05:53 ip-10-0-136-68 systemd[1]: run-utsns-2e6d9969\x2d634c\x2d4505\x2d8064\x2dff0aeca0af7d.mount: Deactivated successfully. Feb 23 20:05:53 ip-10-0-136-68 systemd[1]: run-ipcns-2e6d9969\x2d634c\x2d4505\x2d8064\x2dff0aeca0af7d.mount: Deactivated successfully. Feb 23 20:05:53 ip-10-0-136-68 systemd[1]: run-netns-2e6d9969\x2d634c\x2d4505\x2d8064\x2dff0aeca0af7d.mount: Deactivated successfully. Feb 23 20:05:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:53.272331726Z" level=info msg="runSandbox: deleting pod ID 719735becd338bc28e63b111ad8483f738bc648844ae891465c75282e7902440 from idIndex" id=dd22c5fc-c592-4953-9ffa-f04085a0dc95 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:05:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:53.272372870Z" level=info msg="runSandbox: removing pod sandbox 719735becd338bc28e63b111ad8483f738bc648844ae891465c75282e7902440" id=dd22c5fc-c592-4953-9ffa-f04085a0dc95 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:05:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:53.272424725Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 719735becd338bc28e63b111ad8483f738bc648844ae891465c75282e7902440" id=dd22c5fc-c592-4953-9ffa-f04085a0dc95 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:05:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:53.272448127Z" level=info msg="runSandbox: unmounting shmPath for sandbox 719735becd338bc28e63b111ad8483f738bc648844ae891465c75282e7902440" id=dd22c5fc-c592-4953-9ffa-f04085a0dc95 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:05:53 ip-10-0-136-68 systemd[1]: 
run-containers-storage-overlay\x2dcontainers-719735becd338bc28e63b111ad8483f738bc648844ae891465c75282e7902440-userdata-shm.mount: Deactivated successfully. Feb 23 20:05:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:53.276305981Z" level=info msg="runSandbox: removing pod sandbox from storage: 719735becd338bc28e63b111ad8483f738bc648844ae891465c75282e7902440" id=dd22c5fc-c592-4953-9ffa-f04085a0dc95 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:05:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:53.277870999Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=dd22c5fc-c592-4953-9ffa-f04085a0dc95 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:05:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:53.277905461Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=dd22c5fc-c592-4953-9ffa-f04085a0dc95 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:05:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:05:53.278117 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(719735becd338bc28e63b111ad8483f738bc648844ae891465c75282e7902440): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 20:05:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:05:53.278175 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(719735becd338bc28e63b111ad8483f738bc648844ae891465c75282e7902440): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 20:05:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:05:53.278203 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(719735becd338bc28e63b111ad8483f738bc648844ae891465c75282e7902440): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 20:05:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:05:53.278299 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(719735becd338bc28e63b111ad8483f738bc648844ae891465c75282e7902440): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 20:05:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:53.876529046Z" level=warning msg="Failed to find container exit file for fbeddbbb8be668d9a8b25933d2b1100f960a28a7cf2959aeda173dc5a4fe0543: timed out waiting for the condition" id=a7889ec0-33b1-4b2d-a3b4-864803c50d91 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 20:05:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:53.877045729Z" level=info msg="Removing container: fbeddbbb8be668d9a8b25933d2b1100f960a28a7cf2959aeda173dc5a4fe0543" id=d95bb7db-16e7-4240-98cc-55206c59bb82 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 20:05:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:54.889087248Z" level=warning msg="Failed to find container exit file for fbeddbbb8be668d9a8b25933d2b1100f960a28a7cf2959aeda173dc5a4fe0543: timed out waiting for the condition" id=e01cb0f4-5a80-429b-b1b6-967b92329db6 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 20:05:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:05:54.890063 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerStarted Data:f593d401d06964c92f54c75f4c95111cb614feacd6ec24eb96f0e3bc8aa5f8c0} Feb 23 20:05:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:05:56.292735 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:05:56 ip-10-0-136-68 
kubenswrapper[2199]: E0223 20:05:56.293043 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:05:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:05:56.293275 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:05:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:05:56.293322 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 20:05:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:57.244395194Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(b83adb1a6de232ac45d2e774c13530401a9a5f8651ca38def18ea02e4553dffb): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" 
name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=98b0bd57-35a2-43ce-9ce5-5e2f1ef0417b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:05:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:57.244450755Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox b83adb1a6de232ac45d2e774c13530401a9a5f8651ca38def18ea02e4553dffb" id=98b0bd57-35a2-43ce-9ce5-5e2f1ef0417b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:05:57 ip-10-0-136-68 systemd[1]: run-utsns-9214ece2\x2d236a\x2d4797\x2db67b\x2d53b22ca2715f.mount: Deactivated successfully. Feb 23 20:05:57 ip-10-0-136-68 systemd[1]: run-ipcns-9214ece2\x2d236a\x2d4797\x2db67b\x2d53b22ca2715f.mount: Deactivated successfully. Feb 23 20:05:57 ip-10-0-136-68 systemd[1]: run-netns-9214ece2\x2d236a\x2d4797\x2db67b\x2d53b22ca2715f.mount: Deactivated successfully. 
Feb 23 20:05:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:57.265323972Z" level=info msg="runSandbox: deleting pod ID b83adb1a6de232ac45d2e774c13530401a9a5f8651ca38def18ea02e4553dffb from idIndex" id=98b0bd57-35a2-43ce-9ce5-5e2f1ef0417b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:05:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:57.265364926Z" level=info msg="runSandbox: removing pod sandbox b83adb1a6de232ac45d2e774c13530401a9a5f8651ca38def18ea02e4553dffb" id=98b0bd57-35a2-43ce-9ce5-5e2f1ef0417b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:05:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:57.265400174Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox b83adb1a6de232ac45d2e774c13530401a9a5f8651ca38def18ea02e4553dffb" id=98b0bd57-35a2-43ce-9ce5-5e2f1ef0417b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:05:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:57.265415030Z" level=info msg="runSandbox: unmounting shmPath for sandbox b83adb1a6de232ac45d2e774c13530401a9a5f8651ca38def18ea02e4553dffb" id=98b0bd57-35a2-43ce-9ce5-5e2f1ef0417b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:05:57 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-b83adb1a6de232ac45d2e774c13530401a9a5f8651ca38def18ea02e4553dffb-userdata-shm.mount: Deactivated successfully. 
Feb 23 20:05:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:57.271320452Z" level=info msg="runSandbox: removing pod sandbox from storage: b83adb1a6de232ac45d2e774c13530401a9a5f8651ca38def18ea02e4553dffb" id=98b0bd57-35a2-43ce-9ce5-5e2f1ef0417b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:05:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:57.272926637Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=98b0bd57-35a2-43ce-9ce5-5e2f1ef0417b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:05:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:57.272957004Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=98b0bd57-35a2-43ce-9ce5-5e2f1ef0417b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:05:57 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:05:57.273146 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(b83adb1a6de232ac45d2e774c13530401a9a5f8651ca38def18ea02e4553dffb): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 20:05:57 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:05:57.273198 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(b83adb1a6de232ac45d2e774c13530401a9a5f8651ca38def18ea02e4553dffb): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 20:05:57 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:05:57.273225 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(b83adb1a6de232ac45d2e774c13530401a9a5f8651ca38def18ea02e4553dffb): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 20:05:57 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:05:57.273317 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(b83adb1a6de232ac45d2e774c13530401a9a5f8651ca38def18ea02e4553dffb): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 20:05:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:57.627104278Z" level=warning msg="Failed to find container exit file for fbeddbbb8be668d9a8b25933d2b1100f960a28a7cf2959aeda173dc5a4fe0543: timed out waiting for the condition" id=d95bb7db-16e7-4240-98cc-55206c59bb82 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 20:05:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:57.651874096Z" level=info msg="Removed container fbeddbbb8be668d9a8b25933d2b1100f960a28a7cf2959aeda173dc5a4fe0543: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=d95bb7db-16e7-4240-98cc-55206c59bb82 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 20:05:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:59.247988152Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(c2d5a108c88f68bca0b6149416856c4b380f16ec910ff339acd6ba5d3ddcd5f9): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=eb46031a-4f7d-4839-97ca-2831a237b34b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:05:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:59.248033735Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox c2d5a108c88f68bca0b6149416856c4b380f16ec910ff339acd6ba5d3ddcd5f9" id=eb46031a-4f7d-4839-97ca-2831a237b34b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:05:59 
ip-10-0-136-68 systemd[1]: run-utsns-164e81e9\x2df3c6\x2d44ba\x2da68c\x2d65e4f881c769.mount: Deactivated successfully. Feb 23 20:05:59 ip-10-0-136-68 systemd[1]: run-ipcns-164e81e9\x2df3c6\x2d44ba\x2da68c\x2d65e4f881c769.mount: Deactivated successfully. Feb 23 20:05:59 ip-10-0-136-68 systemd[1]: run-netns-164e81e9\x2df3c6\x2d44ba\x2da68c\x2d65e4f881c769.mount: Deactivated successfully. Feb 23 20:05:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:59.270332537Z" level=info msg="runSandbox: deleting pod ID c2d5a108c88f68bca0b6149416856c4b380f16ec910ff339acd6ba5d3ddcd5f9 from idIndex" id=eb46031a-4f7d-4839-97ca-2831a237b34b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:05:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:59.270375143Z" level=info msg="runSandbox: removing pod sandbox c2d5a108c88f68bca0b6149416856c4b380f16ec910ff339acd6ba5d3ddcd5f9" id=eb46031a-4f7d-4839-97ca-2831a237b34b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:05:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:59.270407394Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox c2d5a108c88f68bca0b6149416856c4b380f16ec910ff339acd6ba5d3ddcd5f9" id=eb46031a-4f7d-4839-97ca-2831a237b34b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:05:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:59.270420421Z" level=info msg="runSandbox: unmounting shmPath for sandbox c2d5a108c88f68bca0b6149416856c4b380f16ec910ff339acd6ba5d3ddcd5f9" id=eb46031a-4f7d-4839-97ca-2831a237b34b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:05:59 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-c2d5a108c88f68bca0b6149416856c4b380f16ec910ff339acd6ba5d3ddcd5f9-userdata-shm.mount: Deactivated successfully. 
Feb 23 20:05:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:59.278325272Z" level=info msg="runSandbox: removing pod sandbox from storage: c2d5a108c88f68bca0b6149416856c4b380f16ec910ff339acd6ba5d3ddcd5f9" id=eb46031a-4f7d-4839-97ca-2831a237b34b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:05:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:59.280092956Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=eb46031a-4f7d-4839-97ca-2831a237b34b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:05:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:05:59.280122306Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=eb46031a-4f7d-4839-97ca-2831a237b34b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:05:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:05:59.280358 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(c2d5a108c88f68bca0b6149416856c4b380f16ec910ff339acd6ba5d3ddcd5f9): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 20:05:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:05:59.280430 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(c2d5a108c88f68bca0b6149416856c4b380f16ec910ff339acd6ba5d3ddcd5f9): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 20:05:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:05:59.280473 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(c2d5a108c88f68bca0b6149416856c4b380f16ec910ff339acd6ba5d3ddcd5f9): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 20:05:59 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:05:59.280566 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(c2d5a108c88f68bca0b6149416856c4b380f16ec910ff339acd6ba5d3ddcd5f9): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 20:06:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:06:01.646001245Z" level=warning msg="Failed to find container exit file for a3a9d76bc9909e0de9ee32020babb3260ad7ac88201ac9ef96ab2ea6d6f80e5c: timed out waiting for the condition" id=9357690b-c8c4-4cc1-9c80-35e4bd9121d1 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 20:06:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:06:04.873016 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 20:06:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:06:04.873061 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 20:06:08 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:06:08.217166 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 20:06:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:06:08.217616430Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=6cd0c699-a4d3-4851-b924-831191676f3b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:06:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:06:08.217668751Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 20:06:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:06:08.223604653Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:862f8c7b0b0d1e9c55f8b53fd94b3ce2d5341457b910fa9415544ffa35ed2d05 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/e3b0934b-7c74-4766-a93e-e216e3c9d621 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:06:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:06:08.223640740Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:06:11 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:06:11.216574 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 20:06:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:06:11.217684623Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=d2aaf204-43b0-4af1-bce9-259b5d37d6a2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:06:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:06:11.217751851Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 20:06:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:06:11.225676935Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:1354fd49106d29ad4009f289fb175c0ca17737c4fd3e0d9b22be5e5e8b90350a UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/4b3e42f8-8a16-47ec-b995-cb3263972623 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:06:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:06:11.225712904Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:06:13 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:06:13.216765 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 20:06:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:06:13.217181347Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=63e13fec-7651-46c6-9b4a-b69fdfa46621 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:06:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:06:13.217281391Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 20:06:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:06:13.222585404Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:949a0fff6c2e7828c072bd799a8199bc694128ee4b8b1afd2bb89b04ec2008eb UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/1608dbef-9ef7-405a-8b35-61a5c1dc5bb0 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:06:13 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:06:13.222612394Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:06:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:06:14.872382 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 20:06:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:06:14.872434 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: 
connection refused" Feb 23 20:06:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:06:24.872396 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 20:06:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:06:24.872455 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 20:06:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:06:26.292051 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:06:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:06:26.292325 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:06:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:06:26.292574 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: 
code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:06:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:06:26.292602 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 20:06:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:06:34.234578743Z" level=info msg="NetworkStart: stopping network for sandbox 6f93104058e4e9205028645e2d598d42106650866be6d0808cbe867b53d82fd8" id=fea167c6-8bd2-482d-8e3b-295dc919499f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:06:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:06:34.234689398Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:6f93104058e4e9205028645e2d598d42106650866be6d0808cbe867b53d82fd8 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/56a70365-d124-4fd8-b099-9a007c4e7ef3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:06:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:06:34.234716974Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 20:06:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:06:34.234724368Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 20:06:34 
ip-10-0-136-68 crio[2158]: time="2023-02-23 20:06:34.234730851Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:06:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:06:34.872577 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 20:06:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:06:34.872629 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 20:06:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:06:35.245901468Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(f4db71482035ca9105bd0cdf733ad64cf65f65daff413af3aa164466be20ad93): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=635a0d9b-a5d1-47ab-8551-6b5aa8b7db2e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:06:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:06:35.245948457Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f4db71482035ca9105bd0cdf733ad64cf65f65daff413af3aa164466be20ad93" 
id=635a0d9b-a5d1-47ab-8551-6b5aa8b7db2e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:06:35 ip-10-0-136-68 systemd[1]: run-utsns-4559d7af\x2dc2a0\x2d4ee4\x2d8228\x2d332e7d2da2d7.mount: Deactivated successfully. Feb 23 20:06:35 ip-10-0-136-68 systemd[1]: run-ipcns-4559d7af\x2dc2a0\x2d4ee4\x2d8228\x2d332e7d2da2d7.mount: Deactivated successfully. Feb 23 20:06:35 ip-10-0-136-68 systemd[1]: run-netns-4559d7af\x2dc2a0\x2d4ee4\x2d8228\x2d332e7d2da2d7.mount: Deactivated successfully. Feb 23 20:06:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:06:35.270338463Z" level=info msg="runSandbox: deleting pod ID f4db71482035ca9105bd0cdf733ad64cf65f65daff413af3aa164466be20ad93 from idIndex" id=635a0d9b-a5d1-47ab-8551-6b5aa8b7db2e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:06:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:06:35.270379476Z" level=info msg="runSandbox: removing pod sandbox f4db71482035ca9105bd0cdf733ad64cf65f65daff413af3aa164466be20ad93" id=635a0d9b-a5d1-47ab-8551-6b5aa8b7db2e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:06:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:06:35.270427174Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f4db71482035ca9105bd0cdf733ad64cf65f65daff413af3aa164466be20ad93" id=635a0d9b-a5d1-47ab-8551-6b5aa8b7db2e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:06:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:06:35.270442406Z" level=info msg="runSandbox: unmounting shmPath for sandbox f4db71482035ca9105bd0cdf733ad64cf65f65daff413af3aa164466be20ad93" id=635a0d9b-a5d1-47ab-8551-6b5aa8b7db2e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:06:35 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-f4db71482035ca9105bd0cdf733ad64cf65f65daff413af3aa164466be20ad93-userdata-shm.mount: Deactivated successfully. 
Feb 23 20:06:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:06:35.276322030Z" level=info msg="runSandbox: removing pod sandbox from storage: f4db71482035ca9105bd0cdf733ad64cf65f65daff413af3aa164466be20ad93" id=635a0d9b-a5d1-47ab-8551-6b5aa8b7db2e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:06:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:06:35.277886690Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=635a0d9b-a5d1-47ab-8551-6b5aa8b7db2e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:06:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:06:35.277918610Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=635a0d9b-a5d1-47ab-8551-6b5aa8b7db2e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:06:35 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:06:35.278165 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(f4db71482035ca9105bd0cdf733ad64cf65f65daff413af3aa164466be20ad93): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 20:06:35 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:06:35.278237 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(f4db71482035ca9105bd0cdf733ad64cf65f65daff413af3aa164466be20ad93): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 20:06:35 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:06:35.278304 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(f4db71482035ca9105bd0cdf733ad64cf65f65daff413af3aa164466be20ad93): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 20:06:35 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:06:35.278395 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(f4db71482035ca9105bd0cdf733ad64cf65f65daff413af3aa164466be20ad93): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 20:06:37 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:06:37.217715 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:06:37 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:06:37.218042 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:06:37 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:06:37.218374 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:06:37 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:06:37.218414 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 20:06:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:06:44.872491 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 20:06:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:06:44.872542 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 20:06:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:06:44.872570 2199 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 20:06:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:06:44.873069 2199 kuberuntime_manager.go:659] "Message for Container of pod" containerName="csi-driver" containerStatusID={Type:cri-o ID:f593d401d06964c92f54c75f4c95111cb614feacd6ec24eb96f0e3bc8aa5f8c0} pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" containerMessage="Container csi-driver failed liveness probe, will be restarted" Feb 23 20:06:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:06:44.873222 2199 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" 
containerID="cri-o://f593d401d06964c92f54c75f4c95111cb614feacd6ec24eb96f0e3bc8aa5f8c0" gracePeriod=30 Feb 23 20:06:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:06:44.873392653Z" level=info msg="Stopping container: f593d401d06964c92f54c75f4c95111cb614feacd6ec24eb96f0e3bc8aa5f8c0 (timeout: 30s)" id=b2921379-4d37-420d-b607-089450819ae2 name=/runtime.v1.RuntimeService/StopContainer Feb 23 20:06:46 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:06:46.216685 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 20:06:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:06:46.217091172Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=da41d381-9e3e-42dc-b37f-ff7f47e746a6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:06:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:06:46.217145891Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 20:06:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:06:46.222691096Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:a776cffaf4701360af98eb1f7e30a951456480ce10ee199f94252233cede8b55 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/f6c6e244-e057-4d98-a4ea-342d63e5ccc0 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:06:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:06:46.222714553Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:06:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:06:48.632917453Z" level=warning msg="Failed to find container exit file for f593d401d06964c92f54c75f4c95111cb614feacd6ec24eb96f0e3bc8aa5f8c0: timed out waiting for the condition" 
id=b2921379-4d37-420d-b607-089450819ae2 name=/runtime.v1.RuntimeService/StopContainer Feb 23 20:06:48 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-1326ea8ffb1a107ef852babf2517161015ecbcee3595e4ea72d02077826b959b-merged.mount: Deactivated successfully. Feb 23 20:06:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:06:52.418927384Z" level=warning msg="Failed to find container exit file for f593d401d06964c92f54c75f4c95111cb614feacd6ec24eb96f0e3bc8aa5f8c0: timed out waiting for the condition" id=b2921379-4d37-420d-b607-089450819ae2 name=/runtime.v1.RuntimeService/StopContainer Feb 23 20:06:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:06:52.422080534Z" level=info msg="Stopped container f593d401d06964c92f54c75f4c95111cb614feacd6ec24eb96f0e3bc8aa5f8c0: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=b2921379-4d37-420d-b607-089450819ae2 name=/runtime.v1.RuntimeService/StopContainer Feb 23 20:06:52 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:06:52.422570 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 20:06:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:06:52.468849464Z" level=warning msg="Failed to find container exit file for f593d401d06964c92f54c75f4c95111cb614feacd6ec24eb96f0e3bc8aa5f8c0: timed out waiting for the condition" id=7337fb30-858f-4576-bf68-dbde0d1c302f name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 20:06:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:06:53.235365749Z" level=info msg="NetworkStart: stopping network for sandbox 862f8c7b0b0d1e9c55f8b53fd94b3ce2d5341457b910fa9415544ffa35ed2d05" id=6cd0c699-a4d3-4851-b924-831191676f3b 
name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:06:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:06:53.235484731Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:862f8c7b0b0d1e9c55f8b53fd94b3ce2d5341457b910fa9415544ffa35ed2d05 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/e3b0934b-7c74-4766-a93e-e216e3c9d621 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:06:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:06:53.235516112Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 20:06:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:06:53.235527155Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 20:06:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:06:53.235533669Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:06:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:06:56.217083172Z" level=warning msg="Failed to find container exit file for a3a9d76bc9909e0de9ee32020babb3260ad7ac88201ac9ef96ab2ea6d6f80e5c: timed out waiting for the condition" id=5409b396-0143-4b95-a03b-5f7d840b5e66 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 20:06:56 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:06:56.218091 2199 generic.go:332] "Generic (PLEG): container finished" podID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerID="f593d401d06964c92f54c75f4c95111cb614feacd6ec24eb96f0e3bc8aa5f8c0" exitCode=-1 Feb 23 20:06:56 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:06:56.219612 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerDied Data:f593d401d06964c92f54c75f4c95111cb614feacd6ec24eb96f0e3bc8aa5f8c0} Feb 23 
20:06:56 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:06:56.219642 2199 scope.go:115] "RemoveContainer" containerID="a3a9d76bc9909e0de9ee32020babb3260ad7ac88201ac9ef96ab2ea6d6f80e5c" Feb 23 20:06:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:06:56.237706268Z" level=info msg="NetworkStart: stopping network for sandbox 1354fd49106d29ad4009f289fb175c0ca17737c4fd3e0d9b22be5e5e8b90350a" id=d2aaf204-43b0-4af1-bce9-259b5d37d6a2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:06:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:06:56.237855154Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:1354fd49106d29ad4009f289fb175c0ca17737c4fd3e0d9b22be5e5e8b90350a UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/4b3e42f8-8a16-47ec-b995-cb3263972623 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:06:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:06:56.237894594Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 20:06:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:06:56.237906903Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 20:06:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:06:56.237917696Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:06:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:06:56.292086 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 
23 20:06:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:06:56.292367 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:06:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:06:56.292581 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:06:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:06:56.292618 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 20:06:57 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:06:57.220085 2199 scope.go:115] "RemoveContainer" containerID="f593d401d06964c92f54c75f4c95111cb614feacd6ec24eb96f0e3bc8aa5f8c0" Feb 23 20:06:57 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:06:57.220662 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 20:06:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:06:58.235858547Z" level=info msg="NetworkStart: stopping network for sandbox 949a0fff6c2e7828c072bd799a8199bc694128ee4b8b1afd2bb89b04ec2008eb" id=63e13fec-7651-46c6-9b4a-b69fdfa46621 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:06:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:06:58.235962152Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:949a0fff6c2e7828c072bd799a8199bc694128ee4b8b1afd2bb89b04ec2008eb UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/1608dbef-9ef7-405a-8b35-61a5c1dc5bb0 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:06:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:06:58.235989765Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 20:06:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:06:58.235997114Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 20:06:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:06:58.236005536Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:06:59 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:06:59.979945979Z" level=warning msg="Failed to find container exit file for a3a9d76bc9909e0de9ee32020babb3260ad7ac88201ac9ef96ab2ea6d6f80e5c: timed out waiting for the condition" id=ca6af8e9-0f0d-40b2-a083-d861c7ede8ab name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 20:07:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 
20:07:03.739895123Z" level=warning msg="Failed to find container exit file for a3a9d76bc9909e0de9ee32020babb3260ad7ac88201ac9ef96ab2ea6d6f80e5c: timed out waiting for the condition" id=c44faf1e-bcab-4492-ba9e-e4342cdbd24a name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 20:07:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:03.740463259Z" level=info msg="Removing container: a3a9d76bc9909e0de9ee32020babb3260ad7ac88201ac9ef96ab2ea6d6f80e5c" id=83022c48-592f-4604-9a79-6e8555f5737d name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 20:07:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:07.489926850Z" level=warning msg="Failed to find container exit file for a3a9d76bc9909e0de9ee32020babb3260ad7ac88201ac9ef96ab2ea6d6f80e5c: timed out waiting for the condition" id=83022c48-592f-4604-9a79-6e8555f5737d name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 20:07:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:07.513565368Z" level=info msg="Removed container a3a9d76bc9909e0de9ee32020babb3260ad7ac88201ac9ef96ab2ea6d6f80e5c: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=83022c48-592f-4604-9a79-6e8555f5737d name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 20:07:11 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:07:11.217374 2199 scope.go:115] "RemoveContainer" containerID="f593d401d06964c92f54c75f4c95111cb614feacd6ec24eb96f0e3bc8aa5f8c0" Feb 23 20:07:11 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:07:11.217796 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 20:07:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:11.983030808Z" level=warning msg="Failed to find 
container exit file for f593d401d06964c92f54c75f4c95111cb614feacd6ec24eb96f0e3bc8aa5f8c0: timed out waiting for the condition" id=8b340be4-3c8e-48e8-886f-eb873742fbd1 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 20:07:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:19.244550529Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(6f93104058e4e9205028645e2d598d42106650866be6d0808cbe867b53d82fd8): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=fea167c6-8bd2-482d-8e3b-295dc919499f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:07:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:19.244605307Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 6f93104058e4e9205028645e2d598d42106650866be6d0808cbe867b53d82fd8" id=fea167c6-8bd2-482d-8e3b-295dc919499f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:07:19 ip-10-0-136-68 systemd[1]: run-utsns-56a70365\x2dd124\x2d4fd8\x2db099\x2d9a007c4e7ef3.mount: Deactivated successfully. Feb 23 20:07:19 ip-10-0-136-68 systemd[1]: run-ipcns-56a70365\x2dd124\x2d4fd8\x2db099\x2d9a007c4e7ef3.mount: Deactivated successfully. Feb 23 20:07:19 ip-10-0-136-68 systemd[1]: run-netns-56a70365\x2dd124\x2d4fd8\x2db099\x2d9a007c4e7ef3.mount: Deactivated successfully. 
Feb 23 20:07:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:19.264322258Z" level=info msg="runSandbox: deleting pod ID 6f93104058e4e9205028645e2d598d42106650866be6d0808cbe867b53d82fd8 from idIndex" id=fea167c6-8bd2-482d-8e3b-295dc919499f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:07:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:19.264363483Z" level=info msg="runSandbox: removing pod sandbox 6f93104058e4e9205028645e2d598d42106650866be6d0808cbe867b53d82fd8" id=fea167c6-8bd2-482d-8e3b-295dc919499f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:07:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:19.264401607Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 6f93104058e4e9205028645e2d598d42106650866be6d0808cbe867b53d82fd8" id=fea167c6-8bd2-482d-8e3b-295dc919499f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:07:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:19.264427146Z" level=info msg="runSandbox: unmounting shmPath for sandbox 6f93104058e4e9205028645e2d598d42106650866be6d0808cbe867b53d82fd8" id=fea167c6-8bd2-482d-8e3b-295dc919499f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:07:19 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-6f93104058e4e9205028645e2d598d42106650866be6d0808cbe867b53d82fd8-userdata-shm.mount: Deactivated successfully. 
Feb 23 20:07:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:19.268310496Z" level=info msg="runSandbox: removing pod sandbox from storage: 6f93104058e4e9205028645e2d598d42106650866be6d0808cbe867b53d82fd8" id=fea167c6-8bd2-482d-8e3b-295dc919499f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:07:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:19.269799675Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=fea167c6-8bd2-482d-8e3b-295dc919499f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:07:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:19.269827887Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=fea167c6-8bd2-482d-8e3b-295dc919499f name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:07:19 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:07:19.270036 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(6f93104058e4e9205028645e2d598d42106650866be6d0808cbe867b53d82fd8): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 20:07:19 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:07:19.270097 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(6f93104058e4e9205028645e2d598d42106650866be6d0808cbe867b53d82fd8): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 20:07:19 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:07:19.270139 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(6f93104058e4e9205028645e2d598d42106650866be6d0808cbe867b53d82fd8): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 20:07:19 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:07:19.270219 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(6f93104058e4e9205028645e2d598d42106650866be6d0808cbe867b53d82fd8): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 20:07:25 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:07:25.217396 2199 scope.go:115] "RemoveContainer" containerID="f593d401d06964c92f54c75f4c95111cb614feacd6ec24eb96f0e3bc8aa5f8c0" Feb 23 20:07:25 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:07:25.217785 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 20:07:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:07:26.292294 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:07:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:07:26.292610 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:07:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:07:26.292797 2199 remote_runtime.go:479] "ExecSync 
cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:07:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:07:26.292829 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 20:07:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:31.234292388Z" level=info msg="NetworkStart: stopping network for sandbox a776cffaf4701360af98eb1f7e30a951456480ce10ee199f94252233cede8b55" id=da41d381-9e3e-42dc-b37f-ff7f47e746a6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:07:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:31.234404687Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:a776cffaf4701360af98eb1f7e30a951456480ce10ee199f94252233cede8b55 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/f6c6e244-e057-4d98-a4ea-342d63e5ccc0 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:07:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:31.234435164Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 20:07:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:31.234448009Z" level=warning msg="falling back 
to loading from existing plugins on disk" Feb 23 20:07:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:31.234455151Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:07:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:07:34.217397 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 20:07:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:34.217826919Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=c2d62cda-6165-4030-bfc6-66b056adfd56 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:07:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:34.217894707Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 20:07:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:34.223452084Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:6867eb9e61dbbd8d991d7a7125eb5c3391e726b149bc45ab3662e61fe17ff225 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/0f478664-0fbb-482f-b8ac-a4d521dc4484 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:07:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:34.223481897Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:07:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:38.245584079Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(862f8c7b0b0d1e9c55f8b53fd94b3ce2d5341457b910fa9415544ffa35ed2d05): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network 
\"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=6cd0c699-a4d3-4851-b924-831191676f3b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:07:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:38.245625788Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 862f8c7b0b0d1e9c55f8b53fd94b3ce2d5341457b910fa9415544ffa35ed2d05" id=6cd0c699-a4d3-4851-b924-831191676f3b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:07:38 ip-10-0-136-68 systemd[1]: run-utsns-e3b0934b\x2d7c74\x2d4766\x2da93e\x2de216e3c9d621.mount: Deactivated successfully. Feb 23 20:07:38 ip-10-0-136-68 systemd[1]: run-ipcns-e3b0934b\x2d7c74\x2d4766\x2da93e\x2de216e3c9d621.mount: Deactivated successfully. Feb 23 20:07:38 ip-10-0-136-68 systemd[1]: run-netns-e3b0934b\x2d7c74\x2d4766\x2da93e\x2de216e3c9d621.mount: Deactivated successfully. 
Feb 23 20:07:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:38.268317423Z" level=info msg="runSandbox: deleting pod ID 862f8c7b0b0d1e9c55f8b53fd94b3ce2d5341457b910fa9415544ffa35ed2d05 from idIndex" id=6cd0c699-a4d3-4851-b924-831191676f3b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:07:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:38.268352953Z" level=info msg="runSandbox: removing pod sandbox 862f8c7b0b0d1e9c55f8b53fd94b3ce2d5341457b910fa9415544ffa35ed2d05" id=6cd0c699-a4d3-4851-b924-831191676f3b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:07:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:38.268377556Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 862f8c7b0b0d1e9c55f8b53fd94b3ce2d5341457b910fa9415544ffa35ed2d05" id=6cd0c699-a4d3-4851-b924-831191676f3b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:07:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:38.268391377Z" level=info msg="runSandbox: unmounting shmPath for sandbox 862f8c7b0b0d1e9c55f8b53fd94b3ce2d5341457b910fa9415544ffa35ed2d05" id=6cd0c699-a4d3-4851-b924-831191676f3b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:07:38 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-862f8c7b0b0d1e9c55f8b53fd94b3ce2d5341457b910fa9415544ffa35ed2d05-userdata-shm.mount: Deactivated successfully. 
Feb 23 20:07:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:38.275329391Z" level=info msg="runSandbox: removing pod sandbox from storage: 862f8c7b0b0d1e9c55f8b53fd94b3ce2d5341457b910fa9415544ffa35ed2d05" id=6cd0c699-a4d3-4851-b924-831191676f3b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:07:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:38.277000005Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=6cd0c699-a4d3-4851-b924-831191676f3b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:07:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:38.277031574Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=6cd0c699-a4d3-4851-b924-831191676f3b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:07:38 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:07:38.277203 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(862f8c7b0b0d1e9c55f8b53fd94b3ce2d5341457b910fa9415544ffa35ed2d05): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 20:07:38 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:07:38.277286 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(862f8c7b0b0d1e9c55f8b53fd94b3ce2d5341457b910fa9415544ffa35ed2d05): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 20:07:38 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:07:38.277323 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(862f8c7b0b0d1e9c55f8b53fd94b3ce2d5341457b910fa9415544ffa35ed2d05): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 20:07:38 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:07:38.277376 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(862f8c7b0b0d1e9c55f8b53fd94b3ce2d5341457b910fa9415544ffa35ed2d05): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 20:07:40 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:07:40.217333 2199 scope.go:115] "RemoveContainer" containerID="f593d401d06964c92f54c75f4c95111cb614feacd6ec24eb96f0e3bc8aa5f8c0" Feb 23 20:07:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:07:40.217896 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 20:07:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:41.247807339Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(1354fd49106d29ad4009f289fb175c0ca17737c4fd3e0d9b22be5e5e8b90350a): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=d2aaf204-43b0-4af1-bce9-259b5d37d6a2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:07:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:41.247860008Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 1354fd49106d29ad4009f289fb175c0ca17737c4fd3e0d9b22be5e5e8b90350a" id=d2aaf204-43b0-4af1-bce9-259b5d37d6a2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:07:41 ip-10-0-136-68 systemd[1]: run-utsns-4b3e42f8\x2d8a16\x2d47ec\x2db995\x2dcb3263972623.mount: Deactivated 
successfully. Feb 23 20:07:41 ip-10-0-136-68 systemd[1]: run-ipcns-4b3e42f8\x2d8a16\x2d47ec\x2db995\x2dcb3263972623.mount: Deactivated successfully. Feb 23 20:07:41 ip-10-0-136-68 systemd[1]: run-netns-4b3e42f8\x2d8a16\x2d47ec\x2db995\x2dcb3263972623.mount: Deactivated successfully. Feb 23 20:07:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:41.281352349Z" level=info msg="runSandbox: deleting pod ID 1354fd49106d29ad4009f289fb175c0ca17737c4fd3e0d9b22be5e5e8b90350a from idIndex" id=d2aaf204-43b0-4af1-bce9-259b5d37d6a2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:07:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:41.281397434Z" level=info msg="runSandbox: removing pod sandbox 1354fd49106d29ad4009f289fb175c0ca17737c4fd3e0d9b22be5e5e8b90350a" id=d2aaf204-43b0-4af1-bce9-259b5d37d6a2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:07:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:41.281432451Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 1354fd49106d29ad4009f289fb175c0ca17737c4fd3e0d9b22be5e5e8b90350a" id=d2aaf204-43b0-4af1-bce9-259b5d37d6a2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:07:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:41.281447208Z" level=info msg="runSandbox: unmounting shmPath for sandbox 1354fd49106d29ad4009f289fb175c0ca17737c4fd3e0d9b22be5e5e8b90350a" id=d2aaf204-43b0-4af1-bce9-259b5d37d6a2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:07:41 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-1354fd49106d29ad4009f289fb175c0ca17737c4fd3e0d9b22be5e5e8b90350a-userdata-shm.mount: Deactivated successfully. 
Feb 23 20:07:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:41.298359873Z" level=info msg="runSandbox: removing pod sandbox from storage: 1354fd49106d29ad4009f289fb175c0ca17737c4fd3e0d9b22be5e5e8b90350a" id=d2aaf204-43b0-4af1-bce9-259b5d37d6a2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:07:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:41.300114119Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=d2aaf204-43b0-4af1-bce9-259b5d37d6a2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:07:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:41.300148724Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=d2aaf204-43b0-4af1-bce9-259b5d37d6a2 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:07:41 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:07:41.300430 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(1354fd49106d29ad4009f289fb175c0ca17737c4fd3e0d9b22be5e5e8b90350a): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 20:07:41 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:07:41.300502 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(1354fd49106d29ad4009f289fb175c0ca17737c4fd3e0d9b22be5e5e8b90350a): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 20:07:41 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:07:41.300543 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(1354fd49106d29ad4009f289fb175c0ca17737c4fd3e0d9b22be5e5e8b90350a): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 20:07:41 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:07:41.300625 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(1354fd49106d29ad4009f289fb175c0ca17737c4fd3e0d9b22be5e5e8b90350a): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 20:07:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:43.246186050Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(949a0fff6c2e7828c072bd799a8199bc694128ee4b8b1afd2bb89b04ec2008eb): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=63e13fec-7651-46c6-9b4a-b69fdfa46621 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:07:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:43.246235906Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 949a0fff6c2e7828c072bd799a8199bc694128ee4b8b1afd2bb89b04ec2008eb" id=63e13fec-7651-46c6-9b4a-b69fdfa46621 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:07:43 ip-10-0-136-68 systemd[1]: run-utsns-1608dbef\x2d9ef7\x2d405a\x2d8b35\x2d61a5c1dc5bb0.mount: Deactivated successfully. Feb 23 20:07:43 ip-10-0-136-68 systemd[1]: run-ipcns-1608dbef\x2d9ef7\x2d405a\x2d8b35\x2d61a5c1dc5bb0.mount: Deactivated successfully. Feb 23 20:07:43 ip-10-0-136-68 systemd[1]: run-netns-1608dbef\x2d9ef7\x2d405a\x2d8b35\x2d61a5c1dc5bb0.mount: Deactivated successfully. 
Feb 23 20:07:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:43.268333155Z" level=info msg="runSandbox: deleting pod ID 949a0fff6c2e7828c072bd799a8199bc694128ee4b8b1afd2bb89b04ec2008eb from idIndex" id=63e13fec-7651-46c6-9b4a-b69fdfa46621 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:07:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:43.268376344Z" level=info msg="runSandbox: removing pod sandbox 949a0fff6c2e7828c072bd799a8199bc694128ee4b8b1afd2bb89b04ec2008eb" id=63e13fec-7651-46c6-9b4a-b69fdfa46621 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:07:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:43.268406203Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 949a0fff6c2e7828c072bd799a8199bc694128ee4b8b1afd2bb89b04ec2008eb" id=63e13fec-7651-46c6-9b4a-b69fdfa46621 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:07:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:43.268418000Z" level=info msg="runSandbox: unmounting shmPath for sandbox 949a0fff6c2e7828c072bd799a8199bc694128ee4b8b1afd2bb89b04ec2008eb" id=63e13fec-7651-46c6-9b4a-b69fdfa46621 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:07:43 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-949a0fff6c2e7828c072bd799a8199bc694128ee4b8b1afd2bb89b04ec2008eb-userdata-shm.mount: Deactivated successfully. 
Feb 23 20:07:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:43.273325229Z" level=info msg="runSandbox: removing pod sandbox from storage: 949a0fff6c2e7828c072bd799a8199bc694128ee4b8b1afd2bb89b04ec2008eb" id=63e13fec-7651-46c6-9b4a-b69fdfa46621 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:07:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:43.274868694Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=63e13fec-7651-46c6-9b4a-b69fdfa46621 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:07:43 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:43.274898587Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=63e13fec-7651-46c6-9b4a-b69fdfa46621 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:07:43 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:07:43.275118 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(949a0fff6c2e7828c072bd799a8199bc694128ee4b8b1afd2bb89b04ec2008eb): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 20:07:43 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:07:43.275181 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(949a0fff6c2e7828c072bd799a8199bc694128ee4b8b1afd2bb89b04ec2008eb): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 20:07:43 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:07:43.275205 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(949a0fff6c2e7828c072bd799a8199bc694128ee4b8b1afd2bb89b04ec2008eb): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 20:07:43 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:07:43.275283 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(949a0fff6c2e7828c072bd799a8199bc694128ee4b8b1afd2bb89b04ec2008eb): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 20:07:53 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:07:53.217096 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 20:07:53 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:07:53.217095 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 20:07:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:53.217592547Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=12713dac-a110-4c74-8bf4-249b2f1f45f9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:07:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:53.217658037Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 20:07:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:53.217591493Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=e60b79e4-0b8f-4219-8294-ea593974f784 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:07:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:53.217732595Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 20:07:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:53.224713120Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:21d70f3a44d3b5c5ef3559f0323e95bcd5f1cc853b76592369e96504a1e6c3d1 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/eb389449-1b66-405b-9b6b-b1466207b99f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:07:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:53.224747712Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:07:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:53.225144619Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:e36db8348a0a4bac8e9227572cf015b65611c1ae7a7c9570e0db3e41c76728f4 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/e17fb6a4-a33e-457b-96a0-e9a9cdef4369 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] 
Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:07:53 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:53.225173679Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:07:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:07:54.216969 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 20:07:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:54.217444556Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=f29ca0bb-c24f-4d52-b56a-e9d1911047c8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:07:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:54.217526141Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 20:07:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:54.223509697Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:aa56de30f63f8c0f31fa6eb09fd4cbd756e52efb3242961f1c8122b71d66ac41 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/de3ff49f-cba8-4520-8c2d-7cb6760dc2e4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:07:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:07:54.223546944Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:07:55 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:07:55.217140 2199 scope.go:115] "RemoveContainer" containerID="f593d401d06964c92f54c75f4c95111cb614feacd6ec24eb96f0e3bc8aa5f8c0" Feb 23 20:07:55 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:07:55.217713 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 20:07:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:07:56.292549 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:07:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:07:56.292783 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:07:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:07:56.293032 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:07:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:07:56.293054 2199 prober.go:106] "Probe errored" 
err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 20:08:01 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:08:01.216709 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:08:01 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:08:01.217007 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:08:01 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:08:01.217299 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f 
/etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:08:01 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:08:01.217337 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 20:08:08 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:08:08.216661 2199 scope.go:115] "RemoveContainer" containerID="f593d401d06964c92f54c75f4c95111cb614feacd6ec24eb96f0e3bc8aa5f8c0" Feb 23 20:08:08 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:08:08.217212 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 20:08:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:08:16.244223135Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(a776cffaf4701360af98eb1f7e30a951456480ce10ee199f94252233cede8b55): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=da41d381-9e3e-42dc-b37f-ff7f47e746a6 
name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:08:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:08:16.244299734Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a776cffaf4701360af98eb1f7e30a951456480ce10ee199f94252233cede8b55" id=da41d381-9e3e-42dc-b37f-ff7f47e746a6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:08:16 ip-10-0-136-68 systemd[1]: run-utsns-f6c6e244\x2de057\x2d4d98\x2da4ea\x2d342d63e5ccc0.mount: Deactivated successfully. Feb 23 20:08:16 ip-10-0-136-68 systemd[1]: run-ipcns-f6c6e244\x2de057\x2d4d98\x2da4ea\x2d342d63e5ccc0.mount: Deactivated successfully. Feb 23 20:08:16 ip-10-0-136-68 systemd[1]: run-netns-f6c6e244\x2de057\x2d4d98\x2da4ea\x2d342d63e5ccc0.mount: Deactivated successfully. Feb 23 20:08:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:08:16.283354764Z" level=info msg="runSandbox: deleting pod ID a776cffaf4701360af98eb1f7e30a951456480ce10ee199f94252233cede8b55 from idIndex" id=da41d381-9e3e-42dc-b37f-ff7f47e746a6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:08:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:08:16.283397247Z" level=info msg="runSandbox: removing pod sandbox a776cffaf4701360af98eb1f7e30a951456480ce10ee199f94252233cede8b55" id=da41d381-9e3e-42dc-b37f-ff7f47e746a6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:08:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:08:16.283435965Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a776cffaf4701360af98eb1f7e30a951456480ce10ee199f94252233cede8b55" id=da41d381-9e3e-42dc-b37f-ff7f47e746a6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:08:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:08:16.283456911Z" level=info msg="runSandbox: unmounting shmPath for sandbox a776cffaf4701360af98eb1f7e30a951456480ce10ee199f94252233cede8b55" id=da41d381-9e3e-42dc-b37f-ff7f47e746a6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:08:16 ip-10-0-136-68 systemd[1]: 
run-containers-storage-overlay\x2dcontainers-a776cffaf4701360af98eb1f7e30a951456480ce10ee199f94252233cede8b55-userdata-shm.mount: Deactivated successfully. Feb 23 20:08:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:08:16.290316644Z" level=info msg="runSandbox: removing pod sandbox from storage: a776cffaf4701360af98eb1f7e30a951456480ce10ee199f94252233cede8b55" id=da41d381-9e3e-42dc-b37f-ff7f47e746a6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:08:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:08:16.291909966Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=da41d381-9e3e-42dc-b37f-ff7f47e746a6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:08:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:08:16.291940807Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=da41d381-9e3e-42dc-b37f-ff7f47e746a6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:08:16 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:08:16.292198 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(a776cffaf4701360af98eb1f7e30a951456480ce10ee199f94252233cede8b55): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 20:08:16 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:08:16.292338 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(a776cffaf4701360af98eb1f7e30a951456480ce10ee199f94252233cede8b55): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 20:08:16 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:08:16.292366 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(a776cffaf4701360af98eb1f7e30a951456480ce10ee199f94252233cede8b55): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 20:08:16 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:08:16.292433 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(a776cffaf4701360af98eb1f7e30a951456480ce10ee199f94252233cede8b55): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 20:08:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:08:19.235745697Z" level=info msg="NetworkStart: stopping network for sandbox 6867eb9e61dbbd8d991d7a7125eb5c3391e726b149bc45ab3662e61fe17ff225" id=c2d62cda-6165-4030-bfc6-66b056adfd56 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:08:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:08:19.235864047Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:6867eb9e61dbbd8d991d7a7125eb5c3391e726b149bc45ab3662e61fe17ff225 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/0f478664-0fbb-482f-b8ac-a4d521dc4484 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:08:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:08:19.235897373Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 20:08:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:08:19.235908396Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 20:08:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:08:19.235915348Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:08:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:08:20.216937 2199 scope.go:115] "RemoveContainer" containerID="f593d401d06964c92f54c75f4c95111cb614feacd6ec24eb96f0e3bc8aa5f8c0" Feb 23 20:08:20 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:08:20.217337 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver 
pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 20:08:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:08:26.291836 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:08:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:08:26.292100 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:08:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:08:26.292318 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:08:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:08:26.292355 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: 
checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 20:08:31 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:08:31.216322 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 20:08:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:08:31.216680274Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=554959a4-705d-4b7f-a785-702d67006ac4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:08:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:08:31.216741735Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 20:08:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:08:31.222061182Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:ddd825d6edd616cb969e79a51a0049d1ccddb3a5b2e77e7918768cd2456f0032 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/fa75af1c-a1a0-457a-932a-37e463ea8611 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:08:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:08:31.222086905Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:08:32 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:08:32.216597 2199 scope.go:115] "RemoveContainer" containerID="f593d401d06964c92f54c75f4c95111cb614feacd6ec24eb96f0e3bc8aa5f8c0" Feb 23 20:08:32 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:08:32.216985 2199 pod_workers.go:965] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 20:08:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:08:38.237528838Z" level=info msg="NetworkStart: stopping network for sandbox e36db8348a0a4bac8e9227572cf015b65611c1ae7a7c9570e0db3e41c76728f4" id=12713dac-a110-4c74-8bf4-249b2f1f45f9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:08:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:08:38.237660532Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:e36db8348a0a4bac8e9227572cf015b65611c1ae7a7c9570e0db3e41c76728f4 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/e17fb6a4-a33e-457b-96a0-e9a9cdef4369 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:08:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:08:38.237703365Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 20:08:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:08:38.237715664Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 20:08:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:08:38.237725795Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:08:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:08:38.239560008Z" level=info msg="NetworkStart: stopping network for sandbox 21d70f3a44d3b5c5ef3559f0323e95bcd5f1cc853b76592369e96504a1e6c3d1" id=e60b79e4-0b8f-4219-8294-ea593974f784 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:08:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:08:38.239667009Z" 
level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:21d70f3a44d3b5c5ef3559f0323e95bcd5f1cc853b76592369e96504a1e6c3d1 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/eb389449-1b66-405b-9b6b-b1466207b99f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:08:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:08:38.239703698Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 20:08:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:08:38.239713611Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 20:08:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:08:38.239723446Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:08:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:08:39.235581827Z" level=info msg="NetworkStart: stopping network for sandbox aa56de30f63f8c0f31fa6eb09fd4cbd756e52efb3242961f1c8122b71d66ac41" id=f29ca0bb-c24f-4d52-b56a-e9d1911047c8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:08:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:08:39.235718006Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:aa56de30f63f8c0f31fa6eb09fd4cbd756e52efb3242961f1c8122b71d66ac41 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/de3ff49f-cba8-4520-8c2d-7cb6760dc2e4 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:08:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:08:39.235753745Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 20:08:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:08:39.235764368Z" level=warning 
msg="falling back to loading from existing plugins on disk" Feb 23 20:08:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:08:39.235774805Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:08:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:08:44.216548 2199 scope.go:115] "RemoveContainer" containerID="f593d401d06964c92f54c75f4c95111cb614feacd6ec24eb96f0e3bc8aa5f8c0" Feb 23 20:08:44 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:08:44.217124 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 20:08:55 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:08:55.217056 2199 scope.go:115] "RemoveContainer" containerID="f593d401d06964c92f54c75f4c95111cb614feacd6ec24eb96f0e3bc8aa5f8c0" Feb 23 20:08:55 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:08:55.217460 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 20:08:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:08:56.292614 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: 
container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:08:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:08:56.292848 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:08:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:08:56.293095 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:08:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:08:56.293117 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 20:09:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:04.245701294Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox 
k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(6867eb9e61dbbd8d991d7a7125eb5c3391e726b149bc45ab3662e61fe17ff225): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=c2d62cda-6165-4030-bfc6-66b056adfd56 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:09:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:04.245744675Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 6867eb9e61dbbd8d991d7a7125eb5c3391e726b149bc45ab3662e61fe17ff225" id=c2d62cda-6165-4030-bfc6-66b056adfd56 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:09:04 ip-10-0-136-68 systemd[1]: run-utsns-0f478664\x2d0fbb\x2d482f\x2db8ac\x2da4d521dc4484.mount: Deactivated successfully. Feb 23 20:09:04 ip-10-0-136-68 systemd[1]: run-ipcns-0f478664\x2d0fbb\x2d482f\x2db8ac\x2da4d521dc4484.mount: Deactivated successfully. Feb 23 20:09:04 ip-10-0-136-68 systemd[1]: run-netns-0f478664\x2d0fbb\x2d482f\x2db8ac\x2da4d521dc4484.mount: Deactivated successfully. 
Feb 23 20:09:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:04.273323878Z" level=info msg="runSandbox: deleting pod ID 6867eb9e61dbbd8d991d7a7125eb5c3391e726b149bc45ab3662e61fe17ff225 from idIndex" id=c2d62cda-6165-4030-bfc6-66b056adfd56 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:09:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:04.273357011Z" level=info msg="runSandbox: removing pod sandbox 6867eb9e61dbbd8d991d7a7125eb5c3391e726b149bc45ab3662e61fe17ff225" id=c2d62cda-6165-4030-bfc6-66b056adfd56 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:09:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:04.273380909Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 6867eb9e61dbbd8d991d7a7125eb5c3391e726b149bc45ab3662e61fe17ff225" id=c2d62cda-6165-4030-bfc6-66b056adfd56 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:09:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:04.273408870Z" level=info msg="runSandbox: unmounting shmPath for sandbox 6867eb9e61dbbd8d991d7a7125eb5c3391e726b149bc45ab3662e61fe17ff225" id=c2d62cda-6165-4030-bfc6-66b056adfd56 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:09:04 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-6867eb9e61dbbd8d991d7a7125eb5c3391e726b149bc45ab3662e61fe17ff225-userdata-shm.mount: Deactivated successfully. 
Feb 23 20:09:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:04.278307597Z" level=info msg="runSandbox: removing pod sandbox from storage: 6867eb9e61dbbd8d991d7a7125eb5c3391e726b149bc45ab3662e61fe17ff225" id=c2d62cda-6165-4030-bfc6-66b056adfd56 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:09:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:04.279775940Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=c2d62cda-6165-4030-bfc6-66b056adfd56 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:09:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:04.279804628Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=c2d62cda-6165-4030-bfc6-66b056adfd56 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:09:04 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:09:04.279948 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(6867eb9e61dbbd8d991d7a7125eb5c3391e726b149bc45ab3662e61fe17ff225): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 20:09:04 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:09:04.280008 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(6867eb9e61dbbd8d991d7a7125eb5c3391e726b149bc45ab3662e61fe17ff225): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 20:09:04 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:09:04.280046 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(6867eb9e61dbbd8d991d7a7125eb5c3391e726b149bc45ab3662e61fe17ff225): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 20:09:04 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:09:04.280122 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(6867eb9e61dbbd8d991d7a7125eb5c3391e726b149bc45ab3662e61fe17ff225): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 20:09:07 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:09:07.216647 2199 scope.go:115] "RemoveContainer" containerID="f593d401d06964c92f54c75f4c95111cb614feacd6ec24eb96f0e3bc8aa5f8c0" Feb 23 20:09:07 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:09:07.217028 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 20:09:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:16.234568462Z" level=info msg="NetworkStart: stopping network for sandbox ddd825d6edd616cb969e79a51a0049d1ccddb3a5b2e77e7918768cd2456f0032" id=554959a4-705d-4b7f-a785-702d67006ac4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:09:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:16.234687413Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:ddd825d6edd616cb969e79a51a0049d1ccddb3a5b2e77e7918768cd2456f0032 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/fa75af1c-a1a0-457a-932a-37e463ea8611 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:09:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:16.234718216Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 20:09:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:16.234726932Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 20:09:16 ip-10-0-136-68 crio[2158]: time="2023-02-23 
20:09:16.234737432Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:09:18 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:09:18.216732 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 20:09:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:18.217133788Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=6fe1ab61-b0cb-4034-aa42-97f176ba0a11 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:09:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:18.217197700Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 20:09:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:18.222652138Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:a1074fcca2c2683c5942fda297d2a8888c59d961ee409290b17a7a4693c63d93 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/ba1196c5-a30a-4141-9870-e3c9da3c182d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:09:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:18.222687876Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:09:22 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:09:22.217097 2199 scope.go:115] "RemoveContainer" containerID="f593d401d06964c92f54c75f4c95111cb614feacd6ec24eb96f0e3bc8aa5f8c0" Feb 23 20:09:22 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:09:22.217737 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver 
pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 20:09:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:23.248082023Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(e36db8348a0a4bac8e9227572cf015b65611c1ae7a7c9570e0db3e41c76728f4): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=12713dac-a110-4c74-8bf4-249b2f1f45f9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:09:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:23.248127373Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox e36db8348a0a4bac8e9227572cf015b65611c1ae7a7c9570e0db3e41c76728f4" id=12713dac-a110-4c74-8bf4-249b2f1f45f9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:09:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:23.248630390Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(21d70f3a44d3b5c5ef3559f0323e95bcd5f1cc853b76592369e96504a1e6c3d1): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" 
id=e60b79e4-0b8f-4219-8294-ea593974f784 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:09:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:23.248687547Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 21d70f3a44d3b5c5ef3559f0323e95bcd5f1cc853b76592369e96504a1e6c3d1" id=e60b79e4-0b8f-4219-8294-ea593974f784 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:09:23 ip-10-0-136-68 systemd[1]: run-utsns-e17fb6a4\x2da33e\x2d457b\x2d96a0\x2de9a9cdef4369.mount: Deactivated successfully. Feb 23 20:09:23 ip-10-0-136-68 systemd[1]: run-utsns-eb389449\x2d1b66\x2d405b\x2d9b6b\x2db1466207b99f.mount: Deactivated successfully. Feb 23 20:09:23 ip-10-0-136-68 systemd[1]: run-ipcns-e17fb6a4\x2da33e\x2d457b\x2d96a0\x2de9a9cdef4369.mount: Deactivated successfully. Feb 23 20:09:23 ip-10-0-136-68 systemd[1]: run-ipcns-eb389449\x2d1b66\x2d405b\x2d9b6b\x2db1466207b99f.mount: Deactivated successfully. Feb 23 20:09:23 ip-10-0-136-68 systemd[1]: run-netns-e17fb6a4\x2da33e\x2d457b\x2d96a0\x2de9a9cdef4369.mount: Deactivated successfully. 
Feb 23 20:09:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:23.271323319Z" level=info msg="runSandbox: deleting pod ID e36db8348a0a4bac8e9227572cf015b65611c1ae7a7c9570e0db3e41c76728f4 from idIndex" id=12713dac-a110-4c74-8bf4-249b2f1f45f9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:09:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:23.271363920Z" level=info msg="runSandbox: removing pod sandbox e36db8348a0a4bac8e9227572cf015b65611c1ae7a7c9570e0db3e41c76728f4" id=12713dac-a110-4c74-8bf4-249b2f1f45f9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:09:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:23.271404802Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox e36db8348a0a4bac8e9227572cf015b65611c1ae7a7c9570e0db3e41c76728f4" id=12713dac-a110-4c74-8bf4-249b2f1f45f9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:09:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:23.271423364Z" level=info msg="runSandbox: unmounting shmPath for sandbox e36db8348a0a4bac8e9227572cf015b65611c1ae7a7c9570e0db3e41c76728f4" id=12713dac-a110-4c74-8bf4-249b2f1f45f9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:09:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:23.274320499Z" level=info msg="runSandbox: deleting pod ID 21d70f3a44d3b5c5ef3559f0323e95bcd5f1cc853b76592369e96504a1e6c3d1 from idIndex" id=e60b79e4-0b8f-4219-8294-ea593974f784 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:09:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:23.274349896Z" level=info msg="runSandbox: removing pod sandbox 21d70f3a44d3b5c5ef3559f0323e95bcd5f1cc853b76592369e96504a1e6c3d1" id=e60b79e4-0b8f-4219-8294-ea593974f784 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:09:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:23.274375679Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 21d70f3a44d3b5c5ef3559f0323e95bcd5f1cc853b76592369e96504a1e6c3d1" 
id=e60b79e4-0b8f-4219-8294-ea593974f784 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:09:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:23.274388916Z" level=info msg="runSandbox: unmounting shmPath for sandbox 21d70f3a44d3b5c5ef3559f0323e95bcd5f1cc853b76592369e96504a1e6c3d1" id=e60b79e4-0b8f-4219-8294-ea593974f784 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:09:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:23.278316109Z" level=info msg="runSandbox: removing pod sandbox from storage: e36db8348a0a4bac8e9227572cf015b65611c1ae7a7c9570e0db3e41c76728f4" id=12713dac-a110-4c74-8bf4-249b2f1f45f9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:09:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:23.279899680Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=12713dac-a110-4c74-8bf4-249b2f1f45f9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:09:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:23.279926056Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=12713dac-a110-4c74-8bf4-249b2f1f45f9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:09:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:09:23.280132 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(e36db8348a0a4bac8e9227572cf015b65611c1ae7a7c9570e0db3e41c76728f4): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Feb 23 20:09:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:09:23.280198 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(e36db8348a0a4bac8e9227572cf015b65611c1ae7a7c9570e0db3e41c76728f4): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 20:09:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:09:23.280236 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(e36db8348a0a4bac8e9227572cf015b65611c1ae7a7c9570e0db3e41c76728f4): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 20:09:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:09:23.280387 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(e36db8348a0a4bac8e9227572cf015b65611c1ae7a7c9570e0db3e41c76728f4): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 20:09:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:23.280315679Z" level=info msg="runSandbox: removing pod sandbox from storage: 21d70f3a44d3b5c5ef3559f0323e95bcd5f1cc853b76592369e96504a1e6c3d1" id=e60b79e4-0b8f-4219-8294-ea593974f784 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:09:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:23.281754667Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=e60b79e4-0b8f-4219-8294-ea593974f784 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:09:23 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:23.281778432Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=e60b79e4-0b8f-4219-8294-ea593974f784 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:09:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:09:23.281949 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(21d70f3a44d3b5c5ef3559f0323e95bcd5f1cc853b76592369e96504a1e6c3d1): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 20:09:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:09:23.281989 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(21d70f3a44d3b5c5ef3559f0323e95bcd5f1cc853b76592369e96504a1e6c3d1): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 20:09:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:09:23.282010 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(21d70f3a44d3b5c5ef3559f0323e95bcd5f1cc853b76592369e96504a1e6c3d1): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 20:09:23 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:09:23.282059 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(21d70f3a44d3b5c5ef3559f0323e95bcd5f1cc853b76592369e96504a1e6c3d1): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 20:09:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:24.246483908Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(aa56de30f63f8c0f31fa6eb09fd4cbd756e52efb3242961f1c8122b71d66ac41): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=f29ca0bb-c24f-4d52-b56a-e9d1911047c8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:09:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:24.246536584Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox aa56de30f63f8c0f31fa6eb09fd4cbd756e52efb3242961f1c8122b71d66ac41" id=f29ca0bb-c24f-4d52-b56a-e9d1911047c8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:09:24 ip-10-0-136-68 systemd[1]: run-utsns-de3ff49f\x2dcba8\x2d4520\x2d8c2d\x2d7cb6760dc2e4.mount: Deactivated successfully. Feb 23 20:09:24 ip-10-0-136-68 systemd[1]: run-netns-eb389449\x2d1b66\x2d405b\x2d9b6b\x2db1466207b99f.mount: Deactivated successfully. Feb 23 20:09:24 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-e36db8348a0a4bac8e9227572cf015b65611c1ae7a7c9570e0db3e41c76728f4-userdata-shm.mount: Deactivated successfully. Feb 23 20:09:24 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-21d70f3a44d3b5c5ef3559f0323e95bcd5f1cc853b76592369e96504a1e6c3d1-userdata-shm.mount: Deactivated successfully. 
Feb 23 20:09:24 ip-10-0-136-68 systemd[1]: run-ipcns-de3ff49f\x2dcba8\x2d4520\x2d8c2d\x2d7cb6760dc2e4.mount: Deactivated successfully. Feb 23 20:09:24 ip-10-0-136-68 systemd[1]: run-netns-de3ff49f\x2dcba8\x2d4520\x2d8c2d\x2d7cb6760dc2e4.mount: Deactivated successfully. Feb 23 20:09:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:24.276323386Z" level=info msg="runSandbox: deleting pod ID aa56de30f63f8c0f31fa6eb09fd4cbd756e52efb3242961f1c8122b71d66ac41 from idIndex" id=f29ca0bb-c24f-4d52-b56a-e9d1911047c8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:09:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:24.276359083Z" level=info msg="runSandbox: removing pod sandbox aa56de30f63f8c0f31fa6eb09fd4cbd756e52efb3242961f1c8122b71d66ac41" id=f29ca0bb-c24f-4d52-b56a-e9d1911047c8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:09:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:24.276392961Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox aa56de30f63f8c0f31fa6eb09fd4cbd756e52efb3242961f1c8122b71d66ac41" id=f29ca0bb-c24f-4d52-b56a-e9d1911047c8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:09:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:24.276418985Z" level=info msg="runSandbox: unmounting shmPath for sandbox aa56de30f63f8c0f31fa6eb09fd4cbd756e52efb3242961f1c8122b71d66ac41" id=f29ca0bb-c24f-4d52-b56a-e9d1911047c8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:09:24 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-aa56de30f63f8c0f31fa6eb09fd4cbd756e52efb3242961f1c8122b71d66ac41-userdata-shm.mount: Deactivated successfully. 
Feb 23 20:09:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:24.284307647Z" level=info msg="runSandbox: removing pod sandbox from storage: aa56de30f63f8c0f31fa6eb09fd4cbd756e52efb3242961f1c8122b71d66ac41" id=f29ca0bb-c24f-4d52-b56a-e9d1911047c8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:09:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:24.285797700Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=f29ca0bb-c24f-4d52-b56a-e9d1911047c8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:09:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:24.285835088Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=f29ca0bb-c24f-4d52-b56a-e9d1911047c8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:09:24 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:09:24.285990 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(aa56de30f63f8c0f31fa6eb09fd4cbd756e52efb3242961f1c8122b71d66ac41): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 20:09:24 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:09:24.286042 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(aa56de30f63f8c0f31fa6eb09fd4cbd756e52efb3242961f1c8122b71d66ac41): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 20:09:24 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:09:24.286066 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(aa56de30f63f8c0f31fa6eb09fd4cbd756e52efb3242961f1c8122b71d66ac41): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 20:09:24 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:09:24.286127 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(aa56de30f63f8c0f31fa6eb09fd4cbd756e52efb3242961f1c8122b71d66ac41): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 20:09:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:09:26.292195 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:09:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:09:26.292483 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:09:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:09:26.292708 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:09:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:09:26.292738 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 20:09:31 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:09:31.217378 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:09:31 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:09:31.217722 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:09:31 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:09:31.217990 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:09:31 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:09:31.218037 2199 prober.go:106] 
"Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 20:09:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:09:34.216802 2199 scope.go:115] "RemoveContainer" containerID="f593d401d06964c92f54c75f4c95111cb614feacd6ec24eb96f0e3bc8aa5f8c0" Feb 23 20:09:34 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:09:34.217192 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 20:09:37 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:09:37.216575 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 20:09:37 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:09:37.216622 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 20:09:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:37.216889746Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=04b2b524-e1e8-433d-b3e1-5540ac7c6fe0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:09:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:37.216962291Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 20:09:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:37.216889286Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=a203166d-23b8-4bc5-830a-ac0fae509cff name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:09:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:37.217030246Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 20:09:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:37.224026064Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:d005c5572327f2a9179732a5babfe296a04b210309ef7d76479c71415989933a UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/e575449d-608b-437a-8f1d-c8cdf2a9d985 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:09:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:37.224060735Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:09:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:37.224065321Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:d45b3fec1cc618c8e6c0a36de20fc86a810f777094a083f6f661dee6996725d1 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/2dd10eb4-e078-45b7-9c97-a666d5212781 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: 
IpRanges:[]}] Aliases:map[]}" Feb 23 20:09:37 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:37.224398574Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:09:39 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:09:39.216495 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 20:09:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:39.216756360Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=2e90d1d5-e894-4778-b482-0408dc8ca34e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:09:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:39.216808041Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 20:09:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:39.222191777Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:5e539369031383189570682cdeec9e029ffb026a7ee9443e506137d9463f21f5 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/05540bcf-2838-45b7-9b54-25288ffb6ad3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:09:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:09:39.222226694Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:09:45 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:09:45.216793 2199 scope.go:115] "RemoveContainer" containerID="f593d401d06964c92f54c75f4c95111cb614feacd6ec24eb96f0e3bc8aa5f8c0" Feb 23 20:09:45 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:09:45.217429 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 20:09:56 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:09:56.217191 2199 scope.go:115] "RemoveContainer" containerID="f593d401d06964c92f54c75f4c95111cb614feacd6ec24eb96f0e3bc8aa5f8c0" Feb 23 20:09:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:09:56.217821 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 20:09:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:09:56.292137 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:09:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:09:56.292393 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f 
/etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:09:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:09:56.292621 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:09:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:09:56.292647 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 20:10:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:10:01.245345452Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(ddd825d6edd616cb969e79a51a0049d1ccddb3a5b2e77e7918768cd2456f0032): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=554959a4-705d-4b7f-a785-702d67006ac4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:10:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:10:01.245387291Z" level=info 
msg="runSandbox: cleaning up namespaces after failing to run sandbox ddd825d6edd616cb969e79a51a0049d1ccddb3a5b2e77e7918768cd2456f0032" id=554959a4-705d-4b7f-a785-702d67006ac4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:10:01 ip-10-0-136-68 systemd[1]: run-utsns-fa75af1c\x2da1a0\x2d457a\x2d932a\x2d37e463ea8611.mount: Deactivated successfully. Feb 23 20:10:01 ip-10-0-136-68 systemd[1]: run-ipcns-fa75af1c\x2da1a0\x2d457a\x2d932a\x2d37e463ea8611.mount: Deactivated successfully. Feb 23 20:10:01 ip-10-0-136-68 systemd[1]: run-netns-fa75af1c\x2da1a0\x2d457a\x2d932a\x2d37e463ea8611.mount: Deactivated successfully. Feb 23 20:10:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:10:01.271326620Z" level=info msg="runSandbox: deleting pod ID ddd825d6edd616cb969e79a51a0049d1ccddb3a5b2e77e7918768cd2456f0032 from idIndex" id=554959a4-705d-4b7f-a785-702d67006ac4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:10:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:10:01.271364710Z" level=info msg="runSandbox: removing pod sandbox ddd825d6edd616cb969e79a51a0049d1ccddb3a5b2e77e7918768cd2456f0032" id=554959a4-705d-4b7f-a785-702d67006ac4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:10:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:10:01.271412285Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox ddd825d6edd616cb969e79a51a0049d1ccddb3a5b2e77e7918768cd2456f0032" id=554959a4-705d-4b7f-a785-702d67006ac4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:10:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:10:01.271432046Z" level=info msg="runSandbox: unmounting shmPath for sandbox ddd825d6edd616cb969e79a51a0049d1ccddb3a5b2e77e7918768cd2456f0032" id=554959a4-705d-4b7f-a785-702d67006ac4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:10:01 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-ddd825d6edd616cb969e79a51a0049d1ccddb3a5b2e77e7918768cd2456f0032-userdata-shm.mount: Deactivated successfully. 
Feb 23 20:10:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:10:01.276316861Z" level=info msg="runSandbox: removing pod sandbox from storage: ddd825d6edd616cb969e79a51a0049d1ccddb3a5b2e77e7918768cd2456f0032" id=554959a4-705d-4b7f-a785-702d67006ac4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:10:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:10:01.277932314Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=554959a4-705d-4b7f-a785-702d67006ac4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:10:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:10:01.277960315Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=554959a4-705d-4b7f-a785-702d67006ac4 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:10:01 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:10:01.278162 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(ddd825d6edd616cb969e79a51a0049d1ccddb3a5b2e77e7918768cd2456f0032): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 20:10:01 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:10:01.278216 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(ddd825d6edd616cb969e79a51a0049d1ccddb3a5b2e77e7918768cd2456f0032): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 20:10:01 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:10:01.278239 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(ddd825d6edd616cb969e79a51a0049d1ccddb3a5b2e77e7918768cd2456f0032): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 20:10:01 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:10:01.278340 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(ddd825d6edd616cb969e79a51a0049d1ccddb3a5b2e77e7918768cd2456f0032): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 20:10:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:10:03.236507520Z" level=info msg="NetworkStart: stopping network for sandbox a1074fcca2c2683c5942fda297d2a8888c59d961ee409290b17a7a4693c63d93" id=6fe1ab61-b0cb-4034-aa42-97f176ba0a11 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:10:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:10:03.236629966Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:a1074fcca2c2683c5942fda297d2a8888c59d961ee409290b17a7a4693c63d93 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/ba1196c5-a30a-4141-9870-e3c9da3c182d Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:10:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:10:03.236658368Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 20:10:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:10:03.236666208Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 20:10:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:10:03.236675665Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:10:07 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:10:07.217304 2199 scope.go:115] "RemoveContainer" containerID="f593d401d06964c92f54c75f4c95111cb614feacd6ec24eb96f0e3bc8aa5f8c0" Feb 23 20:10:07 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:10:07.217735 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver 
pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 20:10:12 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:10:12.216628 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 20:10:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:10:12.217061179Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=127686a6-1863-40b9-b3cb-830db05b309b name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:10:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:10:12.217136531Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 20:10:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:10:12.222998693Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:5b3f026cf57a56a9f3a015ac1e02753667e26a76f5ac54173fd835d1afd4c651 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/001a656a-17fd-4863-b53f-af015f286c5c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 20:10:12 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:10:12.223033788Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 20:10:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:10:20.260528727Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3" id=aa5b7762-0da2-46b2-8b4a-b62d24b5cd1b name=/runtime.v1.ImageService/ImageStatus
Feb 23 20:10:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:10:20.260896838Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:51cb7555d0a960fc0daae74a5b5c47c19fd1ae7bd31e2616ebc88f696f511e4d,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3],Size_:351828271,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=aa5b7762-0da2-46b2-8b4a-b62d24b5cd1b name=/runtime.v1.ImageService/ImageStatus
Feb 23 20:10:22 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:10:22.217415 2199 scope.go:115] "RemoveContainer" containerID="f593d401d06964c92f54c75f4c95111cb614feacd6ec24eb96f0e3bc8aa5f8c0"
Feb 23 20:10:22 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:10:22.218010 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 20:10:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:10:22.238663149Z" level=info msg="NetworkStart: stopping network for sandbox d005c5572327f2a9179732a5babfe296a04b210309ef7d76479c71415989933a" id=a203166d-23b8-4bc5-830a-ac0fae509cff name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:10:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:10:22.238738243Z" level=info msg="NetworkStart: stopping network for sandbox d45b3fec1cc618c8e6c0a36de20fc86a810f777094a083f6f661dee6996725d1" id=04b2b524-e1e8-433d-b3e1-5540ac7c6fe0 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:10:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:10:22.238786180Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:d005c5572327f2a9179732a5babfe296a04b210309ef7d76479c71415989933a UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/e575449d-608b-437a-8f1d-c8cdf2a9d985 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 20:10:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:10:22.238814678Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:d45b3fec1cc618c8e6c0a36de20fc86a810f777094a083f6f661dee6996725d1 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/2dd10eb4-e078-45b7-9c97-a666d5212781 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 20:10:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:10:22.238845972Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 20:10:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:10:22.238854042Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 20:10:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:10:22.238860190Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 20:10:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:10:22.238823268Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 20:10:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:10:22.238937826Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 20:10:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:10:22.238950087Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 20:10:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:10:24.235982651Z" level=info msg="NetworkStart: stopping network for sandbox 5e539369031383189570682cdeec9e029ffb026a7ee9443e506137d9463f21f5" id=2e90d1d5-e894-4778-b482-0408dc8ca34e name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:10:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:10:24.236106618Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:5e539369031383189570682cdeec9e029ffb026a7ee9443e506137d9463f21f5 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/05540bcf-2838-45b7-9b54-25288ffb6ad3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 20:10:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:10:24.236145780Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 20:10:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:10:24.236158001Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 20:10:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:10:24.236169780Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 20:10:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:10:26.292517 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 20:10:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:10:26.292789 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 20:10:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:10:26.293026 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 20:10:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:10:26.293067 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 20:10:36 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:10:36.216962 2199 scope.go:115] "RemoveContainer" containerID="f593d401d06964c92f54c75f4c95111cb614feacd6ec24eb96f0e3bc8aa5f8c0"
Feb 23 20:10:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:10:36.217555 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 20:10:46 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:10:46.217166 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 20:10:46 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:10:46.217585 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 20:10:46 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:10:46.217838 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 20:10:46 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:10:46.217990 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 20:10:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:10:48.246611731Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(a1074fcca2c2683c5942fda297d2a8888c59d961ee409290b17a7a4693c63d93): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=6fe1ab61-b0cb-4034-aa42-97f176ba0a11 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:10:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:10:48.246658782Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a1074fcca2c2683c5942fda297d2a8888c59d961ee409290b17a7a4693c63d93" id=6fe1ab61-b0cb-4034-aa42-97f176ba0a11 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:10:48 ip-10-0-136-68 systemd[1]: run-utsns-ba1196c5\x2da30a\x2d4141\x2d9870\x2de3c9da3c182d.mount: Deactivated successfully.
Feb 23 20:10:48 ip-10-0-136-68 systemd[1]: run-ipcns-ba1196c5\x2da30a\x2d4141\x2d9870\x2de3c9da3c182d.mount: Deactivated successfully.
Feb 23 20:10:48 ip-10-0-136-68 systemd[1]: run-netns-ba1196c5\x2da30a\x2d4141\x2d9870\x2de3c9da3c182d.mount: Deactivated successfully.
Feb 23 20:10:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:10:48.274335998Z" level=info msg="runSandbox: deleting pod ID a1074fcca2c2683c5942fda297d2a8888c59d961ee409290b17a7a4693c63d93 from idIndex" id=6fe1ab61-b0cb-4034-aa42-97f176ba0a11 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:10:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:10:48.274377813Z" level=info msg="runSandbox: removing pod sandbox a1074fcca2c2683c5942fda297d2a8888c59d961ee409290b17a7a4693c63d93" id=6fe1ab61-b0cb-4034-aa42-97f176ba0a11 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:10:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:10:48.274428685Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a1074fcca2c2683c5942fda297d2a8888c59d961ee409290b17a7a4693c63d93" id=6fe1ab61-b0cb-4034-aa42-97f176ba0a11 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:10:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:10:48.274448978Z" level=info msg="runSandbox: unmounting shmPath for sandbox a1074fcca2c2683c5942fda297d2a8888c59d961ee409290b17a7a4693c63d93" id=6fe1ab61-b0cb-4034-aa42-97f176ba0a11 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:10:48 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-a1074fcca2c2683c5942fda297d2a8888c59d961ee409290b17a7a4693c63d93-userdata-shm.mount: Deactivated successfully.
Feb 23 20:10:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:10:48.280304980Z" level=info msg="runSandbox: removing pod sandbox from storage: a1074fcca2c2683c5942fda297d2a8888c59d961ee409290b17a7a4693c63d93" id=6fe1ab61-b0cb-4034-aa42-97f176ba0a11 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:10:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:10:48.281805201Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=6fe1ab61-b0cb-4034-aa42-97f176ba0a11 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:10:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:10:48.281834664Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=6fe1ab61-b0cb-4034-aa42-97f176ba0a11 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:10:48 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:10:48.282019 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(a1074fcca2c2683c5942fda297d2a8888c59d961ee409290b17a7a4693c63d93): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 20:10:48 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:10:48.282085 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(a1074fcca2c2683c5942fda297d2a8888c59d961ee409290b17a7a4693c63d93): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk"
Feb 23 20:10:48 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:10:48.282123 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(a1074fcca2c2683c5942fda297d2a8888c59d961ee409290b17a7a4693c63d93): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk"
Feb 23 20:10:48 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:10:48.282217 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(a1074fcca2c2683c5942fda297d2a8888c59d961ee409290b17a7a4693c63d93): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687
Feb 23 20:10:49 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:10:49.216559 2199 scope.go:115] "RemoveContainer" containerID="f593d401d06964c92f54c75f4c95111cb614feacd6ec24eb96f0e3bc8aa5f8c0"
Feb 23 20:10:49 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:10:49.217001 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 20:10:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:10:56.292591 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 20:10:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:10:56.292822 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 20:10:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:10:56.293048 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 20:10:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:10:56.293079 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 20:10:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:10:57.236866187Z" level=info msg="NetworkStart: stopping network for sandbox 5b3f026cf57a56a9f3a015ac1e02753667e26a76f5ac54173fd835d1afd4c651" id=127686a6-1863-40b9-b3cb-830db05b309b name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:10:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:10:57.236987992Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:5b3f026cf57a56a9f3a015ac1e02753667e26a76f5ac54173fd835d1afd4c651 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/001a656a-17fd-4863-b53f-af015f286c5c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 20:10:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:10:57.237014936Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 20:10:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:10:57.237025077Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 20:10:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:10:57.237033489Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 20:11:01 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:11:01.216650 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk"
Feb 23 20:11:01 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:11:01.216766 2199 scope.go:115] "RemoveContainer" containerID="f593d401d06964c92f54c75f4c95111cb614feacd6ec24eb96f0e3bc8aa5f8c0"
Feb 23 20:11:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:01.217109424Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=96ff8ca1-7ecb-4bdf-b848-8852512505da name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:11:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:01.217186273Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 20:11:01 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:11:01.217282 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 20:11:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:01.222394027Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:07ac24ca92f09b698648955a826502745b25d1a170b68c38203a0b534d889b21 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/a1b318a4-6479-43bf-ba1d-6aaacd076101 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 20:11:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:01.222422008Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 20:11:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:07.248528545Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(d45b3fec1cc618c8e6c0a36de20fc86a810f777094a083f6f661dee6996725d1): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=04b2b524-e1e8-433d-b3e1-5540ac7c6fe0 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:11:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:07.248576789Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox d45b3fec1cc618c8e6c0a36de20fc86a810f777094a083f6f661dee6996725d1" id=04b2b524-e1e8-433d-b3e1-5540ac7c6fe0 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:11:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:07.248761786Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(d005c5572327f2a9179732a5babfe296a04b210309ef7d76479c71415989933a): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=a203166d-23b8-4bc5-830a-ac0fae509cff name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:11:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:07.248791727Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox d005c5572327f2a9179732a5babfe296a04b210309ef7d76479c71415989933a" id=a203166d-23b8-4bc5-830a-ac0fae509cff name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:11:07 ip-10-0-136-68 systemd[1]: run-utsns-e575449d\x2d608b\x2d437a\x2d8f1d\x2dc8cdf2a9d985.mount: Deactivated successfully.
Feb 23 20:11:07 ip-10-0-136-68 systemd[1]: run-utsns-2dd10eb4\x2de078\x2d45b7\x2d9c97\x2da666d5212781.mount: Deactivated successfully.
Feb 23 20:11:07 ip-10-0-136-68 systemd[1]: run-ipcns-e575449d\x2d608b\x2d437a\x2d8f1d\x2dc8cdf2a9d985.mount: Deactivated successfully.
Feb 23 20:11:07 ip-10-0-136-68 systemd[1]: run-ipcns-2dd10eb4\x2de078\x2d45b7\x2d9c97\x2da666d5212781.mount: Deactivated successfully.
Feb 23 20:11:07 ip-10-0-136-68 systemd[1]: run-netns-e575449d\x2d608b\x2d437a\x2d8f1d\x2dc8cdf2a9d985.mount: Deactivated successfully.
Feb 23 20:11:07 ip-10-0-136-68 systemd[1]: run-netns-2dd10eb4\x2de078\x2d45b7\x2d9c97\x2da666d5212781.mount: Deactivated successfully.
Feb 23 20:11:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:07.270322944Z" level=info msg="runSandbox: deleting pod ID d005c5572327f2a9179732a5babfe296a04b210309ef7d76479c71415989933a from idIndex" id=a203166d-23b8-4bc5-830a-ac0fae509cff name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:11:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:07.270356226Z" level=info msg="runSandbox: removing pod sandbox d005c5572327f2a9179732a5babfe296a04b210309ef7d76479c71415989933a" id=a203166d-23b8-4bc5-830a-ac0fae509cff name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:11:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:07.270386480Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox d005c5572327f2a9179732a5babfe296a04b210309ef7d76479c71415989933a" id=a203166d-23b8-4bc5-830a-ac0fae509cff name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:11:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:07.270399488Z" level=info msg="runSandbox: unmounting shmPath for sandbox d005c5572327f2a9179732a5babfe296a04b210309ef7d76479c71415989933a" id=a203166d-23b8-4bc5-830a-ac0fae509cff name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:11:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:07.270341644Z" level=info msg="runSandbox: deleting pod ID d45b3fec1cc618c8e6c0a36de20fc86a810f777094a083f6f661dee6996725d1 from idIndex" id=04b2b524-e1e8-433d-b3e1-5540ac7c6fe0 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:11:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:07.270446808Z" level=info msg="runSandbox: removing pod sandbox d45b3fec1cc618c8e6c0a36de20fc86a810f777094a083f6f661dee6996725d1" id=04b2b524-e1e8-433d-b3e1-5540ac7c6fe0 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:11:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:07.270463367Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox d45b3fec1cc618c8e6c0a36de20fc86a810f777094a083f6f661dee6996725d1" id=04b2b524-e1e8-433d-b3e1-5540ac7c6fe0 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:11:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:07.270475368Z" level=info msg="runSandbox: unmounting shmPath for sandbox d45b3fec1cc618c8e6c0a36de20fc86a810f777094a083f6f661dee6996725d1" id=04b2b524-e1e8-433d-b3e1-5540ac7c6fe0 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:11:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:07.276300431Z" level=info msg="runSandbox: removing pod sandbox from storage: d45b3fec1cc618c8e6c0a36de20fc86a810f777094a083f6f661dee6996725d1" id=04b2b524-e1e8-433d-b3e1-5540ac7c6fe0 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:11:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:07.277925461Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=04b2b524-e1e8-433d-b3e1-5540ac7c6fe0 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:11:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:07.277957807Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=04b2b524-e1e8-433d-b3e1-5540ac7c6fe0 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:11:07 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:11:07.278177 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(d45b3fec1cc618c8e6c0a36de20fc86a810f777094a083f6f661dee6996725d1): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 20:11:07 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:11:07.278240 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(d45b3fec1cc618c8e6c0a36de20fc86a810f777094a083f6f661dee6996725d1): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz"
Feb 23 20:11:07 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:11:07.278318 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(d45b3fec1cc618c8e6c0a36de20fc86a810f777094a083f6f661dee6996725d1): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz"
Feb 23 20:11:07 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:11:07.278393 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(d45b3fec1cc618c8e6c0a36de20fc86a810f777094a083f6f661dee6996725d1): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7
Feb 23 20:11:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:07.280301275Z" level=info msg="runSandbox: removing pod sandbox from storage: d005c5572327f2a9179732a5babfe296a04b210309ef7d76479c71415989933a" id=a203166d-23b8-4bc5-830a-ac0fae509cff name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:11:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:07.281691166Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=a203166d-23b8-4bc5-830a-ac0fae509cff name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:11:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:07.281722939Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=a203166d-23b8-4bc5-830a-ac0fae509cff name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:11:07 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:11:07.281879 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(d005c5572327f2a9179732a5babfe296a04b210309ef7d76479c71415989933a): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 20:11:07 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:11:07.281925 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(d005c5572327f2a9179732a5babfe296a04b210309ef7d76479c71415989933a): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4"
Feb 23 20:11:07 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:11:07.281962 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(d005c5572327f2a9179732a5babfe296a04b210309ef7d76479c71415989933a): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4"
Feb 23 20:11:07 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:11:07.282010 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(d005c5572327f2a9179732a5babfe296a04b210309ef7d76479c71415989933a): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f
Feb 23 20:11:08 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-d005c5572327f2a9179732a5babfe296a04b210309ef7d76479c71415989933a-userdata-shm.mount: Deactivated successfully.
Feb 23 20:11:08 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-d45b3fec1cc618c8e6c0a36de20fc86a810f777094a083f6f661dee6996725d1-userdata-shm.mount: Deactivated successfully.
Feb 23 20:11:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:09.246045157Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(5e539369031383189570682cdeec9e029ffb026a7ee9443e506137d9463f21f5): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=2e90d1d5-e894-4778-b482-0408dc8ca34e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:11:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:09.246096212Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 5e539369031383189570682cdeec9e029ffb026a7ee9443e506137d9463f21f5" id=2e90d1d5-e894-4778-b482-0408dc8ca34e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:11:09 ip-10-0-136-68 systemd[1]: run-utsns-05540bcf\x2d2838\x2d45b7\x2d9b54\x2d25288ffb6ad3.mount: Deactivated successfully. Feb 23 20:11:09 ip-10-0-136-68 systemd[1]: run-ipcns-05540bcf\x2d2838\x2d45b7\x2d9b54\x2d25288ffb6ad3.mount: Deactivated successfully. Feb 23 20:11:09 ip-10-0-136-68 systemd[1]: run-netns-05540bcf\x2d2838\x2d45b7\x2d9b54\x2d25288ffb6ad3.mount: Deactivated successfully. 
Feb 23 20:11:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:09.279344904Z" level=info msg="runSandbox: deleting pod ID 5e539369031383189570682cdeec9e029ffb026a7ee9443e506137d9463f21f5 from idIndex" id=2e90d1d5-e894-4778-b482-0408dc8ca34e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:11:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:09.279385404Z" level=info msg="runSandbox: removing pod sandbox 5e539369031383189570682cdeec9e029ffb026a7ee9443e506137d9463f21f5" id=2e90d1d5-e894-4778-b482-0408dc8ca34e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:11:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:09.279434678Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 5e539369031383189570682cdeec9e029ffb026a7ee9443e506137d9463f21f5" id=2e90d1d5-e894-4778-b482-0408dc8ca34e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:11:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:09.279448759Z" level=info msg="runSandbox: unmounting shmPath for sandbox 5e539369031383189570682cdeec9e029ffb026a7ee9443e506137d9463f21f5" id=2e90d1d5-e894-4778-b482-0408dc8ca34e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:11:09 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-5e539369031383189570682cdeec9e029ffb026a7ee9443e506137d9463f21f5-userdata-shm.mount: Deactivated successfully. 
Feb 23 20:11:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:09.285311031Z" level=info msg="runSandbox: removing pod sandbox from storage: 5e539369031383189570682cdeec9e029ffb026a7ee9443e506137d9463f21f5" id=2e90d1d5-e894-4778-b482-0408dc8ca34e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:11:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:09.286864117Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=2e90d1d5-e894-4778-b482-0408dc8ca34e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:11:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:09.286892998Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=2e90d1d5-e894-4778-b482-0408dc8ca34e name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:11:09 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:11:09.287084 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(5e539369031383189570682cdeec9e029ffb026a7ee9443e506137d9463f21f5): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 20:11:09 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:11:09.287136 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(5e539369031383189570682cdeec9e029ffb026a7ee9443e506137d9463f21f5): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 20:11:09 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:11:09.287165 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(5e539369031383189570682cdeec9e029ffb026a7ee9443e506137d9463f21f5): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 20:11:09 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:11:09.287222 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(5e539369031383189570682cdeec9e029ffb026a7ee9443e506137d9463f21f5): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 20:11:15 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:11:15.216837 2199 scope.go:115] "RemoveContainer" containerID="f593d401d06964c92f54c75f4c95111cb614feacd6ec24eb96f0e3bc8aa5f8c0" Feb 23 20:11:15 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:11:15.217271 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 20:11:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:11:20.217229 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 20:11:20 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:11:20.217632 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 20:11:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:20.217681946Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=b1f22ee5-50e9-49f0-82de-e24d0ffb092a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:11:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:20.217735699Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 20:11:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:20.218022195Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=7402d723-aa3e-4ce5-bb6b-00ed094e36ea name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:11:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:20.218078891Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 20:11:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:20.225483868Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:4bd951ea0bf56625a84386cf6416c632ae2d8944a2e9da6f35fab2685790560e UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/67cb3160-aebe-4e2a-ac43-14105be2b01c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:11:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:20.225670083Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:11:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:20.226010117Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:c2ee9d4eb7f7919e8f32c0565707ba9ced6a1dae06374f0548cd52d73f425690 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/edc9637c-d354-449d-83ab-6aea7e79b805 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] 
Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:11:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:20.226046070Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:11:22 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:11:22.216791 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 20:11:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:22.217209516Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=452db2e5-8a5e-43e1-8ed7-e7740bf4a809 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:11:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:22.217296247Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 20:11:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:22.226149190Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:016c5e014e6ba110cd0ee7e0885cd795b62520e59418177d78c1d8dc1310e0d0 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/6a3f9a72-7d10-4208-b242-b9ebaa5cf824 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:11:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:22.226176674Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:11:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:11:26.291749 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: 
container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:11:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:11:26.292079 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:11:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:11:26.292333 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:11:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:11:26.292389 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 20:11:27 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:11:27.217360 2199 scope.go:115] "RemoveContainer" containerID="f593d401d06964c92f54c75f4c95111cb614feacd6ec24eb96f0e3bc8aa5f8c0" Feb 23 20:11:27 ip-10-0-136-68 kubenswrapper[2199]: E0223 
20:11:27.217754 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 20:11:41 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:11:41.216835 2199 scope.go:115] "RemoveContainer" containerID="f593d401d06964c92f54c75f4c95111cb614feacd6ec24eb96f0e3bc8aa5f8c0" Feb 23 20:11:41 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:11:41.217199 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 20:11:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:42.246063982Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(5b3f026cf57a56a9f3a015ac1e02753667e26a76f5ac54173fd835d1afd4c651): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=127686a6-1863-40b9-b3cb-830db05b309b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:11:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:42.246106714Z" level=info 
msg="runSandbox: cleaning up namespaces after failing to run sandbox 5b3f026cf57a56a9f3a015ac1e02753667e26a76f5ac54173fd835d1afd4c651" id=127686a6-1863-40b9-b3cb-830db05b309b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:11:42 ip-10-0-136-68 systemd[1]: run-utsns-001a656a\x2d17fd\x2d4863\x2db53f\x2daf015f286c5c.mount: Deactivated successfully. Feb 23 20:11:42 ip-10-0-136-68 systemd[1]: run-ipcns-001a656a\x2d17fd\x2d4863\x2db53f\x2daf015f286c5c.mount: Deactivated successfully. Feb 23 20:11:42 ip-10-0-136-68 systemd[1]: run-netns-001a656a\x2d17fd\x2d4863\x2db53f\x2daf015f286c5c.mount: Deactivated successfully. Feb 23 20:11:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:42.270351836Z" level=info msg="runSandbox: deleting pod ID 5b3f026cf57a56a9f3a015ac1e02753667e26a76f5ac54173fd835d1afd4c651 from idIndex" id=127686a6-1863-40b9-b3cb-830db05b309b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:11:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:42.270399810Z" level=info msg="runSandbox: removing pod sandbox 5b3f026cf57a56a9f3a015ac1e02753667e26a76f5ac54173fd835d1afd4c651" id=127686a6-1863-40b9-b3cb-830db05b309b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:11:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:42.270429260Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 5b3f026cf57a56a9f3a015ac1e02753667e26a76f5ac54173fd835d1afd4c651" id=127686a6-1863-40b9-b3cb-830db05b309b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:11:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:42.270442664Z" level=info msg="runSandbox: unmounting shmPath for sandbox 5b3f026cf57a56a9f3a015ac1e02753667e26a76f5ac54173fd835d1afd4c651" id=127686a6-1863-40b9-b3cb-830db05b309b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:11:42 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-5b3f026cf57a56a9f3a015ac1e02753667e26a76f5ac54173fd835d1afd4c651-userdata-shm.mount: Deactivated successfully. 
Feb 23 20:11:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:42.283333349Z" level=info msg="runSandbox: removing pod sandbox from storage: 5b3f026cf57a56a9f3a015ac1e02753667e26a76f5ac54173fd835d1afd4c651" id=127686a6-1863-40b9-b3cb-830db05b309b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:11:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:42.284997981Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=127686a6-1863-40b9-b3cb-830db05b309b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:11:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:42.285029626Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=127686a6-1863-40b9-b3cb-830db05b309b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:11:42 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:11:42.285298 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(5b3f026cf57a56a9f3a015ac1e02753667e26a76f5ac54173fd835d1afd4c651): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 20:11:42 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:11:42.285357 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(5b3f026cf57a56a9f3a015ac1e02753667e26a76f5ac54173fd835d1afd4c651): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 20:11:42 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:11:42.285381 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(5b3f026cf57a56a9f3a015ac1e02753667e26a76f5ac54173fd835d1afd4c651): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 20:11:42 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:11:42.285454 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(5b3f026cf57a56a9f3a015ac1e02753667e26a76f5ac54173fd835d1afd4c651): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 20:11:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:46.233981148Z" level=info msg="NetworkStart: stopping network for sandbox 07ac24ca92f09b698648955a826502745b25d1a170b68c38203a0b534d889b21" id=96ff8ca1-7ecb-4bdf-b848-8852512505da name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:11:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:46.234100193Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:07ac24ca92f09b698648955a826502745b25d1a170b68c38203a0b534d889b21 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/a1b318a4-6479-43bf-ba1d-6aaacd076101 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:11:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:46.234141377Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 20:11:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:46.234152577Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 20:11:46 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:46.234162912Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:11:52 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:11:52.217404 2199 scope.go:115] "RemoveContainer" containerID="f593d401d06964c92f54c75f4c95111cb614feacd6ec24eb96f0e3bc8aa5f8c0" Feb 23 20:11:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:52.218216508Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=edc208c2-f4ca-411c-aeac-b27105da34ea name=/runtime.v1.ImageService/ImageStatus Feb 
23 20:11:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:52.218457634Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=edc208c2-f4ca-411c-aeac-b27105da34ea name=/runtime.v1.ImageService/ImageStatus Feb 23 20:11:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:52.219095319Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=e0be10e7-c1fd-4a29-bab8-b99df062d4f3 name=/runtime.v1.ImageService/ImageStatus Feb 23 20:11:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:52.219322460Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=e0be10e7-c1fd-4a29-bab8-b99df062d4f3 name=/runtime.v1.ImageService/ImageStatus Feb 23 20:11:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:52.219972626Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=d4ff786f-1bba-4205-b495-e33d0cc3b242 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 20:11:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:52.220062020Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 20:11:52 
ip-10-0-136-68 systemd[1]: Started crio-conmon-6cc6261e45b7fedc68fd8126e099bfcd20c486f61680a28cbda831d15e0f37f3.scope. Feb 23 20:11:52 ip-10-0-136-68 systemd[1]: Started libcontainer container 6cc6261e45b7fedc68fd8126e099bfcd20c486f61680a28cbda831d15e0f37f3. Feb 23 20:11:52 ip-10-0-136-68 conmon[22070]: conmon 6cc6261e45b7fedc68fd : Failed to write to cgroup.event_control Operation not supported Feb 23 20:11:52 ip-10-0-136-68 systemd[1]: crio-conmon-6cc6261e45b7fedc68fd8126e099bfcd20c486f61680a28cbda831d15e0f37f3.scope: Deactivated successfully. Feb 23 20:11:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:52.357802231Z" level=info msg="Created container 6cc6261e45b7fedc68fd8126e099bfcd20c486f61680a28cbda831d15e0f37f3: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=d4ff786f-1bba-4205-b495-e33d0cc3b242 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 20:11:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:52.358237761Z" level=info msg="Starting container: 6cc6261e45b7fedc68fd8126e099bfcd20c486f61680a28cbda831d15e0f37f3" id=7dcf7142-0757-4c63-8b4e-fde5d394b2c5 name=/runtime.v1.RuntimeService/StartContainer Feb 23 20:11:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:52.365402342Z" level=info msg="Started container" PID=22082 containerID=6cc6261e45b7fedc68fd8126e099bfcd20c486f61680a28cbda831d15e0f37f3 description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver id=7dcf7142-0757-4c63-8b4e-fde5d394b2c5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4ac357af36e7dd4792f93249425e93fd84aed45840e9fbb3d0ae6c0af964798 Feb 23 20:11:52 ip-10-0-136-68 systemd[1]: crio-6cc6261e45b7fedc68fd8126e099bfcd20c486f61680a28cbda831d15e0f37f3.scope: Deactivated successfully. 
Feb 23 20:11:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:56.156960484Z" level=warning msg="Failed to find container exit file for f593d401d06964c92f54c75f4c95111cb614feacd6ec24eb96f0e3bc8aa5f8c0: timed out waiting for the condition" id=f0499771-5023-4598-bd40-633c474b436f name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 20:11:56 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:11:56.157974 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerStarted Data:6cc6261e45b7fedc68fd8126e099bfcd20c486f61680a28cbda831d15e0f37f3} Feb 23 20:11:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:11:56.292654 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:11:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:11:56.292970 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:11:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:11:56.293188 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:11:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:11:56.293217 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 20:11:57 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:11:57.216674 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 20:11:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:57.217004559Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=1efb7391-35a7-4ec9-9e48-c58b24f9189c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:11:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:57.217062997Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 20:11:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:11:57.222526253Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:46c8c027fca6299b196028038d7436d4c81c531ce635ac30998e0734091b0776 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/ec6c4945-3213-43f7-882b-0a93228ef6ac Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:11:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 
20:11:57.222563800Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:12:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:12:04.872486 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 20:12:04 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:12:04.872562 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 20:12:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:05.240347262Z" level=info msg="NetworkStart: stopping network for sandbox c2ee9d4eb7f7919e8f32c0565707ba9ced6a1dae06374f0548cd52d73f425690" id=b1f22ee5-50e9-49f0-82de-e24d0ffb092a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:12:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:05.240485353Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:c2ee9d4eb7f7919e8f32c0565707ba9ced6a1dae06374f0548cd52d73f425690 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/edc9637c-d354-449d-83ab-6aea7e79b805 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:12:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:05.240525489Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 20:12:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:05.240537920Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 
20:12:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:05.240547832Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:12:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:05.240685879Z" level=info msg="NetworkStart: stopping network for sandbox 4bd951ea0bf56625a84386cf6416c632ae2d8944a2e9da6f35fab2685790560e" id=7402d723-aa3e-4ce5-bb6b-00ed094e36ea name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:12:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:05.240788671Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:4bd951ea0bf56625a84386cf6416c632ae2d8944a2e9da6f35fab2685790560e UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/67cb3160-aebe-4e2a-ac43-14105be2b01c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:12:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:05.240823410Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 20:12:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:05.240835126Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 20:12:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:05.240845376Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:12:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:07.239446726Z" level=info msg="NetworkStart: stopping network for sandbox 016c5e014e6ba110cd0ee7e0885cd795b62520e59418177d78c1d8dc1310e0d0" id=452db2e5-8a5e-43e1-8ed7-e7740bf4a809 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:12:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:07.239583973Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers 
ID:016c5e014e6ba110cd0ee7e0885cd795b62520e59418177d78c1d8dc1310e0d0 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/6a3f9a72-7d10-4208-b242-b9ebaa5cf824 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:12:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:07.239624551Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 20:12:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:07.239637346Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 20:12:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:07.239648755Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:12:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:12:10.217764 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:12:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:12:10.218109 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:12:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 
20:12:10.218437 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:12:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:12:10.218480 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 20:12:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:12:14.872968 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 20:12:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:12:14.873037 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 20:12:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:12:24.872303 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get 
\"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 20:12:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:12:24.872366 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 20:12:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:12:26.292641 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:12:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:12:26.292939 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:12:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:12:26.293156 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" 
containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:12:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:12:26.293194 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 20:12:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:31.244144108Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(07ac24ca92f09b698648955a826502745b25d1a170b68c38203a0b534d889b21): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=96ff8ca1-7ecb-4bdf-b848-8852512505da name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:12:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:31.244188678Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 07ac24ca92f09b698648955a826502745b25d1a170b68c38203a0b534d889b21" id=96ff8ca1-7ecb-4bdf-b848-8852512505da name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:12:31 ip-10-0-136-68 systemd[1]: run-utsns-a1b318a4\x2d6479\x2d43bf\x2dba1d\x2d6aaacd076101.mount: Deactivated successfully. 
Feb 23 20:12:31 ip-10-0-136-68 systemd[1]: run-ipcns-a1b318a4\x2d6479\x2d43bf\x2dba1d\x2d6aaacd076101.mount: Deactivated successfully. Feb 23 20:12:31 ip-10-0-136-68 systemd[1]: run-netns-a1b318a4\x2d6479\x2d43bf\x2dba1d\x2d6aaacd076101.mount: Deactivated successfully. Feb 23 20:12:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:31.263331863Z" level=info msg="runSandbox: deleting pod ID 07ac24ca92f09b698648955a826502745b25d1a170b68c38203a0b534d889b21 from idIndex" id=96ff8ca1-7ecb-4bdf-b848-8852512505da name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:12:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:31.263370482Z" level=info msg="runSandbox: removing pod sandbox 07ac24ca92f09b698648955a826502745b25d1a170b68c38203a0b534d889b21" id=96ff8ca1-7ecb-4bdf-b848-8852512505da name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:12:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:31.263399193Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 07ac24ca92f09b698648955a826502745b25d1a170b68c38203a0b534d889b21" id=96ff8ca1-7ecb-4bdf-b848-8852512505da name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:12:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:31.263412655Z" level=info msg="runSandbox: unmounting shmPath for sandbox 07ac24ca92f09b698648955a826502745b25d1a170b68c38203a0b534d889b21" id=96ff8ca1-7ecb-4bdf-b848-8852512505da name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:12:31 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-07ac24ca92f09b698648955a826502745b25d1a170b68c38203a0b534d889b21-userdata-shm.mount: Deactivated successfully. 
Feb 23 20:12:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:31.267308350Z" level=info msg="runSandbox: removing pod sandbox from storage: 07ac24ca92f09b698648955a826502745b25d1a170b68c38203a0b534d889b21" id=96ff8ca1-7ecb-4bdf-b848-8852512505da name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:12:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:31.268911148Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=96ff8ca1-7ecb-4bdf-b848-8852512505da name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:12:31 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:31.268941748Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=96ff8ca1-7ecb-4bdf-b848-8852512505da name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:12:31 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:12:31.269181 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(07ac24ca92f09b698648955a826502745b25d1a170b68c38203a0b534d889b21): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 20:12:31 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:12:31.269267 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(07ac24ca92f09b698648955a826502745b25d1a170b68c38203a0b534d889b21): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 20:12:31 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:12:31.269323 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(07ac24ca92f09b698648955a826502745b25d1a170b68c38203a0b534d889b21): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 20:12:31 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:12:31.269384 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(07ac24ca92f09b698648955a826502745b25d1a170b68c38203a0b534d889b21): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 20:12:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:12:34.872696 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 20:12:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:12:34.872761 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 20:12:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:42.234070177Z" level=info msg="NetworkStart: stopping network for sandbox 46c8c027fca6299b196028038d7436d4c81c531ce635ac30998e0734091b0776" id=1efb7391-35a7-4ec9-9e48-c58b24f9189c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:12:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:42.234200159Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:46c8c027fca6299b196028038d7436d4c81c531ce635ac30998e0734091b0776 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/ec6c4945-3213-43f7-882b-0a93228ef6ac Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:12:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:42.234237716Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 20:12:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:42.234270434Z" level=warning msg="falling back to loading from existing 
plugins on disk" Feb 23 20:12:42 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:42.234280474Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:12:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:12:44.217214 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 20:12:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:44.217727269Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=e0356212-3172-4f31-87bc-50fee9ec8d7c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:12:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:44.217797692Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 20:12:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:44.223745352Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:a90f28445cf33ec8c86182b2a872ea53bb845b3e8ca91ac7876acdf6db642617 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/26602122-7f3f-4a22-b24d-38cb31d3b5ed Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:12:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:44.223771845Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:12:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:12:44.872564 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 20:12:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:12:44.872618 2199 prober.go:109] "Probe 
failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 20:12:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:12:44.872645 2199 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 20:12:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:12:44.873166 2199 kuberuntime_manager.go:659] "Message for Container of pod" containerName="csi-driver" containerStatusID={Type:cri-o ID:6cc6261e45b7fedc68fd8126e099bfcd20c486f61680a28cbda831d15e0f37f3} pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" containerMessage="Container csi-driver failed liveness probe, will be restarted" Feb 23 20:12:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:12:44.873385 2199 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" containerID="cri-o://6cc6261e45b7fedc68fd8126e099bfcd20c486f61680a28cbda831d15e0f37f3" gracePeriod=30 Feb 23 20:12:44 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:44.873617638Z" level=info msg="Stopping container: 6cc6261e45b7fedc68fd8126e099bfcd20c486f61680a28cbda831d15e0f37f3 (timeout: 30s)" id=ec06a92e-3a06-47aa-9859-2384978f8d2f name=/runtime.v1.RuntimeService/StopContainer Feb 23 20:12:48 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:48.635001580Z" level=warning msg="Failed to find container exit file for 6cc6261e45b7fedc68fd8126e099bfcd20c486f61680a28cbda831d15e0f37f3: timed out waiting for the condition" id=ec06a92e-3a06-47aa-9859-2384978f8d2f name=/runtime.v1.RuntimeService/StopContainer Feb 23 20:12:48 ip-10-0-136-68 systemd[1]: 
var-lib-containers-storage-overlay-c4c0fa4f80632e6aabcddc25f2d705e898a63722173fa045dc444e2f0c83a54c-merged.mount: Deactivated successfully. Feb 23 20:12:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:50.251325243Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(4bd951ea0bf56625a84386cf6416c632ae2d8944a2e9da6f35fab2685790560e): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=7402d723-aa3e-4ce5-bb6b-00ed094e36ea name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:12:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:50.251388189Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 4bd951ea0bf56625a84386cf6416c632ae2d8944a2e9da6f35fab2685790560e" id=7402d723-aa3e-4ce5-bb6b-00ed094e36ea name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:12:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:50.251841973Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(c2ee9d4eb7f7919e8f32c0565707ba9ced6a1dae06374f0548cd52d73f425690): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b1f22ee5-50e9-49f0-82de-e24d0ffb092a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 
20:12:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:50.251874807Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox c2ee9d4eb7f7919e8f32c0565707ba9ced6a1dae06374f0548cd52d73f425690" id=b1f22ee5-50e9-49f0-82de-e24d0ffb092a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:12:50 ip-10-0-136-68 systemd[1]: run-utsns-edc9637c\x2dd354\x2d449d\x2d83ab\x2d6aea7e79b805.mount: Deactivated successfully. Feb 23 20:12:50 ip-10-0-136-68 systemd[1]: run-utsns-67cb3160\x2daebe\x2d4e2a\x2dac43\x2d14105be2b01c.mount: Deactivated successfully. Feb 23 20:12:50 ip-10-0-136-68 systemd[1]: run-ipcns-edc9637c\x2dd354\x2d449d\x2d83ab\x2d6aea7e79b805.mount: Deactivated successfully. Feb 23 20:12:50 ip-10-0-136-68 systemd[1]: run-ipcns-67cb3160\x2daebe\x2d4e2a\x2dac43\x2d14105be2b01c.mount: Deactivated successfully. Feb 23 20:12:50 ip-10-0-136-68 systemd[1]: run-netns-edc9637c\x2dd354\x2d449d\x2d83ab\x2d6aea7e79b805.mount: Deactivated successfully. Feb 23 20:12:50 ip-10-0-136-68 systemd[1]: run-netns-67cb3160\x2daebe\x2d4e2a\x2dac43\x2d14105be2b01c.mount: Deactivated successfully. 
Feb 23 20:12:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:50.282356647Z" level=info msg="runSandbox: deleting pod ID 4bd951ea0bf56625a84386cf6416c632ae2d8944a2e9da6f35fab2685790560e from idIndex" id=7402d723-aa3e-4ce5-bb6b-00ed094e36ea name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:12:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:50.282403797Z" level=info msg="runSandbox: removing pod sandbox 4bd951ea0bf56625a84386cf6416c632ae2d8944a2e9da6f35fab2685790560e" id=7402d723-aa3e-4ce5-bb6b-00ed094e36ea name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:12:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:50.282445424Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 4bd951ea0bf56625a84386cf6416c632ae2d8944a2e9da6f35fab2685790560e" id=7402d723-aa3e-4ce5-bb6b-00ed094e36ea name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:12:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:50.282464967Z" level=info msg="runSandbox: unmounting shmPath for sandbox 4bd951ea0bf56625a84386cf6416c632ae2d8944a2e9da6f35fab2685790560e" id=7402d723-aa3e-4ce5-bb6b-00ed094e36ea name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:12:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:50.282360055Z" level=info msg="runSandbox: deleting pod ID c2ee9d4eb7f7919e8f32c0565707ba9ced6a1dae06374f0548cd52d73f425690 from idIndex" id=b1f22ee5-50e9-49f0-82de-e24d0ffb092a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:12:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:50.282527926Z" level=info msg="runSandbox: removing pod sandbox c2ee9d4eb7f7919e8f32c0565707ba9ced6a1dae06374f0548cd52d73f425690" id=b1f22ee5-50e9-49f0-82de-e24d0ffb092a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:12:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:50.282566005Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox c2ee9d4eb7f7919e8f32c0565707ba9ced6a1dae06374f0548cd52d73f425690" 
id=b1f22ee5-50e9-49f0-82de-e24d0ffb092a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:12:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:50.282587887Z" level=info msg="runSandbox: unmounting shmPath for sandbox c2ee9d4eb7f7919e8f32c0565707ba9ced6a1dae06374f0548cd52d73f425690" id=b1f22ee5-50e9-49f0-82de-e24d0ffb092a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:12:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:50.290299581Z" level=info msg="runSandbox: removing pod sandbox from storage: c2ee9d4eb7f7919e8f32c0565707ba9ced6a1dae06374f0548cd52d73f425690" id=b1f22ee5-50e9-49f0-82de-e24d0ffb092a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:12:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:50.291320048Z" level=info msg="runSandbox: removing pod sandbox from storage: 4bd951ea0bf56625a84386cf6416c632ae2d8944a2e9da6f35fab2685790560e" id=7402d723-aa3e-4ce5-bb6b-00ed094e36ea name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:12:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:50.292063210Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=b1f22ee5-50e9-49f0-82de-e24d0ffb092a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:12:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:50.292179567Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=b1f22ee5-50e9-49f0-82de-e24d0ffb092a name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:12:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:50.294838658Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=7402d723-aa3e-4ce5-bb6b-00ed094e36ea name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:12:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:50.294950851Z" level=info msg="runSandbox: 
releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=7402d723-aa3e-4ce5-bb6b-00ed094e36ea name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:12:50 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:12:50.295793 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(4bd951ea0bf56625a84386cf6416c632ae2d8944a2e9da6f35fab2685790560e): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Feb 23 20:12:50 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:12:50.295871 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(4bd951ea0bf56625a84386cf6416c632ae2d8944a2e9da6f35fab2685790560e): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 20:12:50 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:12:50.295905 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(4bd951ea0bf56625a84386cf6416c632ae2d8944a2e9da6f35fab2685790560e): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 20:12:50 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:12:50.295979 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(4bd951ea0bf56625a84386cf6416c632ae2d8944a2e9da6f35fab2685790560e): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 20:12:50 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:12:50.295987 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(c2ee9d4eb7f7919e8f32c0565707ba9ced6a1dae06374f0548cd52d73f425690): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Feb 23 20:12:50 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:12:50.296024 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(c2ee9d4eb7f7919e8f32c0565707ba9ced6a1dae06374f0548cd52d73f425690): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 20:12:50 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:12:50.296052 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(c2ee9d4eb7f7919e8f32c0565707ba9ced6a1dae06374f0548cd52d73f425690): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 20:12:50 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:12:50.296153 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(c2ee9d4eb7f7919e8f32c0565707ba9ced6a1dae06374f0548cd52d73f425690): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 20:12:51 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-c2ee9d4eb7f7919e8f32c0565707ba9ced6a1dae06374f0548cd52d73f425690-userdata-shm.mount: Deactivated successfully. Feb 23 20:12:51 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-4bd951ea0bf56625a84386cf6416c632ae2d8944a2e9da6f35fab2685790560e-userdata-shm.mount: Deactivated successfully. Feb 23 20:12:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:52.249656405Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(016c5e014e6ba110cd0ee7e0885cd795b62520e59418177d78c1d8dc1310e0d0): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=452db2e5-8a5e-43e1-8ed7-e7740bf4a809 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:12:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:52.249704181Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 016c5e014e6ba110cd0ee7e0885cd795b62520e59418177d78c1d8dc1310e0d0" id=452db2e5-8a5e-43e1-8ed7-e7740bf4a809 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:12:52 ip-10-0-136-68 systemd[1]: run-utsns-6a3f9a72\x2d7d10\x2d4208\x2db242\x2db9ebaa5cf824.mount: Deactivated successfully. Feb 23 20:12:52 ip-10-0-136-68 systemd[1]: run-ipcns-6a3f9a72\x2d7d10\x2d4208\x2db242\x2db9ebaa5cf824.mount: Deactivated successfully. 
Feb 23 20:12:52 ip-10-0-136-68 systemd[1]: run-netns-6a3f9a72\x2d7d10\x2d4208\x2db242\x2db9ebaa5cf824.mount: Deactivated successfully. Feb 23 20:12:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:52.265317051Z" level=info msg="runSandbox: deleting pod ID 016c5e014e6ba110cd0ee7e0885cd795b62520e59418177d78c1d8dc1310e0d0 from idIndex" id=452db2e5-8a5e-43e1-8ed7-e7740bf4a809 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:12:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:52.265355477Z" level=info msg="runSandbox: removing pod sandbox 016c5e014e6ba110cd0ee7e0885cd795b62520e59418177d78c1d8dc1310e0d0" id=452db2e5-8a5e-43e1-8ed7-e7740bf4a809 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:12:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:52.265393122Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 016c5e014e6ba110cd0ee7e0885cd795b62520e59418177d78c1d8dc1310e0d0" id=452db2e5-8a5e-43e1-8ed7-e7740bf4a809 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:12:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:52.265413989Z" level=info msg="runSandbox: unmounting shmPath for sandbox 016c5e014e6ba110cd0ee7e0885cd795b62520e59418177d78c1d8dc1310e0d0" id=452db2e5-8a5e-43e1-8ed7-e7740bf4a809 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:12:52 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-016c5e014e6ba110cd0ee7e0885cd795b62520e59418177d78c1d8dc1310e0d0-userdata-shm.mount: Deactivated successfully. 
Feb 23 20:12:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:52.282304609Z" level=info msg="runSandbox: removing pod sandbox from storage: 016c5e014e6ba110cd0ee7e0885cd795b62520e59418177d78c1d8dc1310e0d0" id=452db2e5-8a5e-43e1-8ed7-e7740bf4a809 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:12:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:52.283898227Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=452db2e5-8a5e-43e1-8ed7-e7740bf4a809 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:12:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:52.283930105Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=452db2e5-8a5e-43e1-8ed7-e7740bf4a809 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:12:52 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:12:52.284179 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(016c5e014e6ba110cd0ee7e0885cd795b62520e59418177d78c1d8dc1310e0d0): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 20:12:52 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:12:52.284237 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(016c5e014e6ba110cd0ee7e0885cd795b62520e59418177d78c1d8dc1310e0d0): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 20:12:52 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:12:52.284302 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(016c5e014e6ba110cd0ee7e0885cd795b62520e59418177d78c1d8dc1310e0d0): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 20:12:52 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:12:52.284370 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(016c5e014e6ba110cd0ee7e0885cd795b62520e59418177d78c1d8dc1310e0d0): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 20:12:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:52.410088406Z" level=warning msg="Failed to find container exit file for 6cc6261e45b7fedc68fd8126e099bfcd20c486f61680a28cbda831d15e0f37f3: timed out waiting for the condition" id=ec06a92e-3a06-47aa-9859-2384978f8d2f name=/runtime.v1.RuntimeService/StopContainer Feb 23 20:12:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:52.413089967Z" level=info msg="Stopped container 6cc6261e45b7fedc68fd8126e099bfcd20c486f61680a28cbda831d15e0f37f3: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=ec06a92e-3a06-47aa-9859-2384978f8d2f name=/runtime.v1.RuntimeService/StopContainer Feb 23 20:12:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:52.413788366Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=5e40ab21-d90b-4c21-b5e4-b242b8f697ac name=/runtime.v1.ImageService/ImageStatus Feb 23 20:12:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:52.413950990Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=5e40ab21-d90b-4c21-b5e4-b242b8f697ac name=/runtime.v1.ImageService/ImageStatus Feb 23 20:12:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:52.414523466Z" level=info msg="Checking image status: 
registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=37641f4c-bc44-4c96-b3ba-ad8515d34aef name=/runtime.v1.ImageService/ImageStatus Feb 23 20:12:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:52.414688257Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=37641f4c-bc44-4c96-b3ba-ad8515d34aef name=/runtime.v1.ImageService/ImageStatus Feb 23 20:12:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:52.415331417Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=61508f56-c0a2-434c-b796-76308a00fd03 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 20:12:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:52.415419446Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 20:12:52 ip-10-0-136-68 systemd[1]: Started crio-conmon-4a114f57874110ff99522b285ac21126d029601047e97619a841459357e0faf2.scope. Feb 23 20:12:52 ip-10-0-136-68 systemd[1]: Started libcontainer container 4a114f57874110ff99522b285ac21126d029601047e97619a841459357e0faf2. Feb 23 20:12:52 ip-10-0-136-68 conmon[22240]: conmon 4a114f57874110ff9952 : Failed to write to cgroup.event_control Operation not supported Feb 23 20:12:52 ip-10-0-136-68 systemd[1]: crio-conmon-4a114f57874110ff99522b285ac21126d029601047e97619a841459357e0faf2.scope: Deactivated successfully. 
Feb 23 20:12:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:52.553394276Z" level=info msg="Created container 4a114f57874110ff99522b285ac21126d029601047e97619a841459357e0faf2: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=61508f56-c0a2-434c-b796-76308a00fd03 name=/runtime.v1.RuntimeService/CreateContainer Feb 23 20:12:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:52.553997576Z" level=info msg="Starting container: 4a114f57874110ff99522b285ac21126d029601047e97619a841459357e0faf2" id=280d5773-2822-4bd7-a8df-9f435807fa32 name=/runtime.v1.RuntimeService/StartContainer Feb 23 20:12:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:52.561655173Z" level=info msg="Started container" PID=22252 containerID=4a114f57874110ff99522b285ac21126d029601047e97619a841459357e0faf2 description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver id=280d5773-2822-4bd7-a8df-9f435807fa32 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4ac357af36e7dd4792f93249425e93fd84aed45840e9fbb3d0ae6c0af964798 Feb 23 20:12:52 ip-10-0-136-68 systemd[1]: crio-4a114f57874110ff99522b285ac21126d029601047e97619a841459357e0faf2.scope: Deactivated successfully. 
Feb 23 20:12:52 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:52.986341722Z" level=warning msg="Failed to find container exit file for 6cc6261e45b7fedc68fd8126e099bfcd20c486f61680a28cbda831d15e0f37f3: timed out waiting for the condition" id=194f867d-a7ca-43bc-8ed5-dceeb41301e3 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 20:12:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:12:56.292229 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:12:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:12:56.292523 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:12:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:12:56.292722 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:12:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 
20:12:56.292746 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 20:12:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:12:56.736024336Z" level=warning msg="Failed to find container exit file for f593d401d06964c92f54c75f4c95111cb614feacd6ec24eb96f0e3bc8aa5f8c0: timed out waiting for the condition" id=a23c25b8-ba8c-4fd2-be0a-088a72855159 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 20:12:56 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:12:56.737036 2199 generic.go:332] "Generic (PLEG): container finished" podID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerID="6cc6261e45b7fedc68fd8126e099bfcd20c486f61680a28cbda831d15e0f37f3" exitCode=-1 Feb 23 20:12:56 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:12:56.737076 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerDied Data:6cc6261e45b7fedc68fd8126e099bfcd20c486f61680a28cbda831d15e0f37f3} Feb 23 20:12:56 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:12:56.737116 2199 scope.go:115] "RemoveContainer" containerID="f593d401d06964c92f54c75f4c95111cb614feacd6ec24eb96f0e3bc8aa5f8c0" Feb 23 20:13:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:13:00.484940266Z" level=warning msg="Failed to find container exit file for f593d401d06964c92f54c75f4c95111cb614feacd6ec24eb96f0e3bc8aa5f8c0: timed out waiting for the condition" id=020a6e2c-d71b-4c30-8445-0d671a07af4f name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 20:13:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:13:01.489199804Z" level=warning 
msg="Failed to find container exit file for 6cc6261e45b7fedc68fd8126e099bfcd20c486f61680a28cbda831d15e0f37f3: timed out waiting for the condition" id=b1a199c8-f030-48c6-81cb-7f804257c821 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 20:13:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:13:04.237054988Z" level=warning msg="Failed to find container exit file for f593d401d06964c92f54c75f4c95111cb614feacd6ec24eb96f0e3bc8aa5f8c0: timed out waiting for the condition" id=b9f404f5-c46f-4cad-8f16-c45aa5952a06 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 20:13:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:13:04.237452692Z" level=info msg="Removing container: f593d401d06964c92f54c75f4c95111cb614feacd6ec24eb96f0e3bc8aa5f8c0" id=b48341d2-4dbc-413f-b9fa-1ad6dbd716a7 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 20:13:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:13:05.228008287Z" level=warning msg="Failed to find container exit file for f593d401d06964c92f54c75f4c95111cb614feacd6ec24eb96f0e3bc8aa5f8c0: timed out waiting for the condition" id=a2818341-fccd-490e-ab40-ff9ecfad1ad4 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 20:13:05 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:13:05.229046 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerStarted Data:4a114f57874110ff99522b285ac21126d029601047e97619a841459357e0faf2} Feb 23 20:13:05 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:13:05.229296 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 20:13:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:13:05.229606968Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=0da00f5c-edad-41cc-86c3-d45c32b207fa name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:13:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:13:05.229660492Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 20:13:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:13:05.235153941Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:be31c76a1a1824a316279bcb11b532eb82f70fb7065a84c6bdb16b2dce77d14f UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/0c1512c9-2d50-4d02-b07e-23265b00ecc3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:13:05 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:13:05.235187786Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:13:06 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:13:06.216775 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 20:13:06 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:13:06.216878 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 20:13:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:13:06.217201291Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=568b1ea2-f29e-4b54-845b-1a4c6b276f00 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:13:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:13:06.217297747Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 20:13:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:13:06.217201636Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=83105e74-9133-4073-95ff-6617996cf3e6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:13:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:13:06.217748492Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 20:13:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:13:06.224976787Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:7211014f2fd87f0509fab060a72601c0bf654f40084ddbac64793245d73d103c UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/7722fd01-b09f-4d93-b059-e7d0d15500ce Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:13:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:13:06.225013344Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:13:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:13:06.226087916Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:f911535d9b793daa4e96ac2be919a66e5fac633ba286c9c40b5292ee9c50aa0b UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 
NetNS:/var/run/netns/4046d32a-5607-4f62-a91c-70a4237aad0c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:13:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:13:06.226118922Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:13:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:13:07.999263788Z" level=warning msg="Failed to find container exit file for f593d401d06964c92f54c75f4c95111cb614feacd6ec24eb96f0e3bc8aa5f8c0: timed out waiting for the condition" id=b48341d2-4dbc-413f-b9fa-1ad6dbd716a7 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 20:13:08 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:13:08.012364428Z" level=info msg="Removed container f593d401d06964c92f54c75f4c95111cb614feacd6ec24eb96f0e3bc8aa5f8c0: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=b48341d2-4dbc-413f-b9fa-1ad6dbd716a7 name=/runtime.v1.RuntimeService/RemoveContainer Feb 23 20:13:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:13:11.996033584Z" level=warning msg="Failed to find container exit file for 6cc6261e45b7fedc68fd8126e099bfcd20c486f61680a28cbda831d15e0f37f3: timed out waiting for the condition" id=70ecde06-642b-4bd2-917c-5fb86c41e197 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 20:13:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:13:14.872821 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 20:13:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:13:14.872875 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 
containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 20:13:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:13:24.871968 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 20:13:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:13:24.872021 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 20:13:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:13:26.292370 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:13:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:13:26.292625 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:13:26 
ip-10-0-136-68 kubenswrapper[2199]: E0223 20:13:26.292837 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:13:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:13:26.292871 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 20:13:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:13:27.244178767Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(46c8c027fca6299b196028038d7436d4c81c531ce635ac30998e0734091b0776): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=1efb7391-35a7-4ec9-9e48-c58b24f9189c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:13:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:13:27.244231032Z" level=info msg="runSandbox: cleaning up namespaces after failing to 
run sandbox 46c8c027fca6299b196028038d7436d4c81c531ce635ac30998e0734091b0776" id=1efb7391-35a7-4ec9-9e48-c58b24f9189c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:13:27 ip-10-0-136-68 systemd[1]: run-utsns-ec6c4945\x2d3213\x2d43f7\x2d882b\x2d0a93228ef6ac.mount: Deactivated successfully. Feb 23 20:13:27 ip-10-0-136-68 systemd[1]: run-ipcns-ec6c4945\x2d3213\x2d43f7\x2d882b\x2d0a93228ef6ac.mount: Deactivated successfully. Feb 23 20:13:27 ip-10-0-136-68 systemd[1]: run-netns-ec6c4945\x2d3213\x2d43f7\x2d882b\x2d0a93228ef6ac.mount: Deactivated successfully. Feb 23 20:13:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:13:27.271340530Z" level=info msg="runSandbox: deleting pod ID 46c8c027fca6299b196028038d7436d4c81c531ce635ac30998e0734091b0776 from idIndex" id=1efb7391-35a7-4ec9-9e48-c58b24f9189c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:13:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:13:27.271381202Z" level=info msg="runSandbox: removing pod sandbox 46c8c027fca6299b196028038d7436d4c81c531ce635ac30998e0734091b0776" id=1efb7391-35a7-4ec9-9e48-c58b24f9189c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:13:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:13:27.271425795Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 46c8c027fca6299b196028038d7436d4c81c531ce635ac30998e0734091b0776" id=1efb7391-35a7-4ec9-9e48-c58b24f9189c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:13:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:13:27.271438968Z" level=info msg="runSandbox: unmounting shmPath for sandbox 46c8c027fca6299b196028038d7436d4c81c531ce635ac30998e0734091b0776" id=1efb7391-35a7-4ec9-9e48-c58b24f9189c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:13:27 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-46c8c027fca6299b196028038d7436d4c81c531ce635ac30998e0734091b0776-userdata-shm.mount: Deactivated successfully. 
Feb 23 20:13:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:13:27.277307841Z" level=info msg="runSandbox: removing pod sandbox from storage: 46c8c027fca6299b196028038d7436d4c81c531ce635ac30998e0734091b0776" id=1efb7391-35a7-4ec9-9e48-c58b24f9189c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:13:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:13:27.279162889Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=1efb7391-35a7-4ec9-9e48-c58b24f9189c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:13:27 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:13:27.279194098Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=1efb7391-35a7-4ec9-9e48-c58b24f9189c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:13:27 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:13:27.279461 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(46c8c027fca6299b196028038d7436d4c81c531ce635ac30998e0734091b0776): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 20:13:27 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:13:27.279522 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(46c8c027fca6299b196028038d7436d4c81c531ce635ac30998e0734091b0776): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 20:13:27 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:13:27.279546 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(46c8c027fca6299b196028038d7436d4c81c531ce635ac30998e0734091b0776): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 20:13:27 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:13:27.279611 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(46c8c027fca6299b196028038d7436d4c81c531ce635ac30998e0734091b0776): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 20:13:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:13:29.236358037Z" level=info msg="NetworkStart: stopping network for sandbox a90f28445cf33ec8c86182b2a872ea53bb845b3e8ca91ac7876acdf6db642617" id=e0356212-3172-4f31-87bc-50fee9ec8d7c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:13:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:13:29.236526875Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:a90f28445cf33ec8c86182b2a872ea53bb845b3e8ca91ac7876acdf6db642617 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/26602122-7f3f-4a22-b24d-38cb31d3b5ed Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:13:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:13:29.236563732Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 20:13:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:13:29.236572857Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 20:13:29 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:13:29.236580429Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:13:34 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:13:34.217501 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f 
/etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:13:34 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:13:34.217751 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:13:34 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:13:34.217934 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:13:34 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:13:34.217965 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 20:13:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:13:34.872771 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 20:13:34 
ip-10-0-136-68 kubenswrapper[2199]: I0223 20:13:34.872829 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 20:13:41 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:13:41.217113 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 20:13:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:13:41.217499872Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=7c7a19f0-89f5-45bc-891a-a32ed153e973 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:13:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:13:41.217573697Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 20:13:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:13:41.222997324Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:7e972a29ad36032fd7b84dd0fc81763b7e25d87c07e194af67e1ca16a300200f UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/e00ecad7-a2dd-4ed7-96ff-cb84760b1d6f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:13:41 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:13:41.223031699Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:13:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:13:44.872852 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get 
\"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 20:13:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:13:44.872915 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 20:13:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:13:50.247050854Z" level=info msg="NetworkStart: stopping network for sandbox be31c76a1a1824a316279bcb11b532eb82f70fb7065a84c6bdb16b2dce77d14f" id=0da00f5c-edad-41cc-86c3-d45c32b207fa name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:13:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:13:50.247154631Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:be31c76a1a1824a316279bcb11b532eb82f70fb7065a84c6bdb16b2dce77d14f UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/0c1512c9-2d50-4d02-b07e-23265b00ecc3 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:13:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:13:50.247181093Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 20:13:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:13:50.247188028Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 20:13:50 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:13:50.247194713Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:13:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:13:51.238432941Z" level=info msg="NetworkStart: stopping network for sandbox 7211014f2fd87f0509fab060a72601c0bf654f40084ddbac64793245d73d103c" 
id=568b1ea2-f29e-4b54-845b-1a4c6b276f00 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:13:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:13:51.238555011Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:7211014f2fd87f0509fab060a72601c0bf654f40084ddbac64793245d73d103c UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/7722fd01-b09f-4d93-b059-e7d0d15500ce Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:13:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:13:51.238584010Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 20:13:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:13:51.238591574Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 20:13:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:13:51.238599074Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:13:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:13:51.239754111Z" level=info msg="NetworkStart: stopping network for sandbox f911535d9b793daa4e96ac2be919a66e5fac633ba286c9c40b5292ee9c50aa0b" id=83105e74-9133-4073-95ff-6617996cf3e6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:13:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:13:51.239844655Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:f911535d9b793daa4e96ac2be919a66e5fac633ba286c9c40b5292ee9c50aa0b UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/4046d32a-5607-4f62-a91c-70a4237aad0c Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:13:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:13:51.239877526Z" level=error msg="error 
loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 20:13:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:13:51.239887157Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 20:13:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:13:51.239895795Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:13:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:13:54.872433 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 20:13:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:13:54.872493 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 20:13:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:13:54.872522 2199 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 20:13:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:13:54.873028 2199 kuberuntime_manager.go:659] "Message for Container of pod" containerName="csi-driver" containerStatusID={Type:cri-o ID:4a114f57874110ff99522b285ac21126d029601047e97619a841459357e0faf2} pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" containerMessage="Container csi-driver failed liveness probe, will be restarted" Feb 23 20:13:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:13:54.873186 2199 kuberuntime_container.go:709] "Killing container with a grace period" 
pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" containerID="cri-o://4a114f57874110ff99522b285ac21126d029601047e97619a841459357e0faf2" gracePeriod=30 Feb 23 20:13:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:13:54.873453192Z" level=info msg="Stopping container: 4a114f57874110ff99522b285ac21126d029601047e97619a841459357e0faf2 (timeout: 30s)" id=12403f57-0d53-4343-90ea-74e3a079455a name=/runtime.v1.RuntimeService/StopContainer Feb 23 20:13:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:13:56.292027 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:13:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:13:56.292287 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:13:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:13:56.292504 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" 
containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:13:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:13:56.292560 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 20:13:58 ip-10-0-136-68 sshd[22471]: main: sshd: ssh-rsa algorithm is disabled Feb 23 20:13:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:13:58.635161929Z" level=warning msg="Failed to find container exit file for 4a114f57874110ff99522b285ac21126d029601047e97619a841459357e0faf2: timed out waiting for the condition" id=12403f57-0d53-4343-90ea-74e3a079455a name=/runtime.v1.RuntimeService/StopContainer Feb 23 20:13:58 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-560de3d85e2623316e019445a0693da4cc112f5aab63749fefed6dc6f0dd9a8f-merged.mount: Deactivated successfully. Feb 23 20:13:59 ip-10-0-136-68 sshd[22471]: Accepted publickey for core from 10.0.182.221 port 51264 ssh2: RSA SHA256:Ez+JFROVIkSQ/eAziisgy16VY49IFSr8A84gQk7WcPc Feb 23 20:13:59 ip-10-0-136-68 systemd-logind[985]: New session 5 of user core. Feb 23 20:13:59 ip-10-0-136-68 systemd[1]: Created slice User Slice of UID 1000. Feb 23 20:13:59 ip-10-0-136-68 systemd[1]: Starting User Runtime Directory /run/user/1000... Feb 23 20:13:59 ip-10-0-136-68 systemd[1]: Finished User Runtime Directory /run/user/1000. Feb 23 20:13:59 ip-10-0-136-68 systemd[1]: Starting User Manager for UID 1000... 
Feb 23 20:13:59 ip-10-0-136-68 systemd[22495]: pam_unix(systemd-user:session): session opened for user core(uid=1000) by (uid=0) Feb 23 20:13:59 ip-10-0-136-68 systemd[22501]: /usr/lib/systemd/user-generators/podman-user-generator failed with exit status 1. Feb 23 20:13:59 ip-10-0-136-68 systemd[22495]: Queued start job for default target Main User Target. Feb 23 20:13:59 ip-10-0-136-68 systemd[22495]: Created slice User Application Slice. Feb 23 20:13:59 ip-10-0-136-68 systemd[22495]: Started Daily Cleanup of User's Temporary Directories. Feb 23 20:13:59 ip-10-0-136-68 systemd[22495]: Reached target Paths. Feb 23 20:13:59 ip-10-0-136-68 systemd[22495]: Reached target Timers. Feb 23 20:13:59 ip-10-0-136-68 systemd[22495]: Starting D-Bus User Message Bus Socket... Feb 23 20:13:59 ip-10-0-136-68 systemd[22495]: Starting Create User's Volatile Files and Directories... Feb 23 20:13:59 ip-10-0-136-68 systemd[22495]: Listening on D-Bus User Message Bus Socket. Feb 23 20:13:59 ip-10-0-136-68 systemd[22495]: Reached target Sockets. Feb 23 20:13:59 ip-10-0-136-68 systemd[22495]: Finished Create User's Volatile Files and Directories. Feb 23 20:13:59 ip-10-0-136-68 systemd[22495]: Reached target Basic System. Feb 23 20:13:59 ip-10-0-136-68 systemd[22495]: Reached target Main User Target. Feb 23 20:13:59 ip-10-0-136-68 systemd[22495]: Startup finished in 101ms. Feb 23 20:13:59 ip-10-0-136-68 systemd[1]: Started User Manager for UID 1000. Feb 23 20:13:59 ip-10-0-136-68 systemd[1]: Started Session 5 of User core. Feb 23 20:13:59 ip-10-0-136-68 sshd[22471]: pam_unix(sshd:session): session opened for user core(uid=1000) by (uid=0) Feb 23 20:13:59 ip-10-0-136-68 sudo[22514]: core : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/bash Feb 23 20:13:59 ip-10-0-136-68 sudo[22514]: pam_unix(sudo-i:session): session opened for user root(uid=0) by core(uid=1000) Feb 23 20:13:59 ip-10-0-136-68 systemd[1]: Starting Hostname Service... 
Feb 23 20:13:59 ip-10-0-136-68 systemd[1]: Started Hostname Service. Feb 23 20:14:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:02.423225015Z" level=warning msg="Failed to find container exit file for 4a114f57874110ff99522b285ac21126d029601047e97619a841459357e0faf2: timed out waiting for the condition" id=12403f57-0d53-4343-90ea-74e3a079455a name=/runtime.v1.RuntimeService/StopContainer Feb 23 20:14:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:02.424997995Z" level=info msg="Stopped container 4a114f57874110ff99522b285ac21126d029601047e97619a841459357e0faf2: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=12403f57-0d53-4343-90ea-74e3a079455a name=/runtime.v1.RuntimeService/StopContainer Feb 23 20:14:02 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:14:02.425555 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 20:14:02 ip-10-0-136-68 systemd[1]: Starting rpm-ostree System Management Daemon... 
Feb 23 20:14:02 ip-10-0-136-68 rpm-ostree[22577]: Reading config file '/etc/rpm-ostreed.conf'
Feb 23 20:14:02 ip-10-0-136-68 rpm-ostree[22577]: failed to query container image base metadata: Missing base image ref ostree/container/blob/sha256_3A_4f92d094360fb582b58beaa7fd99fcdcab8b2af5cfe78b5cc5d9b36be254c3b7
Feb 23 20:14:02 ip-10-0-136-68 rpm-ostree[22577]: failed to query container image base metadata: Missing base image ref ostree/container/blob/sha256_3A_4f92d094360fb582b58beaa7fd99fcdcab8b2af5cfe78b5cc5d9b36be254c3b7
Feb 23 20:14:02 ip-10-0-136-68 rpm-ostree[22577]: In idle state; will auto-exit in 63 seconds
Feb 23 20:14:02 ip-10-0-136-68 systemd[1]: Started rpm-ostree System Management Daemon.
Feb 23 20:14:02 ip-10-0-136-68 rpm-ostree[22577]: client(id:cli dbus:1.373 unit:session-5.scope uid:0) added; new total=1
Feb 23 20:14:02 ip-10-0-136-68 rpm-ostree[22577]: failed to query container image base metadata: Missing base image ref ostree/container/blob/sha256_3A_4f92d094360fb582b58beaa7fd99fcdcab8b2af5cfe78b5cc5d9b36be254c3b7
Feb 23 20:14:02 ip-10-0-136-68 rpm-ostree[22577]: client(id:cli dbus:1.373 unit:session-5.scope uid:0) vanished; remaining=0
Feb 23 20:14:02 ip-10-0-136-68 rpm-ostree[22577]: In idle state; will auto-exit in 62 seconds
Feb 23 20:14:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:02.808086776Z" level=warning msg="Failed to find container exit file for 4a114f57874110ff99522b285ac21126d029601047e97619a841459357e0faf2: timed out waiting for the condition" id=343523ed-43fc-4d32-b169-52819a83d0b9 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 20:14:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:06.569021016Z" level=warning msg="Failed to find container exit file for 6cc6261e45b7fedc68fd8126e099bfcd20c486f61680a28cbda831d15e0f37f3: timed out waiting for the condition" id=c40542b0-e704-40b4-8002-cb77645cdc34 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 20:14:06 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:14:06.570020 2199 generic.go:332] "Generic (PLEG): container finished" podID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerID="4a114f57874110ff99522b285ac21126d029601047e97619a841459357e0faf2" exitCode=-1
Feb 23 20:14:06 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:14:06.570091 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerDied Data:4a114f57874110ff99522b285ac21126d029601047e97619a841459357e0faf2}
Feb 23 20:14:06 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:14:06.570199 2199 scope.go:115] "RemoveContainer" containerID="6cc6261e45b7fedc68fd8126e099bfcd20c486f61680a28cbda831d15e0f37f3"
Feb 23 20:14:07 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:14:07.571892 2199 scope.go:115] "RemoveContainer" containerID="4a114f57874110ff99522b285ac21126d029601047e97619a841459357e0faf2"
Feb 23 20:14:07 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:14:07.572352 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 20:14:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:10.331013248Z" level=warning msg="Failed to find container exit file for 6cc6261e45b7fedc68fd8126e099bfcd20c486f61680a28cbda831d15e0f37f3: timed out waiting for the condition" id=092c34f5-72fb-4221-b70c-86d1a981fd41 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 20:14:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:14.092082440Z" level=warning msg="Failed to find container exit file for 6cc6261e45b7fedc68fd8126e099bfcd20c486f61680a28cbda831d15e0f37f3: timed out waiting for the condition" id=41d817f9-a123-4f31-adab-aba2a96d84f0 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 20:14:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:14.092711659Z" level=info msg="Removing container: 6cc6261e45b7fedc68fd8126e099bfcd20c486f61680a28cbda831d15e0f37f3" id=1ea13d98-ab26-40d8-a22f-a1a9b2e751eb name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 20:14:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:14.247534525Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(a90f28445cf33ec8c86182b2a872ea53bb845b3e8ca91ac7876acdf6db642617): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e0356212-3172-4f31-87bc-50fee9ec8d7c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:14:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:14.247584014Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a90f28445cf33ec8c86182b2a872ea53bb845b3e8ca91ac7876acdf6db642617" id=e0356212-3172-4f31-87bc-50fee9ec8d7c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:14:14 ip-10-0-136-68 systemd[1]: run-utsns-26602122\x2d7f3f\x2d4a22\x2db24d\x2d38cb31d3b5ed.mount: Deactivated successfully.
Feb 23 20:14:14 ip-10-0-136-68 systemd[1]: run-ipcns-26602122\x2d7f3f\x2d4a22\x2db24d\x2d38cb31d3b5ed.mount: Deactivated successfully.
Feb 23 20:14:14 ip-10-0-136-68 systemd[1]: run-netns-26602122\x2d7f3f\x2d4a22\x2db24d\x2d38cb31d3b5ed.mount: Deactivated successfully.
Feb 23 20:14:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:14.267376519Z" level=info msg="runSandbox: deleting pod ID a90f28445cf33ec8c86182b2a872ea53bb845b3e8ca91ac7876acdf6db642617 from idIndex" id=e0356212-3172-4f31-87bc-50fee9ec8d7c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:14:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:14.267565745Z" level=info msg="runSandbox: removing pod sandbox a90f28445cf33ec8c86182b2a872ea53bb845b3e8ca91ac7876acdf6db642617" id=e0356212-3172-4f31-87bc-50fee9ec8d7c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:14:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:14.267605562Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a90f28445cf33ec8c86182b2a872ea53bb845b3e8ca91ac7876acdf6db642617" id=e0356212-3172-4f31-87bc-50fee9ec8d7c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:14:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:14.267623493Z" level=info msg="runSandbox: unmounting shmPath for sandbox a90f28445cf33ec8c86182b2a872ea53bb845b3e8ca91ac7876acdf6db642617" id=e0356212-3172-4f31-87bc-50fee9ec8d7c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:14:14 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-a90f28445cf33ec8c86182b2a872ea53bb845b3e8ca91ac7876acdf6db642617-userdata-shm.mount: Deactivated successfully.
Feb 23 20:14:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:14.272321558Z" level=info msg="runSandbox: removing pod sandbox from storage: a90f28445cf33ec8c86182b2a872ea53bb845b3e8ca91ac7876acdf6db642617" id=e0356212-3172-4f31-87bc-50fee9ec8d7c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:14:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:14.274386356Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=e0356212-3172-4f31-87bc-50fee9ec8d7c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:14:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:14.274504934Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=e0356212-3172-4f31-87bc-50fee9ec8d7c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:14:14 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:14:14.274804 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(a90f28445cf33ec8c86182b2a872ea53bb845b3e8ca91ac7876acdf6db642617): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 20:14:14 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:14:14.274875 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(a90f28445cf33ec8c86182b2a872ea53bb845b3e8ca91ac7876acdf6db642617): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk"
Feb 23 20:14:14 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:14:14.274908 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(a90f28445cf33ec8c86182b2a872ea53bb845b3e8ca91ac7876acdf6db642617): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk"
Feb 23 20:14:14 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:14:14.274986 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(a90f28445cf33ec8c86182b2a872ea53bb845b3e8ca91ac7876acdf6db642617): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687
Feb 23 20:14:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:17.842183650Z" level=warning msg="Failed to find container exit file for 6cc6261e45b7fedc68fd8126e099bfcd20c486f61680a28cbda831d15e0f37f3: timed out waiting for the condition" id=1ea13d98-ab26-40d8-a22f-a1a9b2e751eb name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 20:14:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:17.867553039Z" level=info msg="Removed container 6cc6261e45b7fedc68fd8126e099bfcd20c486f61680a28cbda831d15e0f37f3: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=1ea13d98-ab26-40d8-a22f-a1a9b2e751eb name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 20:14:21 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:14:21.216675 2199 scope.go:115] "RemoveContainer" containerID="4a114f57874110ff99522b285ac21126d029601047e97619a841459357e0faf2"
Feb 23 20:14:21 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:14:21.217371 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 20:14:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:22.339146115Z" level=warning msg="Failed to find container exit file for 4a114f57874110ff99522b285ac21126d029601047e97619a841459357e0faf2: timed out waiting for the condition" id=5467f137-d2a2-4ad7-8a7c-e932bb9afca1 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 20:14:26 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:14:26.216566 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk"
Feb 23 20:14:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:26.217001865Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=fbdc08e9-859e-4ced-a330-57c444a2ca57 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:14:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:26.217068975Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 20:14:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:26.224477129Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:49225e90ca6a74ec3aaa636b274f998c6030e7d5c8d23c9f701b063ece27c53b UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/993ee647-f019-4ffb-b90b-79a042255ac1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 20:14:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:26.224617763Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 20:14:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:26.235500583Z" level=info msg="NetworkStart: stopping network for sandbox 7e972a29ad36032fd7b84dd0fc81763b7e25d87c07e194af67e1ca16a300200f" id=7c7a19f0-89f5-45bc-891a-a32ed153e973 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:14:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:26.235596377Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:7e972a29ad36032fd7b84dd0fc81763b7e25d87c07e194af67e1ca16a300200f UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/e00ecad7-a2dd-4ed7-96ff-cb84760b1d6f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 20:14:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:26.235630042Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 20:14:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:26.235641357Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 20:14:26 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:26.235651032Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 20:14:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:14:26.292772 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 20:14:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:14:26.293056 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 20:14:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:14:26.293350 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 20:14:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:14:26.293381 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 20:14:29 ip-10-0-136-68 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 23 20:14:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:14:34.216661 2199 scope.go:115] "RemoveContainer" containerID="4a114f57874110ff99522b285ac21126d029601047e97619a841459357e0faf2"
Feb 23 20:14:34 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:14:34.217038 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 20:14:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:35.256751159Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(be31c76a1a1824a316279bcb11b532eb82f70fb7065a84c6bdb16b2dce77d14f): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=0da00f5c-edad-41cc-86c3-d45c32b207fa name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:14:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:35.256806054Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox be31c76a1a1824a316279bcb11b532eb82f70fb7065a84c6bdb16b2dce77d14f" id=0da00f5c-edad-41cc-86c3-d45c32b207fa name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:14:35 ip-10-0-136-68 systemd[1]: run-utsns-0c1512c9\x2d2d50\x2d4d02\x2db07e\x2d23265b00ecc3.mount: Deactivated successfully.
Feb 23 20:14:35 ip-10-0-136-68 systemd[1]: run-ipcns-0c1512c9\x2d2d50\x2d4d02\x2db07e\x2d23265b00ecc3.mount: Deactivated successfully.
Feb 23 20:14:35 ip-10-0-136-68 systemd[1]: run-netns-0c1512c9\x2d2d50\x2d4d02\x2db07e\x2d23265b00ecc3.mount: Deactivated successfully.
Feb 23 20:14:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:35.275320650Z" level=info msg="runSandbox: deleting pod ID be31c76a1a1824a316279bcb11b532eb82f70fb7065a84c6bdb16b2dce77d14f from idIndex" id=0da00f5c-edad-41cc-86c3-d45c32b207fa name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:14:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:35.275353599Z" level=info msg="runSandbox: removing pod sandbox be31c76a1a1824a316279bcb11b532eb82f70fb7065a84c6bdb16b2dce77d14f" id=0da00f5c-edad-41cc-86c3-d45c32b207fa name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:14:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:35.275379829Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox be31c76a1a1824a316279bcb11b532eb82f70fb7065a84c6bdb16b2dce77d14f" id=0da00f5c-edad-41cc-86c3-d45c32b207fa name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:14:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:35.275399446Z" level=info msg="runSandbox: unmounting shmPath for sandbox be31c76a1a1824a316279bcb11b532eb82f70fb7065a84c6bdb16b2dce77d14f" id=0da00f5c-edad-41cc-86c3-d45c32b207fa name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:14:35 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-be31c76a1a1824a316279bcb11b532eb82f70fb7065a84c6bdb16b2dce77d14f-userdata-shm.mount: Deactivated successfully.
Feb 23 20:14:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:35.281299105Z" level=info msg="runSandbox: removing pod sandbox from storage: be31c76a1a1824a316279bcb11b532eb82f70fb7065a84c6bdb16b2dce77d14f" id=0da00f5c-edad-41cc-86c3-d45c32b207fa name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:14:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:35.282801670Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=0da00f5c-edad-41cc-86c3-d45c32b207fa name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:14:35 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:35.282828756Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=0da00f5c-edad-41cc-86c3-d45c32b207fa name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:14:35 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:14:35.283049 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(be31c76a1a1824a316279bcb11b532eb82f70fb7065a84c6bdb16b2dce77d14f): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 20:14:35 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:14:35.283101 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(be31c76a1a1824a316279bcb11b532eb82f70fb7065a84c6bdb16b2dce77d14f): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4"
Feb 23 20:14:35 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:14:35.283130 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(be31c76a1a1824a316279bcb11b532eb82f70fb7065a84c6bdb16b2dce77d14f): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4"
Feb 23 20:14:35 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:14:35.283182 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(be31c76a1a1824a316279bcb11b532eb82f70fb7065a84c6bdb16b2dce77d14f): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f
Feb 23 20:14:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:36.248931972Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(f911535d9b793daa4e96ac2be919a66e5fac633ba286c9c40b5292ee9c50aa0b): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=83105e74-9133-4073-95ff-6617996cf3e6 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:14:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:36.249005315Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f911535d9b793daa4e96ac2be919a66e5fac633ba286c9c40b5292ee9c50aa0b" id=83105e74-9133-4073-95ff-6617996cf3e6 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:14:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:36.249607983Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(7211014f2fd87f0509fab060a72601c0bf654f40084ddbac64793245d73d103c): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=568b1ea2-f29e-4b54-845b-1a4c6b276f00 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:14:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:36.249656873Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 7211014f2fd87f0509fab060a72601c0bf654f40084ddbac64793245d73d103c" id=568b1ea2-f29e-4b54-845b-1a4c6b276f00 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:14:36 ip-10-0-136-68 systemd[1]: run-utsns-4046d32a\x2d5607\x2d4f62\x2da91c\x2d70a4237aad0c.mount: Deactivated successfully.
Feb 23 20:14:36 ip-10-0-136-68 systemd[1]: run-utsns-7722fd01\x2db09f\x2d4d93\x2db059\x2de7d0d15500ce.mount: Deactivated successfully.
Feb 23 20:14:36 ip-10-0-136-68 systemd[1]: run-ipcns-4046d32a\x2d5607\x2d4f62\x2da91c\x2d70a4237aad0c.mount: Deactivated successfully.
Feb 23 20:14:36 ip-10-0-136-68 systemd[1]: run-ipcns-7722fd01\x2db09f\x2d4d93\x2db059\x2de7d0d15500ce.mount: Deactivated successfully.
Feb 23 20:14:36 ip-10-0-136-68 systemd[1]: run-netns-7722fd01\x2db09f\x2d4d93\x2db059\x2de7d0d15500ce.mount: Deactivated successfully.
Feb 23 20:14:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:36.267326529Z" level=info msg="runSandbox: deleting pod ID 7211014f2fd87f0509fab060a72601c0bf654f40084ddbac64793245d73d103c from idIndex" id=568b1ea2-f29e-4b54-845b-1a4c6b276f00 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:14:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:36.267365996Z" level=info msg="runSandbox: removing pod sandbox 7211014f2fd87f0509fab060a72601c0bf654f40084ddbac64793245d73d103c" id=568b1ea2-f29e-4b54-845b-1a4c6b276f00 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:14:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:36.267407468Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 7211014f2fd87f0509fab060a72601c0bf654f40084ddbac64793245d73d103c" id=568b1ea2-f29e-4b54-845b-1a4c6b276f00 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:14:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:36.267427411Z" level=info msg="runSandbox: unmounting shmPath for sandbox 7211014f2fd87f0509fab060a72601c0bf654f40084ddbac64793245d73d103c" id=568b1ea2-f29e-4b54-845b-1a4c6b276f00 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:14:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:36.270311631Z" level=info msg="runSandbox: deleting pod ID f911535d9b793daa4e96ac2be919a66e5fac633ba286c9c40b5292ee9c50aa0b from idIndex" id=83105e74-9133-4073-95ff-6617996cf3e6 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:14:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:36.270342662Z" level=info msg="runSandbox: removing pod sandbox f911535d9b793daa4e96ac2be919a66e5fac633ba286c9c40b5292ee9c50aa0b" id=83105e74-9133-4073-95ff-6617996cf3e6 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:14:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:36.270374797Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f911535d9b793daa4e96ac2be919a66e5fac633ba286c9c40b5292ee9c50aa0b" id=83105e74-9133-4073-95ff-6617996cf3e6 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:14:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:36.270400524Z" level=info msg="runSandbox: unmounting shmPath for sandbox f911535d9b793daa4e96ac2be919a66e5fac633ba286c9c40b5292ee9c50aa0b" id=83105e74-9133-4073-95ff-6617996cf3e6 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:14:36 ip-10-0-136-68 systemd[1]: run-netns-4046d32a\x2d5607\x2d4f62\x2da91c\x2d70a4237aad0c.mount: Deactivated successfully.
Feb 23 20:14:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:36.273315011Z" level=info msg="runSandbox: removing pod sandbox from storage: 7211014f2fd87f0509fab060a72601c0bf654f40084ddbac64793245d73d103c" id=568b1ea2-f29e-4b54-845b-1a4c6b276f00 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:14:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:36.275031970Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=568b1ea2-f29e-4b54-845b-1a4c6b276f00 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:14:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:36.275062579Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=568b1ea2-f29e-4b54-845b-1a4c6b276f00 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:14:36 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-f911535d9b793daa4e96ac2be919a66e5fac633ba286c9c40b5292ee9c50aa0b-userdata-shm.mount: Deactivated successfully.
Feb 23 20:14:36 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-7211014f2fd87f0509fab060a72601c0bf654f40084ddbac64793245d73d103c-userdata-shm.mount: Deactivated successfully.
Feb 23 20:14:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:14:36.275340 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(7211014f2fd87f0509fab060a72601c0bf654f40084ddbac64793245d73d103c): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 20:14:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:14:36.275420 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(7211014f2fd87f0509fab060a72601c0bf654f40084ddbac64793245d73d103c): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 20:14:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:14:36.275456 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(7211014f2fd87f0509fab060a72601c0bf654f40084ddbac64793245d73d103c): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 20:14:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:14:36.275542 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(7211014f2fd87f0509fab060a72601c0bf654f40084ddbac64793245d73d103c): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77
Feb 23 20:14:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:36.276341509Z" level=info msg="runSandbox: removing pod sandbox from storage: f911535d9b793daa4e96ac2be919a66e5fac633ba286c9c40b5292ee9c50aa0b" id=83105e74-9133-4073-95ff-6617996cf3e6 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:14:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:36.277857116Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=83105e74-9133-4073-95ff-6617996cf3e6 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:14:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:36.277887312Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=83105e74-9133-4073-95ff-6617996cf3e6 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:14:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:14:36.278443 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(f911535d9b793daa4e96ac2be919a66e5fac633ba286c9c40b5292ee9c50aa0b): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 20:14:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:14:36.278625 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(f911535d9b793daa4e96ac2be919a66e5fac633ba286c9c40b5292ee9c50aa0b): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz"
Feb 23 20:14:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:14:36.278757 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(f911535d9b793daa4e96ac2be919a66e5fac633ba286c9c40b5292ee9c50aa0b): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 20:14:36 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:14:36.278937 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(f911535d9b793daa4e96ac2be919a66e5fac633ba286c9c40b5292ee9c50aa0b): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 20:14:47 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:14:47.216634 2199 scope.go:115] "RemoveContainer" containerID="4a114f57874110ff99522b285ac21126d029601047e97619a841459357e0faf2" Feb 23 20:14:47 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:14:47.216753 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 20:14:47 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:14:47.217148 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 20:14:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:47.217146492Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=76f2d440-b67e-455f-a62b-ac5de0f15086 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:14:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:47.217217373Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 20:14:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:47.222895921Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:3d41e98281245716a2c4076157d80959be9e51a1951caaeb237dbf785291ccc9 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/f5faf7d2-a26d-40de-ad27-fbe4be493d0f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:14:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:47.222923319Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:14:49 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:14:49.217307 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 20:14:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:49.217705235Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=80189285-3dd2-42a9-918a-d5dc52a64527 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:14:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:49.217763127Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 20:14:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:49.223212463Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:f88260273435f91e8eaea7971986702f1ff7b3436c8fafbd6d523463e298f793 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/daf941db-2f45-495d-817e-b306123b6712 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:14:49 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:49.223239661Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:14:51 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:14:51.216980 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 20:14:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:51.217434303Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=814c65b9-4ad0-4e63-b97c-fda316a527b9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:14:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:51.217512154Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 20:14:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:51.223049396Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:90eadd8970476b9366448c00446ceae850c22d3934e9f63017e33216a5599047 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/09cdfa97-8e54-492d-b03d-f6b64bc9af98 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:14:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:14:51.223077700Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:14:52 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:14:52.217371 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:14:52 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:14:52.217655 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:14:52 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:14:52.217845 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:14:52 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:14:52.217877 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 20:14:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:14:56.292543 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:14:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:14:56.292780 2199 remote_runtime.go:479] "ExecSync cmd from runtime 
service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:14:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:14:56.293018 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:14:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:14:56.293052 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 20:15:02 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:15:02.216876 2199 scope.go:115] "RemoveContainer" containerID="4a114f57874110ff99522b285ac21126d029601047e97619a841459357e0faf2" Feb 23 20:15:02 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:15:02.217509 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver 
pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 20:15:05 ip-10-0-136-68 systemd[1]: rpm-ostreed.service: Deactivated successfully. Feb 23 20:15:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:15:11.238442306Z" level=info msg="NetworkStart: stopping network for sandbox 49225e90ca6a74ec3aaa636b274f998c6030e7d5c8d23c9f701b063ece27c53b" id=fbdc08e9-859e-4ced-a330-57c444a2ca57 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:15:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:15:11.238750441Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:49225e90ca6a74ec3aaa636b274f998c6030e7d5c8d23c9f701b063ece27c53b UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/993ee647-f019-4ffb-b90b-79a042255ac1 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:15:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:15:11.238785013Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 20:15:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:15:11.238793891Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 20:15:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:15:11.238801681Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:15:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:15:11.245137924Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(7e972a29ad36032fd7b84dd0fc81763b7e25d87c07e194af67e1ca16a300200f): error removing pod 
openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=7c7a19f0-89f5-45bc-891a-a32ed153e973 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:15:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:15:11.245179763Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 7e972a29ad36032fd7b84dd0fc81763b7e25d87c07e194af67e1ca16a300200f" id=7c7a19f0-89f5-45bc-891a-a32ed153e973 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:15:11 ip-10-0-136-68 systemd[1]: run-utsns-e00ecad7\x2da2dd\x2d4ed7\x2d96ff\x2dcb84760b1d6f.mount: Deactivated successfully. Feb 23 20:15:11 ip-10-0-136-68 systemd[1]: run-ipcns-e00ecad7\x2da2dd\x2d4ed7\x2d96ff\x2dcb84760b1d6f.mount: Deactivated successfully. Feb 23 20:15:11 ip-10-0-136-68 systemd[1]: run-netns-e00ecad7\x2da2dd\x2d4ed7\x2d96ff\x2dcb84760b1d6f.mount: Deactivated successfully. 
Feb 23 20:15:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:15:11.265324303Z" level=info msg="runSandbox: deleting pod ID 7e972a29ad36032fd7b84dd0fc81763b7e25d87c07e194af67e1ca16a300200f from idIndex" id=7c7a19f0-89f5-45bc-891a-a32ed153e973 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:15:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:15:11.265357721Z" level=info msg="runSandbox: removing pod sandbox 7e972a29ad36032fd7b84dd0fc81763b7e25d87c07e194af67e1ca16a300200f" id=7c7a19f0-89f5-45bc-891a-a32ed153e973 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:15:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:15:11.265385449Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 7e972a29ad36032fd7b84dd0fc81763b7e25d87c07e194af67e1ca16a300200f" id=7c7a19f0-89f5-45bc-891a-a32ed153e973 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:15:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:15:11.265401132Z" level=info msg="runSandbox: unmounting shmPath for sandbox 7e972a29ad36032fd7b84dd0fc81763b7e25d87c07e194af67e1ca16a300200f" id=7c7a19f0-89f5-45bc-891a-a32ed153e973 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:15:11 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-7e972a29ad36032fd7b84dd0fc81763b7e25d87c07e194af67e1ca16a300200f-userdata-shm.mount: Deactivated successfully. 
Feb 23 20:15:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:15:11.271331177Z" level=info msg="runSandbox: removing pod sandbox from storage: 7e972a29ad36032fd7b84dd0fc81763b7e25d87c07e194af67e1ca16a300200f" id=7c7a19f0-89f5-45bc-891a-a32ed153e973 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:15:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:15:11.272916902Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=7c7a19f0-89f5-45bc-891a-a32ed153e973 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:15:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:15:11.272950116Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=7c7a19f0-89f5-45bc-891a-a32ed153e973 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:15:11 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:15:11.273162 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(7e972a29ad36032fd7b84dd0fc81763b7e25d87c07e194af67e1ca16a300200f): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 20:15:11 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:15:11.273391 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(7e972a29ad36032fd7b84dd0fc81763b7e25d87c07e194af67e1ca16a300200f): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 20:15:11 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:15:11.273431 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(7e972a29ad36032fd7b84dd0fc81763b7e25d87c07e194af67e1ca16a300200f): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 20:15:11 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:15:11.273522 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(7e972a29ad36032fd7b84dd0fc81763b7e25d87c07e194af67e1ca16a300200f): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1 Feb 23 20:15:15 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:15:15.217085 2199 scope.go:115] "RemoveContainer" containerID="4a114f57874110ff99522b285ac21126d029601047e97619a841459357e0faf2" Feb 23 20:15:15 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:15:15.217514 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 20:15:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:15:20.263934654Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3" id=c9f5b4d3-c8fd-412e-8b56-9266c04d822f name=/runtime.v1.ImageService/ImageStatus Feb 23 20:15:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:15:20.264116200Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:51cb7555d0a960fc0daae74a5b5c47c19fd1ae7bd31e2616ebc88f696f511e4d,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3],Size_:351828271,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=c9f5b4d3-c8fd-412e-8b56-9266c04d822f name=/runtime.v1.ImageService/ImageStatus Feb 23 20:15:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:15:24.216518 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr" Feb 23 20:15:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:15:24.216997760Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=4c70f272-bac4-4bad-86b8-515c0a81da70 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:15:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:15:24.217067704Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 20:15:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:15:24.224303891Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:7675b779d99337ca3176cf39c601289c3bbea90eea30f60a43f118ee2d7dfcd9 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/9bad15c4-8fe1-43f9-921e-cc94b1bf4801 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:15:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:15:24.224343326Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:15:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:15:26.292033 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:15:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:15:26.292360 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:15:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:15:26.292576 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:15:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:15:26.292603 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 20:15:27 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:15:27.217434 2199 scope.go:115] "RemoveContainer" containerID="4a114f57874110ff99522b285ac21126d029601047e97619a841459357e0faf2" Feb 23 20:15:27 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:15:27.217840 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" 
podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 20:15:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:15:32.235408555Z" level=info msg="NetworkStart: stopping network for sandbox 3d41e98281245716a2c4076157d80959be9e51a1951caaeb237dbf785291ccc9" id=76f2d440-b67e-455f-a62b-ac5de0f15086 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:15:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:15:32.235524744Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:3d41e98281245716a2c4076157d80959be9e51a1951caaeb237dbf785291ccc9 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/f5faf7d2-a26d-40de-ad27-fbe4be493d0f Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:15:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:15:32.235551714Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 20:15:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:15:32.235558512Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 20:15:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:15:32.235565072Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:15:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:15:34.235419082Z" level=info msg="NetworkStart: stopping network for sandbox f88260273435f91e8eaea7971986702f1ff7b3436c8fafbd6d523463e298f793" id=80189285-3dd2-42a9-918a-d5dc52a64527 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:15:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:15:34.235517967Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:f88260273435f91e8eaea7971986702f1ff7b3436c8fafbd6d523463e298f793 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/daf941db-2f45-495d-817e-b306123b6712 Networks:[] 
RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:15:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:15:34.235552237Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 20:15:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:15:34.235560111Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 20:15:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:15:34.235566548Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:15:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:15:36.235117014Z" level=info msg="NetworkStart: stopping network for sandbox 90eadd8970476b9366448c00446ceae850c22d3934e9f63017e33216a5599047" id=814c65b9-4ad0-4e63-b97c-fda316a527b9 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:15:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:15:36.235234538Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:90eadd8970476b9366448c00446ceae850c22d3934e9f63017e33216a5599047 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/09cdfa97-8e54-492d-b03d-f6b64bc9af98 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:15:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:15:36.235287589Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 20:15:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:15:36.235297400Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 20:15:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:15:36.235304629Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 
20:15:40 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:15:40.216649 2199 scope.go:115] "RemoveContainer" containerID="4a114f57874110ff99522b285ac21126d029601047e97619a841459357e0faf2" Feb 23 20:15:40 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:15:40.217187 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 20:15:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:15:53.216929 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:15:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:15:53.217701 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:15:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:15:53.218007 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:15:53 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:15:53.218050 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 20:15:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:15:54.216979 2199 scope.go:115] "RemoveContainer" containerID="4a114f57874110ff99522b285ac21126d029601047e97619a841459357e0faf2" Feb 23 20:15:54 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:15:54.217600 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 20:15:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:15:56.249436550Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(49225e90ca6a74ec3aaa636b274f998c6030e7d5c8d23c9f701b063ece27c53b): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: 
[openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=fbdc08e9-859e-4ced-a330-57c444a2ca57 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:15:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:15:56.249485620Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 49225e90ca6a74ec3aaa636b274f998c6030e7d5c8d23c9f701b063ece27c53b" id=fbdc08e9-859e-4ced-a330-57c444a2ca57 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:15:56 ip-10-0-136-68 systemd[1]: run-utsns-993ee647\x2df019\x2d4ffb\x2db90b\x2d79a042255ac1.mount: Deactivated successfully. Feb 23 20:15:56 ip-10-0-136-68 systemd[1]: run-ipcns-993ee647\x2df019\x2d4ffb\x2db90b\x2d79a042255ac1.mount: Deactivated successfully. Feb 23 20:15:56 ip-10-0-136-68 systemd[1]: run-netns-993ee647\x2df019\x2d4ffb\x2db90b\x2d79a042255ac1.mount: Deactivated successfully. Feb 23 20:15:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:15:56.272332610Z" level=info msg="runSandbox: deleting pod ID 49225e90ca6a74ec3aaa636b274f998c6030e7d5c8d23c9f701b063ece27c53b from idIndex" id=fbdc08e9-859e-4ced-a330-57c444a2ca57 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:15:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:15:56.272369605Z" level=info msg="runSandbox: removing pod sandbox 49225e90ca6a74ec3aaa636b274f998c6030e7d5c8d23c9f701b063ece27c53b" id=fbdc08e9-859e-4ced-a330-57c444a2ca57 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:15:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:15:56.272399908Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 49225e90ca6a74ec3aaa636b274f998c6030e7d5c8d23c9f701b063ece27c53b" id=fbdc08e9-859e-4ced-a330-57c444a2ca57 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:15:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:15:56.272413868Z" level=info msg="runSandbox: unmounting shmPath 
for sandbox 49225e90ca6a74ec3aaa636b274f998c6030e7d5c8d23c9f701b063ece27c53b" id=fbdc08e9-859e-4ced-a330-57c444a2ca57 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:15:56 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-49225e90ca6a74ec3aaa636b274f998c6030e7d5c8d23c9f701b063ece27c53b-userdata-shm.mount: Deactivated successfully. Feb 23 20:15:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:15:56.276331110Z" level=info msg="runSandbox: removing pod sandbox from storage: 49225e90ca6a74ec3aaa636b274f998c6030e7d5c8d23c9f701b063ece27c53b" id=fbdc08e9-859e-4ced-a330-57c444a2ca57 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:15:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:15:56.277900298Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=fbdc08e9-859e-4ced-a330-57c444a2ca57 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:15:56 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:15:56.277933842Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=fbdc08e9-859e-4ced-a330-57c444a2ca57 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:15:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:15:56.278152 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(49225e90ca6a74ec3aaa636b274f998c6030e7d5c8d23c9f701b063ece27c53b): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Feb 23 20:15:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:15:56.278207 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(49225e90ca6a74ec3aaa636b274f998c6030e7d5c8d23c9f701b063ece27c53b): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 20:15:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:15:56.278231 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(49225e90ca6a74ec3aaa636b274f998c6030e7d5c8d23c9f701b063ece27c53b): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 20:15:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:15:56.278354 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(49225e90ca6a74ec3aaa636b274f998c6030e7d5c8d23c9f701b063ece27c53b): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 20:15:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:15:56.291703 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:15:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:15:56.291993 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:15:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:15:56.292211 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:15:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:15:56.292265 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 20:16:05 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:16:05.217279 2199 scope.go:115] "RemoveContainer" containerID="4a114f57874110ff99522b285ac21126d029601047e97619a841459357e0faf2" Feb 23 20:16:05 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:16:05.217677 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 20:16:09 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:16:09.216899 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 20:16:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:09.217331518Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=e51da56d-9139-4a2c-a1ac-a2ed931061e7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:16:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:09.217389071Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 20:16:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:09.222823395Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:8d4b1236f81dbbd20c106bf14490c06e092feb5be7feafd8acec36e33187ade0 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/4d20cb9d-668b-485f-9860-8ff444597161 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:16:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:09.222850775Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:16:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:09.238286756Z" level=info msg="NetworkStart: stopping network for sandbox 7675b779d99337ca3176cf39c601289c3bbea90eea30f60a43f118ee2d7dfcd9" id=4c70f272-bac4-4bad-86b8-515c0a81da70 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:16:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:09.238378012Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:7675b779d99337ca3176cf39c601289c3bbea90eea30f60a43f118ee2d7dfcd9 UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/9bad15c4-8fe1-43f9-921e-cc94b1bf4801 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:16:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 
20:16:09.238407650Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 20:16:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:09.238414820Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 20:16:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:09.238421521Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:16:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:17.244529934Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(3d41e98281245716a2c4076157d80959be9e51a1951caaeb237dbf785291ccc9): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=76f2d440-b67e-455f-a62b-ac5de0f15086 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:16:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:17.244802979Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 3d41e98281245716a2c4076157d80959be9e51a1951caaeb237dbf785291ccc9" id=76f2d440-b67e-455f-a62b-ac5de0f15086 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:16:17 ip-10-0-136-68 systemd[1]: run-utsns-f5faf7d2\x2da26d\x2d40de\x2dad27\x2dfbe4be493d0f.mount: Deactivated successfully. Feb 23 20:16:17 ip-10-0-136-68 systemd[1]: run-ipcns-f5faf7d2\x2da26d\x2d40de\x2dad27\x2dfbe4be493d0f.mount: Deactivated successfully. Feb 23 20:16:17 ip-10-0-136-68 systemd[1]: run-netns-f5faf7d2\x2da26d\x2d40de\x2dad27\x2dfbe4be493d0f.mount: Deactivated successfully. 
Feb 23 20:16:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:17.263323264Z" level=info msg="runSandbox: deleting pod ID 3d41e98281245716a2c4076157d80959be9e51a1951caaeb237dbf785291ccc9 from idIndex" id=76f2d440-b67e-455f-a62b-ac5de0f15086 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:16:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:17.263357798Z" level=info msg="runSandbox: removing pod sandbox 3d41e98281245716a2c4076157d80959be9e51a1951caaeb237dbf785291ccc9" id=76f2d440-b67e-455f-a62b-ac5de0f15086 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:16:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:17.263388387Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 3d41e98281245716a2c4076157d80959be9e51a1951caaeb237dbf785291ccc9" id=76f2d440-b67e-455f-a62b-ac5de0f15086 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:16:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:17.263399996Z" level=info msg="runSandbox: unmounting shmPath for sandbox 3d41e98281245716a2c4076157d80959be9e51a1951caaeb237dbf785291ccc9" id=76f2d440-b67e-455f-a62b-ac5de0f15086 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:16:17 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-3d41e98281245716a2c4076157d80959be9e51a1951caaeb237dbf785291ccc9-userdata-shm.mount: Deactivated successfully. 
Feb 23 20:16:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:17.269310386Z" level=info msg="runSandbox: removing pod sandbox from storage: 3d41e98281245716a2c4076157d80959be9e51a1951caaeb237dbf785291ccc9" id=76f2d440-b67e-455f-a62b-ac5de0f15086 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:16:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:17.270795902Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=76f2d440-b67e-455f-a62b-ac5de0f15086 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:16:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:17.270826436Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=76f2d440-b67e-455f-a62b-ac5de0f15086 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:16:17 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:16:17.271044 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(3d41e98281245716a2c4076157d80959be9e51a1951caaeb237dbf785291ccc9): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 20:16:17 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:16:17.271103 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(3d41e98281245716a2c4076157d80959be9e51a1951caaeb237dbf785291ccc9): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 20:16:17 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:16:17.271127 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(3d41e98281245716a2c4076157d80959be9e51a1951caaeb237dbf785291ccc9): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 20:16:17 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:16:17.271192 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(3d41e98281245716a2c4076157d80959be9e51a1951caaeb237dbf785291ccc9): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 20:16:19 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:16:19.216799 2199 scope.go:115] "RemoveContainer" containerID="4a114f57874110ff99522b285ac21126d029601047e97619a841459357e0faf2" Feb 23 20:16:19 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:16:19.217160 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 20:16:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:19.244725778Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(f88260273435f91e8eaea7971986702f1ff7b3436c8fafbd6d523463e298f793): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=80189285-3dd2-42a9-918a-d5dc52a64527 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:16:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:19.244771954Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f88260273435f91e8eaea7971986702f1ff7b3436c8fafbd6d523463e298f793" id=80189285-3dd2-42a9-918a-d5dc52a64527 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:16:19 
ip-10-0-136-68 systemd[1]: run-utsns-daf941db\x2d2f45\x2d495d\x2d817e\x2db306123b6712.mount: Deactivated successfully. Feb 23 20:16:19 ip-10-0-136-68 systemd[1]: run-ipcns-daf941db\x2d2f45\x2d495d\x2d817e\x2db306123b6712.mount: Deactivated successfully. Feb 23 20:16:19 ip-10-0-136-68 systemd[1]: run-netns-daf941db\x2d2f45\x2d495d\x2d817e\x2db306123b6712.mount: Deactivated successfully. Feb 23 20:16:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:19.279346052Z" level=info msg="runSandbox: deleting pod ID f88260273435f91e8eaea7971986702f1ff7b3436c8fafbd6d523463e298f793 from idIndex" id=80189285-3dd2-42a9-918a-d5dc52a64527 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:16:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:19.279385898Z" level=info msg="runSandbox: removing pod sandbox f88260273435f91e8eaea7971986702f1ff7b3436c8fafbd6d523463e298f793" id=80189285-3dd2-42a9-918a-d5dc52a64527 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:16:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:19.279430663Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f88260273435f91e8eaea7971986702f1ff7b3436c8fafbd6d523463e298f793" id=80189285-3dd2-42a9-918a-d5dc52a64527 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:16:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:19.279444387Z" level=info msg="runSandbox: unmounting shmPath for sandbox f88260273435f91e8eaea7971986702f1ff7b3436c8fafbd6d523463e298f793" id=80189285-3dd2-42a9-918a-d5dc52a64527 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:16:19 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-f88260273435f91e8eaea7971986702f1ff7b3436c8fafbd6d523463e298f793-userdata-shm.mount: Deactivated successfully. 
Feb 23 20:16:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:19.287316230Z" level=info msg="runSandbox: removing pod sandbox from storage: f88260273435f91e8eaea7971986702f1ff7b3436c8fafbd6d523463e298f793" id=80189285-3dd2-42a9-918a-d5dc52a64527 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:16:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:19.288878400Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=80189285-3dd2-42a9-918a-d5dc52a64527 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:16:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:19.288910661Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=80189285-3dd2-42a9-918a-d5dc52a64527 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:16:19 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:16:19.289152 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(f88260273435f91e8eaea7971986702f1ff7b3436c8fafbd6d523463e298f793): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 20:16:19 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:16:19.289211 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(f88260273435f91e8eaea7971986702f1ff7b3436c8fafbd6d523463e298f793): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 20:16:19 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:16:19.289269 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(f88260273435f91e8eaea7971986702f1ff7b3436c8fafbd6d523463e298f793): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 20:16:19 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:16:19.289360 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(f88260273435f91e8eaea7971986702f1ff7b3436c8fafbd6d523463e298f793): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 20:16:19 ip-10-0-136-68 sudo[22514]: pam_unix(sudo-i:session): session closed for user root Feb 23 20:16:19 ip-10-0-136-68 sshd[22471]: pam_unix(sshd:session): session closed for user core Feb 23 20:16:19 ip-10-0-136-68 systemd-logind[985]: Session 5 logged out. Waiting for processes to exit. Feb 23 20:16:19 ip-10-0-136-68 systemd[1]: session-5.scope: Deactivated successfully. Feb 23 20:16:19 ip-10-0-136-68 systemd-logind[985]: Removed session 5. 
Feb 23 20:16:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:21.245686621Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(90eadd8970476b9366448c00446ceae850c22d3934e9f63017e33216a5599047): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=814c65b9-4ad0-4e63-b97c-fda316a527b9 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:16:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:21.245733251Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 90eadd8970476b9366448c00446ceae850c22d3934e9f63017e33216a5599047" id=814c65b9-4ad0-4e63-b97c-fda316a527b9 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:16:21 ip-10-0-136-68 systemd[1]: run-utsns-09cdfa97\x2d8e54\x2d492d\x2db03d\x2df6b64bc9af98.mount: Deactivated successfully.
Feb 23 20:16:21 ip-10-0-136-68 systemd[1]: run-ipcns-09cdfa97\x2d8e54\x2d492d\x2db03d\x2df6b64bc9af98.mount: Deactivated successfully.
Feb 23 20:16:21 ip-10-0-136-68 systemd[1]: run-netns-09cdfa97\x2d8e54\x2d492d\x2db03d\x2df6b64bc9af98.mount: Deactivated successfully.
Feb 23 20:16:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:21.264328675Z" level=info msg="runSandbox: deleting pod ID 90eadd8970476b9366448c00446ceae850c22d3934e9f63017e33216a5599047 from idIndex" id=814c65b9-4ad0-4e63-b97c-fda316a527b9 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:16:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:21.264405994Z" level=info msg="runSandbox: removing pod sandbox 90eadd8970476b9366448c00446ceae850c22d3934e9f63017e33216a5599047" id=814c65b9-4ad0-4e63-b97c-fda316a527b9 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:16:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:21.264443473Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 90eadd8970476b9366448c00446ceae850c22d3934e9f63017e33216a5599047" id=814c65b9-4ad0-4e63-b97c-fda316a527b9 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:16:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:21.264465869Z" level=info msg="runSandbox: unmounting shmPath for sandbox 90eadd8970476b9366448c00446ceae850c22d3934e9f63017e33216a5599047" id=814c65b9-4ad0-4e63-b97c-fda316a527b9 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:16:21 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-90eadd8970476b9366448c00446ceae850c22d3934e9f63017e33216a5599047-userdata-shm.mount: Deactivated successfully.
Feb 23 20:16:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:21.270303532Z" level=info msg="runSandbox: removing pod sandbox from storage: 90eadd8970476b9366448c00446ceae850c22d3934e9f63017e33216a5599047" id=814c65b9-4ad0-4e63-b97c-fda316a527b9 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:16:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:21.272425375Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=814c65b9-4ad0-4e63-b97c-fda316a527b9 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:16:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:21.272462001Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=814c65b9-4ad0-4e63-b97c-fda316a527b9 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:16:21 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:16:21.275160 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(90eadd8970476b9366448c00446ceae850c22d3934e9f63017e33216a5599047): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition"
Feb 23 20:16:21 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:16:21.275429 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(90eadd8970476b9366448c00446ceae850c22d3934e9f63017e33216a5599047): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz"
Feb 23 20:16:21 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:16:21.275463 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(90eadd8970476b9366448c00446ceae850c22d3934e9f63017e33216a5599047): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz"
Feb 23 20:16:21 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:16:21.275539 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(90eadd8970476b9366448c00446ceae850c22d3934e9f63017e33216a5599047): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7
Feb 23 20:16:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:16:26.291855 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 20:16:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:16:26.292108 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 20:16:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:16:26.292331 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 20:16:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:16:26.292359 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 20:16:30 ip-10-0-136-68 systemd[1]: Stopping User Manager for UID 1000...
Feb 23 20:16:30 ip-10-0-136-68 systemd[22495]: Activating special unit Exit the Session...
Feb 23 20:16:30 ip-10-0-136-68 systemd[22495]: Stopped target Main User Target.
Feb 23 20:16:30 ip-10-0-136-68 systemd[22495]: Stopped target Basic System.
Feb 23 20:16:30 ip-10-0-136-68 systemd[22495]: Stopped target Paths.
Feb 23 20:16:30 ip-10-0-136-68 systemd[22495]: Stopped target Sockets.
Feb 23 20:16:30 ip-10-0-136-68 systemd[22495]: Stopped target Timers.
Feb 23 20:16:30 ip-10-0-136-68 systemd[22495]: Stopped Daily Cleanup of User's Temporary Directories.
Feb 23 20:16:30 ip-10-0-136-68 systemd[22495]: Closed D-Bus User Message Bus Socket.
Feb 23 20:16:30 ip-10-0-136-68 systemd[22495]: Stopped Create User's Volatile Files and Directories.
Feb 23 20:16:30 ip-10-0-136-68 systemd[22495]: Removed slice User Application Slice.
Feb 23 20:16:30 ip-10-0-136-68 systemd[22495]: Reached target Shutdown.
Feb 23 20:16:30 ip-10-0-136-68 systemd[22495]: Finished Exit the Session.
Feb 23 20:16:30 ip-10-0-136-68 systemd[22495]: Reached target Exit the Session.
Feb 23 20:16:30 ip-10-0-136-68 systemd[1]: user@1000.service: Deactivated successfully.
Feb 23 20:16:30 ip-10-0-136-68 systemd[1]: Stopped User Manager for UID 1000.
Feb 23 20:16:30 ip-10-0-136-68 systemd[1]: Stopping User Runtime Directory /run/user/1000...
Feb 23 20:16:30 ip-10-0-136-68 systemd[1]: run-user-1000.mount: Deactivated successfully.
Feb 23 20:16:30 ip-10-0-136-68 systemd[1]: user-runtime-dir@1000.service: Deactivated successfully.
Feb 23 20:16:30 ip-10-0-136-68 systemd[1]: Stopped User Runtime Directory /run/user/1000.
Feb 23 20:16:30 ip-10-0-136-68 systemd[1]: Removed slice User Slice of UID 1000.
Feb 23 20:16:30 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:16:30.216954 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 20:16:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:30.217390636Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=923db51b-4c59-4e52-8334-d0a43eba8e14 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:16:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:30.217466807Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 20:16:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:30.223650482Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:85418e809b4470d0eb26d50987d7f0df43a64425c0c052272d01faa5638011d8 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/f444b1c8-c61b-4911-8d4d-cfd45793c520 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 20:16:30 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:30.223676944Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 20:16:32 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:16:32.217295 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-657v4"
Feb 23 20:16:32 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:16:32.217327 2199 scope.go:115] "RemoveContainer" containerID="4a114f57874110ff99522b285ac21126d029601047e97619a841459357e0faf2"
Feb 23 20:16:32 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:16:32.217818 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 20:16:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:32.217712741Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=40fb5712-7907-4a59-b64d-01d896b8d4cd name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:16:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:32.217776972Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 20:16:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:32.223530643Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:a59be6d528a72c7161c60db6eb4fc9d1bc3b4617571e55e44f0b9fde0b1e9d6f UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/5fa5900b-d4eb-4ab3-a882-93c4fc8a3c0b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 20:16:32 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:32.223564963Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 20:16:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:16:34.216729 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz"
Feb 23 20:16:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:34.217156919Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=fc2986ef-c608-4749-9d3f-27329cf41b78 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:16:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:34.217237785Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 20:16:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:34.222837639Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:eda88298f98ca299917d676347f7d79065e4db2c64d399d5e1ec9710b1c8cc57 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/125c6d8c-816f-44bf-af48-911e1a2df336 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 20:16:34 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:34.222863840Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 20:16:43 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:16:43.216763 2199 scope.go:115] "RemoveContainer" containerID="4a114f57874110ff99522b285ac21126d029601047e97619a841459357e0faf2"
Feb 23 20:16:43 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:16:43.217184 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 20:16:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:16:54.216989 2199 scope.go:115] "RemoveContainer" containerID="4a114f57874110ff99522b285ac21126d029601047e97619a841459357e0faf2"
Feb 23 20:16:54 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:16:54.217599 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 20:16:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:54.235592628Z" level=info msg="NetworkStart: stopping network for sandbox 8d4b1236f81dbbd20c106bf14490c06e092feb5be7feafd8acec36e33187ade0" id=e51da56d-9139-4a2c-a1ac-a2ed931061e7 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:16:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:54.235944741Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:8d4b1236f81dbbd20c106bf14490c06e092feb5be7feafd8acec36e33187ade0 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/4d20cb9d-668b-485f-9860-8ff444597161 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 20:16:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:54.235976270Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 20:16:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:54.235983223Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 20:16:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:54.235989706Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 20:16:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:54.247686137Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(7675b779d99337ca3176cf39c601289c3bbea90eea30f60a43f118ee2d7dfcd9): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=4c70f272-bac4-4bad-86b8-515c0a81da70 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:16:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:54.247718467Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 7675b779d99337ca3176cf39c601289c3bbea90eea30f60a43f118ee2d7dfcd9" id=4c70f272-bac4-4bad-86b8-515c0a81da70 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:16:54 ip-10-0-136-68 systemd[1]: run-utsns-9bad15c4\x2d8fe1\x2d43f9\x2d921e\x2dcc94b1bf4801.mount: Deactivated successfully.
Feb 23 20:16:54 ip-10-0-136-68 systemd[1]: run-ipcns-9bad15c4\x2d8fe1\x2d43f9\x2d921e\x2dcc94b1bf4801.mount: Deactivated successfully.
Feb 23 20:16:54 ip-10-0-136-68 systemd[1]: run-netns-9bad15c4\x2d8fe1\x2d43f9\x2d921e\x2dcc94b1bf4801.mount: Deactivated successfully.
Feb 23 20:16:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:54.285318587Z" level=info msg="runSandbox: deleting pod ID 7675b779d99337ca3176cf39c601289c3bbea90eea30f60a43f118ee2d7dfcd9 from idIndex" id=4c70f272-bac4-4bad-86b8-515c0a81da70 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:16:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:54.285350502Z" level=info msg="runSandbox: removing pod sandbox 7675b779d99337ca3176cf39c601289c3bbea90eea30f60a43f118ee2d7dfcd9" id=4c70f272-bac4-4bad-86b8-515c0a81da70 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:16:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:54.285383401Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 7675b779d99337ca3176cf39c601289c3bbea90eea30f60a43f118ee2d7dfcd9" id=4c70f272-bac4-4bad-86b8-515c0a81da70 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:16:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:54.285416220Z" level=info msg="runSandbox: unmounting shmPath for sandbox 7675b779d99337ca3176cf39c601289c3bbea90eea30f60a43f118ee2d7dfcd9" id=4c70f272-bac4-4bad-86b8-515c0a81da70 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:16:54 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-7675b779d99337ca3176cf39c601289c3bbea90eea30f60a43f118ee2d7dfcd9-userdata-shm.mount: Deactivated successfully.
Feb 23 20:16:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:54.291310894Z" level=info msg="runSandbox: removing pod sandbox from storage: 7675b779d99337ca3176cf39c601289c3bbea90eea30f60a43f118ee2d7dfcd9" id=4c70f272-bac4-4bad-86b8-515c0a81da70 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:16:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:54.292866385Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=4c70f272-bac4-4bad-86b8-515c0a81da70 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:16:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:16:54.292895192Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=4c70f272-bac4-4bad-86b8-515c0a81da70 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:16:54 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:16:54.293082 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(7675b779d99337ca3176cf39c601289c3bbea90eea30f60a43f118ee2d7dfcd9): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition"
Feb 23 20:16:54 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:16:54.293126 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(7675b779d99337ca3176cf39c601289c3bbea90eea30f60a43f118ee2d7dfcd9): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 20:16:54 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:16:54.293152 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(7675b779d99337ca3176cf39c601289c3bbea90eea30f60a43f118ee2d7dfcd9): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 20:16:54 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:16:54.293201 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(7675b779d99337ca3176cf39c601289c3bbea90eea30f60a43f118ee2d7dfcd9): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1
Feb 23 20:16:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:16:56.291883 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 20:16:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:16:56.292109 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 20:16:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:16:56.292380 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 20:16:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:16:56.292406 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 20:17:06 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:17:06.217537 2199 scope.go:115] "RemoveContainer" containerID="4a114f57874110ff99522b285ac21126d029601047e97619a841459357e0faf2"
Feb 23 20:17:06 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:17:06.218138 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 20:17:09 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:17:09.216830 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 20:17:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:17:09.217233534Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=dc863520-e6a2-4370-b6f5-a49640b95e0c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:17:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:17:09.217333050Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 20:17:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:17:09.222575352Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:8b231c14fdc48639a47c7d652b8b743630ba81a4278df66b1e5716bd9dfced5e UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/396963b0-9fec-4f29-9bf2-e9cbc30a2c3a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 20:17:09 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:17:09.222611331Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 20:17:14 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:17:14.216885 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 20:17:14 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:17:14.217349 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 20:17:14 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:17:14.217713 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 20:17:14 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:17:14.217747 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 20:17:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:17:15.235417354Z" level=info msg="NetworkStart: stopping network for sandbox 85418e809b4470d0eb26d50987d7f0df43a64425c0c052272d01faa5638011d8" id=923db51b-4c59-4e52-8334-d0a43eba8e14 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:17:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:17:15.235571694Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:85418e809b4470d0eb26d50987d7f0df43a64425c0c052272d01faa5638011d8 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/f444b1c8-c61b-4911-8d4d-cfd45793c520 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 20:17:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:17:15.235602771Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 20:17:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:17:15.235611853Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 20:17:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:17:15.235618709Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 20:17:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:17:17.235835463Z" level=info msg="NetworkStart: stopping network for sandbox a59be6d528a72c7161c60db6eb4fc9d1bc3b4617571e55e44f0b9fde0b1e9d6f" id=40fb5712-7907-4a59-b64d-01d896b8d4cd name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:17:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:17:17.235974776Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:a59be6d528a72c7161c60db6eb4fc9d1bc3b4617571e55e44f0b9fde0b1e9d6f UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/5fa5900b-d4eb-4ab3-a882-93c4fc8a3c0b Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 20:17:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:17:17.236012881Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 20:17:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:17:17.236026205Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 20:17:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:17:17.236037319Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 20:17:18 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:17:18.217174 2199 scope.go:115] "RemoveContainer" containerID="4a114f57874110ff99522b285ac21126d029601047e97619a841459357e0faf2"
Feb 23 20:17:18 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:17:18.217592 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 20:17:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:17:19.234631011Z" level=info msg="NetworkStart: stopping network for sandbox eda88298f98ca299917d676347f7d79065e4db2c64d399d5e1ec9710b1c8cc57" id=fc2986ef-c608-4749-9d3f-27329cf41b78 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:17:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:17:19.234768275Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:eda88298f98ca299917d676347f7d79065e4db2c64d399d5e1ec9710b1c8cc57 UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/125c6d8c-816f-44bf-af48-911e1a2df336 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 20:17:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:17:19.234808735Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 20:17:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:17:19.234820151Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 20:17:19 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:17:19.234829951Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 20:17:26 ip-10-0-136-68
kubenswrapper[2199]: E0223 20:17:26.292502 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:17:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:17:26.292774 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:17:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:17:26.293003 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:17:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:17:26.293044 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" 
pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 20:17:32 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:17:32.217235 2199 scope.go:115] "RemoveContainer" containerID="4a114f57874110ff99522b285ac21126d029601047e97619a841459357e0faf2" Feb 23 20:17:32 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:17:32.217812 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 20:17:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:17:39.244935954Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(8d4b1236f81dbbd20c106bf14490c06e092feb5be7feafd8acec36e33187ade0): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=e51da56d-9139-4a2c-a1ac-a2ed931061e7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:17:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:17:39.244985699Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 8d4b1236f81dbbd20c106bf14490c06e092feb5be7feafd8acec36e33187ade0" id=e51da56d-9139-4a2c-a1ac-a2ed931061e7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:17:39 ip-10-0-136-68 systemd[1]: run-utsns-4d20cb9d\x2d668b\x2d485f\x2d9860\x2d8ff444597161.mount: 
Deactivated successfully. Feb 23 20:17:39 ip-10-0-136-68 systemd[1]: run-ipcns-4d20cb9d\x2d668b\x2d485f\x2d9860\x2d8ff444597161.mount: Deactivated successfully. Feb 23 20:17:39 ip-10-0-136-68 systemd[1]: run-netns-4d20cb9d\x2d668b\x2d485f\x2d9860\x2d8ff444597161.mount: Deactivated successfully. Feb 23 20:17:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:17:39.280323074Z" level=info msg="runSandbox: deleting pod ID 8d4b1236f81dbbd20c106bf14490c06e092feb5be7feafd8acec36e33187ade0 from idIndex" id=e51da56d-9139-4a2c-a1ac-a2ed931061e7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:17:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:17:39.280365184Z" level=info msg="runSandbox: removing pod sandbox 8d4b1236f81dbbd20c106bf14490c06e092feb5be7feafd8acec36e33187ade0" id=e51da56d-9139-4a2c-a1ac-a2ed931061e7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:17:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:17:39.280412204Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 8d4b1236f81dbbd20c106bf14490c06e092feb5be7feafd8acec36e33187ade0" id=e51da56d-9139-4a2c-a1ac-a2ed931061e7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:17:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:17:39.280432930Z" level=info msg="runSandbox: unmounting shmPath for sandbox 8d4b1236f81dbbd20c106bf14490c06e092feb5be7feafd8acec36e33187ade0" id=e51da56d-9139-4a2c-a1ac-a2ed931061e7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:17:39 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-8d4b1236f81dbbd20c106bf14490c06e092feb5be7feafd8acec36e33187ade0-userdata-shm.mount: Deactivated successfully. 
Feb 23 20:17:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:17:39.286294937Z" level=info msg="runSandbox: removing pod sandbox from storage: 8d4b1236f81dbbd20c106bf14490c06e092feb5be7feafd8acec36e33187ade0" id=e51da56d-9139-4a2c-a1ac-a2ed931061e7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:17:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:17:39.287838369Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=e51da56d-9139-4a2c-a1ac-a2ed931061e7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:17:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:17:39.287867761Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=e51da56d-9139-4a2c-a1ac-a2ed931061e7 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:17:39 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:17:39.288058 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(8d4b1236f81dbbd20c106bf14490c06e092feb5be7feafd8acec36e33187ade0): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 20:17:39 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:17:39.288110 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(8d4b1236f81dbbd20c106bf14490c06e092feb5be7feafd8acec36e33187ade0): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 20:17:39 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:17:39.288136 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(8d4b1236f81dbbd20c106bf14490c06e092feb5be7feafd8acec36e33187ade0): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 20:17:39 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:17:39.288194 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(8d4b1236f81dbbd20c106bf14490c06e092feb5be7feafd8acec36e33187ade0): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 20:17:43 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:17:43.216408 2199 scope.go:115] "RemoveContainer" containerID="4a114f57874110ff99522b285ac21126d029601047e97619a841459357e0faf2" Feb 23 20:17:43 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:17:43.216959 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 20:17:51 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:17:51.216519 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 20:17:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:17:51.216960454Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=8bb4b804-4663-4a87-8ce0-34eb91611d16 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:17:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:17:51.217028860Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 20:17:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:17:51.222625304Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:84f6c813e46ace378c321f7d29375becf0edad793299701d7779c687714835df UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/732d84d9-24f2-40ad-8d5a-e05582b3aad8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:17:51 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:17:51.222655137Z" 
level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:17:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:17:54.234596550Z" level=info msg="NetworkStart: stopping network for sandbox 8b231c14fdc48639a47c7d652b8b743630ba81a4278df66b1e5716bd9dfced5e" id=dc863520-e6a2-4370-b6f5-a49640b95e0c name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:17:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:17:54.234707228Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:8b231c14fdc48639a47c7d652b8b743630ba81a4278df66b1e5716bd9dfced5e UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/396963b0-9fec-4f29-9bf2-e9cbc30a2c3a Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:17:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:17:54.234735046Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 20:17:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:17:54.234743382Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 20:17:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:17:54.234750542Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:17:56 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:17:56.216984 2199 scope.go:115] "RemoveContainer" containerID="4a114f57874110ff99522b285ac21126d029601047e97619a841459357e0faf2" Feb 23 20:17:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:17:56.217630 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver 
pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 Feb 23 20:17:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:17:56.292575 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:17:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:17:56.292814 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:17:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:17:56.293096 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:17:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:17:56.293138 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: 
checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 20:18:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:00.245802320Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(85418e809b4470d0eb26d50987d7f0df43a64425c0c052272d01faa5638011d8): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=923db51b-4c59-4e52-8334-d0a43eba8e14 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:18:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:00.245851057Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 85418e809b4470d0eb26d50987d7f0df43a64425c0c052272d01faa5638011d8" id=923db51b-4c59-4e52-8334-d0a43eba8e14 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:18:00 ip-10-0-136-68 systemd[1]: run-utsns-f444b1c8\x2dc61b\x2d4911\x2d8d4d\x2dcfd45793c520.mount: Deactivated successfully. Feb 23 20:18:00 ip-10-0-136-68 systemd[1]: run-ipcns-f444b1c8\x2dc61b\x2d4911\x2d8d4d\x2dcfd45793c520.mount: Deactivated successfully. Feb 23 20:18:00 ip-10-0-136-68 systemd[1]: run-netns-f444b1c8\x2dc61b\x2d4911\x2d8d4d\x2dcfd45793c520.mount: Deactivated successfully. 
Feb 23 20:18:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:00.278344828Z" level=info msg="runSandbox: deleting pod ID 85418e809b4470d0eb26d50987d7f0df43a64425c0c052272d01faa5638011d8 from idIndex" id=923db51b-4c59-4e52-8334-d0a43eba8e14 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:18:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:00.278390473Z" level=info msg="runSandbox: removing pod sandbox 85418e809b4470d0eb26d50987d7f0df43a64425c0c052272d01faa5638011d8" id=923db51b-4c59-4e52-8334-d0a43eba8e14 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:18:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:00.278433014Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 85418e809b4470d0eb26d50987d7f0df43a64425c0c052272d01faa5638011d8" id=923db51b-4c59-4e52-8334-d0a43eba8e14 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:18:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:00.278446794Z" level=info msg="runSandbox: unmounting shmPath for sandbox 85418e809b4470d0eb26d50987d7f0df43a64425c0c052272d01faa5638011d8" id=923db51b-4c59-4e52-8334-d0a43eba8e14 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:18:00 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-85418e809b4470d0eb26d50987d7f0df43a64425c0c052272d01faa5638011d8-userdata-shm.mount: Deactivated successfully. 
Feb 23 20:18:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:00.291307610Z" level=info msg="runSandbox: removing pod sandbox from storage: 85418e809b4470d0eb26d50987d7f0df43a64425c0c052272d01faa5638011d8" id=923db51b-4c59-4e52-8334-d0a43eba8e14 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:18:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:00.292907927Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=923db51b-4c59-4e52-8334-d0a43eba8e14 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:18:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:00.292946635Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=923db51b-4c59-4e52-8334-d0a43eba8e14 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:18:00 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:18:00.293181 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(85418e809b4470d0eb26d50987d7f0df43a64425c0c052272d01faa5638011d8): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 20:18:00 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:18:00.293265 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(85418e809b4470d0eb26d50987d7f0df43a64425c0c052272d01faa5638011d8): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 20:18:00 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:18:00.293294 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(85418e809b4470d0eb26d50987d7f0df43a64425c0c052272d01faa5638011d8): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 20:18:00 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:18:00.293372 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(85418e809b4470d0eb26d50987d7f0df43a64425c0c052272d01faa5638011d8): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 20:18:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:02.246617648Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(a59be6d528a72c7161c60db6eb4fc9d1bc3b4617571e55e44f0b9fde0b1e9d6f): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=40fb5712-7907-4a59-b64d-01d896b8d4cd name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:18:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:02.246656371Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox a59be6d528a72c7161c60db6eb4fc9d1bc3b4617571e55e44f0b9fde0b1e9d6f" id=40fb5712-7907-4a59-b64d-01d896b8d4cd name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:18:02 ip-10-0-136-68 systemd[1]: run-utsns-5fa5900b\x2dd4eb\x2d4ab3\x2da882\x2d93c4fc8a3c0b.mount: Deactivated successfully. Feb 23 20:18:02 ip-10-0-136-68 systemd[1]: run-ipcns-5fa5900b\x2dd4eb\x2d4ab3\x2da882\x2d93c4fc8a3c0b.mount: Deactivated successfully. Feb 23 20:18:02 ip-10-0-136-68 systemd[1]: run-netns-5fa5900b\x2dd4eb\x2d4ab3\x2da882\x2d93c4fc8a3c0b.mount: Deactivated successfully. 
Feb 23 20:18:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:02.273338565Z" level=info msg="runSandbox: deleting pod ID a59be6d528a72c7161c60db6eb4fc9d1bc3b4617571e55e44f0b9fde0b1e9d6f from idIndex" id=40fb5712-7907-4a59-b64d-01d896b8d4cd name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:18:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:02.273385216Z" level=info msg="runSandbox: removing pod sandbox a59be6d528a72c7161c60db6eb4fc9d1bc3b4617571e55e44f0b9fde0b1e9d6f" id=40fb5712-7907-4a59-b64d-01d896b8d4cd name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:18:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:02.273431866Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox a59be6d528a72c7161c60db6eb4fc9d1bc3b4617571e55e44f0b9fde0b1e9d6f" id=40fb5712-7907-4a59-b64d-01d896b8d4cd name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:18:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:02.273446537Z" level=info msg="runSandbox: unmounting shmPath for sandbox a59be6d528a72c7161c60db6eb4fc9d1bc3b4617571e55e44f0b9fde0b1e9d6f" id=40fb5712-7907-4a59-b64d-01d896b8d4cd name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:18:02 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-a59be6d528a72c7161c60db6eb4fc9d1bc3b4617571e55e44f0b9fde0b1e9d6f-userdata-shm.mount: Deactivated successfully. 
Feb 23 20:18:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:02.281306786Z" level=info msg="runSandbox: removing pod sandbox from storage: a59be6d528a72c7161c60db6eb4fc9d1bc3b4617571e55e44f0b9fde0b1e9d6f" id=40fb5712-7907-4a59-b64d-01d896b8d4cd name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:18:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:02.282899123Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=40fb5712-7907-4a59-b64d-01d896b8d4cd name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:18:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:02.282932023Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=40fb5712-7907-4a59-b64d-01d896b8d4cd name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:18:02 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:18:02.283174 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(a59be6d528a72c7161c60db6eb4fc9d1bc3b4617571e55e44f0b9fde0b1e9d6f): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition"
Feb 23 20:18:02 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:18:02.283234 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(a59be6d528a72c7161c60db6eb4fc9d1bc3b4617571e55e44f0b9fde0b1e9d6f): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4"
Feb 23 20:18:02 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:18:02.283313 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(a59be6d528a72c7161c60db6eb4fc9d1bc3b4617571e55e44f0b9fde0b1e9d6f): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4"
Feb 23 20:18:02 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:18:02.283370 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(a59be6d528a72c7161c60db6eb4fc9d1bc3b4617571e55e44f0b9fde0b1e9d6f): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f
Feb 23 20:18:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:04.243958763Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(eda88298f98ca299917d676347f7d79065e4db2c64d399d5e1ec9710b1c8cc57): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=fc2986ef-c608-4749-9d3f-27329cf41b78 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:18:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:04.244010216Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox eda88298f98ca299917d676347f7d79065e4db2c64d399d5e1ec9710b1c8cc57" id=fc2986ef-c608-4749-9d3f-27329cf41b78 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:18:04 ip-10-0-136-68 systemd[1]: run-utsns-125c6d8c\x2d816f\x2d44bf\x2daf48\x2d911e1a2df336.mount: Deactivated successfully.
Feb 23 20:18:04 ip-10-0-136-68 systemd[1]: run-ipcns-125c6d8c\x2d816f\x2d44bf\x2daf48\x2d911e1a2df336.mount: Deactivated successfully.
Feb 23 20:18:04 ip-10-0-136-68 systemd[1]: run-netns-125c6d8c\x2d816f\x2d44bf\x2daf48\x2d911e1a2df336.mount: Deactivated successfully.
Feb 23 20:18:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:04.272327352Z" level=info msg="runSandbox: deleting pod ID eda88298f98ca299917d676347f7d79065e4db2c64d399d5e1ec9710b1c8cc57 from idIndex" id=fc2986ef-c608-4749-9d3f-27329cf41b78 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:18:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:04.272361643Z" level=info msg="runSandbox: removing pod sandbox eda88298f98ca299917d676347f7d79065e4db2c64d399d5e1ec9710b1c8cc57" id=fc2986ef-c608-4749-9d3f-27329cf41b78 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:18:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:04.272388242Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox eda88298f98ca299917d676347f7d79065e4db2c64d399d5e1ec9710b1c8cc57" id=fc2986ef-c608-4749-9d3f-27329cf41b78 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:18:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:04.272426003Z" level=info msg="runSandbox: unmounting shmPath for sandbox eda88298f98ca299917d676347f7d79065e4db2c64d399d5e1ec9710b1c8cc57" id=fc2986ef-c608-4749-9d3f-27329cf41b78 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:18:04 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-eda88298f98ca299917d676347f7d79065e4db2c64d399d5e1ec9710b1c8cc57-userdata-shm.mount: Deactivated successfully.
Feb 23 20:18:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:04.280298819Z" level=info msg="runSandbox: removing pod sandbox from storage: eda88298f98ca299917d676347f7d79065e4db2c64d399d5e1ec9710b1c8cc57" id=fc2986ef-c608-4749-9d3f-27329cf41b78 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:18:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:04.281848226Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=fc2986ef-c608-4749-9d3f-27329cf41b78 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:18:04 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:04.281878579Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=fc2986ef-c608-4749-9d3f-27329cf41b78 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:18:04 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:18:04.282043 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(eda88298f98ca299917d676347f7d79065e4db2c64d399d5e1ec9710b1c8cc57): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition"
Feb 23 20:18:04 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:18:04.282104 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(eda88298f98ca299917d676347f7d79065e4db2c64d399d5e1ec9710b1c8cc57): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz"
Feb 23 20:18:04 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:18:04.282138 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(eda88298f98ca299917d676347f7d79065e4db2c64d399d5e1ec9710b1c8cc57): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz"
Feb 23 20:18:04 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:18:04.282215 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(eda88298f98ca299917d676347f7d79065e4db2c64d399d5e1ec9710b1c8cc57): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7
Feb 23 20:18:10 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:18:10.216924 2199 scope.go:115] "RemoveContainer" containerID="4a114f57874110ff99522b285ac21126d029601047e97619a841459357e0faf2"
Feb 23 20:18:10 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:18:10.217522 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 20:18:15 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:18:15.217011 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j"
Feb 23 20:18:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:15.217409433Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=39b3f159-7c4b-455d-a40d-c39de812ac93 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:18:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:15.217474789Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 20:18:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:15.222852850Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:1672406e8a87e0c627a924f9dfa6482319e05598c1f596a2d8845e3bfdf95cd5 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/2478976d-5709-48c1-8272-6134d6436eba Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 20:18:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:15.222889286Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 20:18:17 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:18:17.217411 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-657v4"
Feb 23 20:18:17 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:18:17.217435 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz"
Feb 23 20:18:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:17.217819980Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=b4f0f65a-efec-4d2a-a11b-e4754652433b name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:18:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:17.217880280Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 20:18:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:17.217826567Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=8164064e-b8f1-4d40-a4e1-e976053f7936 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:18:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:17.217931988Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 20:18:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:17.225154594Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:75056995b9853ee387eabad878a2ad693df1ef4cbbaef3621cc370a22d1bbc58 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/bd4ee13c-830d-4a38-b934-07719536bad5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 20:18:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:17.225292072Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 20:18:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:17.225186270Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:dd6737ee05af643d4f048e18dfd4d1522abb9cdb3f4c2d8576d20f08f0a219ec UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/b10fe227-79c9-4ec0-a1cb-9572d2e6d41e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 20:18:17 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:17.225397652Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 20:18:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:18:24.217289 2199 scope.go:115] "RemoveContainer" containerID="4a114f57874110ff99522b285ac21126d029601047e97619a841459357e0faf2"
Feb 23 20:18:24 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:18:24.217776 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 20:18:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:18:26.292066 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 20:18:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:18:26.292373 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 20:18:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:18:26.292573 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 20:18:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:18:26.292604 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 20:18:27 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:18:27.216938 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 20:18:27 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:18:27.217176 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 20:18:27 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:18:27.217418 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 20:18:27 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:18:27.217465 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 20:18:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:36.236395578Z" level=info msg="NetworkStart: stopping network for sandbox 84f6c813e46ace378c321f7d29375becf0edad793299701d7779c687714835df" id=8bb4b804-4663-4a87-8ce0-34eb91611d16 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:18:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:36.236501905Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:84f6c813e46ace378c321f7d29375becf0edad793299701d7779c687714835df UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/732d84d9-24f2-40ad-8d5a-e05582b3aad8 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 20:18:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:36.236528142Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 20:18:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:36.236534929Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 20:18:36 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:36.236541041Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 20:18:38 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:18:38.216815 2199 scope.go:115] "RemoveContainer" containerID="4a114f57874110ff99522b285ac21126d029601047e97619a841459357e0faf2"
Feb 23 20:18:38 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:18:38.217198 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 20:18:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:39.243822186Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(8b231c14fdc48639a47c7d652b8b743630ba81a4278df66b1e5716bd9dfced5e): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=dc863520-e6a2-4370-b6f5-a49640b95e0c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:18:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:39.243876626Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 8b231c14fdc48639a47c7d652b8b743630ba81a4278df66b1e5716bd9dfced5e" id=dc863520-e6a2-4370-b6f5-a49640b95e0c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:18:39 ip-10-0-136-68 systemd[1]: run-utsns-396963b0\x2d9fec\x2d4f29\x2d9bf2\x2de9cbc30a2c3a.mount: Deactivated successfully.
Feb 23 20:18:39 ip-10-0-136-68 systemd[1]: run-ipcns-396963b0\x2d9fec\x2d4f29\x2d9bf2\x2de9cbc30a2c3a.mount: Deactivated successfully.
Feb 23 20:18:39 ip-10-0-136-68 systemd[1]: run-netns-396963b0\x2d9fec\x2d4f29\x2d9bf2\x2de9cbc30a2c3a.mount: Deactivated successfully.
Feb 23 20:18:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:39.261319027Z" level=info msg="runSandbox: deleting pod ID 8b231c14fdc48639a47c7d652b8b743630ba81a4278df66b1e5716bd9dfced5e from idIndex" id=dc863520-e6a2-4370-b6f5-a49640b95e0c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:18:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:39.261357066Z" level=info msg="runSandbox: removing pod sandbox 8b231c14fdc48639a47c7d652b8b743630ba81a4278df66b1e5716bd9dfced5e" id=dc863520-e6a2-4370-b6f5-a49640b95e0c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:18:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:39.261391434Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 8b231c14fdc48639a47c7d652b8b743630ba81a4278df66b1e5716bd9dfced5e" id=dc863520-e6a2-4370-b6f5-a49640b95e0c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:18:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:39.261417457Z" level=info msg="runSandbox: unmounting shmPath for sandbox 8b231c14fdc48639a47c7d652b8b743630ba81a4278df66b1e5716bd9dfced5e" id=dc863520-e6a2-4370-b6f5-a49640b95e0c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:18:39 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-8b231c14fdc48639a47c7d652b8b743630ba81a4278df66b1e5716bd9dfced5e-userdata-shm.mount: Deactivated successfully.
Feb 23 20:18:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:39.267297406Z" level=info msg="runSandbox: removing pod sandbox from storage: 8b231c14fdc48639a47c7d652b8b743630ba81a4278df66b1e5716bd9dfced5e" id=dc863520-e6a2-4370-b6f5-a49640b95e0c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:18:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:39.268826016Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=dc863520-e6a2-4370-b6f5-a49640b95e0c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:18:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:39.268859102Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=dc863520-e6a2-4370-b6f5-a49640b95e0c name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:18:39 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:18:39.269059 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(8b231c14fdc48639a47c7d652b8b743630ba81a4278df66b1e5716bd9dfced5e): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition"
Feb 23 20:18:39 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:18:39.269108 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(8b231c14fdc48639a47c7d652b8b743630ba81a4278df66b1e5716bd9dfced5e): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 20:18:39 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:18:39.269131 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(8b231c14fdc48639a47c7d652b8b743630ba81a4278df66b1e5716bd9dfced5e): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 20:18:39 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:18:39.269187 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(8b231c14fdc48639a47c7d652b8b743630ba81a4278df66b1e5716bd9dfced5e): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1
Feb 23 20:18:51 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:18:51.216632 2199 scope.go:115] "RemoveContainer" containerID="4a114f57874110ff99522b285ac21126d029601047e97619a841459357e0faf2"
Feb 23 20:18:51 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:18:51.217068 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=csi-driver pod=aws-ebs-csi-driver-node-ncxb7_openshift-cluster-csi-drivers(0976617f-18ed-4a73-a7d8-ac54cf69ab93)\"" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93
Feb 23 20:18:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:18:54.217140 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 20:18:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:54.217603734Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=87ab1baa-fe25-4f4a-aad1-1e182abaf0f8 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:18:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:54.217679122Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 20:18:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:54.223787466Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:3f8d4816e1ffc9d5b5446499d08c2649aeb0e0d57d0aa7f9476a80766dad3cae UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/8c8aa64e-aadd-4cd0-867e-215aa0ff3099 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 20:18:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:18:54.223826708Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 20:18:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:18:56.292128 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 20:18:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:18:56.292464 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 20:18:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:18:56.292699 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 20:18:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:18:56.292743 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 20:19:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:00.234805896Z" level=info msg="NetworkStart: stopping network for sandbox 1672406e8a87e0c627a924f9dfa6482319e05598c1f596a2d8845e3bfdf95cd5" id=39b3f159-7c4b-455d-a40d-c39de812ac93 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:19:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:00.234946090Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:1672406e8a87e0c627a924f9dfa6482319e05598c1f596a2d8845e3bfdf95cd5 UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/2478976d-5709-48c1-8272-6134d6436eba Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 20:19:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:00.234987145Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 20:19:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:00.234999395Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 20:19:00 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:00.235010254Z" level=info msg="Deleting pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 20:19:02 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:19:02.216430 2199 scope.go:115] "RemoveContainer" containerID="4a114f57874110ff99522b285ac21126d029601047e97619a841459357e0faf2"
Feb 23 20:19:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:02.217235226Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=bb057c22-f779-4665-a97b-f2b9c3ab9488 name=/runtime.v1.ImageService/ImageStatus
Feb 23 20:19:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:02.217481818Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=bb057c22-f779-4665-a97b-f2b9c3ab9488 name=/runtime.v1.ImageService/ImageStatus
Feb 23 20:19:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:02.218181173Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=e472c0ed-422d-481c-8d3a-37551a2cdad7 name=/runtime.v1.ImageService/ImageStatus
Feb 23 20:19:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:02.218380044Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=e472c0ed-422d-481c-8d3a-37551a2cdad7 name=/runtime.v1.ImageService/ImageStatus
Feb 23 20:19:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:02.219133872Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=d81fe078-2901-4a64-b7f1-02b5021f820b
name=/runtime.v1.RuntimeService/CreateContainer Feb 23 20:19:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:02.219228082Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 20:19:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:02.238282569Z" level=info msg="NetworkStart: stopping network for sandbox 75056995b9853ee387eabad878a2ad693df1ef4cbbaef3621cc370a22d1bbc58" id=8164064e-b8f1-4d40-a4e1-e976053f7936 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:19:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:02.238406182Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:75056995b9853ee387eabad878a2ad693df1ef4cbbaef3621cc370a22d1bbc58 UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/bd4ee13c-830d-4a38-b934-07719536bad5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:19:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:02.238445417Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 20:19:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:02.238457143Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 20:19:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:02.238468196Z" level=info msg="Deleting pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:19:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:02.239851020Z" level=info msg="NetworkStart: stopping network for sandbox dd6737ee05af643d4f048e18dfd4d1522abb9cdb3f4c2d8576d20f08f0a219ec" id=b4f0f65a-efec-4d2a-a11b-e4754652433b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:19:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:02.240026209Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus 
ID:dd6737ee05af643d4f048e18dfd4d1522abb9cdb3f4c2d8576d20f08f0a219ec UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/b10fe227-79c9-4ec0-a1cb-9572d2e6d41e Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:19:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:02.240112073Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 20:19:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:02.240169302Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 20:19:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:02.240219037Z" level=info msg="Deleting pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:19:02 ip-10-0-136-68 systemd[1]: Started crio-conmon-d9cd908dd8934f954e22a02aff32f37ce265a02474cbde364f6e5a05e46e8a5f.scope. Feb 23 20:19:02 ip-10-0-136-68 systemd[1]: Started libcontainer container d9cd908dd8934f954e22a02aff32f37ce265a02474cbde364f6e5a05e46e8a5f. Feb 23 20:19:02 ip-10-0-136-68 conmon[23055]: conmon d9cd908dd8934f954e22 : Failed to write to cgroup.event_control Operation not supported Feb 23 20:19:02 ip-10-0-136-68 systemd[1]: crio-conmon-d9cd908dd8934f954e22a02aff32f37ce265a02474cbde364f6e5a05e46e8a5f.scope: Deactivated successfully. 
Feb 23 20:19:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:02.356627032Z" level=info msg="Created container d9cd908dd8934f954e22a02aff32f37ce265a02474cbde364f6e5a05e46e8a5f: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=d81fe078-2901-4a64-b7f1-02b5021f820b name=/runtime.v1.RuntimeService/CreateContainer Feb 23 20:19:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:02.357192419Z" level=info msg="Starting container: d9cd908dd8934f954e22a02aff32f37ce265a02474cbde364f6e5a05e46e8a5f" id=c06bec65-c69a-43fa-81de-d92546a30649 name=/runtime.v1.RuntimeService/StartContainer Feb 23 20:19:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:02.377030439Z" level=info msg="Started container" PID=23066 containerID=d9cd908dd8934f954e22a02aff32f37ce265a02474cbde364f6e5a05e46e8a5f description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver id=c06bec65-c69a-43fa-81de-d92546a30649 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4ac357af36e7dd4792f93249425e93fd84aed45840e9fbb3d0ae6c0af964798 Feb 23 20:19:02 ip-10-0-136-68 systemd[1]: crio-d9cd908dd8934f954e22a02aff32f37ce265a02474cbde364f6e5a05e46e8a5f.scope: Deactivated successfully. 
Feb 23 20:19:06 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:06.552122541Z" level=warning msg="Failed to find container exit file for 4a114f57874110ff99522b285ac21126d029601047e97619a841459357e0faf2: timed out waiting for the condition" id=682bebdc-20eb-46a7-a999-1391a7f3a520 name=/runtime.v1.RuntimeService/ContainerStatus Feb 23 20:19:06 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:19:06.553074 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerStarted Data:d9cd908dd8934f954e22a02aff32f37ce265a02474cbde364f6e5a05e46e8a5f} Feb 23 20:19:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:19:14.872445 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 20:19:14 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:19:14.872502 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 20:19:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:21.245622851Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(84f6c813e46ace378c321f7d29375becf0edad793299701d7779c687714835df): error removing pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: 
[openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=8bb4b804-4663-4a87-8ce0-34eb91611d16 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:19:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:21.245666764Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 84f6c813e46ace378c321f7d29375becf0edad793299701d7779c687714835df" id=8bb4b804-4663-4a87-8ce0-34eb91611d16 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:19:21 ip-10-0-136-68 systemd[1]: run-utsns-732d84d9\x2d24f2\x2d40ad\x2d8d5a\x2de05582b3aad8.mount: Deactivated successfully. Feb 23 20:19:21 ip-10-0-136-68 systemd[1]: run-ipcns-732d84d9\x2d24f2\x2d40ad\x2d8d5a\x2de05582b3aad8.mount: Deactivated successfully. Feb 23 20:19:21 ip-10-0-136-68 systemd[1]: run-netns-732d84d9\x2d24f2\x2d40ad\x2d8d5a\x2de05582b3aad8.mount: Deactivated successfully. Feb 23 20:19:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:21.279346663Z" level=info msg="runSandbox: deleting pod ID 84f6c813e46ace378c321f7d29375becf0edad793299701d7779c687714835df from idIndex" id=8bb4b804-4663-4a87-8ce0-34eb91611d16 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:19:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:21.279397303Z" level=info msg="runSandbox: removing pod sandbox 84f6c813e46ace378c321f7d29375becf0edad793299701d7779c687714835df" id=8bb4b804-4663-4a87-8ce0-34eb91611d16 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:19:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:21.279443818Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 84f6c813e46ace378c321f7d29375becf0edad793299701d7779c687714835df" id=8bb4b804-4663-4a87-8ce0-34eb91611d16 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:19:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:21.279458923Z" level=info msg="runSandbox: unmounting shmPath 
for sandbox 84f6c813e46ace378c321f7d29375becf0edad793299701d7779c687714835df" id=8bb4b804-4663-4a87-8ce0-34eb91611d16 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:19:21 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-84f6c813e46ace378c321f7d29375becf0edad793299701d7779c687714835df-userdata-shm.mount: Deactivated successfully. Feb 23 20:19:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:21.296322858Z" level=info msg="runSandbox: removing pod sandbox from storage: 84f6c813e46ace378c321f7d29375becf0edad793299701d7779c687714835df" id=8bb4b804-4663-4a87-8ce0-34eb91611d16 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:19:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:21.298365696Z" level=info msg="runSandbox: releasing container name: k8s_POD_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=8bb4b804-4663-4a87-8ce0-34eb91611d16 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:19:21 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:21.298400097Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0" id=8bb4b804-4663-4a87-8ce0-34eb91611d16 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:19:21 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:19:21.298637 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(84f6c813e46ace378c321f7d29375becf0edad793299701d7779c687714835df): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Feb 23 20:19:21 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:19:21.298815 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(84f6c813e46ace378c321f7d29375becf0edad793299701d7779c687714835df): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 20:19:21 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:19:21.298841 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(84f6c813e46ace378c321f7d29375becf0edad793299701d7779c687714835df): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 20:19:21 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:19:21.298903 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-canary-pjjrk_openshift-ingress-canary(e0abac93-3e79-4a32-8375-5ef1a2e59687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-pjjrk_openshift-ingress-canary_e0abac93-3e79-4a32-8375-5ef1a2e59687_0(84f6c813e46ace378c321f7d29375becf0edad793299701d7779c687714835df): error adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-ingress-canary/ingress-canary-pjjrk/e0abac93-3e79-4a32-8375-5ef1a2e59687]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-ingress-canary/ingress-canary-pjjrk" podUID=e0abac93-3e79-4a32-8375-5ef1a2e59687 Feb 23 20:19:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:19:24.872732 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 20:19:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:19:24.872790 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 20:19:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:19:26.292103 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:19:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:19:26.292389 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:19:26 
ip-10-0-136-68 kubenswrapper[2199]: E0223 20:19:26.292647 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:19:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:19:26.292691 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 20:19:33 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:19:33.216418 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pjjrk" Feb 23 20:19:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:33.216797376Z" level=info msg="Running pod sandbox: openshift-ingress-canary/ingress-canary-pjjrk/POD" id=8ff9d56b-d0a4-442b-b56c-8508abc79af0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:19:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:33.216862845Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 20:19:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:33.222548931Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:faeb972fd4965a6aa5e4609610b3cb5ee20f746ee4724607b2313240c4d5a5a1 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/3fc7adab-e9e4-4031-9452-6f1334733ebe Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:19:33 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:33.222585014Z" level=info msg="Adding pod openshift-ingress-canary_ingress-canary-pjjrk to CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:19:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:19:34.872027 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 20:19:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:19:34.872091 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 20:19:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 
20:19:39.236053336Z" level=info msg="NetworkStart: stopping network for sandbox 3f8d4816e1ffc9d5b5446499d08c2649aeb0e0d57d0aa7f9476a80766dad3cae" id=87ab1baa-fe25-4f4a-aad1-1e182abaf0f8 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:19:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:39.236190015Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:3f8d4816e1ffc9d5b5446499d08c2649aeb0e0d57d0aa7f9476a80766dad3cae UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/8c8aa64e-aadd-4cd0-867e-215aa0ff3099 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:19:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:39.236230512Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache" Feb 23 20:19:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:39.236270840Z" level=warning msg="falling back to loading from existing plugins on disk" Feb 23 20:19:39 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:39.236282396Z" level=info msg="Deleting pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:19:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:19:44.872637 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 20:19:44 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:19:44.872695 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: 
connect: connection refused" Feb 23 20:19:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:45.245155903Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(1672406e8a87e0c627a924f9dfa6482319e05598c1f596a2d8845e3bfdf95cd5): error removing pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=39b3f159-7c4b-455d-a40d-c39de812ac93 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:19:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:45.245207762Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 1672406e8a87e0c627a924f9dfa6482319e05598c1f596a2d8845e3bfdf95cd5" id=39b3f159-7c4b-455d-a40d-c39de812ac93 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:19:45 ip-10-0-136-68 systemd[1]: run-utsns-2478976d\x2d5709\x2d48c1\x2d8272\x2d6134d6436eba.mount: Deactivated successfully. Feb 23 20:19:45 ip-10-0-136-68 systemd[1]: run-ipcns-2478976d\x2d5709\x2d48c1\x2d8272\x2d6134d6436eba.mount: Deactivated successfully. Feb 23 20:19:45 ip-10-0-136-68 systemd[1]: run-netns-2478976d\x2d5709\x2d48c1\x2d8272\x2d6134d6436eba.mount: Deactivated successfully. 
Feb 23 20:19:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:45.266360771Z" level=info msg="runSandbox: deleting pod ID 1672406e8a87e0c627a924f9dfa6482319e05598c1f596a2d8845e3bfdf95cd5 from idIndex" id=39b3f159-7c4b-455d-a40d-c39de812ac93 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:19:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:45.266410101Z" level=info msg="runSandbox: removing pod sandbox 1672406e8a87e0c627a924f9dfa6482319e05598c1f596a2d8845e3bfdf95cd5" id=39b3f159-7c4b-455d-a40d-c39de812ac93 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:19:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:45.266445593Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 1672406e8a87e0c627a924f9dfa6482319e05598c1f596a2d8845e3bfdf95cd5" id=39b3f159-7c4b-455d-a40d-c39de812ac93 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:19:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:45.266458013Z" level=info msg="runSandbox: unmounting shmPath for sandbox 1672406e8a87e0c627a924f9dfa6482319e05598c1f596a2d8845e3bfdf95cd5" id=39b3f159-7c4b-455d-a40d-c39de812ac93 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:19:45 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-1672406e8a87e0c627a924f9dfa6482319e05598c1f596a2d8845e3bfdf95cd5-userdata-shm.mount: Deactivated successfully. 
Feb 23 20:19:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:45.275314899Z" level=info msg="runSandbox: removing pod sandbox from storage: 1672406e8a87e0c627a924f9dfa6482319e05598c1f596a2d8845e3bfdf95cd5" id=39b3f159-7c4b-455d-a40d-c39de812ac93 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:19:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:45.277002289Z" level=info msg="runSandbox: releasing container name: k8s_POD_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=39b3f159-7c4b-455d-a40d-c39de812ac93 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:19:45 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:45.277034179Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0" id=39b3f159-7c4b-455d-a40d-c39de812ac93 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:19:45 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:19:45.277289 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(1672406e8a87e0c627a924f9dfa6482319e05598c1f596a2d8845e3bfdf95cd5): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 20:19:45 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:19:45.277354 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(1672406e8a87e0c627a924f9dfa6482319e05598c1f596a2d8845e3bfdf95cd5): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 20:19:45 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:19:45.277377 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(1672406e8a87e0c627a924f9dfa6482319e05598c1f596a2d8845e3bfdf95cd5): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 20:19:45 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:19:45.277434 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers(46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_shared-resource-csi-driver-node-vf69j_openshift-cluster-csi-drivers_46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77_0(1672406e8a87e0c627a924f9dfa6482319e05598c1f596a2d8845e3bfdf95cd5): error adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" podUID=46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 Feb 23 20:19:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:47.250753689Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(75056995b9853ee387eabad878a2ad693df1ef4cbbaef3621cc370a22d1bbc58): error removing pod openshift-dns_dns-default-657v4 from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=8164064e-b8f1-4d40-a4e1-e976053f7936 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:19:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:47.250811369Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 75056995b9853ee387eabad878a2ad693df1ef4cbbaef3621cc370a22d1bbc58" id=8164064e-b8f1-4d40-a4e1-e976053f7936 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:19:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:47.252176388Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(dd6737ee05af643d4f048e18dfd4d1522abb9cdb3f4c2d8576d20f08f0a219ec): error removing pod openshift-multus_network-metrics-daemon-bs7jz from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=b4f0f65a-efec-4d2a-a11b-e4754652433b 
name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:19:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:47.252219693Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox dd6737ee05af643d4f048e18dfd4d1522abb9cdb3f4c2d8576d20f08f0a219ec" id=b4f0f65a-efec-4d2a-a11b-e4754652433b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:19:47 ip-10-0-136-68 systemd[1]: run-utsns-bd4ee13c\x2d830d\x2d4a38\x2db934\x2d07719536bad5.mount: Deactivated successfully. Feb 23 20:19:47 ip-10-0-136-68 systemd[1]: run-utsns-b10fe227\x2d79c9\x2d4ec0\x2da1cb\x2d9572d2e6d41e.mount: Deactivated successfully. Feb 23 20:19:47 ip-10-0-136-68 systemd[1]: run-ipcns-bd4ee13c\x2d830d\x2d4a38\x2db934\x2d07719536bad5.mount: Deactivated successfully. Feb 23 20:19:47 ip-10-0-136-68 systemd[1]: run-ipcns-b10fe227\x2d79c9\x2d4ec0\x2da1cb\x2d9572d2e6d41e.mount: Deactivated successfully. Feb 23 20:19:47 ip-10-0-136-68 systemd[1]: run-netns-bd4ee13c\x2d830d\x2d4a38\x2db934\x2d07719536bad5.mount: Deactivated successfully. 
Feb 23 20:19:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:47.274327533Z" level=info msg="runSandbox: deleting pod ID 75056995b9853ee387eabad878a2ad693df1ef4cbbaef3621cc370a22d1bbc58 from idIndex" id=8164064e-b8f1-4d40-a4e1-e976053f7936 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:19:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:47.274369051Z" level=info msg="runSandbox: removing pod sandbox 75056995b9853ee387eabad878a2ad693df1ef4cbbaef3621cc370a22d1bbc58" id=8164064e-b8f1-4d40-a4e1-e976053f7936 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:19:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:47.274424742Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 75056995b9853ee387eabad878a2ad693df1ef4cbbaef3621cc370a22d1bbc58" id=8164064e-b8f1-4d40-a4e1-e976053f7936 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:19:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:47.274448310Z" level=info msg="runSandbox: unmounting shmPath for sandbox 75056995b9853ee387eabad878a2ad693df1ef4cbbaef3621cc370a22d1bbc58" id=8164064e-b8f1-4d40-a4e1-e976053f7936 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:19:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:47.277305012Z" level=info msg="runSandbox: deleting pod ID dd6737ee05af643d4f048e18dfd4d1522abb9cdb3f4c2d8576d20f08f0a219ec from idIndex" id=b4f0f65a-efec-4d2a-a11b-e4754652433b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:19:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:47.277329937Z" level=info msg="runSandbox: removing pod sandbox dd6737ee05af643d4f048e18dfd4d1522abb9cdb3f4c2d8576d20f08f0a219ec" id=b4f0f65a-efec-4d2a-a11b-e4754652433b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:19:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:47.277359011Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox dd6737ee05af643d4f048e18dfd4d1522abb9cdb3f4c2d8576d20f08f0a219ec" 
id=b4f0f65a-efec-4d2a-a11b-e4754652433b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:19:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:47.277378335Z" level=info msg="runSandbox: unmounting shmPath for sandbox dd6737ee05af643d4f048e18dfd4d1522abb9cdb3f4c2d8576d20f08f0a219ec" id=b4f0f65a-efec-4d2a-a11b-e4754652433b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:19:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:47.280302299Z" level=info msg="runSandbox: removing pod sandbox from storage: 75056995b9853ee387eabad878a2ad693df1ef4cbbaef3621cc370a22d1bbc58" id=8164064e-b8f1-4d40-a4e1-e976053f7936 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:19:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:47.281903192Z" level=info msg="runSandbox: releasing container name: k8s_POD_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=8164064e-b8f1-4d40-a4e1-e976053f7936 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:19:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:47.281934739Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0" id=8164064e-b8f1-4d40-a4e1-e976053f7936 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:19:47 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:19:47.282137 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(75056995b9853ee387eabad878a2ad693df1ef4cbbaef3621cc370a22d1bbc58): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" Feb 23 20:19:47 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:19:47.282195 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(75056995b9853ee387eabad878a2ad693df1ef4cbbaef3621cc370a22d1bbc58): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 20:19:47 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:19:47.282220 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(75056995b9853ee387eabad878a2ad693df1ef4cbbaef3621cc370a22d1bbc58): error adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-dns/dns-default-657v4" Feb 23 20:19:47 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:19:47.282318 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-default-657v4_openshift-dns(757b7544-c265-49ce-a1f0-22cca4bf919f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-657v4_openshift-dns_757b7544-c265-49ce-a1f0-22cca4bf919f_0(75056995b9853ee387eabad878a2ad693df1ef4cbbaef3621cc370a22d1bbc58): error adding pod openshift-dns_dns-default-657v4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-dns/dns-default-657v4/757b7544-c265-49ce-a1f0-22cca4bf919f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\"" pod="openshift-dns/dns-default-657v4" podUID=757b7544-c265-49ce-a1f0-22cca4bf919f Feb 23 20:19:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:47.284308495Z" level=info msg="runSandbox: removing pod sandbox from storage: dd6737ee05af643d4f048e18dfd4d1522abb9cdb3f4c2d8576d20f08f0a219ec" id=b4f0f65a-efec-4d2a-a11b-e4754652433b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:19:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:47.285781965Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=b4f0f65a-efec-4d2a-a11b-e4754652433b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:19:47 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:47.285809630Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0" id=b4f0f65a-efec-4d2a-a11b-e4754652433b name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:19:47 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:19:47.285989 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(dd6737ee05af643d4f048e18dfd4d1522abb9cdb3f4c2d8576d20f08f0a219ec): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" Feb 23 20:19:47 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:19:47.286035 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(dd6737ee05af643d4f048e18dfd4d1522abb9cdb3f4c2d8576d20f08f0a219ec): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 20:19:47 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:19:47.286066 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(dd6737ee05af643d4f048e18dfd4d1522abb9cdb3f4c2d8576d20f08f0a219ec): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 20:19:47 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:19:47.286115 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-bs7jz_openshift-multus(93f0c5c3-9f22-4b93-a925-f621ed5e18e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-bs7jz_openshift-multus_93f0c5c3-9f22-4b93-a925-f621ed5e18e7_0(dd6737ee05af643d4f048e18dfd4d1522abb9cdb3f4c2d8576d20f08f0a219ec): error adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-multus/network-metrics-daemon-bs7jz/93f0c5c3-9f22-4b93-a925-f621ed5e18e7]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-multus/network-metrics-daemon-bs7jz" podUID=93f0c5c3-9f22-4b93-a925-f621ed5e18e7 Feb 23 20:19:48 ip-10-0-136-68 systemd[1]: run-netns-b10fe227\x2d79c9\x2d4ec0\x2da1cb\x2d9572d2e6d41e.mount: Deactivated successfully. Feb 23 20:19:48 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-75056995b9853ee387eabad878a2ad693df1ef4cbbaef3621cc370a22d1bbc58-userdata-shm.mount: Deactivated successfully. Feb 23 20:19:48 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-dd6737ee05af643d4f048e18dfd4d1522abb9cdb3f4c2d8576d20f08f0a219ec-userdata-shm.mount: Deactivated successfully. 
Feb 23 20:19:49 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:19:49.217578 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:19:49 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:19:49.217880 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:19:49 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:19:49.218135 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:19:49 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:19:49.218173 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" 
pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 20:19:49 ip-10-0-136-68 sshd[23164]: main: sshd: ssh-rsa algorithm is disabled Feb 23 20:19:50 ip-10-0-136-68 sshd[23164]: Accepted publickey for core from 10.0.182.221 port 45116 ssh2: RSA SHA256:Ez+JFROVIkSQ/eAziisgy16VY49IFSr8A84gQk7WcPc Feb 23 20:19:50 ip-10-0-136-68 systemd-logind[985]: New session 7 of user core. Feb 23 20:19:50 ip-10-0-136-68 systemd[1]: Created slice User Slice of UID 1000. Feb 23 20:19:50 ip-10-0-136-68 systemd[1]: Starting User Runtime Directory /run/user/1000... Feb 23 20:19:50 ip-10-0-136-68 systemd[1]: Finished User Runtime Directory /run/user/1000. Feb 23 20:19:50 ip-10-0-136-68 systemd[1]: Starting User Manager for UID 1000... Feb 23 20:19:50 ip-10-0-136-68 systemd[23170]: pam_unix(systemd-user:session): session opened for user core(uid=1000) by (uid=0) Feb 23 20:19:50 ip-10-0-136-68 systemd[23176]: /usr/lib/systemd/user-generators/podman-user-generator failed with exit status 1. Feb 23 20:19:50 ip-10-0-136-68 systemd[23170]: Queued start job for default target Main User Target. Feb 23 20:19:50 ip-10-0-136-68 systemd[23170]: Created slice User Application Slice. Feb 23 20:19:50 ip-10-0-136-68 systemd[23170]: Started Daily Cleanup of User's Temporary Directories. Feb 23 20:19:50 ip-10-0-136-68 systemd[23170]: Reached target Paths. Feb 23 20:19:50 ip-10-0-136-68 systemd[23170]: Reached target Timers. Feb 23 20:19:50 ip-10-0-136-68 systemd[23170]: Starting D-Bus User Message Bus Socket... Feb 23 20:19:50 ip-10-0-136-68 systemd[23170]: Starting Create User's Volatile Files and Directories... Feb 23 20:19:50 ip-10-0-136-68 systemd[23170]: Listening on D-Bus User Message Bus Socket. Feb 23 20:19:50 ip-10-0-136-68 systemd[23170]: Finished Create User's Volatile Files and Directories. Feb 23 20:19:50 ip-10-0-136-68 systemd[23170]: Reached target Sockets. 
Feb 23 20:19:50 ip-10-0-136-68 systemd[23170]: Reached target Basic System. Feb 23 20:19:50 ip-10-0-136-68 systemd[1]: Started User Manager for UID 1000. Feb 23 20:19:50 ip-10-0-136-68 systemd[23170]: Reached target Main User Target. Feb 23 20:19:50 ip-10-0-136-68 systemd[23170]: Startup finished in 100ms. Feb 23 20:19:50 ip-10-0-136-68 systemd[1]: Started Session 7 of User core. Feb 23 20:19:50 ip-10-0-136-68 sshd[23164]: pam_unix(sshd:session): session opened for user core(uid=1000) by (uid=0) Feb 23 20:19:50 ip-10-0-136-68 sudo[23189]: core : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/bash Feb 23 20:19:50 ip-10-0-136-68 sudo[23189]: pam_unix(sudo-i:session): session opened for user root(uid=0) by core(uid=1000) Feb 23 20:19:50 ip-10-0-136-68 systemd[1]: Starting Hostname Service... Feb 23 20:19:51 ip-10-0-136-68 systemd[1]: Started Hostname Service. Feb 23 20:19:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:19:54.872812 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body= Feb 23 20:19:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:19:54.873104 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" Feb 23 20:19:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:19:54.873133 2199 kubelet.go:2323] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" Feb 23 20:19:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:19:54.873682 2199 kuberuntime_manager.go:659] "Message for Container of pod" 
containerName="csi-driver" containerStatusID={Type:cri-o ID:d9cd908dd8934f954e22a02aff32f37ce265a02474cbde364f6e5a05e46e8a5f} pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" containerMessage="Container csi-driver failed liveness probe, will be restarted" Feb 23 20:19:54 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:19:54.873859 2199 kuberuntime_container.go:709] "Killing container with a grace period" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" containerID="cri-o://d9cd908dd8934f954e22a02aff32f37ce265a02474cbde364f6e5a05e46e8a5f" gracePeriod=30 Feb 23 20:19:54 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:54.874305433Z" level=info msg="Stopping container: d9cd908dd8934f954e22a02aff32f37ce265a02474cbde364f6e5a05e46e8a5f (timeout: 30s)" id=b328bc6f-31a4-4b07-8f54-a9ac28c1707b name=/runtime.v1.RuntimeService/StopContainer Feb 23 20:19:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:19:56.292567 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:19:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:19:56.292825 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f 
/etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:19:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:19:56.293121 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf] Feb 23 20:19:56 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:19:56.293162 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node" Feb 23 20:19:57 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:19:57.216849 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j" Feb 23 20:19:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:57.217354366Z" level=info msg="Running pod sandbox: openshift-cluster-csi-drivers/shared-resource-csi-driver-node-vf69j/POD" id=f74616a7-140f-471e-981e-8e54716dcfd6 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:19:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:57.217419637Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 20:19:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:57.224179388Z" level=info msg="Got pod network &{Name:shared-resource-csi-driver-node-vf69j Namespace:openshift-cluster-csi-drivers ID:11763cfde1e7ec0faa2f74227afe2fbd2661d14bb33c4cbc622eeb9c45b0d56c UID:46cf33e4-fc3b-4f7a-b0ab-dc2cbc5a5e77 NetNS:/var/run/netns/c54e93cf-c9a4-48cf-9e1d-8166cc073741 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:19:57 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:57.224217702Z" level=info msg="Adding pod openshift-cluster-csi-drivers_shared-resource-csi-driver-node-vf69j to CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:19:58 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:19:58.636029181Z" level=warning msg="Failed to find container exit file for d9cd908dd8934f954e22a02aff32f37ce265a02474cbde364f6e5a05e46e8a5f: timed out waiting for the condition" id=b328bc6f-31a4-4b07-8f54-a9ac28c1707b name=/runtime.v1.RuntimeService/StopContainer Feb 23 20:19:58 ip-10-0-136-68 systemd[1]: var-lib-containers-storage-overlay-037e5a1658d3a63cff19f5498cb9da67c7869ce5b73fb004329b263fe6b9b0bb-merged.mount: Deactivated successfully. Feb 23 20:20:01 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:20:01.216794 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-657v4" Feb 23 20:20:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:20:01.217218381Z" level=info msg="Running pod sandbox: openshift-dns/dns-default-657v4/POD" id=05962522-167e-41f7-ae50-9ca756df7ef5 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:20:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:20:01.217292631Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 20:20:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:20:01.223155210Z" level=info msg="Got pod network &{Name:dns-default-657v4 Namespace:openshift-dns ID:bba0bd8b609f7d80871c98616a886fa89b1a8081c543a555f486ce5aff1cc7bd UID:757b7544-c265-49ce-a1f0-22cca4bf919f NetNS:/var/run/netns/db2403c6-469c-4ad2-bd56-a8004d789fed Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:20:01 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:20:01.223182967Z" level=info msg="Adding pod openshift-dns_dns-default-657v4 to CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:20:02 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:20:02.216476 2199 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bs7jz" Feb 23 20:20:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:20:02.216925349Z" level=info msg="Running pod sandbox: openshift-multus/network-metrics-daemon-bs7jz/POD" id=b060f184-ba08-4507-84c4-fbd9030ef1f0 name=/runtime.v1.RuntimeService/RunPodSandbox Feb 23 20:20:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:20:02.216997245Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 20:20:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:20:02.222680162Z" level=info msg="Got pod network &{Name:network-metrics-daemon-bs7jz Namespace:openshift-multus ID:ceb11b3cb3028139f00e7de6199e4008c980dc09718c6a96ca1fb86dac2750ef UID:93f0c5c3-9f22-4b93-a925-f621ed5e18e7 NetNS:/var/run/netns/281280d8-1351-4796-a301-c4dfa94015e7 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" Feb 23 20:20:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:20:02.222705914Z" level=info msg="Adding pod openshift-multus_network-metrics-daemon-bs7jz to CNI network \"multus-cni-network\" (type=multus)" Feb 23 20:20:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:20:02.465045281Z" level=warning msg="Failed to find container exit file for d9cd908dd8934f954e22a02aff32f37ce265a02474cbde364f6e5a05e46e8a5f: timed out waiting for the condition" id=b328bc6f-31a4-4b07-8f54-a9ac28c1707b name=/runtime.v1.RuntimeService/StopContainer Feb 23 20:20:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:20:02.467768338Z" level=info msg="Stopped container d9cd908dd8934f954e22a02aff32f37ce265a02474cbde364f6e5a05e46e8a5f: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=b328bc6f-31a4-4b07-8f54-a9ac28c1707b name=/runtime.v1.RuntimeService/StopContainer Feb 23 20:20:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:20:02.468534140Z" level=info msg="Checking image status: 
registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=6489f4f5-2bab-4b88-aac6-812c59275505 name=/runtime.v1.ImageService/ImageStatus Feb 23 20:20:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:20:02.468713436Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=6489f4f5-2bab-4b88-aac6-812c59275505 name=/runtime.v1.ImageService/ImageStatus Feb 23 20:20:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:20:02.469305556Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605" id=edff6583-8c28-4d57-8a12-120f2ba68dec name=/runtime.v1.ImageService/ImageStatus Feb 23 20:20:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:20:02.469470032Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2c83e1c71d89e95a641a344e0cb056f1352671bc31561f70bebbcd3c9f9df04f,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:9601a03ee4d08cb5e891109526a5c482e52ebb422b1d1104e15d325310042605],Size_:437121946,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=edff6583-8c28-4d57-8a12-120f2ba68dec name=/runtime.v1.ImageService/ImageStatus Feb 23 20:20:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:20:02.470093244Z" level=info msg="Creating container: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=9a73bc6a-3452-4a21-8dc5-45a50b37b47e 
name=/runtime.v1.RuntimeService/CreateContainer Feb 23 20:20:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:20:02.470190632Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]" Feb 23 20:20:02 ip-10-0-136-68 systemd[1]: Started crio-conmon-3ab11d7f00d6b0dab587a734bf3bb3f4c0bc792f1c41f68b6e11d4441396703f.scope. Feb 23 20:20:02 ip-10-0-136-68 systemd[1]: Started libcontainer container 3ab11d7f00d6b0dab587a734bf3bb3f4c0bc792f1c41f68b6e11d4441396703f. Feb 23 20:20:02 ip-10-0-136-68 conmon[23291]: conmon 3ab11d7f00d6b0dab587 : Failed to write to cgroup.event_control Operation not supported Feb 23 20:20:02 ip-10-0-136-68 systemd[1]: crio-conmon-3ab11d7f00d6b0dab587a734bf3bb3f4c0bc792f1c41f68b6e11d4441396703f.scope: Deactivated successfully. Feb 23 20:20:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:20:02.601397869Z" level=info msg="Created container 3ab11d7f00d6b0dab587a734bf3bb3f4c0bc792f1c41f68b6e11d4441396703f: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=9a73bc6a-3452-4a21-8dc5-45a50b37b47e name=/runtime.v1.RuntimeService/CreateContainer Feb 23 20:20:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:20:02.601849046Z" level=info msg="Starting container: 3ab11d7f00d6b0dab587a734bf3bb3f4c0bc792f1c41f68b6e11d4441396703f" id=4c7fad40-8876-477c-a53a-633a6a0b5924 name=/runtime.v1.RuntimeService/StartContainer Feb 23 20:20:02 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:20:02.608916100Z" level=info msg="Started container" PID=23303 containerID=3ab11d7f00d6b0dab587a734bf3bb3f4c0bc792f1c41f68b6e11d4441396703f description=openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver id=4c7fad40-8876-477c-a53a-633a6a0b5924 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4ac357af36e7dd4792f93249425e93fd84aed45840e9fbb3d0ae6c0af964798 Feb 23 20:20:02 ip-10-0-136-68 systemd[1]: crio-3ab11d7f00d6b0dab587a734bf3bb3f4c0bc792f1c41f68b6e11d4441396703f.scope: Deactivated successfully. 
Feb 23 20:20:03 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:20:03.384303343Z" level=warning msg="Failed to find container exit file for d9cd908dd8934f954e22a02aff32f37ce265a02474cbde364f6e5a05e46e8a5f: timed out waiting for the condition" id=2790fc12-8d4e-4195-87ed-83e496473a38 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 20:20:07 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:20:07.134070657Z" level=warning msg="Failed to find container exit file for 4a114f57874110ff99522b285ac21126d029601047e97619a841459357e0faf2: timed out waiting for the condition" id=96ce8509-daa9-4c8f-b7a0-8b392bf6e750 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 20:20:07 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:20:07.134959 2199 generic.go:332] "Generic (PLEG): container finished" podID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerID="d9cd908dd8934f954e22a02aff32f37ce265a02474cbde364f6e5a05e46e8a5f" exitCode=-1
Feb 23 20:20:07 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:20:07.134998 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerDied Data:d9cd908dd8934f954e22a02aff32f37ce265a02474cbde364f6e5a05e46e8a5f}
Feb 23 20:20:07 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:20:07.135032 2199 scope.go:115] "RemoveContainer" containerID="4a114f57874110ff99522b285ac21126d029601047e97619a841459357e0faf2"
Feb 23 20:20:10 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:20:10.896102539Z" level=warning msg="Failed to find container exit file for 4a114f57874110ff99522b285ac21126d029601047e97619a841459357e0faf2: timed out waiting for the condition" id=339b2101-e97a-4ddc-926a-605f1d484fc7 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 20:20:11 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:20:11.898998706Z" level=warning msg="Failed to find container exit file for d9cd908dd8934f954e22a02aff32f37ce265a02474cbde364f6e5a05e46e8a5f: timed out waiting for the condition" id=aca5075f-651e-4eeb-b5fd-82f31bb78be0 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 20:20:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:20:14.705991141Z" level=warning msg="Failed to find container exit file for 4a114f57874110ff99522b285ac21126d029601047e97619a841459357e0faf2: timed out waiting for the condition" id=7375175b-c3b8-4b21-84b3-fa2104ffdb4b name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 20:20:14 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:20:14.706517111Z" level=info msg="Removing container: 4a114f57874110ff99522b285ac21126d029601047e97619a841459357e0faf2" id=b2b2d00b-37a4-4cae-b3a2-0000317b5f6f name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 20:20:15 ip-10-0-136-68 NetworkManager[1177]: [1677183615.0015] dhcp4 (br-ex): state changed new lease, address=10.0.136.68
Feb 23 20:20:15 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:20:15.637046831Z" level=warning msg="Failed to find container exit file for 4a114f57874110ff99522b285ac21126d029601047e97619a841459357e0faf2: timed out waiting for the condition" id=dfc29a91-75b7-416a-9d23-76818d0f7fa2 name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 20:20:15 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:20:15.638200 2199 kubelet.go:2251] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" event=&{ID:0976617f-18ed-4a73-a7d8-ac54cf69ab93 Type:ContainerStarted Data:3ab11d7f00d6b0dab587a734bf3bb3f4c0bc792f1c41f68b6e11d4441396703f}
Feb 23 20:20:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:20:18.234797083Z" level=info msg="NetworkStart: stopping network for sandbox faeb972fd4965a6aa5e4609610b3cb5ee20f746ee4724607b2313240c4d5a5a1" id=8ff9d56b-d0a4-442b-b56c-8508abc79af0 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:20:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:20:18.234934471Z" level=info msg="Got pod network &{Name:ingress-canary-pjjrk Namespace:openshift-ingress-canary ID:faeb972fd4965a6aa5e4609610b3cb5ee20f746ee4724607b2313240c4d5a5a1 UID:e0abac93-3e79-4a32-8375-5ef1a2e59687 NetNS:/var/run/netns/3fc7adab-e9e4-4031-9452-6f1334733ebe Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 20:20:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:20:18.234964018Z" level=error msg="error loading cached network config: network \"multus-cni-network\" not found in CNI cache"
Feb 23 20:20:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:20:18.234974695Z" level=warning msg="falling back to loading from existing plugins on disk"
Feb 23 20:20:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:20:18.234981698Z" level=info msg="Deleting pod openshift-ingress-canary_ingress-canary-pjjrk from CNI network \"multus-cni-network\" (type=multus)"
Feb 23 20:20:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:20:18.454090272Z" level=warning msg="Failed to find container exit file for 4a114f57874110ff99522b285ac21126d029601047e97619a841459357e0faf2: timed out waiting for the condition" id=b2b2d00b-37a4-4cae-b3a2-0000317b5f6f name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 20:20:18 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:20:18.478090301Z" level=info msg="Removed container 4a114f57874110ff99522b285ac21126d029601047e97619a841459357e0faf2: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7/csi-driver" id=b2b2d00b-37a4-4cae-b3a2-0000317b5f6f name=/runtime.v1.RuntimeService/RemoveContainer
Feb 23 20:20:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:20:20.267310819Z" level=info msg="Checking image status: registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3" id=17a1f22e-9d00-4ab0-9096-d3e4260360f2 name=/runtime.v1.ImageService/ImageStatus
Feb 23 20:20:20 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:20:20.267495112Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:51cb7555d0a960fc0daae74a5b5c47c19fd1ae7bd31e2616ebc88f696f511e4d,RepoTags:[],RepoDigests:[registry.build05.ci.openshift.org/ci-ln-268jbvb/stable@sha256:5999db137ca172f2f3aa3c23434daedc3773b48ad4d536bd2c7afa29bed571c3],Size_:351828271,Uid:&Int64Value{Value:0,},Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},},Pinned:false,},Info:map[string]string{},}" id=17a1f22e-9d00-4ab0-9096-d3e4260360f2 name=/runtime.v1.ImageService/ImageStatus
Feb 23 20:20:21 ip-10-0-136-68 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 23 20:20:22 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:20:22.394417291Z" level=warning msg="Failed to find container exit file for d9cd908dd8934f954e22a02aff32f37ce265a02474cbde364f6e5a05e46e8a5f: timed out waiting for the condition" id=a69c7839-e6b0-4741-8173-37ecd9a54b1f name=/runtime.v1.RuntimeService/ContainerStatus
Feb 23 20:20:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:20:24.245612198Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(3f8d4816e1ffc9d5b5446499d08c2649aeb0e0d57d0aa7f9476a80766dad3cae): error removing pod openshift-network-diagnostics_network-check-target-52ltr from CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (delete): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: PollImmediate error waiting for ReadinessIndicatorFile (on del): timed out waiting for the condition" id=87ab1baa-fe25-4f4a-aad1-1e182abaf0f8 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:20:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:20:24.245656648Z" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 3f8d4816e1ffc9d5b5446499d08c2649aeb0e0d57d0aa7f9476a80766dad3cae" id=87ab1baa-fe25-4f4a-aad1-1e182abaf0f8 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:20:24 ip-10-0-136-68 systemd[1]: run-utsns-8c8aa64e\x2daadd\x2d4cd0\x2d867e\x2d215aa0ff3099.mount: Deactivated successfully.
Feb 23 20:20:24 ip-10-0-136-68 systemd[1]: run-ipcns-8c8aa64e\x2daadd\x2d4cd0\x2d867e\x2d215aa0ff3099.mount: Deactivated successfully.
Feb 23 20:20:24 ip-10-0-136-68 systemd[1]: run-netns-8c8aa64e\x2daadd\x2d4cd0\x2d867e\x2d215aa0ff3099.mount: Deactivated successfully.
Feb 23 20:20:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:20:24.262364466Z" level=info msg="runSandbox: deleting pod ID 3f8d4816e1ffc9d5b5446499d08c2649aeb0e0d57d0aa7f9476a80766dad3cae from idIndex" id=87ab1baa-fe25-4f4a-aad1-1e182abaf0f8 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:20:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:20:24.262409121Z" level=info msg="runSandbox: removing pod sandbox 3f8d4816e1ffc9d5b5446499d08c2649aeb0e0d57d0aa7f9476a80766dad3cae" id=87ab1baa-fe25-4f4a-aad1-1e182abaf0f8 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:20:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:20:24.262436572Z" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 3f8d4816e1ffc9d5b5446499d08c2649aeb0e0d57d0aa7f9476a80766dad3cae" id=87ab1baa-fe25-4f4a-aad1-1e182abaf0f8 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:20:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:20:24.262451985Z" level=info msg="runSandbox: unmounting shmPath for sandbox 3f8d4816e1ffc9d5b5446499d08c2649aeb0e0d57d0aa7f9476a80766dad3cae" id=87ab1baa-fe25-4f4a-aad1-1e182abaf0f8 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:20:24 ip-10-0-136-68 systemd[1]: run-containers-storage-overlay\x2dcontainers-3f8d4816e1ffc9d5b5446499d08c2649aeb0e0d57d0aa7f9476a80766dad3cae-userdata-shm.mount: Deactivated successfully.
Feb 23 20:20:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:20:24.267332570Z" level=info msg="runSandbox: removing pod sandbox from storage: 3f8d4816e1ffc9d5b5446499d08c2649aeb0e0d57d0aa7f9476a80766dad3cae" id=87ab1baa-fe25-4f4a-aad1-1e182abaf0f8 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:20:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:20:24.268921675Z" level=info msg="runSandbox: releasing container name: k8s_POD_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=87ab1baa-fe25-4f4a-aad1-1e182abaf0f8 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:20:24 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:20:24.268952261Z" level=info msg="runSandbox: releasing pod sandbox name: k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0" id=87ab1baa-fe25-4f4a-aad1-1e182abaf0f8 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:20:24 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:20:24.269165 2199 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(3f8d4816e1ffc9d5b5446499d08c2649aeb0e0d57d0aa7f9476a80766dad3cae): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition"
Feb 23 20:20:24 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:20:24.269376 2199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(3f8d4816e1ffc9d5b5446499d08c2649aeb0e0d57d0aa7f9476a80766dad3cae): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 20:20:24 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:20:24.269401 2199 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(3f8d4816e1ffc9d5b5446499d08c2649aeb0e0d57d0aa7f9476a80766dad3cae): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 20:20:24 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:20:24.269461 2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-52ltr_openshift-network-diagnostics(adcfa5f5-1c6b-415e-8e69-b72e137820e1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-52ltr_openshift-network-diagnostics_adcfa5f5-1c6b-415e-8e69-b72e137820e1_0(3f8d4816e1ffc9d5b5446499d08c2649aeb0e0d57d0aa7f9476a80766dad3cae): error adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): Multus: [openshift-network-diagnostics/network-check-target-52ltr/adcfa5f5-1c6b-415e-8e69-b72e137820e1]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\"" pod="openshift-network-diagnostics/network-check-target-52ltr" podUID=adcfa5f5-1c6b-415e-8e69-b72e137820e1
Feb 23 20:20:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:20:24.872849 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 20:20:24 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:20:24.872908 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 20:20:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:20:26.291955 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 20:20:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:20:26.292272 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 20:20:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:20:26.292541 2199 remote_runtime.go:479] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" containerID="833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91" cmd=[test -f /etc/cni/net.d/10-ovn-kubernetes.conf]
Feb 23 20:20:26 ip-10-0-136-68 kubenswrapper[2199]: E0223 20:20:26.292571 2199 prober.go:106] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 833755728b8e8f368af67e23a0e0cf959234fc35219f65ef26e4299ea7c44a91 is running failed: open /proc/3457/stat: no such file or directory: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gzbrl" podUID=7da00340-9715-48ac-b144-4705de276bf5 containerName="ovnkube-node"
Feb 23 20:20:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:20:34.872388 2199 patch_prober.go:28] interesting pod/aws-ebs-csi-driver-node-ncxb7 container/csi-driver namespace/openshift-cluster-csi-drivers: Liveness probe status=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused" start-of-body=
Feb 23 20:20:34 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:20:34.872443 2199 prober.go:109] "Probe failed" probeType="Liveness" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ncxb7" podUID=0976617f-18ed-4a73-a7d8-ac54cf69ab93 containerName="csi-driver" probeResult=failure output="Get \"http://10.0.136.68:10300/healthz\": dial tcp 10.0.136.68:10300: connect: connection refused"
Feb 23 20:20:38 ip-10-0-136-68 kubenswrapper[2199]: I0223 20:20:38.217047 2199 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-52ltr"
Feb 23 20:20:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:20:38.218012074Z" level=info msg="Running pod sandbox: openshift-network-diagnostics/network-check-target-52ltr/POD" id=ee6e73d0-480e-44cf-9f47-cdc6c8de8de1 name=/runtime.v1.RuntimeService/RunPodSandbox
Feb 23 20:20:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:20:38.218084519Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Feb 23 20:20:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:20:38.225079430Z" level=info msg="Got pod network &{Name:network-check-target-52ltr Namespace:openshift-network-diagnostics ID:1dbe1fcc123531d4a7a64050f5a877d091e95ffb9ecc29d7c7efed772882634d UID:adcfa5f5-1c6b-415e-8e69-b72e137820e1 NetNS:/var/run/netns/48db0324-36b9-4d3a-a31a-500e494e3c15 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Feb 23 20:20:38 ip-10-0-136-68 crio[2158]: time="2023-02-23 20:20:38.225115776Z" level=info msg="Adding pod openshift-network-diagnostics_network-check-target-52ltr to CNI network \"multus-cni-network\" (type=multus)"
Feb 23 20:20:41 ip-10-0-136-68 sudo[23458]: root : TTY=pts/0 ; PWD=/var/roothome ; USER=root ; COMMAND=/bin/journalctl --no-pager
Feb 23 20:20:41 ip-10-0-136-68 sudo[23458]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)